hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ee5461951bf3deeb4f5db508c1ddab7785fdeac4 | 20 | py | Python | src/ml_optic/__init__.py | Tingenek/ml_optic | 2eab48a51fa9dffb043b27a4de1aab6fe204773e | ["Apache-2.0"] | 21 | 2018-02-14T18:51:49.000Z | 2022-01-15T06:59:16.000Z | src/ml_optic/__init__.py | Tingenek/ml_optic | 2eab48a51fa9dffb043b27a4de1aab6fe204773e | ["Apache-2.0"] | 1 | 2019-01-30T19:55:22.000Z | 2020-05-16T21:31:58.000Z | src/ml_optic/__init__.py | Tingenek/ml_optic | 2eab48a51fa9dffb043b27a4de1aab6fe204773e | ["Apache-2.0"] | 7 | 2017-10-02T16:14:16.000Z | 2022-03-16T14:27:32.000Z | from .magic import * | 20 | 20 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c9b3506286fb556f6659b813d0b13cb612f47f32 | 21 | py | Python | segm_benchmark/layers/__init__.py | anotherTK/segmentation.pytorch | 36b6b412ee5561745fd9a67e4b6e28c0b9f58d68 | ["MIT"] | null | null | null | segm_benchmark/layers/__init__.py | anotherTK/segmentation.pytorch | 36b6b412ee5561745fd9a67e4b6e28c0b9f58d68 | ["MIT"] | null | null | null | segm_benchmark/layers/__init__.py | anotherTK/segmentation.pytorch | 36b6b412ee5561745fd9a67e4b6e28c0b9f58d68 | ["MIT"] | null | null | null | from .jpu import JPU | 10.5 | 20 | 0.761905 | 4 | 21 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 2 | 20 | 10.5 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c9dde8609ca5fdedea2d7380def9dd714bd4133d | 37895 | py | Python | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/85.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | ["BSD-3-Clause"] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/85.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | ["BSD-3-Clause"] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/85.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | ["BSD-3-Clause"] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 3209
passenger_arriving = (
(3, 7, 10, 7, 0, 0, 5, 6, 5, 6, 4, 0), # 0
(7, 6, 5, 2, 4, 0, 6, 10, 8, 4, 2, 0), # 1
(5, 3, 6, 1, 3, 0, 9, 3, 10, 5, 0, 0), # 2
(3, 9, 6, 7, 3, 0, 4, 9, 8, 3, 1, 0), # 3
(5, 8, 8, 3, 1, 0, 3, 7, 9, 6, 2, 0), # 4
(3, 11, 6, 6, 2, 0, 4, 11, 3, 5, 3, 0), # 5
(4, 10, 5, 6, 0, 0, 8, 7, 3, 8, 1, 0), # 6
(4, 9, 3, 7, 0, 0, 6, 12, 10, 4, 2, 0), # 7
(4, 7, 5, 4, 0, 0, 8, 10, 3, 6, 3, 0), # 8
(3, 8, 8, 4, 4, 0, 10, 10, 4, 7, 4, 0), # 9
(2, 7, 8, 8, 0, 0, 6, 9, 12, 5, 0, 0), # 10
(6, 6, 3, 2, 4, 0, 9, 5, 8, 5, 3, 0), # 11
(3, 7, 12, 5, 3, 0, 7, 10, 5, 7, 0, 0), # 12
(1, 14, 5, 4, 0, 0, 5, 6, 6, 4, 4, 0), # 13
(4, 7, 6, 2, 1, 0, 5, 6, 6, 5, 3, 0), # 14
(2, 5, 8, 4, 3, 0, 4, 8, 7, 4, 0, 0), # 15
(1, 14, 2, 5, 1, 0, 8, 4, 5, 1, 4, 0), # 16
(8, 13, 13, 1, 1, 0, 7, 10, 6, 3, 3, 0), # 17
(7, 9, 5, 5, 1, 0, 6, 2, 8, 5, 5, 0), # 18
(4, 7, 4, 2, 3, 0, 9, 12, 3, 4, 0, 0), # 19
(1, 8, 9, 2, 3, 0, 4, 11, 3, 3, 3, 0), # 20
(7, 9, 8, 3, 1, 0, 2, 5, 5, 6, 6, 0), # 21
(6, 12, 6, 3, 1, 0, 6, 3, 2, 4, 0, 0), # 22
(4, 10, 7, 1, 2, 0, 5, 9, 5, 5, 1, 0), # 23
(2, 12, 6, 4, 1, 0, 9, 4, 6, 5, 3, 0), # 24
(2, 12, 13, 4, 3, 0, 9, 8, 8, 6, 2, 0), # 25
(6, 6, 8, 4, 2, 0, 2, 6, 8, 2, 2, 0), # 26
(4, 12, 8, 9, 4, 0, 4, 8, 6, 7, 5, 0), # 27
(8, 14, 6, 6, 1, 0, 8, 10, 4, 6, 1, 0), # 28
(7, 9, 10, 8, 5, 0, 5, 6, 8, 4, 3, 0), # 29
(4, 11, 10, 7, 3, 0, 6, 16, 6, 3, 0, 0), # 30
(3, 12, 11, 4, 1, 0, 6, 6, 7, 11, 3, 0), # 31
(4, 7, 12, 6, 2, 0, 12, 5, 3, 3, 3, 0), # 32
(3, 4, 6, 7, 2, 0, 9, 8, 5, 8, 2, 0), # 33
(6, 7, 4, 0, 6, 0, 8, 6, 11, 2, 3, 0), # 34
(5, 6, 10, 5, 2, 0, 3, 6, 5, 2, 6, 0), # 35
(3, 4, 7, 1, 0, 0, 7, 7, 7, 4, 1, 0), # 36
(7, 8, 9, 1, 3, 0, 9, 10, 6, 5, 1, 0), # 37
(2, 9, 7, 3, 3, 0, 7, 6, 5, 5, 0, 0), # 38
(4, 8, 2, 1, 1, 0, 7, 12, 8, 7, 6, 0), # 39
(1, 6, 6, 6, 2, 0, 5, 9, 13, 5, 2, 0), # 40
(5, 14, 6, 7, 4, 0, 10, 10, 4, 3, 3, 0), # 41
(7, 11, 5, 2, 2, 0, 7, 12, 5, 8, 4, 0), # 42
(2, 10, 5, 4, 3, 0, 7, 7, 10, 5, 2, 0), # 43
(7, 11, 9, 0, 2, 0, 9, 8, 6, 2, 0, 0), # 44
(4, 8, 5, 5, 1, 0, 6, 5, 7, 11, 6, 0), # 45
(3, 11, 5, 3, 1, 0, 5, 5, 5, 1, 2, 0), # 46
(3, 10, 7, 1, 3, 0, 6, 7, 7, 4, 1, 0), # 47
(4, 13, 7, 2, 0, 0, 3, 5, 5, 3, 6, 0), # 48
(7, 11, 8, 3, 3, 0, 5, 8, 5, 9, 2, 0), # 49
(6, 13, 5, 4, 0, 0, 4, 12, 7, 1, 0, 0), # 50
(6, 5, 9, 4, 5, 0, 8, 9, 7, 8, 0, 0), # 51
(6, 11, 7, 3, 5, 0, 7, 7, 4, 4, 3, 0), # 52
(8, 2, 10, 3, 0, 0, 5, 11, 10, 5, 2, 0), # 53
(6, 4, 11, 4, 1, 0, 4, 2, 4, 2, 3, 0), # 54
(5, 15, 9, 4, 3, 0, 4, 10, 4, 5, 3, 0), # 55
(10, 7, 11, 6, 1, 0, 5, 5, 6, 6, 2, 0), # 56
(3, 8, 6, 6, 5, 0, 3, 17, 13, 5, 3, 0), # 57
(5, 8, 6, 3, 5, 0, 3, 5, 9, 7, 3, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(3.7095121817383676, 9.515044981060607, 11.19193043059126, 8.87078804347826, 10.000240384615385, 6.659510869565219), # 0
(3.7443308140669203, 9.620858238197952, 11.252381752534994, 8.920190141908213, 10.075193108974359, 6.657240994867151), # 1
(3.7787518681104277, 9.725101964085297, 11.31139817195087, 8.968504830917876, 10.148564102564103, 6.654901690821256), # 2
(3.8127461259877085, 9.827663671875001, 11.368936576156813, 9.01569089673913, 10.22028605769231, 6.652493274456523), # 3
(3.8462843698175795, 9.928430874719417, 11.424953852470724, 9.061707125603865, 10.290291666666668, 6.6500160628019325), # 4
(3.879337381718857, 10.027291085770905, 11.479406888210512, 9.106512303743962, 10.358513621794872, 6.647470372886473), # 5
(3.9118759438103607, 10.12413181818182, 11.53225257069409, 9.150065217391306, 10.424884615384617, 6.644856521739131), # 6
(3.943870838210907, 10.218840585104518, 11.58344778723936, 9.19232465277778, 10.489337339743592, 6.64217482638889), # 7
(3.975292847039314, 10.311304899691358, 11.632949425164242, 9.233249396135266, 10.551804487179488, 6.639425603864735), # 8
(4.006112752414399, 10.401412275094698, 11.680714371786634, 9.272798233695653, 10.61221875, 6.636609171195653), # 9
(4.03630133645498, 10.489050224466892, 11.72669951442445, 9.310929951690824, 10.670512820512823, 6.633725845410628), # 10
(4.065829381279876, 10.5741062609603, 11.7708617403956, 9.347603336352659, 10.726619391025642, 6.630775943538648), # 11
(4.094667669007903, 10.656467897727273, 11.813157937017996, 9.382777173913043, 10.780471153846154, 6.627759782608695), # 12
(4.122786981757876, 10.736022647920176, 11.85354499160954, 9.416410250603866, 10.832000801282053, 6.624677679649759), # 13
(4.15015810164862, 10.81265802469136, 11.891979791488144, 9.448461352657004, 10.881141025641025, 6.621529951690821), # 14
(4.1767518107989465, 10.886261541193182, 11.928419223971721, 9.478889266304348, 10.92782451923077, 6.618316915760871), # 15
(4.202538891327675, 10.956720710578002, 11.96282017637818, 9.507652777777778, 10.971983974358976, 6.61503888888889), # 16
(4.227490125353625, 11.023923045998176, 11.995139536025421, 9.53471067330918, 11.013552083333336, 6.611696188103866), # 17
(4.25157629499561, 11.087756060606061, 12.025334190231364, 9.560021739130436, 11.052461538461543, 6.608289130434783), # 18
(4.274768182372451, 11.148107267554012, 12.053361026313912, 9.58354476147343, 11.088645032051284, 6.604818032910629), # 19
(4.297036569602966, 11.204864179994388, 12.079176931590974, 9.60523852657005, 11.122035256410259, 6.601283212560387), # 20
(4.318352238805971, 11.257914311079544, 12.102738793380466, 9.625061820652174, 11.152564903846153, 6.597684986413044), # 21
(4.338685972100283, 11.307145173961842, 12.124003499000287, 9.642973429951692, 11.180166666666667, 6.5940236714975855), # 22
(4.358008551604722, 11.352444281793632, 12.142927935768354, 9.658932140700484, 11.204773237179488, 6.590299584842997), # 23
(4.3762907594381035, 11.393699147727272, 12.159468991002571, 9.672896739130437, 11.226317307692307, 6.586513043478261), # 24
(4.393503377719247, 11.430797284915124, 12.173583552020853, 9.684826011473431, 11.244731570512819, 6.582664364432368), # 25
(4.409617188566969, 11.46362620650954, 12.185228506141103, 9.694678743961353, 11.259948717948719, 6.5787538647343), # 26
(4.424602974100088, 11.492073425662877, 12.194360740681233, 9.702413722826089, 11.271901442307694, 6.574781861413045), # 27
(4.438431516437421, 11.516026455527497, 12.200937142959157, 9.707989734299519, 11.280522435897437, 6.570748671497586), # 28
(4.4510735976977855, 11.535372809255753, 12.204914600292774, 9.711365564613528, 11.285744391025641, 6.566654612016909), # 29
(4.4625, 11.55, 12.20625, 9.7125, 11.287500000000001, 6.562500000000001), # 30
(4.47319183983376, 11.56215031960227, 12.205248928140096, 9.712295118464054, 11.286861125886526, 6.556726763701484), # 31
(4.4836528452685425, 11.574140056818184, 12.202274033816424, 9.711684477124184, 11.28495815602837, 6.547834661835751), # 32
(4.493887715792838, 11.585967720170455, 12.197367798913046, 9.710674080882354, 11.281811569148937, 6.535910757121439), # 33
(4.503901150895141, 11.597631818181819, 12.19057270531401, 9.709269934640524, 11.277441843971632, 6.521042112277196), # 34
(4.513697850063939, 11.609130859374998, 12.181931234903383, 9.707478043300654, 11.27186945921986, 6.503315790021656), # 35
(4.523282512787724, 11.62046335227273, 12.171485869565219, 9.705304411764708, 11.265114893617023, 6.482818853073463), # 36
(4.532659838554988, 11.631627805397729, 12.159279091183576, 9.70275504493464, 11.257198625886524, 6.4596383641512585), # 37
(4.5418345268542195, 11.642622727272729, 12.145353381642513, 9.699835947712419, 11.248141134751775, 6.433861385973679), # 38
(4.5508112771739135, 11.653446626420456, 12.129751222826087, 9.696553125000001, 11.23796289893617, 6.40557498125937), # 39
(4.559594789002558, 11.664098011363638, 12.11251509661836, 9.692912581699348, 11.22668439716312, 6.37486621272697), # 40
(4.568189761828645, 11.674575390625, 12.093687484903382, 9.68892032271242, 11.214326108156028, 6.34182214309512), # 41
(4.576600895140665, 11.684877272727276, 12.07331086956522, 9.684582352941177, 11.2009085106383, 6.3065298350824595), # 42
(4.584832888427111, 11.69500216619318, 12.051427732487923, 9.679904677287583, 11.186452083333334, 6.26907635140763), # 43
(4.592890441176471, 11.704948579545455, 12.028080555555556, 9.674893300653595, 11.17097730496454, 6.229548754789272), # 44
(4.600778252877237, 11.714715021306818, 12.003311820652177, 9.669554227941177, 11.15450465425532, 6.188034107946028), # 45
(4.6085010230179035, 11.724300000000003, 11.97716400966184, 9.663893464052288, 11.137054609929079, 6.144619473596536), # 46
(4.616063451086957, 11.733702024147728, 11.9496796044686, 9.65791701388889, 11.118647650709221, 6.099391914459438), # 47
(4.623470236572891, 11.742919602272728, 11.920901086956523, 9.651630882352942, 11.099304255319149, 6.052438493253375), # 48
(4.630726078964194, 11.751951242897727, 11.890870939009663, 9.645041074346407, 11.079044902482272, 6.003846272696985), # 49
(4.6378356777493615, 11.760795454545454, 11.85963164251208, 9.638153594771243, 11.057890070921987, 5.953702315508913), # 50
(4.6448037324168805, 11.769450745738636, 11.827225679347826, 9.630974448529413, 11.035860239361703, 5.902093684407797), # 51
(4.651634942455243, 11.777915625, 11.793695531400965, 9.623509640522876, 11.012975886524824, 5.849107442112278), # 52
(4.658334007352941, 11.786188600852274, 11.759083680555555, 9.615765175653596, 10.989257491134753, 5.794830651340996), # 53
(4.6649056265984665, 11.79426818181818, 11.723432608695653, 9.60774705882353, 10.964725531914894, 5.739350374812594), # 54
(4.671354499680307, 11.802152876420456, 11.686784797705313, 9.599461294934642, 10.939400487588653, 5.682753675245711), # 55
(4.677685326086957, 11.809841193181818, 11.649182729468599, 9.59091388888889, 10.913302836879433, 5.625127615358988), # 56
(4.683902805306906, 11.817331640625003, 11.610668885869565, 9.582110845588236, 10.886453058510638, 5.566559257871065), # 57
(4.690011636828645, 11.824622727272727, 11.57128574879227, 9.573058169934642, 10.858871631205675, 5.507135665500583), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(3, 7, 10, 7, 0, 0, 5, 6, 5, 6, 4, 0), # 0
(10, 13, 15, 9, 4, 0, 11, 16, 13, 10, 6, 0), # 1
(15, 16, 21, 10, 7, 0, 20, 19, 23, 15, 6, 0), # 2
(18, 25, 27, 17, 10, 0, 24, 28, 31, 18, 7, 0), # 3
(23, 33, 35, 20, 11, 0, 27, 35, 40, 24, 9, 0), # 4
(26, 44, 41, 26, 13, 0, 31, 46, 43, 29, 12, 0), # 5
(30, 54, 46, 32, 13, 0, 39, 53, 46, 37, 13, 0), # 6
(34, 63, 49, 39, 13, 0, 45, 65, 56, 41, 15, 0), # 7
(38, 70, 54, 43, 13, 0, 53, 75, 59, 47, 18, 0), # 8
(41, 78, 62, 47, 17, 0, 63, 85, 63, 54, 22, 0), # 9
(43, 85, 70, 55, 17, 0, 69, 94, 75, 59, 22, 0), # 10
(49, 91, 73, 57, 21, 0, 78, 99, 83, 64, 25, 0), # 11
(52, 98, 85, 62, 24, 0, 85, 109, 88, 71, 25, 0), # 12
(53, 112, 90, 66, 24, 0, 90, 115, 94, 75, 29, 0), # 13
(57, 119, 96, 68, 25, 0, 95, 121, 100, 80, 32, 0), # 14
(59, 124, 104, 72, 28, 0, 99, 129, 107, 84, 32, 0), # 15
(60, 138, 106, 77, 29, 0, 107, 133, 112, 85, 36, 0), # 16
(68, 151, 119, 78, 30, 0, 114, 143, 118, 88, 39, 0), # 17
(75, 160, 124, 83, 31, 0, 120, 145, 126, 93, 44, 0), # 18
(79, 167, 128, 85, 34, 0, 129, 157, 129, 97, 44, 0), # 19
(80, 175, 137, 87, 37, 0, 133, 168, 132, 100, 47, 0), # 20
(87, 184, 145, 90, 38, 0, 135, 173, 137, 106, 53, 0), # 21
(93, 196, 151, 93, 39, 0, 141, 176, 139, 110, 53, 0), # 22
(97, 206, 158, 94, 41, 0, 146, 185, 144, 115, 54, 0), # 23
(99, 218, 164, 98, 42, 0, 155, 189, 150, 120, 57, 0), # 24
(101, 230, 177, 102, 45, 0, 164, 197, 158, 126, 59, 0), # 25
(107, 236, 185, 106, 47, 0, 166, 203, 166, 128, 61, 0), # 26
(111, 248, 193, 115, 51, 0, 170, 211, 172, 135, 66, 0), # 27
(119, 262, 199, 121, 52, 0, 178, 221, 176, 141, 67, 0), # 28
(126, 271, 209, 129, 57, 0, 183, 227, 184, 145, 70, 0), # 29
(130, 282, 219, 136, 60, 0, 189, 243, 190, 148, 70, 0), # 30
(133, 294, 230, 140, 61, 0, 195, 249, 197, 159, 73, 0), # 31
(137, 301, 242, 146, 63, 0, 207, 254, 200, 162, 76, 0), # 32
(140, 305, 248, 153, 65, 0, 216, 262, 205, 170, 78, 0), # 33
(146, 312, 252, 153, 71, 0, 224, 268, 216, 172, 81, 0), # 34
(151, 318, 262, 158, 73, 0, 227, 274, 221, 174, 87, 0), # 35
(154, 322, 269, 159, 73, 0, 234, 281, 228, 178, 88, 0), # 36
(161, 330, 278, 160, 76, 0, 243, 291, 234, 183, 89, 0), # 37
(163, 339, 285, 163, 79, 0, 250, 297, 239, 188, 89, 0), # 38
(167, 347, 287, 164, 80, 0, 257, 309, 247, 195, 95, 0), # 39
(168, 353, 293, 170, 82, 0, 262, 318, 260, 200, 97, 0), # 40
(173, 367, 299, 177, 86, 0, 272, 328, 264, 203, 100, 0), # 41
(180, 378, 304, 179, 88, 0, 279, 340, 269, 211, 104, 0), # 42
(182, 388, 309, 183, 91, 0, 286, 347, 279, 216, 106, 0), # 43
(189, 399, 318, 183, 93, 0, 295, 355, 285, 218, 106, 0), # 44
(193, 407, 323, 188, 94, 0, 301, 360, 292, 229, 112, 0), # 45
(196, 418, 328, 191, 95, 0, 306, 365, 297, 230, 114, 0), # 46
(199, 428, 335, 192, 98, 0, 312, 372, 304, 234, 115, 0), # 47
(203, 441, 342, 194, 98, 0, 315, 377, 309, 237, 121, 0), # 48
(210, 452, 350, 197, 101, 0, 320, 385, 314, 246, 123, 0), # 49
(216, 465, 355, 201, 101, 0, 324, 397, 321, 247, 123, 0), # 50
(222, 470, 364, 205, 106, 0, 332, 406, 328, 255, 123, 0), # 51
(228, 481, 371, 208, 111, 0, 339, 413, 332, 259, 126, 0), # 52
(236, 483, 381, 211, 111, 0, 344, 424, 342, 264, 128, 0), # 53
(242, 487, 392, 215, 112, 0, 348, 426, 346, 266, 131, 0), # 54
(247, 502, 401, 219, 115, 0, 352, 436, 350, 271, 134, 0), # 55
(257, 509, 412, 225, 116, 0, 357, 441, 356, 277, 136, 0), # 56
(260, 517, 418, 231, 121, 0, 360, 458, 369, 282, 139, 0), # 57
(265, 525, 424, 234, 126, 0, 363, 463, 378, 289, 142, 0), # 58
(265, 525, 424, 234, 126, 0, 363, 463, 378, 289, 142, 0), # 59
)
passenger_arriving_rate = (
(3.7095121817383676, 7.612035984848484, 6.715158258354756, 3.5483152173913037, 2.000048076923077, 0.0, 6.659510869565219, 8.000192307692307, 5.322472826086956, 4.476772172236504, 1.903008996212121, 0.0), # 0
(3.7443308140669203, 7.696686590558361, 6.751429051520996, 3.5680760567632848, 2.0150386217948717, 0.0, 6.657240994867151, 8.060154487179487, 5.352114085144928, 4.500952701013997, 1.9241716476395903, 0.0), # 1
(3.7787518681104277, 7.780081571268237, 6.786838903170522, 3.58740193236715, 2.0297128205128203, 0.0, 6.654901690821256, 8.118851282051281, 5.381102898550726, 4.524559268780347, 1.9450203928170593, 0.0), # 2
(3.8127461259877085, 7.8621309375, 6.821361945694087, 3.6062763586956517, 2.044057211538462, 0.0, 6.652493274456523, 8.176228846153847, 5.409414538043478, 4.547574630462725, 1.965532734375, 0.0), # 3
(3.8462843698175795, 7.942744699775533, 6.854972311482434, 3.624682850241546, 2.0580583333333333, 0.0, 6.6500160628019325, 8.232233333333333, 5.437024275362319, 4.569981540988289, 1.9856861749438832, 0.0), # 4
(3.879337381718857, 8.021832868616723, 6.887644132926307, 3.6426049214975844, 2.0717027243589743, 0.0, 6.647470372886473, 8.286810897435897, 5.463907382246377, 4.591762755284204, 2.005458217154181, 0.0), # 5
(3.9118759438103607, 8.099305454545455, 6.919351542416455, 3.660026086956522, 2.084976923076923, 0.0, 6.644856521739131, 8.339907692307692, 5.490039130434783, 4.612901028277636, 2.0248263636363637, 0.0), # 6
(3.943870838210907, 8.175072468083613, 6.950068672343615, 3.6769298611111116, 2.0978674679487184, 0.0, 6.64217482638889, 8.391469871794873, 5.515394791666668, 4.633379114895743, 2.043768117020903, 0.0), # 7
(3.975292847039314, 8.249043919753085, 6.979769655098544, 3.693299758454106, 2.1103608974358976, 0.0, 6.639425603864735, 8.44144358974359, 5.5399496376811594, 4.653179770065696, 2.062260979938271, 0.0), # 8
(4.006112752414399, 8.321129820075758, 7.00842862307198, 3.709119293478261, 2.12244375, 0.0, 6.636609171195653, 8.489775, 5.563678940217391, 4.672285748714653, 2.0802824550189394, 0.0), # 9
(4.03630133645498, 8.391240179573513, 7.03601970865467, 3.724371980676329, 2.134102564102564, 0.0, 6.633725845410628, 8.536410256410257, 5.586557971014494, 4.690679805769779, 2.0978100448933783, 0.0), # 10
(4.065829381279876, 8.459285008768239, 7.06251704423736, 3.739041334541063, 2.145323878205128, 0.0, 6.630775943538648, 8.581295512820512, 5.608562001811595, 4.70834469615824, 2.1148212521920597, 0.0), # 11
(4.094667669007903, 8.525174318181818, 7.087894762210797, 3.7531108695652167, 2.156094230769231, 0.0, 6.627759782608695, 8.624376923076923, 5.6296663043478254, 4.725263174807198, 2.1312935795454546, 0.0), # 12
(4.122786981757876, 8.58881811833614, 7.112126994965724, 3.766564100241546, 2.1664001602564102, 0.0, 6.624677679649759, 8.665600641025641, 5.649846150362319, 4.741417996643816, 2.147204529584035, 0.0), # 13
(4.15015810164862, 8.650126419753088, 7.135187874892886, 3.779384541062801, 2.1762282051282047, 0.0, 6.621529951690821, 8.704912820512819, 5.669076811594202, 4.756791916595257, 2.162531604938272, 0.0), # 14
(4.1767518107989465, 8.709009232954545, 7.157051534383032, 3.7915557065217387, 2.1855649038461538, 0.0, 6.618316915760871, 8.742259615384615, 5.6873335597826085, 4.771367689588688, 2.177252308238636, 0.0), # 15
(4.202538891327675, 8.7653765684624, 7.177692105826908, 3.803061111111111, 2.194396794871795, 0.0, 6.61503888888889, 8.77758717948718, 5.7045916666666665, 4.785128070551272, 2.1913441421156, 0.0), # 16
(4.227490125353625, 8.81913843679854, 7.197083721615253, 3.8138842693236716, 2.202710416666667, 0.0, 6.611696188103866, 8.810841666666668, 5.720826403985508, 4.798055814410168, 2.204784609199635, 0.0), # 17
(4.25157629499561, 8.870204848484848, 7.215200514138818, 3.824008695652174, 2.2104923076923084, 0.0, 6.608289130434783, 8.841969230769234, 5.736013043478262, 4.810133676092545, 2.217551212121212, 0.0), # 18
(4.274768182372451, 8.918485814043208, 7.232016615788346, 3.8334179045893717, 2.2177290064102566, 0.0, 6.604818032910629, 8.870916025641026, 5.750126856884058, 4.8213444105255645, 2.229621453510802, 0.0), # 19
(4.297036569602966, 8.96389134399551, 7.247506158954584, 3.8420954106280196, 2.2244070512820517, 0.0, 6.601283212560387, 8.897628205128207, 5.76314311594203, 4.831670772636389, 2.2409728359988774, 0.0), # 20
(4.318352238805971, 9.006331448863634, 7.261643276028279, 3.8500247282608693, 2.2305129807692303, 0.0, 6.597684986413044, 8.922051923076921, 5.775037092391305, 4.841095517352186, 2.2515828622159084, 0.0), # 21
(4.338685972100283, 9.045716139169473, 7.274402099400172, 3.8571893719806765, 2.2360333333333333, 0.0, 6.5940236714975855, 8.944133333333333, 5.785784057971015, 4.849601399600115, 2.2614290347923682, 0.0), # 22
(4.358008551604722, 9.081955425434906, 7.285756761461012, 3.8635728562801934, 2.2409546474358972, 0.0, 6.590299584842997, 8.963818589743589, 5.79535928442029, 4.857171174307341, 2.2704888563587264, 0.0), # 23
(4.3762907594381035, 9.114959318181818, 7.295681394601543, 3.869158695652174, 2.2452634615384612, 0.0, 6.586513043478261, 8.981053846153845, 5.803738043478262, 4.863787596401028, 2.2787398295454544, 0.0), # 24
(4.393503377719247, 9.1446378279321, 7.304150131212511, 3.8739304045893723, 2.2489463141025636, 0.0, 6.582664364432368, 8.995785256410255, 5.810895606884059, 4.869433420808341, 2.286159456983025, 0.0), # 25
(4.409617188566969, 9.17090096520763, 7.311137103684661, 3.8778714975845405, 2.2519897435897436, 0.0, 6.5787538647343, 9.007958974358974, 5.816807246376811, 4.874091402456441, 2.2927252413019077, 0.0), # 26
(4.424602974100088, 9.193658740530301, 7.31661644440874, 3.880965489130435, 2.2543802884615385, 0.0, 6.574781861413045, 9.017521153846154, 5.821448233695653, 4.877744296272493, 2.2984146851325753, 0.0), # 27
(4.438431516437421, 9.212821164421996, 7.320562285775494, 3.8831958937198072, 2.256104487179487, 0.0, 6.570748671497586, 9.024417948717948, 5.824793840579711, 4.8803748571836625, 2.303205291105499, 0.0), # 28
(4.4510735976977855, 9.228298247404602, 7.322948760175664, 3.884546225845411, 2.257148878205128, 0.0, 6.566654612016909, 9.028595512820512, 5.826819338768117, 4.881965840117109, 2.3070745618511506, 0.0), # 29
(4.4625, 9.24, 7.32375, 3.885, 2.2575000000000003, 0.0, 6.562500000000001, 9.030000000000001, 5.8275, 4.8825, 2.31, 0.0), # 30
(4.47319183983376, 9.249720255681815, 7.323149356884057, 3.884918047385621, 2.257372225177305, 0.0, 6.556726763701484, 9.02948890070922, 5.827377071078432, 4.882099571256038, 2.312430063920454, 0.0), # 31
(4.4836528452685425, 9.259312045454546, 7.3213644202898545, 3.884673790849673, 2.2569916312056737, 0.0, 6.547834661835751, 9.027966524822695, 5.82701068627451, 4.880909613526569, 2.3148280113636366, 0.0), # 32
(4.493887715792838, 9.268774176136363, 7.3184206793478275, 3.8842696323529413, 2.2563623138297872, 0.0, 6.535910757121439, 9.025449255319149, 5.826404448529412, 4.878947119565218, 2.3171935440340907, 0.0), # 33
(4.503901150895141, 9.278105454545454, 7.314343623188405, 3.8837079738562093, 2.2554883687943263, 0.0, 6.521042112277196, 9.021953475177305, 5.825561960784314, 4.876229082125604, 2.3195263636363634, 0.0), # 34
(4.513697850063939, 9.287304687499997, 7.3091587409420296, 3.882991217320261, 2.2543738918439717, 0.0, 6.503315790021656, 9.017495567375887, 5.824486825980392, 4.872772493961353, 2.3218261718749993, 0.0), # 35
(4.523282512787724, 9.296370681818182, 7.302891521739131, 3.8821217647058828, 2.253022978723404, 0.0, 6.482818853073463, 9.012091914893617, 5.823182647058824, 4.868594347826087, 2.3240926704545455, 0.0), # 36
(4.532659838554988, 9.305302244318183, 7.295567454710145, 3.881102017973856, 2.2514397251773044, 0.0, 6.4596383641512585, 9.005758900709218, 5.821653026960784, 4.86371163647343, 2.3263255610795457, 0.0), # 37
(4.5418345268542195, 9.314098181818181, 7.287212028985508, 3.8799343790849674, 2.249628226950355, 0.0, 6.433861385973679, 8.99851290780142, 5.819901568627452, 4.858141352657005, 2.3285245454545453, 0.0), # 38
(4.5508112771739135, 9.322757301136363, 7.277850733695652, 3.87862125, 2.247592579787234, 0.0, 6.40557498125937, 8.990370319148935, 5.817931875, 4.8519004891304345, 2.330689325284091, 0.0), # 39
(4.559594789002558, 9.33127840909091, 7.267509057971015, 3.8771650326797387, 2.245336879432624, 0.0, 6.37486621272697, 8.981347517730496, 5.815747549019608, 4.845006038647344, 2.3328196022727274, 0.0), # 40
(4.568189761828645, 9.3396603125, 7.256212490942029, 3.8755681290849675, 2.2428652216312055, 0.0, 6.34182214309512, 8.971460886524822, 5.813352193627452, 4.837474993961353, 2.334915078125, 0.0), # 41
(4.576600895140665, 9.34790181818182, 7.2439865217391315, 3.8738329411764707, 2.2401817021276598, 0.0, 6.3065298350824595, 8.960726808510639, 5.810749411764706, 4.829324347826088, 2.336975454545455, 0.0), # 42
(4.584832888427111, 9.356001732954544, 7.230856639492753, 3.8719618709150327, 2.2372904166666667, 0.0, 6.26907635140763, 8.949161666666667, 5.80794280637255, 4.820571092995169, 2.339000433238636, 0.0), # 43
(4.592890441176471, 9.363958863636363, 7.216848333333333, 3.8699573202614377, 2.2341954609929076, 0.0, 6.229548754789272, 8.93678184397163, 5.804935980392157, 4.811232222222222, 2.3409897159090907, 0.0), # 44
(4.600778252877237, 9.371772017045453, 7.201987092391306, 3.8678216911764705, 2.230900930851064, 0.0, 6.188034107946028, 8.923603723404256, 5.801732536764706, 4.80132472826087, 2.3429430042613633, 0.0), # 45
(4.6085010230179035, 9.379440000000002, 7.186298405797103, 3.8655573856209147, 2.2274109219858156, 0.0, 6.144619473596536, 8.909643687943262, 5.798336078431372, 4.790865603864735, 2.3448600000000006, 0.0), # 46
(4.616063451086957, 9.386961619318182, 7.16980776268116, 3.8631668055555552, 2.223729530141844, 0.0, 6.099391914459438, 8.894918120567375, 5.794750208333333, 4.77987184178744, 2.3467404048295455, 0.0), # 47
(4.623470236572891, 9.394335681818182, 7.152540652173913, 3.8606523529411763, 2.21986085106383, 0.0, 6.052438493253375, 8.87944340425532, 5.790978529411765, 4.7683604347826085, 2.3485839204545456, 0.0), # 48
(4.630726078964194, 9.401560994318181, 7.134522563405797, 3.8580164297385626, 2.2158089804964543, 0.0, 6.003846272696985, 8.863235921985817, 5.787024644607844, 4.7563483756038645, 2.3503902485795454, 0.0), # 49
(4.6378356777493615, 9.408636363636361, 7.115778985507247, 3.8552614379084966, 2.211578014184397, 0.0, 5.953702315508913, 8.846312056737588, 5.782892156862745, 4.743852657004831, 2.3521590909090904, 0.0), # 50
(4.6448037324168805, 9.415560596590907, 7.096335407608696, 3.852389779411765, 2.2071720478723407, 0.0, 5.902093684407797, 8.828688191489363, 5.778584669117648, 4.73089027173913, 2.353890149147727, 0.0), # 51
(4.651634942455243, 9.4223325, 7.0762173188405795, 3.84940385620915, 2.2025951773049646, 0.0, 5.849107442112278, 8.810380709219858, 5.774105784313726, 4.717478212560386, 2.355583125, 0.0), # 52
(4.658334007352941, 9.428950880681818, 7.055450208333333, 3.8463060702614382, 2.1978514982269504, 0.0, 5.794830651340996, 8.791405992907801, 5.769459105392158, 4.703633472222222, 2.3572377201704544, 0.0), # 53
(4.6649056265984665, 9.435414545454544, 7.034059565217391, 3.843098823529412, 2.192945106382979, 0.0, 5.739350374812594, 8.771780425531915, 5.764648235294119, 4.689373043478261, 2.358853636363636, 0.0), # 54
(4.671354499680307, 9.441722301136364, 7.012070878623187, 3.8397845179738566, 2.1878800975177306, 0.0, 5.682753675245711, 8.751520390070922, 5.759676776960785, 4.674713919082125, 2.360430575284091, 0.0), # 55
(4.677685326086957, 9.447872954545453, 6.989509637681159, 3.8363655555555556, 2.1826605673758865, 0.0, 5.625127615358988, 8.730642269503546, 5.754548333333334, 4.65967309178744, 2.361968238636363, 0.0), # 56
(4.683902805306906, 9.453865312500001, 6.966401331521738, 3.832844338235294, 2.1772906117021273, 0.0, 5.566559257871065, 8.70916244680851, 5.749266507352941, 4.644267554347826, 2.3634663281250003, 0.0), # 57
(4.690011636828645, 9.459698181818181, 6.942771449275362, 3.8292232679738563, 2.1717743262411346, 0.0, 5.507135665500583, 8.687097304964539, 5.743834901960785, 4.628514299516908, 2.3649245454545453, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = tuple(
    # all 60 rows (indices 0-59) are identical
    (0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666,
     0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666,
     0.16666666666666666, 0.16666666666666666, 1)
    for _ in range(60)
)
"""
Parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
84, # 1
)
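# The entropy and child_seed_index constants above follow NumPy's
# parallel-RNG recipe linked in the docstring. A minimal sketch of how
# they would be consumed (only the two stored constants come from this
# file; every other name below is illustrative):

```python
import numpy as np

# Stored reproducibility parameters (from the section above).
entropy = 258194110137029475889902652135037600173
child_seed_index = (1, 84)

# A SeedSequence built with spawn_key=(i,) is exactly the i-th child of
# SeedSequence(entropy).spawn(...), so the stored indices let us rebuild
# the same independent streams without re-spawning every child.
rngs = [
    np.random.default_rng(np.random.SeedSequence(entropy, spawn_key=(i,)))
    for i in child_seed_index
]

# Each generator now yields a reproducible, independent stream.
draws = [rng.random() for rng in rngs]
```

# Reconstructing the SeedSequence with the same entropy and spawn_key
# always reproduces the same stream, which is the point of storing the
# child indices instead of the generator states themselves.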

# ======================================================================
# File: Code-Collection/Energy-Staggering-Wobbling/python/test_data.py
# Repo: basavyr/physics-code-collection (MIT) @ 6ce50ec184ff2de081d0ca29e679e54dbb21f592
# ======================================================================
#!/Users/robertpoenaru/.pyenv/shims/python
import numpy as np
import matplotlib.pyplot as plt
import finder
import importer
import plotter
import staggering
import data_analysis as MAKE
MAKE.Data_Analysis('Ru', 108)
MAKE.Data_Analysis('Ru', 110)
MAKE.Data_Analysis('Ru', 112)
MAKE.Data_Analysis('Pd', 112)
MAKE.Data_Analysis('Pd', 114)

# ======================================================================
# File: pcl_segmentation/configs/__init__.py
# Repo: TillBeemelmanns/PCLSegmentation (MIT) @ 902539b58214a6de377acd8de18af5b29deea802
# ======================================================================
from .SqueezeSegV2 import SqueezeSegV2Config

# ======================================================================
# File: temboo/core/Library/Zendesk/Users/__init__.py
# Repo: jordanemedlock/psychtruths (Apache-2.0) @ 52e09033ade9608bd5143129f8a1bfac22d634dd
# ======================================================================
from temboo.Library.Zendesk.Users.CreateManyUsers import CreateManyUsers, CreateManyUsersInputSet, CreateManyUsersResultSet, CreateManyUsersChoreographyExecution
from temboo.Library.Zendesk.Users.CreateUser import CreateUser, CreateUserInputSet, CreateUserResultSet, CreateUserChoreographyExecution
from temboo.Library.Zendesk.Users.DeleteUser import DeleteUser, DeleteUserInputSet, DeleteUserResultSet, DeleteUserChoreographyExecution
from temboo.Library.Zendesk.Users.ListAllUsers import ListAllUsers, ListAllUsersInputSet, ListAllUsersResultSet, ListAllUsersChoreographyExecution
from temboo.Library.Zendesk.Users.ListUsersByGroup import ListUsersByGroup, ListUsersByGroupInputSet, ListUsersByGroupResultSet, ListUsersByGroupChoreographyExecution
from temboo.Library.Zendesk.Users.ListUsersByOrganization import ListUsersByOrganization, ListUsersByOrganizationInputSet, ListUsersByOrganizationResultSet, ListUsersByOrganizationChoreographyExecution
from temboo.Library.Zendesk.Users.SearchUsers import SearchUsers, SearchUsersInputSet, SearchUsersResultSet, SearchUsersChoreographyExecution
from temboo.Library.Zendesk.Users.ShowCurrentUser import ShowCurrentUser, ShowCurrentUserInputSet, ShowCurrentUserResultSet, ShowCurrentUserChoreographyExecution
from temboo.Library.Zendesk.Users.ShowUser import ShowUser, ShowUserInputSet, ShowUserResultSet, ShowUserChoreographyExecution
from temboo.Library.Zendesk.Users.SuspendUser import SuspendUser, SuspendUserInputSet, SuspendUserResultSet, SuspendUserChoreographyExecution
from temboo.Library.Zendesk.Users.UpdateUser import UpdateUser, UpdateUserInputSet, UpdateUserResultSet, UpdateUserChoreographyExecution
from temboo.Library.Zendesk.Users.UpdateUserImage import UpdateUserImage, UpdateUserImageInputSet, UpdateUserImageResultSet, UpdateUserImageChoreographyExecution

# ======================================================================
# File: tests/test_reportseff.py
# Repo: sglim2/reportseff (MIT) @ 9ac9f6acfcd7b48b5a3f4190067a864d99334a23
# ======================================================================
import pytest
from reportseff import reportseff
from reportseff.job_collection import Job_Collection
from reportseff.output_renderer import Output_Renderer
from reportseff.db_inquirer import Sacct_Inquirer
from click.testing import CliRunner
@pytest.fixture
def mock_inquirer(mocker):
def mock_valid(self):
return ('JobID,State,Elapsed,JobIDRaw,State,TotalCPU,AllocCPUS,'
'REQMEM,NNodes,MaxRSS,Timelimit').split(',')
mocker.patch.object(Sacct_Inquirer, 'get_valid_formats', new=mock_valid)
def test_directory_input(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'1|01:27:42|24418435|24418435||1|1Gn|'
'COMPLETED|03:00:00|01:27:29\n'
'1|01:27:42|24418435.batch|24418435.batch|499092K|1|1Gn|'
'COMPLETED||01:27:29\n'
'1|01:27:42|24418435.extern|24418435.extern|1376K|1|1Gn|'
'COMPLETED||00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
def set_jobs(self, directory):
self.set_jobs(('24418435',))
mocker.patch.object(Job_Collection, 'set_out_dir', new=set_jobs)
result = runner.invoke(reportseff.reportseff,
'--no-color --format '
'"JobID,State,Elapsed,TimeEff,CPUEff,MemEff"')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:]
assert output[0].split() == [
'24418435', 'COMPLETED', '01:27:42', '48.7%', '99.8%', '47.7%'
]
def test_directory_input_exception(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'24418435|24418435|COMPLETED|1|'
'01:27:29|01:27:42|03:00:00|1Gn||1|\n'
'24418435.batch|24418435.batch|COMPLETED|1|'
'01:27:29|01:27:42||1Gn|499092K|1|1\n'
'24418435.extern|24418435.extern|COMPLETED|1|'
'00:00:00|01:27:42||1Gn|1376K|1|1\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
def set_jobs(self, directory):
raise ValueError('Testing EXCEPTION')
mocker.patch.object(Job_Collection, 'set_out_dir', new=set_jobs)
result = runner.invoke(reportseff.reportseff,
'--no-color')
assert result.exit_code == 1
assert 'Testing EXCEPTION' in result.output
def test_debug_option(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'16|00:00:00|23000233|23000233||1|4000Mc|'
'CANCELLED by 129319|6-00:00:00|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--format '
'"JobID,State,Elapsed,TimeEff,CPUEff,MemEff" '
'--no-color --debug 23000233')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')
assert output[0] == (
'16|00:00:00|23000233|23000233||1|4000Mc|'
'CANCELLED by 129319|6-00:00:00|00:00:00'
)
assert output[3].split() == [
'23000233', 'CANCELLED', '00:00:00', '0.0%', '---', '0.0%'
]
def test_process_failure(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'16|00:00:00|23000233|23000233||1|4000Mc|'
'CANCELLED by 129319|6-00:00:00|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
mocker.patch.object(Job_Collection,
'process_entry',
side_effect=Exception('TESTING'))
result = runner.invoke(reportseff.reportseff,
'--no-color 23000233')
assert result.exit_code != 0
# remove header
output = result.output.split('\n')
assert output[0] == 'Error processing entry: ' + (
"{'AllocCPUS': '16', 'Elapsed': '00:00:00', 'JobID': '23000233', "
"'JobIDRaw': '23000233', 'MaxRSS': '', 'NNodes': '1', "
"'REQMEM': '4000Mc', 'State': 'CANCELLED by 129319', "
"'TotalCPU': '6-00:00:00'}"
)
def test_short_output(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'23000233|23000233|CANCELLED by 129319|16|'
'00:00:00|00:00:00|6-00:00:00|4000Mc||1|\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
mocker.patch('reportseff.reportseff.len', return_value=20)
mocker.patch.object(Output_Renderer,
'format_jobs',
return_value='output')
mock_click = mocker.patch('reportseff.reportseff.click.echo')
result = runner.invoke(reportseff.reportseff,
'--no-color 23000233')
assert result.exit_code == 0
mock_click.assert_called_once_with('output', color=False)
def test_long_output(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'16|00:00:00|23000233|23000233||1|4000Mc|'
'CANCELLED by 129319|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
mocker.patch('reportseff.reportseff.len', return_value=21)
mocker.patch.object(Output_Renderer,
'format_jobs',
return_value='output')
mock_click = mocker.patch('reportseff.reportseff.click.echo_via_pager')
result = runner.invoke(reportseff.reportseff,
'--no-color 23000233')
assert result.exit_code == 0
mock_click.assert_called_once_with('output', color=False)
def test_simple_job(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'1|01:27:42|24418435|24418435||1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.batch|24418435.batch|499092K|1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.extern|24418435.extern|1376K|1|1Gn|'
'COMPLETED|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color 24418435')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:]
assert output[0].split() == [
'24418435', 'COMPLETED', '01:27:42', '99.8%', '47.7%'
]
def test_simple_user(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'1|01:27:42|24418435|24418435||1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.batch|24418435.batch|499092K|1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.extern|24418435.extern|1376K|1|1Gn|'
'COMPLETED|00:00:00\n'
'1|21:14:48|25569410|25569410||1|4000Mc|COMPLETED|19:28:36\n'
'1|21:14:49|25569410.extern|25569410.extern|1548K|1|4000Mc|'
'COMPLETED|00:00:00\n'
'1|21:14:43|25569410.0|25569410.0|62328K|1|4000Mc|COMPLETED|19:28:36\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color --user test')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:]
assert output[0].split() == [
'24418435', 'COMPLETED', '01:27:42', '99.8%', '47.7%'
]
assert output[1].split() == [
'25569410', 'COMPLETED', '21:14:48', '91.7%', '1.6%'
]
def test_format_add(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
mock_jobs = mocker.patch('reportseff.reportseff.get_jobs',
return_value=('Testing', 1))
result = runner.invoke(reportseff.reportseff,
'--no-color --format=test')
assert result.exit_code == 0
mock_jobs.call_args[1] == 'test'
# test adding onto end
result = runner.invoke(reportseff.reportseff,
'--no-color --format=+test')
assert result.exit_code == 0
mock_jobs.call_args[1] == 'JobID%>,State,Elapsed%>,CPUEff,MemEff,test'
def test_since(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'1|01:27:42|24418435|24418435||1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.batch|24418435.batch|499092K|1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.extern|24418435.extern|1376K|1|1Gn|'
'COMPLETED|00:00:00\n'
'1|21:14:48|25569410|25569410||1|4000Mc|COMPLETED|19:28:36\n'
'1|21:14:49|25569410.extern|25569410.extern|1548K|1|4000Mc|'
'COMPLETED|00:00:00\n'
'1|21:14:43|25569410.0|25569410.0|62328K|1|4000Mc|COMPLETED|19:28:36\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color --since 200406 24418435 25569410')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:]
assert output[0].split() == [
'24418435', 'COMPLETED', '01:27:42', '99.8%', '47.7%'
]
assert output[1].split() == [
'25569410', 'COMPLETED', '21:14:48', '91.7%', '1.6%'
]
def test_simple_state(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'1|01:27:42|24418435|24418435||1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.batch|24418435.batch|499092K|1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.extern|24418435.extern|1376K|1|1Gn|'
'COMPLETED|00:00:00\n'
'1|21:14:48|25569410|25569410||1|4000Mc|RUNNING|19:28:36\n'
'1|21:14:49|25569410.extern|25569410.extern|1548K|1|4000Mc|'
'RUNNING|00:00:00\n'
'1|21:14:43|25569410.0|25569410.0|62328K|1|4000Mc|RUNNING|19:28:36\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color --state completed '
'25569410 24418435'
)
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:]
assert output[0].split() == [
'24418435', 'COMPLETED', '01:27:42', '99.8%', '47.7%'
]
# other is suppressed by state filter
assert output[1].split() == []
def test_no_state(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'1|01:27:42|24418435|24418435||1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.batch|24418435.batch|499092K|1|1Gn|'
'COMPLETED|01:27:29\n'
'1|01:27:42|24418435.extern|24418435.extern|1376K|1|1Gn|'
'COMPLETED|00:00:00\n'
'1|21:14:48|25569410|25569410||1|4000Mc|RUNNING|19:28:36\n'
'1|21:14:49|25569410.extern|25569410.extern|1548K|1|4000Mc|'
'RUNNING|00:00:00\n'
'1|21:14:43|25569410.0|25569410.0|62328K|1|4000Mc|RUNNING|19:28:36\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color --state ZZ '
'25569410 24418435'
)
assert result.exit_code == 0
# remove header
output = result.output.split('\n')
assert output[0] == 'Unknown state ZZ'
assert output[1] == 'No valid states provided'
assert output[2].split() == [
'JobID', 'State', 'Elapsed', 'CPUEff', 'MemEff'
]
assert output[3] == ''
def test_array_job_raw_id(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'1|00:09:34|24220929_421|24221219||1|16000Mn|'
'COMPLETED|09:28.052\n'
'1|00:09:34|24220929_421.batch|24221219.batch|5664932K|1|16000Mn|'
'COMPLETED|09:28.051\n'
'1|00:09:34|24220929_421.extern|24221219.extern|1404K|1|16000Mn|'
'COMPLETED|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color 24221219')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:-1]
assert output[0].split() == [
'24220929_421', 'COMPLETED', '00:09:34', '99.0%', '34.6%'
]
assert len(output) == 1
def test_array_job_single(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'1|00:09:34|24220929_421|24221219||1|16000Mn|'
'COMPLETED|09:28.052\n'
'1|00:09:34|24220929_421.batch|24221219.batch|5664932K|1|16000Mn|'
'COMPLETED|09:28.051\n'
'1|00:09:34|24220929_421.extern|24221219.extern|1404K|1|16000Mn|'
'COMPLETED|00:00:00\n'
'1|00:09:33|24220929_431|24221220||1|16000Mn|'
'PENDING|09:27.460\n'
'1|00:09:33|24220929_431.batch|24221220.batch|5518572K|1|16000Mn|'
'PENDING|09:27.459\n'
'1|00:09:33|24220929_431.extern|24221220.extern|1400K|1|16000Mn|'
'PENDING|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color 24220929_421')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:-1]
assert output[0].split() == [
'24220929_421', 'COMPLETED', '00:09:34', '99.0%', '34.6%'
]
assert len(output) == 1


def test_array_job_base(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'1|00:09:34|24220929_421|24221219||1|16000Mn|'
'COMPLETED|09:28.052\n'
'1|00:09:34|24220929_421.batch|24221219.batch|5664932K|1|16000Mn|'
'COMPLETED|09:28.051\n'
'1|00:09:34|24220929_421.extern|24221219.extern|1404K|1|16000Mn|'
'COMPLETED|00:00:00\n'
'1|00:09:33|24220929_431|24221220||1|16000Mn|'
'PENDING|09:27.460\n'
'1|00:09:33|24220929_431.batch|24221220.batch|5518572K|1|16000Mn|'
'PENDING|09:27.459\n'
'1|00:09:33|24220929_431.extern|24221220.extern|1400K|1|16000Mn|'
'PENDING|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color 24220929')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:-1]
assert output[0].split() == [
'24220929_421', 'COMPLETED', '00:09:34', '99.0%', '34.6%'
]
assert output[1].split() == [
'24220929_431', 'PENDING', '---', '---', '---'
]
assert len(output) == 2


def test_sacct_error(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 1
    sub_result.stdout = ''
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color 9999999')
assert result.exit_code == 1
assert 'Error running sacct!' in result.output


def test_empty_sacct(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
    sub_result.stdout = ''
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color 9999999')
assert result.exit_code == 0
output = result.output.split('\n')[:-1]
assert output[0].split() == [
'JobID', 'State', 'Elapsed', 'CPUEff', 'MemEff'
]
assert len(output) == 1


def test_failed_no_mem(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'8|00:00:12|23000381|23000381||1|4000Mc|FAILED|00:00:00\n'
'8|00:00:12|23000381.batch|23000381.batch||1|4000Mc|'
'FAILED|00:00:00\n'
'8|00:00:12|23000381.extern|23000381.extern|1592K|1|4000Mc|'
'COMPLETED|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color 23000381')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:-1]
assert output[0].split() == [
'23000381', 'FAILED', '00:00:12', '---', '0.0%'
]
assert len(output) == 1


def test_canceled_by_other(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'16|00:00:00|23000233|23000233||1|4000Mc|'
'CANCELLED by 129319|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color 23000233')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:-1]
assert output[0].split() == [
'23000233', 'CANCELLED', '00:00:00', '---', '0.0%'
]
assert len(output) == 1


def test_zero_runtime(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=True)
runner = CliRunner()
sub_result = mocker.MagicMock()
sub_result.returncode = 0
sub_result.stdout = (
'8|00:00:00|23000210|23000210||1|20000Mn|'
'FAILED|00:00.007\n'
'8|00:00:00|23000210.batch|23000210.batch|1988K|1|20000Mn|'
'FAILED|00:00.006\n'
'8|00:00:00|23000210.extern|23000210.extern|1556K|1|20000Mn|'
'COMPLETED|00:00:00\n'
)
mocker.patch('reportseff.db_inquirer.subprocess.run',
return_value=sub_result)
result = runner.invoke(reportseff.reportseff,
'--no-color 23000210')
assert result.exit_code == 0
# remove header
output = result.output.split('\n')[1:-1]
assert output[0].split() == [
'23000210', 'FAILED', '00:00:00', '---', '0.0%'
]
assert len(output) == 1


def test_no_systems(mocker, mock_inquirer):
mocker.patch('reportseff.reportseff.which', return_value=None)
runner = CliRunner()
result = runner.invoke(reportseff.reportseff,
'--no-color 23000210')
assert result.exit_code == 1
# remove header
output = result.output.split('\n')
assert output[0] == 'No supported scheduling systems found!'

# --- AS2101_Labwork/4.Submissions/Task 4/func.py (repo: kirtan2605/Coursework_Codes, license: MIT) ---
import numpy as np
def f(x):
"""
returns the function value at input
Input:
1.x : input number, (in radian)
Output: function value at x
"""
return np.sin(x)
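A brief, self-contained usage sketch for `f` (not part of the original file; the sample inputs are illustrative):

```python
import numpy as np


def f(x):
    """Returns the function value at input x (in radians)."""
    return np.sin(x)


# evaluate at a couple of sample points
print(f(0.0))        # 0.0
print(f(np.pi / 2))  # 1.0
```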

# --- basically_ti_basic/__init__.py (repo: TabulateJarl8/basically-ti-basic, license: MIT) ---
from .__version__ import __title__, __description__, __url__, __version__, __author__, __author_email__, __license__, __copyright__
# from basically_ti_basic.compiler.__init__ import PrgmCompiler
# from basically_ti_basic.files.__init__ import TIPrgmFile
from basically_ti_basic.main import compile_file, decompile_file

# --- test.py (repo: marcelrsoub/fdbpm-python, license: MIT) ---
import PySide2.QtCore
# Prints PySide2 version
print(PySide2.__version__)
# Prints the Qt version used to compile PySide2
print(PySide2.QtCore.__version__)

# --- wellcomeml/metrics/__init__.py (repo: wellcometrust/WellcomeML, license: MIT) ---
from .f1 import f1_loss, f1_metric
from .ner_classification_report import ner_classification_report
__all__ = ["ner_classification_report", "f1_metric", "f1_loss"]

# --- testing/test_games_deck.py (repo: egouletlang/logmein-assignment, license: MIT) ---
from util import Client
from test_base import BaseTest


class TestGamesPlayers(BaseTest):
# GET /games/{game_id}/deck
def test_getGamesDeck_checkForPresence(self):
game_id, _, _ = self.setup_game()
status, response = self.client.get('/games/%s/deck' % game_id)
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)

    def test_getGamesDeck_checkForPresence_invalidGameId(self):
game_id, _, _ = self.setup_game()
status, response = self.client.get('/games/%s/deck' % self.randomString())
self.assertEqual(status, 404)

    # POST /games/{game_id}/deck/add
def test_getGamesDeck_add(self):
game_id, _, _ = self.setup_game()
status, response = self.client.post('/games/%s/deck/add' % game_id)
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)
self.assertNotEqual(response.get('cards'), None)
self.assertEqual(len(response.get('cards')), 52)
status, response = self.client.post('/games/%s/deck/add' % game_id)
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)
self.assertNotEqual(response.get('cards'), None)
self.assertEqual(len(response.get('cards')), 104)

    def test_getGamesDeck_add_invalidGameId(self):
game_id = self.create_test_game()
player_1_id = self.create_test_player()
player_2_id = self.create_test_player()
self.add_player_to_game(game_id, player_1_id)
self.add_player_to_game(game_id, player_2_id)
status, response = self.client.post('/games/%s/deck/add' % self.randomString())
self.assertEqual(status, 404)

    # POST /games/{game_id}/deck/shuffle
def test_getGamesDeck_shuffle(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/shuffle' % game_id)
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)
self.assertNotEqual(response.get('cards'), None)
self.assertNotEqual(response.get('cards')[0], 0)

    def test_getGamesDeck_shuffle_invalidGameId(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/shuffle' % self.randomString())
self.assertEqual(status, 404)

    # POST /games/{game_id}/deck/deal
def test_getGamesDeck_deal(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/deal' % game_id)
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)
self.assertNotEqual(response.get('cards'), None)
self.assertEqual(len(response.get('cards')), 50) # one to each player

    def test_getGamesDeck_deal_invalidGameId(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/deal' % self.randomString())
self.assertEqual(status, 404)

    def test_getGamesDeck_dealWithValidCount(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/deal' % game_id, {'count': 5})
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)
self.assertNotEqual(response.get('cards'), None)
self.assertEqual(len(response.get('cards')), 42) # 5 to each player

    def test_getGamesDeck_dealWithInvalidCount(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/deal' % game_id, {'count': -1})
self.assertEqual(status, 400)

    def test_getGamesDeck_dealWithValidPlayer(self):
game_id, player_1_id, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/deal' % game_id, {'id': player_1_id})
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)
self.assertNotEqual(response.get('cards'), None)
self.assertEqual(len(response.get('cards')), 51) # 1 to each player to player_1_id

    def test_getGamesDeck_dealWithInvalidPlayer(self):
game_id, player_1_id, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/deal' % game_id, {'id': self.randomString()})
self.assertEqual(status, 400)

    # POST /games/{game_id}/deck/collect
def test_getGamesDeck_collect(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/deal' % game_id)
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)
self.assertNotEqual(response.get('cards'), None)
self.assertEqual(len(response.get('cards')), 50) # one to each player
status, response = self.client.post('/games/%s/deck/collect' % game_id)
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)
self.assertNotEqual(response.get('cards'), None)
self.assertEqual(len(response.get('cards')), 52) # one to each player

    def test_getGamesDeck_collect_invalidGameId(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
status, response = self.client.post('/games/%s/deck/deal' % game_id)
self.assertEqual(status, 200)
self.assertIsInstance(response, dict)
self.assertNotEqual(response.get('cards'), None)
self.assertEqual(len(response.get('cards')), 50) # one to each player
status, response = self.client.post('/games/%s/deck/collect' % self.randomString())
self.assertEqual(status, 404)

    # GET /games/{game_id}/deck/remaining
def test_getGamesDeck_remaining(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
self.shuffle_deck(game_id)
status, response = self.client.get('/games/%s/deck/remaining' % game_id)
self.assertEqual(len(response), 52)
for remaining in response:
self.assertEqual(remaining.get('count'), 1)
self.add_deck(game_id)
self.add_deck(game_id)
self.add_deck(game_id)
self.add_deck(game_id)
status, response = self.client.get('/games/%s/deck/remaining' % game_id)
self.assertEqual(len(response), 52)
for remaining in response:
self.assertEqual(remaining.get('count'), 5)

    def test_getGamesDeck_remaining_invalidGameId(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
self.shuffle_deck(game_id)
status, response = self.client.get('/games/%s/deck/remaining' % self.randomString())
self.assertEqual(status, 404)

    # GET /games/{game_id}/deck/remaining/suit
def test_getGamesDeck_remainingSuit(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
self.shuffle_deck(game_id)
status, response = self.client.get('/games/%s/deck/remaining/suit' % game_id)
self.assertEqual(status, 200)
self.assertEqual(len(response), 4)
for remaining in response:
self.assertEqual(remaining.get('count'), 13)
self.add_deck(game_id)
self.add_deck(game_id)
self.add_deck(game_id)
self.add_deck(game_id)
status, response = self.client.get('/games/%s/deck/remaining/suit' % game_id)
self.assertEqual(status, 200)
self.assertEqual(len(response), 4)
for remaining in response:
self.assertEqual(remaining.get('count'), 65)

    def test_getGamesDeck_remainingSuit_invalidGameId(self):
game_id, _, _ = self.setup_game()
self.add_deck(game_id)
self.shuffle_deck(game_id)
status, response = self.client.get('/games/%s/deck/remaining/suit' % self.randomString())
self.assertEqual(status, 404)

# --- mqtt/client/test/test_pubsubs.py (repo: drewp/twisted-mqtt, license: MIT) ---
# ----------------------------------------------------------------------
# Copyright (C) 2015 by Rafael Gonzalez
# #
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#----------------------------------------------------------------------
from twisted.trial import unittest
from twisted.test import proto_helpers
from twisted.internet import task, defer, error
from mqtt import v31
from mqtt.error import MQTTWindowError
from mqtt.pdu import CONNACK, PUBACK, PUBREC, PUBREL, PUBCOMP
from mqtt.client.base import MQTTBaseProtocol
from mqtt.client.factory import MQTTFactory
from mqtt.client.subscriber import MQTTProtocol as MQTTSubscriberProtocol
from mqtt.client.publisher import MQTTProtocol as MQTTPublisherProtocol
from mqtt.client.pubsubs import MQTTProtocol as MQTTPubSubsProtocol


class TestMQTTPublisherDisconnect(unittest.TestCase):
'''
Testing various cases of disconnect callback
'''
def setUp(self):
'''
        Set up a connected state
'''
self.transport = proto_helpers.StringTransportWithDisconnection()
self.clock = task.Clock()
MQTTBaseProtocol.callLater = self.clock.callLater
self.factory = MQTTFactory(MQTTFactory.PUBLISHER | MQTTFactory.SUBSCRIBER)
self._rebuild()
self.disconnected = 0

    def _connect(self, cleanStart=True):
'''
Go to connected state
'''
ack = CONNACK()
ack.session = False
ack.resultCode = 0
ack.encode()
self.protocol.connect("TwistedMQTT-pubsubs", keepalive=0, cleanStart=cleanStart, version=v31)
self.transport.clear()
self.protocol.dataReceived(ack.encoded)

    def _disconnected(self, reason):
self.disconnected += 1

    def _serverDown(self):
self.transport.loseConnection()
self.transport.clear()
del self.protocol

    def _rebuild(self):
self.protocol = self.factory.buildProtocol(0)
self.transport.protocol = self.protocol
MQTTBaseProtocol.callLater = self.clock.callLater
self.protocol.makeConnection(self.transport)

    def test_disconnect_1(self):
'''Just connect and lose the transport'''
self._connect()
self.protocol.onDisconnection = self._disconnected
self.transport.loseConnection()
self.assertEqual(self.disconnected, 1)

    def test_disconnect_2(self):
'''connect and disconnect'''
self._connect()
self.protocol.onDisconnection = self._disconnected
self.protocol.disconnect()
self.assertEqual(self.disconnected, 1)

    def test_disconnect_3(self):
'''connect, generate a deferred and lose the transport'''
self._connect()
self.protocol.onDisconnection = self._disconnected
d = self.protocol.publish(topic="foo/bar/baz1", qos=1, message="hello world 1")
self.transport.clear()
self.transport.loseConnection()
self.assertEqual(self.disconnected, 1)
self.failureResultOf(d).trap(error.ConnectionDone)

    def test_disconnect_4(self):
'''connect, generate a deferred and disconnect'''
self._connect()
self.protocol.onDisconnection = self._disconnected
d = self.protocol.publish(topic="foo/bar/baz1", qos=1, message="hello world 1")
self.transport.clear()
self.protocol.disconnect()
self.assertEqual(self.disconnected, 1)
self.failureResultOf(d).trap(error.ConnectionDone)

    def test_disconnect_5(self):
'''connect with persistent session, generate a deferred and disconnect'''
self._connect(cleanStart=False)
self.protocol.onDisconnection = self._disconnected
d = self.protocol.publish(topic="foo/bar/baz1", qos=1, message="hello world 1")
self.transport.clear()
self.protocol.disconnect()
self.assertEqual(self.disconnected, 1)
self.assertNoResult(d)

    def test_disconnect_6(self):
'''connect with persistent session, generate a deferred , rebuilds protocol'''
self._connect(cleanStart=False)
self.protocol.onDisconnection = self._disconnected
d = self.protocol.publish(topic="foo/bar/baz1", qos=1, message="hello world 1")
self._serverDown()
self._rebuild()
self.assertEqual(self.disconnected, 1)
self.assertNoResult(d)


class TestMQTTSubscriberDisconnect(unittest.TestCase):
'''
Testing various cases of disconnect callback
'''
def setUp(self):
'''
        Set up a connected state
'''
self.transport = proto_helpers.StringTransportWithDisconnection()
self.clock = task.Clock()
MQTTBaseProtocol.callLater = self.clock.callLater
self.factory = MQTTFactory(MQTTFactory.PUBLISHER | MQTTFactory.SUBSCRIBER)
self._rebuild()
self.disconnected = 0

    def _connect(self, cleanStart=True):
'''
Go to connected state
'''
ack = CONNACK()
ack.session = False
ack.resultCode = 0
ack.encode()
self.protocol.connect("TwistedMQTT-sub", keepalive=0, cleanStart=cleanStart, version=v31)
self.transport.clear()
self.protocol.dataReceived(ack.encoded)

    def _disconnected(self, reason):
self.disconnected += 1

    def _rebuild(self):
self.protocol = self.factory.buildProtocol(0)
self.transport.protocol = self.protocol
MQTTBaseProtocol.callLater = self.clock.callLater
self.protocol.makeConnection(self.transport)

    def _serverDown(self):
self.transport.loseConnection()
self.transport.clear()
del self.protocol

    def test_disconnect_1(self):
'''Just connect and lose the transport'''
self._connect()
self.protocol.onDisconnection = self._disconnected
self.transport.loseConnection()
self.assertEqual(self.disconnected, 1)

    def test_disconnect_2(self):
'''connect and disconnect'''
self._connect()
self.protocol.onDisconnection = self._disconnected
self.protocol.disconnect()
self.assertEqual(self.disconnected, 1)

    def test_disconnect_3(self):
'''connect, generate a deferred and lose the transport'''
self._connect()
self.protocol.onDisconnection = self._disconnected
d = self.protocol.subscribe("foo/bar/baz1", 2 )
self.transport.clear()
self.transport.loseConnection()
self.assertEqual(self.disconnected, 1)
self.failureResultOf(d).trap(error.ConnectionDone)

    def test_disconnect_4(self):
'''connect, generate a deferred and disconnect'''
self._connect()
self.protocol.onDisconnection = self._disconnected
d = self.protocol.subscribe("foo/bar/baz1", 2 )
self.transport.clear()
self.protocol.disconnect()
self.assertEqual(self.disconnected, 1)
self.failureResultOf(d).trap(error.ConnectionDone)

    def test_disconnect_5(self):
'''connect with persistent session,
        generate a deferred that will not errback
and then disconnect'''
self._connect(cleanStart=False)
self.protocol.onDisconnection = self._disconnected
d = self.protocol.subscribe("foo/bar/baz1", 2 )
self.transport.clear()
self.protocol.disconnect()
self.assertEqual(self.disconnected, 1)
self.assertNoResult(d)

    def test_disconnect_6(self):
'''connect with persistent session,
generate a deferred that will not errback yet,
then rebuilds protocol'''
self._connect(cleanStart=False)
self.protocol.onDisconnection = self._disconnected
d = self.protocol.subscribe("foo/bar/baz1", 2 )
self._serverDown()
self._rebuild()
self.assertEqual(self.disconnected, 1)
self.assertNoResult(d)

# --- osmchadjango/feature/tests/test_views.py (repo: juusokor/osmcha-django, license: BSD-2-Clause) ---
import json
from datetime import date, datetime
from django.urls import reverse
from django.conf import settings
from django.contrib.gis.geos import GEOSGeometry
from rest_framework.authtoken.models import Token
from rest_framework.test import APITestCase
from social_django.models import UserSocialAuth
from ...changeset.models import (Tag, SuspicionReasons, Changeset)
from ...changeset.tests.modelfactories import TagFactory
from ...changeset.views import PaginatedCSVRenderer
from ...users.models import User
from ..models import Feature
from ..views import FeatureListAPIView
from .modelfactories import (
FeatureFactory, CheckedFeatureFactory, WayFeatureFactory
)


class TestCreateFeature(APITestCase):
def setUp(self):
self.fixture = json.load(open(
settings.APPS_DIR.path('feature/tests/fixtures/way-23.json')(),
'r'
))
self.new_fixture = json.load(open(
settings.APPS_DIR.path('feature/tests/fixtures/way-24.json')(),
'r'
))
self.unvisible_fixture = json.load(open(
settings.APPS_DIR.path('feature/tests/fixtures/way-23-with-unvisible-reason.json')(),
'r'
))
self.user = User.objects.create_user(
username='test',
password='password',
email='a@a.com',
is_staff=True,
)
UserSocialAuth.objects.create(
user=self.user,
provider='openstreetmap',
uid='123123',
)
self.token = Token.objects.create(user=self.user)

    def test_create_feature(self):
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.fixture),
content_type="application/json",
HTTP_AUTHORIZATION='Token {}'.format(self.token.key)
)
self.assertEqual(response.status_code, 201)
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.new_fixture),
content_type="application/json",
HTTP_AUTHORIZATION='Token {}'.format(self.token.key)
)
self.assertEqual(response.status_code, 201)
self.assertEqual(Feature.objects.count(), 2)
self.assertEqual(SuspicionReasons.objects.count(), 2)
self.assertEqual(
SuspicionReasons.objects.filter(is_visible=True).count(), 2
)
self.assertEqual(Changeset.objects.filter(is_suspect=True).count(), 2)
feature = Feature.objects.get(
osm_id=169218447, changeset__id=42893048
)
self.assertEqual(feature.url, 'way-169218447')
self.assertEqual(feature.reasons.count(), 2)
self.assertTrue(
feature.geometry.equals(
GEOSGeometry(json.dumps(self.fixture.get('geometry')))
)
)
self.assertTrue(
feature.old_geometry.equals(
GEOSGeometry(
json.dumps(self.fixture['properties']['oldVersion'].get('geometry'))
)
)
)
self.assertIn('properties', feature.old_geojson.keys())
self.assertIn('geometry', feature.old_geojson.keys())
self.assertIn('properties', feature.geojson.keys())
self.assertNotIn('suspicions', feature.geojson['properties'].keys())
self.assertIn('geometry', feature.geojson.keys())

    def test_unathenticated_request(self):
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.fixture),
content_type="application/json",
)
self.assertEqual(response.status_code, 401)

    def test_is_not_staff_user_request(self):
user = User.objects.create_user(
username='test_2',
password='password',
email='b@a.com',
)
UserSocialAuth.objects.create(
user=user,
provider='openstreetmap',
uid='444444',
)
token = Token.objects.create(user=user)
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.new_fixture),
content_type="application/json",
HTTP_AUTHORIZATION='Token {}'.format(token.key)
)
self.assertEqual(response.status_code, 403)
self.assertEqual(Feature.objects.count(), 0)

    def test_update_changeset(self):
Changeset.objects.create(
id=self.fixture['properties'].get('osm:changeset'),
uid=self.fixture['properties'].get('osm:uid'),
user=self.fixture['properties'].get('osm:user'),
date=datetime.utcfromtimestamp(
self.fixture['properties'].get('osm:timestamp') / 1000
),
is_suspect=False
)
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.fixture),
content_type="application/json",
HTTP_AUTHORIZATION='Token {}'.format(self.token.key)
)
self.assertEqual(response.status_code, 201)
self.assertEqual(Changeset.objects.count(), 1)
self.assertEqual(Changeset.objects.filter(is_suspect=True).count(), 1)

    def test_invalid_geometry(self):
self.fixture['geometry'] = {}
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.fixture),
content_type="application/json",
HTTP_AUTHORIZATION='Token {}'.format(self.token.key)
)
self.assertEqual(response.status_code, 400)
self.assertEqual(Feature.objects.count(), 0)
self.assertEqual(Changeset.objects.count(), 0)
self.assertEqual(SuspicionReasons.objects.count(), 0)

    def test_create_feature_with_is_visible_false(self):
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.unvisible_fixture),
content_type="application/json",
HTTP_AUTHORIZATION='Token {}'.format(self.token.key)
)
self.assertEqual(response.status_code, 201)
self.assertEqual(Feature.objects.count(), 1)
self.assertEqual(SuspicionReasons.objects.count(), 1)
self.assertEqual(
SuspicionReasons.objects.filter(is_visible=False).count(), 1
)
self.assertEqual(Changeset.objects.filter(is_suspect=True).count(), 0)
def test_create_feature_two_times_with_different_reasons(self):
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.fixture),
content_type="application/json",
HTTP_AUTHORIZATION='Token {}'.format(self.token.key)
)
self.assertEqual(response.status_code, 201)
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.unvisible_fixture),
content_type="application/json",
HTTP_AUTHORIZATION='Token {}'.format(self.token.key)
)
self.assertEqual(response.status_code, 201)
self.assertEqual(Feature.objects.count(), 1)
self.assertEqual(Feature.objects.get(osm_id=169218447).reasons.count(), 3)
self.assertEqual(SuspicionReasons.objects.count(), 3)
self.assertEqual(
SuspicionReasons.objects.filter(is_visible=False).count(), 1
)
self.assertEqual(Changeset.objects.filter(is_suspect=True).count(), 1)
def test_create_feature_with_is_visible_false_and_suspect_changeset(self):
Changeset.objects.create(
id=self.unvisible_fixture['properties'].get('osm:changeset'),
uid=self.unvisible_fixture['properties'].get('osm:uid'),
user=self.unvisible_fixture['properties'].get('osm:user'),
date=datetime.utcfromtimestamp(
self.unvisible_fixture['properties'].get('osm:timestamp') / 1000
),
is_suspect=True
)
response = self.client.post(
reverse('feature:create'),
data=json.dumps(self.unvisible_fixture),
content_type="application/json",
HTTP_AUTHORIZATION='Token {}'.format(self.token.key)
)
self.assertEqual(response.status_code, 201)
self.assertEqual(Feature.objects.count(), 1)
self.assertEqual(SuspicionReasons.objects.count(), 1)
self.assertEqual(
SuspicionReasons.objects.filter(is_visible=False).count(), 1
)
self.assertEqual(Changeset.objects.filter(is_suspect=True).count(), 1)
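The visibility rule these tests pin down — a reason created with `is_visible=False` still attaches to the feature, but never marks the changeset as suspect — can be sketched as a plain predicate. This is a hypothetical helper inferred from the assertions above, not the project's actual implementation:

```python
def changeset_is_suspect(reasons):
    """Assumed rule, inferred from the tests above: a changeset is
    flagged as suspect only when at least one *visible* suspicion
    reason is attached to one of its features."""
    return any(r.get("is_visible", True) for r in reasons)

# A feature flagged only by a hidden reason leaves its changeset clean:
print(changeset_is_suspect([{"name": "hidden reason", "is_visible": False}]))  # False
print(changeset_is_suspect([{"name": "new mapper edits"}]))  # True
```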
class TestFeatureListAPIView(APITestCase):
def setUp(self):
FeatureFactory.create_batch(15)
WayFeatureFactory.create_batch(15)
CheckedFeatureFactory.create_batch(15)
CheckedFeatureFactory.create_batch(15, harmful=False)
self.url = reverse('feature:list')
def test_list_view(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.data.get('features')), 50)
self.assertEqual(response.data.get('count'), 60)
def test_pagination(self):
response = self.client.get(self.url, {'page': 2})
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.data['features']), 10)
self.assertEqual(response.data['count'], 60)
# test page_size parameter
response = self.client.get(self.url, {'page_size': 60})
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.data['features']), 60)
def test_filters(self):
response = self.client.get(self.url, {'in_bbox': '40,13,43,15'})
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data['count'], 60)
response = self.client.get(
self.url,
{'in_bbox': '40,13,43,15', 'checked': 'true'}
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data['count'], 30)
response = self.client.get(self.url, {'in_bbox': '-3.17,-91.98,-2.1,-90.5'})
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data['count'], 0)
response = self.client.get(self.url, {'harmful': 'true'})
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data['count'], 15)
response = self.client.get(self.url, {'harmful': 'false'})
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data['count'], 15)
def test_csv_renderer(self):
self.assertIn(PaginatedCSVRenderer, FeatureListAPIView().renderer_classes)
response = self.client.get(self.url, {'format': 'csv', 'page_size': 70})
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.data['features']), 60)
response = self.client.get(self.url, {'format': 'csv', 'checked': True})
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.data['features']), 30)
class TestOrderingOfFeatureListAPIView(APITestCase):
def setUp(self):
CheckedFeatureFactory.create_batch(10)
self.url = reverse('feature:list')
def test_ordering(self):
# default ordering is by descending changeset id
response = self.client.get(self.url)
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.all()]
)
# ascending changeset id
response = self.client.get(self.url, {'order_by': 'changeset_id'})
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.order_by('changeset_id')]
)
# descending changeset date
response = self.client.get(self.url, {'order_by': '-changeset__date'})
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.order_by('-changeset__date')]
)
# ascending changeset date
response = self.client.get(self.url, {'order_by': 'changeset__date'})
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.order_by('changeset__date')]
)
# ascending id
response = self.client.get(self.url, {'order_by': 'id'})
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.order_by('id')]
)
# descending id
response = self.client.get(self.url, {'order_by': '-id'})
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.order_by('-id')]
)
# ascending osm_id
response = self.client.get(self.url, {'order_by': 'osm_id'})
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.order_by('osm_id')]
)
# descending osm_id
response = self.client.get(self.url, {'order_by': '-osm_id'})
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.order_by('-osm_id')]
)
# ascending check_date
response = self.client.get(self.url, {'order_by': 'check_date'})
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.order_by('check_date')]
)
# descending check_date
response = self.client.get(self.url, {'order_by': '-check_date'})
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[i.id for i in Feature.objects.order_by('-check_date')]
)
def test_ordering_by_number_reasons(self):
feature_1, feature_2 = Feature.objects.all()[:2]
self.reason_1 = SuspicionReasons.objects.create(name='possible import')
self.reason_1.features.add(feature_1)
self.reason_2 = SuspicionReasons.objects.create(name='suspect word')
self.reason_2.features.add(feature_1, feature_2)
response = self.client.get(
self.url,
{'order_by': '-number_reasons', 'page_size': 2}
)
self.assertEqual(
[i['id'] for i in response.data.get('features')],
[feature_1.id, feature_2.id]
)
response = self.client.get(self.url, {'order_by': 'number_reasons'})
self.assertEqual(
[i['id'] for i in response.data.get('features')[-2:]],
[feature_2.id, feature_1.id]
)
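The `number_reasons` ordering asserted above can be expressed outside the ORM as a sort keyed on the reason count. This is only a sketch of the expected behaviour, not the view's actual queryset annotation:

```python
# Features with their attached suspicion reasons (illustrative data):
features = [
    {"id": 1, "reasons": ["possible import", "suspect word"]},
    {"id": 2, "reasons": ["suspect word"]},
    {"id": 3, "reasons": []},
]

# Descending by number of attached reasons, as '-number_reasons' would order:
ordered = sorted(features, key=lambda f: len(f["reasons"]), reverse=True)
print([f["id"] for f in ordered])  # [1, 2, 3]
```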
class TestFeatureDetailAPIView(APITestCase):
def setUp(self):
self.feature = CheckedFeatureFactory()
def test_feature_detail_view(self):
response = self.client.get(
reverse(
'feature:detail',
args=[self.feature.changeset.id, self.feature.url]
)
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data.get('id'), self.feature.id)
self.assertEqual(
response.data['properties']['osm_id'],
self.feature.osm_id
)
self.assertEqual(
response.data['properties']['osm_link'],
self.feature.osm_link()
)
self.assertIsInstance(
response.data['properties']['date'],
date
)
self.assertEqual(
response.data['properties']['source'],
self.feature.changeset.source
)
self.assertEqual(
response.data['properties']['comment'],
self.feature.changeset.comment
)
self.assertEqual(
response.data['properties']['imagery_used'],
self.feature.changeset.imagery_used
)
self.assertEqual(
response.data['properties']['editor'],
self.feature.changeset.editor
)
self.assertEqual(
response.data['properties']['url'],
self.feature.url
)
self.assertEqual(
response.data['properties']['checked'],
self.feature.checked
)
self.assertEqual(
response.data['properties']['harmful'],
self.feature.harmful
)
self.assertEqual(
response.data['properties']['check_user'],
self.feature.check_user.name
)
self.assertEqual(
response.data['properties']['changeset'],
self.feature.changeset.id
)
self.assertIn('properties', response.data.keys())
self.assertIn('geojson', response.data['properties'].keys())
self.assertIn('check_date', response.data['properties'].keys())
self.assertIn('old_geojson', response.data['properties'].keys())
self.assertIn('geometry', response.data.keys())
class TestReasonsAndTagsFields(APITestCase):
def setUp(self):
self.feature = CheckedFeatureFactory()
self.tag = Tag.objects.create(name='Vandalism')
self.tag.features.add(self.feature)
self.private_tag = Tag.objects.create(
name='Bad feature in my city',
is_visible=False
)
self.private_tag.features.add(self.feature)
self.reason = SuspicionReasons.objects.create(
name='new mapper edits'
)
self.reason.features.add(self.feature)
self.private_reason = SuspicionReasons.objects.create(
name='Suspicious Feature in my city',
is_visible=False
)
self.private_reason.features.add(self.feature)
self.user = User.objects.create_user(
username='test',
password='password',
email='a@a.com',
is_staff=True
)
UserSocialAuth.objects.create(
user=self.user,
provider='openstreetmap',
uid='123123',
)
def test_detail_view_with_normal_user(self):
response = self.client.get(
reverse(
'feature:detail',
args=[self.feature.changeset.id, self.feature.url]
)
)
self.assertIn(
{'id': self.reason.id, 'name': 'new mapper edits'},
response.data['properties']['reasons']
)
self.assertIn(
{'id': self.tag.id, 'name': 'Vandalism'},
response.data['properties']['tags']
)
self.assertNotIn(
{'id': self.private_reason.id, 'name': 'Suspicious Feature in my city'},
response.data['properties']['reasons']
)
self.assertNotIn(
{'id': self.private_tag.id, 'name': 'Bad feature in my city'},
response.data['properties']['tags']
)
def test_list_view_with_normal_user(self):
response = self.client.get(reverse('feature:list'))
self.assertIn(
{'id': self.reason.id, 'name': 'new mapper edits'},
response.data['features'][0]['properties']['reasons']
)
self.assertIn(
{'id': self.tag.id, 'name': 'Vandalism'},
response.data['features'][0]['properties']['tags']
)
self.assertNotIn(
{'id': self.private_reason.id, 'name': 'Suspicious Feature in my city'},
response.data['features'][0]['properties']['reasons']
)
self.assertNotIn(
{'id': self.private_tag.id, 'name': 'Bad feature in my city'},
response.data['features'][0]['properties']['tags']
)
def test_detail_view_with_admin(self):
self.client.login(username=self.user.username, password='password')
response = self.client.get(
reverse(
'feature:detail',
args=[self.feature.changeset.id, self.feature.url]
)
)
self.assertIn(
{'id': self.reason.id, 'name': 'new mapper edits'},
response.data['properties']['reasons']
)
self.assertIn(
{'id': self.tag.id, 'name': 'Vandalism'},
response.data['properties']['tags']
)
self.assertIn(
{'id': self.private_reason.id, 'name': 'Suspicious Feature in my city'},
response.data['properties']['reasons']
)
self.assertIn(
{'id': self.private_tag.id, 'name': 'Bad feature in my city'},
response.data['properties']['tags']
)
def test_list_view_with_admin(self):
self.client.login(username=self.user.username, password='password')
response = self.client.get(reverse('feature:list'))
self.assertIn(
{'id': self.reason.id, 'name': 'new mapper edits'},
response.data['features'][0]['properties']['reasons']
)
self.assertIn(
{'id': self.tag.id, 'name': 'Vandalism'},
response.data['features'][0]['properties']['tags']
)
self.assertIn(
{'id': self.private_reason.id, 'name': 'Suspicious Feature in my city'},
response.data['features'][0]['properties']['reasons']
)
self.assertIn(
{'id': self.private_tag.id, 'name': 'Bad feature in my city'},
response.data['features'][0]['properties']['tags']
)
class TestCheckFeatureViews(APITestCase):
def setUp(self):
self.feature = FeatureFactory()
self.user = User.objects.create_user(
username='test_user',
password='password',
email='a@a.com'
)
UserSocialAuth.objects.create(
user=self.user,
provider='openstreetmap',
uid='44444',
)
self.changeset_user = User.objects.create_user(
username='test',
password='password',
email='b@a.com'
)
UserSocialAuth.objects.create(
user=self.changeset_user,
provider='openstreetmap',
uid='123123',
)
self.tag_1 = Tag.objects.create(name='Illegal import')
self.tag_2 = Tag.objects.create(name='Vandalism')
self.set_harmful_url = reverse(
'feature:set-harmful',
args=[self.feature.changeset.id, self.feature.url]
)
self.set_good_url = reverse(
'feature:set-good',
args=[self.feature.changeset.id, self.feature.url]
)
def test_check_feature_unauthenticated(self):
response = self.client.put(self.set_harmful_url)
self.assertEqual(response.status_code, 401)
self.feature.refresh_from_db()
self.assertIsNone(self.feature.harmful)
self.assertFalse(self.feature.checked)
response = self.client.put(self.set_good_url)
self.assertEqual(response.status_code, 401)
self.feature.refresh_from_db()
self.assertIsNone(self.feature.harmful)
self.assertFalse(self.feature.checked)
def test_set_harmful_feature_not_allowed(self):
"""User can't mark the feature as harmful because he is the author of
the changeset that modified the feature.
"""
self.client.login(username=self.changeset_user.username, password='password')
response = self.client.put(self.set_harmful_url)
self.assertEqual(response.status_code, 403)
self.feature.refresh_from_db()
self.assertIsNone(self.feature.harmful)
self.assertFalse(self.feature.checked)
self.assertIsNone(self.feature.check_user)
self.assertIsNone(self.feature.check_date)
def test_set_good_feature_not_allowed(self):
"""User can't mark the feature as good because he is the author of
the changeset that modified the feature.
"""
self.client.login(username=self.changeset_user.username, password='password')
response = self.client.put(self.set_good_url)
self.assertEqual(response.status_code, 403)
self.feature.refresh_from_db()
self.assertIsNone(self.feature.harmful)
self.assertFalse(self.feature.checked)
self.assertIsNone(self.feature.check_user)
self.assertIsNone(self.feature.check_date)
def test_set_harmful_feature(self):
self.client.login(username=self.user.username, password='password')
response = self.client.put(
self.set_harmful_url,
{'tags': [self.tag_1.id, self.tag_2.id]},
)
self.assertEqual(response.status_code, 200)
self.feature.refresh_from_db()
self.assertTrue(self.feature.harmful)
self.assertTrue(self.feature.checked)
self.assertEqual(self.feature.tags.count(), 2)
self.assertIn(self.tag_1, self.feature.tags.all())
self.assertIn(self.tag_2, self.feature.tags.all())
def test_set_harmful_feature_invalid_tag_ids(self):
self.client.login(username=self.user.username, password='password')
response = self.client.put(
self.set_harmful_url,
{'tags': [self.tag_1.id, 345435, 898734]},
)
self.assertEqual(response.status_code, 400)
self.feature.refresh_from_db()
self.assertIsNone(self.feature.harmful)
self.assertFalse(self.feature.checked)
self.assertIsNone(self.feature.check_user)
self.assertIsNone(self.feature.check_date)
self.assertEqual(self.feature.tags.count(), 0)
def test_set_harmful_feature_without_tags(self):
self.client.login(username=self.user.username, password='password')
response = self.client.put(self.set_harmful_url)
self.assertEqual(response.status_code, 200)
self.feature.refresh_from_db()
self.assertTrue(self.feature.harmful)
self.assertTrue(self.feature.checked)
def test_set_good_feature(self):
self.client.login(username=self.user.username, password='password')
response = self.client.put(
self.set_good_url,
{'tags': [self.tag_1.id, self.tag_2.id]},
)
self.assertEqual(response.status_code, 200)
self.feature.refresh_from_db()
self.assertFalse(self.feature.harmful)
self.assertTrue(self.feature.checked)
self.assertEqual(self.feature.tags.count(), 2)
self.assertIn(self.tag_1, self.feature.tags.all())
self.assertIn(self.tag_2, self.feature.tags.all())
def test_set_good_feature_invalid_tag_ids(self):
self.client.login(username=self.user.username, password='password')
response = self.client.put(
self.set_good_url,
{'tags': [self.tag_1.id, 43543, 43523]},
)
self.assertEqual(response.status_code, 400)
self.feature.refresh_from_db()
self.assertIsNone(self.feature.harmful)
self.assertFalse(self.feature.checked)
self.assertIsNone(self.feature.check_user)
self.assertIsNone(self.feature.check_date)
self.assertEqual(self.feature.tags.count(), 0)
def test_set_good_feature_without_tags(self):
self.client.login(username=self.user.username, password='password')
response = self.client.put(self.set_good_url)
self.assertEqual(response.status_code, 200)
self.feature.refresh_from_db()
self.assertFalse(self.feature.harmful)
self.assertTrue(self.feature.checked)
def test_try_to_check_feature_already_checked(self):
feature = CheckedFeatureFactory()
self.client.login(username=self.user.username, password='password')
# first try to mark a checked feature as good
response = self.client.put(
reverse('feature:set-good', args=[feature.changeset.id, feature.url])
)
self.assertEqual(response.status_code, 403)
feature.refresh_from_db()
self.assertNotEqual(feature.check_user, self.user)
# now try to mark a checked feature as harmful
response = self.client.put(
reverse('feature:set-harmful', args=[feature.changeset.id, feature.url]),
{'tags': [self.tag_1.id, self.tag_2.id]},
)
self.assertEqual(response.status_code, 403)
feature.refresh_from_db()
self.assertNotEqual(feature.check_user, self.user)
def test_404(self):
self.client.login(username=self.user.username, password='password')
response = self.client.put(
reverse('feature:set-good', args=[4988787832, 'way-16183212']),
)
self.assertEqual(response.status_code, 404)
response = self.client.put(
reverse('feature:set-harmful', args=[4988787832, 'way-16183212']),
)
self.assertEqual(response.status_code, 404)
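The permission rules `TestCheckFeatureViews` exercises — unauthenticated requests are rejected, the changeset author may not review their own feature, and an already-checked feature cannot be re-checked — can be summarised in a small decision helper. Names and the dict layout are assumptions for illustration only:

```python
def can_check_feature(feature, user):
    """Return (allowed, http_status) for a set-good/set-harmful request,
    mirroring the expectations in the tests above (sketch only)."""
    if user is None:
        return False, 401   # unauthenticated
    if feature["checked"]:
        return False, 403   # already reviewed by someone
    if feature["changeset_user"] == user:
        return False, 403   # authors may not review their own edits
    return True, 200
```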
class TestThrottling(APITestCase):
def setUp(self):
self.features = FeatureFactory.create_batch(5)
self.user = User.objects.create_user(
username='test_user',
password='password',
email='a@a.com'
)
UserSocialAuth.objects.create(
user=self.user,
provider='openstreetmap',
uid='44444',
)
def test_set_harmful_throttling(self):
"""A user can only check 3 features each minute"""
self.client.login(username=self.user.username, password='password')
for f in self.features:
response = self.client.put(
reverse('feature:set-harmful', args=[f.changeset.id, f.url])
)
self.assertEqual(response.status_code, 429)
self.assertEqual(Feature.objects.filter(checked=True).count(), 3)
def test_set_good_throttling(self):
self.client.login(username=self.user.username, password='password')
for f in self.features:
response = self.client.put(
reverse('feature:set-good', args=[f.changeset.id, f.url])
)
self.assertEqual(response.status_code, 429)
self.assertEqual(Feature.objects.filter(checked=True).count(), 3)
def test_mixed_throttling(self):
"""Test if both set_harmful and set_good views are throttled together."""
self.client.login(username=self.user.username, password='password')
for f in self.features[:3]:
response = self.client.put(
reverse('feature:set-good', args=[f.changeset.id, f.url])
)
self.assertEqual(response.status_code, 200)
response = self.client.put(
reverse(
'feature:set-harmful',
args=[self.features[3].changeset.id, self.features[3].url]
)
)
self.assertEqual(response.status_code, 429)
self.assertEqual(Feature.objects.filter(checked=True).count(), 3)
def test_set_harmful_by_staff_user(self):
"""Staff users have not limit of checked features by minute."""
user = User.objects.create_user(
username='test_staff',
password='password',
email='a@a.com',
is_staff=True
)
UserSocialAuth.objects.create(
user=user,
provider='openstreetmap',
uid='8987',
)
self.client.login(username=user.username, password='password')
for f in self.features:
response = self.client.put(
reverse('feature:set-harmful', args=[f.changeset.id, f.url])
)
self.assertEqual(response.status_code, 200)
self.assertEqual(Feature.objects.filter(checked=True).count(), 5)
def test_set_good_by_staff_user(self):
"""Staff users have not limit of checked features by minute."""
user = User.objects.create_user(
username='test_staff',
password='password',
email='a@a.com',
is_staff=True
)
UserSocialAuth.objects.create(
user=user,
provider='openstreetmap',
uid='8987',
)
self.client.login(username=user.username, password='password')
for f in self.features:
response = self.client.put(
reverse('feature:set-good', args=[f.changeset.id, f.url])
)
self.assertEqual(response.status_code, 200)
self.assertEqual(Feature.objects.filter(checked=True).count(), 5)
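The throttling behaviour asserted above — at most three checks per minute, with staff users exempt — amounts to a fixed-window rate limiter. A minimal sketch of that idea follows; the class name and the 3-per-60-seconds rate are assumptions taken from the tests, not the project's DRF throttle configuration:

```python
import time

class CheckRateLimiter:
    """Fixed-window limiter: at most `limit` checks per `window`
    seconds per user, with staff users exempt (rates assumed from
    the tests above)."""
    def __init__(self, limit=3, window=60):
        self.limit = limit
        self.window = window
        self.history = {}  # username -> list of request timestamps

    def allow(self, username, is_staff=False, now=None):
        if is_staff:
            return True  # staff users are never throttled
        now = time.monotonic() if now is None else now
        # Keep only the timestamps still inside the window:
        stamps = [t for t in self.history.get(username, []) if now - t < self.window]
        if len(stamps) >= self.limit:
            self.history[username] = stamps
            return False
        stamps.append(now)
        self.history[username] = stamps
        return True

limiter = CheckRateLimiter()
print([limiter.allow("test_user", now=i) for i in range(5)])
# [True, True, True, False, False]
```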
class TestUncheckFeatureView(APITestCase):
def setUp(self):
self.user = User.objects.create_user(
username='test_2',
password='password',
email='a@a.com'
)
UserSocialAuth.objects.create(
user=self.user,
provider='openstreetmap',
uid='123123',
)
self.feature = FeatureFactory()
self.good_feature = CheckedFeatureFactory(
check_user=self.user, harmful=False
)
self.harmful_feature = CheckedFeatureFactory(check_user=self.user)
self.harmful_feature_2 = CheckedFeatureFactory()
self.tag = Tag.objects.create(name='Vandalism')
self.tag.features.set(
[self.harmful_feature, self.harmful_feature_2, self.good_feature]
)
def test_unauthenticated_response(self):
response = self.client.put(
reverse(
'feature:uncheck',
args=[self.harmful_feature.changeset, self.harmful_feature.url]
)
)
self.assertEqual(response.status_code, 401)
self.harmful_feature.refresh_from_db()
self.assertTrue(self.harmful_feature.harmful)
self.assertTrue(self.harmful_feature.checked)
self.assertEqual(self.harmful_feature.check_user, self.user)
self.assertIsNotNone(self.harmful_feature.check_date)
self.assertEqual(self.harmful_feature.tags.count(), 1)
self.assertIn(self.tag, self.harmful_feature.tags.all())
def test_uncheck_harmful_feature(self):
self.client.login(username=self.user.username, password='password')
response = self.client.put(
reverse(
'feature:uncheck',
args=[self.harmful_feature.changeset, self.harmful_feature.url]
)
)
self.assertEqual(response.status_code, 200)
self.harmful_feature.refresh_from_db()
self.assertIsNone(self.harmful_feature.harmful)
self.assertFalse(self.harmful_feature.checked)
self.assertIsNone(self.harmful_feature.check_user)
self.assertIsNone(self.harmful_feature.check_date)
self.assertEqual(self.harmful_feature.tags.count(), 1)
self.assertIn(self.harmful_feature, self.tag.features.all())
def test_uncheck_good_feature(self):
self.client.login(username=self.user.username, password='password')
response = self.client.put(
reverse(
'feature:uncheck',
args=[self.good_feature.changeset, self.good_feature.url]
)
)
self.assertEqual(response.status_code, 200)
self.good_feature.refresh_from_db()
self.assertIsNone(self.good_feature.harmful)
self.assertFalse(self.good_feature.checked)
self.assertIsNone(self.good_feature.check_user)
self.assertIsNone(self.good_feature.check_date)
self.assertEqual(self.good_feature.tags.count(), 1)
def test_user_uncheck_permission(self):
"""User can only uncheck features that he checked."""
self.client.login(username=self.user.username, password='password')
response = self.client.put(
reverse(
'feature:uncheck',
args=[self.harmful_feature_2.changeset, self.harmful_feature_2.url]
)
)
self.assertEqual(response.status_code, 403)
self.harmful_feature_2.refresh_from_db()
self.assertTrue(self.harmful_feature_2.harmful)
self.assertTrue(self.harmful_feature_2.checked)
self.assertIsNotNone(self.harmful_feature_2.check_user)
self.assertIsNotNone(self.harmful_feature_2.check_date)
def test_try_to_uncheck_unchecked_feature(self):
"""It's not possible to uncheck an unchecked feature!"""
self.client.login(username=self.user.username, password='password')
response = self.client.put(
reverse(
'feature:uncheck',
args=[self.feature.changeset, self.feature.url]
)
)
self.assertEqual(response.status_code, 403)
def test_staff_user_can_uncheck_any_feature(self):
"""A staff user can uncheck feature checked by any user"""
staff_user = User.objects.create_user(
username='staff_test',
password='password',
email='s@a.com',
is_staff=True
)
UserSocialAuth.objects.create(
user=staff_user,
provider='openstreetmap',
uid='87873',
)
self.client.login(username=staff_user.username, password='password')
response = self.client.put(
reverse(
'feature:uncheck',
args=[self.good_feature.changeset, self.good_feature.url]
)
)
self.assertEqual(response.status_code, 200)
response = self.client.put(
reverse(
'feature:uncheck',
args=[self.harmful_feature_2.changeset, self.harmful_feature_2.url]
)
)
self.assertEqual(response.status_code, 200)
response = self.client.put(
reverse(
'feature:uncheck',
args=[self.harmful_feature.changeset, self.harmful_feature.url]
)
)
self.assertEqual(response.status_code, 200)
self.assertEqual(Feature.objects.filter(checked=True).count(), 0)
class TestAddTagToFeature(APITestCase):
def setUp(self):
self.user = User.objects.create_user(
username='user',
email='c@a.com',
password='password',
)
UserSocialAuth.objects.create(
user=self.user,
provider='openstreetmap',
uid='999',
)
self.changeset_user = User.objects.create_user(
username='test',
email='b@a.com',
password='password',
)
UserSocialAuth.objects.create(
user=self.changeset_user,
provider='openstreetmap',
uid='123123',
)
self.feature = FeatureFactory()
self.checked_feature = CheckedFeatureFactory(check_user=self.user)
self.tag = TagFactory(name='Not verified', for_feature=True)
def test_unauthenticated_can_not_add_tag(self):
response = self.client.post(
reverse(
'feature:tags',
args=[self.feature.changeset.id, self.feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 401)
self.assertEqual(self.feature.tags.count(), 0)
def test_can_not_add_invalid_tag_id(self):
"""When the tag id does not exist, it will return a 404 response."""
self.client.login(username=self.user.username, password='password')
response = self.client.post(
reverse(
'feature:tags',
args=[self.feature.changeset.id, self.feature.url, 876343]
)
)
self.assertEqual(response.status_code, 404)
self.assertEqual(self.feature.tags.count(), 0)
def test_add_tag(self):
"""A user that is not the creator of the feature can add tags to an
unchecked feature.
"""
self.client.login(username=self.user.username, password='password')
response = self.client.post(
reverse(
'feature:tags',
args=[self.feature.changeset.id, self.feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 200)
self.assertEqual(self.feature.tags.count(), 1)
self.assertIn(self.tag, self.feature.tags.all())
# test add the same tag again
response = self.client.post(
reverse(
'feature:tags',
args=[self.feature.changeset.id, self.feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 200)
self.assertEqual(self.feature.tags.count(), 1)
def test_add_tag_by_feature_owner(self):
"""The user that created the feature can not add tags to it."""
self.client.login(username=self.changeset_user.username, password='password')
response = self.client.post(
reverse(
'feature:tags',
args=[self.feature.changeset.id, self.feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 403)
self.assertEqual(self.feature.tags.count(), 0)
def test_add_tag_to_checked_feature(self):
"""The user that checked the feature can add tags to it."""
self.client.login(username=self.user.username, password='password')
response = self.client.post(
reverse(
'feature:tags',
args=[self.checked_feature.changeset.id, self.checked_feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 200)
self.assertEqual(self.checked_feature.tags.count(), 1)
self.assertIn(self.tag, self.checked_feature.tags.all())
def test_other_user_can_not_add_tag_to_checked_feature(self):
"""A non staff user can not add tags to a feature that other user have
checked.
"""
other_user = User.objects.create_user(
username='other_user',
email='b@a.com',
password='password',
)
UserSocialAuth.objects.create(
user=other_user,
provider='openstreetmap',
uid='28763',
)
self.client.login(username=other_user.username, password='password')
response = self.client.post(
reverse(
'feature:tags',
args=[self.checked_feature.changeset.id, self.checked_feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 403)
self.assertEqual(self.checked_feature.tags.count(), 0)
def test_staff_user_add_tag_to_checked_feature(self):
"""A staff user can add tags to a feature."""
staff_user = User.objects.create_user(
username='admin',
email='b@a.com',
password='password',
is_staff=True
)
UserSocialAuth.objects.create(
user=staff_user,
provider='openstreetmap',
uid='28763',
)
self.client.login(username=staff_user.username, password='password')
response = self.client.post(
reverse(
'feature:tags',
args=[self.checked_feature.changeset.id, self.checked_feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 200)
self.assertEqual(self.checked_feature.tags.count(), 1)
self.assertIn(self.tag, self.checked_feature.tags.all())
class TestRemoveTagFromFeature(APITestCase):
def setUp(self):
self.user = User.objects.create_user(
username='user',
email='c@a.com',
password='password',
)
UserSocialAuth.objects.create(
user=self.user,
provider='openstreetmap',
uid='999',
)
self.changeset_user = User.objects.create_user(
username='test',
email='b@a.com',
password='password',
)
UserSocialAuth.objects.create(
user=self.changeset_user,
provider='openstreetmap',
uid='123123',
)
self.feature = FeatureFactory()
self.checked_feature = CheckedFeatureFactory(check_user=self.user)
self.tag = TagFactory(name='Not verified')
self.feature.tags.add(self.tag)
self.checked_feature.tags.add(self.tag)
def test_unauthenticated_can_not_remove_tag(self):
response = self.client.delete(
reverse(
'feature:tags',
args=[self.feature.changeset.id, self.feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 401)
self.assertEqual(self.feature.tags.count(), 1)
def test_can_not_remove_invalid_tag_id(self):
"""When the tag id does not exist it will return a 404 response."""
self.client.login(username=self.user.username, password='password')
response = self.client.delete(
reverse(
'feature:tags',
args=[self.feature.changeset.id, self.feature.url, 433232]
)
)
self.assertEqual(response.status_code, 404)
def test_remove_tag(self):
"""A user that is not the creator of the changeset can remote tags to an
unchecked changeset.
"""
self.client.login(username=self.user.username, password='password')
response = self.client.delete(
reverse(
'feature:tags',
args=[self.feature.changeset.id, self.feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 200)
self.assertEqual(self.feature.tags.count(), 0)
def test_remove_tag_by_changeset_owner(self):
"""The user that created the changeset can not remove its tags."""
self.client.login(username=self.changeset_user.username, password='password')
response = self.client.delete(
reverse(
'feature:tags',
args=[self.feature.changeset.id, self.feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 403)
self.assertEqual(self.feature.tags.count(), 1)
def test_remove_tag_of_checked_changeset(self):
"""The user that checked the changeset can remove its tags."""
self.client.login(username=self.user.username, password='password')
response = self.client.delete(
reverse(
'feature:tags',
args=[self.checked_feature.changeset.id, self.checked_feature.url, self.tag.id]
)
)
self.assertEqual(response.status_code, 200)
self.assertEqual(self.checked_feature.tags.count(), 0)

    def test_other_user_can_not_remove_tag_to_checked_changeset(self):
        """A non-staff user cannot remove tags of a changeset that another
        user has checked.
        """
        other_user = User.objects.create_user(
            username='other_user',
            email='b@a.com',
            password='password',
        )
        UserSocialAuth.objects.create(
            user=other_user,
            provider='openstreetmap',
            uid='28763',
        )
        self.client.login(username=other_user.username, password='password')
        response = self.client.delete(
            reverse(
                'feature:tags',
                args=[self.checked_feature.changeset.id, self.checked_feature.url, self.tag.id]
            )
        )
        self.assertEqual(response.status_code, 403)
        self.assertEqual(self.checked_feature.tags.count(), 1)

    def test_staff_user_remove_tag_to_checked_changeset(self):
        """A staff user can remove tags from a checked changeset."""
        staff_user = User.objects.create_user(
            username='admin',
            email='b@a.com',
            password='password',
            is_staff=True
        )
        UserSocialAuth.objects.create(
            user=staff_user,
            provider='openstreetmap',
            uid='28763',
        )
        self.client.login(username=staff_user.username, password='password')
        response = self.client.delete(
            reverse(
                'feature:tags',
                args=[self.checked_feature.changeset.id, self.checked_feature.url, self.tag.id]
            )
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(self.checked_feature.tags.count(), 0)

# File: addressbook/events.py (kctzstyle/python-gui-addressbook, MIT)
from abc import ABCMeta


class EventHandler(metaclass=ABCMeta):
    @staticmethod
    def on_click(e, **kwargs):
        pass

    @staticmethod
    def on_message(e, **kwargs):
        pass


class AddressBookEventHandler(EventHandler):
    @staticmethod
    def on_click(e, **kwargs):
        data = kwargs.get('data')
        return data
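A short, self-contained usage sketch of the handler hierarchy above (classes re-declared so the snippet runs on its own; the sample payload is made up):

```python
from abc import ABCMeta


class EventHandler(metaclass=ABCMeta):
    """Base class: handlers are collections of static event callbacks."""

    @staticmethod
    def on_click(e, **kwargs):
        pass


class AddressBookEventHandler(EventHandler):
    @staticmethod
    def on_click(e, **kwargs):
        # Echo back whatever payload was attached to the event, if any.
        return kwargs.get('data')


print(AddressBookEventHandler.on_click(None, data={'name': 'Ada'}))  # {'name': 'Ada'}
```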

# File: test/__init__.py (ltricot/py86, MIT)
from . import testasm

# File: tests/python_args/args.py (hixio-mh/plugin-python, MIT)
def hello(*args):
    print("hello world", args)

# File: gammagl/io/__init__.py (BUPT-GAMMA/GammaGL, Apache-2.0)
from .txt_array import read_txt_array
from .tu import remove_self_loops

# File: examples/mcell4/hybrid_circadian_clock/mcell4_particle_based/geometry.py (mcellteam/mcell-tests, MIT)
# WARNING: This is an automatically generated file and will be overwritten
# by CellBlender on the next model export.
import mcell as m

# ---- Cube ----
# originally: 0.806008994579315
Cube_vertex_list = [
    [-0.125, -0.125, -0.125],
    [-0.125, -0.125, 0.125],
    [-0.125, 0.125, -0.125],
    [-0.125, 0.125, 0.125],
    [0.125, -0.125, -0.125],
    [0.125, -0.125, 0.125],
    [0.125, 0.125, -0.125],
    [0.125, 0.125, 0.125]
]  # Cube_vertex_list

Cube_wall_list = [
    [3, 0, 1],
    [7, 2, 3],
    [5, 6, 7],
    [1, 4, 5],
    [2, 4, 0],
    [7, 1, 5],
    [3, 2, 0],
    [7, 6, 2],
    [5, 4, 6],
    [1, 0, 4],
    [2, 6, 4],
    [7, 3, 1]
]  # Cube_wall_list

Cube = m.GeometryObject(
    name='Cube',
    vertex_list=Cube_vertex_list,
    wall_list=Cube_wall_list,
    surface_regions=[]
)
# ^^^^ Cube ^^^^
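As a quick illustration, a standalone sanity check of the mesh data above (vertex and wall lists copied verbatim; the notion of "valid" used here, triangular walls with in-range indices, is an assumption, not part of the generated file):

```python
vertex_list = [
    [-0.125, -0.125, -0.125], [-0.125, -0.125, 0.125],
    [-0.125, 0.125, -0.125], [-0.125, 0.125, 0.125],
    [0.125, -0.125, -0.125], [0.125, -0.125, 0.125],
    [0.125, 0.125, -0.125], [0.125, 0.125, 0.125],
]
wall_list = [
    [3, 0, 1], [7, 2, 3], [5, 6, 7], [1, 4, 5], [2, 4, 0], [7, 1, 5],
    [3, 2, 0], [7, 6, 2], [5, 4, 6], [1, 0, 4], [2, 6, 4], [7, 3, 1],
]

# Every wall must be a triangle whose indices point into the vertex list.
assert all(len(w) == 3 for w in wall_list)
assert all(0 <= i < len(vertex_list) for w in wall_list for i in w)
print("mesh OK:", len(vertex_list), "vertices,", len(wall_list), "walls")
```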

# File: lmcut/__init__.py (wannaphong/ThaiLMCUT, MIT)
from .lmcut import tokenize

# File: gfapy/line/edge/containment/__init__.py (ujjwalsh/gfapy, ISC)
from .containment import Containment

# File: dclnt/statistic.py (alexyvassili/dclnt, MIT)
import collections


def get_top_words(words, top_size=10) -> 'collections.Counter list':
    return collections.Counter(words).most_common(top_size)
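A standalone run of the counting helper above (re-declared here, without the string annotation, so the snippet is self-contained):

```python
import collections


def get_top_words(words, top_size=10):
    # Rank words by frequency; `most_common` returns (word, count) pairs.
    return collections.Counter(words).most_common(top_size)


print(get_top_words(['def', 'get', 'def', 'run', 'def', 'get'], top_size=2))
# [('def', 3), ('get', 2)]
```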

# File: steps/optimize.py (yukiizm/z-quantum-core, Apache-2.0)
import json
from typing import List, Optional, Union
import numpy as np
import zquantum.core.circuits as new_circuits
from openfermion import SymbolicOperator
from zquantum.core.circuits import Circuit
from zquantum.core.cost_function import (
    AnsatzBasedCostFunction,
    get_ground_state_cost_function,
)
from zquantum.core.estimation import estimate_expectation_values_by_averaging
from zquantum.core.openfermion import load_qubit_operator
from zquantum.core.serialization import (
    load_array,
    save_array,
    save_optimization_results,
)
from zquantum.core.typing import Specs
from zquantum.core.utils import create_object, load_list


def optimize_parametrized_circuit_for_ground_state_of_operator(
    optimizer_specs: Specs,
    target_operator: Union[SymbolicOperator, str],
    parametrized_circuit: Union[Circuit, str],
    backend_specs: Specs,
    estimation_method_specs: Optional[Specs] = None,
    estimation_preprocessors_specs: Optional[List[Specs]] = None,
    initial_parameters: Optional[Union[str, np.ndarray, List[float]]] = None,
    fixed_parameters: Optional[Union[np.ndarray, str]] = None,
    parameter_precision: Optional[float] = None,
    parameter_precision_seed: Optional[int] = None,
    keep_history: bool = True,
    **kwargs,
):
    """Optimize the parameters of a parametrized quantum circuit to prepare the ground
    state of a target operator.

    Args:
        optimizer_specs: The specs of the optimizer to use to refine the parameter
            values
        target_operator: The operator of which to prepare the ground state
        parametrized_circuit: The parametrized quantum circuit that prepares trial
            states
        backend_specs: The specs of the quantum backend (or simulator) to use to run the
            circuits
        estimation_method_specs: A reference to a callable to use to estimate the
            expectation value of the operator. The default is the
            estimate_expectation_values_by_averaging function.
        estimation_preprocessors_specs: A list of Specs that describe callable functions
            that adhere to the EstimationPreprocessor protocol.
        initial_parameters: The initial parameter values to begin optimization
        fixed_parameters: values for the circuit parameters that should be fixed.
        parameter_precision: the standard deviation of the Gaussian noise to add to each
            parameter, if any.
        parameter_precision_seed: seed for randomly generating parameter deviation if
            using parameter_precision
        keep_history: flag indicating whether to store optimization history.
        kwargs: unused, exists for compatibility
    """
    if isinstance(optimizer_specs, str):
        optimizer_specs = json.loads(optimizer_specs)
    optimizer = create_object(optimizer_specs)
    if isinstance(target_operator, str):
        target_operator = load_qubit_operator(target_operator)
    if isinstance(parametrized_circuit, str):
        with open(parametrized_circuit) as f:
            parametrized_circuit = new_circuits.circuit_from_dict(json.load(f))
    if isinstance(backend_specs, str):
        backend_specs = json.loads(backend_specs)
    backend = create_object(backend_specs)
    if estimation_method_specs is not None:
        if isinstance(estimation_method_specs, str):
            estimation_method_specs = json.loads(estimation_method_specs)
        estimation_method = create_object(estimation_method_specs)
    else:
        estimation_method = estimate_expectation_values_by_averaging
    estimation_preprocessors = []
    if estimation_preprocessors_specs is not None:
        for estimation_preprocessor_specs in estimation_preprocessors_specs:
            if isinstance(estimation_preprocessor_specs, str):
                estimation_preprocessor_specs = json.loads(
                    estimation_preprocessor_specs
                )
            estimation_preprocessors.append(
                create_object(estimation_preprocessor_specs)
            )
    if initial_parameters is not None:
        if isinstance(initial_parameters, str):
            initial_parameters = load_array(initial_parameters)
    if fixed_parameters is not None:
        if isinstance(fixed_parameters, str):
            fixed_parameters = load_array(fixed_parameters)

    cost_function = get_ground_state_cost_function(
        target_operator,
        parametrized_circuit,
        backend,
        estimation_method=estimation_method,
        estimation_preprocessors=estimation_preprocessors,
        fixed_parameters=fixed_parameters,
        parameter_precision=parameter_precision,
        parameter_precision_seed=parameter_precision_seed,
    )
    optimization_results = optimizer.minimize(
        cost_function, initial_parameters, keep_history
    )

    save_optimization_results(optimization_results, "optimization-results.json")
    save_array(optimization_results.opt_params, "optimized-parameters.json")


def optimize_ansatz_based_cost_function(
    optimizer_specs: Specs,
    target_operator: Union[SymbolicOperator, str],
    ansatz_specs: Specs,
    backend_specs: Specs,
    estimation_method_specs: Optional[Specs] = None,
    estimation_preprocessors_specs: Optional[List[Specs]] = None,
    initial_parameters: Optional[Union[str, np.ndarray, List[float]]] = None,
    fixed_parameters: Optional[Union[np.ndarray, str]] = None,
    parameter_precision: Optional[float] = None,
    parameter_precision_seed: Optional[int] = None,
    keep_history: bool = False,
    **kwargs,
):
    """Optimize the parameters of an ansatz circuit to prepare the ground state of a
    target operator.

    Args:
        optimizer_specs: The specs of the optimizer to use to refine the parameter
            values
        target_operator: The operator of which to prepare the ground state
        ansatz_specs: The specs describing an Ansatz which will prepare the quantum
            circuit
        backend_specs: The specs of the quantum backend (or simulator) to use to run the
            circuits
        estimation_method_specs: A reference to a callable to use to estimate the
            expectation value of the operator. The default is the
            estimate_expectation_values_by_averaging function.
        estimation_preprocessors_specs: A list of Specs that describe callable functions
            that adhere to the EstimationPreprocessor protocol.
        initial_parameters: The initial parameter values to begin optimization
        fixed_parameters: values for the circuit parameters that should be fixed.
        parameter_precision: the standard deviation of the Gaussian noise to add to each
            parameter, if any.
        parameter_precision_seed: seed for randomly generating parameter deviation if
            using parameter_precision
        keep_history: flag indicating whether to store optimization history.
        kwargs:
            The following keyword arguments are handled explicitly when appropriate:
            - thetas: A list of thetas used to initialize the WarmStartQAOAAnsatz
    """
    if isinstance(optimizer_specs, str):
        optimizer_specs = json.loads(optimizer_specs)
    optimizer = create_object(optimizer_specs)
    if isinstance(target_operator, str):
        target_operator = load_qubit_operator(target_operator)
    if isinstance(ansatz_specs, str):
        ansatz_specs = json.loads(ansatz_specs)
    if "WarmStartQAOAAnsatz" in ansatz_specs["function_name"]:
        ansatz_specs["thetas"] = np.array(load_list(kwargs.pop("thetas")))
        ansatz_specs["cost_hamiltonian"] = target_operator
    elif "QAOA" in ansatz_specs["function_name"]:
        ansatz_specs["cost_hamiltonian"] = target_operator
    ansatz = create_object(ansatz_specs)
    if isinstance(backend_specs, str):
        backend_specs = json.loads(backend_specs)
    backend = create_object(backend_specs)
    if estimation_method_specs is not None:
        if isinstance(estimation_method_specs, str):
            estimation_method_specs = json.loads(estimation_method_specs)
        estimation_method = create_object(estimation_method_specs)
    else:
        estimation_method = estimate_expectation_values_by_averaging
    estimation_preprocessors = []
    if estimation_preprocessors_specs is not None:
        for estimation_preprocessor_specs in estimation_preprocessors_specs:
            if isinstance(estimation_preprocessor_specs, str):
                estimation_preprocessor_specs = json.loads(
                    estimation_preprocessor_specs
                )
            estimation_preprocessors.append(
                create_object(estimation_preprocessor_specs)
            )
    if initial_parameters is not None:
        if isinstance(initial_parameters, str):
            initial_parameters = load_array(initial_parameters)
    if fixed_parameters is not None:
        if isinstance(fixed_parameters, str):
            fixed_parameters = load_array(fixed_parameters)

    cost_function = AnsatzBasedCostFunction(
        target_operator,
        ansatz,
        backend,
        estimation_method=estimation_method,
        estimation_preprocessors=estimation_preprocessors,
        fixed_parameters=fixed_parameters,
        parameter_precision=parameter_precision,
        parameter_precision_seed=parameter_precision_seed,
    )
    optimization_results = optimizer.minimize(
        cost_function, initial_parameters, keep_history
    )

    save_optimization_results(optimization_results, "optimization-results.json")
    save_array(optimization_results.opt_params, "optimized-parameters.json")
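Both steps above normalize every `*_specs` argument the same way: a spec may arrive as a dict or as a JSON-encoded string, and is decoded before being handed to `create_object`. A minimal standalone sketch of just that normalization step (`normalize_specs` and the example spec values are illustrative, not part of zquantum):

```python
import json


def normalize_specs(specs):
    # Mirror the repeated `isinstance(..., str)` checks in the steps above:
    # a JSON string is decoded, a dict passes through unchanged.
    if isinstance(specs, str):
        specs = json.loads(specs)
    return specs


as_dict = {"module_name": "zquantum.optimizers", "function_name": "ScipyOptimizer"}
as_json = json.dumps(as_dict)
assert normalize_specs(as_json) == normalize_specs(as_dict) == as_dict
print("normalized:", normalize_specs(as_json)["function_name"])  # normalized: ScipyOptimizer
```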

# File: holobot/discord/components/__init__.py (rexor12/holobot, MIT)
from .component_interaction_processor import ComponentInteractionProcessor
from .component_transformer import ComponentTransformer
from .icomponent_interaction_processor import IComponentInteractionProcessor
from .icomponent_transformer import IComponentTransformer

# File: django_client_data/app_settings.py (enricobarzetti/django_client_data, MIT)
from django.conf import settings

CLIENT_DATA_NAMESPACE = getattr(settings, 'CLIENT_DATA_NAMESPACE', 'DJANGO')
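The one-liner above follows the common "overridable app setting" pattern: read a value from Django settings, fall back to a default. A standalone sketch with a stand-in object replacing `django.conf.settings` (the `FakeSettings` class is hypothetical; only the `getattr` fallback semantics are demonstrated):

```python
class FakeSettings:
    """Stand-in for django.conf.settings (hypothetical)."""


settings = FakeSettings()

# No override defined yet: the default wins.
NAMESPACE = getattr(settings, 'CLIENT_DATA_NAMESPACE', 'DJANGO')
print(NAMESPACE)  # DJANGO

# After a project defines the setting, the override wins.
settings.CLIENT_DATA_NAMESPACE = 'CUSTOM'
print(getattr(settings, 'CLIENT_DATA_NAMESPACE', 'DJANGO'))  # CUSTOM
```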

# File: xeda/plugins/lwc/__init__.py (XedaHQ/xeda, Apache-2.0)
from .flows import *

# File: cedar/config/tests/test_config_core.py (ceholden/cedar-datacube, BSD-3-Clause)
"""Tests for :py:mod:`cedar.config.core`."""
from cedar.config import core

# File: __init__.py (nick-terry/Splitting-GP, MIT)
from GP import GP
from DGP import DGP
from DataSet import DataSet
from dgp import gBCM
from dgp import gPoE
from dgp import PoE

# File: src/File.py (CorentinGaillard/MyLittleERP, MIT)
# coding=UTF-8
from Rule import *


class File(Rule):
    pass

# File: cvpods/utils/file/__init__.py (hanqiu-hq/cvpods, Apache-2.0)
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from .download import *
from .file_io import *
from .serialize import *

# File: eosUtils/__init__.py (hysds/eosUtils, Apache-2.0)
import granule
import granule2
import misc

# File: scripts.py (appleparan/template.py, MIT)
import subprocess


def test():
    """
    Run the test suite. Equivalent to:
    `poetry run python -m pytest --import-mode=importlib`
    """
    subprocess.Popen(["python", "-m", "pytest", "--import-mode=importlib"]).wait()


def lint():
    """
    Run the linter. Equivalent to:
    `poetry run pylint . --rcfile=.pylintrc`
    """
    subprocess.Popen(["pylint", ".", "--rcfile=.pylintrc"]).wait()

# File: mavsdk/telemetry/__init__.py (appetito/MAVSDK-Python, BSD-3-Clause)
from .telemetry import *

# File: migrations/versions/0e8bc3970388_add_historical_logs_table.py (PrateekPisat/covid_tracker, MIT)
"""Add historical_logs table.

Revision ID: 0e8bc3970388
Revises:
Create Date: 2020-09-26 12:30:42.001562

"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "0e8bc3970388"
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"historical_logs",
sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
sa.Column("date", sa.Date(), nullable=False),
sa.Column("state", sa.Unicode(length=255), nullable=False),
sa.Column("data_quality", sa.Unicode(length=255), nullable=True),
sa.Column("deaths", sa.Integer(), nullable=False, default=0),
sa.Column("deaths_confirmed", sa.Integer(), nullable=False, default=0),
sa.Column("deaths_increase", sa.Integer(), nullable=False, default=0),
sa.Column("deaths_probable", sa.Integer(), nullable=False, default=0),
sa.Column("hospitalized", sa.Integer(), nullable=False, default=0),
sa.Column("hospitalized_cumulative", sa.Integer(), nullable=False, default=0),
sa.Column("hospitalized_currently", sa.Integer(), nullable=False, default=0),
sa.Column("hospitalized_increase", sa.Integer(), nullable=False, default=0),
sa.Column("in_icu_cumulative", sa.Integer(), nullable=False, default=0),
sa.Column("in_icu_currently", sa.Integer(), nullable=False, default=0),
sa.Column("negative", sa.Integer(), nullable=False, default=0),
sa.Column("negative_increase", sa.Integer(), nullable=False, default=0),
        sa.Column("negative_tests_antibody", sa.Integer(), nullable=False, default=0),
        sa.Column("negative_tests_people_antibody", sa.Integer(), nullable=False, default=0),
        sa.Column("negative_tests_viral", sa.Integer(), nullable=False, default=0),
sa.Column("on_ventilator_cumulative", sa.Integer(), nullable=False, default=0),
sa.Column("on_ventilator_currently", sa.Integer(), nullable=False, default=0),
sa.Column("pending", sa.Integer(), nullable=False, default=0),
sa.Column("positive", sa.Integer(), nullable=False, default=0),
sa.Column("positive_cases_viral", sa.Integer(), nullable=False, default=0),
sa.Column("positive_increase", sa.Integer(), nullable=False, default=0),
sa.Column("positive_score", sa.Integer(), nullable=False, default=0),
sa.Column("positive_tests_antibody", sa.Integer(), nullable=False, default=0),
sa.Column("positive_tests_antigen", sa.Integer(), nullable=False, default=0),
        sa.Column("positive_tests_people_antibody", sa.Integer(), nullable=False, default=0),
        sa.Column("positive_tests_people_antigen", sa.Integer(), nullable=False, default=0),
sa.Column("positive_tests_viral", sa.Integer(), nullable=False, default=0),
sa.Column("recovered", sa.Integer(), nullable=False, default=0),
        sa.Column("total_test_encounters_viral", sa.Integer(), nullable=False, default=0),
        sa.Column("total_test_encounters_viral_increase", sa.Integer(), nullable=False, default=0),
sa.Column("total_test_results", sa.Integer(), nullable=False, default=0),
        sa.Column("total_test_results_increase", sa.Integer(), nullable=False, default=0),
sa.Column("total_tests_antibody", sa.Integer(), nullable=False, default=0),
sa.Column("total_tests_antigen", sa.Integer(), nullable=False, default=0),
        sa.Column("total_tests_people_antibody", sa.Integer(), nullable=False, default=0),
        sa.Column("total_tests_people_antigen", sa.Integer(), nullable=False, default=0),
sa.Column("total_tests_people_viral", sa.Integer(), nullable=False, default=0),
        sa.Column("total_tests_people_viral_increase", sa.Integer(), nullable=False, default=0),
sa.Column("total_tests_viral", sa.Integer(), nullable=False, default=0),
        sa.Column("total_tests_viral_increase", sa.Integer(), nullable=False, default=0),
sa.Column(
"created_at",
sa.DateTime(timezone=True),
server_default=sa.text("CURRENT_TIMESTAMP"),
nullable=False,
),
sa.PrimaryKeyConstraint("id"),
)


def downgrade():
op.drop_table("historical_logs")
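The migration above only issues DDL; a minimal sketch of how the resulting table behaves can be built with plain SQLAlchemy Core against an in-memory SQLite database. The trimmed column list and sample row here are illustrative assumptions, not part of the migration itself. Note in particular that `default=0` is a client-side default: it is filled in by SQLAlchemy on insert, not by the database.

```python
import datetime

import sqlalchemy as sa

metadata = sa.MetaData()

# Trimmed-down recreation of the table for illustration (only a few columns).
historical_logs = sa.Table(
    "historical_logs",
    metadata,
    sa.Column("id", sa.Integer(), primary_key=True, autoincrement=True),
    sa.Column("date", sa.Date(), nullable=False),
    sa.Column("state", sa.Unicode(255), nullable=False),
    sa.Column("deaths", sa.Integer(), nullable=False, default=0),
)

# In-memory SQLite stands in for the real database.
engine = sa.create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    # `deaths` is omitted on purpose: the client-side `default=0` fills it in.
    conn.execute(
        historical_logs.insert(),
        [{"date": datetime.date(2020, 9, 26), "state": "MA"}],
    )
    rows = conn.execute(historical_logs.select()).fetchall()
```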
c986e83ad547b005d2149dc109aa8eb57c733771 | 389 | py | Python | soluciones/hola_mundo/hola_mundo.py | estefaniamiguel/blockly | 89fc731ee0521175837f9aa37380b8548355e078 | ["Apache-2.0"]
from consola import leer_caracter
from consola import leer_entrada_completa
from consola import obtener_caracter
from consola import avanzar_caracter
from consola import hay_mas_caracteres
from consola import imprimir
from consola import cambiar_color_texto


def Copiar_la_entrada():
    """Copy the whole input to the output, one character at a time."""
    while hay_mas_caracteres():
        imprimir(leer_caracter())
        avanzar_caracter()


Copiar_la_entrada()
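The `consola` module ships with the course material and is not shown here; a minimal stand-in, assuming the helpers expose a cursor over an input string, makes the copy loop above runnable on its own. The `_entrada`/`_salida` names are illustrative, not part of the real module.

```python
# Stand-in for the `consola` helpers: a cursor over a fixed input string.
_entrada = "hola"
_salida = []
_pos = 0


def hay_mas_caracteres():
    # True while the cursor has not consumed the whole input.
    return _pos < len(_entrada)


def leer_caracter():
    # Read the character under the cursor without advancing.
    return _entrada[_pos]


def avanzar_caracter():
    # Move the cursor one character forward.
    global _pos
    _pos += 1


def imprimir(texto):
    # Collect output instead of printing, so it can be inspected.
    _salida.append(texto)


# Same loop as Copiar_la_entrada() above.
while hay_mas_caracteres():
    imprimir(leer_caracter())
    avanzar_caracter()

copiado = "".join(_salida)
```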
a316497581c122834f344b78aa0f3cdbe1eb941d | 4,995 | py | Python | tests/test_paths.py | rochamatcomp/python-rocha | bbf8b559f8052f8c081be29ef21d3e1f697477c3 | ["MIT"]
# -*- coding: utf-8 -*-
"""
:mod:`paths` -- Test paths manipulation
=======================================
.. module:: paths
:platform: Unix, Windows
:synopsis: Paths manipulation and files discovery.
.. moduleauthor:: Andre Rocha <rocha.matcomp@gmail.com>
"""
import os
from src.rocha import paths
def test_full_path():
"""
Test find full path of the files.
"""
path = 'data'
pattern = '*region*.shp'
files = ['data/regions.shp',
'data/output/region_mid-west.shp',
'data/output/region_northeast.shp',
'data/output/region_southeast.shp',
'data/output/region_south.shp']
current = os.getcwd()
filenames = [os.sep.join([current, file]) for file in files]
    results = paths.find(path, pattern, relative=False)
names = [name for name in results]
assert names == filenames
def test_relative_path():
"""
Test find relative path of the files.
"""
path = 'data'
pattern = '*region*.shp'
filenames = ['data/regions.shp',
'data/output/region_mid-west.shp',
'data/output/region_northeast.shp',
'data/output/region_southeast.shp',
'data/output/region_south.shp']
    results = paths.find(path, pattern, relative=True)
names = [name for name in results]
assert names == filenames
def test_output_same_structure():
"""
Test output filename with the same directory structure as input filename.
"""
input_path = 'data/inputs'
output_path = 'data/outputs'
input_file = 'data/inputs/south/raster.tif'
output_file = 'data/outputs/south/raster.tif'
    result = paths.output(input_file, input_path, output_path, change=False)
assert result == output_file
def test_output_change_root():
"""
Test output filename with output path as root path.
Output filename with a simple directory structure.
"""
input_path = 'data/inputs'
output_path = 'data/outputs'
input_file = 'data/inputs/south/raster.tif'
output_file = 'data/outputs/raster.tif'
    result = paths.output(input_file, input_path, output_path, change=True)
assert result == output_file


def test_output_change_extra_end():
    """
    Test output filename with the extra name at the end, and a changed structure.
    """
    input_path = 'data/inputs'
    output_path = 'data/outputs'
    input_file = 'data/inputs/south/raster.tif'
    output_file = 'data/outputs/raster_south.tif'
    extra = '_south'
    result = paths.output(input_file, input_path, output_path, change=True, extra=extra, begin=False)
    assert result == output_file


def test_output_change_extra_begin():
    """
    Test output filename with the extra name at the beginning, and a changed structure.
    """
    input_path = 'data/inputs'
    output_path = 'data/outputs'
    input_file = 'data/inputs/south/raster.tif'
    output_file = 'data/outputs/south_raster.tif'
    extra = 'south_'
    result = paths.output(input_file, input_path, output_path, change=True, extra=extra, begin=True)
    assert result == output_file


def test_output_same_extra_end():
    """
    Test output filename with the extra name at the end, and the same structure.
    """
    input_path = 'data/inputs'
    output_path = 'data/outputs'
    input_file = 'data/inputs/south/raster.tif'
    output_file = 'data/outputs/south/raster_south.tif'
    extra = '_south'
    result = paths.output(input_file, input_path, output_path, change=False, extra=extra, begin=False)
    assert result == output_file


def test_output_same_extra_begin():
    """
    Test output filename with the extra name at the beginning, and the same structure.
    """
    input_path = 'data/inputs'
    output_path = 'data/outputs'
    input_file = 'data/inputs/south/raster.tif'
    output_file = 'data/outputs/south/south_raster.tif'
    extra = 'south_'
    result = paths.output(input_file, input_path, output_path, change=False, extra=extra, begin=True)
    assert result == output_file


def test_output_change_extension():
    """
    Test output filename with another extension, and a changed structure.
    """
    input_path = 'data/inputs'
    output_path = 'data/outputs'
    input_file = 'data/inputs/south/raster.tif'
    output_file = 'data/outputs/raster.asc'
    extension = '.asc'
    result = paths.output(input_file, input_path, output_path, change=True, output_extension=extension)
    assert result == output_file


def test_output_same_extension():
    """
    Test output filename with another extension, and the same structure.
    """
    input_path = 'data/inputs'
    output_path = 'data/outputs'
    input_file = 'data/inputs/south/raster.tif'
    output_file = 'data/outputs/south/raster.asc'
    extension = '.asc'
    result = paths.output(input_file, input_path, output_path, change=False, output_extension=extension)
    assert result == output_file
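Taken together, the tests above fully determine the expected behaviour of `paths.output`. A minimal sketch consistent with all of the cases, assuming this signature (the real helper in `src.rocha.paths` may be implemented differently):

```python
import os


def output(input_file, input_path, output_path, change=False,
           extra="", begin=False, output_extension=None):
    # Path of the input file relative to the input root, e.g. "south/raster.tif".
    relative = os.path.relpath(input_file, input_path)
    directory, filename = os.path.split(relative)
    name, extension = os.path.splitext(filename)
    if output_extension is not None:
        extension = output_extension
    # Attach the extra token at the beginning or at the end of the base name.
    name = extra + name if begin else name + extra
    if change:
        # Changed structure: drop the intermediate directories.
        return os.path.join(output_path, name + extension)
    # Same structure: mirror the input directory layout under the output root.
    return os.path.join(output_path, directory, name + extension)
```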
6eca9d8c3eeb77137b16c46e0e253e19cc06d04b | 25,957 | py | Python | pykeops/tutorials/a_LazyTensors/plot_test_tensordot.py | mdiazmel/keops | 52a3d2ee80a720639f52898305f85399b7b45a63 | ["MIT"]
"""
=========
TensorDot
=========
This is a test script to showcase the tensordot syntax.
"""
import numpy as np
import torch
from pykeops.torch import LazyTensor
M, N = 2, 10
#######################################################################################################################
# Matrix multiplication as a special case of Tensordot
# ----------------------------------------------------
#
a = torch.randn(4 * 7, requires_grad=True, dtype=torch.float64)
b = torch.randn(7, requires_grad=True, dtype=torch.float64)
c = a.reshape(4, 7) @ b
#######################################################################################################################
# A single matrix multiplication
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# In this case there is no need to use KeOps: this is a sanity check.
A = LazyTensor(a[None, None, :])
B = LazyTensor(b[None, None, :])
C = A.keops_tensordot(B, (4, 7), (7,), (1,), (0,)).sum_reduction(dim=1)
# print(C, c)
print(
"Compare the two MatVecMul implementations. All good?",
torch.allclose(c.flatten(), C.flatten()),
)
xi = torch.randn(4, dtype=torch.float64)
dC = torch.autograd.grad(C, a, xi.reshape(1, 4), retain_graph=True)[0].view(-1)
dc = torch.autograd.grad(c, a, xi, retain_graph=True)[0].view(-1)
# print(dC, dc)
print(
"Compare the two MatVecMul gradient wrt a implementations. All good?",
torch.allclose(dc.flatten(), dC.flatten()),
)
dC = torch.autograd.grad(C, b, xi.reshape(1, 4))[0].view(-1)
dc = torch.autograd.grad(c, b, xi)[0].view(-1)
# print(dC, dc)
print(
"Compare the two MatVecMul gradient wrt b implementations. All good?",
torch.allclose(dc.flatten(), dC.flatten()),
)
print("-------------------------------")
#######################################################################################################################
# Matrix multiplication with a sum reduction
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# That is where KeOps come into play.
a = torch.randn(M, 4 * 7, requires_grad=True, dtype=torch.float64)
b = torch.randn(N, 7, requires_grad=True, dtype=torch.float64)
c = torch.tensordot(a.reshape(M, 4, 7), b.reshape(N, 7), dims=([2], [1])).sum(2)
A = LazyTensor(a[:, None, :])
B = LazyTensor(b[None, :, :])
C = A.keops_tensordot(B, (4, 7), (7,), (1,), (0,)).sum_reduction(dim=1)
# print(C, c)
print(
"Compare the two MatVecMul with sum implementations. All good ?",
torch.allclose(c.flatten(), C.flatten()),
)
xi = torch.randn(M, 4, dtype=torch.float64)
dCa = torch.autograd.grad(C, a, xi, retain_graph=True)[0].view(-1)
dca = torch.autograd.grad(c, a, xi, retain_graph=True)[0].view(-1)
# print(dC, dc)
print(
"Compare the two MatVecMul with sum gradient wrt a implementations. All good ?",
torch.allclose(dca.flatten(), dCa.flatten()),
)
dCb = torch.autograd.grad(C, b, xi)[0].view(-1)
dcb = torch.autograd.grad(c, b, xi)[0].view(-1)
# print(dC, dc)
print(
"Compare the two MatVecMul with sum gradient wrt b implementations. All good ?",
torch.allclose(dcb.flatten(), dCb.flatten()),
)
print("-------------------------------")
#######################################################################################################################
# Matrix-Matrix multiplication as a special case of Tensordot
# -----------------------------------------------------------
#
a = torch.randn(4 * 7, requires_grad=True, dtype=torch.float64)
b = torch.randn(7 * 2, requires_grad=True, dtype=torch.float64)
c = a.reshape(4, 7) @ b.reshape(7, 2)
A = LazyTensor(a[None, None, :])
B = LazyTensor(b[None, None, :])
C = A.keops_tensordot(B, (4, 7), (7, 2), (1,), (0,)).sum_reduction(dim=1)
# print(C, c)
print(
"Compare the two MatMul implementations. All good?",
torch.allclose(c.flatten(), C.flatten()),
)
xi = torch.randn(4 * 2, dtype=torch.float64)
dC = torch.autograd.grad(C, a, xi.reshape(1, 4 * 2), retain_graph=True)[0].view(-1)
dc = torch.autograd.grad(c, a, xi.reshape(4, 2), retain_graph=True)[0].view(-1)
# print(dC, dc)
print(
"Compare the two MatMul gradient wrt a implementations. All good?",
torch.allclose(dc.flatten(), dC.flatten()),
)
dCb = torch.autograd.grad(C, b, xi.reshape(1, 4 * 2))[0].view(-1)
dcb = torch.autograd.grad(c, b, xi.reshape(4, 2))[0].view(-1)
# print(dCb, dcb)
print(
"Compare the two MatMul gradient wrt b implementations. All good?",
torch.allclose(dcb.flatten(), dCb.flatten()),
)
print("-------------------------------")
#######################################################################################################################
# Tensordot in keops (generic case)
# ---------------------------------
#
# A first example
# ^^^^^^^^^^^^^^^
#
# First, let us start with a standard torch implementation. We contract two tensors along a common axis of size 7.
# Then, a reduction is performed along the dimension of size N.
x = torch.randn(M, 4, 7, 3, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 7, 2, requires_grad=True, dtype=torch.float64)
f_torch = torch.tensordot(x, y, dims=([2], [1])) # now is shape (M, 4, 3, N, 2)
sum_f_torch2 = f_torch.sum(3) # ... yielding a result of dimension (M,4*3*2)
# In KeOps, we leave out the first reduction axes (of size M and N respectively). We then need to tell the compiler
# not only the contraction axes (1 and 0 respectively, both of dimension 7) but also the shapes (4,7,3) and (7,2),
# keeping in mind that the actual first axes of x and y (the reduction axes) are ignored, so the result has
# shape (M,4*3*2) or (N,4*3*2) depending on the chosen reduction axis.
f_keops = LazyTensor(x.reshape(M, 1, 4 * 7 * 3)).keops_tensordot(
LazyTensor(y.reshape(1, N, 7 * 2)), (4, 7, 3), (7, 2), (1,), (0,)
)
sum_f_keops = f_keops.sum_reduction(dim=1)  # reduction is performed along the second axis
# print(sum_f_keops.flatten()) # ... yielding a result of dimension (M,4*3*2)
print(
"Compare the two tensordot implementation. All good ?",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten(), rtol=1e-4),
)
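The shape bookkeeping above can also be checked with plain NumPy, independently of KeOps: contracting axis 2 of `x` with axis 1 of `y` and then summing over the N axis is exactly the einsum `'maqb,nqc->mabc'`. The `x_np`/`y_np` arrays below are fresh illustrative data, not the torch tensors of the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 2, 10
x_np = rng.standard_normal((M, 4, 7, 3))
y_np = rng.standard_normal((N, 7, 2))

# tensordot gives shape (M, 4, 3, N, 2); summing axis 3 removes the N axis.
via_tensordot = np.tensordot(x_np, y_np, axes=([2], [1])).sum(axis=3)
# Equivalent single einsum: sum over the contracted axis q and the batch axis n.
via_einsum = np.einsum("maqb,nqc->mabc", x_np, y_np)
```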
########################################################################################################################
# As before, let us check the gradients
e = torch.randn(M, 4 * 3 * 2, dtype=torch.float64)
Ee = e.reshape(M, 4, 3, 2)
grad_keops = (
torch.autograd.grad(sum_f_keops, x, e, retain_graph=True)[0].squeeze().numpy()
)
grad_torch = (
torch.autograd.grad(sum_f_torch2, x, Ee, retain_graph=True)[0].squeeze().numpy()
)
# print(grad_keops[0,:,:,:])
# print(grad_torch[0,:,:,:])
print(
"Check gradient wrt x. All good ?",
np.allclose(grad_keops.flatten(), grad_torch.flatten()),
)
# tmp = torch.tensordot(Ee,y, dims=([3], [2])).sum(3).detach().numpy()
# print("grad_keops and tmp are the same? ", np.allclose(tmp.flatten(), grad_keops.flatten()))
# print("grad_torch and tmp are the same? ", np.allclose(grad_torch , np.moveaxis(tmp, [0,1,2,3], [0,1,3,2])))
grad_keops = torch.autograd.grad(sum_f_keops, y, e)[0].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, y, Ee)[0].numpy()
# print(grad_keops[:1])
# print(grad_torch[:1])
print(
"Check gradient wrt y. All good ?",
np.allclose(grad_keops.flatten(), grad_torch.flatten()),
)
print("-------------------------------")
#######################################################################################################################
# A Second example
# ^^^^^^^^^^^^^^^^
#
# Torch version
x = torch.randn(M, 4, 3, 7, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 7, 2, requires_grad=True, dtype=torch.float64)
f_torch = torch.tensordot(x, y, dims=([3], [1])) # now is shape (M, 4, 3, N, 2)
sum_f_torch2 = f_torch.sum(3) # ... yielding a result of dimension (M,4,3,2)
#######################################################################################################################
# And corresponding KeOps version
f_keops = LazyTensor(x.reshape(M, 1, 4 * 3 * 7)).keops_tensordot(
LazyTensor(y.reshape(1, N, 7 * 2)), (4, 3, 7), (7, 2), (2,), (0,)
)
sum_f_keops = f_keops.sum_reduction(dim=1)  # reduction is performed along the second axis
# print(sum_f_keops.shape) # ... yielding a result of dimension (M,4*3*2)
print(
"Compare the two tensordot implementation. All good ?",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten(), rtol=1e-4),
)
# checking gradients
e = torch.randn(M, 4 * 3 * 2, dtype=torch.float64)
Ee = e.reshape(M, 4, 3, 2)
grad_keops = (
torch.autograd.grad(sum_f_keops, x, e, retain_graph=True)[0].squeeze().numpy()
)
grad_torch = (
torch.autograd.grad(sum_f_torch2, x, Ee, retain_graph=True)[0].squeeze().numpy()
)
# print(grad_keops[0,:,:,:])
# print(grad_torch[0,:,:,:])
print(
"Compare the two gradient x tensordot implementation. All good ?",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
grad_keops = (
torch.autograd.grad(sum_f_keops, y, e, retain_graph=True)[0].squeeze().numpy()
)
grad_torch = (
torch.autograd.grad(sum_f_torch2, y, Ee, retain_graph=True)[0].squeeze().numpy()
)
# print(grad_keops[0,:,:,:])
# print(grad_torch[0,:,:,:])
print(
"Compare the two gradient y tensordot implementation. All good ?",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
print("------------------------------------------")
#######################################################################################################################
# A Third example
# ^^^^^^^^^^^^^^^^
x = torch.randn(M, 4, 3, 2, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 4, 2, requires_grad=True, dtype=torch.float64)
xshape, yshape = x.shape[1:], y.shape[1:]
f_keops = LazyTensor(x.reshape(M, 1, int(np.array((xshape)).prod()))).keops_tensordot(
LazyTensor(y.reshape(1, N, int(np.array(yshape).prod()))),
xshape,
yshape,
(0, 2),
(0, 1),
)
sum_f_keops = f_keops.sum_reduction(dim=1)
sum_f_torch2 = torch.tensordot(x, y, dims=([1, 3], [1, 2])).sum(2)
# sum_f_torch2 = torch.tensordot(x, y, dims=([3], [1])).sum(3)
print(
"Compare the two tensordot implementation. All good ????",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten()),
)
# checking gradients
e = torch.randn_like(sum_f_torch2)
grad_keops = torch.autograd.grad(sum_f_keops, x, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, x, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient x tensordot implementation. is All good ????",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
grad_keops = torch.autograd.grad(sum_f_keops, y, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, y, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient y tensordot implementation. is All good ????",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
print("------------------------------------------")
#######################################################################################################################
# A Fourth example
# ^^^^^^^^^^^^^^^^
x = torch.randn(M, 2, 3, 4, 2, 2, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 2, 4, 5, 3, 2, requires_grad=True, dtype=torch.float64)
xshape, yshape = x.shape[1:], y.shape[1:]
f_keops = LazyTensor(x.reshape(M, 1, int(np.array((xshape)).prod()))).keops_tensordot(
LazyTensor(y.reshape(1, N, int(np.array(yshape).prod()))),
xshape,
yshape,
(0, 1, 4),
(0, 3, 4),
)
sum_f_keops = f_keops.sum_reduction(dim=1)
sum_f_torch2 = torch.tensordot(x, y, dims=([1, 2, 5], [1, 4, 5])).sum(3)
# sum_f_torch2 = torch.tensordot(x, y, dims=([3], [1])).sum(3)
print(
"Compare the two tensordot implementation. All good ????!",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten()),
)
# checking gradients
e = torch.randn_like(sum_f_torch2)
grad_keops = torch.autograd.grad(sum_f_keops, x, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, x, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient x tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
grad_keops = torch.autograd.grad(sum_f_keops, y, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, y, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient y tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
print("------------------------------------------")
#######################################################################################################################
# A Fifth example
# ^^^^^^^^^^^^^^^^
x = torch.randn(M, 2, 3, 4, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 2, 4, 5, requires_grad=True, dtype=torch.float64)
xshape, yshape = x.shape[1:], y.shape[1:]
f_keops = LazyTensor(x.reshape(M, 1, int(np.array((xshape)).prod()))).keops_tensordot(
LazyTensor(y.reshape(1, N, int(np.array(yshape).prod()))),
xshape,
yshape,
(2, 0),
(1, 0),
)
sum_f_keops = f_keops.sum_reduction(dim=1)
sum_f_torch2 = torch.tensordot(x, y, dims=([3, 1], [2, 1])).sum(2)
# sum_f_torch2 = torch.tensordot(x, y, dims=([3], [1])).sum(3)
print(
"Compare the two tensordot implementation. All good ????!",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten()),
)
# checking gradients
e = torch.randn_like(sum_f_torch2)
grad_keops = torch.autograd.grad(sum_f_keops, x, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, x, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient x tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
grad_keops = torch.autograd.grad(sum_f_keops, y, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, y, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient y tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
print("------------------------------------------")
#######################################################################################################################
# A Sixth example
# ^^^^^^^^^^^^^^^^
x = torch.randn(M, 2, 3, 4, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 4, 2, requires_grad=True, dtype=torch.float64)
xshape, yshape = x.shape[1:], y.shape[1:]
f_keops = LazyTensor(x.reshape(M, 1, int(np.array((xshape)).prod()))).keops_tensordot(
LazyTensor(y.reshape(1, N, int(np.array(yshape).prod()))),
xshape,
yshape,
(2, 0),
(0, 1),
)
sum_f_keops = f_keops.sum_reduction(dim=1)
sum_f_torch2 = torch.tensordot(x, y, dims=([3, 1], [1, 2])).sum(2)
# sum_f_torch2 = torch.tensordot(x, y, dims=([3], [1])).sum(3)
print(
"Compare the two tensordot implementation. All good ????",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten()),
)
# checking gradients
e = torch.randn_like(sum_f_torch2)
grad_keops = torch.autograd.grad(sum_f_keops, x, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, x, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient x tensordot implementation. All good ????",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
grad_keops = torch.autograd.grad(sum_f_keops, y, e.reshape(M, -1))[0].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, y, e)[0].numpy()
# print(grad_keops)
# print(grad_torch)
print(
"Compare the two gradient y tensordot implementation. All good ????",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
print("------------------------------------------")
#######################################################################################################################
# A Seventh example
# ^^^^^^^^^^^^^^^^^
x = torch.randn(M, 2, 3, 2, 2, 4, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 2, 4, 2, 3, 2, 3, requires_grad=True, dtype=torch.float64)
xshape, yshape = x.shape[1:], y.shape[1:]
f_keops = LazyTensor(x.reshape(M, 1, int(np.array((xshape)).prod()))).keops_tensordot(
LazyTensor(y.reshape(1, N, int(np.array(yshape).prod()))),
xshape,
yshape,
(4, 0, 2),
(1, 4, 2),
)
sum_f_keops = f_keops.sum_reduction(dim=1)
sum_f_torch2 = torch.tensordot(x, y, dims=([5, 1, 3], [2, 5, 3])).sum(3)
# sum_f_torch2 = torch.tensordot(x, y, dims=([3], [1])).sum(3)
print(
"Compare the two tensordot implementation. All good ????!",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten()),
)
# checking gradients
e = torch.randn_like(sum_f_torch2)
grad_keops = torch.autograd.grad(sum_f_keops, x, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, x, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient x tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
grad_keops = torch.autograd.grad(sum_f_keops, y, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, y, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient y tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
print("------------------------------------------")
#######################################################################################################################
#
#
#


def my_tensordort_perm(a, b, dims=None, perm=None):
    # Contract a and b, sum out the batch axis, then permute the result.
    return torch.tensordot(a, b, dims=dims).sum(3).permute(perm)


def invert_permutation_numpy(permutation):
    return np.arange(len(permutation))[np.argsort(permutation)]
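The helper above inverts a permutation via `argsort`. A quick standalone sanity check (duplicating the one-liner so the snippet runs on its own): composing a permutation with its inverse must give back the identity ordering.

```python
import numpy as np


def invert_permutation_numpy(permutation):
    # argsort of a permutation is its inverse permutation.
    return np.arange(len(permutation))[np.argsort(permutation)]


perm_check = [2, 0, 3, 1]
inv_check = invert_permutation_numpy(perm_check)
# perm_check[inv_check[j]] == j for every j, i.e. the composition is the identity.
recomposed = [perm_check[i] for i in inv_check]
```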
x = torch.randn(M, 2, 3, 4, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 2, 4, requires_grad=True, dtype=torch.float64)
dimfa, dimfb = x.shape[1:], y.shape[1:]
contfa, contfb = [3], [2]
keepfa, keepfb = [item - 1 for item in [1, 2, 3] if item not in contfa], [
item for item in [1, 2] if item not in contfb
]
# contfa, contfb = [2, 3], [1, 2]
n = len(dimfa) + len(dimfb) - 2 * len(contfa)
# perm = [int(i) for i in torch.randperm(n)]
perm = [2, 0, 1]
# perm = [2, 1, 3, 0]
# perm = [1, 0]
perm_torch = (0,) + tuple([(i + 1) for i in invert_permutation_numpy(perm)])
sum_f_torch2 = my_tensordort_perm(
x, y, dims=(contfa, contfb), perm=perm_torch
) # 1, 2,3,5,4 -> 1, 5,3,4,2
f_keops = LazyTensor(x.reshape(M, 1, int(np.array((dimfa)).prod()))).keops_tensordot(
LazyTensor(y.reshape(1, N, int(np.array(dimfb).prod()))),
dimfa,
dimfb,
tuple(np.array(contfa) - 1),
tuple(np.array(contfb) - 1),
tuple(perm),
)
sum_f_keops = f_keops.sum_reduction(dim=1)
print(
"Compare the two tensordot implementation. All good ????!!",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten()),
)
# checking gradients
e = torch.randn_like(sum_f_torch2)
grad_keops = torch.autograd.grad(sum_f_keops, x, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
# grad_torch2 = my_tensordort_perm(e, y, dims=([4,2], keepfb), perm=[0,1,2])
grad_torch = torch.autograd.grad(sum_f_torch2, x, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient x tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
# print("Compare the two gradient x tensordot implementation. All good ????!",
# np.allclose(grad_torch2.detach().numpy(), grad_torch, rtol=1e-4))
grad_keops = torch.autograd.grad(sum_f_keops, y, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, y, e, retain_graph=True)[0].numpy()
# grad_torch2 = my_tensordort_perm(e, x, dims=([1,3], [1,2]), perm=[0,1,2,3]).permute(perm)
# grad_torch2 = my_tensordort_perm(e, x, dims=([1,3], [1,2]), perm=perm)
print(
"Compare the two gradient y tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
# print("Compare the two gradient y tensordot implementation. All good ????!",
# np.allclose(grad_torch2.detach().numpy(), grad_torch, rtol=1e-4))
print("------------------------------------------")
x = torch.randn(M, 2, 3, 4, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 2, 4, 5, requires_grad=True, dtype=torch.float64)
dimfa, dimfb = x.shape[1:], y.shape[1:]
contfa, contfb = [3], [2]
keepfa, keepfb = [item - 1 for item in [1, 2, 3] if item not in contfa], [
item for item in [1, 2, 3] if item not in contfb
]
# contfa, contfb = [2, 3], [1, 2]
n = len(dimfa) + len(dimfb) - 2 * len(contfa)
# perm = [int(i) for i in torch.randperm(n)]
perm = [0, 2, 3, 1]
# perm = [2, 1, 3, 0]
# perm = [1, 0]
perm_torch = (0,) + tuple([(i + 1) for i in invert_permutation_numpy(perm)])
sum_f_torch2 = my_tensordort_perm(
x, y, dims=(contfa, contfb), perm=perm_torch
) # 1, 2,3,5,4 -> 1, 5,3,4,2
f_keops = LazyTensor(x.reshape(M, 1, int(np.array((dimfa)).prod()))).keops_tensordot(
LazyTensor(y.reshape(1, N, int(np.array(dimfb).prod()))),
dimfa,
dimfb,
tuple(np.array(contfa) - 1),
tuple(np.array(contfb) - 1),
tuple(perm),
)
sum_f_keops = f_keops.sum_reduction(dim=1)
print(
"Compare the two tensordot implementation. All good ????!!",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten()),
)
# checking gradients
e = torch.randn_like(sum_f_torch2)
grad_keops = torch.autograd.grad(sum_f_keops, x, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
# grad_torch2 = my_tensordort_perm(e, y, dims=([4,2], keepfb), perm=[0,1,2,3])
grad_torch = torch.autograd.grad(sum_f_torch2, x, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient x tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
# print("Compare the two gradient x tensordot implementation. All good ????!",
# np.allclose(grad_torch2.detach().numpy(), grad_torch, rtol=1e-4))
grad_keops = torch.autograd.grad(sum_f_keops, y, e.reshape(M, -1), retain_graph=True)[0]
grad_torch = torch.autograd.grad(sum_f_torch2, y, e, retain_graph=True)[0]
# grad_torch2 = my_tensordort_perm(e, x, dims=([1,3], [1,2]), perm=perm)
print(
"Compare the two gradient y tensordot implementation. All good ????!",
np.allclose(grad_keops.numpy().flatten(), grad_torch.numpy().flatten(), rtol=1e-4),
)
# print("Compare the two gradient y tensordot implementation. All good ????!",
# np.allclose(grad_torch2.detach().numpy(), grad_torch, rtol=1e-4))
print("------------------------------------------")
x = torch.randn(M, 2, 3, 2, 2, 4, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 2, 4, 2, 3, 2, 3, requires_grad=True, dtype=torch.float64)
dimfa, dimfb = x.shape[1:], y.shape[1:]
contfa, contfb = [5, 1, 3], [2, 5, 3]
n = len(dimfa) + len(dimfb) - 2 * len(contfa)
# perm_id = [int(i) for i in range(n+1)]
# perm = [int(i) for i in torch.randperm(n)]
# perm = [0,2,1,4,3]
perm = [4, 3, 2, 0, 1]
perm_torch = (0,) + tuple([(i + 1) for i in invert_permutation_numpy(perm)])
sum_f_torch2 = my_tensordort_perm(x, y, dims=(contfa, contfb), perm=perm_torch)
# print(sum_f_torch2.shape)
f_keops = LazyTensor(x.reshape(M, 1, int(np.array(dimfa).prod()))).keops_tensordot(
LazyTensor(y.reshape(1, N, int(np.array(dimfb).prod()))),
dimfa,
dimfb,
tuple(np.array(contfa) - 1),
tuple(np.array(contfb) - 1),
tuple(perm),
)
sum_f_keops = f_keops.sum_reduction(dim=1)
print(
"Compare the two tensordot implementation. All good ????!!",
torch.allclose(sum_f_keops.flatten(), sum_f_torch2.flatten()),
)
# checking gradients
e = torch.randn_like(sum_f_torch2)
grad_keops = torch.autograd.grad(sum_f_keops, x, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, x, e, retain_graph=True)[0].numpy()
# grad_torch2 = my_tensordort_perm(e, y, dims=([1,2,3], [1,4,6]), perm=[0,1,2,3,4,5]).permute(3)
print(
"Compare the two gradient x tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
grad_keops = torch.autograd.grad(sum_f_keops, y, e.reshape(M, -1), retain_graph=True)[
0
].numpy()
grad_torch = torch.autograd.grad(sum_f_torch2, y, e, retain_graph=True)[0].numpy()
print(
"Compare the two gradient y tensordot implementation. All good ????!",
np.allclose(grad_keops.flatten(), grad_torch.flatten(), rtol=1e-4),
)
print("------------------------------------------")
#######################################################################################################################
# Using gradcheck
# ---------------
#
# def my_tensordot(x,y):
# f_keops = LazyTensor(x.reshape(M, 1, 4 * 3 * 7)).keops_tensordot(LazyTensor(y.reshape(1, N, 7 * 2)), (4, 3, 7),
# (7, 2), (2,), (0,))
# return f_keops.sum_reduction(dim=1)
# print(torch.autograd.gradcheck(my_tensordot, [x,y]))
def my_tensordot2(x, y):
xshape, yshape = x.shape[1:], y.shape[1:]
f_keops = LazyTensor(
x.reshape(M, 1, int(np.array(xshape).prod()))
).keops_tensordot(
LazyTensor(y.reshape(1, N, int(np.array(yshape).prod()))),
xshape,
yshape,
(2, 0), # (2,0,1),
(0, 1), # (0,3,2)
)
return f_keops.sum_reduction(dim=1)
x = torch.randn(M, 2, 2, 2, requires_grad=True, dtype=torch.float64)
y = torch.randn(N, 2, 2, requires_grad=True, dtype=torch.float64)
print(torch.autograd.gradcheck(my_tensordot2, [x, y], atol=1e-5, rtol=1e-5))
print(torch.autograd.gradgradcheck(my_tensordot2, [x, y], atol=1e-5, rtol=1e-5))
| 35.124493 | 120 | 0.598875 | 3,852 | 25,957 | 3.911994 | 0.049325 | 0.025483 | 0.035835 | 0.044595 | 0.899728 | 0.885327 | 0.882739 | 0.871989 | 0.855465 | 0.845909 | 0 | 0.034774 | 0.133644 | 25,957 | 738 | 121 | 35.172087 | 0.635317 | 0.187156 | 0 | 0.61978 | 0 | 0 | 0.151126 | 0.026179 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006593 | false | 0 | 0.006593 | 0.004396 | 0.01978 | 0.118681 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
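The tensordot tests above repeatedly contract a pair of axes and then permute the result. As a standalone sanity check (NumPy only, independent of KeOps; shapes, axes, and the `einsum` subscripts below are illustrative, not taken from the tutorial), the same "tensordot then permute" pattern can be verified against an explicit `einsum`:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4))
y = rng.standard_normal((4, 5))

# Contract the last axis of x with the first axis of y, then permute the result.
out = np.tensordot(x, y, axes=([2], [0]))  # shape (2, 3, 5)
out_perm = out.transpose(2, 0, 1)          # shape (5, 2, 3)

# The same contraction and permutation written explicitly with einsum.
ref = np.einsum("abk,kc->cab", x, y)
assert np.allclose(out_perm, ref)
```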
6eed12624af40b5e2d79babdefe80a58a9355244 | 163 | py | Python | pkgs/filetransferutils-pkg/src/genie/libs/filetransferutils/plugins/ios/scp/fileutils.py | jbronikowski/genielibs | 200a34e5fe4838a27b5a80d5973651b2e34ccafb | [
"Apache-2.0"
] | 94 | 2018-04-30T20:29:15.000Z | 2022-03-29T13:40:31.000Z | pkgs/filetransferutils-pkg/src/genie/libs/filetransferutils/plugins/ios/scp/fileutils.py | jbronikowski/genielibs | 200a34e5fe4838a27b5a80d5973651b2e34ccafb | [
"Apache-2.0"
] | 67 | 2018-12-06T21:08:09.000Z | 2022-03-29T18:00:46.000Z | pkgs/filetransferutils-pkg/src/genie/libs/filetransferutils/plugins/ios/scp/fileutils.py | jbronikowski/genielibs | 200a34e5fe4838a27b5a80d5973651b2e34ccafb | [
"Apache-2.0"
] | 49 | 2018-06-29T18:59:03.000Z | 2022-03-10T02:07:59.000Z | """ File utils base class for SCP on IOS devices. """
from ...iosxe.scp.fileutils import FileUtils as FileUtilsXEBase
class FileUtils(FileUtilsXEBase):
pass
| 23.285714 | 63 | 0.748466 | 21 | 163 | 5.809524 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159509 | 163 | 6 | 64 | 27.166667 | 0.890511 | 0.276074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
42d9a84b42f411d0933ed7c3c098bc584c67c1c1 | 89 | py | Python | funniest/__init__.py | mrityunjaykumar911/funniest | 901d4a2ad6a52e89d8538b665d5b67346c5d506a | [
"MIT"
] | null | null | null | funniest/__init__.py | mrityunjaykumar911/funniest | 901d4a2ad6a52e89d8538b665d5b67346c5d506a | [
"MIT"
] | null | null | null | funniest/__init__.py | mrityunjaykumar911/funniest | 901d4a2ad6a52e89d8538b665d5b67346c5d506a | [
"MIT"
] | null | null | null | def joke():
return "ek haathi tha, ek chiti thi" + "ek din chiti ne kaha" + "haahaa"
| 29.666667 | 76 | 0.629213 | 15 | 89 | 3.733333 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235955 | 89 | 2 | 77 | 44.5 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0.595506 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6e057e8802ba51ab74ed63ac2849563b7e6eddf0 | 13,514 | py | Python | checkerpy/tests/validators/all/test_limitedtuple.py | yedivanseven/CheckerPy | 04612086d25fecdd0b20ca0a050db8620c437b0e | [
"MIT"
] | 1 | 2018-01-12T19:20:51.000Z | 2018-01-12T19:20:51.000Z | checkerpy/tests/validators/all/test_limitedtuple.py | yedivanseven/CheckerPy | 04612086d25fecdd0b20ca0a050db8620c437b0e | [
"MIT"
] | null | null | null | checkerpy/tests/validators/all/test_limitedtuple.py | yedivanseven/CheckerPy | 04612086d25fecdd0b20ca0a050db8620c437b0e | [
"MIT"
] | null | null | null | import logging
import unittest as ut
from ....functional import CompositionOf
from ....validators.all import LimitedTuple
from ....exceptions import LenError, WrongTypeError, LimitError, CallableError
from ....types.all import _ALL_COMPARABLES, TypedDict
class TestLimitedTupleLimits(ut.TestCase):
def test_error_on_limits_not_tuple(self):
err_msg = 'Type of limits must be tuple, not int like 1!'
with self.assertRaises(TypeError) as err:
_ = LimitedTuple((1, 3, 4), limits=1)
self.assertEqual(str(err.exception), err_msg)
def test_error_on_limits_named_type(self):
err_msg = 'Type of limits must be tuple, not frozenset!'
with self.assertRaises(TypeError) as err:
_ = LimitedTuple((1, 3, 4), limits=frozenset({2}))
self.assertEqual(str(err.exception), err_msg)
def test_error_on_limit_not_tuple(self):
err_msg = 'Type of limits on argument 1 must be tuple, not int like 2!'
with self.assertRaises(TypeError) as err:
_ = LimitedTuple((1, 2), limits=((0, 3), 2))
self.assertEqual(str(err.exception), err_msg)
def test_error_on_limit_named_type(self):
err_msg = 'Type of limits on argument 1 must be tuple, not frozenset!'
with self.assertRaises(TypeError) as err:
_ = LimitedTuple((1, 2), limits=((0, 3), frozenset({2})))
self.assertEqual(str(err.exception), err_msg)
def test_error_on_limit_length_not_2(self):
err_msg = ('There must be exactly 2 limits (lo'
' and hi) for argument 1, not 3!')
with self.assertRaises(ValueError) as err:
_ = LimitedTuple((1, 2), limits=((0, 3), (1, 2, 3)))
self.assertEqual(str(err.exception), err_msg)
def test_works_with_all_limits_set(self):
inp = (1, 2)
out = LimitedTuple(inp, limits=((0, 2), (1, 3)))
self.assertTupleEqual(out, inp)
def test_works_with_one_ellipsis_in_limit(self):
inp = (1, 2)
out = LimitedTuple(inp, limits=((0, 2), (..., 3)))
self.assertTupleEqual(out, inp)
def test_works_with_two_ellipsis_in_limit(self):
inp = (1, 2)
out = LimitedTuple(inp, limits=((0, 2), (..., ...)))
self.assertTupleEqual(out, inp)
def test_works_with_ellipsis_instead_of_limit(self):
inp = (1, 2, 3)
out = LimitedTuple(inp, limits=((0, 2), ..., (2, 4)))
self.assertTupleEqual(out, inp)
class TestLimitedTupleValue(ut.TestCase):
def test_works_with_empty_tuple(self):
out = LimitedTuple(())
self.assertTupleEqual(out, ())
def test_error_on_unnamed_tuple_wrong_length(self):
inp = (1, 'foo', True)
log_msg = ["ERROR:root:Length of tuple (1, "
"'foo', True) must be 2, not 3!"]
err_msg = "Length of tuple (1, 'foo', True) must be 2, not 3!"
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(LenError) as err:
_ = LimitedTuple(inp, limits=((0, 2), (1, 3)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_error_on_named_tuple_wrong_length(self):
inp = (1, 'foo', True)
log_msg = ['ERROR:root:Length of tuple test must be 2, not 3!']
err_msg = 'Length of tuple test must be 2, not 3!'
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(LenError) as err:
_ = LimitedTuple(inp, 'test', limits=((0, 2), (1, 3)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_error_on_element_of_unnamed_tuple_out_of_bounds(self):
inp = (1, 2)
log_msg = ['ERROR:root:Value 2 of element 1 in tuple (1, '
'2) lies outside the allowed interval [3, 5]!']
err_msg = ('Value 2 of element 1 in tuple (1, 2) lies'
' outside the allowed interval [3, 5]!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(LimitError) as err:
_ = LimitedTuple(inp, limits=((0, 2), (3, 5)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_error_on_element_of_named_tuple_out_of_bounds(self):
inp = (1, 2)
log_msg = ['ERROR:root:Value 2 of element 1 in tuple test'
' lies outside the allowed interval [3, 5]!']
err_msg = ('Value 2 of element 1 in tuple test lies'
' outside the allowed interval [3, 5]!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(LimitError) as err:
_ = LimitedTuple(inp, 'test', limits=((0, 2), (3, 5)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_error_on_element_of_unnamed_tuple_uncomparable(self):
inp = (1, 2)
log_msg = ['ERROR:root:Cannot compare type int of element 1 '
'in tuple (1, 2) with limits of types str and str!']
err_msg = ('Cannot compare type int of element 1 in tuple'
' (1, 2) with limits of types str and str!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(WrongTypeError) as err:
_ = LimitedTuple(inp, limits=((0, 2), ('a', 'b')))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_error_on_element_of_named_tuple_uncomparable(self):
inp = (1, 2)
log_msg = ['ERROR:root:Cannot compare type int of element 1 '
'in tuple test with limits of types str and str!']
err_msg = ('Cannot compare type int of element 1 in tuple'
' test with limits of types str and str!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(WrongTypeError) as err:
_ = LimitedTuple(inp, 'test', limits=((0, 2), ('a', 'b')))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_out_of_bounds_error_on_element_of_unnamed_tuple_lower_bound(self):
inp = (1, 2)
log_msg = ['ERROR:root:Value 2 of element 1 in tuple (1, '
'2) lies outside the allowed interval [3, inf)!']
err_msg = ('Value 2 of element 1 in tuple (1, 2) lies'
' outside the allowed interval [3, inf)!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(LimitError) as err:
_ = LimitedTuple(inp, limits=((0, 2), (3, ...)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_out_of_bounds_error_on_element_of_named_tuple_lower_bound(self):
inp = (1, 2)
log_msg = ['ERROR:root:Value 2 of element 1 in tuple test'
' lies outside the allowed interval [3, inf)!']
err_msg = ('Value 2 of element 1 in tuple test lies'
' outside the allowed interval [3, inf)!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(LimitError) as err:
_ = LimitedTuple(inp, 'test', limits=((0, 2), (3, ...)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_out_of_bounds_error_on_element_of_unnamed_tuple_upper_bound(self):
inp = (1, 2)
log_msg = ['ERROR:root:Value 2 of element 1 in tuple (1, '
'2) lies outside the allowed interval (-inf, 1]!']
err_msg = ('Value 2 of element 1 in tuple (1, 2) lies'
' outside the allowed interval (-inf, 1]!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(LimitError) as err:
_ = LimitedTuple(inp, limits=((0, 2), (..., 1)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_out_of_bounds_error_on_element_of_named_tuple_upper_bound(self):
inp = (1, 2)
log_msg = ['ERROR:root:Value 2 of element 1 in tuple test'
' lies outside the allowed interval (-inf, 1]!']
err_msg = ('Value 2 of element 1 in tuple test lies'
' outside the allowed interval (-inf, 1]!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(LimitError) as err:
_ = LimitedTuple(inp, 'test', limits=((0, 2), (..., 1)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_error_on_element_of_unnamed_tuple_uncomparable_lower_bound(self):
inp = (1, 2)
log_msg = ['ERROR:root:Cannot compare type int of element 1 in '
'tuple (1, 2) with limits of types str and ellipsis!']
err_msg = ('Cannot compare type int of element 1 in tuple '
'(1, 2) with limits of types str and ellipsis!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(WrongTypeError) as err:
_ = LimitedTuple(inp, limits=((0, 2), ('a', ...)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_error_on_element_of_named_tuple_uncomparable_lower_bound(self):
inp = (1, 2)
log_msg = ['ERROR:root:Cannot compare type int of element 1 in'
' tuple test with limits of types str and ellipsis!']
err_msg = ('Cannot compare type int of element 1 in tuple'
' test with limits of types str and ellipsis!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(WrongTypeError) as err:
_ = LimitedTuple(inp, 'test', limits=((0, 2), ('a', ...)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_error_on_element_of_unnamed_tuple_uncomparable_upper_bound(self):
inp = (1, 2)
log_msg = ['ERROR:root:Cannot compare type int of element 1 in '
'tuple (1, 2) with limits of types ellipsis and str!']
err_msg = ('Cannot compare type int of element 1 in tuple '
'(1, 2) with limits of types ellipsis and str!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(WrongTypeError) as err:
_ = LimitedTuple(inp, limits=((0, 2), (..., 'b')))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
def test_error_on_element_of_named_tuple_uncomparable_upper_bound(self):
inp = (1, 2)
log_msg = ['ERROR:root:Cannot compare type int of element 1 '
'in tuple test with limits of types ellipsis and str!']
err_msg = ('Cannot compare type int of element 1 in tuple'
' test with limits of types ellipsis and str!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(WrongTypeError) as err:
_ = LimitedTuple(inp, 'test', limits=((0, 2), (..., 'b')))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
class TestLimitedTupleMethods(ut.TestCase):
def test_has_attribute_o(self):
self.assertTrue(hasattr(LimitedTuple, 'o'))
def test_attribute_o_is_callable(self):
self.assertTrue(callable(LimitedTuple.o))
def test_o_returns_composition(self):
def f(x):
return x
composition = LimitedTuple.o(f)
self.assertIsInstance(composition, CompositionOf)
def test_o_raises_error_on_argument_not_callable(self):
err_msg = ('foo must be a callable that accepts (i) a value,'
' (ii) an optional name for that value, and (iii)'
' any number of keyword arguments!')
with self.assertRaises(CallableError) as err:
_ = LimitedTuple.o('foo')
self.assertEqual(str(err.exception), err_msg)
def test_has_all_comparable_type_checker_attributes(self):
all_comparables = (c for c in _ALL_COMPARABLES if c is not TypedDict)
for comparable in all_comparables:
self.assertTrue(hasattr(LimitedTuple, comparable.__name__))
def test_all_comparable_type_checkers_are_type_CompositionOf(self):
all_comparables = (c for c in _ALL_COMPARABLES if c is not TypedDict)
for comparable in all_comparables:
type_checker = getattr(LimitedTuple, comparable.__name__)
self.assertIsInstance(type_checker, CompositionOf)
def test_limits_are_passed_through_type_checkers(self):
inp = (1, 2)
log_msg = ['ERROR:root:Value 2 of element 1 in tuple test'
' lies outside the allowed interval [3, 5]!']
err_msg = ('Value 2 of element 1 in tuple test lies'
' outside the allowed interval [3, 5]!')
with self.assertLogs(level=logging.ERROR) as log:
with self.assertRaises(LimitError) as err:
_ = LimitedTuple.AllInt(inp, 'test', limits=((0, 2), (3, 5)))
self.assertEqual(str(err.exception), err_msg)
self.assertEqual(log.output, log_msg)
if __name__ == '__main__':
ut.main()
| 47.75265 | 79 | 0.62032 | 1,815 | 13,514 | 4.434711 | 0.076584 | 0.031308 | 0.032302 | 0.038763 | 0.83762 | 0.83265 | 0.82768 | 0.82768 | 0.814635 | 0.791527 | 0 | 0.022319 | 0.267278 | 13,514 | 282 | 80 | 47.921986 | 0.790547 | 0 | 0 | 0.539095 | 0 | 0 | 0.218662 | 0 | 0 | 0 | 0 | 0 | 0.337449 | 1 | 0.131687 | false | 0.004115 | 0.024691 | 0.004115 | 0.17284 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
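The test classes above exercise checkerpy's `CompositionOf` through `LimitedTuple.o`. As a rough illustration of the idea only (a toy stand-in, not checkerpy's actual implementation; `NonEmpty` and `AllInts` are hypothetical validators), validators that return their input compose like ordinary functions:

```python
class CompositionOf:
    """Toy sketch: apply validator g, then validator f, to the same value."""
    def __init__(self, f, g):
        self.f, self.g = f, g

    def __call__(self, value, name=None, **kwargs):
        return self.f(self.g(value, name, **kwargs), name, **kwargs)

def NonEmpty(value, name=None, **kwargs):
    if len(value) == 0:
        raise ValueError(f'{name or "value"} must not be empty!')
    return value

def AllInts(value, name=None, **kwargs):
    for v in value:
        if not isinstance(v, int):
            raise TypeError(f'{name or "value"} must contain only ints!')
    return value

check = CompositionOf(NonEmpty, AllInts)
assert check((1, 2), 'test') == (1, 2)
```

Because each validator passes the value through unchanged on success, compositions of any depth still return the original value.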
285ecbf656ab5e53938ed61003509ca9870276d2 | 41 | py | Python | mobo/test_functions/__init__.py | MasashiSode/MOBO | 8083d482ce5bff3ede82daca63de818e4be825b0 | [
"MIT"
] | 30 | 2019-03-21T01:53:29.000Z | 2022-02-26T12:37:26.000Z | mobo/test_functions/__init__.py | MasashiSode/MOGP | 8083d482ce5bff3ede82daca63de818e4be825b0 | [
"MIT"
] | null | null | null | mobo/test_functions/__init__.py | MasashiSode/MOGP | 8083d482ce5bff3ede82daca63de818e4be825b0 | [
"MIT"
] | 7 | 2020-06-03T01:05:04.000Z | 2022-03-08T11:48:14.000Z | from .multi_objective_functions import *
| 20.5 | 40 | 0.853659 | 5 | 41 | 6.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
955ce9004ac4a8077aede1f65263dc9a242625a7 | 22 | py | Python | gridopt/scripts/__init__.py | romcon/GRIDOPT | e44dca8a1c5579c1df8adad06f1749d59be18392 | [
"BSD-2-Clause"
] | 13 | 2015-11-13T22:34:36.000Z | 2021-08-19T11:47:45.000Z | gridopt/scripts/__init__.py | ttinoco/GRIDOPT | b5001212e5aee4628243c75cf78fe5872f5a90c3 | [
"BSD-2-Clause"
] | 35 | 2016-11-28T21:12:57.000Z | 2019-08-08T20:39:31.000Z | gridopt/scripts/__init__.py | romcon/GRIDOPT | e44dca8a1c5579c1df8adad06f1749d59be18392 | [
"BSD-2-Clause"
] | 11 | 2016-03-11T10:32:34.000Z | 2019-09-26T16:29:13.000Z | from . import gridopt
| 11 | 21 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
956e934f2b80ba3f55527cc9adb5f9dc71879ace | 42 | py | Python | src/utils.py | otaviocap/PyDashboard | b2ffbe47e93d90700a266c8a7e229eb81845c65f | [
"MIT"
] | null | null | null | src/utils.py | otaviocap/PyDashboard | b2ffbe47e93d90700a266c8a7e229eb81845c65f | [
"MIT"
] | null | null | null | src/utils.py | otaviocap/PyDashboard | b2ffbe47e93d90700a266c8a7e229eb81845c65f | [
"MIT"
] | null | null | null | def mod(n):
return n * -1 if n < 0 else n | 21 | 30 | 0.571429 | 11 | 42 | 2.181818 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.285714 | 42 | 2 | 30 | 21 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
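For real numbers, the `mod` helper above coincides with the builtin `abs`; a quick sketch of that check (the function is copied verbatim from the file):

```python
def mod(n):
    # Flip the sign of negatives, leave non-negatives unchanged.
    return n * -1 if n < 0 else n

for n in (-5, -0.5, 0, 3):
    assert mod(n) == abs(n)
```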
95832298703d64afcf4c76fa55a17da9f6c6cb4c | 5,489 | py | Python | pscan/tests/test_scan.py | rtapadar/pscan | 8176749438fb51d6570e5f51350058f14bbd06f8 | [
"Apache-2.0"
] | null | null | null | pscan/tests/test_scan.py | rtapadar/pscan | 8176749438fb51d6570e5f51350058f14bbd06f8 | [
"Apache-2.0"
] | null | null | null | pscan/tests/test_scan.py | rtapadar/pscan | 8176749438fb51d6570e5f51350058f14bbd06f8 | [
"Apache-2.0"
] | null | null | null | # Copyright 2016 Rudrajit Tapadar
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from base import TestPscan
import errno
import mock
from StringIO import StringIO
import sys
class TestScan(TestPscan):
@mock.patch('socket.socket.connect')
def test_tcp_port_open(self, mock_connect):
hosts = "127.0.0.1"
ports = "22"
mock_connect.return_value = None
scanner = self.get_scanner_obj(hosts, ports)
scanner.tcp()
h = self.get_host_obj(hosts, [22])
h[0].ports[0].status = "Open"
self.assertPortsEqual(scanner.hosts[0].ports,
h[0].ports)
@mock.patch('socket.socket.connect')
def test_tcp_port_closed(self, mock_connect):
hosts = "127.0.0.1"
ports = "22"
mock_connect.side_effect = IOError()
scanner = self.get_scanner_obj(hosts, ports)
scanner.tcp()
h = self.get_host_obj(hosts, [22])
h[0].ports[0].status = "Closed"
self.assertPortsEqual(scanner.hosts[0].ports,
h[0].ports)
@mock.patch('socket.socket.connect')
def test_tcp_port_range(self, mock_connect):
hosts = "127.0.0.1"
ports = "21-22"
mock_connect.return_value = None
mock_connect.side_effect = [IOError(), None]
scanner = self.get_scanner_obj(hosts, ports)
scanner.tcp()
h = self.get_host_obj(hosts, [21, 22])
h[0].ports[0].status = "Closed"
h[0].ports[1].status = "Open"
self.assertPortsEqual(scanner.hosts[0].ports,
h[0].ports)
@mock.patch('socket.socket.connect')
def test_show_open_port(self, mock_connect):
hosts = "127.0.0.1"
ports = "5672"
mock_connect.return_value = None
scanner = self.get_scanner_obj(hosts, ports)
scanner.tcp()
s = sys.stdout
o = StringIO()
sys.stdout = o
output = (
"Showing results for target: 127.0.0.1\n"
"+------+----------+-------+---------+\n"
"| Port | Protocol | State | Service |\n"
"+------+----------+-------+---------+\n"
"| 5672 | TCP | Open | amqp |\n"
"+------+----------+-------+---------+"
)
scanner.show()
self.assertEqual(o.getvalue().strip(), output)
sys.stdout = s
@mock.patch('socket.socket.connect')
def test_show_closed_port(self, mock_connect):
hosts = "127.0.0.1"
ports = "5673"
mock_connect.side_effect = IOError()
scanner = self.get_scanner_obj(hosts, ports)
scanner.tcp()
s = sys.stdout
o = StringIO()
sys.stdout = o
output = (
"Showing results for target: 127.0.0.1\n"
"+------+----------+--------+---------+\n"
"| Port | Protocol | State | Service |\n"
"+------+----------+--------+---------+\n"
"| 5673 | TCP | Closed | unknown |\n"
"+------+----------+--------+---------+"
)
scanner.show()
self.assertEqual(o.getvalue().strip(), output)
sys.stdout = s
@mock.patch('socket.socket.connect')
def test_show_closed_port_range(self, mock_connect):
hosts = "127.0.0.1"
ports = "5673-5674"
mock_connect.side_effect = IOError(errno.ECONNREFUSED)
scanner = self.get_scanner_obj(hosts, ports)
scanner.tcp()
s = sys.stdout
o = StringIO()
sys.stdout = o
output = (
"Showing results for target: 127.0.0.1\n"
"All 2 scanned ports are closed on the target."
)
scanner.show()
self.assertEqual(o.getvalue().strip(), output)
sys.stdout = s
@mock.patch('socket.socket.connect')
def test_show_partially_open_port_range(self, mock_connect):
hosts = "127.0.0.1"
ports = "5671-5672"
mock_connect.return_value = None
mock_connect.side_effect = [IOError(), None]
scanner = self.get_scanner_obj(hosts, ports)
scanner.tcp()
s = sys.stdout
o = StringIO()
sys.stdout = o
output = (
"Showing results for target: 127.0.0.1\n"
"+------+----------+-------+---------+\n"
"| Port | Protocol | State | Service |\n"
"+------+----------+-------+---------+\n"
"| 5672 | TCP | Open | amqp |\n"
"+------+----------+-------+---------+"
)
scanner.show()
self.assertEqual(o.getvalue().strip(), output)
@mock.patch('socket.socket.connect')
def test_udp_port_open(self, mock_connect):
hosts = "127.0.0.1"
ports = "53"
mock_connect.return_value = None
scanner = self.get_scanner_obj(hosts, ports)
scanner.udp()
#h = self.get_host_obj(hosts, [22])
#h[0].ports[0].status = "Open"
#self.assertPortsEqual(scanner.hosts[0].ports,
# h[0].ports)
| 34.961783 | 76 | 0.539078 | 652 | 5,489 | 4.417178 | 0.20092 | 0.06875 | 0.020833 | 0.025 | 0.767708 | 0.757986 | 0.754514 | 0.732639 | 0.73125 | 0.718056 | 0 | 0.03944 | 0.284023 | 5,489 | 156 | 77 | 35.185897 | 0.693384 | 0.129714 | 0 | 0.75 | 0 | 0 | 0.227454 | 0.108472 | 0 | 0 | 0 | 0 | 0.054688 | 1 | 0.0625 | false | 0 | 0.039063 | 0 | 0.109375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
254d3ff66b0940b1af4476e187f61930fd9d8ab2 | 178 | py | Python | code/normalize.py | lionelmessi6410/Image-Filtering-and-Hybrid-Images | b4c87343cc832f0d1d184cf517e02a7d7aa2ddac | [
"MIT"
] | 2 | 2021-04-09T09:00:57.000Z | 2021-05-04T13:09:08.000Z | code/normalize.py | lionelmessi6410/Image-Filtering-and-Hybrid-Images | b4c87343cc832f0d1d184cf517e02a7d7aa2ddac | [
"MIT"
] | null | null | null | code/normalize.py | lionelmessi6410/Image-Filtering-and-Hybrid-Images | b4c87343cc832f0d1d184cf517e02a7d7aa2ddac | [
"MIT"
] | 2 | 2020-12-19T13:36:29.000Z | 2021-10-01T04:26:35.000Z | def normalize(img):
''' Function to normalize an input array to 0-1 '''
img_min = img.min()
img_max = img.max()
return (img - img_min) / (img_max - img_min)
| 29.666667 | 56 | 0.595506 | 28 | 178 | 3.607143 | 0.464286 | 0.237624 | 0.267327 | 0.237624 | 0.29703 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015267 | 0.264045 | 178 | 5 | 57 | 35.6 | 0.755725 | 0.241573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
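The min-max normalization above maps any non-constant array onto [0, 1]; a small usage sketch (the function is copied verbatim; note that a constant input gives `img_max == img_min` and divides by zero, which the original does not guard against):

```python
import numpy as np

def normalize(img):
    ''' Function to normalize an input array to 0-1 '''
    img_min = img.min()
    img_max = img.max()
    return (img - img_min) / (img_max - img_min)

img = np.array([[0.0, 50.0], [100.0, 200.0]])
out = normalize(img)
# The minimum maps to 0.0 and the maximum maps to 1.0.
```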
6c1bfa4bd6074b31ba89b777c6b3d51717ccad65 | 24 | py | Python | platform-modules/batch-models/src/test/resources/python/main/vidyavaani/content/get_concepts.py | harshavardhanc/sunbird-analytics | eebb74a9b97bf9d19b61cca7ca8befc40207c3c7 | [
"MIT"
] | 2 | 2019-02-14T11:15:41.000Z | 2019-10-04T10:31:00.000Z | platform-modules/batch-models/src/test/resources/python/main/vidyavaani/content/get_concepts.py | harshavardhanc/sunbird-analytics | eebb74a9b97bf9d19b61cca7ca8befc40207c3c7 | [
"MIT"
] | 73 | 2018-07-27T09:51:45.000Z | 2021-12-14T21:14:15.000Z | platform-modules/batch-models/src/test/resources/python/main/vidyavaani/content/get_concepts.py | harshavardhanc/sunbird-analytics | eebb74a9b97bf9d19b61cca7ca8befc40207c3c7 | [
"MIT"
] | 41 | 2018-11-08T09:28:47.000Z | 2021-05-31T16:59:41.000Z | print "getting concepts" | 24 | 24 | 0.833333 | 3 | 24 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 24 | 1 | 24 | 24 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0.64 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
6c3387e5730f41f6273f718dc685aadb8c602116 | 182 | py | Python | application/routes/views.py | dejbug/full-stack-python-test-1 | c5256e24d33ef5f8e1cc9dc9330507c15421f944 | [
"MIT"
] | null | null | null | application/routes/views.py | dejbug/full-stack-python-test-1 | c5256e24d33ef5f8e1cc9dc9330507c15421f944 | [
"MIT"
] | null | null | null | application/routes/views.py | dejbug/full-stack-python-test-1 | c5256e24d33ef5f8e1cc9dc9330507c15421f944 | [
"MIT"
] | null | null | null | from flask import session, redirect, url_for
from application import app, utils
@app.route("/")
def index():
utils.init_session()
return redirect(url_for("leads.index"))
| 20.222222 | 45 | 0.714286 | 25 | 182 | 5.08 | 0.64 | 0.173228 | 0.220472 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159341 | 182 | 8 | 46 | 22.75 | 0.830065 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6c62dcc6b1f8382bac8e15921120dc0f14f45160 | 109 | py | Python | ml_gym/spaces/__init__.py | olliethomas/ml-fairness-gym | adaa878596d3ce7dc0ee821f53f99cdf0cd2ef5f | [
"Apache-2.0"
] | null | null | null | ml_gym/spaces/__init__.py | olliethomas/ml-fairness-gym | adaa878596d3ce7dc0ee821f53f99cdf0cd2ef5f | [
"Apache-2.0"
] | null | null | null | ml_gym/spaces/__init__.py | olliethomas/ml-fairness-gym | adaa878596d3ce7dc0ee821f53f99cdf0cd2ef5f | [
"Apache-2.0"
] | null | null | null | from .batch import *
from .graph import *
from .multi_discrete_with_none import *
from .multinomial import *
| 21.8 | 39 | 0.779817 | 15 | 109 | 5.466667 | 0.6 | 0.365854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146789 | 109 | 4 | 40 | 27.25 | 0.88172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
667b14d84e0ca367f8035c5f767a5a29f4b2ae4a | 5,501 | py | Python | tools/prune/prune_vggnet.py | ZJCV/SSL | c52bf6fd6ca1f960107dbe0412f16be4df285e1e | [
"Apache-2.0"
] | 8 | 2021-07-18T04:32:05.000Z | 2022-03-29T16:26:55.000Z | tools/prune/prune_vggnet.py | ZJCV/SSL | c52bf6fd6ca1f960107dbe0412f16be4df285e1e | [
"Apache-2.0"
] | 1 | 2021-12-27T07:12:11.000Z | 2021-12-27T14:19:02.000Z | tools/prune/prune_vggnet.py | ZJCV/SSL | c52bf6fd6ca1f960107dbe0412f16be4df285e1e | [
"Apache-2.0"
] | 4 | 2021-09-29T02:30:26.000Z | 2022-03-18T11:29:11.000Z | # -*- coding: utf-8 -*-
"""
@date: 2021/6/11 上午10:11
@file: prune_vggnet.py
@author: zj
@description:
"""
import torch
import warnings
warnings.filterwarnings("ignore")
from sslearning.config.key_word import KEY_CHANNEL, KEY_FILTER, KEY_FILTER_AND_CHANNEL
from operation import load_model, prune_model, save_model
def _prune(cfg_file, pruned_type, out_dir,
           prune_way='mean_abs', pruned_rate=0.2, minimum_channels=8, divisor=8):
    # Shared body for the three public entry points below: load the model from
    # the config, prune it, report statistics, and save the pruned checkpoint.
    model, arch_name = load_model(cfg_file, data_shape=(1, 3, 224, 224), device=torch.device('cpu'))
    pruned_model, true_pruned_ratio, threshold = prune_model(pruned_type,
                                                             prune_way,
                                                             arch_name,
                                                             model,
                                                             ratio=pruned_rate,
                                                             minimum_channels=minimum_channels,
                                                             divisor=divisor,
                                                             )
    print(pruned_model)
    print('pruned ratio:', true_pruned_ratio)
    print('threshold:', threshold)

    model_name = f'outputs/{out_dir}/{arch_name}_pruned_{pruned_type}_{prune_way}_{pruned_rate}.pkl'
    save_model(pruned_model, model_name)


def prune_filter(cfg_file, **kwargs):
    _prune(cfg_file, KEY_FILTER, 'vggnet_pruned_filter', **kwargs)


def prune_channel(cfg_file, **kwargs):
    _prune(cfg_file, KEY_CHANNEL, 'vggnet_pruned_channel', **kwargs)


def prune_filter_and_channel(cfg_file, **kwargs):
    _prune(cfg_file, KEY_FILTER_AND_CHANNEL, 'vggnet_pruned_filter_and_channel', **kwargs)


if __name__ == '__main__':
    # cfg_file = 'configs/vggnet/vgg16_bn_cifar100_224_e100_sgd_mslr_ssl_filter_wise_1e_5.yaml'
    # prune_filter(cfg_file, prune_way='group_lasso', pruned_rate=0.2, minimum_channels=8, divisor=8)
    # prune_filter(cfg_file, prune_way='mean_abs', pruned_rate=0.2, minimum_channels=8, divisor=8)
    # prune_filter(cfg_file, prune_way='mean', pruned_rate=0.2, minimum_channels=8, divisor=8)
    # prune_filter(cfg_file, prune_way='sum_abs', pruned_rate=0.2, minimum_channels=8, divisor=8)
    # prune_filter(cfg_file, prune_way='sum', pruned_rate=0.2, minimum_channels=8, divisor=8)
    # prune_filter(cfg_file, prune_way='mean_abs', pruned_rate=0.4, minimum_channels=8, divisor=8)
    # prune_filter(cfg_file, prune_way='mean_abs', pruned_rate=0.6, minimum_channels=8, divisor=8)

    # cfg_file = 'configs/vggnet/vgg16_bn_cifar100_224_e100_sgd_mslr_ssl_channel_wise_1e_5.yaml'
    # prune_channel(cfg_file, prune_way='group_lasso', pruned_rate=0.2, minimum_channels=8, divisor=8)
    # prune_channel(cfg_file, prune_way='mean_abs', pruned_rate=0.2, minimum_channels=8, divisor=8)
    # prune_channel(cfg_file, prune_way='mean_abs', pruned_rate=0.4, minimum_channels=8, divisor=8)
    # prune_channel(cfg_file, prune_way='mean_abs', pruned_rate=0.6, minimum_channels=8, divisor=8)

    cfg_file = 'configs/vggnet/vgg16_bn_cifar100_224_e100_sgd_mslr_ssl_filter_and_channel_wise_1e_5.yaml'
    prune_filter_and_channel(cfg_file, prune_way='group_lasso', pruned_rate=0.2, minimum_channels=8, divisor=8)
    # prune_filter_and_channel(cfg_file, prune_way='mean_abs', pruned_rate=0.2, minimum_channels=8, divisor=8)
    # prune_filter_and_channel(cfg_file, prune_way='mean_abs', pruned_rate=0.4, minimum_channels=8, divisor=8)
    # prune_filter_and_channel(cfg_file, prune_way='mean_abs', pruned_rate=0.6, minimum_channels=8, divisor=8)
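The actual `prune_model` lives in the repo's `operation` module and is not shown here. As a rough, standalone illustration of what the `prune_way`, `pruned_rate`, `minimum_channels`, and `divisor` knobs typically control in magnitude-based filter pruning, here is a hypothetical sketch; the scoring functions and rounding policy are assumptions for illustration, not this repo's implementation:

```python
import numpy as np

def filter_scores(weight, prune_way='mean_abs'):
    """Score each conv filter by weight magnitude; weight has shape (out_c, in_c, kh, kw)."""
    flat = weight.reshape(weight.shape[0], -1)
    ways = {
        'mean_abs': lambda w: np.abs(w).mean(axis=1),
        'mean':     lambda w: w.mean(axis=1),
        'sum_abs':  lambda w: np.abs(w).sum(axis=1),
        'sum':      lambda w: w.sum(axis=1),
    }
    return ways[prune_way](flat)

def select_pruned_filters(weight, pruned_rate=0.2, minimum_channels=8, divisor=8):
    """Return the sorted indices of filters to keep after pruning `pruned_rate` of them."""
    scores = filter_scores(weight)
    out_c = weight.shape[0]
    # Keep at least `minimum_channels` filters, rounded down to a multiple of `divisor`
    # (hardware-friendly channel counts) -- an assumed policy, not the repo's exact rule.
    keep = max(minimum_channels, int(round(out_c * (1 - pruned_rate))))
    keep = max(minimum_channels, (keep // divisor) * divisor)
    keep_idx = np.argsort(scores)[::-1][:keep]  # highest-scoring filters survive
    return np.sort(keep_idx)
```

For a 32-filter layer, `pruned_rate=0.25` keeps 24 filters, while an aggressive `pruned_rate=0.9` is clamped to the `minimum_channels=8` floor.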
| 55.01 | 123 | 0.593165 | 663 | 5,501 | 4.506787 | 0.119155 | 0.056225 | 0.072289 | 0.090361 | 0.914324 | 0.914324 | 0.89324 | 0.89324 | 0.888554 | 0.888554 | 0 | 0.038971 | 0.314306 | 5,501 | 99 | 124 | 55.565657 | 0.753181 | 0.293765 | 0 | 0.642857 | 0 | 0 | 0.129759 | 0.096866 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053571 | false | 0 | 0.071429 | 0 | 0.125 | 0.160714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
66c318857ce48048437dede7072901ad6471b8fc | 29 | py | Python | vocoders/__init__.py | ishine/DiffSinger-1 | 9a5baf553f635f088ca110aa22e87b67ece6e947 | [
"MIT"
] | 288 | 2021-12-19T04:02:00.000Z | 2022-03-27T16:13:44.000Z | vocoders/__init__.py | ishine/DiffSinger-1 | 9a5baf553f635f088ca110aa22e87b67ece6e947 | [
"MIT"
] | 44 | 2021-12-27T07:11:20.000Z | 2022-03-29T08:39:41.000Z | vocoders/__init__.py | ishine/DiffSinger-1 | 9a5baf553f635f088ca110aa22e87b67ece6e947 | [
"MIT"
] | 37 | 2021-12-19T16:51:34.000Z | 2022-03-23T09:22:31.000Z | from vocoders import hifigan
| 14.5 | 28 | 0.862069 | 4 | 29 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
66ee005d2a20a2cbbbaf89c5c39e0af7da889f6a | 60 | py | Python | test/data/multiline_bad_function_one_param.py | javulticat/flake8-commas | 32647a9806e4e1d1ad4e6682e8d3229a519de43a | [
"MIT"
] | 97 | 2018-01-13T03:13:57.000Z | 2022-03-28T06:18:33.000Z | test/data/multiline_bad_function_one_param.py | javulticat/flake8-commas | 32647a9806e4e1d1ad4e6682e8d3229a519de43a | [
"MIT"
] | 28 | 2017-01-13T17:04:56.000Z | 2018-01-03T06:15:56.000Z | test/data/multiline_bad_function_one_param.py | javulticat/flake8-commas | 32647a9806e4e1d1ad4e6682e8d3229a519de43a | [
"MIT"
] | 9 | 2018-03-15T15:01:28.000Z | 2022-03-01T17:50:09.000Z | def func(
a = 3
):
pass
func(
a = 3
)
| 6 | 13 | 0.316667 | 8 | 60 | 2.375 | 0.625 | 0.526316 | 0.631579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.566667 | 60 | 9 | 14 | 6.666667 | 0.653846 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.142857 | 0 | 0 | 0.142857 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
dd1022b3b099705196351a0f62ba30f4c2f0c20b | 370 | py | Python | modules/tests/test_rideservices.py | joemulray/ostorybook-jarvis | c08502b843691207f68270e621032fe4299350d7 | [
"MIT"
] | 3 | 2018-01-25T18:00:50.000Z | 2019-02-20T03:22:07.000Z | modules/tests/test_rideservices.py | joemulray/ostorybook-jarvis | c08502b843691207f68270e621032fe4299350d7 | [
"MIT"
] | null | null | null | modules/tests/test_rideservices.py | joemulray/ostorybook-jarvis | c08502b843691207f68270e621032fe4299350d7 | [
"MIT"
] | null | null | null | import modules
def test_rideservices():
assert ('rideservices' == modules.process_query('I need a ride')[0])
assert ('rideservices' == modules.process_query('Call me an <Uber/Lyft>')[0])
assert ('rideservices' == modules.process_query('Can you drive me home?')[0])
assert ('rideservices' != modules.process_query('Can you order me an <Uber/Lyft>')[0])
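`modules.process_query` is defined elsewhere in the repo; the asserts above only rely on it returning a tuple whose first element is the matched intent name. A toy keyword matcher illustrating that interface (the keywords, scoring, and substring matching here are invented, not the repo's logic):

```python
def process_query(query):
    """Toy intent matcher: returns (intent_name, confidence)."""
    intents = {
        'rideservices': ('ride', 'uber', 'lyft', 'drive me'),
    }
    q = query.lower()
    for intent, keywords in intents.items():
        # Naive substring matching; a real matcher would tokenize to avoid
        # false positives like 'ride' inside 'pride'.
        hits = sum(k in q for k in keywords)
        if hits:
            return (intent, hits / len(keywords))
    return ('unknown', 0.0)
```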
| 41.111111 | 90 | 0.689189 | 50 | 370 | 5 | 0.46 | 0.288 | 0.4 | 0.512 | 0.752 | 0.504 | 0.352 | 0.352 | 0 | 0 | 0 | 0.012618 | 0.143243 | 370 | 8 | 91 | 46.25 | 0.776025 | 0 | 0 | 0 | 0 | 0 | 0.367568 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.166667 | true | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dd4305e59c5b90ed533492e4a081af0efedcba80 | 106 | py | Python | scheduler.py | schappe/yo-water-tracker | 7577d7f235bd2c175a5e925ab80b1134b04efa08 | [
"MIT"
] | 2 | 2015-10-06T17:32:16.000Z | 2016-05-13T23:17:42.000Z | scheduler.py | schappe/yo-water-tracker | 7577d7f235bd2c175a5e925ab80b1134b04efa08 | [
"MIT"
] | 1 | 2015-10-08T18:17:08.000Z | 2015-10-09T03:58:32.000Z | scheduler.py | schappe/yo-water-tracker | 7577d7f235bd2c175a5e925ab80b1134b04efa08 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from reminders import trigger_applicable_reminders
trigger_applicable_reminders() | 26.5 | 50 | 0.801887 | 12 | 106 | 6.75 | 0.666667 | 0.419753 | 0.641975 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010417 | 0.09434 | 106 | 4 | 51 | 26.5 | 0.833333 | 0.198113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
dd4343248f0f73b6be6a73ac3a23119738b5d642 | 392 | py | Python | db/__init__.py | wojtekwrona232/seminar-flask-showcase | 82ebc4f694f90f009bbbd3fb4e14adbaa07e1cb0 | [
"MIT"
] | null | null | null | db/__init__.py | wojtekwrona232/seminar-flask-showcase | 82ebc4f694f90f009bbbd3fb4e14adbaa07e1cb0 | [
"MIT"
] | null | null | null | db/__init__.py | wojtekwrona232/seminar-flask-showcase | 82ebc4f694f90f009bbbd3fb4e14adbaa07e1cb0 | [
"MIT"
] | null | null | null | # ****************************************************************************************
# Copyright (c) 2021. Wojciech Wrona *
# ****************************************************************************************
from .orm import Address, User
from .db_user import DBUser
from .db_address import DBAddress
from .util import SQLUtil
| 49 | 90 | 0.316327 | 24 | 392 | 5.083333 | 0.625 | 0.098361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012821 | 0.204082 | 392 | 7 | 91 | 56 | 0.378205 | 0.678571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
06b4f874a4f25d214a1f8368bc405cee37a4ff14 | 107,989 | py | Python | cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_lpts_lib_cfg.py | CiscoDevNet/ydk-py | 073731fea50694d0bc6cd8ebf10fec308dcc0aa9 | [
"ECL-2.0",
"Apache-2.0"
] | 177 | 2016-03-15T17:03:51.000Z | 2022-03-18T16:48:44.000Z | cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_lpts_lib_cfg.py | CiscoDevNet/ydk-py | 073731fea50694d0bc6cd8ebf10fec308dcc0aa9 | [
"ECL-2.0",
"Apache-2.0"
] | 18 | 2016-03-30T10:45:22.000Z | 2020-07-14T16:28:13.000Z | cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_lpts_lib_cfg.py | CiscoDevNet/ydk-py | 073731fea50694d0bc6cd8ebf10fec308dcc0aa9 | [
"ECL-2.0",
"Apache-2.0"
] | 85 | 2016-03-16T20:38:57.000Z | 2022-02-22T04:26:02.000Z | """ Cisco_IOS_XR_lpts_lib_cfg
This module contains a collection of YANG definitions
for Cisco IOS\-XR lpts\-lib package configuration.
This module contains definitions
for the following management objects\:
lpts\: lpts configuration commands
Copyright (c) 2013\-2018 by Cisco Systems, Inc.
All rights reserved.
"""
import sys
from collections import OrderedDict
from ydk.types import Entity as _Entity_
from ydk.types import EntityPath, Identity, Enum, YType, YLeaf, YLeafList, YList, LeafDataList, Bits, Empty, Decimal64
from ydk.types import Entity, EntityPath, Identity, Enum, YType, YLeaf, YLeafList, YList, LeafDataList, Bits, Empty, Decimal64
from ydk.filters import YFilter
from ydk.errors import YError, YModelError
from ydk.errors.error_handler import handle_type_error as _handle_type_error
class Lpts(_Entity_):
"""
lpts configuration commands
.. attribute:: ipolicer
Pre IFiB Policer Configuration
**type**\: :py:class:`Ipolicer <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer>`
**presence node**\: True
.. attribute:: domain_names
Pre IFiB Domains Configuration
**type**\: :py:class:`DomainNames <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.DomainNames>`
.. attribute:: ipunt_policer
Pre IFiB Punt Policer Configuration
**type**\: :py:class:`IpuntPolicer <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer>`
**presence node**\: True
.. attribute:: punt
Configure penalty timeout value
**type**\: :py:class:`Punt <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Punt>`
"""
_prefix = 'lpts-lib-cfg'
_revision = '2015-11-09'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts, self).__init__()
self._top_entity = None
self.yang_name = "lpts"
self.yang_parent_name = "Cisco-IOS-XR-lpts-lib-cfg"
self.is_top_level_class = True
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("Cisco-IOS-XR-lpts-pre-ifib-cfg:ipolicer", ("ipolicer", Lpts.Ipolicer)), ("Cisco-IOS-XR-lpts-pre-ifib-cfg:domain-names", ("domain_names", Lpts.DomainNames)), ("Cisco-IOS-XR-lpts-pre-ifib-cfg:ipunt-policer", ("ipunt_policer", Lpts.IpuntPolicer)), ("Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt", ("punt", Lpts.Punt))])
self._leafs = OrderedDict()
self.ipolicer = None
self._children_name_map["ipolicer"] = "Cisco-IOS-XR-lpts-pre-ifib-cfg:ipolicer"
self.domain_names = Lpts.DomainNames()
self.domain_names.parent = self
self._children_name_map["domain_names"] = "Cisco-IOS-XR-lpts-pre-ifib-cfg:domain-names"
self.ipunt_policer = None
self._children_name_map["ipunt_policer"] = "Cisco-IOS-XR-lpts-pre-ifib-cfg:ipunt-policer"
self.punt = Lpts.Punt()
self.punt.parent = self
self._children_name_map["punt"] = "Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt"
self._segment_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts, [], name, value)
class Ipolicer(_Entity_):
"""
Pre IFiB Policer Configuration
.. attribute:: acls
Table for ACLs
**type**\: :py:class:`Acls <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.Acls>`
.. attribute:: enable
Enabled
**type**\: :py:class:`Empty<ydk.types.Empty>`
**mandatory**\: True
.. attribute:: policer_domains
Policer Domain Table
**type**\: :py:class:`PolicerDomains <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.PolicerDomains>`
.. attribute:: flows
Table for Flows
**type**\: :py:class:`Flows <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.Flows>`
This class is a :ref:`presence class<presence-class>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer, self).__init__()
self.yang_name = "ipolicer"
self.yang_parent_name = "lpts"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("acls", ("acls", Lpts.Ipolicer.Acls)), ("policer-domains", ("policer_domains", Lpts.Ipolicer.PolicerDomains)), ("flows", ("flows", Lpts.Ipolicer.Flows))])
self.is_presence_container = True
self._leafs = OrderedDict([
('enable', (YLeaf(YType.empty, 'enable'), ['Empty'])),
])
self.enable = None
self.acls = Lpts.Ipolicer.Acls()
self.acls.parent = self
self._children_name_map["acls"] = "acls"
self.policer_domains = Lpts.Ipolicer.PolicerDomains()
self.policer_domains.parent = self
self._children_name_map["policer_domains"] = "policer-domains"
self.flows = Lpts.Ipolicer.Flows()
self.flows.parent = self
self._children_name_map["flows"] = "flows"
self._segment_path = lambda: "Cisco-IOS-XR-lpts-pre-ifib-cfg:ipolicer"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer, ['enable'], name, value)
class Acls(_Entity_):
"""
Table for ACLs
.. attribute:: acl
ACL name
**type**\: list of :py:class:`Acl <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.Acls.Acl>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.Acls, self).__init__()
self.yang_name = "acls"
self.yang_parent_name = "ipolicer"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("acl", ("acl", Lpts.Ipolicer.Acls.Acl))])
self._leafs = OrderedDict()
self.acl = YList(self)
self._segment_path = lambda: "acls"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipolicer/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.Acls, [], name, value)
class Acl(_Entity_):
"""
ACL name
.. attribute:: acl_name (key)
ACL name
**type**\: str
**pattern:** [\\w\\\-\\.\:,\_@#%$\\+=\\\|;]+
.. attribute:: afi_types
AFI Family
**type**\: :py:class:`AfiTypes <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.Acls.Acl.AfiTypes>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.Acls.Acl, self).__init__()
self.yang_name = "acl"
self.yang_parent_name = "acls"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['acl_name']
self._child_classes = OrderedDict([("afi-types", ("afi_types", Lpts.Ipolicer.Acls.Acl.AfiTypes))])
self._leafs = OrderedDict([
('acl_name', (YLeaf(YType.str, 'acl-name'), ['str'])),
])
self.acl_name = None
self.afi_types = Lpts.Ipolicer.Acls.Acl.AfiTypes()
self.afi_types.parent = self
self._children_name_map["afi_types"] = "afi-types"
self._segment_path = lambda: "acl" + "[acl-name='" + str(self.acl_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipolicer/acls/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.Acls.Acl, ['acl_name'], name, value)
class AfiTypes(_Entity_):
"""
AFI Family
.. attribute:: afi_type
AFI Family type
**type**\: list of :py:class:`AfiType <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.Acls.Acl.AfiTypes, self).__init__()
self.yang_name = "afi-types"
self.yang_parent_name = "acl"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([("afi-type", ("afi_type", Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType))])
self._leafs = OrderedDict()
self.afi_type = YList(self)
self._segment_path = lambda: "afi-types"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.Acls.Acl.AfiTypes, [], name, value)
class AfiType(_Entity_):
"""
AFI Family type
.. attribute:: afi_family_type (key)
AFI Family Type
**type**\: :py:class:`Lptsafi <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg.Lptsafi>`
.. attribute:: vrf_names
VRF list
**type**\: :py:class:`VrfNames <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType, self).__init__()
self.yang_name = "afi-type"
self.yang_parent_name = "afi-types"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = ['afi_family_type']
self._child_classes = OrderedDict([("vrf-names", ("vrf_names", Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames))])
self._leafs = OrderedDict([
('afi_family_type', (YLeaf(YType.enumeration, 'afi-family-type'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg', 'Lptsafi', '')])),
])
self.afi_family_type = None
self.vrf_names = Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames()
self.vrf_names.parent = self
self._children_name_map["vrf_names"] = "vrf-names"
self._segment_path = lambda: "afi-type" + "[afi-family-type='" + str(self.afi_family_type) + "']"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType, ['afi_family_type'], name, value)
class VrfNames(_Entity_):
"""
VRF list
.. attribute:: vrf_name
VRF name
**type**\: list of :py:class:`VrfName <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames.VrfName>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames, self).__init__()
self.yang_name = "vrf-names"
self.yang_parent_name = "afi-type"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([("vrf-name", ("vrf_name", Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames.VrfName))])
self._leafs = OrderedDict()
self.vrf_name = YList(self)
self._segment_path = lambda: "vrf-names"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames, [], name, value)
class VrfName(_Entity_):
"""
VRF name
.. attribute:: vrf_name (key)
VRF name
**type**\: str
**pattern:** [\\w\\\-\\.\:,\_@#%$\\+=\\\|;]+
.. attribute:: acl_rate
pre\-ifib policer rate config commands
**type**\: int
**range:** 0..100000
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames.VrfName, self).__init__()
self.yang_name = "vrf-name"
self.yang_parent_name = "vrf-names"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = ['vrf_name']
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('vrf_name', (YLeaf(YType.str, 'vrf-name'), ['str'])),
('acl_rate', (YLeaf(YType.uint32, 'acl-rate'), ['int'])),
])
self.vrf_name = None
self.acl_rate = None
self._segment_path = lambda: "vrf-name" + "[vrf-name='" + str(self.vrf_name) + "']"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames.VrfName, ['vrf_name', 'acl_rate'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames.VrfName']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType.VrfNames']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.Acls.Acl.AfiTypes.AfiType']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.Acls.Acl.AfiTypes']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.Acls.Acl']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.Acls']['meta_info']
class PolicerDomains(_Entity_):
"""
Policer Domain Table
.. attribute:: policer_domain
Domain name
**type**\: list of :py:class:`PolicerDomain <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.PolicerDomains.PolicerDomain>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.PolicerDomains, self).__init__()
self.yang_name = "policer-domains"
self.yang_parent_name = "ipolicer"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("policer-domain", ("policer_domain", Lpts.Ipolicer.PolicerDomains.PolicerDomain))])
self._leafs = OrderedDict()
self.policer_domain = YList(self)
self._segment_path = lambda: "policer-domains"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipolicer/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.PolicerDomains, [], name, value)
class PolicerDomain(_Entity_):
"""
Domain name
.. attribute:: domain_name (key)
Domain name
**type**\: str
**pattern:** [\\w\\\-\\.\:,\_@#%$\\+=\\\|;]+
.. attribute:: flows
Table for Flows
**type**\: :py:class:`Flows <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.PolicerDomains.PolicerDomain, self).__init__()
self.yang_name = "policer-domain"
self.yang_parent_name = "policer-domains"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['domain_name']
self._child_classes = OrderedDict([("flows", ("flows", Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows))])
self._leafs = OrderedDict([
('domain_name', (YLeaf(YType.str, 'domain-name'), ['str'])),
])
self.domain_name = None
self.flows = Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows()
self.flows.parent = self
self._children_name_map["flows"] = "flows"
self._segment_path = lambda: "policer-domain" + "[domain-name='" + str(self.domain_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipolicer/policer-domains/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.PolicerDomains.PolicerDomain, ['domain_name'], name, value)
class Flows(_Entity_):
"""
Table for Flows
.. attribute:: flow
selected flow type
**type**\: list of :py:class:`Flow <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows, self).__init__()
self.yang_name = "flows"
self.yang_parent_name = "policer-domain"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([("flow", ("flow", Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow))])
self._leafs = OrderedDict()
self.flow = YList(self)
self._segment_path = lambda: "flows"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows, [], name, value)
class Flow(_Entity_):
"""
selected flow type
.. attribute:: flow_type (key)
LPTS Flow Type
**type**\: :py:class:`LptsFlow <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg.LptsFlow>`
.. attribute:: precedences
TOS Precedence value(s)
**type**\: :py:class:`Precedences <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow.Precedences>`
.. attribute:: rate
Configured rate value
**type**\: int
**range:** 0..4294967295
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow, self).__init__()
self.yang_name = "flow"
self.yang_parent_name = "flows"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = ['flow_type']
self._child_classes = OrderedDict([("precedences", ("precedences", Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow.Precedences))])
self._leafs = OrderedDict([
('flow_type', (YLeaf(YType.enumeration, 'flow-type'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg', 'LptsFlow', '')])),
('rate', (YLeaf(YType.uint32, 'rate'), ['int'])),
])
self.flow_type = None
self.rate = None
self.precedences = Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow.Precedences()
self.precedences.parent = self
self._children_name_map["precedences"] = "precedences"
self._segment_path = lambda: "flow" + "[flow-type='" + str(self.flow_type) + "']"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow, ['flow_type', 'rate'], name, value)
class Precedences(_Entity_):
"""
TOS Precedence value(s)
.. attribute:: precedence
Precedence values
**type**\: union of the below types:
**type**\: list of :py:class:`LptsPreIFibPrecedenceNumber <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg.LptsPreIFibPrecedenceNumber>`
**type**\: list of int
**range:** 0..7
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow.Precedences, self).__init__()
self.yang_name = "precedences"
self.yang_parent_name = "flow"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('precedence', (YLeafList(YType.str, 'precedence'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg', 'LptsPreIFibPrecedenceNumber', ''),'int'])),
])
self.precedence = []
self._segment_path = lambda: "precedences"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow.Precedences, ['precedence'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow.Precedences']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows.Flow']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.PolicerDomains.PolicerDomain.Flows']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.PolicerDomains.PolicerDomain']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.PolicerDomains']['meta_info']
class Flows(_Entity_):
"""
Table for Flows
.. attribute:: flow
selected flow type
**type**\: list of :py:class:`Flow <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.Flows.Flow>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.Flows, self).__init__()
self.yang_name = "flows"
self.yang_parent_name = "ipolicer"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("flow", ("flow", Lpts.Ipolicer.Flows.Flow))])
self._leafs = OrderedDict()
self.flow = YList(self)
self._segment_path = lambda: "flows"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipolicer/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.Flows, [], name, value)
class Flow(_Entity_):
"""
selected flow type
.. attribute:: flow_type (key)
LPTS Flow Type
**type**\: :py:class:`LptsFlow <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg.LptsFlow>`
.. attribute:: precedences
TOS Precedence value(s)
**type**\: :py:class:`Precedences <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Ipolicer.Flows.Flow.Precedences>`
.. attribute:: rate
Configured rate value
**type**\: int
**range:** 0..4294967295
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.Flows.Flow, self).__init__()
self.yang_name = "flow"
self.yang_parent_name = "flows"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['flow_type']
self._child_classes = OrderedDict([("precedences", ("precedences", Lpts.Ipolicer.Flows.Flow.Precedences))])
self._leafs = OrderedDict([
('flow_type', (YLeaf(YType.enumeration, 'flow-type'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg', 'LptsFlow', '')])),
('rate', (YLeaf(YType.uint32, 'rate'), ['int'])),
])
self.flow_type = None
self.rate = None
self.precedences = Lpts.Ipolicer.Flows.Flow.Precedences()
self.precedences.parent = self
self._children_name_map["precedences"] = "precedences"
self._segment_path = lambda: "flow" + "[flow-type='" + str(self.flow_type) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipolicer/flows/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.Flows.Flow, ['flow_type', 'rate'], name, value)
class Precedences(_Entity_):
"""
TOS Precedence value(s)
.. attribute:: precedence
Precedence values
**type**\: union of the below types:
**type**\: list of :py:class:`LptsPreIFibPrecedenceNumber <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg.LptsPreIFibPrecedenceNumber>`
**type**\: list of int
**range:** 0..7
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Ipolicer.Flows.Flow.Precedences, self).__init__()
self.yang_name = "precedences"
self.yang_parent_name = "flow"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('precedence', (YLeafList(YType.str, 'precedence'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg', 'LptsPreIFibPrecedenceNumber', ''),'int'])),
])
self.precedence = []
self._segment_path = lambda: "precedences"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Ipolicer.Flows.Flow.Precedences, ['precedence'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.Flows.Flow.Precedences']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.Flows.Flow']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer.Flows']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Ipolicer']['meta_info']
class DomainNames(_Entity_):
"""
Pre IFiB Domains Configuration
.. attribute:: domain_name
Domain name
**type**\: list of :py:class:`DomainName <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.DomainNames.DomainName>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.DomainNames, self).__init__()
self.yang_name = "domain-names"
self.yang_parent_name = "lpts"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("domain-name", ("domain_name", Lpts.DomainNames.DomainName))])
self._leafs = OrderedDict()
self.domain_name = YList(self)
self._segment_path = lambda: "Cisco-IOS-XR-lpts-pre-ifib-cfg:domain-names"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.DomainNames, [], name, value)
class DomainName(_Entity_):
"""
Domain name
.. attribute:: domain_name (key)
Domain name
**type**\: str
**pattern:** [\\w\\\-\\.\:,\_@#%$\\+=\\\|;]+
.. attribute:: interface_names
Domain Interface
**type**\: :py:class:`InterfaceNames <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.DomainNames.DomainName.InterfaceNames>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.DomainNames.DomainName, self).__init__()
self.yang_name = "domain-name"
self.yang_parent_name = "domain-names"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['domain_name']
self._child_classes = OrderedDict([("interface-names", ("interface_names", Lpts.DomainNames.DomainName.InterfaceNames))])
self._leafs = OrderedDict([
('domain_name', (YLeaf(YType.str, 'domain-name'), ['str'])),
])
self.domain_name = None
self.interface_names = Lpts.DomainNames.DomainName.InterfaceNames()
self.interface_names.parent = self
self._children_name_map["interface_names"] = "interface-names"
self._segment_path = lambda: "domain-name" + "[domain-name='" + str(self.domain_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:domain-names/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.DomainNames.DomainName, ['domain_name'], name, value)
class InterfaceNames(_Entity_):
"""
Domain Interface
.. attribute:: interface_name
pre\-ifib Domain Single interface configuration
**type**\: list of :py:class:`InterfaceName <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.DomainNames.DomainName.InterfaceNames.InterfaceName>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.DomainNames.DomainName.InterfaceNames, self).__init__()
self.yang_name = "interface-names"
self.yang_parent_name = "domain-name"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([("interface-name", ("interface_name", Lpts.DomainNames.DomainName.InterfaceNames.InterfaceName))])
self._leafs = OrderedDict()
self.interface_name = YList(self)
self._segment_path = lambda: "interface-names"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.DomainNames.DomainName.InterfaceNames, [], name, value)
class InterfaceName(_Entity_):
"""
pre\-ifib Domain Single interface
configuration
.. attribute:: domain_interface_name (key)
Interface Name
**type**\: str
**pattern:** [a\-zA\-Z0\-9.\_/\-]+
.. attribute:: domain_interface_name_xr
Enabled or disabled
**type**\: bool
**mandatory**\: True
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.DomainNames.DomainName.InterfaceNames.InterfaceName, self).__init__()
self.yang_name = "interface-name"
self.yang_parent_name = "interface-names"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = ['domain_interface_name']
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('domain_interface_name', (YLeaf(YType.str, 'domain-interface-name'), ['str'])),
('domain_interface_name_xr', (YLeaf(YType.boolean, 'domain-interface-name-xr'), ['bool'])),
])
self.domain_interface_name = None
self.domain_interface_name_xr = None
self._segment_path = lambda: "interface-name" + "[domain-interface-name='" + str(self.domain_interface_name) + "']"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.DomainNames.DomainName.InterfaceNames.InterfaceName, ['domain_interface_name', 'domain_interface_name_xr'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.DomainNames.DomainName.InterfaceNames.InterfaceName']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.DomainNames.DomainName.InterfaceNames']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.DomainNames.DomainName']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.DomainNames']['meta_info']
class IpuntPolicer(_Entity_):
"""
Pre IFiB Punt Policer Configuration
.. attribute:: punt_type_table
Punt Policer Table
**type**\: :py:class:`PuntTypeTable <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntTypeTable>`
.. attribute:: enable
Enabled
**type**\: :py:class:`Empty<ydk.types.Empty>`
**mandatory**\: True
.. attribute:: punt_policer_domains
Punt Policer Domain Table
**type**\: :py:class:`PuntPolicerDomains <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerDomains>`
.. attribute:: punt_policer_interface_names
Punt Policer Interface
**type**\: :py:class:`PuntPolicerInterfaceNames <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerInterfaceNames>`
This class is a :ref:`presence class<presence-class>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer, self).__init__()
self.yang_name = "ipunt-policer"
self.yang_parent_name = "lpts"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("punt-type-table", ("punt_type_table", Lpts.IpuntPolicer.PuntTypeTable)), ("punt-policer-domains", ("punt_policer_domains", Lpts.IpuntPolicer.PuntPolicerDomains)), ("punt-policer-interface-names", ("punt_policer_interface_names", Lpts.IpuntPolicer.PuntPolicerInterfaceNames))])
self.is_presence_container = True
self._leafs = OrderedDict([
('enable', (YLeaf(YType.empty, 'enable'), ['Empty'])),
])
self.enable = None
self.punt_type_table = Lpts.IpuntPolicer.PuntTypeTable()
self.punt_type_table.parent = self
self._children_name_map["punt_type_table"] = "punt-type-table"
self.punt_policer_domains = Lpts.IpuntPolicer.PuntPolicerDomains()
self.punt_policer_domains.parent = self
self._children_name_map["punt_policer_domains"] = "punt-policer-domains"
self.punt_policer_interface_names = Lpts.IpuntPolicer.PuntPolicerInterfaceNames()
self.punt_policer_interface_names.parent = self
self._children_name_map["punt_policer_interface_names"] = "punt-policer-interface-names"
self._segment_path = lambda: "Cisco-IOS-XR-lpts-pre-ifib-cfg:ipunt-policer"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer, ['enable'], name, value)
class PuntTypeTable(_Entity_):
"""
Punt Policer Table
.. attribute:: punt_type
Punt Protocol Type
**type**\: list of :py:class:`PuntType <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntTypeTable.PuntType>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntTypeTable, self).__init__()
self.yang_name = "punt-type-table"
self.yang_parent_name = "ipunt-policer"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("punt-type", ("punt_type", Lpts.IpuntPolicer.PuntTypeTable.PuntType))])
self._leafs = OrderedDict()
self.punt_type = YList(self)
self._segment_path = lambda: "punt-type-table"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipunt-policer/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntTypeTable, [], name, value)
class PuntType(_Entity_):
"""
Punt Protocol Type
.. attribute:: punt_id (key)
Punt Type
**type**\: :py:class:`LptsPunt <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg.LptsPunt>`
.. attribute:: rate
Enable or Disable Punt Policer and corresponding Rate in PPS
**type**\: :py:class:`Rate <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntTypeTable.PuntType.Rate>`
**presence node**\: True
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntTypeTable.PuntType, self).__init__()
self.yang_name = "punt-type"
self.yang_parent_name = "punt-type-table"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['punt_id']
self._child_classes = OrderedDict([("rate", ("rate", Lpts.IpuntPolicer.PuntTypeTable.PuntType.Rate))])
self._leafs = OrderedDict([
('punt_id', (YLeaf(YType.enumeration, 'punt-id'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg', 'LptsPunt', '')])),
])
self.punt_id = None
self.rate = None
self._children_name_map["rate"] = "rate"
self._segment_path = lambda: "punt-type" + "[punt-id='" + str(self.punt_id) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipunt-policer/punt-type-table/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntTypeTable.PuntType, ['punt_id'], name, value)
class Rate(_Entity_):
"""
Enable or Disable Punt Policer and corresponding
Rate in PPS
.. attribute:: is_enabled
Is Punt Policer enabled
**type**\: bool
**mandatory**\: True
.. attribute:: rate
Configured rate value
**type**\: int
**range:** 0..4294967295
This class is a :ref:`presence class<presence-class>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntTypeTable.PuntType.Rate, self).__init__()
self.yang_name = "rate"
self.yang_parent_name = "punt-type"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self.is_presence_container = True
self._leafs = OrderedDict([
('is_enabled', (YLeaf(YType.boolean, 'is-enabled'), ['bool'])),
('rate', (YLeaf(YType.uint32, 'rate'), ['int'])),
])
self.is_enabled = None
self.rate = None
self._segment_path = lambda: "rate"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntTypeTable.PuntType.Rate, ['is_enabled', 'rate'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntTypeTable.PuntType.Rate']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntTypeTable.PuntType']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntTypeTable']['meta_info']
class PuntPolicerDomains(_Entity_):
"""
Punt Policer Domain Table
.. attribute:: punt_policer_domain
Domain name
**type**\: list of :py:class:`PuntPolicerDomain <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerDomains, self).__init__()
self.yang_name = "punt-policer-domains"
self.yang_parent_name = "ipunt-policer"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("punt-policer-domain", ("punt_policer_domain", Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain))])
self._leafs = OrderedDict()
self.punt_policer_domain = YList(self)
self._segment_path = lambda: "punt-policer-domains"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipunt-policer/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerDomains, [], name, value)
class PuntPolicerDomain(_Entity_):
"""
Domain name
.. attribute:: domain_name (key)
Domain name
**type**\: str
**pattern:** [\\w\\\-\\.\:,\_@#%$\\+=\\\|;]+
.. attribute:: punt_type_domain_table
Punt Policer Table
**type**\: :py:class:`PuntTypeDomainTable <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain, self).__init__()
self.yang_name = "punt-policer-domain"
self.yang_parent_name = "punt-policer-domains"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['domain_name']
self._child_classes = OrderedDict([("punt-type-domain-table", ("punt_type_domain_table", Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable))])
self._leafs = OrderedDict([
('domain_name', (YLeaf(YType.str, 'domain-name'), ['str'])),
])
self.domain_name = None
self.punt_type_domain_table = Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable()
self.punt_type_domain_table.parent = self
self._children_name_map["punt_type_domain_table"] = "punt-type-domain-table"
self._segment_path = lambda: "punt-policer-domain" + "[domain-name='" + str(self.domain_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipunt-policer/punt-policer-domains/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain, ['domain_name'], name, value)
class PuntTypeDomainTable(_Entity_):
"""
Punt Policer Table
.. attribute:: punt_type
Punt Protocol Type
**type**\: list of :py:class:`PuntType <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable, self).__init__()
self.yang_name = "punt-type-domain-table"
self.yang_parent_name = "punt-policer-domain"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([("punt-type", ("punt_type", Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType))])
self._leafs = OrderedDict()
self.punt_type = YList(self)
self._segment_path = lambda: "punt-type-domain-table"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable, [], name, value)
class PuntType(_Entity_):
"""
Punt Protocol Type
.. attribute:: punt_id (key)
Punt Type
**type**\: :py:class:`LptsPunt <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg.LptsPunt>`
.. attribute:: rate
Enable or Disable Punt Policer and corresponding Rate in PPS
**type**\: :py:class:`Rate <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType.Rate>`
**presence node**\: True
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType, self).__init__()
self.yang_name = "punt-type"
self.yang_parent_name = "punt-type-domain-table"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = ['punt_id']
self._child_classes = OrderedDict([("rate", ("rate", Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType.Rate))])
self._leafs = OrderedDict([
('punt_id', (YLeaf(YType.enumeration, 'punt-id'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg', 'LptsPunt', '')])),
])
self.punt_id = None
self.rate = None
self._children_name_map["rate"] = "rate"
self._segment_path = lambda: "punt-type" + "[punt-id='" + str(self.punt_id) + "']"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType, ['punt_id'], name, value)
class Rate(_Entity_):
"""
Enable or Disable Punt Policer and corresponding
Rate in PPS
.. attribute:: is_enabled
Is Punt Policer enabled
**type**\: bool
**mandatory**\: True
.. attribute:: rate
Configured rate value
**type**\: int
**range:** 0..4294967295
This class is a :ref:`presence class<presence-class>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType.Rate, self).__init__()
self.yang_name = "rate"
self.yang_parent_name = "punt-type"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self.is_presence_container = True
self._leafs = OrderedDict([
('is_enabled', (YLeaf(YType.boolean, 'is-enabled'), ['bool'])),
('rate', (YLeaf(YType.uint32, 'rate'), ['int'])),
])
self.is_enabled = None
self.rate = None
self._segment_path = lambda: "rate"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType.Rate, ['is_enabled', 'rate'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType.Rate']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable.PuntType']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain.PuntTypeDomainTable']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerDomains.PuntPolicerDomain']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerDomains']['meta_info']
class PuntPolicerInterfaceNames(_Entity_):
"""
Punt Policer Interface
.. attribute:: punt_policer_interface_name
Pre\-ifib Punt Policer Interface Configuration
**type**\: list of :py:class:`PuntPolicerInterfaceName <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerInterfaceNames, self).__init__()
self.yang_name = "punt-policer-interface-names"
self.yang_parent_name = "ipunt-policer"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("punt-policer-interface-name", ("punt_policer_interface_name", Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName))])
self._leafs = OrderedDict()
self.punt_policer_interface_name = YList(self)
self._segment_path = lambda: "punt-policer-interface-names"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipunt-policer/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerInterfaceNames, [], name, value)
class PuntPolicerInterfaceName(_Entity_):
"""
Pre\-ifib Punt Policer Interface Configuration
.. attribute:: punt_interface_name (key)
Interface Name
**type**\: str
**pattern:** [a\-zA\-Z0\-9.\_/\-]+
.. attribute:: punt_type_interface_table
Punt Policer Table
**type**\: :py:class:`PuntTypeInterfaceTable <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName, self).__init__()
self.yang_name = "punt-policer-interface-name"
self.yang_parent_name = "punt-policer-interface-names"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['punt_interface_name']
self._child_classes = OrderedDict([("punt-type-interface-table", ("punt_type_interface_table", Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable))])
self._leafs = OrderedDict([
('punt_interface_name', (YLeaf(YType.str, 'punt-interface-name'), ['str'])),
])
self.punt_interface_name = None
self.punt_type_interface_table = Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable()
self.punt_type_interface_table.parent = self
self._children_name_map["punt_type_interface_table"] = "punt-type-interface-table"
self._segment_path = lambda: "punt-policer-interface-name" + "[punt-interface-name='" + str(self.punt_interface_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-pre-ifib-cfg:ipunt-policer/punt-policer-interface-names/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName, ['punt_interface_name'], name, value)
class PuntTypeInterfaceTable(_Entity_):
"""
Punt Policer Table
.. attribute:: punt_type
Punt Protocol Type
**type**\: list of :py:class:`PuntType <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable, self).__init__()
self.yang_name = "punt-type-interface-table"
self.yang_parent_name = "punt-policer-interface-name"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([("punt-type", ("punt_type", Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType))])
self._leafs = OrderedDict()
self.punt_type = YList(self)
self._segment_path = lambda: "punt-type-interface-table"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable, [], name, value)
class PuntType(_Entity_):
"""
Punt Protocol Type
.. attribute:: punt_id (key)
Punt Type
**type**\: :py:class:`LptsPunt <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg.LptsPunt>`
.. attribute:: rate
Enable or Disable Punt Policer and corresponding Rate in PPS
**type**\: :py:class:`Rate <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType.Rate>`
**presence node**\: True
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType, self).__init__()
self.yang_name = "punt-type"
self.yang_parent_name = "punt-type-interface-table"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = ['punt_id']
self._child_classes = OrderedDict([("rate", ("rate", Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType.Rate))])
self._leafs = OrderedDict([
('punt_id', (YLeaf(YType.enumeration, 'punt-id'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_pre_ifib_cfg', 'LptsPunt', '')])),
])
self.punt_id = None
self.rate = None
self._children_name_map["rate"] = "rate"
self._segment_path = lambda: "punt-type" + "[punt-id='" + str(self.punt_id) + "']"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType, ['punt_id'], name, value)
class Rate(_Entity_):
"""
Enable or Disable Punt Policer and corresponding
Rate in PPS
.. attribute:: is_enabled
Is Punt Policer enabled
**type**\: bool
**mandatory**\: True
.. attribute:: rate
Configured rate value
**type**\: int
**range:** 0..4294967295
This class is a :ref:`presence class<presence-class>`
"""
_prefix = 'lpts-pre-ifib-cfg'
_revision = '2019-10-23'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType.Rate, self).__init__()
self.yang_name = "rate"
self.yang_parent_name = "punt-type"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self.is_presence_container = True
self._leafs = OrderedDict([
('is_enabled', (YLeaf(YType.boolean, 'is-enabled'), ['bool'])),
('rate', (YLeaf(YType.uint32, 'rate'), ['int'])),
])
self.is_enabled = None
self.rate = None
self._segment_path = lambda: "rate"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType.Rate, ['is_enabled', 'rate'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType.Rate']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable.PuntType']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName.PuntTypeInterfaceTable']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerInterfaceNames.PuntPolicerInterfaceName']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer.PuntPolicerInterfaceNames']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.IpuntPolicer']['meta_info']
class Punt(_Entity_):
"""
Configure penalty timeout value
.. attribute:: flowtrap
excessive punt flow trap configuration commands
**type**\: :py:class:`Flowtrap <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Punt.Flowtrap>`
"""
_prefix = 'lpts-punt-flowtrap-cfg'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Punt, self).__init__()
self.yang_name = "punt"
self.yang_parent_name = "lpts"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("flowtrap", ("flowtrap", Lpts.Punt.Flowtrap))])
self._leafs = OrderedDict()
self.flowtrap = Lpts.Punt.Flowtrap()
self.flowtrap.parent = self
self._children_name_map["flowtrap"] = "flowtrap"
self._segment_path = lambda: "Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Punt, [], name, value)
class Flowtrap(_Entity_):
"""
excessive punt flow trap configuration commands
.. attribute:: penalty_rates
Configure penalty policing rate
**type**\: :py:class:`PenaltyRates <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Punt.Flowtrap.PenaltyRates>`
.. attribute:: penalty_timeouts
Configure penalty timeout value
**type**\: :py:class:`PenaltyTimeouts <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Punt.Flowtrap.PenaltyTimeouts>`
.. attribute:: exclude
Exclude an item from all traps
**type**\: :py:class:`Exclude <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Punt.Flowtrap.Exclude>`
.. attribute:: max_flow_gap
Maximum flow gap in milliseconds
**type**\: int
**range:** 1..60000
.. attribute:: et_size
Should be power of 2. Any one of 1,2,4,8,16,32 ,64,128
**type**\: int
**range:** 1..128
.. attribute:: eviction_threshold
Eviction threshold, should be less than report\-threshold
**type**\: int
**range:** 1..65535
.. attribute:: report_threshold
Threshold to cross for a flow to be considered as bad actor flow
**type**\: int
**range:** 1..65535
.. attribute:: non_subscriber_interfaces
Enable trap based on source mac on non\-subscriber interface
**type**\: int
**range:** 0..4294967295
.. attribute:: sample_prob
Probability of packets to be sampled
**type**\: str
**length:** 1..32
.. attribute:: eviction_search_limit
Eviction search limit, should be less than trap\-size
**type**\: int
**range:** 1..128
.. attribute:: routing_protocols_enable
Allow routing protocols to pass through copp sampler
**type**\: bool
.. attribute:: subscriber_interfaces
Enable the trap on subscriber interfaces
**type**\: bool
.. attribute:: interface_based_flow
Identify flow based on interface and flowtype
**type**\: bool
.. attribute:: dampening
Dampening period for a bad actor flow in milliseconds
**type**\: int
**range:** 5000..60000
"""
_prefix = 'lpts-punt-flowtrap-cfg'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Punt.Flowtrap, self).__init__()
self.yang_name = "flowtrap"
self.yang_parent_name = "punt"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("penalty-rates", ("penalty_rates", Lpts.Punt.Flowtrap.PenaltyRates)), ("penalty-timeouts", ("penalty_timeouts", Lpts.Punt.Flowtrap.PenaltyTimeouts)), ("exclude", ("exclude", Lpts.Punt.Flowtrap.Exclude))])
self._leafs = OrderedDict([
('max_flow_gap', (YLeaf(YType.uint32, 'max-flow-gap'), ['int'])),
('et_size', (YLeaf(YType.uint32, 'et-size'), ['int'])),
('eviction_threshold', (YLeaf(YType.uint32, 'eviction-threshold'), ['int'])),
('report_threshold', (YLeaf(YType.uint16, 'report-threshold'), ['int'])),
('non_subscriber_interfaces', (YLeaf(YType.uint32, 'non-subscriber-interfaces'), ['int'])),
('sample_prob', (YLeaf(YType.str, 'sample-prob'), ['str'])),
('eviction_search_limit', (YLeaf(YType.uint32, 'eviction-search-limit'), ['int'])),
('routing_protocols_enable', (YLeaf(YType.boolean, 'routing-protocols-enable'), ['bool'])),
('subscriber_interfaces', (YLeaf(YType.boolean, 'subscriber-interfaces'), ['bool'])),
('interface_based_flow', (YLeaf(YType.boolean, 'interface-based-flow'), ['bool'])),
('dampening', (YLeaf(YType.uint32, 'dampening'), ['int'])),
])
self.max_flow_gap = None
self.et_size = None
self.eviction_threshold = None
self.report_threshold = None
self.non_subscriber_interfaces = None
self.sample_prob = None
self.eviction_search_limit = None
self.routing_protocols_enable = None
self.subscriber_interfaces = None
self.interface_based_flow = None
self.dampening = None
self.penalty_rates = Lpts.Punt.Flowtrap.PenaltyRates()
self.penalty_rates.parent = self
self._children_name_map["penalty_rates"] = "penalty-rates"
self.penalty_timeouts = Lpts.Punt.Flowtrap.PenaltyTimeouts()
self.penalty_timeouts.parent = self
self._children_name_map["penalty_timeouts"] = "penalty-timeouts"
self.exclude = Lpts.Punt.Flowtrap.Exclude()
self.exclude.parent = self
self._children_name_map["exclude"] = "exclude"
self._segment_path = lambda: "flowtrap"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Punt.Flowtrap, ['max_flow_gap', 'et_size', 'eviction_threshold', 'report_threshold', 'non_subscriber_interfaces', 'sample_prob', 'eviction_search_limit', 'routing_protocols_enable', 'subscriber_interfaces', 'interface_based_flow', 'dampening'], name, value)
class PenaltyRates(_Entity_):
"""
Configure penalty policing rate
.. attribute:: penalty_rate
none
**type**\: list of :py:class:`PenaltyRate <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Punt.Flowtrap.PenaltyRates.PenaltyRate>`
"""
_prefix = 'lpts-punt-flowtrap-cfg'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Punt.Flowtrap.PenaltyRates, self).__init__()
self.yang_name = "penalty-rates"
self.yang_parent_name = "flowtrap"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("penalty-rate", ("penalty_rate", Lpts.Punt.Flowtrap.PenaltyRates.PenaltyRate))])
self._leafs = OrderedDict()
self.penalty_rate = YList(self)
self._segment_path = lambda: "penalty-rates"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt/flowtrap/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Punt.Flowtrap.PenaltyRates, [], name, value)
class PenaltyRate(_Entity_):
"""
none
.. attribute:: protocol_name (key)
none
**type**\: :py:class:`LptsPuntFlowtrapProtoId <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_punt_flowtrap_cfg.LptsPuntFlowtrapProtoId>`
.. attribute:: rate
Penalty policer rate in packets\-per\-second
**type**\: int
**range:** 2..100
**mandatory**\: True
"""
_prefix = 'lpts-punt-flowtrap-cfg'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Punt.Flowtrap.PenaltyRates.PenaltyRate, self).__init__()
self.yang_name = "penalty-rate"
self.yang_parent_name = "penalty-rates"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['protocol_name']
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('protocol_name', (YLeaf(YType.enumeration, 'protocol-name'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_punt_flowtrap_cfg', 'LptsPuntFlowtrapProtoId', '')])),
('rate', (YLeaf(YType.uint32, 'rate'), ['int'])),
])
self.protocol_name = None
self.rate = None
self._segment_path = lambda: "penalty-rate" + "[protocol-name='" + str(self.protocol_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt/flowtrap/penalty-rates/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Punt.Flowtrap.PenaltyRates.PenaltyRate, ['protocol_name', 'rate'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Punt.Flowtrap.PenaltyRates.PenaltyRate']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Punt.Flowtrap.PenaltyRates']['meta_info']
class PenaltyTimeouts(_Entity_):
"""
Configure penalty timeout value
.. attribute:: penalty_timeout
none
**type**\: list of :py:class:`PenaltyTimeout <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Punt.Flowtrap.PenaltyTimeouts.PenaltyTimeout>`
"""
_prefix = 'lpts-punt-flowtrap-cfg'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Punt.Flowtrap.PenaltyTimeouts, self).__init__()
self.yang_name = "penalty-timeouts"
self.yang_parent_name = "flowtrap"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("penalty-timeout", ("penalty_timeout", Lpts.Punt.Flowtrap.PenaltyTimeouts.PenaltyTimeout))])
self._leafs = OrderedDict()
self.penalty_timeout = YList(self)
self._segment_path = lambda: "penalty-timeouts"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt/flowtrap/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Punt.Flowtrap.PenaltyTimeouts, [], name, value)
class PenaltyTimeout(_Entity_):
"""
none
.. attribute:: protocol_name (key)
none
**type**\: :py:class:`LptsPuntFlowtrapProtoId <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_punt_flowtrap_cfg.LptsPuntFlowtrapProtoId>`
.. attribute:: timeout
Timeout value in minutes
**type**\: int
**range:** 1..1000
**mandatory**\: True
"""
_prefix = 'lpts-punt-flowtrap-cfg'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Punt.Flowtrap.PenaltyTimeouts.PenaltyTimeout, self).__init__()
self.yang_name = "penalty-timeout"
self.yang_parent_name = "penalty-timeouts"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['protocol_name']
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('protocol_name', (YLeaf(YType.enumeration, 'protocol-name'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_punt_flowtrap_cfg', 'LptsPuntFlowtrapProtoId', '')])),
('timeout', (YLeaf(YType.uint32, 'timeout'), ['int'])),
])
self.protocol_name = None
self.timeout = None
self._segment_path = lambda: "penalty-timeout" + "[protocol-name='" + str(self.protocol_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt/flowtrap/penalty-timeouts/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Punt.Flowtrap.PenaltyTimeouts.PenaltyTimeout, ['protocol_name', 'timeout'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Punt.Flowtrap.PenaltyTimeouts.PenaltyTimeout']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Punt.Flowtrap.PenaltyTimeouts']['meta_info']
class Exclude(_Entity_):
"""
Exclude an item from all traps
.. attribute:: interface_names
none
**type**\: :py:class:`InterfaceNames <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Punt.Flowtrap.Exclude.InterfaceNames>`
"""
_prefix = 'lpts-punt-flowtrap-cfg'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Punt.Flowtrap.Exclude, self).__init__()
self.yang_name = "exclude"
self.yang_parent_name = "flowtrap"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("interface-names", ("interface_names", Lpts.Punt.Flowtrap.Exclude.InterfaceNames))])
self._leafs = OrderedDict()
self.interface_names = Lpts.Punt.Flowtrap.Exclude.InterfaceNames()
self.interface_names.parent = self
self._children_name_map["interface_names"] = "interface-names"
self._segment_path = lambda: "exclude"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt/flowtrap/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Punt.Flowtrap.Exclude, [], name, value)
class InterfaceNames(_Entity_):
"""
none
.. attribute:: interface_name
Name of interface to exclude from all traps
**type**\: list of :py:class:`InterfaceName <ydk.models.cisco_ios_xr.Cisco_IOS_XR_lpts_lib_cfg.Lpts.Punt.Flowtrap.Exclude.InterfaceNames.InterfaceName>`
"""
_prefix = 'lpts-punt-flowtrap-cfg'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Punt.Flowtrap.Exclude.InterfaceNames, self).__init__()
self.yang_name = "interface-names"
self.yang_parent_name = "exclude"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("interface-name", ("interface_name", Lpts.Punt.Flowtrap.Exclude.InterfaceNames.InterfaceName))])
self._leafs = OrderedDict()
self.interface_name = YList(self)
self._segment_path = lambda: "interface-names"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt/flowtrap/exclude/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Punt.Flowtrap.Exclude.InterfaceNames, [], name, value)
class InterfaceName(_Entity_):
"""
Name of interface to exclude from all traps
.. attribute:: ifname (key)
Name of interface to exclude from all traps
**type**\: str
**pattern:** [a\-zA\-Z0\-9.\_/\-]+
.. attribute:: id1
Enabled or disabled
**type**\: bool
**mandatory**\: True
"""
_prefix = 'lpts-punt-flowtrap-cfg'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(Lpts.Punt.Flowtrap.Exclude.InterfaceNames.InterfaceName, self).__init__()
self.yang_name = "interface-name"
self.yang_parent_name = "interface-names"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['ifname']
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('ifname', (YLeaf(YType.str, 'ifname'), ['str'])),
('id1', (YLeaf(YType.boolean, 'id1'), ['bool'])),
])
self.ifname = None
self.id1 = None
self._segment_path = lambda: "interface-name" + "[ifname='" + str(self.ifname) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-lpts-lib-cfg:lpts/Cisco-IOS-XR-lpts-punt-flowtrap-cfg:punt/flowtrap/exclude/interface-names/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Lpts.Punt.Flowtrap.Exclude.InterfaceNames.InterfaceName, ['ifname', 'id1'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Punt.Flowtrap.Exclude.InterfaceNames.InterfaceName']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Punt.Flowtrap.Exclude.InterfaceNames']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Punt.Flowtrap.Exclude']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Punt.Flowtrap']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts.Punt']['meta_info']
def clone_ptr(self):
self._top_entity = Lpts()
return self._top_entity
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_lpts_lib_cfg as meta
return meta._meta_table['Lpts']['meta_info']
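The generated list classes above (`PenaltyRate`, `PenaltyTimeout`, `InterfaceName`) all build their XPath segment the same way: the list name plus a key predicate on the key leaf. A minimal standalone sketch of that pattern follows; the key value "ospf" is a hypothetical example, not taken from the model.

```python
# Sketch of the keyed segment path each generated list entry installs
# as its _segment_path lambda (cf. PenaltyRate above).
def segment_path(list_name, key_name, key_value):
    """Mirror of the list-name + [key='value'] predicate pattern."""
    return list_name + "[" + key_name + "='" + str(key_value) + "']"

path = segment_path("penalty-rate", "protocol-name", "ospf")
```

The absolute path is then the parent container's path joined with this segment.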
# File: MstarHe2R/apps/test.py (repo IzayoiRin/MstarHe2R, MIT license)
def main(*args, **kwargs):
    print('*'*10 + "Successful Testing" + '*'*10)
    if args:
        print('*'*10 + "Args: %s" % args + '*'*10)
    if kwargs:
        print('*'*10 + "Kwargs: %s" % kwargs + '*'*10)
# File: surname_fnn/surname/args.py (repo sudarshan85/nlpbook, Apache-2.0 license)
#!/usr/bin/env python
from argparse import Namespace
mlp_args = Namespace(
    raw_dataset_csv='surnames.csv',
    train_proportion=0.7,
    proc_dataset_csv='surnames_with_splits.csv',
    model_dir='models',
    vectorizer_fname='mlp_vectorizer.json',
    cw_file='class_weights.pt',
    batch_size=64,
    hidden_dim=300,
    early_stopping_criteria=5,
    learning_rate=0.001,
    num_epochs=100,
    device='cuda:3',
    checkpointer_prefix='surname',
    checkpointer_name='mlp_classifier',
    save_every=2,  # save model every n epochs
    save_total=5,  # have total of n models saved
)

cnn_args = Namespace(
    raw_dataset_csv='surnames.csv',
    train_proportion=0.7,
    proc_dataset_csv='surnames_with_splits.csv',
    model_dir='models',
    vectorizer_fname='cnn_vectorizer.json',
    cw_file='class_weights.pt',
    batch_size=64,
    hidden_dim=100,
    dropout_p=0.1,
    num_channels=256,
    early_stopping_criteria=5,
    learning_rate=0.001,
    num_epochs=100,
    device='cuda:3',
    checkpointer_prefix='surname',
    checkpointer_name='cnn_classifier',
    save_every=2,  # save model every n epochs
    save_total=5,  # have total of n models saved
)
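A short sketch of how such a `Namespace` is typically consumed downstream: attribute access, a dict view via `vars()`, and overriding a field. The values here are a hypothetical subset of the block above.

```python
from argparse import Namespace

# Hypothetical subset of mlp_args, just to show the access patterns
args = Namespace(batch_size=64, hidden_dim=300, learning_rate=0.001)

assert args.batch_size == 64   # plain attribute access
params = vars(args)            # live dict view of the namespace
args.hidden_dim = 200          # fields are mutable; vars() sees the change
```

Because `vars()` returns the namespace's own `__dict__`, later attribute writes are visible through the dict as well.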
# File: neuralknight/tests/test_board.py (repo grandquista/neuralknight-v1, MIT license)
# from collections import deque
# # from itertools import starmap
# from pytest import raises
#
# from ..models.board_constants import KING, QUEEN
#
#
# # def test_board_creation_valid(start_board):
# # assert start_board
# # assert start_board.board == tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 9, 9, 9, 9, 9, 9),
# # (13, 7, 3, 11, 5, 3, 7, 13))))
#
#
# def test_pieces_on_board(start_board):
# assert KING in start_board
# assert QUEEN in start_board
# assert KING | 1 in start_board
# assert QUEEN | 1 in start_board
#
#
# def test_first_move_available(start_board):
# assert next(start_board.lookahead_boards(1))
#
#
# def test_iterates_future_boards(start_board):
# assert isinstance(next(iter(start_board))[0], tuple)
#
#
# def test_string_represention(start_board):
# assert str(start_board) == '''\
# ♜♞♝♛♚♝♞♜
# ♟♟♟♟♟♟♟♟
# ▪▫▪▫▪▫▪▫
# ▫▪▫▪▫▪▫▪
# ▪▫▪▫▪▫▪▫
# ▫▪▫▪▫▪▫▪
# ♙♙♙♙♙♙♙♙
# ♖♘♗♕♔♗♘♖\
# '''
#
#
# def test_string_represention_swap(start_board):
# start = str(start_board)
# start_board._board = start_board._board.swap()
# assert str(start_board) == start
#
#
# def test_string_represention_end(end_game_board):
# assert str(end_game_board) == '''\
# ▪▫▪▫▪▫▪▫
# ▫▪▫▪▫▪▫▪
# ▪▫▪▫▪▫▪▫
# ▫▪▫♚▫▪▫▪
# ▪▫▪▫▪▫♕▫
# ▫▪▫♔▫▪▫▪
# ▪▫▪▫▪▫▪▫
# ▫▪▫▪▫▪▫▪\
# '''
#
#
# def test_lookahead_length(start_board):
# assert set(map(len, start_board.lookahead_boards(1))) == {1}
# assert set(map(len, start_board.lookahead_boards(3))) == {3}
# assert set(map(len, start_board.prune_lookahead_boards(4))) == {2}
#
#
# def test_more_than_one_next_move(start_board):
# it = start_board.lookahead_boards(1)
# assert next(it)
# assert next(it)
#
#
# def test_moves_consumption_lookahead_1(start_board):
# it = start_board.lookahead_boards(1)
# deque(it, maxlen=0)
# with raises(StopIteration):
# next(it)
#
#
# def test_moves_consumption_lookahead_2(start_board):
# it = start_board.lookahead_boards(2)
# deque(it, maxlen=0)
# with raises(StopIteration):
# next(it)
#
#
# # def test_moves_to_end(start_board):
# # def test(*args):
# # assert all(isinstance(board, type(start_board)) for board in args)
# # return None if args[-1] else args
# # win = next(filter(None, starmap(test, start_board.lookahead_boards(5))))
# # assert not win[-1]
#
#
# # def test_moves_pawn_init_board(pawn_capture_board):
# # for state, _ in pawn_capture_board.lookahead_boards(2):
# # assert pawn_capture_board.update(state)
#
#
# # def test_moves_pawn_final_board(min_pawn_board):
# # for state, _, _ in min_pawn_board.lookahead_boards(3):
# # assert min_pawn_board.update(state)
#
#
# def test_board_mutations_are_valid(start_board):
# mutated_board = next(start_board.lookahead_boards(1))[0]
# assert -1 not in mutated_board
#
#
# # def test_invalid_board_move_two(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 0, 0, 0, 0, 0, 0),
# # (0, 0, 9, 9, 9, 9, 9, 9),
# # (13, 7, 3, 11, 5, 3, 7, 13)))))
#
#
# # def test_invalid_board_move_extra_pieces(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 0, 0, 0, 0, 0, 0),
# # (0, 9, 9, 9, 9, 9, 9, 9),
# # (13, 7, 3, 11, 5, 3, 7, 13)))))
#
#
# # def test_invalid_board_duplicate_pieces(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (7, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 9, 9, 9, 9, 9, 9),
# # (13, 7, 3, 11, 5, 3, 7, 13)))))
#
#
# # def test_invalid_board_move_invalid(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 7, 0, 0, 0, 0, 0, 0),
# # (9, 9, 9, 9, 9, 9, 9, 9),
# # (13, 0, 3, 11, 5, 3, 7, 13)))))
#
#
# # def test_invalid_board_move_inactive_piece(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 0, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 6, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 9, 9, 9, 9, 9, 9),
# # (13, 7, 3, 11, 5, 3, 7, 13)))))
#
#
# # def test_invalid_board_move_capture_own(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 9, 3, 9, 9, 9, 9),
# # (13, 7, 0, 11, 5, 3, 7, 13)))))
#
#
# # def test_invalid_board_move_blocked(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 3),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 9, 9, 9, 9, 9, 9),
# # (13, 7, 0, 11, 5, 3, 7, 13)))))
#
#
# # def test_invalid_board_move_modifies_piece(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 11),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 9, 9, 9, 9, 9, 9),
# # (13, 7, 0, 11, 5, 3, 7, 13)))))
#
#
# # def test_invalid_board_move_piece_swap(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (7, 9, 9, 9, 9, 9, 9, 9),
# # (13, 9, 3, 11, 5, 3, 7, 13)))))
#
#
# # def test_invalid_board_move_no_change(start_board):
# # with raises(RuntimeError):
# # start_board.update(tuple(map(bytes, (
# # (12, 6, 2, 10, 4, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 8),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 9, 9, 9, 9, 9, 9),
# # (13, 7, 3, 11, 5, 3, 7, 13)))))
#
#
# # def test_valid_board_move_backwards(end_game_board):
# # assert end_game_board.update(tuple(map(bytes, (
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 4, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 5, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 11, 0)))))
#
#
# # def test_board_provides_update(start_board):
# # mutated_board = next(iter(start_board))[0]
# # assert start_board.update(mutated_board).board == tuple(map(bytes, (
# # (12, 6, 2, 4, 10, 2, 6, 12),
# # (8, 8, 8, 8, 8, 8, 8, 0),
# # (0, 0, 0, 0, 0, 0, 0, 8),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (0, 0, 0, 0, 0, 0, 0, 0),
# # (9, 9, 9, 9, 9, 9, 9, 9),
# # (13, 7, 3, 5, 11, 3, 7, 13))))
#
#
# def test_board_lookahead_player_is_constant(start_board):
# states = next(start_board.lookahead_boards(3))
# assert states[0] == tuple(map(bytes, (
# (12, 6, 2, 10, 4, 2, 6, 12),
# (8, 8, 8, 8, 8, 8, 8, 8),
# (0, 0, 0, 0, 0, 0, 0, 0),
# (0, 0, 0, 0, 0, 0, 0, 0),
# (0, 0, 0, 0, 0, 0, 0, 0),
# (9, 0, 0, 0, 0, 0, 0, 0),
# (0, 9, 9, 9, 9, 9, 9, 9),
# (13, 7, 3, 11, 5, 3, 7, 13))))
# assert states[1] == tuple(map(bytes, (
# (12, 6, 2, 10, 4, 2, 6, 12),
# (8, 8, 8, 8, 8, 8, 8, 0),
# (0, 0, 0, 0, 0, 0, 0, 8),
# (0, 0, 0, 0, 0, 0, 0, 0),
# (0, 0, 0, 0, 0, 0, 0, 0),
# (9, 0, 0, 0, 0, 0, 0, 0),
# (0, 9, 9, 9, 9, 9, 9, 9),
# (13, 7, 3, 11, 5, 3, 7, 13))))
# assert states[2] == tuple(map(bytes, (
# (12, 6, 2, 10, 4, 2, 6, 12),
# (8, 8, 8, 8, 8, 8, 8, 0),
# (0, 0, 0, 0, 0, 0, 0, 8),
# (0, 0, 0, 0, 0, 0, 0, 0),
# (9, 0, 0, 0, 0, 0, 0, 0),
# (0, 0, 0, 0, 0, 0, 0, 0),
# (0, 9, 9, 9, 9, 9, 9, 9),
# (13, 7, 3, 11, 5, 3, 7, 13))))
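The commented-out tests above all build board states the same way: an 8x8 grid written as a tuple of rows, where each row of small integers is converted to an immutable `bytes` object. A minimal sketch of that representation, using the start-position values that appear throughout the tests:

```python
# Start-position encoding used in the commented tests above:
# each row becomes bytes, the board a tuple of eight rows.
start_board = tuple(map(bytes, (
    (12, 6, 2, 10, 4, 2, 6, 12),
    (8, 8, 8, 8, 8, 8, 8, 8),
    (0, 0, 0, 0, 0, 0, 0, 0),
    (0, 0, 0, 0, 0, 0, 0, 0),
    (0, 0, 0, 0, 0, 0, 0, 0),
    (0, 0, 0, 0, 0, 0, 0, 0),
    (9, 9, 9, 9, 9, 9, 9, 9),
    (13, 7, 3, 11, 5, 3, 7, 13))))
```

Being a tuple of `bytes`, a board state is hashable and cheap to compare, which suits the lookahead tests above.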
# File: app_routes/threads/__init__.py (repo kskarbinski/threads-api, MIT license)
from .threads_route import ThreadsRoute
# File: coexistence_mechanisms.py (repo netgroup/Dreamer-Mininet-Extensions, Apache-2.0 license)
#!/usr/bin/python
import sys

from mininet.log import error


class CoexistenceMechanism(object):

    prio_std_rules = 300
    prio_special_rules = 301
    prio_max = 32768

    def __init__(self, eths, vis, name):
        self.eths = eths
        self.vis = vis
        self.name = name
class CoexA(CoexistenceMechanism):

    tableIP = 1
    tableSBP = 0

    def __init__(self, vlan_id, eths, vis, name):
        if vlan_id > 4095:
            error("ERROR VLAN ID Not Valid\n")
            sys.exit(-2)
        self.vlanIP = vlan_id
        CoexistenceMechanism.__init__(self, eths, vis, name)

    def getOVSRules(self):
        rules = []
        rules.append('ovs-ofctl add-flow %s "table=0,hard_timeout=0,priority=%s,dl_vlan=%s,actions=resubmit(,%s)"' % (self.name, self.prio_std_rules, self.vlanIP, self.tableIP))
        for eth, vi in zip(self.eths, self.vis):
            rules.append('ovs-ofctl add-flow %s "table=%s,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' % (self.name, self.tableIP, self.prio_std_rules, eth, vi))
            rules.append('ovs-ofctl add-flow %s "table=%s,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' % (self.name, self.tableIP, self.prio_std_rules, vi, eth))
        rules.append('ovs-ofctl add-flow %s "table=%s,hard_timeout=0,priority=%s,dl_type=0x88cc,action=controller"' % (self.name, self.tableIP, self.prio_special_rules))
        rules.append('ovs-ofctl add-flow %s "table=%s,hard_timeout=0,priority=%s,dl_type=0x8942,action=controller"' % (self.name, self.tableIP, self.prio_special_rules))
        return rules

    def getIPCommands(self):
        commands = []
        for eth, vi in zip(self.eths, self.vis):
            commands.append('ifconfig %s 0' % eth)
            commands.append('ip link set %s up' % vi)
            commands.append('vconfig add %s %s' % (vi, self.vlanIP))
        return commands

    def getQuaggaInterfaces(self):
        interfaces = []
        for vi in self.vis:
            interfaces.append("%s.%s" % (vi, self.vlanIP))
        return interfaces
class CoexA_13(CoexA):
def __init__(self, vlan_id, eths, vis, name):
CoexA.__init__(self, vlan_id, eths, vis, name)
def getOVSRules(self):
rules = []
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=0,hard_timeout=0,priority=%s,dl_vlan=%s,actions=goto_table:%s"' %(self.name,
self.prio_std_rules, self.vlanIP, self.tableIP))
for eth, vi in zip(self.eths, self.vis):
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=%s,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' %(self.name,
self.tableIP, self.prio_std_rules, eth, vi))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=%s,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' %(self.name,
self.tableIP, self.prio_std_rules, vi, eth))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=%s,hard_timeout=0,priority=%s,dl_type=0x88cc,action=controller"' %(self.name,
self.tableIP, self.prio_special_rules))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=%s,hard_timeout=0,priority=%s,dl_type=0x8942,action=controller"' %(self.name,
self.tableIP, self.prio_special_rules))
return rules
class CoexB(CoexistenceMechanism):
tableIP = 1
tableSBP = 0
def __init__(self, eths, vis, name):
CoexistenceMechanism.__init__(self, eths, vis, name)
def getOVSRules(self):
rules = []
rules.append('ovs-ofctl add-flow %s "table=0,hard_timeout=0,priority=%s,dl_vlan=%s,actions=resubmit(,%s)"' %(self.name, self.prio_std_rules,
"0xffff", self.tableIP))
for eth, vi in zip(self.eths, self.vis):
rules.append('ovs-ofctl add-flow %s "table=%s,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' %(self.name, self.tableIP,
self.prio_std_rules, eth, vi))
rules.append('ovs-ofctl add-flow %s "table=%s,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' %(self.name, self.tableIP,
self.prio_std_rules, vi, eth))
rules.append('ovs-ofctl add-flow %s "table=%s,hard_timeout=0,priority=%s,dl_type=0x88cc,action=controller"' %(self.name, self.tableIP,
self.prio_special_rules))
rules.append('ovs-ofctl add-flow %s "table=%s,hard_timeout=0,priority=%s,dl_type=0x8942,action=controller"' %(self.name, self.tableIP,
self.prio_special_rules))
return rules
def getIPCommands(self):
commands = []
for eth, vi in zip(self.eths, self.vis):
commands.append('ifconfig %s 0' % eth)
return commands
def getQuaggaInterfaces(self):
interfaces = self.vis
return interfaces
class CoexB_13(CoexB):
def __init__(self, eths, vis, name):
CoexB.__init__(self, eths, vis, name)
def getOVSRules(self):
rules = []
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=0,hard_timeout=0,priority=%s,dl_vlan=%s,actions=goto_table:%s"' %(self.name,
self.prio_std_rules, "0xffff", self.tableIP))
for eth, vi in zip(self.eths, self.vis):
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=%s,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' %(self.name,
self.tableIP, self.prio_std_rules, eth, vi))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=%s,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' %(self.name,
self.tableIP, self.prio_std_rules, vi, eth))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=%s,hard_timeout=0,priority=%s,dl_type=0x88cc,action=controller"' %(self.name,
self.tableIP, self.prio_special_rules))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=%s,hard_timeout=0,priority=%s,dl_type=0x8942,action=controller"' %(self.name,
self.tableIP, self.prio_special_rules))
return rules
class CoexH(CoexistenceMechanism):
    tableIP = 0
tableSBP = 1
MPLS_UNICAST = "0x8847"
MPLS_MULTICAST = "0x8848"
def __init__(self, eths, vis, name):
CoexistenceMechanism.__init__(self, eths, vis, name)
def getOVSRules(self):
rules = []
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=0,hard_timeout=0,priority=%s,dl_type=%s,actions=goto_table:%s"' %(self.name,
self.prio_max, self.MPLS_UNICAST, self.tableSBP))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=0,hard_timeout=0,priority=%s,dl_type=%s,actions=goto_table:%s"' %(self.name,
self.prio_max, self.MPLS_MULTICAST, self.tableSBP))
for eth, vi in zip(self.eths, self.vis):
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=0,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' %(self.name,
self.prio_std_rules, eth, vi))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=0,hard_timeout=0,priority=%s,in_port=%s,action=output:%s"' %(self.name,
self.prio_std_rules, vi, eth))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=0,hard_timeout=0,priority=%s,dl_type=0x88cc,action=controller"' %(self.name,
self.prio_special_rules))
rules.append('ovs-ofctl -O OpenFlow13 add-flow %s "table=0,hard_timeout=0,priority=%s,dl_type=0x8942,action=controller"' %(self.name,
self.prio_special_rules))
return rules
def getIPCommands(self):
commands = []
for eth, vi in zip(self.eths, self.vis):
commands.append('ifconfig %s 0' % eth)
return commands
def getQuaggaInterfaces(self):
interfaces = self.vis
return interfaces
class CoexFactory(object):
coex_types=["COEXA", "COEXB", "COEXH"]
def getCoex(self, coex_type, coex_data, eths, vis, name, OF_V):
if coex_type not in self.coex_types:
            error("ERROR %s not supported\n" % coex_type)
sys.exit(-2)
if coex_type == "COEXA":
if OF_V == None:
return CoexA(coex_data, eths, vis, name)
elif OF_V == "OpenFlow13":
return CoexA_13(coex_data, eths, vis, name)
if coex_type == "COEXB":
if OF_V == None:
return CoexB(eths, vis, name)
elif OF_V == "OpenFlow13":
return CoexB_13(eths, vis, name)
if coex_type == "COEXH":
if OF_V == None:
                error("ERROR %s is not supported by OpenFlow 1.0\n" % coex_type)
sys.exit(-2)
elif OF_V == "OpenFlow13":
return CoexH(eths, vis, name)
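The coexistence classes above only format `ovs-ofctl` command strings; nothing is executed in this module. A minimal self-contained sketch of CoexA's first rule template, with made-up values standing in for `self.name`, `self.vlanIP` and the class attributes:

```python
# Hypothetical values: bridge name "br0", VLAN 100, as in CoexA's first rule.
name, prio_std_rules, vlan_ip, table_ip = "br0", 300, 100, 1

rule = ('ovs-ofctl add-flow %s "table=0,hard_timeout=0,priority=%s,'
        'dl_vlan=%s,actions=resubmit(,%s)"'
        % (name, prio_std_rules, vlan_ip, table_ip))
# -> ovs-ofctl add-flow br0 "table=0,hard_timeout=0,priority=300,dl_vlan=100,actions=resubmit(,1)"
print(rule)
```

Note that the only difference in the `_13` subclasses is `-O OpenFlow13` plus `goto_table:` instead of the OVS-specific `resubmit(,N)`.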
| 31.837398 | 143 | 0.700843 | 1,243 | 7,832 | 4.265487 | 0.082864 | 0.040739 | 0.068653 | 0.093172 | 0.866088 | 0.843455 | 0.811392 | 0.808751 | 0.771407 | 0.771407 | 0 | 0.024198 | 0.139939 | 7,832 | 245 | 144 | 31.967347 | 0.762916 | 0.002043 | 0 | 0.685535 | 0 | 0.163522 | 0.359053 | 0.225208 | 0 | 0 | 0.010749 | 0 | 0 | 1 | 0.113208 | false | 0 | 0.012579 | 0 | 0.345912 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b09184cee2c7d3dacb5d730cf6c814be38c1963f | 12,103 | py | Python | core/Functions/cuda/lib.py | davidliyutong/Flint | 4e2552dac8d781c21e8998ad68bbf1b986b09258 | [
"MIT"
] | null | null | null | core/Functions/cuda/lib.py | davidliyutong/Flint | 4e2552dac8d781c21e8998ad68bbf1b986b09258 | [
"MIT"
] | 1 | 2020-07-08T02:57:50.000Z | 2020-07-08T02:57:50.000Z | core/Functions/cuda/lib.py | davidliyutong/Flint | 4e2552dac8d781c21e8998ad68bbf1b986b09258 | [
"MIT"
] | null | null | null | import cupy as cp
import math
import numpy as np
def auto_detect(shape: tuple, matmul=False):
assert isinstance(shape, tuple)
grid_dim = ()
block_dim = ()
if len(shape) == 4:
raise NotImplementedError
elif len(shape) == 3:
# if shape[1] < 16 and shape[2] < 16:
# block_dim = (1, shape[1], shape[2])
# elif shape[1] < 16 and shape[2] >= 16:
# block_dim = (1, shape[1], math.floor(256 / shape[1]))
# elif shape[2] < 16 and shape[1] >= 16:
# block_dim = (1, math.floor(256 / shape[2]), shape[2])
# else:
# block_dim = (1, 16, 16)
block_dim = (1, 16, 16)
grid_dim = (
math.ceil(shape[0] / block_dim[0]), math.ceil(shape[1] / block_dim[1]), math.ceil(shape[2] / block_dim[2]))
elif len(shape) == 2:
if matmul:
block_dim = (32, 32)
            grid_edge = max(math.ceil(shape[0] / block_dim[0]), math.ceil(shape[1] / block_dim[1]))
grid_dim = (grid_edge, grid_edge)
return grid_dim, block_dim
if shape[0] < 32 and shape[1] < 32:
block_dim = (shape[0], shape[1])
grid_dim = (1, 1)
elif shape[1] < 32:
block_dim = (math.floor(1024 / shape[1]), shape[1])
grid_dim = math.ceil(shape[0] / block_dim[0]), math.ceil(shape[1] / block_dim[1])
elif shape[0] < 32:
block_dim = (shape[0], math.floor(1024 / shape[0]))
grid_dim = math.ceil(shape[0] / block_dim[0]), math.ceil(shape[1] / block_dim[1])
else:
block_dim = (32, 32)
grid_dim = math.ceil(shape[0] / block_dim[0]), math.ceil(shape[1] / block_dim[1])
elif len(shape) == 1:
if matmul:
block_dim = (32, 32)
grid_dim = (math.ceil(shape[0] / 32), math.ceil(shape[0] / 32))
return grid_dim, block_dim
if shape[0] < 1024:
block_dim = (shape[0], 1)
grid_dim = (1, 1)
else:
block_dim = (1024, 1)
grid_dim = (1, 1)
else:
raise ValueError
return grid_dim, block_dim
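For the common 2D case where both dimensions are at least 32, `auto_detect` tiles the array with 32x32 blocks and just enough blocks to cover it. The covering property can be checked in isolation (a sketch of that branch only, assuming both dimensions >= 32):

```python
import math

def grid_block_2d(shape, block=32):
    # Mirrors auto_detect's large-2D branch: fixed block, ceil-divided grid.
    block_dim = (block, block)
    grid_dim = (math.ceil(shape[0] / block), math.ceil(shape[1] / block))
    return grid_dim, block_dim

grid, block = grid_block_2d((100, 70))
# 4 blocks of 32 cover the 100 rows, 3 blocks of 32 cover the 70 columns
```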
_cu_im2col_kernel = cp.RawKernel(r'''
extern "C" __global__
void cu_im2col(float * A, float * B, int output_span, int output_sz, int kernel_sz, int in_c,
int limit_x, int limit_y,
int dim_k, int dim_n, int dim_m, int dim_c) {
int y = blockDim.y * blockIdx.y + threadIdx.y;
int x = blockDim.x * blockIdx.x + threadIdx.x;
if (x >= limit_x or y >= limit_y) {
return;
}
int batch_idx = __float2int_rd(fdividef(x,output_span));
int in_c_idx = y % in_c;
int kernel_idx = __float2int_rd(fdividef(y,in_c));
int kernel_idx_y = kernel_idx % kernel_sz;
int kernel_idx_x = __float2int_rd(fdividef(kernel_idx,kernel_sz));
int output_idx = x % (output_span);
int output_idx_y = output_idx % output_sz;
int output_idx_x = __float2int_rd(fdividef(output_idx,output_sz));
B[x * limit_y + y] = A[batch_idx * dim_n * dim_m * dim_c + (output_idx_x + kernel_idx_x) * dim_m * dim_c + (output_idx_y + kernel_idx_y) * dim_c + in_c_idx];
}
''', 'cu_im2col')
def cu_im2col(A, kernel_shape, from_host=False, to_host=False):
"""
    :param A: input batch, shape=(batch_sz, h, w, in_c)
    :return: im2col matrix, shape=(batch_sz*output_sz**2, kernel_sz**2*in_c)
"""
if from_host:
A = cp.array(A)
batch_sz, h, w, in_c = A.shape
kernel_sz = kernel_shape[0]
output_sz = (h - kernel_sz) + 1
res = cp.zeros(shape=(batch_sz * output_sz * output_sz, kernel_sz * kernel_sz * in_c),
dtype=cp.float32)
output_span = output_sz * output_sz
grid_dim, block_dim = auto_detect(res.shape)
_cu_im2col_kernel(grid_dim,
block_dim,
(A, res, output_span, output_sz, kernel_sz, in_c,
res.shape[0], res.shape[1],
A.shape[0], A.shape[1], A.shape[2], A.shape[3]))
if to_host:
return cp.asnumpy(res)
else:
return res
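The kernel's index arithmetic can be cross-checked against a plain-Python reference that produces the same `(batch*out*out, k*k*in_c)` layout (stride 1, no padding, square input, as assumed by the wrapper's `output_sz` computation):

```python
def im2col_ref(A, k):
    # A: nested lists of shape (batch, h, h, c); one row per (batch, out_x, out_y),
    # one column per (k_x, k_y, channel), matching the CUDA kernel's flattening.
    b, h, c = len(A), len(A[0]), len(A[0][0][0])
    out = h - k + 1
    cols = []
    for x in range(b * out * out):
        bi, rem = divmod(x, out * out)
        ox, oy = divmod(rem, out)
        row = []
        for y in range(k * k * c):
            ki, ci = divmod(y, c)
            kx, ky = divmod(ki, k)
            row.append(A[bi][ox + kx][oy + ky][ci])
        cols.append(row)
    return cols

# 1x3x3x1 input, 2x2 kernel -> 4 patches of 4 values each
A = [[[[0], [1], [2]], [[3], [4], [5]], [[6], [7], [8]]]]
```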
_cu_max_pool_ds_kernel = cp.RawKernel(r'''
extern "C" __global__
void cu_max_pool_ds(float * A, float * B, int limit_x, int limit_y, int limit_z, int output_sz, int kernel_size,
int dim_k, int dim_n, int dim_m, int dim_c) {
int x = blockDim.x * blockIdx.x + threadIdx.x;
int y = blockDim.y * blockIdx.y + threadIdx.y;
int z = blockDim.z * blockIdx.z + threadIdx.z;
if (x >= limit_x or y >= limit_y or z >= limit_z) {
return;
}
int batch_idx = __float2int_rd(fdividef(x,output_sz));
int output_idx_x = x % output_sz;
int output_idx_y = y;
int output_ch = z;
    float max_v = -INFINITY;   // "- 1 / 0" was integer division by zero (undefined)
float curr_v = 0.;
for (int i = 0; i < kernel_size; i++) {
for (int j = 0; j < kernel_size; j++) {
curr_v = A[batch_idx * dim_n * dim_m * dim_c + (output_idx_x * kernel_size + i) * dim_m * dim_c + (output_idx_y * kernel_size + j) * dim_c + output_ch];
// curr_v = A[x * limit_y * limit_z + y * limit_z + z];
if (curr_v > max_v) {
max_v = curr_v;
}
}
}
B[x * limit_y * limit_z + y * limit_z + z] = max_v;
}
''', 'cu_max_pool_ds')
def cu_max_pool_ds(A, output_shape, kernel_sz, from_host=False, to_host=False):
if from_host:
A = cp.array(A)
batch_sz = A.shape[0]
output_sz = output_shape[1]
# A.reshape(A.shape[0] * A.shape[1], A.shape[2], A.shape[3])
res = cp.zeros(shape=(output_shape[0] * batch_sz, output_shape[1], output_shape[2]), dtype=cp.float32)
grid_dim, block_dim = auto_detect(res.shape)
_cu_max_pool_ds_kernel(grid_dim,
block_dim,
(A, res, res.shape[0], res.shape[1], res.shape[2], output_sz, kernel_sz,
A.shape[0], A.shape[1], A.shape[2], A.shape[3]))
res = cp.reshape(res, newshape=(batch_sz, *output_shape))
if to_host:
return cp.asnumpy(res)
else:
return res
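Behaviourally, the downsampling kernel takes the maximum over non-overlapping `kernel_sz` x `kernel_sz` windows per channel. For a single channel that reduces to the following reference (sketch; the CUDA version just parallelises this loop over output positions and channels):

```python
def max_pool_ref(A, k):
    # A: (h, w) nested lists for one channel; non-overlapping k x k windows.
    out = len(A) // k
    return [[max(A[i * k + di][j * k + dj]
                 for di in range(k) for dj in range(k))
             for j in range(out)]
            for i in range(out)]
```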
_cu_max_pool_us_kernel = cp.RawKernel(r'''
extern "C" __global__
void cu_max_pool_us(float * A, float * B, float * C, int limit_x, int limit_y, int limit_z, int output_sz, int kernel_size,
int dim_k, int dim_n, int dim_m, int dim_c) {
int x = blockDim.x * blockIdx.x + threadIdx.x;
int y = blockDim.y * blockIdx.y + threadIdx.y;
int z = blockDim.z * blockIdx.z + threadIdx.z;
if (x >= limit_x or y >= limit_y or z >= limit_z) {
return;
}
int batch_idx = __float2int_rd(fdividef(x,output_sz));
int output_idx_x = x % output_sz;
int output_idx_y = y;
int output_ch = z;
    float max_v = -INFINITY;   // "-1/0" was integer division by zero (undefined)
int max_idx = - 1;
int curr_idx = -1;
float curr_v = 0;
for (int i = 0; i < kernel_size; i++) {
for (int j = 0; j < kernel_size; j++) {
curr_idx = batch_idx * dim_n * dim_m * dim_c + (output_idx_x * kernel_size + i) * dim_m * dim_c + (output_idx_y * kernel_size + j) * dim_c + output_ch;
curr_v = A[curr_idx];
if (curr_v > max_v) {
max_v = curr_v;
max_idx = curr_idx;
}
}
}
C[max_idx] = B[x * limit_y * limit_z + y * limit_z + z];
}
''', 'cu_max_pool_us')
def cu_max_pool_us(A, kernel_sz, last_input, from_host=False, to_host=False):
if from_host:
A = cp.array(A)
output_sz = A.shape[1]
A = A.reshape(A.shape[0] * A.shape[1], A.shape[2], A.shape[3])
res = cp.zeros(shape=last_input.shape, dtype=cp.float32)
grid_dim, block_dim = auto_detect(A.shape)
_cu_max_pool_us_kernel(grid_dim,
block_dim,
(last_input, A, res, A.shape[0], A.shape[1], A.shape[2], output_sz, kernel_sz,
last_input.shape[0], last_input.shape[1], last_input.shape[2], last_input.shape[3]))
if to_host:
return cp.asnumpy(res)
else:
return res
_cu_avg_pool_ds_kernel = cp.RawKernel(r'''
extern "C" __global__
void cu_avg_pool_ds(float * A, float * B, int limit_x, int limit_y, int limit_z, int output_sz, int kernel_size, int span,
int dim_k, int dim_n, int dim_m, int dim_c) {
int x = blockDim.x * blockIdx.x + threadIdx.x;
int y = blockDim.y * blockIdx.y + threadIdx.y;
int z = blockDim.z * blockIdx.z + threadIdx.z;
if (x >= limit_x or y >= limit_y or z >= limit_z) {
return;
}
int batch_idx = __float2int_rd(fdividef(x,output_sz));
int output_idx_x = x % output_sz;
int output_idx_y = y;
int output_ch = z;
float sum_v = 0;
float curr_v = 0.;
for (int i = 0; i < kernel_size; i++) {
for (int j = 0; j < kernel_size; j++) {
sum_v += A[batch_idx * dim_n * dim_m * dim_c + (output_idx_x * kernel_size + i) * dim_m * dim_c + (output_idx_y * kernel_size + j) * dim_c + output_ch];
}
}
B[x * limit_y * limit_z + y * limit_z + z] = sum_v / span;
}
''', 'cu_avg_pool_ds')
def cu_avg_pool_ds(A, output_shape, kernel_sz, from_host=False, to_host=False):
if from_host:
A = cp.array(A)
batch_sz = A.shape[0]
output_sz = output_shape[1]
span = kernel_sz * kernel_sz
res = cp.zeros(shape=(output_shape[0] * batch_sz, output_shape[1], output_shape[2]), dtype=cp.float32)
grid_dim, block_dim = auto_detect(res.shape)
_cu_avg_pool_ds_kernel(grid_dim,
block_dim,
(A, res, res.shape[0], res.shape[1], res.shape[2], output_sz, kernel_sz, span,
A.shape[0], A.shape[1], A.shape[2], A.shape[3]))
res = cp.reshape(res, newshape=(batch_sz, *output_shape))
if to_host:
return cp.asnumpy(res)
else:
return res
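The average-pooling kernel sums each window and divides by `span = kernel_sz * kernel_sz`; a single-channel reference of that sum-then-divide behaviour:

```python
def avg_pool_ref(A, k):
    # A: (h, w) nested lists for one channel; mirrors the kernel's sum_v / span.
    out, span = len(A) // k, k * k
    return [[sum(A[i * k + di][j * k + dj]
                 for di in range(k) for dj in range(k)) / span
             for j in range(out)]
            for i in range(out)]
```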
_cu_avg_pool_us_kernel = cp.RawKernel(r'''
extern "C" __global__
void cu_avg_pool_us(float * A, float * B, float * C, int limit_x, int limit_y, int limit_z, int output_sz, int kernel_size, int span,
int dim_k, int dim_n, int dim_m, int dim_c) {
int x = blockDim.x * blockIdx.x + threadIdx.x;
int y = blockDim.y * blockIdx.y + threadIdx.y;
int z = blockDim.z * blockIdx.z + threadIdx.z;
if (x >= limit_x or y >= limit_y or z >= limit_z) {
return;
}
int batch_idx = __float2int_rd(fdividef(x,output_sz));
int output_idx_x = x % output_sz;
int output_idx_y = y;
int output_ch = z;
int raw_idx = x * limit_y * limit_z + y * limit_z + z;
int curr_idx = -1;
float raw_v;
for (int i = 0; i < kernel_size; i++) {
for (int j = 0; j < kernel_size; j++) {
curr_idx = batch_idx * dim_n * dim_m * dim_c + (output_idx_x * kernel_size + i) * dim_m * dim_c + (output_idx_y * kernel_size + j) * dim_c + output_ch;
C[curr_idx] = B[raw_idx] / span;
}
}
}
''', 'cu_avg_pool_us')
def cu_avg_pool_us(A, kernel_sz, last_input, from_host=False, to_host=False):
if from_host:
A = cp.array(A)
output_sz = A.shape[1]
span = kernel_sz * kernel_sz
A = A.reshape(A.shape[0] * A.shape[1], A.shape[2], A.shape[3])
res = cp.zeros(shape=last_input.shape, dtype=cp.float32)
grid_dim, block_dim = auto_detect(A.shape)
_cu_avg_pool_us_kernel(grid_dim,
block_dim,
(last_input, A, res, A.shape[0], A.shape[1], A.shape[2], output_sz, kernel_sz, span,
last_input.shape[0], last_input.shape[1], last_input.shape[2], last_input.shape[3]))
if to_host:
return cp.asnumpy(res)
else:
return res
request-management-api/tests/restapi/test_foicomment_api.py | bcgov/foi-flow @ 7f9897b3 | Apache-2.0 | 10,232 bytes

import json
import uuid
import os
import requests
import ast
TEST_USER_PAYLOAD = {
'client_id': 'forms-flow-web',
'grant_type': 'password',
'username' : os.getenv('TEST_INTAKE_USERID'),
'password': os.getenv('TEST_INTAKE_PASSWORD')
}
def factory_auth_header(app, client):
url = '{0}/auth/realms/{1}/protocol/openid-connect/token'.format(os.getenv('KEYCLOAK_ADMIN_HOST'),os.getenv('KEYCLOAK_ADMIN_REALM'))
x = requests.post(url, TEST_USER_PAYLOAD, verify=True).content.decode('utf-8')
return {'Authorization': 'Bearer ' + str(ast.literal_eval(x)['access_token'])}
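The helper above decodes Keycloak's token response with `ast.literal_eval`; since the response body is JSON, `json.loads` is the more direct choice. A self-contained sketch of building the Bearer header from a sample response body (the field names follow the OAuth token response used above):

```python
import json

def bearer_header(token_response_text):
    # Parse the OAuth token response and build the Authorization header.
    token = json.loads(token_response_text)["access_token"]
    return {"Authorization": "Bearer " + token}

hdr = bearer_header('{"access_token": "abc123", "token_type": "bearer"}')
```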
def test_ping(app, client):
response = client.get('/api/healthz')
assert response.status_code == 200
with open('tests/samplerequestjson/rawrequest.json') as x:
rawrequestjson = json.load(x)
def test_foirawcomment(app, client):
rawresponse = client.post('/api/foirawrequests',data=json.dumps(rawrequestjson), headers=factory_auth_header(app, client), content_type='application/json')
jsondata = json.loads(rawresponse.data)
commentjson = {
"requestid":str(jsondata["id"]),
"comment": "test comment",
"isactive": True
}
createcommentresponse = client.post('/api/foicomment/rawrequest', data=json.dumps(commentjson), headers=factory_auth_header(app, client), content_type='application/json')
getcommentresponse = client.get('/api/foicomment/rawrequest/'+str(jsondata["id"]), headers=factory_auth_header(app, client), content_type='application/json')
assert rawresponse.status_code == 200 and createcommentresponse.status_code == 200 and getcommentresponse.status_code == 200
with open('tests/samplerequestjson/rawrequest.json') as x:
rawrequestjson = json.load(x)
def test_foirawcommentdisable(app, client):
rawresponse = client.post('/api/foirawrequests',data=json.dumps(rawrequestjson), headers=factory_auth_header(app, client), content_type='application/json')
jsondata = json.loads(rawresponse.data)
commentjson = {
"requestid":str(jsondata["id"]),
"comment": "test comment",
"isactive": True
}
createcommentresponse = client.post('/api/foicomment/rawrequest', data=json.dumps(commentjson), headers=factory_auth_header(app, client), content_type='application/json')
comment = json.loads(createcommentresponse.data)
disablecommentresponse = client.put('/api/foicomment/rawrequest/'+str(comment["id"])+'/disable',data=json.dumps(commentjson), headers=factory_auth_header(app, client), content_type='application/json')
assert rawresponse.status_code == 200 and createcommentresponse.status_code == 200 and disablecommentresponse.status_code == 200
with open('tests/samplerequestjson/rawrequest.json') as x:
rawrequestjson = json.load(x)
def test_foirawcommentupdate(app, client):
rawresponse = client.post('/api/foirawrequests',data=json.dumps(rawrequestjson), headers=factory_auth_header(app, client), content_type='application/json')
jsondata = json.loads(rawresponse.data)
commentjson = {
"requestid":str(jsondata["id"]),
"comment": "test comment",
"isactive": True
}
createcommentresponse = client.post('/api/foicomment/rawrequest', data=json.dumps(commentjson), headers=factory_auth_header(app, client), content_type='application/json')
comment = json.loads(createcommentresponse.data)
updatecommentjson = {
"comment": "test comment - updated",
}
updatecommentresponse = client.put('/api/foicomment/rawrequest/'+str(comment["id"]),data=json.dumps(updatecommentjson), headers=factory_auth_header(app, client), content_type='application/json')
assert rawresponse.status_code == 200 and createcommentresponse.status_code == 200 and updatecommentresponse.status_code == 200
with open('tests/samplerequestjson/rawrequest.json') as x, open('tests/samplerequestjson/foirequest-general.json') as y:
generalrequestjson = json.load(y)
rawrequestjson = json.load(x)
def test_foiministrycomment(app, client):
rawresponse = client.post('/api/foirawrequests',data=json.dumps(rawrequestjson), headers=factory_auth_header(app, client), content_type='application/json')
jsondata = json.loads(rawresponse.data)
foirequest = generalrequestjson
foirequest["id"] = str(jsondata["id"])
foirequest["requeststatusid"] = 1
foiresponse = client.post('/api/foirequests',data=json.dumps(foirequest), headers=factory_auth_header(app, client), content_type='application/json')
foijsondata = json.loads(foiresponse.data)
commentjson = {
"ministryrequestid":str(foijsondata["id"]),
"comment": "test comment",
"isactive": True
}
createcommentresponse = client.post('/api/foicomment/ministryrequest', data=json.dumps(commentjson), headers=factory_auth_header(app, client), content_type='application/json')
getcommentresponse = client.get('/api/foicomment/ministryrequest/'+str(foijsondata["id"]), headers=factory_auth_header(app, client), content_type='application/json')
assert rawresponse.status_code == 200 and foiresponse.status_code == 200 and createcommentresponse.status_code == 200 and getcommentresponse.status_code == 200
with open('tests/samplerequestjson/rawrequest.json') as x, open('tests/samplerequestjson/foirequest-general.json') as y:
generalrequestjson = json.load(y)
rawrequestjson = json.load(x)
def test_foiministrycommentdisable(app, client):
rawresponse = client.post('/api/foirawrequests',data=json.dumps(rawrequestjson), headers=factory_auth_header(app, client), content_type='application/json')
jsondata = json.loads(rawresponse.data)
foirequest = generalrequestjson
foirequest["id"] = str(jsondata["id"])
foirequest["requeststatusid"] = 1
foiresponse = client.post('/api/foirequests',data=json.dumps(foirequest), headers=factory_auth_header(app, client), content_type='application/json')
foijsondata = json.loads(foiresponse.data)
commentjson = {
"ministryrequestid":str(foijsondata["id"]),
"comment": "test comment",
"isactive": True
}
createcommentresponse = client.post('/api/foicomment/ministryrequest', data=json.dumps(commentjson), headers=factory_auth_header(app, client), content_type='application/json')
comment = json.loads(createcommentresponse.data)
childcommentjson = {
"ministryrequestid":str(foijsondata["id"]),
"comment": "test comment",
"isactive": True,
"parentcommentid": str(comment["id"])
}
createcommentresponse2 = client.post('/api/foicomment/ministryrequest', data=json.dumps(childcommentjson), headers=factory_auth_header(app, client), content_type='application/json')
disablecommentresponse = client.put('/api/foicomment/rawrequest/'+str(comment["id"])+'/disable',data=json.dumps(commentjson), headers=factory_auth_header(app, client), content_type='application/json')
assert rawresponse.status_code == 200 and foiresponse.status_code == 200 and createcommentresponse.status_code == 200 and createcommentresponse2.status_code == 200 and disablecommentresponse.status_code == 200
with open('tests/samplerequestjson/rawrequest.json') as x, open('tests/samplerequestjson/foirequest-general.json') as y:
generalrequestjson = json.load(y)
rawrequestjson = json.load(x)
def test_foiministrycommentupdate(app, client):
rawresponse = client.post('/api/foirawrequests',data=json.dumps(rawrequestjson), headers=factory_auth_header(app, client), content_type='application/json')
jsondata = json.loads(rawresponse.data)
foirequest = generalrequestjson
foirequest["id"] = str(jsondata["id"])
foirequest["requeststatusid"] = 1
foiresponse = client.post('/api/foirequests',data=json.dumps(foirequest), headers=factory_auth_header(app, client), content_type='application/json')
foijsondata = json.loads(foiresponse.data)
commentjson = {
"ministryrequestid":str(foijsondata["id"]),
"comment": "test comment",
"isactive": True
}
createcommentresponse = client.post('/api/foicomment/ministryrequest', data=json.dumps(commentjson), headers=factory_auth_header(app, client), content_type='application/json')
comment = json.loads(createcommentresponse.data)
updatecommentjson = {
"comment": "test comment - updated",
}
updatecommentresponse = client.put('/api/foicomment/ministryrequest/'+str(comment["id"]),data=json.dumps(updatecommentjson), headers=factory_auth_header(app, client), content_type='application/json')
assert rawresponse.status_code == 200 and foiresponse.status_code == 200 and createcommentresponse.status_code == 200 and updatecommentresponse.status_code == 200
with open('tests/samplerequestjson/rawrequest.json') as x, open('tests/samplerequestjson/foirequest-general.json') as y:
generalrequestjson = json.load(y)
rawrequestjson = json.load(x)
def test_foiministrycommentmigrate(app, client):
rawresponse = client.post('/api/foirawrequests',data=json.dumps(rawrequestjson), headers=factory_auth_header(app, client), content_type='application/json')
jsondata = json.loads(rawresponse.data)
foirequest = generalrequestjson
foirequest["id"] = str(jsondata["id"])
foirequest["requeststatusid"] = 1
commentjson = {
"requestid":str(jsondata["id"]),
"comment": "test comment",
"isactive": True
}
createcommentresponse1 = client.post('/api/foicomment/rawrequest', data=json.dumps(commentjson), headers=factory_auth_header(app, client), content_type='application/json')
comment = json.loads(createcommentresponse1.data)
childcommentjson = {
"requestid":str(jsondata["id"]),
"comment": "test comment",
"isactive": True,
"parentcommentid": str(comment["id"])
}
createcommentresponse2 = client.post('/api/foicomment/rawrequest', data=json.dumps(childcommentjson), headers=factory_auth_header(app, client), content_type='application/json')
foiresponse = client.post('/api/foirequests',data=json.dumps(foirequest), headers=factory_auth_header(app, client), content_type='application/json')
    assert rawresponse.status_code == 200 and createcommentresponse1.status_code == 200 and createcommentresponse2.status_code == 200 and foiresponse.status_code == 200
tests/test_random_codebreaker.py | deepshig/mastermind @ 0517343e | MIT | 1,180 bytes

import pytest
import sys
sys.path.append('../')
from random_codebreaker import RandomCodeBreaker # NOQA
def test_get_first_move():
"""
1. Test if the length of move is correct
2. Test if all the elements of the move belong to the given set
3. Test if all the elements appear only once within the move
"""
available_elements = [1, 2, 3, 4, 5, 6]
codebreaker_player = RandomCodeBreaker()
first_move = codebreaker_player.get_first_move()
assert len(first_move) == 4
for i in range(0, 4):
assert available_elements.count(first_move[i]) != 0
assert first_move.count(first_move[i]) == 1
def test_get_next_move():
"""
1. Test if the length of move is correct
2. Test if all the elements of the move belong to the given set
3. Test if all the elements appear only once within the move
"""
available_elements = [1, 2, 3, 4, 5, 6]
codebreaker_player = RandomCodeBreaker()
first_move = codebreaker_player.get_next_move([])
assert len(first_move) == 4
for i in range(0, 4):
assert available_elements.count(first_move[i]) != 0
assert first_move.count(first_move[i]) == 1
PSEF_SCRIPTS/cfg.py | snatch2013/XXS_PSEFABRIC @ a382d285 | Apache-2.0, MIT | 9,350 bytes

'''
'''
import re
import mult_cfg
import ptemplates
import acitemplates
cfg = {}
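`create_configs` below resolves each element's `command-list` entries to template functions by name via `eval`. A self-contained sketch of that dispatch pattern, with a hypothetical template function standing in for the real `ptemplates`/`acitemplates` entries:

```python
def rm_policy(eq_parameter, policy_alias):
    # Hypothetical template: real templates live in ptemplates/acitemplates.
    return 'delete security policies %s policy %s' % (eq_parameter, policy_alias)

el = {'command-list': ['rm_policy'], 'eq_parameter': 'fw1', 'policy-alias-1': 'P1'}
for command_element in el['command-list']:
    cfg_new = eval(command_element + "(el['eq_parameter'], el['policy-alias-1'])")
# cfg_new -> 'delete security policies fw1 policy P1'
```

A dict of callables (or `getattr` on the template module) would avoid `eval` and its injection risk, at the cost of registering each template explicitly.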
def create_configs (cmd_for_host, cmd_for_host_full):
########## Description #######
'''
Extracts data from dict cmd_for_host and transforms it into configuration lines using templates.
'''
############# BODY ############
for eq_name in cmd_for_host_full:
cfg[eq_name] = ''
'''
Sequence is important. This sequence we will have in our configuration files.
rm policy
add service
add service-set
add application
add application-set
add address
add address-set
rm address-set
rm address
rm service-set
rm service
rm application
rm application-set
add policy
'''
# rm policy
if (eq_name == 'panorama'):
if (cmd_for_host_full[eq_name]['rm']['policy']):
policy_list = cmd_for_host_full[eq_name]['rm']['policy']
for el in policy_list:
for command_element in el['command-list']:
cfg_new = eval(command_element + "(el['eq_parameter'], el['policy-alias-1'])")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
if (re.search('aci_', eq_name)):
if (cmd_for_host[eq_name]['rm']['policy']):
policy_list = cmd_for_host[eq_name]['rm']['policy']
j = 0
for el in policy_list:
if (cmd_for_host[eq_name]['ad']['policy']):
policy_ad_list = cmd_for_host[eq_name]['ad']['policy']
for el_ad in policy_ad_list:
if el['name'] in el_ad.itervalues():
status = 'change'
else:
status = 'delete'
else:
status = 'delete'
for command_element in el['command-list']:
cfg_new = eval(command_element + "(el['policy-alias-2'], el['source-address-sets'], el['destination-address-sets'], el['service-set-dicts'], 'permit', status)")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
j = j + 1
# add service
if (eq_name == 'panorama'):
if len(cmd_for_host_full[eq_name]['ad']['service']):
for el in cmd_for_host_full[eq_name]['ad']['service']:
for command_element in el['command-list']:
if 'ports' in el:
cfg_new = eval (command_element + "(el['eq_parameter'], el['service-alias-1'],el['prot'],el['ports']['destination-port-range'])")
else:
cfg_new = eval(command_element + "(el['eq_parameter'], el['service-alias-1'],el['prot'],{})")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# add service-set
if (eq_name == 'panorama'):
if len(cmd_for_host_full[eq_name]['ad']['service-set']):
for el in cmd_for_host_full[eq_name]['ad']['service-set']:
for command_element in el['command-list']:
cfg_new = eval (command_element + "(el['eq_parameter'], el['service-set-alias-1'],el['service-list'])")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# add application-set
if (eq_name == 'panorama'):
if len(cmd_for_host_full[eq_name]['ad']['application-set']):
for el in cmd_for_host_full[eq_name]['ad']['application-set']:
for command_element in el['command-list']:
cfg_new = eval (command_element + "(el['eq_parameter'], el['name'],el['application'])")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# add address
if (eq_name == 'panorama'):
if len(cmd_for_host_full[eq_name]['ad']['address']):
for el in cmd_for_host_full[eq_name]['ad']['address']:
for command_element in el['command-list']:
cfg_new = eval (command_element + "(el['eq_parameter'], el['address-alias-1'],el['ipv4-prefix'])")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# add address-set
if (eq_name == 'panorama'):
if len(cmd_for_host_full[eq_name]['ad']['address-set']):
for el in cmd_for_host_full[eq_name]['ad']['address-set']:
for command_element in el['command-list']:
cfg_new = eval (command_element + "(el['eq_parameter'], el['address-set-alias-1'],el['address-list'])")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# rm address-set
if (eq_name == 'panorama'):
if (cmd_for_host[eq_name]['rm']['address-set']):
for el in cmd_for_host_full[eq_name]['rm']['address-set']:
for command_element in el['command-list']:
cfg_new = eval (command_element + "(el['eq_parameter'], el['address-set-alias-1'],el['address-list'])")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# rm address
if (eq_name == 'panorama'):
if (cmd_for_host[eq_name]['rm']['address']):
for el in cmd_for_host[eq_name]['rm']['address']:
for command_element in el['command-list']:
cfg_new = eval (command_element + "(el['eq_parameter'], el['address-alias-1'],el['ipv4-prefix'])")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# rm service-set
if (eq_name == 'panorama'):
if (cmd_for_host[eq_name]['rm']['service-set']):
for el in cmd_for_host_full[eq_name]['rm']['service-set']:
for command_element in el['command-list']:
cfg_new = eval (command_element + "(el['eq_parameter'], el['service-set-alias-1'],el['service-list'])")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# rm service
if (eq_name == 'panorama'):
if (cmd_for_host[eq_name]['rm']['service']):
for el in cmd_for_host[eq_name]['rm']['service']:
for command_element in el['command-list']:
if 'ports' in el:
cfg_new = eval (command_element + "(el['eq_parameter'], el['service-alias-1'],el['prot'],el['ports']['destination-port-range'])")
else:
cfg_new = eval(command_element + "(el['eq_parameter'], el['service-alias-1'],el['prot'],{})")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# rm application-set
if (eq_name == 'panorama'):
if (cmd_for_host[eq_name]['rm']['application-set']):
for el in cmd_for_host_full[eq_name]['rm']['application-set']:
for command_element in el['command-list']:
cfg_new = eval (command_element + "(el['eq_parameter'], el['name'],el['application'])")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
# add policy
if (eq_name == 'panorama'):
if len(cmd_for_host_full[eq_name]['ad']['policy']):
policy_list = cmd_for_host_full[eq_name]['ad']['policy']
for el in policy_list:
for command_element in el['command-list']:
cfg_new = eval(command_element + "(el['eq_parameter'], el['policy-alias-1'],el['source-address-sets'],el['destination-address-sets'],el['application-sets'],el['service-sets'],el['src_dc'], el['src_area'], el['src_zone'], el['dst_dc'], el['dst_area'], el['dst_zone'], 'permit')")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
if (re.search('aci_', eq_name)):
if len(cmd_for_host_full[eq_name]['ad']['policy']):
policy_list = cmd_for_host_full[eq_name]['ad']['policy']
j = 0
for el in policy_list:
for command_element in el['command-list']:
cfg_new = eval(command_element + "(el['policy-alias-2'], el['source-address-sets'],el['destination-address-sets'],el['service-set-dicts'], 'permit')")
cfg[eq_name] = cfg[eq_name] + '\n' + cfg_new
cfg_new = ''
j = j + 1
return cfg
def remove_doubled_names(mylist):
########## Description #######
'''
    Removes duplicate policy configurations, keeping the first entry per name.
'''
############# BODY ############
newlist = []
namelist = []
for i in mylist:
if i['name'] not in namelist:
newlist.append(i)
namelist.append(i['name'])
return newlist
| 45.38835 | 302 | 0.497005 | 1,123 | 9,350 | 3.891362 | 0.087266 | 0.101602 | 0.077803 | 0.067277 | 0.817391 | 0.812357 | 0.805492 | 0.783753 | 0.769794 | 0.730206 | 0 | 0.003282 | 0.348235 | 9,350 | 205 | 303 | 45.609756 | 0.713817 | 0.037647 | 0 | 0.637681 | 0 | 0.065217 | 0.233302 | 0.105637 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014493 | false | 0 | 0.028986 | 0 | 0.057971 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
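The remove_doubled_names helper at the end of cfg.py above keeps only the first policy dict seen for each name. A minimal self-contained sketch of that dedup logic (the policy entries below are made up for illustration; a set replaces the original's list for O(1) membership checks):

```python
def remove_doubled_names(mylist):
    # Keep the first dict per unique 'name'; later duplicates are dropped.
    newlist, seen = [], set()
    for item in mylist:
        if item['name'] not in seen:
            newlist.append(item)
            seen.add(item['name'])
    return newlist

# Hypothetical policy entries (illustrative only)
policies = [
    {'name': 'web-allow', 'src': 'dmz'},
    {'name': 'db-allow', 'src': 'app'},
    {'name': 'web-allow', 'src': 'internal'},  # duplicate name, dropped
]
print([p['name'] for p in remove_doubled_names(policies)])  # -> ['web-allow', 'db-allow']
```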
7c01d6333da342ac2d2dbd9062eac33e999961ea | 32 | py | Python | test/integration/test_raspi.py | srjustice/kubos | ff33c17a7b5335d981cd0a49c65874f9f733338d | [
"Apache-2.0"
] | 60 | 2017-01-12T18:14:33.000Z | 2018-01-05T00:15:13.000Z | test/integration/test_raspi.py | srjustice/kubos | ff33c17a7b5335d981cd0a49c65874f9f733338d | [
"Apache-2.0"
] | 120 | 2016-10-26T20:18:32.000Z | 2018-01-05T23:27:36.000Z | test/integration/test_raspi.py | srjustice/kubos | ff33c17a7b5335d981cd0a49c65874f9f733338d | [
"Apache-2.0"
] | 20 | 2016-11-23T15:25:37.000Z | 2018-01-06T21:52:07.000Z | print "THIS IS A PYTHON SCRIPT"
| 16 | 31 | 0.75 | 6 | 32 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 32 | 1 | 32 | 32 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0.71875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
9fdead3d9a8f05f66c91176c6df4a37ef847bc46 | 198 | py | Python | processing/datadir.py | rokuingh/sysveg | c1803cc59200ef7d9bbc4a4030cd5ffdb55c8de8 | [
"MIT"
] | null | null | null | processing/datadir.py | rokuingh/sysveg | c1803cc59200ef7d9bbc4a4030cd5ffdb55c8de8 | [
"MIT"
] | null | null | null | processing/datadir.py | rokuingh/sysveg | c1803cc59200ef7d9bbc4a4030cd5ffdb55c8de8 | [
"MIT"
] | null | null | null | datadiroslo = "/home/ryan/sandbox/sysveg/data/AirQuality/Oslo"
datadirbarcelona = "/home/ryan/sandbox/sysveg/data/AirQuality/Barcelona"
datadiroslometeo = "/home/ryan/sandbox/sysveg/data/Meteo/Oslo" | 66 | 72 | 0.808081 | 24 | 198 | 6.666667 | 0.5 | 0.15 | 0.28125 | 0.39375 | 0.59375 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040404 | 198 | 3 | 73 | 66 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0.693467 | 0.693467 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9feca8332f4a89768cbb0b0d585c3813c6e7babf | 114 | py | Python | RecurrenceRelationSolver/__init__.py | rowanG077/RecurrenceRelationSolver | e108cf7a41c8341f5c6ea001d891a1185ff1fab6 | [
"BSD-2-Clause"
] | 2 | 2018-12-17T15:11:15.000Z | 2018-12-19T09:35:38.000Z | RecurrenceRelationSolver/__init__.py | rowanG077/RecurrenceRelationSolver | e108cf7a41c8341f5c6ea001d891a1185ff1fab6 | [
"BSD-2-Clause"
] | null | null | null | RecurrenceRelationSolver/__init__.py | rowanG077/RecurrenceRelationSolver | e108cf7a41c8341f5c6ea001d891a1185ff1fab6 | [
"BSD-2-Clause"
] | null | null | null | from .RecurrenceRelation import RecurrenceRelation
from .RecurrenceRelationParser import RecurrenceRelationParser
| 38 | 62 | 0.912281 | 8 | 114 | 13 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070175 | 114 | 2 | 63 | 57 | 0.981132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b00814f1ca61e7e79f259a18a67e5385b7d94216 | 32 | py | Python | engine/keyboards/__init__.py | LloydTao/ecm3423-fur-effect | fefa73665b459dfd1648dca97a95e8313cf53dd5 | [
"MIT"
] | null | null | null | engine/keyboards/__init__.py | LloydTao/ecm3423-fur-effect | fefa73665b459dfd1648dca97a95e8313cf53dd5 | [
"MIT"
] | null | null | null | engine/keyboards/__init__.py | LloydTao/ecm3423-fur-effect | fefa73665b459dfd1648dca97a95e8313cf53dd5 | [
"MIT"
] | null | null | null | from .keyboards import Keyboard
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b04db4e52596cf3da98923a3bcb13e15d3d85600 | 27,709 | py | Python | run_rcv1/src/metrics.py | princeton-nlp/semsup | 8c6a9314b7e6ac167de119f6cf7f5a4bff28f71e | [
"MIT"
] | 7 | 2022-03-15T20:09:28.000Z | 2022-03-29T08:21:12.000Z | run_rcv1/src/metrics.py | princeton-nlp/semsup | 8c6a9314b7e6ac167de119f6cf7f5a4bff28f71e | [
"MIT"
] | null | null | null | run_rcv1/src/metrics.py | princeton-nlp/semsup | 8c6a9314b7e6ac167de119f6cf7f5a4bff28f71e | [
"MIT"
] | null | null | null | """
Metrics for multi-label text classification
"""
import numpy as np
from scipy.special import expit
import itertools
import copy
# Metrics
from sklearn.metrics import (
accuracy_score,
f1_score,
classification_report,
precision_score,
recall_score,
label_ranking_average_precision_score,
coverage_error
)
def get_ancestors(data_args, label2id):
"""
Get the ancestors of all the nodes using the hierarchy file given.
"""
if data_args.task_name == 'rcv1':
# Open the file and get all the lines
        with open(data_args.hierarchy_file, 'r') as f:
            lines = f.readlines()
# Store the parents for each node as a list
# Store all the lists in a dict
parents = {}
# Go over all the lines in the file
label2id_copy = copy.deepcopy(label2id)
label2id_copy['Root'] = -1
for line in lines:
split_line = line.strip().split()
if split_line[1] != "None" and split_line[3] in label2id_copy.keys():
parents[split_line[3]] = split_line[1]
def get_ancestors_recursion(node):
if node == 'Root':
return ['Root']
else:
return [node] + get_ancestors_recursion(parents[node])
# Collect all the ancestors
ancestors = {}
for key in parents.keys():
ancestors[key] = get_ancestors_recursion(key)
# Convert labels to IDS
for key in ancestors.keys():
ancestors[key] = [label2id_copy[i] for i in ancestors[key]]
return ancestors
else:
raise("Hierarchal metrics support only for RCV1.")
def compute_hierarchical_micro_f1(preds, label_ids, ancestor_dict, id2label):
"""
Compute the hierarchical micro F-1.
"""
precision = [0, 0]
recall = [0, 0]
# Loop over all the instances
for i in range(preds.shape[0]):
true_nodes = []
predicted_nodes = []
# Collect true node ancestors
for j in range(preds.shape[1]):
if label_ids[i][j] == 1:
true_nodes = true_nodes + ancestor_dict[id2label[j]]
if preds[i][j] == 1:
predicted_nodes = predicted_nodes + ancestor_dict[id2label[j]]
# Compute the intersection
# Numerator
precision[0] += len(set(true_nodes) & set(predicted_nodes))
recall[0] += len(set(true_nodes) & set(predicted_nodes))
# Denominator
precision[1] += len(set(predicted_nodes))
recall[1] += len(set(true_nodes))
# Compute precision and recall
# Handle zero-division errors
if precision[1] == 0:
h_precision = 1
else:
h_precision = precision[0] / precision[1]
if recall[1] == 0:
h_recall = 1
else:
h_recall = recall[0] / recall[1]
if h_precision == 0 and h_recall == 0:
h_f1 = 0
else:
h_f1 = 2 * h_precision * h_recall / (h_precision + h_recall)
return h_f1
def multilabel_metrics(data_args, id2label, label2id, fbr):
"""
Metrics function used for multilabel classification.
Datasets: RCV1-V2
:fbr : A dict containing global thresholds to be used for selecting a class.
We use global thresholds because we want to handle unseen classes,
for which the threshold is not known in advance.
"""
def compute_metrics(p):
# Collect the logits
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
# Compute the logistic sigmoid
preds = expit(preds)
# METRIC 1: Compute accuracy
if 'accuracy' not in fbr.keys():
performance = {}
for threshold in np.arange(0.1, 1, 0.1):
accuracy_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = np.sum(p.label_ids == accuracy_preds) / accuracy_preds.size * 100
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['accuracy'] = best_threshold
accuracy = performance[best_threshold]
else:
accuracy_preds = np.where(preds > fbr['accuracy'], 1, 0)
accuracy = np.sum(p.label_ids == accuracy_preds) / accuracy_preds.size * 100
# METRIC 2: Compute the subset accuracy
if 'subset_accuracy' not in fbr.keys():
performance = {}
for threshold in np.arange(0.1, 1, 0.1):
subset_accuracy_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = accuracy_score(p.label_ids, subset_accuracy_preds)
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['subset_accuracy'] = best_threshold
subset_accuracy = performance[best_threshold]
else:
subset_accuracy_preds = np.where(preds > fbr['subset_accuracy'], 1, 0)
subset_accuracy = accuracy_score(p.label_ids, subset_accuracy_preds)
# METRIC 3: Macro F-1
if 'macro_f1' not in fbr.keys():
performance = {}
for threshold in np.arange(0.1, 1, 0.1):
macro_f1_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = f1_score(p.label_ids, macro_f1_preds, average='macro')
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['macro_f1'] = best_threshold
macro_f1 = performance[best_threshold]
else:
macro_f1_preds = np.where(preds > fbr['macro_f1'], 1, 0)
macro_f1 = f1_score(p.label_ids, macro_f1_preds, average='macro')
# METRIC 4: Micro F-1
if 'micro_f1' not in fbr.keys():
performance = {}
for threshold in np.arange(0.1, 1, 0.1):
micro_f1_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = f1_score(p.label_ids, micro_f1_preds, average='micro')
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['micro_f1'] = best_threshold
micro_f1 = performance[best_threshold]
else:
micro_f1_preds = np.where(preds > fbr['micro_f1'], 1, 0)
micro_f1 = f1_score(p.label_ids, micro_f1_preds, average='micro')
# METRIC 5: Hierarchical micro F-1
ancestor_dict = get_ancestors(data_args, label2id)
if 'hier_micro_f1' not in fbr.keys():
performance = {}
for threshold in np.arange(0.1, 1, 0.1):
hier_micro_f1_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = compute_hierarchical_micro_f1(hier_micro_f1_preds, p.label_ids, ancestor_dict, id2label)
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['hier_micro_f1'] = best_threshold
hier_micro_f1 = performance[best_threshold]
else:
hier_micro_f1_preds = np.where(preds > fbr['hier_micro_f1'], 1, 0)
hier_micro_f1 = compute_hierarchical_micro_f1(hier_micro_f1_preds, p.label_ids, ancestor_dict, id2label)
# Multi-label classification report
# Optimized for Micro F-1
report = classification_report(p.label_ids, micro_f1_preds, target_names=[id2label[i] for i in range(len(id2label))])
print(report)
return {
"accuracy": accuracy,
"subset_accuracy": subset_accuracy,
"macro_f1": macro_f1,
"micro_f1": micro_f1,
"hier_micro_f1": hier_micro_f1,
"fbr": fbr
}
return compute_metrics
def multilabel_label_descriptions_metrics(data_args, id2label, label2id, label_list_dict, fbr):
"""
Metrics function used for multilabel classification.
Datasets: RCV1-V2
:fbr : A dict containing global thresholds to be used for selecting a class.
We use global thresholds because we want to handle unseen classes,
for which the threshold is not known in advance.
"""
def compute_metrics(p):
# Collect the logits
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
# Compute the logistic sigmoid
preds = expit(preds)
# Determine if it's validation or prediction
        is_validation = not fbr
# Define the range over which the best fbr value is chosen
fbr_low = 0.0
fbr_high = 1.0
fbr_step = 0.05
# METRIC 1: Compute accuracy
if 'accuracy' not in fbr.keys():
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step):
accuracy_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = np.sum(p.label_ids == accuracy_preds) / accuracy_preds.size * 100
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['accuracy'] = best_threshold
accuracy = performance[best_threshold]
else:
accuracy_preds = np.where(preds > fbr['accuracy'], 1, 0)
accuracy = np.sum(p.label_ids == accuracy_preds) / accuracy_preds.size * 100
# METRIC 2: Compute the subset accuracy
if 'subset_accuracy' not in fbr.keys():
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step):
subset_accuracy_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = accuracy_score(p.label_ids, subset_accuracy_preds)
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['subset_accuracy'] = best_threshold
subset_accuracy = performance[best_threshold]
else:
subset_accuracy_preds = np.where(preds > fbr['subset_accuracy'], 1, 0)
subset_accuracy = accuracy_score(p.label_ids, subset_accuracy_preds)
# METRIC 3: Macro F-1
if 'macro_f1' not in fbr.keys():
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step):
macro_f1_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = f1_score(p.label_ids, macro_f1_preds, average='macro')
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['macro_f1'] = best_threshold
macro_f1 = performance[best_threshold]
else:
macro_f1_preds = np.where(preds > fbr['macro_f1'], 1, 0)
macro_f1 = f1_score(p.label_ids, macro_f1_preds, average='macro')
# METRIC 4: Micro F-1
if 'micro_f1' not in fbr.keys():
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step):
micro_f1_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = f1_score(p.label_ids, micro_f1_preds, average='micro')
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['micro_f1'] = best_threshold
micro_f1 = performance[best_threshold]
else:
micro_f1_preds = np.where(preds > fbr['micro_f1'], 1, 0)
micro_f1 = f1_score(p.label_ids, micro_f1_preds, average='micro')
# METRIC 5: Hierarchical micro F-1
ancestor_dict = get_ancestors(data_args, label2id)
if 'hier_micro_f1' not in fbr.keys():
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step):
hier_micro_f1_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = compute_hierarchical_micro_f1(hier_micro_f1_preds, p.label_ids, ancestor_dict, id2label)
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['hier_micro_f1'] = best_threshold
hier_micro_f1 = performance[best_threshold]
else:
hier_micro_f1_preds = np.where(preds > fbr['hier_micro_f1'], 1, 0)
hier_micro_f1 = compute_hierarchical_micro_f1(hier_micro_f1_preds, p.label_ids, ancestor_dict, id2label)
# METRIC X: DELETE ##############################
if 'm14_micro_f1' not in fbr.keys():
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step * 0.1):
m14_micro_f1_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = f1_score(p.label_ids[:,label2id['M14']], m14_micro_f1_preds[:,label2id['M14']], average='binary')
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['m14_micro_f1'] = best_threshold
M14_score = performance[best_threshold]
print("M14 score is: {}".format(M14_score))
m14_micro_f1_preds = np.where(preds > fbr['m14_micro_f1'], 1, 0)
print("M14 precision is: {}".format(precision_score(p.label_ids[:,label2id['M14']], m14_micro_f1_preds[:,label2id['M14']], average='binary')))
print("M14 recall is: {}".format(recall_score(p.label_ids[:,label2id['M14']], m14_micro_f1_preds[:,label2id['M14']], average='binary')))
else:
m14_micro_f1_preds = np.where(preds > fbr['m14_micro_f1'], 1, 0)
M14_score = f1_score(p.label_ids[:,label2id['M14']], m14_micro_f1_preds[:,label2id['M14']], average='binary')
print("M14 score is: {}".format(M14_score))
print("M14 precision is: {}".format(precision_score(p.label_ids[:,label2id['M14']], m14_micro_f1_preds[:,label2id['M14']], average='binary')))
print("M14 recall is: {}".format(recall_score(p.label_ids[:,label2id['M14']], m14_micro_f1_preds[:,label2id['M14']], average='binary')))
#################################################
# Multi-label classification report
# Optimized for Micro F-1 (the use of micro_f1_preds)
print("*** Classification report for all the classes ***")
micro_f1_preds = np.where(preds > fbr['micro_f1'], 1, 0)
report = classification_report(p.label_ids, micro_f1_preds, target_names=[id2label[i] for i in p.represented_labels])
print(report)
print("********************************************")
# Classification report only for classes that appeared in the validation/prediction set but not the train set
# Get the classes which belong to the validation set but not the train set
        # Initialize so the return statement is safe when the 'gzs' branch is skipped
        unseen_macro_f1 = 0.
        set_difference = []
        if data_args.evaluation_type == 'gzs':
key = 'validation' if is_validation else 'test'
set_difference = list(set(label_list_dict[key]).difference(set(label_list_dict['train'])))
set_difference = [label2id[label] for label in set_difference]
# Take an intersection with the labels that are represented in the current data
set_difference = list(set(set_difference).intersection(set(p.represented_labels)))
set_difference.sort()
if len(set_difference) > 0:
print("*** Classification report for classes not in the train set ***")
target_names = [id2label[i] for i in set_difference]
# HACK: Find macro F-1 optimized for the unseen labels
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step):
unseen_macro_f1_preds = np.where(preds[:,set_difference] > threshold, 1, 0)
performance[threshold] = f1_score(p.label_ids[:,set_difference], unseen_macro_f1_preds, average='macro')
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
unseen_macro_f1 = performance[best_threshold]
report = classification_report(p.label_ids[:,set_difference], np.where(preds[:,set_difference] > best_threshold, 1, 0), target_names=target_names)
print(report)
print("Best threshold for unseen macro F-1: {}".format(best_threshold))
print("********************************************")
# Print the thresholds used
print("********************************************")
print("Thresholds used: {}".format(fbr))
print("********************************************")
return {
"accuracy": accuracy,
"subset_accuracy": subset_accuracy,
"macro_f1": macro_f1,
"micro_f1": micro_f1,
"hier_micro_f1": hier_micro_f1,
"unseen_micro_f1": unseen_macro_f1,
"fbr": fbr,
"unseen_average_prediction_score": np.average(preds[:,set_difference]),
"seen_average_prediction_score": np.average(preds[:, [label2id[label] for label in label_list_dict['train']]]),
}
return compute_metrics
def multilabel_label_descriptions_per_class_threshold_metrics(data_args, id2label, label2id, label_list_dict, fbr):
"""
Metrics function used for multilabel classification.
Choose a different threshold for each class.
Don't compute accuracy and subset accuracy.
Datasets: RCV1-V2
:fbr : A dict containing global thresholds to be used for selecting a class.
We use global thresholds because we want to handle unseen classes,
for which the threshold is not known in advance.
"""
def compute_metrics(p):
# Collect the logits
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
# Compute the logistic sigmoid
preds = expit(preds)
# Determine if it's validation or prediction
        is_validation = not fbr
# Define the range over which the best fbr value is chosen
fbr_low = 0.0
fbr_high = 1.0
fbr_step = 0.05
# METRIC 3: Macro F-1
if 'macro_f1' not in fbr.keys():
# Store the best threshold for each class
best_thresholds = []
# Loop over all the classes
for i in range(preds.shape[1]):
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step):
macro_f1_preds = np.where(preds[:,i] > threshold, 1, 0)
performance[threshold] = f1_score(p.label_ids[:,i], macro_f1_preds, average='binary')
# Choose the best threshold
best_thresholds.append(max(performance, key=performance.get))
fbr['macro_f1'] = np.array(best_thresholds)
macro_f1 = f1_score(p.label_ids, np.where(preds > fbr['macro_f1'], 1, 0), average='macro')
else:
macro_f1 = f1_score(p.label_ids, np.where(preds > fbr['macro_f1'], 1, 0), average='macro')
        # NOTE: The following metric was only for debugging purposes
# METRIC: Macro F-1 with global threshold
if 'global_macro_f1' not in fbr.keys():
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step):
global_macro_f1_preds = np.where(preds > threshold, 1, 0)
performance[threshold] = f1_score(p.label_ids, global_macro_f1_preds, average='macro')
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
fbr['global_macro_f1'] = best_threshold
global_macro_f1 = performance[best_threshold]
else:
global_macro_f1_preds = np.where(preds > fbr['global_macro_f1'], 1, 0)
global_macro_f1 = f1_score(p.label_ids, global_macro_f1_preds, average='macro')
# METRIC: LRAP (Label ranking average precision)
total_lrap = label_ranking_average_precision_score(p.label_ids, preds)
# Multi-label classification report
# Optimized for Micro F-1 (the use of micro_f1_preds)
print("*** Classification report for all the classes ***")
macro_f1_preds = np.where(preds > fbr['macro_f1'], 1, 0)
report = classification_report(p.label_ids, macro_f1_preds, target_names=[id2label[i] for i in p.represented_labels])
print(report)
print("********************************************")
# Classification report only for classes that appeared in the validation/prediction set but not the train set
# Get the classes which belong to the validation set but not the train set
        # Initialize so the return statement is safe when the 'gzs' branch is skipped
        unseen_macro_f1, global_unseen_macro_f1, unseen_lrap = 0., 0., 0.
        set_difference = []
if data_args.evaluation_type == 'gzs':
key = 'validation' if is_validation else 'test'
set_difference = list(set(label_list_dict[key]).difference(set(label_list_dict['train'])))
set_difference = [label2id[label] for label in set_difference]
# Take an intersection with the labels that are represented in the current data
set_difference = list(set(set_difference).intersection(set(p.represented_labels)))
set_difference.sort()
if len(set_difference) > 0:
# METRIC: Unseen Macro F-1
# Store the best threshold for each class
best_thresholds = []
# Loop over all the classes
for i in set_difference:
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step * 0.1):
                        unseen_macro_f1_preds = np.where(preds[:,i] > threshold, 1, 0)
                        performance[threshold] = f1_score(p.label_ids[:,i], unseen_macro_f1_preds, average='binary')
# Choose the best threshold
best_thresholds.append(max(performance, key=performance.get))
fbr['unseen_macro_f1'] = np.array(best_thresholds)
unseen_macro_f1 = f1_score(p.label_ids[:,set_difference], np.where(preds[:,set_difference] > fbr['unseen_macro_f1'], 1, 0), average='macro')
# METRIC: Unseen Macro F-1 with a global threshold
performance = {}
for threshold in np.arange(fbr_low, fbr_high, fbr_step):
global_unseen_macro_f1_preds = np.where(preds[:,set_difference] > threshold, 1, 0)
performance[threshold] = f1_score(p.label_ids[:,set_difference], global_unseen_macro_f1_preds, average='macro')
# Choose the best threshold
best_threshold = max(performance, key=performance.get)
global_unseen_macro_f1 = performance[best_threshold]
fbr['global_unseen_macro_f1'] = best_threshold
# METRIC: Unseen LRAP
unseen_lrap = label_ranking_average_precision_score(p.label_ids[:,set_difference], preds[:,set_difference])
# Print the classification report
print("*** Classification report for classes not in the train set ***")
target_names = [id2label[i] for i in set_difference]
report = classification_report(p.label_ids[:,set_difference], np.where(preds[:,set_difference] > fbr['unseen_macro_f1'], 1, 0), target_names=target_names)
print(report)
print("Best thresholds for unseen micro F-1: {}".format(best_thresholds))
print("********************************************")
return {
"macro_f1": macro_f1,
"global_macro_f1": global_macro_f1,
"unseen_macro_f1": unseen_macro_f1,
"global_unseen_macro_f1": global_unseen_macro_f1,
"total_lrap": total_lrap,
"unseen_lrap": unseen_lrap,
"fbr": fbr,
"unseen_average_prediction_score": np.average(preds[:,set_difference]),
"seen_average_prediction_score": np.average(preds[:, [label2id[label] for label in label_list_dict['train']]]),
}
return compute_metrics
def multilabel_label_descriptions_ranking_metrics(data_args, id2label, label2id, label_list_dict, fbr):
"""
Ranking metrics for multilabel classification.
LRAP and coverage error.
Higher is better for LRAP and lower is better for coverage error.
    An empty fbr dict signals validation; a non-empty one signals prediction.
Datasets: RCV1-V2
"""
def compute_metrics(p):
# Collect the logits
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
# Compute the logistic sigmoid
preds = expit(preds)
# Determine if it's validation or prediction
        is_validation = not fbr
# Compute the metrics over all the labels
# Choose only examples which have at least one positive in train_labels
bool_arr = np.sum(p.label_ids, axis=1) > 0
# METRIC: LRAP (Label ranking average precision)
total_lrap = label_ranking_average_precision_score(p.label_ids[bool_arr], preds[bool_arr])
# METRIC: Coverage error
total_coverage_error = coverage_error(p.label_ids[bool_arr], preds[bool_arr])
# Compute the metrics for seen and unseen classes
seen_lrap, seen_coverage_error, unseen_lrap, unseen_coverage_error = 0., 0., 0., 0.
if data_args.evaluation_type != 'seen':
key = 'validation' if is_validation else 'test'
set_difference = list(set(label_list_dict[key]).difference(set(label_list_dict['train'])))
set_difference = [label2id[label] for label in set_difference]
# Take an intersection with the labels that are represented in the current data
validation_labels = list(set(set_difference).intersection(set(p.represented_labels)))
validation_labels.sort()
train_labels = [label2id[label] for label in label_list_dict['train']]
# Compute metrics only on the train labels
# Choose only examples which have at least one positive in train_labels
bool_arr = np.sum(p.label_ids[:,train_labels], axis=1) > 0
# METRIC: LRAP (Label ranking average precision)
seen_lrap = label_ranking_average_precision_score(p.label_ids[bool_arr][:,train_labels], preds[bool_arr][:,train_labels])
# METRIC: Coverage error
seen_coverage_error = coverage_error(p.label_ids[bool_arr][:,train_labels], preds[bool_arr][:,train_labels])
            # Compute metrics only on the eval labels
if len(validation_labels) > 0:
# Choose only examples which have at least one positive in validation_labels
bool_arr = np.sum(p.label_ids[:,validation_labels], axis=1) > 0
# METRIC: LRAP (Label ranking average precision)
unseen_lrap = label_ranking_average_precision_score(p.label_ids[bool_arr][:,validation_labels], preds[bool_arr][:,validation_labels])
# METRIC: Coverage error
# BUG: The following line does not work for some reason
# unseen_coverage_error = coverage_error(p.label_ids[bool_arr][:,validation_labels], preds[bool_arr][:,validation_labels])
return {
"total_lrap": total_lrap,
"total_coverage_error": total_coverage_error,
"seen_lrap": seen_lrap,
"seen_coverage_error": seen_coverage_error,
"unseen_lrap": unseen_lrap,
# "unseen_coverage_error": unseen_coverage_error,
"fbr": {"total_lrap": total_lrap},
}
return compute_metrics | 46.104825 | 170 | 0.615468 | 3,431 | 27,709 | 4.744389 | 0.068785 | 0.030532 | 0.028198 | 0.027522 | 0.82117 | 0.794815 | 0.775095 | 0.762317 | 0.7547 | 0.734734 | 0 | 0.02445 | 0.278213 | 27,709 | 601 | 171 | 46.104825 | 0.78945 | 0.177271 | 0 | 0.645777 | 0 | 0 | 0.08569 | 0.019062 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029973 | false | 0 | 0.013624 | 0 | 0.076294 | 0.065395 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
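The two ranking metrics above come straight from scikit-learn. A minimal, self-contained sketch of how they behave on toy data (the label matrix and scores below are made up for illustration): LRAP rewards ranking every true label above the false ones, while coverage error counts how far down the ranking you must go to cover all true labels.

```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score, coverage_error

# Two samples, three labels: rows of y_true are ground-truth indicator
# vectors; y_score holds the (sigmoided) model outputs being ranked.
y_true = np.array([[1, 0, 0],
                   [0, 0, 1]])
y_score = np.array([[0.75, 0.5, 0.1],
                    [0.1, 0.2, 0.9]])

# Each true label is ranked first, so LRAP is perfect (1.0) and one
# top-ranked label per sample covers the truth (coverage error 1.0).
lrap = label_ranking_average_precision_score(y_true, y_score)
cov = coverage_error(y_true, y_score)
print(lrap)  # 1.0
print(cov)   # 1.0
```

Flipping one score so a false label outranks a true one would lower LRAP below 1 and raise the coverage error above 1.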
c65e220e27d50c5c5039890b450d1abcb55b34f5 | 504 | py | Python | django_analyses/models/input/types/__init__.py | TheLabbingProject/django_analyses | 08cac40a32754a265b37524f08ec6160c69ebea8 | [
"Apache-2.0"
] | 1 | 2020-12-30T12:43:34.000Z | 2020-12-30T12:43:34.000Z | django_analyses/models/input/types/__init__.py | TheLabbingProject/django_analyses | 08cac40a32754a265b37524f08ec6160c69ebea8 | [
"Apache-2.0"
] | 59 | 2019-12-25T13:14:56.000Z | 2021-07-22T12:24:46.000Z | django_analyses/models/input/types/__init__.py | TheLabbingProject/django_analyses | 08cac40a32754a265b37524f08ec6160c69ebea8 | [
"Apache-2.0"
] | 2 | 2020-05-24T06:44:27.000Z | 2020-07-09T15:47:31.000Z | from django_analyses.models.input.types.boolean_input import BooleanInput
from django_analyses.models.input.types.directory_input import DirectoryInput
from django_analyses.models.input.types.file_input import FileInput
from django_analyses.models.input.types.float_input import FloatInput
from django_analyses.models.input.types.integer_input import IntegerInput
from django_analyses.models.input.types.list_input import ListInput
from django_analyses.models.input.types.string_input import StringInput
| 63 | 77 | 0.888889 | 70 | 504 | 6.2 | 0.3 | 0.16129 | 0.290323 | 0.387097 | 0.548387 | 0.548387 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 504 | 7 | 78 | 72 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c669fde64ee4fbbe71ce7e7ddfb29274a5d5b50d | 49 | py | Python | start/First steps in the library/datetime/showdate2.py | codermoji-contrib/python | 764bffaf0e92270be196aa5728f255aaaf5b8150 | [
"MIT"
] | null | null | null | start/First steps in the library/datetime/showdate2.py | codermoji-contrib/python | 764bffaf0e92270be196aa5728f255aaaf5b8150 | [
"MIT"
] | 32 | 2017-09-01T00:52:17.000Z | 2017-10-01T00:30:02.000Z | start/First steps in the library/datetime/showdate2.py | codermoji-contrib/python | 764bffaf0e92270be196aa5728f255aaaf5b8150 | [
"MIT"
] | null | null | null | import datetime
print(datetime.datetime.today())
| 16.333333 | 32 | 0.816327 | 6 | 49 | 6.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061224 | 49 | 2 | 33 | 24.5 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
c6c96e0e6e13d82f719cabae6093a5263bc3dea6 | 72 | py | Python | tools/utils/__init__.py | realphongha/st-gcn | 18f45aaf5088faaacdd5ce4328b70c6b31831ea1 | [
"BSD-2-Clause"
] | null | null | null | tools/utils/__init__.py | realphongha/st-gcn | 18f45aaf5088faaacdd5ce4328b70c6b31831ea1 | [
"BSD-2-Clause"
] | null | null | null | tools/utils/__init__.py | realphongha/st-gcn | 18f45aaf5088faaacdd5ce4328b70c6b31831ea1 | [
"BSD-2-Clause"
] | null | null | null | from . import video
from . import openpose
from . import visualization | 24 | 27 | 0.777778 | 9 | 72 | 6.222222 | 0.555556 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180556 | 72 | 3 | 27 | 24 | 0.949153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c6ef7f4294ca8b30bf6a9bc7d0eab3bec0ee8c44 | 79 | py | Python | ffscraper/fanfic/__init__.py | hayesall/ffscraper | 0c8e034b6b57741bc82e8f69bcddd3f8eb785541 | [
"Apache-2.0"
] | 4 | 2021-02-20T22:20:36.000Z | 2022-02-27T01:31:15.000Z | ffscraper/fanfic/__init__.py | hayesall/FanFiction-Collaborative-Filtering | 0c8e034b6b57741bc82e8f69bcddd3f8eb785541 | [
"Apache-2.0"
] | 7 | 2018-05-13T21:58:56.000Z | 2018-06-01T22:42:16.000Z | ffscraper/fanfic/__init__.py | hayesall/FanFiction-Collaborative-Filtering | 0c8e034b6b57741bc82e8f69bcddd3f8eb785541 | [
"Apache-2.0"
] | null | null | null | """
Doc-string for FanFicScraper
"""
from . import story
from . import review
| 11.285714 | 28 | 0.708861 | 10 | 79 | 5.6 | 0.8 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177215 | 79 | 6 | 29 | 13.166667 | 0.861538 | 0.35443 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
05b4fb9883ba6138e4151c24e1641539280a81b5 | 191 | py | Python | react_router/models.py | HorizonXP/python-react-router | d8acce88e12a808677fc82e2dffdfb984c4b52a3 | [
"MIT"
] | 1 | 2015-12-01T02:43:07.000Z | 2015-12-01T02:43:07.000Z | react_router/models.py | HorizonXP/python-react-router | d8acce88e12a808677fc82e2dffdfb984c4b52a3 | [
"MIT"
] | null | null | null | react_router/models.py | HorizonXP/python-react-router | d8acce88e12a808677fc82e2dffdfb984c4b52a3 | [
"MIT"
] | null | null | null | # Django hook to configure settings on startup
from django.conf import settings
import react_router.conf
react_router.conf.settings.configure(
**getattr(settings, 'REACT_ROUTER', {})
)
| 21.222222 | 46 | 0.774869 | 25 | 191 | 5.8 | 0.52 | 0.227586 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13089 | 191 | 8 | 47 | 23.875 | 0.873494 | 0.230366 | 0 | 0 | 0 | 0 | 0.082759 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
05b57ae412d298cf0b97b0f87ac6fa830279275e | 126 | py | Python | vimfiles/bundle/vim-python/submodules/pylint/tests/functional/c/consider/consider_using_sys_exit_local_scope.py | ciskoinch8/vimrc | 5bf77a7e7bc70fac5173ab2e9ea05d7dda3e52b8 | [
"MIT"
] | 463 | 2015-01-15T08:17:42.000Z | 2022-03-28T15:10:20.000Z | vimfiles/bundle/vim-python/submodules/pylint/tests/functional/c/consider/consider_using_sys_exit_local_scope.py | ciskoinch8/vimrc | 5bf77a7e7bc70fac5173ab2e9ea05d7dda3e52b8 | [
"MIT"
] | 52 | 2015-01-06T02:43:59.000Z | 2022-03-14T11:15:21.000Z | vimfiles/bundle/vim-python/submodules/pylint/tests/functional/c/consider/consider_using_sys_exit_local_scope.py | ciskoinch8/vimrc | 5bf77a7e7bc70fac5173ab2e9ea05d7dda3e52b8 | [
"MIT"
] | 249 | 2015-01-07T22:49:49.000Z | 2022-03-18T02:32:06.000Z | # pylint: disable=missing-docstring,import-outside-toplevel,redefined-builtin
def run():
    from sys import exit
    exit()
| 21 | 77 | 0.738095 | 16 | 126 | 5.8125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150794 | 126 | 5 | 78 | 25.2 | 0.869159 | 0.595238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
af8ab8d695a9a60ae4b9d5259e7ec81734436542 | 10,532 | py | Python | test_api.py | lalebdi/FlaskDataProcess | d305526bbcf44b301f9789af9de2f8b71e7c19c3 | [
"Unlicense"
] | null | null | null | test_api.py | lalebdi/FlaskDataProcess | d305526bbcf44b301f9789af9de2f8b71e7c19c3 | [
"Unlicense"
] | null | null | null | test_api.py | lalebdi/FlaskDataProcess | d305526bbcf44b301f9789af9de2f8b71e7c19c3 | [
"Unlicense"
] | null | null | null | import unittest
import requests
from app import app
# set the application to testing mode
app.testing = True
class TestApi(unittest.TestCase):
    BASE = "http://127.0.0.1:5000/"

    def test_1(self):
        payload = {'value': 'value1', 'mode': 'value2', 'replace_with': 'null'}
        expected = {'message': {'mode': 'phone || name || amount'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_2(self):  # Take a look
        payload = {'value': 'value1', 'mode': 'value2', 'wrong_key': 'null'}
        expected = {'message': {'replace_with': 'Choose either --blank-- || --original--'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 400)
        self.assertEqual(res.json(), expected)

    def test_3(self):
        payload = {'value': 'value1', 'mode': 'value2', 'replace_with': '--original--'}
        expected = {'message': {'mode': 'phone || name || amount'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_4(self):
        payload = {'value': 'value1', 'mode': 'value2', 'replace_with': '--blank--'}
        expected = {'message': {'mode': 'phone || name || amount'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_5(self):
        payload = {'value': 'value1', 'mode': 'name', 'replace_with': '--blank--'}
        expected = {'original_value': 'value1', 'mode': 'name', 'output': {'first': '--blank--', 'middle': '--blank--', 'last': '--blank--'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_6(self):
        payload = {'value': 'value1', 'mode': 'phone', 'replace_with': '--blank--'}
        expected = {'original_value': 'value1', 'mode': 'phone', 'output': '--blank--'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_7(self):
        payload = {'value': '1dollar', 'mode': 'amount', 'replace_with': '--blank--'}
        expected = {'original_value': '1dollar', 'mode': 'amount', 'output': '1.00'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_8(self):
        payload = {'value': '(512) 234-9293', 'mode': 'phone', 'replace_with': '--blank--'}
        expected = {'original_value': '(512) 234-9293', 'mode': 'phone', 'output': '5122349293'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_9(self):
        payload = {'value': 'unknown', 'mode': 'phone', 'replace_with': '--blank--'}
        expected = {'original_value': 'unknown', 'mode': 'phone', 'output': '--blank--'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_10(self):
        payload = {'value': 'unknown', 'mode': 'phone', 'replace_with': '--original--'}
        expected = {'original_value': 'unknown', 'mode': 'phone', 'output': 'unknown'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_11(self):
        payload = {'value': 'Robert Lance Smith', 'mode': 'name', 'replace_with': '--blank--'}
        expected = {'original_value': 'Robert Lance Smith', 'mode': 'name', 'output': {'first': 'Robert', 'middle': 'Lance', 'last': 'Smith'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_12(self):
        payload = {'value': '12234', 'mode': 'name', 'replace_with': '--blank--'}
        expected = {'original_value': '12234', 'mode': 'name', 'output': {'first': '--blank--', 'middle': '--blank--', 'last': '--blank--'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_13(self):
        payload = {'value': '-12234', 'mode': 'name', 'replace_with': '--blank--'}
        expected = {'original_value': '-12234', 'mode': 'name', 'output': {'first': '--blank--', 'middle': '--blank--', 'last': '--blank--'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_14(self):
        payload = {'value': 'unknown', 'mode': 'name', 'replace_with': '--blank--'}
        expected = {'original_value': 'unknown', 'mode': 'name', 'output': {'first': '--blank--', 'middle': '--blank--', 'last': '--blank--'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_15(self):
        payload = {'value': ' ', 'mode': 'name', 'replace_with': '--blank--'}
        expected = {'original_value': ' ', 'mode': 'name', 'output': {'first': '--blank--', 'middle': '--blank--', 'last': '--blank--'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_16(self):
        payload = {'value': 'Robert Lance Smith', 'mode': 'name', 'replace_with': '--original--'}
        expected = {'original_value': 'Robert Lance Smith', 'mode': 'name', 'output': {'first': 'Robert', 'middle': 'Lance', 'last': 'Smith'}}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_17(self):
        payload = {'value': '12234', 'mode': 'name', 'replace_with': '--original--'}
        expected = {'original_value': '12234', 'mode': 'name', 'output': '12234'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_18(self):
        payload = {'value': 'unknown', 'mode': 'name', 'replace_with': '--original--'}
        expected = {'original_value': 'unknown', 'mode': 'name', 'output': 'unknown'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_19(self):
        payload = {'value': ' ', 'mode': 'name', 'replace_with': '--original--'}
        expected = {'original_value': ' ', 'mode': 'name', 'output': ' '}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_20(self):
        payload = {'value': '$12,345.6', 'mode': 'amount', 'replace_with': '--blank--'}
        expected = {'original_value': '$12,345.6', 'mode': 'amount', 'output': '12345.60'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_21(self):
        payload = {'value': '35', 'mode': 'amount', 'replace_with': '--blank--'}
        expected = {'original_value': '35', 'mode': 'amount', 'output': '35.00'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_22(self):
        payload = {'value': 'Hello World', 'mode': 'amount', 'replace_with': '--blank--'}
        expected = {'original_value': 'Hello World', 'mode': 'amount', 'output': '--blank--'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_23(self):
        payload = {'value': '$12,345.6', 'mode': 'amount', 'replace_with': '--original--'}
        expected = {'original_value': '$12,345.6', 'mode': 'amount', 'output': '12345.60'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_24(self):
        payload = {'value': 'Hello World', 'mode': 'amount', 'replace_with': '--original--'}
        expected = {'original_value': 'Hello World', 'mode': 'amount', 'output': 'Hello World'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_25(self):
        payload = {'value': '35', 'mode': 'amount', 'replace_with': '--original--'}
        expected = {'original_value': '35', 'mode': 'amount', 'output': '35.00'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_26(self):
        payload = {'value': ' ', 'mode': 'amount', 'replace_with': '--original--'}
        expected = {'original_value': ' ', 'mode': 'amount', 'output': ' '}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_27(self):
        payload = {'value': ' ', 'mode': 'amount', 'replace_with': '--blank--'}
        expected = {'original_value': ' ', 'mode': 'amount', 'output': '--blank--'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_28(self):
        payload = {'value': '', 'mode': 'amount', 'replace_with': '--blank--'}
        expected = {'original_value': '', 'mode': 'amount', 'output': '--blank--'}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)

    def test_29(self):
        payload = {'value': '', 'mode': 'amount', 'replace_with': '--original--'}
        expected = {'original_value': '', 'mode': 'amount', 'output': ''}
        res = requests.post(TestApi.BASE, json=payload)
        self.assertEqual(res.status_code, 201)
        self.assertEqual(res.json(), expected)
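The phone-mode tests above pin down a small normalization contract: strip formatting down to bare digits, and fall back to the replace_with policy when the value is not a parseable phone number. A hypothetical sketch of that contract (normalize_phone is not the application's real implementation, only an illustration of the behavior the tests encode):

```python
import re

def normalize_phone(value: str, replace_with: str) -> str:
    # Keep only digit characters; a well-formed US-style number like
    # "(512) 234-9293" leaves exactly 10 of them.
    digits = re.sub(r"\D", "", value)
    if len(digits) == 10:
        return digits
    # Anything else is unparseable: honor the caller's replacement policy.
    return value if replace_with == "--original--" else "--blank--"

print(normalize_phone("(512) 234-9293", "--blank--"))  # 5122349293
print(normalize_phone("unknown", "--blank--"))         # --blank--
print(normalize_phone("unknown", "--original--"))      # unknown
```

These three calls mirror test_8, test_9, and test_10 respectively.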
| 48.986047 | 142 | 0.591626 | 1,177 | 10,532 | 5.197961 | 0.086661 | 0.142203 | 0.170644 | 0.104282 | 0.938705 | 0.926937 | 0.920726 | 0.904217 | 0.738967 | 0.723602 | 0 | 0.033676 | 0.204899 | 10,532 | 214 | 143 | 49.214953 | 0.696919 | 0.004463 | 0 | 0.572222 | 0 | 0 | 0.255987 | 0 | 0 | 0 | 0 | 0 | 0.322222 | 1 | 0.161111 | false | 0 | 0.016667 | 0 | 0.188889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
afc75874de5baef4ee3d72f17dc3c8625be062ed | 27 | py | Python | __init__.py | L0SG/Liver_segmentation | 178b2367cf606ba7d704e96f855389be4c1abd14 | [
"MIT"
] | 34 | 2019-02-04T07:35:11.000Z | 2022-02-08T07:10:57.000Z | __init__.py | L0SG/Liver_segmentation | 178b2367cf606ba7d704e96f855389be4c1abd14 | [
"MIT"
] | null | null | null | __init__.py | L0SG/Liver_segmentation | 178b2367cf606ba7d704e96f855389be4c1abd14 | [
"MIT"
] | 8 | 2019-03-28T04:07:25.000Z | 2021-04-19T18:18:22.000Z | from .ssd_liverdet import * | 27 | 27 | 0.814815 | 4 | 27 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bb9b786567de9ad0be400eaefc52d4d08bda456a | 141 | py | Python | tshotel/tshotel/doctype/tsempsal/test_tsempsal.py | KaviyaPeriyasamy/crm-customize | cd73585dcfef3e26160abe92b26c41e835ff5f52 | [
"MIT"
] | null | null | null | tshotel/tshotel/doctype/tsempsal/test_tsempsal.py | KaviyaPeriyasamy/crm-customize | cd73585dcfef3e26160abe92b26c41e835ff5f52 | [
"MIT"
] | null | null | null | tshotel/tshotel/doctype/tsempsal/test_tsempsal.py | KaviyaPeriyasamy/crm-customize | cd73585dcfef3e26160abe92b26c41e835ff5f52 | [
"MIT"
] | 2 | 2022-01-28T08:43:14.000Z | 2022-03-05T08:48:03.000Z | # Copyright (c) 2021, sibi and Contributors
# See license.txt
# import frappe
import unittest
class Testtsempsal(unittest.TestCase):
    pass
| 15.666667 | 43 | 0.77305 | 18 | 141 | 6.055556 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.148936 | 141 | 8 | 44 | 17.625 | 0.875 | 0.503546 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
bbfd1928ba09ab823e53cf95757d9b4c4473b08f | 28 | py | Python | split_paren/__init__.py | sfinktah/split_paren | d312c277d9488c1fa1ae5febde2cca1a6a96835f | [
"MIT"
] | null | null | null | split_paren/__init__.py | sfinktah/split_paren | d312c277d9488c1fa1ae5febde2cca1a6a96835f | [
"MIT"
] | null | null | null | split_paren/__init__.py | sfinktah/split_paren | d312c277d9488c1fa1ae5febde2cca1a6a96835f | [
"MIT"
] | null | null | null | from .split_paren import *
| 9.333333 | 26 | 0.75 | 4 | 28 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 2 | 27 | 14 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a5b84e69164dc29457766805509407a8e4900941 | 5,730 | py | Python | projecteuler/013_large_sum.py | vikasmunshi/euler | da4a7916b24e2d2b902ffdd1094991416eb74034 | [
"MIT"
] | null | null | null | projecteuler/013_large_sum.py | vikasmunshi/euler | da4a7916b24e2d2b902ffdd1094991416eb74034 | [
"MIT"
] | null | null | null | projecteuler/013_large_sum.py | vikasmunshi/euler | da4a7916b24e2d2b902ffdd1094991416eb74034 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3.8
# -*- coding: utf-8 -*-
"""
https://projecteuler.net/problem=13
Work out the first ten digits of the sum of the following one-hundred 50-digit numbers.
Answer: 5537376230
"""
numbers = """
37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690
"""
def n_digits_of_sum(digits: int) -> str:
    if digits:
        return str(sum([int(i[:digits + 1]) for i in numbers.splitlines() if i != '']))[:digits]
    else:
        return str(sum([int(i) for i in numbers.splitlines() if i != '']))


if __name__ == '__main__':
    from .evaluate import Watchdog

    with Watchdog() as wd:
        result = wd.evaluate_range(n_digits_of_sum, answers={10: '5537376230'})
3c0501e9ab5ad58506df6d367ba1f78bf1f781d3 | 67 | py | Python | fractalprojection/__init__.py | kcsaff/fractalprojection | e0cec282f2889b96491615eff349a8763a343e9d | [
"MIT"
] | null | null | null | fractalprojection/__init__.py | kcsaff/fractalprojection | e0cec282f2889b96491615eff349a8763a343e9d | [
"MIT"
] | null | null | null | fractalprojection/__init__.py | kcsaff/fractalprojection | e0cec282f2889b96491615eff349a8763a343e9d | [
"MIT"
] | null | null | null | def entry():
    from fractalprojection.main import main
    main()
3c503b218a97268cda86afa5a1e265746ac094a8 | 153 | py | Python | app/models/__init__.py | pd-Shah/FlaskRecycle | 54060aa5c0eacefc0874ea01cbe6545000b416e0 | [
"MIT"
] | 1 | 2022-03-18T19:25:55.000Z | 2022-03-18T19:25:55.000Z | app/models/__init__.py | pd-Shah/FlaskRecycle | 54060aa5c0eacefc0874ea01cbe6545000b416e0 | [
"MIT"
] | null | null | null | app/models/__init__.py | pd-Shah/FlaskRecycle | 54060aa5c0eacefc0874ea01cbe6545000b416e0 | [
"MIT"
] | null | null | null | from .user import *
from .message import *
from .comment import *
from .contact import *
from .offers import *
from .email import *
from .task import *
| 17 | 22 | 0.718954 | 21 | 153 | 5.238095 | 0.428571 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189542 | 153 | 8 | 23 | 19.125 | 0.887097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3c5d12f23b526dfbca4f8a648444c480b4a63e23 | 110 | py | Python | meerkat/views/__init__.py | by46/meerkat | 41376dc1636b5975a50020bad5632b4edbf5b16d | [
"MIT"
] | null | null | null | meerkat/views/__init__.py | by46/meerkat | 41376dc1636b5975a50020bad5632b4edbf5b16d | [
"MIT"
] | null | null | null | meerkat/views/__init__.py | by46/meerkat | 41376dc1636b5975a50020bad5632b4edbf5b16d | [
"MIT"
] | null | null | null | from . import index
from . import simple
from . import packages
from . import portal
from . import score
| 18.333333 | 23 | 0.727273 | 15 | 110 | 5.333333 | 0.466667 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 110 | 5 | 24 | 22 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3c81882350cbe2f5393730995294f0318ad818d3 | 86 | py | Python | pybamm/models/submodels/interface/inverse_kinetics/__init__.py | YannickNoelStephanKuhn/PyBaMM | d90636a755b7b77bbc75ae7bc2728c8ee2fa730a | [
"BSD-3-Clause"
] | 1 | 2020-06-22T10:11:40.000Z | 2020-06-22T10:11:40.000Z | pybamm/models/submodels/interface/inverse_kinetics/__init__.py | masoodtamaddon/PyBaMM | a31e2095600bb92e913598ac4d02b2b6b77b31c1 | [
"BSD-3-Clause"
] | 1 | 2021-01-23T08:54:49.000Z | 2021-01-23T08:54:49.000Z | pybamm/models/submodels/interface/inverse_kinetics/__init__.py | masoodtamaddon/PyBaMM | a31e2095600bb92e913598ac4d02b2b6b77b31c1 | [
"BSD-3-Clause"
] | 2 | 2020-05-21T23:16:29.000Z | 2020-06-22T10:11:40.000Z | from .inverse_butler_volmer import InverseButlerVolmer, CurrentForInverseButlerVolmer
| 43 | 85 | 0.918605 | 7 | 86 | 11 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05814 | 86 | 1 | 86 | 86 | 0.950617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3c9107402e84ec697ab6f0b0d5e42fb1420a986b | 25 | py | Python | rsHRF/sFIR/__init__.py | PeerHerholz/rsHRF | ac5d8bdb3718fc94822de5ad573f5e4dbdff68c8 | [
"MIT"
] | 16 | 2017-09-08T20:02:22.000Z | 2022-03-10T20:56:36.000Z | rsHRF/sFIR/__init__.py | PeerHerholz/rsHRF | ac5d8bdb3718fc94822de5ad573f5e4dbdff68c8 | [
"MIT"
] | 10 | 2019-06-06T18:32:40.000Z | 2021-09-13T08:14:15.000Z | rsHRF/sFIR/__init__.py | PeerHerholz/rsHRF | ac5d8bdb3718fc94822de5ad573f5e4dbdff68c8 | [
"MIT"
] | 8 | 2019-03-26T16:40:04.000Z | 2021-04-11T14:08:52.000Z | from . import smooth_fir
| 12.5 | 24 | 0.8 | 4 | 25 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1f7b8c0ef9e788f77886d65222018c488a671ae | 106 | py | Python | navigator/__init__.py | jelitox/navigator-api | 202efaff10e3d4bea8082dddbb7fad807f90bf11 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | navigator/__init__.py | jelitox/navigator-api | 202efaff10e3d4bea8082dddbb7fad807f90bf11 | [
"Apache-2.0",
"BSD-3-Clause"
] | 92 | 2020-10-17T17:48:02.000Z | 2022-03-31T10:30:25.000Z | navigator/__init__.py | jelitox/navigator-api | 202efaff10e3d4bea8082dddbb7fad807f90bf11 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | from .navigator import Application, Response, get_version
__all__ = (Application, Response, get_version)
| 26.5 | 57 | 0.811321 | 12 | 106 | 6.666667 | 0.666667 | 0.475 | 0.55 | 0.725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113208 | 106 | 3 | 58 | 35.333333 | 0.851064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3cae6a7644a20a9d3ddfb4ff2aaa1021cee60949 | 57 | py | Python | publications/tests/__init__.py | paleocore/django-publications | 34f2d13ef2bdd62f2c36e6760caf6a4bb82229f8 | [
"MIT"
] | 81 | 2015-01-26T11:32:09.000Z | 2022-01-04T03:20:24.000Z | publications/tests/__init__.py | paleocore/django-publications | 34f2d13ef2bdd62f2c36e6760caf6a4bb82229f8 | [
"MIT"
] | 38 | 2015-04-12T07:54:51.000Z | 2021-08-29T21:09:40.000Z | publications/tests/__init__.py | paleocore/django-publications | 34f2d13ef2bdd62f2c36e6760caf6a4bb82229f8 | [
"MIT"
] | 38 | 2015-01-14T16:10:58.000Z | 2021-09-27T16:14:04.000Z | from tests import Tests
from tests_live import LiveTests
| 19 | 32 | 0.859649 | 9 | 57 | 5.333333 | 0.555556 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140351 | 57 | 2 | 33 | 28.5 | 0.979592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3cdf1b6efa6a6828ff787d1d37ee76c656d7fc59 | 4,770 | py | Python | tests/test_pnr.py | andrewheusser/quail | fce1152a3f7dc983f4a3143698fdc3e27f61d1d2 | [
"MIT"
] | 17 | 2017-04-12T15:45:37.000Z | 2021-07-12T21:25:50.000Z | tests/test_pnr.py | vishalbelsare/quail | 6c847a49f31d953f3264294439576a23588b84d8 | [
"MIT"
] | 80 | 2017-04-12T18:54:10.000Z | 2021-06-05T17:28:33.000Z | tests/test_pnr.py | vishalbelsare/quail | 6c847a49f31d953f3264294439576a23588b84d8 | [
"MIT"
] | 8 | 2018-02-01T18:53:46.000Z | 2020-01-12T17:36:33.000Z | from quail.egg import Egg
import numpy as np
import pytest
def test_analysis_pfr():
presented=[[['cat', 'bat', 'hat', 'goat'],['zoo', 'animal', 'zebra', 'horse']]]
recalled=[[['bat', 'cat', 'goat', 'hat'],['animal', 'horse', 'zoo']]]
egg = Egg(pres=presented,rec=recalled)
assert np.array_equal(egg.analyze('pfr').data.values,[np.array([ 0., 1., 0., 0.]), np.array([ 0., 1., 0., 0.])])
def test_analysis_pnr_pos1():
presented=[[['cat', 'bat', 'hat', 'goat'],['zoo', 'animal', 'zebra', 'horse']]]
recalled=[[['bat', 'cat', 'goat', 'hat'],['animal', 'horse', 'zoo']]]
egg = Egg(pres=presented, rec=recalled)
assert np.array_equal(egg.analyze('pnr', position=1).data.values, [np.array([ 1., 0., 0., 0.]), np.array([ 0., 0., 0., 1.])])
#
def test_pfr_best_euclidean():
presented = [[[{'item' : i, 'feature1' : i*10} for i in range(1, 5)] for i in range(2)]]
recalled=[[[{'item' : i, 'feature1' : i*10} for i in [2, 1, 4, 3]],[{'item' : i, 'feature1' : i*10} for i in [2, 4, 1]]]]
egg = Egg(pres=presented,rec=recalled)
assert np.array_equal(egg.analyze('pfr', match='best', distance='euclidean').data.values,[np.array([0., 1., 0., 0.]),np.array([0., 1., 0., 0.])])
#
def test_pfr_best_euclidean_3d():
presented = [[[{'item' : i, 'feature1' : [i*10, 0, 0]} for i in range(1, 5)] for i in range(2)]]
recalled=[[[{'item' : i, 'feature1' : [i*10, 0, 0]} for i in [2, 1, 4, 3]],[{'item' : i, 'feature1' : [i*10, 0, 0]} for i in [2, 4, 1]]]]
egg = Egg(pres=presented,rec=recalled)
assert np.array_equal(egg.analyze('pfr', match='best', distance='euclidean').data.values,[np.array([0., 1., 0., 0.]),np.array([0., 1., 0., 0.])])
def test_pfr_best_euclidean_3d_2features():
presented = [[[{'item' : i, 'feature1' : [i*10, 0, 0], 'feature2' : 0} for i in range(1, 5)] for i in range(2)]]
recalled=[[[{'item' : i, 'feature1' : [i*10, 0, 0], 'feature2': 0} for i in [2, 1, 4, 3]],[{'item' : i, 'feature1' : [i*10, 0, 0], 'feature2':0} for i in [2, 4, 1]]]]
egg = Egg(pres=presented,rec=recalled)
assert np.array_equal(egg.analyze('pfr', match='best', distance='euclidean', features=['feature1', 'feature2']).data.values,[np.array([0., 1., 0., 0.]),np.array([0., 1., 0., 0.])])
def test_acc_best_euclidean_3d_features_not_set():
presented = [[[{'item' : i, 'feature1' : [i*10, 0, 0]} for i in range(1, 5)] for i in range(2)]]
recalled=[[[{'item' : i, 'feature1' : [i*10, 0, 0]} for i in [2, 1, 4, 3]],[{'item' : i, 'feature1' : [i*10, 0, 0]} for i in [2, 4, 1]]]]
egg = Egg(pres=presented,rec=recalled)
assert np.array_equal(egg.analyze('pfr', match='best', distance='euclidean').data.values,[np.array([0., 1., 0., 0.]),np.array([0., 1., 0., 0.])])
def test_pfr_best_euclidean_3d_exception_no_features():
presented=[[[[10, 0, 0], [20, 0, 0], [30, 0, 0], [40, 0, 0]],
[[10, 0, 0], [20, 0, 0], [30, 0, 0], [40, 0, 0]]]]
recalled=[[[[20, 0, 0], [10, 0, 0], [40, 0, 0], [30, 0, 0]],
[[20, 0, 0], [40, 0, 0], [10, 0, 0]]]]
egg = Egg(pres=presented,rec=recalled)
with pytest.raises(Exception):
assert np.array_equal(egg.analyze('pfr', match='best', distance='euclidean').data.values,[np.array([0., 1., 0., 0.]),np.array([0., 1., 0., 0.])])
def test_pfr_best_euclidean_3d_exception_item_specified():
presented=[[[[10, 0, 0], [20, 0, 0], [30, 0, 0], [40, 0, 0]],
[[10, 0, 0], [20, 0, 0], [30, 0, 0], [40, 0, 0]]]]
recalled=[[[[20, 0, 0], [10, 0, 0], [40, 0, 0], [30, 0, 0]],
[[20, 0, 0], [40, 0, 0], [10, 0, 0]]]]
egg = Egg(pres=presented,rec=recalled)
assert np.array_equal(egg.analyze('pfr', match='best', distance='euclidean', features='item').data.values,[np.array([0., 1., 0., 0.]),np.array([0., 1., 0., 0.])])
def test_pfr_best_correlation_3d():
presented=[[[[10, 0, 10], [20, 0, 0], [30, 0, -10], [40, 0, -20]],
[[10, 0, 10], [20, 0, 0], [30, 0, -10], [40, 0, -20]]]]
recalled=[[[[20, 0, 0], [10, 0, 10], [40, 0, -20], [30, 0, -10]],
[[20, 0, 0], [40, 0, -20], [10, 0, 10]]]]
egg = Egg(pres=presented,rec=recalled)
assert np.array_equal(egg.analyze('pfr', match='best', distance='correlation', features='item').data.values,[np.array([0., 1., 0., 0.]),np.array([0., 1., 0., 0.])])
def test_pfr_smooth_correlation_3d():
presented=[[[[10, 0, 10], [20, 0, 0], [30, 0, -10], [40, 0, -20]],
[[10, 0, 10], [20, 0, 0], [30, 0, -10], [40, 0, -20]]]]
recalled=[[[[20, 0, 0], [10, 0, 10], [40, 0, -20], [30, 0, -10]],
[[20, 0, 0], [40, 0, -20], [10, 0, 10]]]]
egg = Egg(pres=presented,rec=recalled)
egg.analyze('pfr', match='smooth', distance='correlation', features='item').data.values
| 65.342466 | 184 | 0.539203 | 788 | 4,770 | 3.194162 | 0.085025 | 0.053238 | 0.054033 | 0.057211 | 0.900675 | 0.887565 | 0.863329 | 0.863329 | 0.853397 | 0.853397 | 0 | 0.114447 | 0.183019 | 4,770 | 72 | 185 | 66.25 | 0.531434 | 0 | 0 | 0.612903 | 0 | 0 | 0.095218 | 0 | 0 | 0 | 0 | 0 | 0.145161 | 1 | 0.16129 | false | 0 | 0.048387 | 0 | 0.209677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3ceddb5b1aaea21d9b4e8754450deb58172cc652 | 149 | py | Python | simple_notes/notes/admin.py | Namnetsy/simple-notes-django-app | 385fa829c43162e1c1a4682acc4668623e6e47b3 | [
"MIT"
] | null | null | null | simple_notes/notes/admin.py | Namnetsy/simple-notes-django-app | 385fa829c43162e1c1a4682acc4668623e6e47b3 | [
"MIT"
] | 9 | 2021-04-08T20:20:53.000Z | 2022-03-12T00:54:21.000Z | simple_notes/notes/admin.py | Namnetsy/simple-notes-django-app | 385fa829c43162e1c1a4682acc4668623e6e47b3 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Notebook, Note, PublicSharedNote
admin.site.register(
[Notebook, Note, PublicSharedNote]
)
| 21.285714 | 52 | 0.785235 | 17 | 149 | 6.882353 | 0.647059 | 0.205128 | 0.478632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134228 | 149 | 6 | 53 | 24.833333 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a7216f90aac406ae01f46d6d236d36550c6521ee | 43 | py | Python | Misc/tutmain2.py | Jigyanshu17/Python-Ka-Saara-Gyaan | d3f5dbb3fef45a7a6953bf6041b0b3bf6c54ad2b | [
"Apache-2.0"
] | null | null | null | Misc/tutmain2.py | Jigyanshu17/Python-Ka-Saara-Gyaan | d3f5dbb3fef45a7a6953bf6041b0b3bf6c54ad2b | [
"Apache-2.0"
] | null | null | null | Misc/tutmain2.py | Jigyanshu17/Python-Ka-Saara-Gyaan | d3f5dbb3fef45a7a6953bf6041b0b3bf6c54ad2b | [
"Apache-2.0"
] | null | null | null | import tutmain1
print(tutmain1.add(5,3))
| 14.333333 | 25 | 0.744186 | 7 | 43 | 4.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0.116279 | 43 | 2 | 26 | 21.5 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
a730f587f73c6677ce6150b507fdc0d4f8855d1d | 104 | py | Python | mochinode/__init__.py | mochio326/mochinode | 747f61bdd4047d5c0cdb5e3a0f0c24324194a414 | [
"MIT"
] | 7 | 2019-05-24T00:40:14.000Z | 2021-03-16T01:35:46.000Z | mochinode/__init__.py | mochio326/mochinode | 747f61bdd4047d5c0cdb5e3a0f0c24324194a414 | [
"MIT"
] | 1 | 2019-03-08T05:30:35.000Z | 2019-03-08T05:30:35.000Z | mochinode/__init__.py | mochio326/mochinode | 747f61bdd4047d5c0cdb5e3a0f0c24324194a414 | [
"MIT"
] | 2 | 2019-03-03T09:53:55.000Z | 2019-03-08T05:00:03.000Z | # -*- coding: utf-8 -*-
from .view import *
from .node import *
from .line import *
from .port import *
| 17.333333 | 23 | 0.634615 | 15 | 104 | 4.4 | 0.6 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.201923 | 104 | 5 | 24 | 20.8 | 0.783133 | 0.201923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
59c6c954ae78977661582f6d04a0698e947b1744 | 138 | py | Python | ch1/Babylonian_sqr/test_algorithm.py | omar659/Algorithms-Sequential-Parallel-Distributed | 3543631139a20625f413cea2ba1f013f3a40d123 | [
"MIT"
] | 2 | 2020-02-19T09:27:20.000Z | 2020-02-19T09:28:21.000Z | ch1/Babylonian_sqr/test_algorithm.py | omar-3/Algorithms-Sequential-Parallel-Distributed | 3543631139a20625f413cea2ba1f013f3a40d123 | [
"MIT"
] | null | null | null | ch1/Babylonian_sqr/test_algorithm.py | omar-3/Algorithms-Sequential-Parallel-Distributed | 3543631139a20625f413cea2ba1f013f3a40d123 | [
"MIT"
] | null | null | null | from algorithm import square
def test():
assert int(square(16)) == 4
assert int(square(9)) == 3
assert int(square(100)) == 10 | 23 | 33 | 0.630435 | 21 | 138 | 4.142857 | 0.666667 | 0.310345 | 0.517241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092593 | 0.217391 | 138 | 6 | 33 | 23 | 0.712963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.6 | 1 | 0.2 | true | 0 | 0.2 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
59cfdabc5d9e63fca39151e7c4cc4ae073cb47ce | 294 | py | Python | stripe_modern/api_resources/terminal/__init__.py | photocrowd/stripe-python | 7c705e3d41f38f8524e419eb7ea18c1425a4ad89 | [
"MIT"
] | null | null | null | stripe_modern/api_resources/terminal/__init__.py | photocrowd/stripe-python | 7c705e3d41f38f8524e419eb7ea18c1425a4ad89 | [
"MIT"
] | null | null | null | stripe_modern/api_resources/terminal/__init__.py | photocrowd/stripe-python | 7c705e3d41f38f8524e419eb7ea18c1425a4ad89 | [
"MIT"
] | null | null | null | from __future__ import absolute_import, division, print_function
# flake8: noqa
from stripe_modern.api_resources.terminal.connection_token import ConnectionToken
from stripe_modern.api_resources.terminal.location import Location
from stripe_modern.api_resources.terminal.reader import Reader
| 36.75 | 81 | 0.877551 | 38 | 294 | 6.447368 | 0.5 | 0.122449 | 0.195918 | 0.232653 | 0.440816 | 0.440816 | 0 | 0 | 0 | 0 | 0 | 0.00369 | 0.078231 | 294 | 7 | 82 | 42 | 0.900369 | 0.040816 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.25 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
abbfe4c1064ae341e4fa5e43c5aa86e40f4b2053 | 106 | py | Python | snapchat_fs/__init__.py | hausdorff/snapchat-fs | acb6af5d133fd5a5d0880a38425e1bf0e23d0069 | [
"MIT"
] | 145 | 2015-01-07T22:28:06.000Z | 2022-03-26T15:05:05.000Z | snapchat_fs/__init__.py | Tominous/snapchat-fs | acb6af5d133fd5a5d0880a38425e1bf0e23d0069 | [
"MIT"
] | 2 | 2015-02-21T21:42:43.000Z | 2022-02-11T00:15:38.000Z | snapchat_fs/__init__.py | Tominous/snapchat-fs | acb6af5d133fd5a5d0880a38425e1bf0e23d0069 | [
"MIT"
] | 12 | 2015-07-17T00:47:47.000Z | 2022-01-28T22:45:27.000Z | from snapchatfs import list_all_downloadable_sfs_files, download_all_sfs, download_by_id, upload_sfs_file
| 53 | 105 | 0.90566 | 17 | 106 | 5.058824 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066038 | 106 | 1 | 106 | 106 | 0.868687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f9f10f26c734c8a468d9765dd0dcd505b9d6c80c | 32 | py | Python | main.py | hexdev-io/action_setup-cross-tc | 1f38df4c6518f716bf77c0b12bf5cd5c658d6920 | [
"MIT"
] | null | null | null | main.py | hexdev-io/action_setup-cross-tc | 1f38df4c6518f716bf77c0b12bf5cd5c658d6920 | [
"MIT"
] | null | null | null | main.py | hexdev-io/action_setup-cross-tc | 1f38df4c6518f716bf77c0b12bf5cd5c658d6920 | [
"MIT"
] | null | null | null | import sys
print("Hello world") | 10.666667 | 20 | 0.75 | 5 | 32 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 3 | 20 | 10.666667 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
e645ba29c6b648d5592671baed23ec7150e0d951 | 125 | py | Python | Testing/RequestFactory/get/views.py | looking-for-a-job/django-examples | dfafa450668cac5c0351f6c7238b8886511229bf | [
"Unlicense"
] | null | null | null | Testing/RequestFactory/get/views.py | looking-for-a-job/django-examples | dfafa450668cac5c0351f6c7238b8886511229bf | [
"Unlicense"
] | null | null | null | Testing/RequestFactory/get/views.py | looking-for-a-job/django-examples | dfafa450668cac5c0351f6c7238b8886511229bf | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python
from django.http import HttpResponse
def my_view(request):
return HttpResponse("reponse string")
| 17.857143 | 41 | 0.76 | 17 | 125 | 5.529412 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136 | 125 | 6 | 42 | 20.833333 | 0.87037 | 0.16 | 0 | 0 | 0 | 0 | 0.134615 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
050ffcd87026cd1fc2ee0797cd2da91a932222e2 | 30 | py | Python | __init__.py | 85599/moneychamb | 1dff0d9639646c1a6d5bb91841b8eef7d8aa74cc | [
"MIT"
] | 1 | 2021-05-07T13:28:06.000Z | 2021-05-07T13:28:06.000Z | __init__.py | 85599/moneychamb | 1dff0d9639646c1a6d5bb91841b8eef7d8aa74cc | [
"MIT"
] | null | null | null | __init__.py | 85599/moneychamb | 1dff0d9639646c1a6d5bb91841b8eef7d8aa74cc | [
"MIT"
] | null | null | null | from .mainV2 import Portfolio
| 15 | 29 | 0.833333 | 4 | 30 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0510b6a3f2b10324df818e2609978974002bee20 | 99 | py | Python | tests/test_app_pytest.py | 734990893/Nana | 712abb39aeb7a5c6e931fcd6e4802c62ca39c791 | [
"MIT"
] | null | null | null | tests/test_app_pytest.py | 734990893/Nana | 712abb39aeb7a5c6e931fcd6e4802c62ca39c791 | [
"MIT"
] | null | null | null | tests/test_app_pytest.py | 734990893/Nana | 712abb39aeb7a5c6e931fcd6e4802c62ca39c791 | [
"MIT"
] | null | null | null | from src.api.app import welcome
def test_welcome():
assert welcome() == 'Nana welcomes you!'
| 16.5 | 44 | 0.69697 | 14 | 99 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 99 | 5 | 45 | 19.8 | 0.839506 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
05368aeb733e84d9413414743bb3d6c7a624a6de | 5,208 | py | Python | tests/commands/cloud/test_pull.py | InvestWeMust/lean-cli | a7241a0af6202dc7d56c0f35d09e51798cc5d426 | [
"Apache-2.0"
] | 76 | 2021-02-03T02:32:32.000Z | 2022-03-28T17:04:03.000Z | tests/commands/cloud/test_pull.py | InvestWeMust/lean-cli | a7241a0af6202dc7d56c0f35d09e51798cc5d426 | [
"Apache-2.0"
] | 64 | 2021-02-28T23:14:17.000Z | 2022-03-30T23:22:24.000Z | tests/commands/cloud/test_pull.py | InvestWeMust/lean-cli | a7241a0af6202dc7d56c0f35d09e51798cc5d426 | [
"Apache-2.0"
] | 50 | 2021-02-11T01:25:24.000Z | 2022-03-17T03:56:29.000Z | # QUANTCONNECT.COM - Democratizing Finance, Empowering Individuals.
# Lean CLI v1.0. Copyright 2021 QuantConnect Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
from click.testing import CliRunner
from dependency_injector import providers
from lean.commands import lean
from lean.container import container
from tests.test_helpers import create_api_project, create_fake_lean_cli_directory
def test_cloud_pull_pulls_all_non_bootcamp_projects_when_no_options_given() -> None:
create_fake_lean_cli_directory()
cloud_projects = [create_api_project(1, "Project 1"),
create_api_project(2, "Project 2"),
create_api_project(3, "Project 3"),
create_api_project(4, "Boot Camp/Project 4"),
create_api_project(5, "Boot Camp/Project 5")]
api_client = mock.Mock()
api_client.projects.get_all.return_value = cloud_projects
container.api_client.override(providers.Object(api_client))
pull_manager = mock.Mock()
container.pull_manager.override(providers.Object(pull_manager))
result = CliRunner().invoke(lean, ["cloud", "pull"])
assert result.exit_code == 0
pull_manager.pull_projects.assert_called_once_with(cloud_projects[:3])
def test_cloud_pull_pulls_all_projects_when_pull_bootcamp_option_given() -> None:
create_fake_lean_cli_directory()
cloud_projects = [create_api_project(1, "Project 1"),
create_api_project(2, "Project 2"),
create_api_project(3, "Project 3"),
create_api_project(4, "Boot Camp/Project 4"),
create_api_project(5, "Boot Camp/Project 5")]
api_client = mock.Mock()
api_client.projects.get_all.return_value = cloud_projects
container.api_client.override(providers.Object(api_client))
pull_manager = mock.Mock()
container.pull_manager.override(providers.Object(pull_manager))
result = CliRunner().invoke(lean, ["cloud", "pull", "--pull-bootcamp"])
assert result.exit_code == 0
pull_manager.pull_projects.assert_called_once_with(cloud_projects)
def test_cloud_pull_pulls_project_by_id() -> None:
create_fake_lean_cli_directory()
cloud_projects = [create_api_project(1, "Project 1"),
create_api_project(2, "Project 2"),
create_api_project(3, "Project 3"),
create_api_project(4, "Boot Camp/Project 4"),
create_api_project(5, "Boot Camp/Project 5")]
api_client = mock.Mock()
api_client.projects.get_all.return_value = cloud_projects
container.api_client.override(providers.Object(api_client))
pull_manager = mock.Mock()
container.pull_manager.override(providers.Object(pull_manager))
result = CliRunner().invoke(lean, ["cloud", "pull", "--project", "1"])
assert result.exit_code == 0
pull_manager.pull_projects.assert_called_once_with([cloud_projects[0]])
def test_cloud_pull_pulls_project_by_name() -> None:
create_fake_lean_cli_directory()
cloud_projects = [create_api_project(1, "Project 1"),
create_api_project(2, "Project 2"),
create_api_project(3, "Project 3"),
create_api_project(4, "Boot Camp/Project 4"),
create_api_project(5, "Boot Camp/Project 5")]
api_client = mock.Mock()
api_client.projects.get_all.return_value = cloud_projects
container.api_client.override(providers.Object(api_client))
pull_manager = mock.Mock()
container.pull_manager.override(providers.Object(pull_manager))
result = CliRunner().invoke(lean, ["cloud", "pull", "--project", "Project 1"])
assert result.exit_code == 0
pull_manager.pull_projects.assert_called_once_with([cloud_projects[0]])
def test_cloud_pull_aborts_when_project_input_matches_no_cloud_projects() -> None:
create_fake_lean_cli_directory()
cloud_projects = [create_api_project(1, "Project 1"),
create_api_project(2, "Project 2"),
create_api_project(3, "Project 3"),
create_api_project(4, "Boot Camp/Project 4"),
create_api_project(5, "Boot Camp/Project 5")]
api_client = mock.Mock()
api_client.projects.get_all.return_value = cloud_projects
container.api_client.override(providers.Object(api_client))
pull_manager = mock.Mock()
container.pull_manager.override(providers.Object(pull_manager))
result = CliRunner().invoke(lean, ["cloud", "pull", "--project", "Project 4"])
assert result.exit_code != 0
pull_manager.pull_projects.assert_not_called()
| 38.014599 | 84 | 0.697773 | 680 | 5,208 | 5.036765 | 0.191176 | 0.068321 | 0.12146 | 0.029781 | 0.753869 | 0.746277 | 0.732263 | 0.719416 | 0.719416 | 0.719416 | 0 | 0.017055 | 0.200653 | 5,208 | 136 | 85 | 38.294118 | 0.805669 | 0.122696 | 0 | 0.753086 | 0 | 0 | 0.094601 | 0 | 0 | 0 | 0 | 0 | 0.123457 | 1 | 0.061728 | false | 0 | 0.074074 | 0 | 0.135802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
05705a0e603906cdbb2c2b0c32fb199a0bd43f0d | 40 | py | Python | map/__init__.py | dc-aichara/COVID19-India-Tracker | 6685c996ca116b60e4ebbf0d3902d41ae2608864 | [
"MIT"
] | 9 | 2020-03-22T11:11:16.000Z | 2021-07-25T02:03:38.000Z | map/__init__.py | dc-aichara/COVID19-India-Tracker | 6685c996ca116b60e4ebbf0d3902d41ae2608864 | [
"MIT"
] | 1 | 2021-04-17T19:52:03.000Z | 2021-04-19T13:40:27.000Z | map/__init__.py | dc-aichara/COVID19-India-Tracker | 6685c996ca116b60e4ebbf0d3902d41ae2608864 | [
"MIT"
] | 2 | 2020-04-11T21:54:54.000Z | 2021-04-17T14:58:18.000Z | from .scatter_map import scatter_mapbox
| 20 | 39 | 0.875 | 6 | 40 | 5.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |