hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5490ecf911b1dfc8f7debc8aabd7391b7dcc7d81 | 25,273 | py | Python | sota/cnn/genotypes.py | RuochenWang/darts-pt | a5393d5c9731a6eff0bc379f3153bf24a2ab8ec5 | [
"Apache-2.0"
] | 70 | 2021-02-26T15:03:35.000Z | 2022-03-28T03:08:21.000Z | sota/cnn/genotypes.py | RuochenWang/darts-pt | a5393d5c9731a6eff0bc379f3153bf24a2ab8ec5 | [
"Apache-2.0"
] | 7 | 2021-04-14T06:21:28.000Z | 2022-03-24T01:00:46.000Z | sota/cnn/genotypes.py | RuochenWang/darts-pt | a5393d5c9731a6eff0bc379f3153bf24a2ab8ec5 | [
"Apache-2.0"
] | 13 | 2021-04-16T13:40:26.000Z | 2021-12-27T13:36:34.000Z | from collections import namedtuple
Genotype = namedtuple('Genotype', 'normal normal_concat reduce reduce_concat')
PRIMITIVES = [
'none',
'noise',
'max_pool_3x3',
'avg_pool_3x3',
'skip_connect',
'sep_conv_3x3',
'sep_conv_5x5',
'dil_conv_3x3',
'dil_conv_5x5'
]
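# Illustrative sanity check (not part of the original file): every normal/
# reduce entry of a Genotype is an (op_name, input_node) pair, so a genotype
# can be validated against PRIMITIVES as below. `check_genotype` is a
# hypothetical helper added here only as an example.
def check_genotype(genotype):
    for op, node in list(genotype.normal) + list(genotype.reduce):
        assert op in PRIMITIVES, 'unknown op: %s' % op
        assert node >= 0, 'invalid input node index'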
######## S1-S4 Space ########
#### cifar10 s1 - s4
darts_pt_s1_1 = Genotype(normal=[('skip_connect', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 1), ('dil_conv_3x3', 3), ('dil_conv_5x5', 4)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('dil_conv_3x3', 1), ('max_pool_3x3', 0), ('max_pool_3x3', 1), ('sep_conv_3x3', 1), ('skip_connect', 3), ('skip_connect', 2), ('dil_conv_5x5', 3)], reduce_concat=range(2, 6)) # 1.87
darts_pt_s1_2 = Genotype(normal=[('dil_conv_3x3', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 2), ('skip_connect', 0), ('skip_connect', 3), ('dil_conv_3x3', 3), ('dil_conv_5x5', 4)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1), ('max_pool_3x3', 0), ('max_pool_3x3', 1), ('sep_conv_3x3', 1), ('skip_connect', 2), ('skip_connect', 2), ('dil_conv_5x5', 4)], reduce_concat=range(2, 6)) # 2.02
darts_pt_s2_1 = Genotype(normal=[('skip_connect', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 1), ('skip_connect', 3), ('sep_conv_3x3', 1), ('sep_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 1), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 3.09
darts_pt_s2_2 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 1), ('skip_connect', 2), ('sep_conv_3x3', 0), ('skip_connect', 3), ('skip_connect', 3), ('sep_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 3), ('sep_conv_3x3', 1), ('skip_connect', 3)], reduce_concat=range(2, 6)) # 2.79
darts_pt_s3_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 0), ('skip_connect', 3), ('skip_connect', 3), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 3.42
darts_pt_s3_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 1), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('skip_connect', 3)], reduce_concat=range(2, 6)) # 3.54
darts_pt_s4_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], reduce_concat=range(2, 6)) # 4.7
darts_pt_s4_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2)], reduce_concat=range(2, 6)) # 4.7
blank_pt_s1_1 = Genotype(normal=[('skip_connect', 0), ('dil_conv_5x5', 1), ('sep_conv_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 3), ('sep_conv_3x3', 0), ('max_pool_3x3', 1)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('max_pool_3x3', 1), ('max_pool_3x3', 0), ('max_pool_3x3', 1), ('skip_connect', 2), ('dil_conv_5x5', 3), ('dil_conv_5x5', 2), ('skip_connect', 3)], reduce_concat=range(2, 6)) # 2.36
blank_pt_s1_2 = Genotype(normal=[('skip_connect', 0), ('skip_connect', 1), ('sep_conv_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 2), ('dil_conv_5x5', 3)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('dil_conv_3x3', 1), ('max_pool_3x3', 1), ('dil_conv_5x5', 2), ('avg_pool_3x3', 0), ('skip_connect', 3), ('skip_connect', 2), ('dil_conv_5x5', 3)], reduce_concat=range(2, 6)) # 2.39
blank_pt_s2_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('skip_connect', 3)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('skip_connect', 2), ('sep_conv_3x3', 0), ('skip_connect', 3), ('sep_conv_3x3', 2), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 3.45
blank_pt_s2_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2)], reduce_concat=range(2, 6)) # 4.3
blank_pt_s3_1 = Genotype(normal=[('sep_conv_3x3', 0), ('skip_connect', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('skip_connect', 3), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 3)], reduce_concat=range(2, 6)) # 2.9
blank_pt_s3_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 2)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 3), ('skip_connect', 2), ('skip_connect', 3)], reduce_concat=range(2, 6)) # 3.81
blank_pt_s4_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], reduce_concat=range(2, 6)) # 4.7
blank_pt_s4_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], reduce_concat=range(2, 6)) # 4.7
#### cifar100 s1 - s4
darts_pt_s1_c100_1 = Genotype(normal=[('skip_connect', 0), ('skip_connect', 1), ('dil_conv_5x5', 0), ('skip_connect', 2), ('sep_conv_3x3', 2), ('skip_connect', 3), ('sep_conv_3x3', 0), ('skip_connect', 2)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1), ('max_pool_3x3', 1), ('dil_conv_5x5', 2), ('sep_conv_3x3', 1), ('skip_connect', 2), ('skip_connect', 2), ('dil_conv_5x5', 3)], reduce_concat=range(2, 6)) # 2.47
darts_pt_s1_c100_2 = Genotype(normal=[('dil_conv_3x3', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 2), ('skip_connect', 0), ('dil_conv_3x3', 3), ('skip_connect', 2), ('dil_conv_5x5', 3)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('max_pool_3x3', 1), ('max_pool_3x3', 0), ('dil_conv_5x5', 2), ('avg_pool_3x3', 0), ('sep_conv_3x3', 1), ('max_pool_3x3', 1), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 2.07
darts_pt_s2_c100_1 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 1), ('skip_connect', 3), ('skip_connect', 3), ('sep_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 2), ('skip_connect', 0), ('skip_connect', 3), ('sep_conv_3x3', 2), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 3.08
darts_pt_s2_c100_2 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('skip_connect', 2), ('sep_conv_3x3', 3), ('skip_connect', 3), ('sep_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 1), ('skip_connect', 2), ('sep_conv_3x3', 1), ('skip_connect', 3), ('sep_conv_3x3', 0), ('skip_connect', 3)], reduce_concat=range(2, 6)) # 3.11
darts_pt_s3_c100_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('skip_connect', 2), ('sep_conv_3x3', 3), ('sep_conv_3x3', 1), ('skip_connect', 3)], reduce_concat=range(2, 6)) # 3.83
darts_pt_s3_c100_2 = Genotype(normal=[('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 1), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('skip_connect', 1), ('skip_connect', 3), ('sep_conv_3x3', 2), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 3.44
darts_pt_s4_c100_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], reduce_concat=range(2, 6)) # 4.75
darts_pt_s4_c100_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 4.75
blank_pt_s1_c100_1 = Genotype(normal=[('skip_connect', 0), ('dil_conv_5x5', 1), ('sep_conv_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 3), ('sep_conv_3x3', 0), ('max_pool_3x3', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('dil_conv_3x3', 1), ('max_pool_3x3', 1), ('skip_connect', 2), ('sep_conv_3x3', 1), ('dil_conv_5x5', 3), ('skip_connect', 3), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 2.46
blank_pt_s1_c100_2 = Genotype(normal=[('skip_connect', 0), ('dil_conv_5x5', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('max_pool_3x3', 0), ('skip_connect', 3), ('sep_conv_3x3', 0), ('max_pool_3x3', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('dil_conv_3x3', 1), ('avg_pool_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 1), ('dil_conv_5x5', 3), ('dil_conv_5x5', 2), ('skip_connect', 3)], reduce_concat=range(2, 6)) # 2.44
blank_pt_s2_c100_1 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 3), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('skip_connect', 2), ('skip_connect', 3), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 3.14
blank_pt_s2_c100_2 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 3), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('skip_connect', 2), ('sep_conv_3x3', 2), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 2.84
blank_pt_s3_c100_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 2), ('sep_conv_3x3', 3), ('sep_conv_3x3', 1), ('skip_connect', 3)], reduce_concat=range(2, 6)) # 3.89
blank_pt_s3_c100_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('skip_connect', 3), ('skip_connect', 0), ('sep_conv_3x3', 1)], reduce_concat=range(2, 6)) # 3.95
blank_pt_s4_c100_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], reduce_concat=range(2, 6)) # 4.75
blank_pt_s4_c100_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], reduce_concat=range(2, 6)) # 4.75
#### svhn s1 - s4
darts_pt_s1_svhn_1 = Genotype(normal=[('skip_connect', 0), ('skip_connect', 1), ('dil_conv_5x5', 0), ('skip_connect', 2), ('max_pool_3x3', 0), ('sep_conv_3x3', 2), ('dil_conv_3x3', 3), ('dil_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('dil_conv_3x3', 1), ('avg_pool_3x3', 1), ('dil_conv_5x5', 2), ('skip_connect', 2), ('dil_conv_5x5', 3), ('avg_pool_3x3', 0), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 2.38
darts_pt_s1_svhn_2 = Genotype(normal=[('skip_connect', 0), ('skip_connect', 1), ('dil_conv_5x5', 0), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 3), ('dil_conv_3x3', 3), ('dil_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('max_pool_3x3', 1), ('avg_pool_3x3', 0), ('dil_conv_5x5', 2), ('sep_conv_3x3', 1), ('dil_conv_5x5', 3), ('skip_connect', 3), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 2.05
darts_pt_s2_svhn_1 = Genotype(normal=[('sep_conv_3x3', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 2), ('sep_conv_3x3', 0), ('skip_connect', 3), ('sep_conv_3x3', 1), ('sep_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('skip_connect', 1), ('skip_connect', 0), ('sep_conv_3x3', 2), ('skip_connect', 2), ('sep_conv_3x3', 3), ('skip_connect', 3), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 3.08
darts_pt_s2_svhn_2 = Genotype(normal=[('skip_connect', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 2), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 3.14
darts_pt_s3_svhn_1 = Genotype(normal=[('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('skip_connect', 2), ('skip_connect', 3), ('sep_conv_3x3', 3), ('sep_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('skip_connect', 1), ('skip_connect', 1), ('sep_conv_3x3', 2), ('skip_connect', 2), ('sep_conv_3x3', 3), ('sep_conv_3x3', 3), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 3.50
darts_pt_s3_svhn_2 = Genotype(normal=[('skip_connect', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 2), ('sep_conv_3x3', 3), ('skip_connect', 2), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 2.82
darts_pt_s4_svhn_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], reduce_concat=range(2, 6)) # 4.70
darts_pt_s4_svhn_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 3)], reduce_concat=range(2, 6)) # 4.70
blank_pt_s1_svhn_1 = Genotype(normal=[('dil_conv_3x3', 0), ('dil_conv_5x5', 1), ('sep_conv_3x3', 1), ('skip_connect', 2), ('sep_conv_3x3', 2), ('skip_connect', 3), ('dil_conv_5x5', 3), ('dil_conv_5x5', 4)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('max_pool_3x3', 1), ('avg_pool_3x3', 1), ('skip_connect', 2), ('skip_connect', 2), ('dil_conv_5x5', 3), ('skip_connect', 3), ('dil_conv_5x5', 4)], reduce_concat=range(2, 6)) # 2.95
blank_pt_s1_svhn_2 = Genotype(normal=[('dil_conv_3x3', 0), ('dil_conv_5x5', 1), ('sep_conv_3x3', 1), ('dil_conv_3x3', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('max_pool_3x3', 1)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('dil_conv_3x3', 1), ('max_pool_3x3', 0), ('avg_pool_3x3', 1), ('avg_pool_3x3', 0), ('skip_connect', 2), ('max_pool_3x3', 1), ('dil_conv_5x5', 4)], reduce_concat=range(2, 6)) # 3.43
blank_pt_s2_svhn_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 3)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 2), ('sep_conv_3x3', 3), ('skip_connect', 2), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 4.20
blank_pt_s2_svhn_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 3), ('sep_conv_3x3', 3), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('skip_connect', 1), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('skip_connect', 2)], reduce_concat=range(2, 6)) # 4.26
blank_pt_s3_svhn_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('skip_connect', 2), ('sep_conv_3x3', 0), ('skip_connect', 3), ('sep_conv_3x3', 1), ('sep_conv_3x3', 3)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 2), ('sep_conv_3x3', 3), ('skip_connect', 2), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 3.90
blank_pt_s3_svhn_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 1), ('sep_conv_3x3', 2), ('skip_connect', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 2), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 4.64
blank_pt_s4_svhn_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 1), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2)], reduce_concat=range(2, 6)) # 4.70
blank_pt_s4_svhn_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 3), ('sep_conv_3x3', 1), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 4.70
######## DARTS Space ########
##### darts
darts_pt_s5_0 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_5x5', 0), ('dil_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 2), ('skip_connect', 0), ('max_pool_3x3', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('avg_pool_3x3', 1), ('dil_conv_5x5', 1), ('dil_conv_3x3', 2), ('avg_pool_3x3', 0), ('max_pool_3x3', 2), ('sep_conv_3x3', 2), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 2.85 (param size)
darts_pt_s5_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('dil_conv_3x3', 1), ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 0), ('dil_conv_3x3', 4)], normal_concat=range(2, 6), reduce=[('dil_conv_5x5', 0), ('avg_pool_3x3', 1), ('max_pool_3x3', 0), ('sep_conv_5x5', 2), ('max_pool_3x3', 1), ('dil_conv_3x3', 3), ('sep_conv_5x5', 0), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6)) # 3.05
darts_pt_s5_2 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('dil_conv_3x3', 1), ('skip_connect', 2), ('dil_conv_3x3', 3), ('max_pool_3x3', 1), ('skip_connect', 2)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('max_pool_3x3', 1), ('avg_pool_3x3', 0), ('sep_conv_3x3', 1), ('dil_conv_5x5', 2), ('skip_connect', 3), ('sep_conv_3x3', 2), ('sep_conv_5x5', 4)], reduce_concat=range(2, 6)) # 2.66
darts_pt_s5_3 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 2), ('sep_conv_3x3', 1), ('skip_connect', 3)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('sep_conv_5x5', 1), ('max_pool_3x3', 1), ('skip_connect', 2), ('dil_conv_5x5', 1), ('max_pool_3x3', 3), ('sep_conv_5x5', 2), ('skip_connect', 3)], reduce_concat=range(2, 6)) # 3.33
#### blank
blank_pt_s5_0 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_5x5', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('avg_pool_3x3', 1), ('skip_connect', 3), ('max_pool_3x3', 0), ('avg_pool_3x3', 2)], normal_concat=range(2, 6), reduce=[('skip_connect', 0), ('sep_conv_5x5', 1), ('dil_conv_3x3', 1), ('sep_conv_5x5', 2), ('max_pool_3x3', 0), ('skip_connect', 3), ('max_pool_3x3', 0), ('max_pool_3x3', 2)], reduce_concat=range(2, 6)) # 3.04 (param size)
blank_pt_s5_1 = Genotype(normal=[('avg_pool_3x3', 0), ('avg_pool_3x3', 1), ('sep_conv_3x3', 0), ('max_pool_3x3', 2), ('skip_connect', 2), ('sep_conv_3x3', 3), ('sep_conv_5x5', 0), ('sep_conv_5x5', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('dil_conv_5x5', 1), ('sep_conv_3x3', 0), ('skip_connect', 1), ('avg_pool_3x3', 2), ('max_pool_3x3', 3), ('sep_conv_5x5', 3), ('sep_conv_5x5', 4)], reduce_concat=range(2, 6)) # 3.15
blank_pt_s5_2 = Genotype(normal=[('avg_pool_3x3', 0), ('dil_conv_3x3', 1), ('sep_conv_5x5', 0), ('max_pool_3x3', 1), ('max_pool_3x3', 0), ('skip_connect', 1), ('sep_conv_5x5', 1), ('avg_pool_3x3', 2)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('avg_pool_3x3', 1), ('avg_pool_3x3', 0), ('sep_conv_5x5', 1), ('max_pool_3x3', 0), ('sep_conv_5x5', 3), ('max_pool_3x3', 2), ('dil_conv_5x5', 4)], reduce_concat=range(2, 6)) # 2.58
blank_pt_s5_3 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 1), ('max_pool_3x3', 2), ('sep_conv_3x3', 3), ('avg_pool_3x3', 0), ('sep_conv_5x5', 1)], normal_concat=range(2, 6), reduce=[('dil_conv_5x5', 0), ('sep_conv_5x5', 1), ('sep_conv_5x5', 0), ('dil_conv_3x3', 1), ('avg_pool_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 4)], reduce_concat=range(2, 6)) # 3.40
| 277.725275 | 452 | 0.646302 | 4,506 | 25,273 | 3.205726 | 0.018642 | 0.251021 | 0.338525 | 0.152302 | 0.960886 | 0.954725 | 0.947248 | 0.922395 | 0.903011 | 0.868951 | 0 | 0.122078 | 0.092787 | 25,273 | 90 | 453 | 280.811111 | 0.507938 | 0.015155 | 0 | 0 | 0 | 0 | 0.439629 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.014493 | 0 | 0.014493 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
49b2fe6aeb036f0a52bef509ec8d4b828a8d528b | 35,044 | py | Python | utils/modelCollection.py | karbogas/traffic4cast | cec5523a794df26c4a71723c866ad5d1443c2d94 | [
"Apache-2.0"
] | 1 | 2022-03-01T14:36:04.000Z | 2022-03-01T14:36:04.000Z | utils/modelCollection.py | karbogas/traffic4cast | cec5523a794df26c4a71723c866ad5d1443c2d94 | [
"Apache-2.0"
] | null | null | null | utils/modelCollection.py | karbogas/traffic4cast | cec5523a794df26c4a71723c866ad5d1443c2d94 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@author: karbogas
"""
import torch
from torch_geometric.nn import ChebConv, max_pool, knn, GCNConv, GraphConv, SAGEConv
from torch_geometric.utils import add_self_loops
from torch.nn import BatchNorm1d
import torch.nn.functional as F
from torch_scatter import scatter_add
# Defines the convolution block
class Kipfblock(torch.nn.Module):
def __init__(self, n_input, n_hidden=64, K=6, p=0.5, bn=False, conv = 'Cheb'):
super(Kipfblock, self).__init__()
# Pick convolution technique
if conv == 'Cheb':
self.conv1 = ChebConv(n_input, n_hidden, K=K)
elif conv == 'GCN':
self.conv1 = GCNConv(n_input, n_hidden)
elif conv == 'SAGE':
self.conv1 = SAGEConv(n_input, n_hidden)
elif conv == 'Graph':
self.conv1 = GraphConv(n_input, n_hidden)
self.p = p
self.n_input = n_input
self.n_hidden = n_hidden
self.do_bn = bn
if bn:
self.bn = BatchNorm1d(n_hidden)
def forward(self, x, edge_index):
# convolutional layer + optional batch normalization + relu
if self.do_bn:
x = F.relu(self.bn(self.conv1(x, edge_index)))
else:
x = F.relu(self.conv1(x, edge_index))
return x
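# Minimal usage sketch (illustrative only, not part of the original file):
# a Kipfblock maps node features of width n_input to n_hidden, given an
# edge_index in the usual [2, num_edges] COO layout. All sizes below are
# assumptions made up for the example.
def _kipfblock_demo():
    block = Kipfblock(n_input=24, n_hidden=64, K=6, bn=False, conv='Cheb')
    x = torch.randn(10, 24)                      # 10 nodes, 24 features each
    edge_index = torch.tensor([[0, 1], [1, 0]])  # one undirected edge
    return block(x, edge_index)                  # -> shape [10, 64]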
# Model with single pooling
class KipfNet(torch.nn.Module):
def __init__(self, clusters = None, knn = 4, maxCluster = 0, clustering = 'None',categories = None, coords = False, n_hidden = 64, n_hidden2 = 32, n_hidden3 = 16, K_block = 6, K_mix = 1, skipconv = False, do_bn = True, conv = 'Cheb', layers = 1, midSkip = False, p = 0.5, includeHeading = False):
super(KipfNet, self).__init__()
self.clusters = clusters
self.knn = knn
self.maxCluster = maxCluster
self.clustering = clustering
self.categories = categories - 1 if categories is not None else None  # guard: categories defaults to None
self.skipconv = skipconv
self.coords = coords
self.midSkip = midSkip
self.layers = layers
self.bn = BatchNorm1d(n_hidden)
self.bn2 = BatchNorm1d(n_hidden2)
self.bn3 = BatchNorm1d(n_hidden3)
self.p = p
# Input size depends on heading channel
if includeHeading:
n_input = 36
else:
n_input = 24
# Add coordinates and categories to input
if coords:
n_input = n_input + 2
if categories is not None:
n_input = n_input + 1
if layers == 1:
    midSkip = False
    self.midSkip = False  # keep the stored flag consistent with the local override
# Build model (selected number of convolution blocks)
self.moduleList1 = torch.nn.ModuleList()
self.skipList1 = torch.nn.ModuleList()
for i in range(layers):
if i == 0:
self.moduleList1.append(Kipfblock(n_input=n_input, n_hidden=n_hidden, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden + n_input
elif i == 1:
self.moduleList1.append(Kipfblock(n_input=n_hidden, n_hidden=n_hidden2, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden2 + n_hidden
else:
self.moduleList1.append(Kipfblock(n_input=n_hidden2, n_hidden=n_hidden3, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden3 + n_hidden2
if midSkip:
if i == 0:
if conv == 'Cheb':
self.skipList1.append(ChebConv(n_mix, n_hidden, K=K_mix))
elif conv == 'GCN':
self.skipList1.append(GCNConv(n_mix, n_hidden))
elif conv == 'SAGE':
self.skipList1.append(SAGEConv(n_mix, n_hidden))
elif conv == 'Graph':
self.skipList1.append(GraphConv(n_mix, n_hidden))
elif i == 1:
if conv == 'Cheb':
self.skipList1.append(ChebConv(n_mix, n_hidden2, K=K_mix))
elif conv == 'GCN':
self.skipList1.append(GCNConv(n_mix, n_hidden2))
elif conv == 'SAGE':
self.skipList1.append(SAGEConv(n_mix, n_hidden2))
elif conv == 'Graph':
self.skipList1.append(GraphConv(n_mix, n_hidden2))
else:
if conv == 'Cheb':
self.skipList1.append(ChebConv(n_mix, n_hidden3, K=K_mix))
elif conv == 'GCN':
self.skipList1.append(GCNConv(n_mix, n_hidden3))
elif conv == 'SAGE':
self.skipList1.append(SAGEConv(n_mix, n_hidden3))
elif conv == 'Graph':
self.skipList1.append(GraphConv(n_mix, n_hidden3))
# Pooled Branch (selected number of convolution blocks)
if clustering != 'None':
self.moduleList2 = torch.nn.ModuleList()
self.skipList2 = torch.nn.ModuleList()
for i in range(layers):
if i == 0:
self.moduleList2.append(Kipfblock(n_input=n_input, n_hidden=n_hidden, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden + n_input
elif i == 1:
self.moduleList2.append(Kipfblock(n_input=n_hidden, n_hidden=n_hidden2, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden2 + n_hidden
else:
self.moduleList2.append(Kipfblock(n_input=n_hidden2, n_hidden=n_hidden3, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden3 + n_hidden2
if midSkip:
if i == 0:
if conv == 'Cheb':
self.skipList2.append(ChebConv(n_mix, n_hidden, K=K_mix))
elif conv == 'GCN':
self.skipList2.append(GCNConv(n_mix, n_hidden))
elif conv == 'SAGE':
self.skipList2.append(SAGEConv(n_mix, n_hidden))
elif conv == 'Graph':
self.skipList2.append(GraphConv(n_mix, n_hidden))
elif i == 1:
if conv == 'Cheb':
self.skipList2.append(ChebConv(n_mix, n_hidden2, K=K_mix))
elif conv == 'GCN':
self.skipList2.append(GCNConv(n_mix, n_hidden2))
elif conv == 'SAGE':
self.skipList2.append(SAGEConv(n_mix, n_hidden2))
elif conv == 'Graph':
self.skipList2.append(GraphConv(n_mix, n_hidden2))
else:
if conv == 'Cheb':
self.skipList2.append(ChebConv(n_mix, n_hidden3, K=K_mix))
elif conv == 'GCN':
self.skipList2.append(GCNConv(n_mix, n_hidden3))
elif conv == 'SAGE':
self.skipList2.append(SAGEConv(n_mix, n_hidden3))
elif conv == 'Graph':
self.skipList2.append(GraphConv(n_mix, n_hidden3))
# Input size for final convolution
if layers == 1:
n_mix = n_hidden
elif layers == 2:
n_mix = n_hidden2
else:
n_mix = n_hidden3
if clustering != 'None':
n_mix = n_mix * 2
if skipconv:
n_mix = n_mix + n_input
# Output size depends on heading channel
if includeHeading:
n_output = 9
else:
n_output = 6
# Select convolution type
if conv == 'Cheb':
self.conv_mix = ChebConv(n_mix, n_output, K=K_mix)
elif conv == 'GCN':
self.conv_mix = GCNConv(n_mix, n_output)
elif conv == 'SAGE':
self.conv_mix = SAGEConv(n_mix, n_output)
elif conv == 'Graph':
self.conv_mix = GraphConv(n_mix, n_output)
def forward(self, data, final, start):
x, edge_index, pos, batch = data.x, data.edge_index, data.pos, data.batch
x_start = x
# If no pooling
if self.clustering == 'None':
# Perform convolution blocks
for i in range(self.layers):
x_temp = x
x = self.moduleList1[i](x,edge_index)
if self.midSkip:
x = torch.cat((x, x_temp), 1)
x = self.skipList1[i](x,edge_index)
# Input size of final convolution
if self.skipconv:
y = torch.cat((x, x_start),1)
else:
y = x
# Do final convolution
y = self.conv_mix(y, edge_index)
# Add dropout layer
if self.p != 1:
y = F.dropout(y, training=self.training, p=self.p)
# If grid based pooling
elif self.clustering == '4x4':
batchClusters = self.clusters
batch_size = torch.max(batch) + 1
# Offset cluster ids so samples in the batch keep disjoint cluster labels
for i in range(1,batch_size):
batchClusters = torch.cat((batchClusters, self.clusters + i*self.maxCluster))
# Pooled branch, max pooling
data = max_pool(batchClusters, data)
x_t, edge_index_t, pos_t, batch_t = data.x, data.edge_index, data.pos, data.batch
edge_index_t, temp = add_self_loops(edge_index_t)
# Add coordinates to input
if self.coords:
normPos = pos / torch.max(pos)
normPos_t = pos_t / torch.max(pos_t)
x = torch.cat((x, normPos),1)
x_t = torch.cat((x_t, normPos_t),1)
# Perform convolution blocks in both branches
for i in range(self.layers):
x_temp = x
x = self.moduleList1[i](x,edge_index)
if self.midSkip:
x = torch.cat((x, x_temp), 1)
if i == 0:
bn = self.bn
elif i == 1:
bn = self.bn2
else:
bn = self.bn3
x = F.relu(bn(self.skipList1[i](x,edge_index)))
for i in range(self.layers):
x_ttemp = x_t
x_t = self.moduleList2[i](x_t,edge_index_t)
if self.midSkip:
x_t = torch.cat((x_t, x_ttemp), 1)
if i == 0:
bn = self.bn
elif i == 1:
bn = self.bn2
else:
bn = self.bn3
x_t = F.relu(bn(self.skipList2[i](x_t,edge_index_t)))
# Calculate knn weights for first batch (and last, since the size might be different)
if start:
pairs = knn(pos_t,pos,self.knn, batch_x = batch_t, batch_y = batch)
yIdx, xIdx = pairs
diff = pos_t[xIdx] - pos[yIdx]
squared_distance = (diff * diff).sum(dim=-1, keepdim=True)
weights = 1.0 / torch.clamp(squared_distance, min = 1e-16)
self.weights = weights
self.xIdx = xIdx
self.yIdx = yIdx
if final:
pairs = knn(pos_t,pos,self.knn, batch_x = batch_t, batch_y = batch)
yIdx, xIdx = pairs
diff = pos_t[xIdx] - pos[yIdx]
squared_distance = (diff * diff).sum(dim=-1, keepdim=True)
weights = 1.0 / torch.clamp(squared_distance, min = 1e-16)
self.weights = weights
self.xIdx = xIdx
self.yIdx = yIdx
# Unpool pooled branch
x_t = scatter_add(x_t[self.xIdx] * self.weights, self.yIdx, dim = 0, dim_size=pos.size(0))
x_t = x_t / scatter_add(self.weights, self.yIdx, dim = 0, dim_size=pos.size(0))
# Input size of final convolution
if self.skipconv:
y = torch.cat((x, x_t, x_start),1)
else:
y = torch.cat((x, x_t),1)
# Do final convolution
y = self.conv_mix(y, edge_index)
# Add dropout layer
if self.p != 1:
y = F.dropout(y, training=self.training, p=self.p)
# If street based pooling
elif self.clustering == 'Street':
batchClusters = self.clusters
batchCat = self.categories
batch_size = torch.max(batch) + 1
# Offset cluster and category ids so samples in the batch stay disjoint
for i in range(1,batch_size):
batchClusters = torch.cat((batchClusters, self.clusters + i*self.maxCluster))
batchCat = torch.cat((batchCat, self.categories + i * 5))
batchCat = batchCat.long()
data.batch = batchCat
# Pooled branch, max pooling
data = max_pool(batchClusters, data)
x_t, edge_index_t, pos_t, batchCat_t = data.x, data.edge_index, data.pos, data.batch
edge_index_t, temp = add_self_loops(edge_index_t)
# Add coordinates and categories to input
if self.coords:
cats = (batchCat % 5).float()
catsT = (batchCat_t % 5).float()
normPos = pos / torch.max(pos)
normPos_t = pos_t / torch.max(pos_t)
normCat = (cats / 4).view(batchCat.size(0),1)
normCat_t = (catsT / 4).view(batchCat_t.size(0),1)
x = torch.cat((x, normPos, normCat),1)
x_t = torch.cat((x_t, normPos_t, normCat_t),1)
# Perform convolution blocks in both branches
for i in range(self.layers):
x_temp = x
x = self.moduleList1[i](x,edge_index)
if self.midSkip:
x = torch.cat((x, x_temp), 1)
if i == 0:
bn = self.bn
elif i == 1:
bn = self.bn2
else:
bn = self.bn3
x = F.relu(bn(self.skipList1[i](x,edge_index)))
for i in range(self.layers):
x_ttemp = x_t
x_t = self.moduleList2[i](x_t,edge_index_t)
if self.midSkip:
x_t = torch.cat((x_t, x_ttemp), 1)
if i == 0:
bn = self.bn
elif i == 1:
bn = self.bn2
else:
bn = self.bn3
x_t = F.relu(bn(self.skipList2[i](x_t,edge_index_t)))
# Calculate knn weights for first batch (and last, since the size might be different)
if start:
sorter = torch.argsort(batchCat)
backsorter = torch.argsort(sorter)
pos = pos[sorter]
batchCat = batchCat[sorter]
pairs = knn(pos_t,pos,self.knn, batch_x = batchCat_t, batch_y = batchCat)
yIdx, xIdx = pairs
diff = pos_t[xIdx] - pos[yIdx]
squared_distance = (diff * diff).sum(dim=-1, keepdim=True)
weights = 1.0 / torch.clamp(squared_distance, min = 1e-16)
self.weights = weights
self.xIdx = xIdx
self.yIdx = yIdx
self.backSorter = backsorter
if final:
sorter = torch.argsort(batchCat)
backsorter = torch.argsort(sorter)
pos = pos[sorter]
batchCat = batchCat[sorter]
pairs = knn(pos_t,pos,self.knn, batch_x = batchCat_t, batch_y = batchCat)
yIdx, xIdx = pairs
diff = pos_t[xIdx] - pos[yIdx]
squared_distance = (diff * diff).sum(dim=-1, keepdim=True)
weights = 1.0 / torch.clamp(squared_distance, min = 1e-16)
self.weights = weights
self.xIdx = xIdx
self.yIdx = yIdx
self.backSorter = backsorter
# Unpool pooled branch
x_t = scatter_add(x_t[self.xIdx] * self.weights, self.yIdx, dim = 0, dim_size=pos.size(0))
x_t = x_t / scatter_add(self.weights, self.yIdx, dim = 0, dim_size=pos.size(0))
x_t = x_t[self.backSorter]
# Input size of final convolution
if self.skipconv:
y = torch.cat((x, x_t, x_start),1)
else:
y = torch.cat((x, x_t),1)
# Do final convolution
y = self.conv_mix(y, edge_index)
# Add dropout layer
if self.p != 1:
y = F.dropout(y, training=self.training, p=self.p)
return y
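# Two illustrative sketches (not part of the original file). The first is a
# hypothetical smoke test: with clustering='None' and categories=None, KipfNet
# runs only the unpooled branch, so no cluster tensors are needed. The second
# isolates the inverse-distance knn unpooling used in the pooled branches
# above. All sizes are assumptions made up for the examples.
def _kipfnet_demo():
    from torch_geometric.data import Data
    data = Data(x=torch.randn(10, 24),                 # 24 input channels (no heading)
                edge_index=torch.tensor([[0, 1], [1, 0]]),
                pos=torch.rand(10, 2))
    data.batch = torch.zeros(10, dtype=torch.long)     # single-sample batch
    model = KipfNet(clustering='None', categories=None)
    return model(data, final=False, start=True)        # -> shape [10, 6]

def _knn_unpool_demo():
    pos = torch.rand(10, 2)     # fine-level node positions
    pos_t = torch.rand(4, 2)    # pooled-level node positions
    x_t = torch.randn(4, 8)     # pooled-level features
    yIdx, xIdx = knn(pos_t, pos, 3)  # 3 nearest pooled nodes per fine node
    diff = pos_t[xIdx] - pos[yIdx]
    w = 1.0 / torch.clamp((diff * diff).sum(dim=-1, keepdim=True), min=1e-16)
    num = scatter_add(x_t[xIdx] * w, yIdx, dim=0, dim_size=pos.size(0))
    den = scatter_add(w, yIdx, dim=0, dim_size=pos.size(0))
    return num / den            # -> shape [10, 8], inverse-distance weighted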
# Model with double pooling (only street based clustering)
class KipfNetDoublePool(torch.nn.Module):
def __init__(self, clusters1 = None,clusters2 = None, knn = 4, maxCluster1 = 0, maxCluster2 = 0, clustering = 'None',categories = None, coords = False, n_hidden = 64, n_hidden2 = 32, n_hidden3 = 16, K_block = 6, K_mix = 1, skipconv = False, do_bn = True, conv = 'Cheb', layers = 1, midSkip = False, p = 0.5, includeHeading = False):
super(KipfNetDoublePool, self).__init__()
self.clusters1 = clusters1
self.clusters2 = clusters2
self.knn = knn
self.maxCluster1 = maxCluster1
self.maxCluster2 = maxCluster2
self.clustering = clustering
self.categories = categories - 1 if categories is not None else None  # guard: categories defaults to None
self.skipconv = skipconv
self.coords = coords
self.midSkip = midSkip
self.layers = layers
self.bn = BatchNorm1d(n_hidden)
self.bn2 = BatchNorm1d(n_hidden2)
self.bn3 = BatchNorm1d(n_hidden3)
self.p = p
# Input size depends on heading channel
if includeHeading:
n_input = 36
else:
n_input = 24
# Add coordinates and categories to input
if coords:
n_input = n_input + 2
if categories is not None:
n_input = n_input + 1
if layers == 1:
    midSkip = False
    self.midSkip = False  # keep the stored flag consistent with the local override
# Build model (selected number of convolution blocks)
self.moduleList1 = torch.nn.ModuleList()
self.skipList1 = torch.nn.ModuleList()
for i in range(layers):
if i == 0:
self.moduleList1.append(Kipfblock(n_input=n_input, n_hidden=n_hidden, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden + n_input
elif i == 1:
self.moduleList1.append(Kipfblock(n_input=n_hidden, n_hidden=n_hidden2, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden2 + n_hidden
else:
self.moduleList1.append(Kipfblock(n_input=n_hidden2, n_hidden=n_hidden3, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden3 + n_hidden2
if midSkip:
if i == 0:
if conv == 'Cheb':
self.skipList1.append(ChebConv(n_mix, n_hidden, K=K_mix))
elif conv == 'GCN':
self.skipList1.append(GCNConv(n_mix, n_hidden))
elif conv == 'SAGE':
self.skipList1.append(SAGEConv(n_mix, n_hidden))
elif conv == 'Graph':
self.skipList1.append(GraphConv(n_mix, n_hidden))
elif i == 1:
if conv == 'Cheb':
self.skipList1.append(ChebConv(n_mix, n_hidden2, K=K_mix))
elif conv == 'GCN':
self.skipList1.append(GCNConv(n_mix, n_hidden2))
elif conv == 'SAGE':
self.skipList1.append(SAGEConv(n_mix, n_hidden2))
elif conv == 'Graph':
self.skipList1.append(GraphConv(n_mix, n_hidden2))
else:
if conv == 'Cheb':
self.skipList1.append(ChebConv(n_mix, n_hidden3, K=K_mix))
elif conv == 'GCN':
self.skipList1.append(GCNConv(n_mix, n_hidden3))
elif conv == 'SAGE':
self.skipList1.append(SAGEConv(n_mix, n_hidden3))
elif conv == 'Graph':
self.skipList1.append(GraphConv(n_mix, n_hidden3))
# Pooled Branch 1 (selected number of convolution blocks)
if clustering != 'None':
self.moduleList2 = torch.nn.ModuleList()
self.skipList2 = torch.nn.ModuleList()
for i in range(layers):
if i == 0:
self.moduleList2.append(Kipfblock(n_input=n_input, n_hidden=n_hidden, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden + n_input
elif i == 1:
self.moduleList2.append(Kipfblock(n_input=n_hidden, n_hidden=n_hidden2, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden2 + n_hidden
else:
self.moduleList2.append(Kipfblock(n_input=n_hidden2, n_hidden=n_hidden3, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden3 + n_hidden2
if midSkip:
if i == 0:
if conv == 'Cheb':
self.skipList2.append(ChebConv(n_mix, n_hidden, K=K_mix))
elif conv == 'GCN':
self.skipList2.append(GCNConv(n_mix, n_hidden))
elif conv == 'SAGE':
self.skipList2.append(SAGEConv(n_mix, n_hidden))
elif conv == 'Graph':
self.skipList2.append(GraphConv(n_mix, n_hidden))
elif i == 1:
if conv == 'Cheb':
self.skipList2.append(ChebConv(n_mix, n_hidden2, K=K_mix))
elif conv == 'GCN':
self.skipList2.append(GCNConv(n_mix, n_hidden2))
elif conv == 'SAGE':
self.skipList2.append(SAGEConv(n_mix, n_hidden2))
elif conv == 'Graph':
self.skipList2.append(GraphConv(n_mix, n_hidden2))
else:
if conv == 'Cheb':
self.skipList2.append(ChebConv(n_mix, n_hidden3, K=K_mix))
elif conv == 'GCN':
self.skipList2.append(GCNConv(n_mix, n_hidden3))
elif conv == 'SAGE':
self.skipList2.append(SAGEConv(n_mix, n_hidden3))
elif conv == 'Graph':
self.skipList2.append(GraphConv(n_mix, n_hidden3))
# Pooled Branch 2 (selected number of convolution blocks)
self.moduleList3 = torch.nn.ModuleList()
self.skipList3 = torch.nn.ModuleList()
for i in range(layers):
if i == 0:
self.moduleList3.append(Kipfblock(n_input=n_input, n_hidden=n_hidden, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden + n_input
elif i == 1:
self.moduleList3.append(Kipfblock(n_input=n_hidden, n_hidden=n_hidden2, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden2 + n_hidden
else:
self.moduleList3.append(Kipfblock(n_input=n_hidden2, n_hidden=n_hidden3, K=K_block, bn=do_bn, conv = conv))
n_mix = n_hidden3 + n_hidden2
if midSkip:
if i == 0:
if conv == 'Cheb':
self.skipList3.append(ChebConv(n_mix, n_hidden, K=K_mix))
elif conv == 'GCN':
self.skipList3.append(GCNConv(n_mix, n_hidden))
elif conv == 'SAGE':
self.skipList3.append(SAGEConv(n_mix, n_hidden))
elif conv == 'Graph':
self.skipList3.append(GraphConv(n_mix, n_hidden))
elif i == 1:
if conv == 'Cheb':
self.skipList3.append(ChebConv(n_mix, n_hidden2, K=K_mix))
elif conv == 'GCN':
self.skipList3.append(GCNConv(n_mix, n_hidden2))
elif conv == 'SAGE':
self.skipList3.append(SAGEConv(n_mix, n_hidden2))
elif conv == 'Graph':
self.skipList3.append(GraphConv(n_mix, n_hidden2))
else:
if conv == 'Cheb':
self.skipList3.append(ChebConv(n_mix, n_hidden3, K=K_mix))
elif conv == 'GCN':
self.skipList3.append(GCNConv(n_mix, n_hidden3))
elif conv == 'SAGE':
self.skipList3.append(SAGEConv(n_mix, n_hidden3))
elif conv == 'Graph':
self.skipList3.append(GraphConv(n_mix, n_hidden3))
# Input size for final convolution
if layers == 1:
n_mix = n_hidden
elif layers == 2:
n_mix = n_hidden2
else:
n_mix = n_hidden3
if clustering != 'None':
n_mix = n_mix * 3
if skipconv:
n_mix = n_mix + n_input
# Output size depends on heading channel
if includeHeading:
n_output = 9
else:
n_output = 6
# Select convolution type
if conv == 'Cheb':
self.conv_mix = ChebConv(n_mix, n_output, K=K_mix)
elif conv == 'GCN':
self.conv_mix = GCNConv(n_mix, n_output)
elif conv == 'SAGE':
self.conv_mix = SAGEConv(n_mix, n_output)
elif conv == 'Graph':
self.conv_mix = GraphConv(n_mix, n_output)
self.xIdx = []
self.yIdx = []
self.weights = []
def forward(self, data, final, start):
x, edge_index, pos, batch = data.x, data.edge_index, data.pos, data.batch
x_start = x
# Only street based pooling
if self.clustering == 'Street':
batchClusters1 = self.clusters1
batchCat = self.categories
batchClusters2 = self.clusters2
batch_size = torch.max(batch) + 1
# Offset cluster and category ids so samples in the batch stay disjoint
for i in range(1,batch_size):
batchClusters1 = torch.cat((batchClusters1, self.clusters1 + i*self.maxCluster1))
batchCat = torch.cat((batchCat, self.categories + i * 5))
batchClusters2 = torch.cat((batchClusters2, self.clusters2 + i*self.maxCluster2))
batchCat = batchCat.long()
data.batch = batchCat
data2 = data
# Both pooled branches, max pooling
data = max_pool(batchClusters1, data)
x_t, edge_index_t, pos_t, batchCat_t = data.x, data.edge_index, data.pos, data.batch
data2 = max_pool(batchClusters2, data2)
x_t2, edge_index_t2, pos_t2, batchCat_t2 = data2.x, data2.edge_index, data2.pos, data2.batch
edge_index_t, temp = add_self_loops(edge_index_t)
edge_index_t2, temp = add_self_loops(edge_index_t2)
# Add coordinates and categories to input
if self.coords:
cats = (batchCat % 5).float()
catsT = (batchCat_t % 5).float()
catsT2 = (batchCat_t2 % 5).float()
normPos = pos / torch.max(pos)
normPos_t = pos_t / torch.max(pos_t)
normPos_t2 = pos_t2 / torch.max(pos_t2)
normCat = (cats / 4).view(batchCat.size(0),1)
normCat_t = (catsT / 4).view(batchCat_t.size(0),1)
normCat_t2 = (catsT2 / 4).view(batchCat_t2.size(0),1)
x = torch.cat((x, normPos, normCat),1)
x_t = torch.cat((x_t, normPos_t, normCat_t),1)
x_t2 = torch.cat((x_t2, normPos_t2, normCat_t2),1)
# Perform convolution blocks in all 3 branches
for i in range(self.layers):
x_temp = x
x = self.moduleList1[i](x,edge_index)
if self.midSkip:
x = torch.cat((x, x_temp), 1)
if i == 0:
bn = self.bn
elif i == 1:
bn = self.bn2
else:
bn = self.bn3
x = F.relu(bn(self.skipList1[i](x,edge_index)))
for i in range(self.layers):
x_ttemp = x_t
x_t = self.moduleList2[i](x_t,edge_index_t)
if self.midSkip:
x_t = torch.cat((x_t, x_ttemp), 1)
if i == 0:
bn = self.bn
elif i == 1:
bn = self.bn2
else:
bn = self.bn3
x_t = F.relu(bn(self.skipList2[i](x_t,edge_index_t)))
for i in range(self.layers):
x_ttemp2 = x_t2
x_t2 = self.moduleList3[i](x_t2,edge_index_t2)
if self.midSkip:
x_t2 = torch.cat((x_t2, x_ttemp2), 1)
if i == 0:
bn = self.bn
elif i == 1:
bn = self.bn2
else:
bn = self.bn3
x_t2 = F.relu(bn(self.skipList3[i](x_t2,edge_index_t2)))
# Calculate knn weights of both pooled branches for first batch (and last, since the size might be different)
if start:
sorter = torch.argsort(batchCat)
backsorter = torch.argsort(sorter)
pos = pos[sorter]
batchCat = batchCat[sorter]
pairs = knn(pos_t,pos,self.knn, batch_x = batchCat_t, batch_y = batchCat)
yIdx, xIdx = pairs
diff = pos_t[xIdx] - pos[yIdx]
squared_distance = (diff * diff).sum(dim=-1, keepdim=True)
weights = 1.0 / torch.clamp(squared_distance, min = 1e-16)
pairs2 = knn(pos_t2,pos,self.knn, batch_x = batchCat_t2, batch_y = batchCat)
yIdx2, xIdx2 = pairs2
diff2 = pos_t2[xIdx2] - pos[yIdx2]
squared_distance2 = (diff2 * diff2).sum(dim=-1, keepdim=True)
weights2 = 1.0 / torch.clamp(squared_distance2, min = 1e-16)
self.weights = weights
self.xIdx = xIdx
self.yIdx = yIdx
self.weights2 = weights2
self.xIdx2 = xIdx2
self.yIdx2 = yIdx2
self.backSorter = backsorter
if final:
sorter = torch.argsort(batchCat)
backsorter = torch.argsort(sorter)
pos = pos[sorter]
batchCat = batchCat[sorter]
pairs = knn(pos_t,pos,self.knn, batch_x = batchCat_t, batch_y = batchCat)
yIdx, xIdx = pairs
diff = pos_t[xIdx] - pos[yIdx]
squared_distance = (diff * diff).sum(dim=-1, keepdim=True)
weights = 1.0 / torch.clamp(squared_distance, min = 1e-16)
pairs2 = knn(pos_t2,pos,self.knn, batch_x = batchCat_t2, batch_y = batchCat)
yIdx2, xIdx2 = pairs2
diff2 = pos_t2[xIdx2] - pos[yIdx2]
squared_distance2 = (diff2 * diff2).sum(dim=-1, keepdim=True)
weights2 = 1.0 / torch.clamp(squared_distance2, min = 1e-16)
self.weights = weights
self.xIdx = xIdx
self.yIdx = yIdx
self.weights2 = weights2
self.xIdx2 = xIdx2
self.yIdx2 = yIdx2
self.backSorter = backsorter
# Unpool pooled branches
x_t = scatter_add(x_t[self.xIdx] * self.weights, self.yIdx, dim = 0, dim_size=pos.size(0))
x_t = x_t / scatter_add(self.weights, self.yIdx, dim = 0, dim_size=pos.size(0))
x_t = x_t[self.backSorter]
x_t2 = scatter_add(x_t2[self.xIdx2] * self.weights2, self.yIdx2, dim = 0, dim_size=pos.size(0))
x_t2 = x_t2 / scatter_add(self.weights2, self.yIdx2, dim = 0, dim_size=pos.size(0))
x_t2 = x_t2[self.backSorter]
# Input size of final convolution
if self.skipconv:
y = torch.cat((x, x_t, x_t2, x_start),1)
else:
y = torch.cat((x, x_t, x_t2),1)
# Do final convolution
y = self.conv_mix(y, edge_index)
# Add dropout layer
if self.p != 1:
y = F.dropout(y, training=self.training, p=self.p)
return y | 42.998773 | 337 | 0.476829 | 3,942 | 35,044 | 4.065195 | 0.052765 | 0.024212 | 0.029641 | 0.018534 | 0.882434 | 0.866646 | 0.842496 | 0.833697 | 0.825335 | 0.824087 | 0 | 0.027455 | 0.43043 | 35,044 | 815 | 338 | 42.998773 | 0.775401 | 0.062008 | 0 | 0.856 | 0 | 0 | 0.010455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0096 | false | 0 | 0.0096 | 0 | 0.0288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
49ee7f11082bff515c01245838615f8dcce88c66 | 5,154 | py | Python | monorun/apis/test.py | minghanz/MonoRUn | 3a575ec7826d2b95e05bc87099b152434743f104 | [
"MIT"
] | null | null | null | monorun/apis/test.py | minghanz/MonoRUn | 3a575ec7826d2b95e05bc87099b152434743f104 | [
"MIT"
] | null | null | null | monorun/apis/test.py | minghanz/MonoRUn | 3a575ec7826d2b95e05bc87099b152434743f104 | [
"MIT"
] | null | null | null | import os.path as osp
import mmcv
import torch
from mmcv.image import tensor2imgs
from mmdet.core import encode_mask_results
import numpy as np
def single_gpu_test(model,
data_loader,
show=False,
out_dir=None,
show_score_thr=0.3,
cov_scale=5):
model.eval()
results = []
dataset = data_loader.dataset
prog_bar = mmcv.ProgressBar(len(dataset))
for i, data in enumerate(data_loader):
with torch.no_grad():
result = model(return_loss=False, rescale=True, **data)
if show or out_dir:
img_tensor = data['img'][0]
img_metas = data['img_metas'][0].data[0]
imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg'])
assert len(imgs) == len(img_metas)
for img, img_meta in zip(imgs, img_metas):
h, w, _ = img_meta['img_shape']
img_show = img[:h, :w, :]
ori_h, ori_w = img_meta['ori_shape'][:-1]
img_show = mmcv.imresize(img_show, (ori_w, ori_h))
if out_dir:
out_file = osp.join(out_dir, img_meta['ori_filename'])
else:
out_file = None
model.module.show_result(
img_show,
data['cam_intrinsic'][0].data[0][0].cpu().numpy(),
result,
score_thr=show_score_thr,
cov_scale=cov_scale,
show=show,
out_file=out_file)
# encode mask results
if isinstance(result, tuple):
bbox_results, mask_results = result
encoded_mask_results = encode_mask_results(mask_results)
result = bbox_results, encoded_mask_results
results.append(result[0])
### result is a list with a single element: a dict with keys 'bbox_results' and 'bbox_3d_results'
### each value is a list of 3 arrays for the categories 'Car', 'Pedestrian', 'Cyclist'; the arrays are n*5 and n*8
### for "bbox_results": x_min, y_min, x_max, y_max, conf
### for "bbox_3d_results": l, h, w, x, y, z, yaw, conf
# print("result:", result[0])
# if i > 5:
# break
batch_size = len(data['img_metas'][0].data)
for _ in range(batch_size):
prog_bar.update()
return results
def default_d():
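# empty per-image result: one zero-row array per category (Car, Pedestrian, Cyclist)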
d = dict()
d['bbox_results'] = [np.empty((0, 5), dtype=np.float32)]*3
d['bbox_3d_results'] = [np.empty((0, 8), dtype=np.float32)]*3
return d
def single_gpu_eval(results_d,
model,
data_loader,
show=False,
out_dir=None,
show_score_thr=0.3,
cov_scale=5,
):
model.eval()
results = []
dataset = data_loader.dataset
prog_bar = mmcv.ProgressBar(len(dataset))
for i, data in enumerate(data_loader):
img_n = dataset.img_idxs[i]
result = results_d[img_n] if img_n in results_d else default_d()
result = [result]
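# reuse detections precomputed in results_d instead of running the model (cf. the commented-out forward pass below)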
# with torch.no_grad():
# result = model(return_loss=False, rescale=True, **data)
if show or out_dir:
img_tensor = data['img'][0]
img_metas = data['img_metas'][0].data[0]
imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg'])
assert len(imgs) == len(img_metas)
for img, img_meta in zip(imgs, img_metas):
h, w, _ = img_meta['img_shape']
img_show = img[:h, :w, :]
ori_h, ori_w = img_meta['ori_shape'][:-1]
img_show = mmcv.imresize(img_show, (ori_w, ori_h))
if out_dir:
out_file = osp.join(out_dir, img_meta['ori_filename'])
else:
out_file = None
model.module.show_result(
img_show,
data['cam_intrinsic'][0].data[0][0].cpu().numpy(),
result,
score_thr=show_score_thr,
cov_scale=cov_scale,
show=show,
out_file=out_file)
# encode mask results
if isinstance(result, tuple):
bbox_results, mask_results = result
encoded_mask_results = encode_mask_results(mask_results)
result = bbox_results, encoded_mask_results
results.append(result[0])
### result is a list with a single element: a dict with keys 'bbox_results' and 'bbox_3d_results'
### each value is a list of 3 arrays for the categories 'Car', 'Pedestrian', 'Cyclist'; the arrays are n*5 and n*8
### for "bbox_results": x_min, y_min, x_max, y_max, conf
### for "bbox_3d_results": l, h, w, x, y, z, yaw, conf
# print("result:", result[0])
# if i > 5:
# break
batch_size = len(data['img_metas'][0].data)
for _ in range(batch_size):
prog_bar.update()
return results
| 36.295775 | 132 | 0.536088 | 671 | 5,154 | 3.870343 | 0.186289 | 0.055064 | 0.020793 | 0.020023 | 0.871775 | 0.871775 | 0.871775 | 0.871775 | 0.871775 | 0.871775 | 0 | 0.016847 | 0.355064 | 5,154 | 141 | 133 | 36.553191 | 0.76444 | 0.163562 | 0 | 0.784314 | 0 | 0 | 0.041862 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 1 | 0.029412 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
49f4d4b1c16003dcbf07e984149c7cd65f34ce34 | 166 | py | Python | cep/kinematics/__init__.py | hjw-1014/Multi-Objective-Reactive-Motion-Planning-in-Mobile-Manipulators | 9a8801e9c663174b753c4852b2313c5a3f302434 | [
"MIT"
] | null | null | null | cep/kinematics/__init__.py | hjw-1014/Multi-Objective-Reactive-Motion-Planning-in-Mobile-Manipulators | 9a8801e9c663174b753c4852b2313c5a3f302434 | [
"MIT"
] | null | null | null | cep/kinematics/__init__.py | hjw-1014/Multi-Objective-Reactive-Motion-Planning-in-Mobile-Manipulators | 9a8801e9c663174b753c4852b2313c5a3f302434 | [
"MIT"
] | null | null | null | from .robot_model import Robot
from .darias_model import DarIASArm
from .tiago_model import TiagoRobot
from .tiago_lefthand_base_model import TiagoRobot_lefthand_Base | 41.5 | 63 | 0.885542 | 24 | 166 | 5.791667 | 0.416667 | 0.316547 | 0.302158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090361 | 166 | 4 | 63 | 41.5 | 0.92053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
49f5cce85e68a71ca5a01a9e2d7060dfd41e64e3 | 3,922 | py | Python | tests/test_student_quiz.py | thiagosalvatore/poo-exercise | ab897d9b17b3aa63252c4fa7334f624f6d380d9a | [
"Apache-2.0"
] | null | null | null | tests/test_student_quiz.py | thiagosalvatore/poo-exercise | ab897d9b17b3aa63252c4fa7334f624f6d380d9a | [
"Apache-2.0"
] | null | null | null | tests/test_student_quiz.py | thiagosalvatore/poo-exercise | ab897d9b17b3aa63252c4fa7334f624f6d380d9a | [
"Apache-2.0"
] | null | null | null | import pytest
from poo_exercise.models import Question, Quiz, StudentQuiz
def test_assign_quiz_student(tst):
q1 = Question('whats your name?', ['Thiago', 'James', 'Bond'], 'Thiago')
q2 = Question('how old are you?', [20, 40, 27], 27)
quiz = Quiz(tst.teacher, tst.classroom, [q1, q2])
student_quiz = StudentQuiz(tst.student, quiz)
assert student_quiz.quiz == quiz
assert student_quiz.student == tst.student
assert student_quiz.grade == 0
assert student_quiz.answers == []
def test_assign_quiz_student_not_in_classroom(tst):
q1 = Question('whats your name?', ['Thiago', 'James', 'Bond'], 'Thiago')
q2 = Question('how old are you?', [20, 40, 27], 27)
quiz = Quiz(tst.teacher, tst.classroom, [q1, q2])
tst.student.classes = []
with pytest.raises(AssertionError):
StudentQuiz(tst.student, quiz)
def test_submit_quiz_answers(tst):
q1 = Question('whats your name?', ['Thiago', 'James', 'Bond'], 'Thiago')
q2 = Question('how old are you?', [20, 40, 27], 27)
quiz = Quiz(tst.teacher, tst.classroom, [q1, q2])
student_quiz = StudentQuiz(tst.student, quiz)
student_quiz.submit_answers(['Thiago', 27])
assert student_quiz.quiz == quiz
assert student_quiz.student == tst.student
assert student_quiz.grade == 0
assert student_quiz.answers == ['Thiago', 27]
def test_submit_quiz_answers_empty(tst):
q1 = Question('whats your name?', ['Thiago', 'James', 'Bond'], 'Thiago')
q2 = Question('how old are you?', [20, 40, 27], 27)
quiz = Quiz(tst.teacher, tst.classroom, [q1, q2])
student_quiz = StudentQuiz(tst.student, quiz)
student_quiz.submit_answers(['', ''])
assert student_quiz.quiz == quiz
assert student_quiz.student == tst.student
assert student_quiz.grade == 0
assert student_quiz.answers == ['', '']
def test_submit_quiz_less_answers_than_options(tst):
q1 = Question('whats your name?', ['Thiago', 'James', 'Bond'], 'Thiago')
q2 = Question('how old are you?', [20, 40, 27], 27)
quiz = Quiz(tst.teacher, tst.classroom, [q1, q2])
student_quiz = StudentQuiz(tst.student, quiz)
with pytest.raises(Exception):
student_quiz.submit_answers(['Thiago'])
def test_submit_quiz_answers_and_grade_ten(tst):
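# both answers correct, so grade_quiz() yields the full grade of 10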
q1 = Question('whats your name?', ['Thiago', 'James', 'Bond'], 'Thiago')
q2 = Question('how old are you?', [20, 40, 27], 27)
quiz = Quiz(tst.teacher, tst.classroom, [q1, q2])
student_quiz = StudentQuiz(tst.student, quiz)
student_quiz.submit_answers(['Thiago', 27])
assert student_quiz.quiz == quiz
assert student_quiz.student == tst.student
assert student_quiz.grade == 0
assert student_quiz.answers == ['Thiago', 27]
student_quiz.grade_quiz()
assert student_quiz.grade == 10
def test_submit_quiz_answers_and_grade_5(tst):
q1 = Question('whats your name?', ['Thiago', 'James', 'Bond'], 'Thiago')
q2 = Question('how old are you?', [20, 40, 27], 27)
quiz = Quiz(tst.teacher, tst.classroom, [q1, q2])
student_quiz = StudentQuiz(tst.student, quiz)
student_quiz.submit_answers(['Thiago', 40])
assert student_quiz.quiz == quiz
assert student_quiz.student == tst.student
assert student_quiz.grade == 0
assert student_quiz.answers == ['Thiago', 40]
student_quiz.grade_quiz()
assert student_quiz.grade == 5
def test_submit_quiz_answers_and_grade_0(tst):
q1 = Question('whats your name?', ['Thiago', 'James', 'Bond'], 'Thiago')
q2 = Question('how old are you?', [20, 40, 27], 27)
quiz = Quiz(tst.teacher, tst.classroom, [q1, q2])
student_quiz = StudentQuiz(tst.student, quiz)
student_quiz.submit_answers(['James', 40])
assert student_quiz.quiz == quiz
assert student_quiz.student == tst.student
assert student_quiz.grade == 0
assert student_quiz.answers == ['James', 40]
student_quiz.grade_quiz()
assert student_quiz.grade == 0
| 35.654545 | 76 | 0.669301 | 535 | 3,922 | 4.73271 | 0.095327 | 0.221564 | 0.18128 | 0.082938 | 0.909953 | 0.862559 | 0.862559 | 0.824645 | 0.808057 | 0.773302 | 0 | 0.038581 | 0.18052 | 3,922 | 109 | 77 | 35.981651 | 0.749222 | 0 | 0 | 0.703704 | 0 | 0 | 0.121367 | 0 | 0 | 0 | 0 | 0 | 0.345679 | 1 | 0.098765 | false | 0 | 0.024691 | 0 | 0.123457 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b71c7c4ee711e3497257d0c8392fdb211807858a | 10,213 | py | Python | dpmModule/item/BossAccesory.py | kmsiapps/maplestory_dpm_calc | fbbc5384fdb6aefb58aa4d0d286a6c2972807d57 | [
"MIT"
] | null | null | null | dpmModule/item/BossAccesory.py | kmsiapps/maplestory_dpm_calc | fbbc5384fdb6aefb58aa4d0d286a6c2972807d57 | [
"MIT"
] | null | null | null | dpmModule/item/BossAccesory.py | kmsiapps/maplestory_dpm_calc | fbbc5384fdb6aefb58aa4d0d286a6c2972807d57 | [
"MIT"
] | null | null | null | from . import ItemKernel as it
# Base item stats (no upgrades applied yet)
# Zakum face accessory (Condensed Power Crystal)...(5)
Face110 = it.Item(stat_main = 5, stat_sub = 5, att = 5, level = 110)
# Zakum eye accessory....(3)
Eye100 = it.Item(stat_main = 6, stat_sub = 6, att = 1, level = 100)
# Zakum belt....(3)
Belt150 = it.Item(stat_main = 18, stat_sub = 18, att = 1, level = 150)
# Magnus shoulder..(1)
Shoulder120 = it.Item(stat_main = 10, stat_sub = 10, att = 6, level = 120)
# Magnus badge..(0)
Badge130 = it.Item(stat_main = 10, stat_sub = 10, att = 5, level = 130)
# Hilla -> unused, skip
# Papulatus eye accessory..(5)
Eye150 = it.Item(stat_main = 8, stat_sub = 8, att = 1, level = 150)
# Von Leon ring..(2)
Ring120 = it.Item(stat_main = 5, stat_sub = 5, att = 2, level = 120)
# Horntail earrings...(6)
Ear130 = it.Item(stat_main = 5, stat_sub = 5, att = 2, level = 130)
# Horntail ring...(2)
Ring110 = it.Item(stat_main = 5, stat_sub = 5, att = 2, level = 110)
# Horntail necklace..(0) -> must be computed in the '알발린' (egg-applied) state
# Arkarium Mechanator pendant...(2)
# TODO: change the level to 120
Pendant130 = it.Item(stat_main = 10, stat_sub = 10, att = 1, level = 130)
# Arkarium Dominator pendant...(6 or 0), assumes fragment upgrades
Pendant140 = it.Item(stat_main = 5, stat_sub = 5, att = 5, level = 140)
Pendant140Fragment = it.Item(stat_main = 23, stat_sub = 23, att = 23, level = 140)
# Pink Bean pocket item ... 0
Pocket140 = it.Item(stat_main = 5, stat_sub = 5, att = 5, level = 140)
# Pink Bean belt ... 3
Belt140 = it.Item(stat_main = 15, stat_sub = 15, att = 1, level = 140)
# Pink Bean eye accessory ... 5
Eye135 = it.Item(stat_main = 7, stat_sub = 7, att = 1, level = 135)
class Factory():
@staticmethod
def get11Set(star = 0, enhance = 70, potential = it.CharacterModifier(), additional_potential = it.CharacterModifier(), bonus = it.CharacterModifier(), hammer = True):
'''get11Set : package of the item set with the following pieces
Eye : Eye100(Zakum)
Face : Face110(Zakum)
Ear : Ear130(Horntail)
Ring1 : Ring110(Horntail)
Ring2 : Ring120(Von Leon)
! NO Ring3 / Ring4
Pendant1: Pendant130(Arkarium)
Pendant2: Pendant140(Arkarium)
Belt : Belt140(Pink Bean)
Pocket : Pocket140(Pink Bean)
Badge : Badge130(Magnus)
Shoulder : Shoulder120(Magnus)
'''
package = [Eye100.copy(), Face110.copy(), Ear130.copy(), Ring110.copy(), Ring120.copy(), \
Pendant130.copy(), Pendant140.copy(), Belt140.copy(), Pocket140.copy(), \
Badge130.copy(), Shoulder120.copy()]
upgrades = [3, 5, 6, 2, 2, 2, 6, 3, 0, 0, 2]
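# scroll (upgrade) slot counts per item, in package order; the hammer flag adds one extra slot to each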
if hammer:
upgrades = [i+1 for i in upgrades]
# TODO: simplify this dirty code.
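# scrolls[i] = [count of 100%, 70%, 30% scrolls] applied to item i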
if enhance == 100:
scrolls = [[upgrades[i],0,0] for i in range(11)]
elif enhance == 70:
scrolls = [[0,upgrades[i],0] for i in range(11)]
elif enhance == 30:
scrolls = [[0,0,upgrades[i]] for i in range(11)]
else:
raise TypeError("enhance must be 100, 70, or 30.")
for idx, item in enumerate(package):
item.set_potential(potential)
item.set_additional_potential(additional_potential)
if idx not in [3,4,9,10]:
item.add_main_option(bonus)
item.add_main_option(it.EnhancerFactory.get_armor_starforce_enhancement(item.level, star))
item.add_main_option(it.EnhancerFactory.get_armor_scroll_enhancement(item.level, elist = scrolls[idx]))
return package
@staticmethod
def get11SetDict(star = 0, enhance = 70, potential = it.CharacterModifier(), additional_potential = it.CharacterModifier(), bonus = it.CharacterModifier(), hammer = True):
package = {"eye" : Eye100.copy(), "face" : Face110.copy(), "ear" : Ear130.copy(), "ring1" : Ring110.copy(), \
"ring2" : Ring120.copy(), "pendant1" : Pendant130.copy(), "pendant2" : Pendant140.copy(), \
"belt" : Belt140.copy(), "pocket" : Pocket140.copy(), \
"badge" : Badge130.copy(), "shoulder" : Shoulder120.copy()}
keylist = ["eye", "face", "ear", "ring1", "ring2", "pendant1", "pendant2", "belt", "pocket", "badge", "shoulder"]
upgrades = [3, 5, 6, 2, 2, 2, 6, 3, 0, 0, 2]
if hammer:
upgrades = [i+1 for i in upgrades]
# TODO: simplify this dirty code.
if enhance == 100:
scrolls = [[upgrades[i],0,0] for i in range(11)]
elif enhance == 70:
scrolls = [[0,upgrades[i],0] for i in range(11)]
elif enhance == 30:
scrolls = [[0,0,upgrades[i]] for i in range(11)]
else:
raise TypeError("enhance must be 100, 70, or 30.")
for idx, itemkey in enumerate(keylist):
item = package[itemkey]
item.set_potential(potential)
item.set_additional_potential(additional_potential)
if itemkey not in ["ring1", "ring2","shoulder","badge"]:
item.add_main_option(bonus)
item.add_main_option(it.EnhancerFactory.get_armor_starforce_enhancement(item.level, star))
item.add_main_option(it.EnhancerFactory.get_armor_scroll_enhancement(item.level, elist = scrolls[idx]))
return package
@staticmethod
def getBetter11Set(star = 0, enhance = 70, potential = it.CharacterModifier(), additional_potential = it.CharacterModifier(), bonus = it.CharacterModifier(), hammer = True):
'''getBetter11Set : package of the item set with the following pieces
Eye : Eye100(Zakum) -> Eye150(Papulatus)
Face : Face110(Zakum)
Ear : Ear130(Horntail)
Ring1 : Ring110(Horntail)
Ring2 : Ring120(Von Leon)
! NO Ring3 / Ring4
Pendant1: Pendant130(Arkarium)
Pendant2: Pendant140(Arkarium) -> Pendant140Fragment(Arkarium)
Belt : Belt140(Pink Bean)
Pocket : Pocket140(Pink Bean)
Badge : Badge130(Magnus)
Shoulder : Shoulder120(Magnus)
'''
package = [Eye150.copy(), Face110.copy(), Ear130.copy(), Ring110.copy(), Ring120.copy(), \
Pendant130.copy(), Pendant140Fragment.copy(), Belt140.copy(), Pocket140.copy(), \
Badge130.copy(), Shoulder120.copy()]
upgrades = [5, 5, 6, 2, 2, 2, 0, 3, 0, 0, 2]
if hammer:
upgrades = [i+1 for i in upgrades]
# TODO: simplify this dirty code.
if enhance == 100:
scrolls = [[upgrades[i],0,0] for i in range(11)]
elif enhance == 70:
scrolls = [[0,upgrades[i],0] for i in range(11)]
elif enhance == 30:
scrolls = [[0,0,upgrades[i]] for i in range(11)]
else:
raise TypeError("enhance must be 100, 70, or 30.")
for idx, item in enumerate(package):
item.set_potential(potential)
item.set_additional_potential(additional_potential)
if idx not in [3,4,9,10]:
item.add_main_option(bonus)
item.add_main_option(it.EnhancerFactory.get_armor_starforce_enhancement(item.level, star))
item.add_main_option(it.EnhancerFactory.get_armor_scroll_enhancement(item.level, elist = scrolls[idx]))
return package
@staticmethod
def getBetter11SetDict(star = 0, enhance = 70, potential = it.CharacterModifier(), additional_potential = it.CharacterModifier(), bonus = it.CharacterModifier(), hammer = True):
package = {"eye" : Eye150.copy(), "face" : Face110.copy(), "ear" : Ear130.copy(), "ring1" : Ring110.copy(), \
"ring2" : Ring120.copy(), "pendant1" : Pendant130.copy(), "pendant2" : Pendant140Fragment.copy(), \
"belt" : Belt140.copy(), "pocket" : Pocket140.copy(), \
"badge" : Badge130.copy(), "shoulder" : Shoulder120.copy()}
keylist = ["eye", "face", "ear", "ring1", "ring2", "pendant1", "pendant2", "belt", "pocket", "badge", "shoulder"]
upgrades = [5, 5, 6, 2, 2, 2, 0, 3, 0, 0, 2]
if hammer:
upgrades = [i+1 for i in upgrades]
# TODO: simplify this dirty code.
if enhance == 100:
scrolls = [[upgrades[i],0,0] for i in range(11)]
elif enhance == 70:
scrolls = [[0,upgrades[i],0] for i in range(11)]
elif enhance == 30:
scrolls = [[0,0,upgrades[i]] for i in range(11)]
else:
raise TypeError("enhance must be 100, 70, or 30.")
for idx, itemkey in enumerate(keylist):
item = package[itemkey]
item.set_potential(potential)
item.set_additional_potential(additional_potential)
if itemkey not in ["ring1", "ring2","shoulder","badge"]:
item.add_main_option(bonus)
item.add_main_option(it.EnhancerFactory.get_armor_starforce_enhancement(item.level, star))
item.add_main_option(it.EnhancerFactory.get_armor_scroll_enhancement(item.level, elist = scrolls[idx]))
return package
@staticmethod
def getSetOption(rank):
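# li[i] holds the extra bonus unlocked at (i+1) equipped set pieces; rank pieces accumulate li[0..rank-1]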
li = [it.CharacterModifier(),
it.CharacterModifier(),
it.CharacterModifier(stat_main = 10, stat_sub = 10, att = 5),
it.CharacterModifier(),
it.CharacterModifier(stat_main = 10, stat_sub = 10, att = 5),
it.CharacterModifier(),
it.CharacterModifier(att = 10, stat_main = 10, stat_sub = 10, armor_ignore = 10),
it.CharacterModifier(),
it.CharacterModifier(att = 10, stat_main = 15, stat_sub = 15, boss_pdamage = 10)]
retval = it.CharacterModifier()
for i in range(rank):
retval += li[i]
return retval | 45.59375 | 181 | 0.564868 | 1,231 | 10,213 | 4.5987 | 0.136474 | 0.070482 | 0.021198 | 0.03109 | 0.834128 | 0.832008 | 0.811694 | 0.811694 | 0.780427 | 0.750044 | 0 | 0.092293 | 0.298737 | 10,213 | 224 | 182 | 45.59375 | 0.698129 | 0.11407 | 0 | 0.729927 | 0 | 0 | 0.046184 | 0 | 0 | 0 | 0 | 0.013393 | 0 | 1 | 0.036496 | false | 0 | 0.007299 | 0 | 0.087591 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3f7557405a5e3186cb36ffb66fd38a530c9bd900 | 3,375 | py | Python | src/apps/surveys18/tests/setup.py | travishen/alss-dev | 226e8c4f933de39615775a504191428591962c9f | [
"MIT"
] | null | null | null | src/apps/surveys18/tests/setup.py | travishen/alss-dev | 226e8c4f933de39615775a504191428591962c9f | [
"MIT"
] | null | null | null | src/apps/surveys18/tests/setup.py | travishen/alss-dev | 226e8c4f933de39615775a504191428591962c9f | [
"MIT"
] | null | null | null | from django.core.management import call_command
def setup_fixtures():
call_command("loaddata", "fixtures/surveys18/product-type.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/unit.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/land-status.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/land-type.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/farm-related-business.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/management-type.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/product.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/loss.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/contract.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/income-range.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/market-type.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/age-scope.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/gender.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/relationship.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/education-level.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/farmer-work-day.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/life-style.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/other-farm-work.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/month.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/work-type.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/age-scope.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/lack.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/refuse-reason.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/survey.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/addressmatch.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/annualincome.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/business.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/cropmarketing.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/landarea.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/livestockmarketing.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/longtermhire.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/longtermlack.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/nosalaryhire.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/numberworkers.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/phone.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/population.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/subsidy.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/refuse.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/shorttermhire.yaml", verbosity=0)
call_command("loaddata", "fixtures/surveys18/test/shorttermlack.yaml", verbosity=0)
| 71.808511 | 92 | 0.76 | 405 | 3,375 | 6.22963 | 0.148148 | 0.178755 | 0.301229 | 0.428062 | 0.84344 | 0.84344 | 0.823623 | 0.823623 | 0.585811 | 0.088783 | 0 | 0.038898 | 0.085926 | 3,375 | 46 | 93 | 73.369565 | 0.77893 | 0 | 0 | 0.047619 | 0 | 0 | 0.527704 | 0.432889 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02381 | true | 0 | 0.02381 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3fa2b38162791caf0c9b244be8635aeccecd19be | 8,649 | py | Python | backend/crnn.pytorch-master/tesseractOCR.py | loremacchia/Just-Read-It | 6d8d2cc5fada80d959f5c4bc357c6c9f4a68e688 | [
"MIT"
] | 2 | 2020-07-19T07:45:21.000Z | 2022-02-26T16:53:42.000Z | backend/crnn.pytorch-master/tesseractOCR.py | loremacchia/Just-Read-It | 6d8d2cc5fada80d959f5c4bc357c6c9f4a68e688 | [
"MIT"
] | null | null | null | backend/crnn.pytorch-master/tesseractOCR.py | loremacchia/Just-Read-It | 6d8d2cc5fada80d959f5c4bc357c6c9f4a68e688 | [
"MIT"
] | 1 | 2021-01-22T10:19:42.000Z | 2021-01-22T10:19:42.000Z | import pytesseract
from PIL import Image
import cv2
import numpy as np
import editDistance  # only used by the commented-out compareStrings calls
import calculateDistance
iteration = 0
def computeOCR(img):
image = np.array(img)[:, :, ::-1].copy()
# Grayscale, Gaussian blur, Otsu's threshold
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3, 3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Morph open to remove noise and invert image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
invert = 255 - opening
# Perform text extraction
data = pytesseract.image_to_string(invert, lang='eng', config='--psm 8')
return data
def getString(img, bb):
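# Crop the rotated rect, OCR it upright and rotated 180 degrees, then keep the
# orientation whose dictionary match (calculateDistance.control_distance) is better.
# If neither orientation matches, retry once with the box padded by 6 px per side.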
global iteration
(x, y, w, h) = cv2.boundingRect(bb) # returns (x,y,w,h) of the rect
cropped = img[y: y + h, x: x + w]
if h > w:
cropped = cv2.rotate(cropped, cv2.ROTATE_90_CLOCKWISE)
if cropped is None or len(cropped) <= 0 or len(cropped[0]) <= 0: # MOD
if cropped is None:
return None, None
iteration += 1
# cv2.imwrite("./result/wrongImg/" + str(iteration) + ".jpg", cropped)
return None, None
elif len(cropped[0]) > 0:
cropped = cv2.cvtColor(cropped, cv2.COLOR_BGR2RGB)
croppedPil = Image.fromarray(cropped)
string = computeOCR(croppedPil)  # word read from the original crop
croppedRot = cv2.rotate(cropped, cv2.ROTATE_180)
croppedPilRot = Image.fromarray(croppedRot)
stringRot = computeOCR(croppedPilRot)  # word read from the crop rotated by 180 degrees
# res = editDistance.compareStrings(string, stringRot)
res_not_rotate = calculateDistance.control_distance(string)
res_rotate = calculateDistance.control_distance(stringRot)
res = comparison_rotate(res_not_rotate, res_rotate)
if res is not None:
if res == res_rotate:
string = stringRot
img = cv2.rotate(img, cv2.ROTATE_180) # ??????
cropped = croppedRot
# string1 is the dictionary-corrected word from editDistance/calculateDistance
string1 = res[3]  # res[2] holds the target word, res[3] the closest dictionary word
else:
if x - 6 > 0:
x -= 6
else:
x = 0
if y - 6 > 0:
y -= 6
else:
y = 0
w += 12
h += 12
cropped = img[y: y + h, x: x + w]
if h > w:
cropped = cv2.rotate(cropped, cv2.ROTATE_90_CLOCKWISE)
if len(cropped[0]) > 0:
cropped = cv2.cvtColor(cropped, cv2.COLOR_BGR2RGB)
croppedPil = Image.fromarray(cropped)
string = computeOCR(croppedPil)
croppedRot = cv2.rotate(cropped, cv2.ROTATE_180)
croppedPilRot = Image.fromarray(croppedRot)
stringRot = computeOCR(croppedPilRot)
# res = editDistance.compareStrings(string, stringRot)
res_not_rotate = calculateDistance.control_distance(string)
res_rotate = calculateDistance.control_distance(stringRot)
res = comparison_rotate(res_not_rotate, res_rotate)
if res is not None:
if res == res_rotate:
string = stringRot
img = cv2.rotate(img, cv2.ROTATE_180) # ??????
cropped = croppedRot
string1 = res[3]  # res[2] holds the target word, res[3] the closest dictionary word
else:
string1 = None
if(string1 == "" or string1 is None):
iteration += 1
# string = string.replace("/", "")
# cv2.imwrite("./result/wrongImg/" + string + ".jpg", cropped)
string1 = None
if(string == "" or string is None):
iteration += 1
# string = string.replace("/", "")
# the image is named after the iteration counter rather than the string
# cv2.imwrite("./result/wrongImgStr/" + str(iteration) + ".jpg", cropped)
string1 = None
return string, string1
def getStringnGram(img, bb):
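# same flow as getString, but scores candidates with calculateDistance.nGram instead of control_distance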
global iteration
(x, y, w, h) = cv2.boundingRect(bb) # returns (x,y,w,h) of the rect
cropped = img[y: y + h, x: x + w]
if h > w:
cropped = cv2.rotate(cropped, cv2.ROTATE_90_CLOCKWISE)
if cropped is None or len(cropped) <= 0 or len(cropped[0]) <= 0:  # MOD
if cropped is None:
return None, None
iteration += 1
# cv2.imwrite("./result/wrongImg/" + str(iteration) + ".jpg", cropped)
return None, None
elif len(cropped[0]) > 0:
cropped = cv2.cvtColor(cropped, cv2.COLOR_BGR2RGB)
croppedPil = Image.fromarray(cropped)
string = computeOCR(croppedPil)  # word read from the original crop
croppedRot = cv2.rotate(cropped, cv2.ROTATE_180)
croppedPilRot = Image.fromarray(croppedRot)
stringRot = computeOCR(croppedPilRot)  # word read from the crop rotated by 180 degrees
# res = editDistance.compareStrings(string, stringRot)
res_not_rotate = calculateDistance.nGram(string)
res_rotate = calculateDistance.nGram(stringRot)
res = comparison_rotate(res_not_rotate, res_rotate)
if res is not None:
if res == res_rotate:
string = stringRot
img = cv2.rotate(img, cv2.ROTATE_180) # ??????
cropped = croppedRot
# string1 is the dictionary-corrected word from editDistance/calculateDistance
string1 = res[3]  # res[2] holds the target word, res[3] the closest dictionary word
else:
if x - 6 > 0:
x -= 6
else:
x = 0
if y - 6 > 0:
y -= 6
else:
y = 0
w += 12
h += 12
cropped = img[y: y + h, x: x + w]
if h > w:
cropped = cv2.rotate(cropped, cv2.ROTATE_90_CLOCKWISE)
if len(cropped[0]) > 0:
cropped = cv2.cvtColor(cropped, cv2.COLOR_BGR2RGB)
croppedPil = Image.fromarray(cropped)
string = computeOCR(croppedPil)
croppedRot = cv2.rotate(cropped, cv2.ROTATE_180)
croppedPilRot = Image.fromarray(croppedRot)
stringRot = computeOCR(croppedPilRot)
# res = editDistance.compareStrings(string, stringRot)
res_not_rotate = calculateDistance.nGram(string)
res_rotate = calculateDistance.nGram(stringRot)
res = comparison_rotate(res_not_rotate, res_rotate)
if res is not None:
if res == res_rotate:
string = stringRot
img = cv2.rotate(img, cv2.ROTATE_180) # ??????
cropped = croppedRot
string1 = res[3]  # res[2] holds the target word, res[3] the closest dictionary word
else:
string1 = None
if string1 == "" or string1 is None:
iteration += 1
# string = string.replace("/", "")
# cv2.imwrite("./result/wrongImg/" + string + ".jpg", cropped)
string1 = None
if string == "" or string is None:
iteration += 1
# string = string.replace("/", "")
# the image is named after the iteration counter rather than the string
# cv2.imwrite("./result/wrongImgStr/" + str(iteration) + ".jpg", cropped)
string1 = None
return string, string1
# compares the calculateDistance result with and without rotating the image
def comparison_rotate(not_rotate, rotate):
if not_rotate is None:
return rotate
elif rotate is None:
return not_rotate
# position 0 indicates whether the word was found
if not_rotate[0] > rotate[0]:
return not_rotate
elif not_rotate[0] < rotate[0]:
return rotate
else:
# position 1 holds the match ratio
if not_rotate[1] >= rotate[1]:
return not_rotate
elif not_rotate[1] < rotate[1]:
return rotate
| 37.280172 | 115 | 0.560296 | 980 | 8,649 | 4.869388 | 0.162245 | 0.045264 | 0.040235 | 0.031852 | 0.804275 | 0.804275 | 0.779547 | 0.779547 | 0.779547 | 0.779547 | 0 | 0.034471 | 0.345936 | 8,649 | 231 | 116 | 37.441558 | 0.809086 | 0.206382 | 0 | 0.758824 | 0 | 0 | 0.001466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023529 | false | 0 | 0.076471 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b2056a38d01320bde09d1d2a3cc8c3e027a27ab1 | 2,477 | py | Python | bempp/api/operators/potential/maxwell.py | pescap/bempp-cl | 3a68666e8db0e873d418b734289067483f68f12e | [
"MIT"
] | 70 | 2019-09-04T15:15:05.000Z | 2022-03-22T16:54:40.000Z | bempp/api/operators/potential/maxwell.py | pescap/bempp-cl | 3a68666e8db0e873d418b734289067483f68f12e | [
"MIT"
] | 66 | 2020-01-16T08:31:00.000Z | 2022-03-25T11:18:59.000Z | bempp/api/operators/potential/maxwell.py | pescap/bempp-cl | 3a68666e8db0e873d418b734289067483f68f12e | [
"MIT"
] | 22 | 2019-09-30T08:50:33.000Z | 2022-03-20T19:37:22.000Z | """Maxwell potential operators."""
import numpy as _np
def electric_field(
space,
points,
wavenumber,
parameters=None,
assembler="dense",
device_interface=None,
precision=None,
):
"""Return a Maxwell electric field potential operator."""
from bempp.api.operators import OperatorDescriptor
from bempp.api.assembly.potential_operator import PotentialOperator
from bempp.api.assembly.assembler import PotentialAssembler
import bempp.api
if space.identifier != "rwg0":
raise ValueError("Space must be an RWG type function space.")
if precision is None:
precision = bempp.api.DEFAULT_PRECISION
operator_descriptor = OperatorDescriptor(
"maxwell_electric_field_potential", # Identifier
[_np.real(wavenumber), _np.imag(wavenumber)], # Options
"helmholtz_single_layer", # Kernel type
"maxwell_electric_field", # Assembly type
precision, # Precision
True, # Is complex
None, # Singular part
3, # Kernel dimension
)
return PotentialOperator(
PotentialAssembler(
space, points, operator_descriptor, device_interface, assembler, parameters
)
)
def magnetic_field(
space,
points,
wavenumber,
parameters=None,
assembler="dense",
device_interface=None,
precision=None,
):
"""Return a Maxwell magnetic field potential operator."""
from bempp.api.operators import OperatorDescriptor
from bempp.api.assembly.potential_operator import PotentialOperator
from bempp.api.assembly.assembler import PotentialAssembler
import bempp.api
if space.identifier != "rwg0":
raise ValueError("Space must be an RWG type function space.")
if precision is None:
precision = bempp.api.DEFAULT_PRECISION
operator_descriptor = OperatorDescriptor(
"maxwell_magnetic_field_potential", # Identifier
[_np.real(wavenumber), _np.imag(wavenumber)], # Options
"helmholtz_single_layer", # Kernel type
"maxwell_magnetic_field", # Assembly type
precision, # Precision
True, # Is complex
None, # Singular part
3, # Kernel dimension
)
return PotentialOperator(
PotentialAssembler(
space, points, operator_descriptor, device_interface, assembler, parameters
)
)
| 30.580247 | 88 | 0.655228 | 239 | 2,477 | 6.65272 | 0.242678 | 0.050314 | 0.045283 | 0.050314 | 0.930818 | 0.930818 | 0.930818 | 0.930818 | 0.930818 | 0.930818 | 0 | 0.00221 | 0.269277 | 2,477 | 80 | 89 | 30.9625 | 0.876243 | 0.132015 | 0 | 0.8 | 0 | 0 | 0.12359 | 0.074546 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030769 | false | 0 | 0.138462 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b750c51e61ade107213b8d6500139d161ce52faf | 8,646 | py | Python | graphgenerator.py | tobiasbartel/servicium-instance_manager | 74702ab61481df67c06c6dc7dfd435a4b37126e8 | [
"MIT"
] | null | null | null | graphgenerator.py | tobiasbartel/servicium-instance_manager | 74702ab61481df67c06c6dc7dfd435a4b37126e8 | [
"MIT"
] | null | null | null | graphgenerator.py | tobiasbartel/servicium-instance_manager | 74702ab61481df67c06c6dc7dfd435a4b37126e8 | [
"MIT"
] | null | null | null | from models import *
from servicecatalog.models import READ, WRITE, BOTH
import pydotplus
import re
def instance(my_instance_name, my_payment_methods_list=None):
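"""Render an Instance's dependency graph as an SVG string via pydotplus.
When my_payment_methods_list is given, edges that carry payment methods
are only drawn if they share at least one of the listed methods.
"""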
my_instance = Instance.objects.get(slug=my_instance_name)
if my_payment_methods_list is not None:
my_payment_methods = []
for payment_method in my_payment_methods_list:
my_payment_methods.append(PaymentMethod.objects.get(slug=payment_method))
else:
my_payment_methods = None
ARROW_SIZE = 0.7
FONT_SIZE = 8
graph = pydotplus.Dot(graph_type='digraph', graph_name=my_instance.__unicode__, strict=True)
# graph.set_prog('fdp')
graph.set('splines', 'ortho')
#graph.set('rankdir', 'LR')
graph.set('overlap', 'false')
# graph.set('splines', True)
graph.set('concentrate', True)
# graph.set('nodesep', 0.5)
graph.set('stylesheet', '/static/PaymentFont/css/paymentfont.css')
# graph.set('newrank', True)
node = pydotplus.Node()
node.set_name(my_instance.__unicode__())
node.set('URL', '/instance/%s/' % my_instance.slug)
node.set('fontsize', FONT_SIZE)
node.set('fontname', 'PaymentFont,sans-serif')
node.set('shape', 'box3d ')
node.set('style', 'filled')
node.set('fillcolor', 'gold')
graph.add_node(node)
if my_instance.customer_accesable:
node = pydotplus.Node()
node.set_name('Merchant')
node.set('fontsize', FONT_SIZE)
node.set('fontname', 'PaymentFont,sans-serif')
node.set('fillcolor', 'cornflowerblue')
node.set('style', 'filled')
node.set('shape', 'invhouse')
graph.add_node(node)
edge = pydotplus.Edge('Merchant', my_instance.__unicode__())
edge.set('arrowsize', ARROW_SIZE)
graph.add_edge(edge)
for dependency in InstanceConnectsInstance.objects.all().filter(from_instance=my_instance).iterator():
label = ''
if my_payment_methods is None or len(dependency.payment_methods.values()) == 0:
node = pydotplus.Node()
node.set_name(dependency.to_instance.__unicode__())
node.set('shape', 'box')
node.set('URL', '/instance/%s/' % dependency.to_instance.slug)
node.set('fontsize', FONT_SIZE)
node.set('fontname', 'PaymentFont,sans-serif')
graph.add_node(node)
edge = pydotplus.Edge(dependency.from_instance.__unicode__(), dependency.to_instance.__unicode__())
edge.set('arrowsize', ARROW_SIZE)
edge.set('fontsize', FONT_SIZE)
edge.set('fontname', 'PaymentFont,sans-serif')
if dependency.comment is not None:
edge.set('xlabel', dependency.comment)
if dependency.access_direction == READ:
edge.set('dir', 'back')
elif dependency.access_direction == BOTH:
edge.set('dir', 'both')
if dependency.is_online:
edge.set('color', 'red')
elif dependency.is_online is False:
edge.set('color', 'blue')
graph.add_edge(edge)
else:
filtered_payment_methods = list(set(dependency.payment_methods.iterator()) & set(my_payment_methods))
if len(filtered_payment_methods) > 0:
node = pydotplus.Node()
node.set_name(dependency.to_instance.__unicode__())
node.set('shape', 'box')
node.set('URL', '/instance/%s/' % dependency.to_instance.slug)
node.set('fontsize', FONT_SIZE)
node.set('fontname', 'PaymentFont,sans-serif')
graph.add_node(node)
edge = pydotplus.Edge(dependency.from_instance.__unicode__(), dependency.to_instance.__unicode__())
edge.set('fontname', 'PaymentFont,sans-serif')
edge.set('fontsize', FONT_SIZE)
edge.set('arrowsize', ARROW_SIZE)
if dependency.access_direction == READ:
edge.set('dir', 'back')
elif dependency.access_direction == BOTH:
edge.set('dir', 'both')
if dependency.is_online:
edge.set('color', 'red')
elif dependency.is_online is False:
edge.set('color', 'blue')
for depending_paynment_method in filtered_payment_methods:
if depending_paynment_method.image is not None:
label += "%s" % depending_paynment_method.image
else:
label += "%s" % depending_paynment_method
if dependency.comment:
label = "%s" % (dependency.comment,)
edge.set('xlabel', label)
graph.add_edge(edge)
for dependency in InstanceConnectsModule.objects.all().filter(from_instance=my_instance):
label = ''
if my_payment_methods is None or len(dependency.payment_methods.values()) == 0:
node = pydotplus.Node()
node.set_name(dependency.to_module.__unicode__())
node.set('URL', '/module/%s/' % dependency.to_module.slug)
node.set('fontsize', FONT_SIZE)
node.set('fontname', 'PaymentFont,sans-serif')
if dependency.to_module.is_service:
node.set('shape', 'hexagon')
else:
node.set('shape', 'box')
if dependency.to_module.is_external:
node.set('fillcolor', 'lightgreen')
node.set('style', 'filled')
graph.add_node(node)
edge = pydotplus.Edge(dependency.from_instance.__unicode__(), dependency.to_module.__unicode__())
edge.set('arrowsize', ARROW_SIZE)
edge.set('fontsize', FONT_SIZE)
edge.set('fontname', 'PaymentFont,sans-serif')
if dependency.comment is not None:
edge.set('xlabel', dependency.comment)
if dependency.access_direction == READ:
edge.set('dir', 'back')
elif dependency.access_direction == BOTH:
edge.set('dir', 'both')
if dependency.is_online:
edge.set('color', 'red')
elif dependency.is_online is False:
edge.set('color', 'blue')
graph.add_edge(edge)
else:
filtered_payment_methods = list(set(dependency.payment_methods.iterator()) & set(my_payment_methods))
if len(filtered_payment_methods) > 0:
node = pydotplus.Node()
node.set_name(dependency.to_module.__unicode__())
node.set('shape', 'box')
node.set('URL', '/module/%s/' % dependency.to_module.slug)
node.set('fontsize', FONT_SIZE)
node.set('fontname', 'PaymentFont,sans-serif')
if dependency.to_module.is_service:
node.set('shape', 'hexagon')
else:
node.set('shape', 'box')
if dependency.to_module.is_external:
node.set('fillcolor', 'lightgreen')
node.set('style', 'filled')
graph.add_node(node)
edge = pydotplus.Edge(dependency.from_instance.__unicode__(), dependency.to_module.__unicode__())
edge.set('fontname', 'PaymentFont,sans-serif')
edge.set('fontsize', FONT_SIZE)
edge.set('arrowsize', ARROW_SIZE)
if dependency.access_direction == READ:
edge.set('dir', 'back')
elif dependency.access_direction == BOTH:
edge.set('dir', 'both')
if dependency.is_online:
edge.set('color', 'red')
elif dependency.is_online is False:
edge.set('color', 'blue')
for depending_paynment_method in filtered_payment_methods:
if depending_paynment_method.image is not None:
label += "%s" % depending_paynment_method.image
else:
label += "%s" % depending_paynment_method
if dependency.comment:
label = "%s" % (dependency.comment,)
edge.set('xlabel', label)
graph.add_edge(edge)
my_graph = graph.create(format='svg')
my_graph = re.sub(r"( width=)", " min-width=", my_graph)
my_graph = re.sub(r"( height=)", " min-height=", my_graph)
return my_graph | 41.368421 | 115 | 0.571478 | 929 | 8,646 | 5.087191 | 0.131324 | 0.059247 | 0.033855 | 0.040203 | 0.808083 | 0.787135 | 0.764706 | 0.723445 | 0.721752 | 0.721752 | 0 | 0.001662 | 0.304071 | 8,646 | 209 | 116 | 41.368421 | 0.783779 | 0.014805 | 0 | 0.807018 | 0 | 0 | 0.121344 | 0.030424 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005848 | false | 0 | 0.02924 | 0 | 0.040936 | 0.005848 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b772a475ac26522d96ac69b8441282c616db1c16 | 132 | py | Python | lib/dramatis/runtime/actor/__init__.py | dramatis/dramatis | 1a43a6be1d7e7e9fd2cde052430d6e84700dc822 | [
"MIT"
] | 5 | 2015-11-05T01:51:29.000Z | 2019-04-16T09:09:19.000Z | lib/dramatis/runtime/actor/__init__.py | halorgium/dramatis | 50b35c4e79c33e438cb9f5eeab51ab73119bd75d | [
"MIT"
] | null | null | null | lib/dramatis/runtime/actor/__init__.py | halorgium/dramatis | 50b35c4e79c33e438cb9f5eeab51ab73119bd75d | [
"MIT"
] | 1 | 2022-03-03T19:51:04.000Z | 2022-03-03T19:51:04.000Z | from __future__ import absolute_import
from dramatis.runtime.actor.actor import Actor
from dramatis.runtime.actor.main import Main
| 26.4 | 46 | 0.856061 | 19 | 132 | 5.684211 | 0.421053 | 0.222222 | 0.351852 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098485 | 132 | 4 | 47 | 33 | 0.907563 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
b7828746860f73fc4eef13b1210bdd6ea9119684 | 2,735 | py | Python | CodeStomp/AmyCare/fit/migrations/0011_auto_20201124_1629.py | mayank712jindal/Code-Innovation-Series-ChitkaraUniversity | 43adf0b75a076d3d6821b20c103c8c079655b77e | [
"MIT"
] | null | null | null | CodeStomp/AmyCare/fit/migrations/0011_auto_20201124_1629.py | mayank712jindal/Code-Innovation-Series-ChitkaraUniversity | 43adf0b75a076d3d6821b20c103c8c079655b77e | [
"MIT"
] | null | null | null | CodeStomp/AmyCare/fit/migrations/0011_auto_20201124_1629.py | mayank712jindal/Code-Innovation-Series-ChitkaraUniversity | 43adf0b75a076d3d6821b20c103c8c079655b77e | [
"MIT"
] | null | null | null | # Generated by Django 3.1.3 on 2020-11-24 10:59
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('fit', '0010_auto_20201124_1532'),
]
operations = [
migrations.AlterField(
model_name='doctors',
name='doc_address',
field=models.CharField(default='', max_length=500, null=True),
),
migrations.AlterField(
model_name='doctors',
name='doc_category',
field=models.CharField(default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='doctors',
name='doc_email',
field=models.CharField(default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='doctors',
name='doc_idProof',
field=models.ImageField(default='', null=True, upload_to='fit/doctors'),
),
migrations.AlterField(
model_name='doctors',
name='doc_location',
field=models.CharField(default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='doctors',
name='doc_name',
field=models.CharField(default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='doctors',
name='doc_phone',
field=models.IntegerField(null=True),
),
migrations.AlterField(
model_name='doctors',
name='doc_username',
field=models.CharField(default='', max_length=50, null=True),
),
migrations.AlterField(
model_name='patient',
name='pat_address',
field=models.CharField(default='', max_length=500, null=True),
),
migrations.AlterField(
model_name='patient',
name='pat_email',
field=models.CharField(default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='patient',
name='pat_loc',
field=models.CharField(default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='patient',
name='pat_name',
field=models.CharField(default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='patient',
name='pat_phone',
field=models.IntegerField(null=True),
),
migrations.AlterField(
model_name='patient',
name='pat_username',
field=models.CharField(default='', max_length=10, null=True),
),
]
| 32.559524 | 84 | 0.55064 | 260 | 2,735 | 5.626923 | 0.207692 | 0.191388 | 0.239234 | 0.277512 | 0.837321 | 0.837321 | 0.837321 | 0.718387 | 0.718387 | 0.645249 | 0 | 0.033351 | 0.320293 | 2,735 | 83 | 85 | 32.951807 | 0.753631 | 0.016453 | 0 | 0.688312 | 1 | 0 | 0.102307 | 0.008557 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.012987 | 0 | 0.051948 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b7f2a94952f1d8fd2a0f14470373dd0599dace2d | 16,800 | py | Python | lm/util/datahelper.py | Tou7and/meta-transfer-learning | 1ed18e793c31b79a224a5334ed5a5b8a8ac3e71a | [
"MIT"
] | 43 | 2020-04-25T17:25:04.000Z | 2022-03-12T15:47:05.000Z | lm/util/datahelper.py | Tou7and/meta-transfer-learning | 1ed18e793c31b79a224a5334ed5a5b8a8ac3e71a | [
"MIT"
] | 2 | 2020-07-29T06:50:04.000Z | 2020-09-17T08:56:44.000Z | lm/util/datahelper.py | Tou7and/meta-transfer-learning | 1ed18e793c31b79a224a5334ed5a5b8a8ac3e71a | [
"MIT"
] | 10 | 2020-05-14T17:46:05.000Z | 2022-03-10T19:06:20.000Z | import os
import numpy
from tqdm import tqdm
from . import texthelper
dir_path = os.path.dirname(os.path.realpath(__file__))
def read_seame_phase1():
"""
Recursively iterate phase 1 directories and read all the data
"""
print("> read SEAME corpus")
interview_phase1_dir = dir_path + "/../../../dataset/seame_LDC2015S04/seame/data/interview/transcript/phaseI/"
conversation_phase1_dir = dir_path + "/../../../dataset/seame_LDC2015S04/seame/data/conversation/transcript/phaseI/"
interview_phase1_filenames = []
conversation_phase1_filenames = []
interview_phase1_data = {}
conversation_phase1_data = {}
all_data = {}
vocab = {}
speaker_ids = {}
for root, dirs, files in os.walk(interview_phase1_dir):
for file in files:
if file.endswith(".txt"):
path = os.path.join(root, file)
interview_phase1_filenames.append(path)
for root, dirs, files in os.walk(conversation_phase1_dir):
for file in files:
if file.endswith(".txt"):
path = os.path.join(root, file)
conversation_phase1_filenames.append(path)
print("################################")
print(" SUMMARY ")
print("################################")
print("interview phase 1 files\t\t:", len(interview_phase1_filenames))
print("conversation phase 1 files\t:", len(conversation_phase1_filenames))
print("################################\n")
total_utterances_interview_phase1 = 0
total_utterances_conversation_phase1 = 0
total_utterances_interview_phase1_filtered = 0
total_utterances_conversation_phase1_filtered = 0
print("> read interview phase 1")
for i in tqdm(range(len(interview_phase1_filenames))):
filename = interview_phase1_filenames[i]
with open(filename, "r") as file:
for line in file:
str_id = line.split("_")[0]
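# interview utterance IDs start with the 4-character speaker code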
speaker_id = str_id[0:4]
speaker_ids[speaker_id] = True
arr = line.split("\t")
seq = arr[3]
seq = texthelper.preprocess_mixed_language_sentence(seq)
total_utterances_interview_phase1 += 1
if seq != "":
total_utterances_interview_phase1_filtered += 1
words = seq.split(" ")
for j in range(len(words)):
vocab[words[j]] = True
if speaker_id in interview_phase1_data:
interview_phase1_data[speaker_id].append(seq)
all_data[speaker_id].append(seq)
else:
interview_phase1_data[speaker_id] = []
interview_phase1_data[speaker_id].append(seq)
all_data[speaker_id] = []
all_data[speaker_id].append(seq)
print("> read conversation phase 1")
for i in tqdm(range(len(conversation_phase1_filenames))):
filename = conversation_phase1_filenames[i]
with open(filename, "r") as file:
for line in file:
str_id = line.split("_")[0]
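# conversation utterance IDs carry the speaker code at offsets 2-5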
speaker_id = str_id[2:6]
speaker_ids[speaker_id] = True
arr = line.split("\t")
seq = arr[3]
seq = texthelper.preprocess_mixed_language_sentence(seq)
total_utterances_conversation_phase1 += 1
if seq != "":
total_utterances_conversation_phase1_filtered += 1
words = seq.split(" ")
for j in range(len(words)):
vocab[words[j]] = True
if speaker_id in conversation_phase1_data:
conversation_phase1_data[speaker_id].append(seq)
all_data[speaker_id].append(seq)
else:
conversation_phase1_data[speaker_id] = []
conversation_phase1_data[speaker_id].append(seq)
all_data[speaker_id] = []
all_data[speaker_id].append(seq)
total_utterances = 0
for key in all_data:
total_utterances += len(all_data[key])
print("################################")
print(" OVERVIEW ")
print("################################")
print("number of speaker by speaker_ids:", len(speaker_ids))
print("number of speaker of all utterances:", len(all_data))
print("all utterances:", total_utterances)
print("number of words", len(vocab))
print("total utterances interview_phase1\t:", total_utterances_interview_phase1)
print("total utterances conversation_phase1\t:", total_utterances_conversation_phase1)
print("total utterances interview_phase1_filtered\t:", total_utterances_interview_phase1_filtered)
print("total utterances conversation_phase1_filtered\t:", total_utterances_conversation_phase1_filtered)
print("################################")
return interview_phase1_data, conversation_phase1_data, all_data, vocab
def read_seame():
"""
Recursively iterate directories and read all the data
"""
print("> read SEAME corpus")
interview_phase1_dir = dir_path + "/../../../dataset/seame_LDC2015S04/seame/data/interview/transcript/phaseI/"
interview_phase2_dir = dir_path + "/../../../dataset/seame_LDC2015S04/seame/data/interview/transcript/phaseII/"
conversation_phase1_dir = dir_path + "/../../../dataset/seame_LDC2015S04/seame/data/conversation/transcript/phaseI/"
conversation_phase2_dir = dir_path + "/../../../dataset/seame_LDC2015S04/seame/data/conversation/transcript/phaseII/"
interview_phase1_filenames = []
interview_phase2_filenames = []
conversation_phase1_filenames = []
conversation_phase2_filenames = []
interview_phase1_data = {}
interview_phase2_data = {}
conversation_phase1_data = {}
conversation_phase2_data = {}
all_data = {}
vocab = {}
speaker_ids = {}
for root, dirs, files in os.walk(interview_phase1_dir):
for file in files:
if file.endswith(".txt"):
path = os.path.join(root, file)
interview_phase1_filenames.append(path)
for root, dirs, files in os.walk(interview_phase2_dir):
for file in files:
if file.endswith(".txt"):
path = os.path.join(root, file)
interview_phase2_filenames.append(path)
for root, dirs, files in os.walk(conversation_phase1_dir):
for file in files:
if file.endswith(".txt"):
path = os.path.join(root, file)
conversation_phase1_filenames.append(path)
for root, dirs, files in os.walk(conversation_phase2_dir):
for file in files:
if file.endswith(".txt"):
path = os.path.join(root, file)
conversation_phase2_filenames.append(path)
print("################################")
print(" SUMMARY ")
print("################################")
print("interview phase 1 files\t\t:", len(interview_phase1_filenames))
print("interview phase 2 files\t\t:", len(interview_phase2_filenames))
print("conversation phase 1 files\t:", len(conversation_phase1_filenames))
print("conversation phase 2 files\t:", len(conversation_phase2_filenames))
print("################################\n")
total_utterances_interview_phase1 = 0
total_utterances_interview_phase2 = 0
total_utterances_conversation_phase1 = 0
total_utterances_conversation_phase2 = 0
total_utterances_interview_phase1_filtered = 0
total_utterances_interview_phase2_filtered = 0
total_utterances_conversation_phase1_filtered = 0
total_utterances_conversation_phase2_filtered = 0
print("> read interview phase 1")
for i in tqdm(range(len(interview_phase1_filenames))):
filename = interview_phase1_filenames[i]
with open(filename, "r") as file:
for line in file:
str_id = line.split("_")[0]
speaker_id = str_id[0:4]
speaker_ids[speaker_id] = True
arr = line.split("\t")
seq = arr[3]
seq = texthelper.preprocess_mixed_language_sentence(seq)
total_utterances_interview_phase1 += 1
if seq != "":
total_utterances_interview_phase1_filtered += 1
words = seq.split(" ")
for j in range(len(words)):
vocab[words[j]] = True
if speaker_id in interview_phase1_data:
interview_phase1_data[speaker_id].append(seq)
all_data[speaker_id].append(seq)
else:
interview_phase1_data[speaker_id] = []
interview_phase1_data[speaker_id].append(seq)
all_data[speaker_id] = []
all_data[speaker_id].append(seq)
print("> read interview phase 2")
for i in tqdm(range(len(interview_phase2_filenames))):
filename = interview_phase2_filenames[i]
with open(filename, "r") as file:
for line in file:
str_id = line.split("_")[0]
speaker_id = str_id[0:4]
speaker_ids[speaker_id] = True
arr = line.split("\t")
seq = arr[4]
seq = texthelper.preprocess_mixed_language_sentence(seq, retokenize=False)
total_utterances_interview_phase2 += 1
if seq != "":
total_utterances_interview_phase2_filtered += 1
words = seq.split(" ")
for j in range(len(words)):
vocab[words[j]] = True
if speaker_id in interview_phase2_data:
interview_phase2_data[speaker_id].append(seq)
all_data[speaker_id].append(seq)
else:
interview_phase2_data[speaker_id] = []
interview_phase2_data[speaker_id].append(seq)
all_data[speaker_id] = []
all_data[speaker_id].append(seq)
print("> read conversation phase 1")
for i in tqdm(range(len(conversation_phase1_filenames))):
filename = conversation_phase1_filenames[i]
with open(filename, "r") as file:
for line in file:
str_id = line.split("_")[0]
speaker_id = str_id[2:6]
speaker_ids[speaker_id] = True
arr = line.split("\t")
seq = arr[3]
seq = texthelper.preprocess_mixed_language_sentence(seq)
total_utterances_conversation_phase1 += 1
if seq != "":
total_utterances_conversation_phase1_filtered += 1
words = seq.split(" ")
for j in range(len(words)):
vocab[words[j]] = True
if speaker_id in conversation_phase1_data:
conversation_phase1_data[speaker_id].append(seq)
all_data[speaker_id].append(seq)
else:
conversation_phase1_data[speaker_id] = []
conversation_phase1_data[speaker_id].append(seq)
all_data[speaker_id] = []
all_data[speaker_id].append(seq)
print("> read conversation phase 2")
for i in tqdm(range(len(conversation_phase2_filenames))):
filename = conversation_phase2_filenames[i]
with open(filename, "r") as file:
for line in file:
str_id = line.split("_")[0]
speaker_id = str_id[2:6]
speaker_ids[speaker_id] = True
arr = line.split("\t")
seq = arr[4]
seq = texthelper.preprocess_mixed_language_sentence(seq, retokenize=False)
total_utterances_conversation_phase2 += 1
if seq != "":
total_utterances_conversation_phase2_filtered += 1
words = seq.split(" ")
for j in range(len(words)):
vocab[words[j]] = True
if speaker_id in conversation_phase2_data:
conversation_phase2_data[speaker_id].append(seq)
all_data[speaker_id].append(seq)
else:
conversation_phase2_data[speaker_id] = []
conversation_phase2_data[speaker_id].append(seq)
all_data[speaker_id] = []
all_data[speaker_id].append(seq)
total_utterances = 0
for key in all_data:
total_utterances += len(all_data[key])
print("################################")
print(" OVERVIEW ")
print("################################")
print("number of speaker by speaker_ids:", len(speaker_ids))
print("number of speaker of all utterances:", len(all_data))
print("all utterances:", total_utterances)
print("number of words", len(vocab))
print("total utterances interview_phase1\t:", total_utterances_interview_phase1)
print("total utterances interview_phase2\t:", total_utterances_interview_phase2)
print("total utterances conversation_phase1\t:", total_utterances_conversation_phase1)
print("total utterances conversation_phase2\t:", total_utterances_conversation_phase2)
print("total utterances interview_phase1_filtered\t:", total_utterances_interview_phase1_filtered)
print("total utterances interview_phase2_filtered\t:", total_utterances_interview_phase2_filtered)
print("total utterances conversation_phase1_filtered\t:", total_utterances_conversation_phase1_filtered)
print("total utterances conversation_phase2_filtered\t:", total_utterances_conversation_phase2_filtered)
print("################################")
return interview_phase1_data, interview_phase2_data, conversation_phase1_data, conversation_phase2_data, all_data, vocab
def load_seame_numpy_array():
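# encoding="latin1" is needed to load object arrays that were pickled under Python 2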
interview_phase1_data = numpy.load(dir_path + "/../data/seame/numpy_array/interview_phase1_data.npy", encoding="latin1")
interview_phase2_data = numpy.load(dir_path + "/../data/seame/numpy_array/interview_phase2_data.npy", encoding="latin1")
conversation_phase1_data = numpy.load(dir_path + "/../data/seame/numpy_array/conversation_phase1_data.npy", encoding="latin1")
conversation_phase2_data = numpy.load(dir_path + "/../data/seame/numpy_array/conversation_phase2_data.npy", encoding="latin1")
vocab = numpy.load(dir_path + "/../data/seame/numpy_array/vocab.npy", encoding="latin1")
return interview_phase1_data, interview_phase2_data, conversation_phase1_data, conversation_phase2_data, vocab
def save_seame(interview_phase1_data, interview_phase2_data, conversation_phase1_data, conversation_phase2_data, all_data, vocab):
    # parameter order matches the tuple returned by the reader and by
    # load_seame_numpy_array, so positional calls cannot swap the phases
numpy.save("preprocess/SEAME/arr/interview_phase1_data", interview_phase1_data)
numpy.save("preprocess/SEAME/arr/interview_phase2_data", interview_phase2_data)
numpy.save("preprocess/SEAME/arr/conversation_phase1_data", conversation_phase1_data)
numpy.save("preprocess/SEAME/arr/conversation_phase2_data", conversation_phase2_data)
numpy.save("preprocess/SEAME/arr/all_data", all_data)
numpy.save("preprocess/SEAME/arr/vocab", vocab) | 45.040214 | 131 | 0.568036 | 1,722 | 16,800 | 5.239257 | 0.060976 | 0.089781 | 0.051873 | 0.050543 | 0.918865 | 0.868322 | 0.822212 | 0.796608 | 0.77178 | 0.748282 | 0 | 0.022365 | 0.32131 | 16,800 | 373 | 132 | 45.040214 | 0.7689 | 0.006845 | 0 | 0.783784 | 0 | 0 | 0.159573 | 0.096124 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013514 | false | 0 | 0.037162 | 0 | 0.060811 | 0.168919 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# ============================================================================
# tests/test_year_2019.py | repo: l0pht511/jpholiday | license: MIT
# ============================================================================
# coding: utf-8
import datetime
import unittest
import jpholiday
class TestYear2019(unittest.TestCase):
def test_holiday(self):
"""
2019年祝日
"""
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 1, 1)), '元日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 1, 14)), '成人の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 2, 11)), '建国記念の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 3, 21)), '春分の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 4, 29)), '昭和の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 4, 30)), '国民の休日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 5, 1)), '天皇の即位の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 5, 2)), '国民の休日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 5, 3)), '憲法記念日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 5, 4)), 'みどりの日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 5, 5)), 'こどもの日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 5, 6)), 'こどもの日 振替休日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 7, 15)), '海の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 8, 11)), '山の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 8, 12)), '山の日 振替休日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 9, 16)), '敬老の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 9, 23)), '秋分の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 10, 14)), '体育の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 10, 22)), '即位礼正殿の儀')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 11, 3)), '文化の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 11, 4)), '文化の日 振替休日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2019, 11, 23)), '勤労感謝の日')
def test_count_month(self):
"""
2019年月祝日数
"""
self.assertEqual(len(jpholiday.month_holidays(2019, 1)), 2)
self.assertEqual(len(jpholiday.month_holidays(2019, 2)), 1)
self.assertEqual(len(jpholiday.month_holidays(2019, 3)), 1)
self.assertEqual(len(jpholiday.month_holidays(2019, 4)), 2)
self.assertEqual(len(jpholiday.month_holidays(2019, 5)), 6)
self.assertEqual(len(jpholiday.month_holidays(2019, 6)), 0)
self.assertEqual(len(jpholiday.month_holidays(2019, 7)), 1)
self.assertEqual(len(jpholiday.month_holidays(2019, 8)), 2)
self.assertEqual(len(jpholiday.month_holidays(2019, 9)), 2)
self.assertEqual(len(jpholiday.month_holidays(2019, 10)), 2)
self.assertEqual(len(jpholiday.month_holidays(2019, 11)), 3)
self.assertEqual(len(jpholiday.month_holidays(2019, 12)), 0)
def test_count_year(self):
"""
2019年祝日数
"""
self.assertEqual(len(jpholiday.year_holidays(2019)), 22)
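
# Hedged addition (not in the original file): a standard unittest entry point
# so these 2019 holiday checks can also be run directly with
# `python tests/test_year_2019.py`, in addition to a test runner.
if __name__ == "__main__":
    unittest.main()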
# ============================================================================
# tests/dataset/test_synthetic_slate.py | repo: Minyus/zr-obp | license: Apache-2.0
# ============================================================================
from typing import List
import pytest
import numpy as np
import pandas as pd
from obp.dataset import (
linear_reward_function,
logistic_reward_function,
linear_behavior_policy_logit,
SyntheticSlateBanditDataset,
)
from obp.types import BanditFeedback
# n_unique_action, len_list, dim_context, reward_type, reward_structure, decay_function, click_model, eta, random_state, err, description
invalid_input_of_init = [
(
"4",
3,
2,
"binary",
"independent",
"exponential",
"pbm",
1.0,
1,
ValueError,
"n_unique_action must be an integer larger than 1",
),
(
1,
3,
2,
"binary",
"independent",
"exponential",
"pbm",
1.0,
1,
ValueError,
"n_unique_action must be an integer larger than 1",
),
(
5,
"4",
2,
"binary",
"independent",
"exponential",
"pbm",
1.0,
1,
ValueError,
"len_list must be an integer larger than",
),
(
5,
-1,
2,
"binary",
"independent",
"exponential",
"pbm",
1.0,
1,
ValueError,
"len_list must be an integer larger than",
),
(
5,
10,
2,
"binary",
"independent",
"exponential",
"pbm",
1.0,
1,
ValueError,
"len_list must be equal to or smaller than",
),
(
5,
3,
0,
"binary",
"independent",
"exponential",
"pbm",
1.0,
1,
ValueError,
"dim_context must be a positive integer",
),
(
5,
3,
"2",
"binary",
"independent",
"exponential",
"pbm",
1.0,
1,
ValueError,
"dim_context must be a positive integer",
),
(
5,
3,
2,
"aaa",
"independent",
"exponential",
"pbm",
1.0,
1,
ValueError,
"reward_type must be either",
),
(
5,
3,
2,
"binary",
"aaa",
"exponential",
"pbm",
1.0,
1,
ValueError,
"reward_structure must be one of",
),
(
5,
3,
2,
"binary",
"independent",
"aaa",
"pbm",
1.0,
1,
ValueError,
"decay_function must be either",
),
(
5,
3,
2,
"binary",
"independent",
"exponential",
"aaa",
1.0,
1,
ValueError,
"click_model must be one of",
),
(
5,
3,
2,
"binary",
"independent",
"exponential",
"pbm",
"aaa",
1,
TypeError,
"`eta` must be an instance of <class 'float'>, not <class 'str'>.",
),
(
5,
3,
2,
"binary",
"independent",
"exponential",
"pbm",
-1.0,
1,
ValueError,
"`eta`= -1.0, must be >= 0.0.",
),
(
5,
3,
2,
"binary",
"independent",
"exponential",
"pbm",
1.0,
"x",
ValueError,
"random_state must be an integer",
),
(
5,
3,
2,
"binary",
"independent",
"exponential",
"pbm",
1.0,
None,
ValueError,
"random_state must be an integer",
),
]
@pytest.mark.parametrize(
"n_unique_action, len_list, dim_context, reward_type, reward_structure, decay_function, click_model, eta, random_state, err, description",
invalid_input_of_init,
)
def test_synthetic_slate_init_using_invalid_inputs(
n_unique_action,
len_list,
dim_context,
reward_type,
reward_structure,
decay_function,
click_model,
eta,
random_state,
err,
description,
):
with pytest.raises(err, match=f"{description}*"):
_ = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
decay_function=decay_function,
click_model=click_model,
eta=eta,
random_state=random_state,
)
def check_slate_bandit_feedback(
bandit_feedback: BanditFeedback, is_factorizable: bool = False
):
# check pscore columns
pscore_columns: List[str] = []
pscore_candidate_columns = [
"pscore_cascade",
"pscore",
"pscore_item_position",
]
    # collect only the pscore columns that are actually present and non-None,
    # so that the "at least one pscore column" assertion below is meaningful
    for column in pscore_candidate_columns:
        if column in bandit_feedback and bandit_feedback[column] is not None:
            pscore_columns.append(column)
assert (
len(pscore_columns) > 0
), f"bandit feedback must contain at least one of the following pscore columns: {pscore_candidate_columns}"
bandit_feedback_df = pd.DataFrame()
for column in ["slate_id", "position", "action"] + pscore_columns:
bandit_feedback_df[column] = bandit_feedback[column]
# sort dataframe
bandit_feedback_df = (
bandit_feedback_df.sort_values(["slate_id", "position"])
.reset_index(drop=True)
.copy()
)
# check uniqueness
assert (
bandit_feedback_df.duplicated(["slate_id", "position"]).sum() == 0
), "position must not be duplicated in each slate"
assert (
bandit_feedback_df.duplicated(["slate_id", "action"]).sum() == 0
if not is_factorizable
else True
), "action must not be duplicated in each slate"
# check pscores
for column in pscore_columns:
invalid_pscore_flgs = (bandit_feedback_df[column] < 0) | (
bandit_feedback_df[column] > 1
)
assert invalid_pscore_flgs.sum() == 0, "the range of pscores must be [0, 1]"
if "pscore_cascade" in pscore_columns and "pscore" in pscore_columns:
assert (
bandit_feedback_df["pscore_cascade"] < bandit_feedback_df["pscore"]
).sum() == 0, "pscore must be smaller than or equal to pscore_cascade"
if "pscore_item_position" in pscore_columns and "pscore" in pscore_columns:
assert (
bandit_feedback_df["pscore_item_position"] < bandit_feedback_df["pscore"]
).sum() == 0, "pscore must be smaller than or equal to pscore_item_position"
if "pscore_item_position" in pscore_columns and "pscore_cascade" in pscore_columns:
assert (
bandit_feedback_df["pscore_item_position"]
< bandit_feedback_df["pscore_cascade"]
).sum() == 0, (
"pscore_cascade must be smaller than or equal to pscore_item_position"
)
if "pscore_cascade" in pscore_columns:
previous_minimum_pscore_cascade = (
bandit_feedback_df.groupby("slate_id")["pscore_cascade"]
.expanding()
.min()
.values
)
assert (
previous_minimum_pscore_cascade < bandit_feedback_df["pscore_cascade"]
).sum() == 0, "pscore_cascade must be non-decresing sequence in each slate"
if "pscore" in pscore_columns:
count_pscore_in_expression = bandit_feedback_df.groupby("slate_id").apply(
lambda x: x["pscore"].unique().shape[0]
)
assert (
count_pscore_in_expression != 1
).sum() == 0, "pscore must be unique in each slate"
if "pscore" in pscore_columns and "pscore_cascade" in pscore_columns:
last_slot_feedback_df = bandit_feedback_df.drop_duplicates(
"slate_id", keep="last"
)
assert (
last_slot_feedback_df["pscore"] != last_slot_feedback_df["pscore_cascade"]
).sum() == 0, "pscore must be the same as pscore_cascade in the last slot"
def test_synthetic_slate_obtain_batch_bandit_feedback_using_uniform_random_behavior_policy():
# set parameters
n_unique_action = 10
len_list = 3
dim_context = 2
reward_type = "binary"
random_state = 12345
n_rounds = 100
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
)
# obtain feedback
bandit_feedback = dataset.obtain_batch_bandit_feedback(n_rounds=n_rounds)
# check slate bandit feedback (common test)
check_slate_bandit_feedback(bandit_feedback=bandit_feedback)
pscore_columns = [
"pscore_cascade",
"pscore",
"pscore_item_position",
]
bandit_feedback_df = pd.DataFrame()
for column in ["slate_id", "position", "action"] + pscore_columns:
bandit_feedback_df[column] = bandit_feedback[column]
# check pscore marginal
pscore_item_position = 1 / n_unique_action
assert np.allclose(
bandit_feedback_df["pscore_item_position"].unique(), pscore_item_position
), f"pscore_item_position must be [{pscore_item_position}], but {bandit_feedback_df['pscore_item_position'].unique()}"
# check pscore joint
pscore_cascade = []
pscore_above = 1.0
for position_ in np.arange(len_list):
pscore_above *= 1.0 / (n_unique_action - position_)
pscore_cascade.append(pscore_above)
assert np.allclose(
bandit_feedback_df["pscore_cascade"], np.tile(pscore_cascade, n_rounds)
), f"pscore_cascade must be {pscore_cascade} for all slates"
assert np.allclose(
bandit_feedback_df["pscore"].unique(), [pscore_above]
), f"pscore must be {pscore_above} for all slates"
def test_synthetic_slate_obtain_batch_bandit_feedback_using_uniform_random_factorizable_behavior_policy():
# set parameters
n_unique_action = 10
len_list = 3
dim_context = 2
reward_type = "binary"
random_state = 12345
n_rounds = 100
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
is_factorizable=True,
random_state=random_state,
)
# obtain feedback
bandit_feedback = dataset.obtain_batch_bandit_feedback(n_rounds=n_rounds)
# check slate bandit feedback (common test)
check_slate_bandit_feedback(bandit_feedback=bandit_feedback, is_factorizable=True)
pscore_columns = [
"pscore_cascade",
"pscore",
"pscore_item_position",
]
bandit_feedback_df = pd.DataFrame()
for column in ["slate_id", "position", "action"] + pscore_columns:
bandit_feedback_df[column] = bandit_feedback[column]
# check pscore marginal
pscore_item_position = 1 / n_unique_action
assert np.allclose(
bandit_feedback_df["pscore_item_position"].unique(), pscore_item_position
), f"pscore_item_position must be [{pscore_item_position}], but {bandit_feedback_df['pscore_item_position'].unique()}"
# check pscore joint
pscore_cascade = []
pscore_above = 1.0
for position_ in np.arange(len_list):
pscore_above *= 1.0 / n_unique_action
pscore_cascade.append(pscore_above)
assert np.allclose(
bandit_feedback_df["pscore_cascade"], np.tile(pscore_cascade, n_rounds)
), f"pscore_cascade must be {pscore_cascade} for all slates"
assert np.allclose(
bandit_feedback_df["pscore"].unique(), [pscore_above]
), f"pscore must be {pscore_above} for all slates"
def test_synthetic_slate_obtain_batch_bandit_feedback_using_uniform_random_behavior_policy_largescale():
# set parameters
n_unique_action = 100
len_list = 10
dim_context = 2
reward_type = "binary"
random_state = 12345
n_rounds = 10000
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
)
# obtain feedback
bandit_feedback = dataset.obtain_batch_bandit_feedback(n_rounds=n_rounds)
# check slate bandit feedback (common test)
check_slate_bandit_feedback(bandit_feedback=bandit_feedback)
# check pscore marginal
pscore_item_position = 1 / n_unique_action
assert np.allclose(
np.unique(bandit_feedback["pscore_item_position"]), pscore_item_position
), f"pscore_item_position must be [{pscore_item_position}], but {np.unique(bandit_feedback['pscore_item_position'])}"
def test_synthetic_slate_obtain_batch_bandit_feedback_using_linear_behavior_policy():
# set parameters
n_unique_action = 10
len_list = 3
dim_context = 2
reward_type = "binary"
random_state = 12345
n_rounds = 100
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
behavior_policy_function=linear_behavior_policy_logit,
)
with pytest.raises(ValueError):
_ = dataset.obtain_batch_bandit_feedback(n_rounds=-1)
with pytest.raises(ValueError):
_ = dataset.obtain_batch_bandit_feedback(n_rounds="a")
# obtain feedback
bandit_feedback = dataset.obtain_batch_bandit_feedback(n_rounds=n_rounds)
# check slate bandit feedback (common test)
check_slate_bandit_feedback(bandit_feedback=bandit_feedback)
# print reward
pscore_columns = [
"pscore_cascade",
"pscore",
"pscore_item_position",
]
bandit_feedback_df = pd.DataFrame()
for column in ["slate_id", "position", "action", "reward"] + pscore_columns:
bandit_feedback_df[column] = bandit_feedback[column]
print(bandit_feedback_df.groupby("position")["reward"].describe())
if reward_type == "binary":
assert set(np.unique(bandit_feedback["reward"])) == set([0, 1])
def test_synthetic_slate_obtain_batch_bandit_feedback_using_linear_behavior_policy_without_pscore_item_position():
# set parameters
n_unique_action = 80
len_list = 3
dim_context = 2
reward_type = "binary"
random_state = 12345
n_rounds = 100
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
behavior_policy_function=linear_behavior_policy_logit,
)
# obtain feedback
bandit_feedback = dataset.obtain_batch_bandit_feedback(
n_rounds=n_rounds, return_pscore_item_position=False
)
# check slate bandit feedback (common test)
check_slate_bandit_feedback(bandit_feedback=bandit_feedback)
assert (
bandit_feedback["pscore_item_position"] is None
), f"pscore marginal must be None, but {bandit_feedback['pscore_item_position']}"
# random seed should be fixed
dataset2 = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
behavior_policy_function=linear_behavior_policy_logit,
)
# obtain feedback
bandit_feedback2 = dataset2.obtain_batch_bandit_feedback(
n_rounds=n_rounds, return_pscore_item_position=False
)
# check slate bandit feedback (common test)
check_slate_bandit_feedback(bandit_feedback=bandit_feedback2)
# check random seed effect
assert np.allclose(
bandit_feedback["expected_reward_factual"],
bandit_feedback2["expected_reward_factual"],
)
if reward_type == "binary":
assert set(np.unique(bandit_feedback["reward"])) == set([0, 1])
# n_unique_action, len_list, dim_context, reward_type, random_state, n_rounds, reward_structure, decay_function, click_model, eta, behavior_policy_function, is_factorizable, reward_function, return_pscore_item_position, description
valid_input_of_obtain_batch_bandit_feedback = [
(
10,
3,
2,
"binary",
123,
1000,
"standard_additive",
"exponential",
None,
1.0,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"standard_additive",
),
(
10,
3,
2,
"binary",
123,
1000,
"independent",
"exponential",
None,
1.0,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"independent",
),
(
10,
3,
2,
"binary",
123,
1000,
"cascade_additive",
"exponential",
None,
1.0,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"cascade_additive",
),
(
10,
3,
2,
"continuous",
123,
1000,
"standard_additive",
"exponential",
None,
1.0,
linear_behavior_policy_logit,
False,
linear_reward_function,
False,
"standard_additive continuous",
),
(
10,
3,
2,
"continuous",
123,
1000,
"independent",
"exponential",
None,
1.0,
linear_behavior_policy_logit,
False,
linear_reward_function,
False,
"independent continuous",
),
(
10,
3,
2,
"continuous",
123,
1000,
"cascade_additive",
"exponential",
None,
1.0,
linear_behavior_policy_logit,
False,
linear_reward_function,
False,
"cascade_additive continuous",
),
(
10,
3,
2,
"continuous",
123,
1000,
"cascade_additive",
"exponential",
None,
0.0,
None,
False,
None,
False,
"Random policy and reward function (continuous reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"cascade_decay",
"exponential",
None,
0.0,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"cascade_decay (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"cascade_decay",
"inverse",
None,
0.0,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"cascade_decay (binary reward)",
),
(
10,
3,
2,
"continuous",
123,
1000,
"cascade_decay",
"exponential",
None,
0.0,
linear_behavior_policy_logit,
False,
linear_reward_function,
False,
"cascade_decay (continuous reward)",
),
(
10,
3,
2,
"continuous",
123,
1000,
"cascade_decay",
"inverse",
None,
0.0,
linear_behavior_policy_logit,
False,
linear_reward_function,
False,
"cascade_decay (continuous reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"standard_decay",
"exponential",
None,
0.0,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"standard_decay (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"standard_decay",
"inverse",
None,
0.0,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"standard_decay (binary reward)",
),
(
10,
3,
2,
"continuous",
123,
1000,
"standard_decay",
"exponential",
None,
0.0,
linear_behavior_policy_logit,
False,
linear_reward_function,
False,
"standard_decay (continuous reward)",
),
(
10,
3,
2,
"continuous",
123,
1000,
"standard_decay",
"inverse",
None,
0.0,
linear_behavior_policy_logit,
False,
linear_reward_function,
False,
"standard_decay (continuous reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"cascade_additive",
"exponential",
"cascade",
0.0,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"cascade_additive, cascade click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"cascade_decay",
"exponential",
"cascade",
0.5,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"cascade_decay, cascade click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"standard_additive",
"exponential",
"cascade",
0.5,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"standard_additive, cascade click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"standard_decay",
"exponential",
"cascade",
0.5,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"standard_decay, cascade click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"independent",
"exponential",
"cascade",
0.5,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"independent, cascade click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"cascade_additive",
"exponential",
"pbm",
0.5,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"cascade_additive, pbm click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"cascade_decay",
"exponential",
"pbm",
0.5,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"cascade_decay, pbm click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"standard_additive",
"exponential",
"pbm",
0.5,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"standard_additive, pbm click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"standard_decay",
"exponential",
"pbm",
0.5,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"standard_decay, pbm click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"independent",
"exponential",
"pbm",
0.5,
linear_behavior_policy_logit,
False,
logistic_reward_function,
False,
"independent, pbm click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"independent",
"exponential",
"pbm",
0.5,
linear_behavior_policy_logit,
True,
logistic_reward_function,
False,
"independent, pbm click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"independent",
"exponential",
"pbm",
0.5,
None,
False,
logistic_reward_function,
False,
"independent, pbm click model (binary reward)",
),
(
10,
3,
2,
"binary",
123,
1000,
"independent",
"exponential",
"pbm",
0.5,
None,
True,
logistic_reward_function,
False,
"independent, pbm click model (binary reward)",
),
(
3,
5,
2,
"binary",
123,
1000,
"independent",
"exponential",
"pbm",
0.5,
None,
True,
logistic_reward_function,
False,
"independent, pbm click model (binary reward)",
),
]
@pytest.mark.parametrize(
"n_unique_action, len_list, dim_context, reward_type, random_state, n_rounds, reward_structure, decay_function, click_model, eta, behavior_policy_function, is_factorizable, reward_function, return_pscore_item_position, description",
valid_input_of_obtain_batch_bandit_feedback,
)
def test_synthetic_slate_using_valid_inputs(
n_unique_action,
len_list,
dim_context,
reward_type,
random_state,
n_rounds,
reward_structure,
decay_function,
click_model,
eta,
behavior_policy_function,
is_factorizable,
reward_function,
return_pscore_item_position,
description,
):
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
decay_function=decay_function,
click_model=click_model,
eta=eta,
random_state=random_state,
behavior_policy_function=behavior_policy_function,
is_factorizable=is_factorizable,
base_reward_function=reward_function,
)
# obtain feedback
bandit_feedback = dataset.obtain_batch_bandit_feedback(
n_rounds=n_rounds, return_pscore_item_position=return_pscore_item_position
)
# check slate bandit feedback (common test)
check_slate_bandit_feedback(
bandit_feedback=bandit_feedback, is_factorizable=is_factorizable
)
pscore_columns = [
"pscore_cascade",
"pscore",
"pscore_item_position",
]
bandit_feedback_df = pd.DataFrame()
for column in [
"slate_id",
"position",
"action",
"reward",
"expected_reward_factual",
] + pscore_columns:
bandit_feedback_df[column] = bandit_feedback[column]
print(f"-------{description}--------")
print(bandit_feedback_df.groupby("position")["reward"].describe())
if reward_type == "binary":
assert set(np.unique(bandit_feedback["reward"])) == set([0, 1])
n_rounds = 5
len_list = 3
# slate_id, reward, description
invalid_input_of_calc_on_policy_policy_value = [
(
np.repeat(np.arange(n_rounds), len_list),
"4", #
"reward must be ndarray",
),
(
np.repeat(np.arange(n_rounds), len_list),
np.zeros((n_rounds, len_list), dtype=int), #
"reward must be 1-dimensional",
),
(
"4", #
np.zeros(n_rounds * len_list, dtype=int),
"slate_id must be ndarray",
),
(
np.repeat(np.arange(n_rounds), len_list).reshape((n_rounds, len_list)), #
np.zeros(n_rounds * len_list, dtype=int),
"slate_id must be 1-dimensional",
),
(
np.repeat(np.arange(n_rounds), len_list),
np.zeros(n_rounds * len_list - 1, dtype=int), #
"the size of axis 0 of reward must be the same as that of slate_id",
),
]
@pytest.mark.parametrize(
"slate_id, reward, description",
invalid_input_of_calc_on_policy_policy_value,
)
def test_calc_on_policy_policy_value_using_invalid_input_data(
slate_id, reward, description
) -> None:
# set parameters
n_unique_action = 10
len_list = 3
dim_context = 2
reward_type = "binary"
random_state = 12345
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
)
with pytest.raises(ValueError, match=f"{description}*"):
_ = dataset.calc_on_policy_policy_value(reward=reward, slate_id=slate_id)
# slate_id, reward, result, description
valid_input_of_calc_on_policy_policy_value = [
(
np.array([1, 1, 2, 2, 3, 4]),
np.array([0, 1, 1, 0, 0, 0]),
0.5,
"4 slate ids",
),
(
np.array([1, 1]),
np.array([2, 3]),
5,
"one slate id",
),
]
@pytest.mark.parametrize(
"slate_id, reward, result, description",
valid_input_of_calc_on_policy_policy_value,
)
def test_calc_on_policy_policy_value_using_valid_input_data(
slate_id, reward, result, description
) -> None:
# set parameters
n_unique_action = 10
len_list = 3
dim_context = 2
reward_type = "binary"
random_state = 12345
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
behavior_policy_function=linear_behavior_policy_logit,
)
assert result == dataset.calc_on_policy_policy_value(
reward=reward, slate_id=slate_id
)
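
# Editor's sketch (not part of the original tests): the expected values above
# are just (total reward) / (number of distinct slates). First case: rewards
# sum to 2 over 4 slates -> 0.5; second case: 2 + 3 = 5 over 1 slate -> 5.
# The helper below assumes this is the estimator's definition.
def _on_policy_value_sketch(slate_id, reward):
    return reward.sum() / np.unique(slate_id).shape[0]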
# evaluation_policy_type, epsilon, context, action, err, description
invalid_input_of_generate_evaluation_policy_pscore = [
(
"awesome", #
1.0,
np.ones([5, 2]),
np.tile(np.arange(3), 5),
ValueError,
"evaluation_policy_type must be",
),
(
"optimal",
1.0,
np.array([5, 2]), #
np.tile(np.arange(3), 5),
ValueError,
"context must be 2-dimensional ndarray",
),
(
"optimal",
1.0,
np.ones([5, 2]),
np.ones([5, 2]), #
ValueError,
"action must be 1-dimensional ndarray",
),
(
"optimal",
1.0,
np.ones([5, 2]),
np.random.choice(5), #
ValueError,
"action must be 1-dimensional ndarray",
),
(
"optimal",
1.0,
np.ones([5, 2]),
np.ones(5), #
ValueError,
"action must be 1-dimensional ndarray, shape (n_rounds * len_list)",
),
(
"optimal",
"aaa", #
np.ones([5, 2]),
np.tile(np.arange(3), 5),
TypeError,
"`epsilon` must be an instance of <class 'float'>, not <class 'str'>.",
),
(
"optimal",
-1.0, #
np.ones([5, 2]),
np.tile(np.arange(3), 5),
ValueError,
"`epsilon`= -1.0, must be >= 0.0.",
),
(
"optimal",
2.0, #
np.ones([5, 2]),
np.tile(np.arange(3), 5),
ValueError,
"`epsilon`= 2.0, must be <= 1.0.",
),
]
@pytest.mark.parametrize(
"evaluation_policy_type, epsilon, context, action, err, description",
invalid_input_of_generate_evaluation_policy_pscore,
)
def test_generate_evaluation_policy_pscore_using_invalid_input_data(
evaluation_policy_type,
epsilon,
context,
action,
err,
description,
) -> None:
# set parameters
n_unique_action = 10
len_list = 3
dim_context = 2
reward_type = "binary"
random_state = 12345
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
base_reward_function=logistic_reward_function,
)
with pytest.raises(err, match=f"{description}*"):
_ = dataset.generate_evaluation_policy_pscore(
evaluation_policy_type=evaluation_policy_type,
epsilon=epsilon,
context=context,
action=action,
)
# n_unique_action, is_factorizable, evaluation_policy_type, epsilon, description
valid_input_of_generate_evaluation_policy_pscore = [
(
10,
False,
"optimal",
0.1,
"optimal evaluation policy",
),
(
10,
True,
"optimal",
0.1,
"optimal evaluation policy",
),
(
10,
False,
"anti-optimal",
0.1,
"anti-optimal evaluation policy",
),
(
10,
True,
"random",
None,
"random evaluation policy",
),
(
10,
False,
"optimal",
0.0,
"optimal evaluation policy, epsilon=0.0 (greedy)",
),
(
10,
True,
"optimal",
1.0,
"optimal evaluation policy, epsilon=1.0 (random)",
),
(
2,
True,
"optimal",
1.0,
"optimal evaluation policy, epsilon=1.0 (random)",
),
]
@pytest.mark.parametrize(
"n_unique_action, is_factorizable, evaluation_policy_type, epsilon, description",
valid_input_of_generate_evaluation_policy_pscore,
)
def test_generate_evaluation_policy_pscore_using_valid_input_data(
n_unique_action,
is_factorizable,
evaluation_policy_type,
epsilon,
description,
) -> None:
# set parameters
len_list = 3
dim_context = 2
reward_type = "binary"
random_state = 12345
n_rounds = 100
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
is_factorizable=is_factorizable,
base_reward_function=logistic_reward_function,
)
# obtain feedback
bandit_feedback = dataset.obtain_batch_bandit_feedback(
n_rounds=n_rounds, return_pscore_item_position=True
)
# generate pscores
(
pscore,
pscore_item_position,
pscore_cascade,
) = dataset.generate_evaluation_policy_pscore(
evaluation_policy_type=evaluation_policy_type,
context=bandit_feedback["context"],
epsilon=epsilon,
action=bandit_feedback["action"],
)
if evaluation_policy_type == "random" or epsilon == 1.0:
# pscores of random evaluation policy must be the same as those of bandit feedback using random behavior policy
assert np.allclose(pscore, bandit_feedback["pscore"])
assert np.allclose(
pscore_item_position, bandit_feedback["pscore_item_position"]
)
assert np.allclose(pscore_cascade, bandit_feedback["pscore_cascade"])
if epsilon == 0.0:
# pscore element of greedy evaluation policy must be either 0 or 1
assert len(set(np.unique(pscore)) - set([0.0, 1.0])) == 0
assert len(set(np.unique(pscore_item_position)) - set([0.0, 1.0])) == 0
assert len(set(np.unique(pscore_cascade)) - set([0.0, 1.0])) == 0
# check pscores
assert (
pscore_cascade < pscore
).sum() == 0, "pscore must be smaller than or equal to pscore_cascade"
assert (
pscore_item_position < pscore
).sum() == 0, "pscore must be smaller than or equal to pscore_item_position"
assert (
pscore_item_position < pscore_cascade
).sum() == 0, "pscore_cascade must be smaller than or equal to pscore_item_position"
# check slate bandit feedback (common test)
check_slate_bandit_feedback(
bandit_feedback=bandit_feedback, is_factorizable=is_factorizable
)
bandit_feedback_df = pd.DataFrame()
for column in ["slate_id", "position", "action"]:
bandit_feedback_df[column] = bandit_feedback[column]
bandit_feedback_df["pscore"] = pscore
bandit_feedback_df["pscore_cascade"] = pscore_cascade
bandit_feedback_df["pscore_item_position"] = pscore_item_position
previous_minimum_pscore_cascade = (
bandit_feedback_df.groupby("slate_id")["pscore_cascade"]
.expanding()
.min()
.values
)
assert (
previous_minimum_pscore_cascade < pscore_cascade
).sum() == 0, "pscore_cascade must be non-decresing sequence in each slate"
count_pscore_in_expression = bandit_feedback_df.groupby("slate_id").apply(
lambda x: x["pscore"].unique().shape[0]
)
assert (
count_pscore_in_expression != 1
).sum() == 0, "pscore must be unique in each slate"
last_slot_feedback_df = bandit_feedback_df.drop_duplicates("slate_id", keep="last")
assert np.allclose(
last_slot_feedback_df["pscore"], last_slot_feedback_df["pscore_cascade"]
), "pscore must be the same as pscore_cascade in the last slot"
# n_unique_action, len_list, epsilon, action_2d, sorted_actions, random_pscore, random_pscore_item_position, random_pscore_cascade, true_pscore, true_pscore_item_position, true_pscore_cascade, description
valid_input_of_calc_epsilon_greedy_pscore = [
(
5,
3,
0.1,
np.tile(np.arange(3), 4).reshape((4, 3)),
np.array([[0, 1, 2], [0, 1, 3], [1, 0, 2], [1, 0, 4]]),
        np.ones(12) / 60,  # 1 / (5 P 3) = 1 / (5 * 4 * 3)
        np.ones(12) / 5,  # 1 / 5
        np.tile([1 / 5, 1 / 20, 1 / 60], 4),
np.array(
[[0.9 + 0.1 / 60] * 3, [0.1 / 60] * 3, [0.1 / 60] * 3, [0.1 / 60] * 3]
).flatten(),
np.array(
[
[0.9 + 0.1 / 5] * 3,
[0.9 + 0.1 / 5, 0.9 + 0.1 / 5, 0.1 / 5],
[0.1 / 5, 0.1 / 5, 0.9 + 0.1 / 5],
[0.1 / 5] * 3,
]
).flatten(),
np.array(
[
[0.9 + 0.1 / 5, 0.9 + 0.1 / 20, 0.9 + 0.1 / 60],
[0.9 + 0.1 / 5, 0.9 + 0.1 / 20, 0.1 / 60],
[0.1 / 5, 0.1 / 20, 0.1 / 60],
[0.1 / 5, 0.1 / 20, 0.1 / 60],
]
).flatten(),
"epsilon is 0.1",
),
]
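
# Editor's sketch (not part of the original file): how the expected pscores
# above are built. An epsilon-greedy policy mixes a deterministic greedy slate
# with the uniform-random policy:
#     pscore = (1 - epsilon) * 1{action matches greedy} + epsilon * p_uniform
# With epsilon = 0.1, n_unique_action = 5, len_list = 3 the uniform terms are
# 1/5 per slot (item position), 1/5, 1/20, 1/60 for the cascade prefixes, and
# 1/60 = 1/(5 * 4 * 3) for the full slate, giving the 0.9 + 0.1/... entries.
def _epsilon_greedy_pscore_sketch(matches_greedy: bool, p_uniform: float, epsilon: float = 0.1) -> float:
    return (1.0 - epsilon) * float(matches_greedy) + epsilon * p_uniform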
@pytest.mark.parametrize(
"n_unique_action, len_list, epsilon, action_2d, sorted_actions, random_pscore, random_pscore_item_position, random_pscore_cascade, true_pscore, true_pscore_item_position, true_pscore_cascade, description",
valid_input_of_calc_epsilon_greedy_pscore,
)
def test_calc_epsilon_greedy_pscore_using_valid_input_data(
n_unique_action,
len_list,
epsilon,
action_2d,
sorted_actions,
random_pscore,
random_pscore_item_position,
random_pscore_cascade,
true_pscore,
true_pscore_item_position,
true_pscore_cascade,
description,
) -> None:
# set parameters
dim_context = 2
reward_type = "binary"
random_state = 12345
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
base_reward_function=logistic_reward_function,
)
(
pscore,
pscore_item_position,
pscore_cascade,
) = dataset._calc_epsilon_greedy_pscore(
epsilon=epsilon,
action_2d=action_2d,
sorted_actions=sorted_actions,
random_pscore=random_pscore,
random_pscore_item_position=random_pscore_item_position,
random_pscore_cascade=random_pscore_cascade,
)
assert np.allclose(true_pscore, pscore)
assert np.allclose(true_pscore_item_position, pscore_item_position)
assert np.allclose(true_pscore_cascade, pscore_cascade)
# n_rounds, n_unique_action, len_list, dim_context, reward_type, reward_structure, click_model, evaluation_policy_logit_, context, err, description
invalid_input_of_calc_ground_truth_policy_value = [
(
3,
3,
2,
2,
"binary",
"independent",
None,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]).flatten(),
np.ones((3, 2)),
ValueError,
"evaluation_policy_logit_ must be 2-dimensional",
),
(
3,
2,
2,
2,
"binary",
"independent",
None,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
np.ones((3, 2)),
ValueError,
"the size of axis 1 of evaluation_policy_logit_ must be",
),
(
3,
3,
2,
1,
"binary",
"independent",
None,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
np.ones((3, 2)),
ValueError,
"the size of axis 1 of context must be",
),
(
4,
3,
2,
2,
"binary",
"independent",
None,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
np.ones((3, 2)),
ValueError,
"the length of evaluation_policy_logit_ and context",
),
(
3,
3,
2,
2,
"binary",
"independent",
None,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
np.ones((3, 2)),
ValueError,
"the length of evaluation_policy_logit_ and context",
),
]
@pytest.mark.parametrize(
"n_rounds, n_unique_action, len_list, dim_context, reward_type, reward_structure, click_model, evaluation_policy_logit_, context, err, description",
invalid_input_of_calc_ground_truth_policy_value,
)
def test_calc_ground_truth_policy_value_using_invalid_input_data(
n_rounds,
n_unique_action,
len_list,
dim_context,
reward_type,
reward_structure,
click_model,
evaluation_policy_logit_,
context,
err,
description,
):
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
click_model=click_model,
base_reward_function=logistic_reward_function,
)
_ = dataset.obtain_batch_bandit_feedback(n_rounds=n_rounds)
with pytest.raises(err, match=f"{description}*"):
dataset.calc_ground_truth_policy_value(
evaluation_policy_logit_=evaluation_policy_logit_,
context=context,
)
# n_rounds, n_unique_action, len_list, dim_context, reward_type, reward_structure, click_model, base_reward_function, is_factorizable, evaluation_policy_logit_, description
valid_input_of_calc_ground_truth_policy_value = [
(
4,
3,
2,
2,
"binary",
"independent",
None,
logistic_reward_function,
False,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
3,
2,
2,
1,
"binary",
"independent",
None,
logistic_reward_function,
False,
np.array([[1, 2], [3, 4], [5, 6]]),
None,
),
(
4,
3,
2,
2,
"binary",
"independent",
None,
None,
False,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
4,
3,
2,
2,
"binary",
"cascade_decay",
None,
logistic_reward_function,
False,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
4,
3,
2,
2,
"binary",
"cascade_additive",
None,
logistic_reward_function,
False,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
4,
3,
2,
2,
"binary",
"standard_decay",
None,
logistic_reward_function,
False,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
4,
3,
2,
2,
"binary",
"standard_additive",
None,
logistic_reward_function,
False,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
4,
3,
2,
2,
"continuous",
"cascade_decay",
None,
logistic_reward_function,
False,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
4,
3,
2,
2,
"binary",
"cascade_decay",
"pbm",
logistic_reward_function,
False,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
4,
3,
2,
2,
"binary",
"cascade_decay",
"cascade",
logistic_reward_function,
False,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
4,
3,
2,
2,
"binary",
"cascade_decay",
"cascade",
logistic_reward_function,
True,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
(
4,
3,
5,
2,
"binary",
"cascade_decay",
"cascade",
logistic_reward_function,
True,
np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]),
None,
),
]
@pytest.mark.parametrize(
"n_rounds, n_unique_action, len_list, dim_context, reward_type, reward_structure, click_model, base_reward_function, is_factorizable, evaluation_policy_logit_, description",
valid_input_of_calc_ground_truth_policy_value,
)
def test_calc_ground_truth_policy_value_using_valid_input_data(
n_rounds,
n_unique_action,
len_list,
dim_context,
reward_type,
reward_structure,
click_model,
base_reward_function,
is_factorizable,
evaluation_policy_logit_,
description,
):
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
click_model=click_model,
base_reward_function=base_reward_function,
is_factorizable=is_factorizable,
)
logged_bandit_feedback = dataset.obtain_batch_bandit_feedback(n_rounds=n_rounds)
policy_value = dataset.calc_ground_truth_policy_value(
evaluation_policy_logit_=evaluation_policy_logit_,
context=logged_bandit_feedback["context"],
)
assert isinstance(policy_value, float) and 0 <= policy_value
@pytest.mark.parametrize("is_factorizable", [(True), (False)])
def test_calc_ground_truth_policy_value_value_check_with_click_model(is_factorizable):
n_rounds = 3
n_unique_action = 4
len_list = 3
dim_context = 3
reward_type = "binary"
reward_structure = "cascade_additive"
evaluation_policy_logit_ = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [3, 4, 5, 6]])
dataset_none = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
click_model=None,
random_state=12345,
base_reward_function=logistic_reward_function,
is_factorizable=is_factorizable,
)
logged_bandit_feedback_none = dataset_none.obtain_batch_bandit_feedback(
n_rounds=n_rounds
)
policy_value_none = dataset_none.calc_ground_truth_policy_value(
evaluation_policy_logit_=evaluation_policy_logit_,
context=logged_bandit_feedback_none["context"],
)
dataset_pbm = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
click_model="pbm",
random_state=12345,
base_reward_function=logistic_reward_function,
is_factorizable=is_factorizable,
)
logged_bandit_feedback_pbm = dataset_pbm.obtain_batch_bandit_feedback(
n_rounds=n_rounds
)
policy_value_pbm = dataset_pbm.calc_ground_truth_policy_value(
evaluation_policy_logit_=evaluation_policy_logit_,
context=logged_bandit_feedback_pbm["context"],
)
dataset_cascade = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
click_model="cascade",
random_state=12345,
base_reward_function=logistic_reward_function,
is_factorizable=is_factorizable,
)
logged_bandit_feedback_cascade = dataset_cascade.obtain_batch_bandit_feedback(
n_rounds=n_rounds
)
policy_value_cascade = dataset_cascade.calc_ground_truth_policy_value(
evaluation_policy_logit_=evaluation_policy_logit_,
context=logged_bandit_feedback_cascade["context"],
)
assert policy_value_pbm < policy_value_none
assert policy_value_cascade < policy_value_none
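
# Editor's note on why these inequalities are expected (not part of the
# original tests): both click models let the user examine only part of the
# slate ("pbm" discounts examination by position, "cascade" stops after a
# click), so the ground-truth value can only drop relative to
# click_model=None, where every slot is always examined. For intuition: with
# per-slot click probability q = 0.5 and examination probabilities
# (1, 0.7, 0.5), the expected number of clicks is 0.5 * (1 + 0.7 + 0.5) = 1.1,
# versus 0.5 * 3 = 1.5 with no position bias. (Illustrative numbers only, not
# the dataset's actual parameters.)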
# "len_list, click_model, is_factorizable"
valid_input_of_calc_ground_truth_policy_value = [
(3, "pbm", False),
(3, "pbm", True),
(3, "cascade", False),
(3, "cascade", True),
(5, "cascade", True),
]
@pytest.mark.parametrize(
"len_list, click_model, is_factorizable",
valid_input_of_calc_ground_truth_policy_value,
)
def test_calc_ground_truth_policy_value_value_check_with_eta(
len_list, click_model, is_factorizable
):
n_rounds = 3
n_unique_action = 4
dim_context = 3
reward_type = "binary"
reward_structure = "cascade_additive"
evaluation_policy_logit_ = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [3, 4, 5, 6]])
dataset_05 = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
click_model=click_model,
eta=0.5,
random_state=12345,
base_reward_function=logistic_reward_function,
is_factorizable=is_factorizable,
)
logged_bandit_feedback_05 = dataset_05.obtain_batch_bandit_feedback(
n_rounds=n_rounds
)
policy_value_05 = dataset_05.calc_ground_truth_policy_value(
evaluation_policy_logit_=evaluation_policy_logit_,
context=logged_bandit_feedback_05["context"],
)
dataset_1 = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
click_model=click_model,
eta=1.0,
random_state=12345,
base_reward_function=logistic_reward_function,
is_factorizable=is_factorizable,
)
logged_bandit_feedback_1 = dataset_1.obtain_batch_bandit_feedback(n_rounds=n_rounds)
policy_value_1 = dataset_1.calc_ground_truth_policy_value(
evaluation_policy_logit_=evaluation_policy_logit_,
context=logged_bandit_feedback_1["context"],
)
dataset_2 = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
reward_structure=reward_structure,
click_model=click_model,
eta=2.0,
random_state=12345,
base_reward_function=logistic_reward_function,
is_factorizable=is_factorizable,
)
logged_bandit_feedback_2 = dataset_2.obtain_batch_bandit_feedback(n_rounds=n_rounds)
policy_value_2 = dataset_2.calc_ground_truth_policy_value(
evaluation_policy_logit_=evaluation_policy_logit_,
context=logged_bandit_feedback_2["context"],
)
assert policy_value_2 < policy_value_1 < policy_value_05
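
# Editor's sketch (not part of the original tests): eta sharpens the position
# bias of the click model; in obp's position-based model the examination
# probability of slot k appears to behave like (1 / k) ** eta, so a larger eta
# means later slots are seen less often and the policy value shrinks. That
# monotonicity is exactly what the assert above checks.
def _examination_profile_sketch(eta: float, len_list: int = 3):
    # e.g. eta=0.5 -> [1.0, 0.707.., 0.577..]; eta=2.0 -> [1.0, 0.25, 0.111..]
    return [(1.0 / k) ** eta for k in range(1, len_list + 1)]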
n_rounds = 10
n_unique_action = 5
len_list = 3
# action, evaluation_policy_logit_, err, description
invalid_input_of_obtain_pscore_given_evaluation_policy_logit = [
(
np.ones((n_rounds, len_list)),
np.ones((n_rounds, n_unique_action)),
ValueError,
"action must be 1-dimensional",
),
(
np.ones((n_rounds * len_list)),
np.ones((n_rounds * n_unique_action)),
ValueError,
"evaluation_policy_logit_ must be 2-dimensional",
),
(
np.ones((n_rounds * len_list + 1)),
np.ones((n_rounds, n_unique_action)),
ValueError,
"the shape of action and evaluation_policy_logit_ must be",
),
(
np.ones((n_rounds * len_list)),
np.ones((n_rounds, n_unique_action + 1)),
ValueError,
"the shape of action and evaluation_policy_logit_ must be",
),
(
np.ones((n_rounds * len_list)),
np.ones((n_rounds + 1, n_unique_action)),
ValueError,
"the shape of action and evaluation_policy_logit_ must be",
),
]
@pytest.mark.parametrize(
"action, evaluation_policy_logit_, err, description",
invalid_input_of_obtain_pscore_given_evaluation_policy_logit,
)
def test_obtain_pscore_given_evaluation_policy_logit(
action, evaluation_policy_logit_, err, description
):
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
)
with pytest.raises(err, match=f"{description}*"):
dataset.obtain_pscore_given_evaluation_policy_logit(
action=action,
evaluation_policy_logit_=evaluation_policy_logit_,
)
# n_unique_action, return_pscore_item_position, is_factorizable
valid_input_of_obtain_pscore_given_evaluation_policy_logit = [
(10, True, True),
(10, True, False),
(10, False, True),
(10, False, False),
(3, False, True),
]
@pytest.mark.parametrize(
"n_unique_action, return_pscore_item_position, is_factorizable",
valid_input_of_obtain_pscore_given_evaluation_policy_logit,
)
def test_obtain_pscore_given_evaluation_policy_logit_value_check(
n_unique_action,
return_pscore_item_position,
is_factorizable,
):
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=5,
behavior_policy_function=linear_behavior_policy_logit,
is_factorizable=is_factorizable,
random_state=12345,
)
bandit_feedback = dataset.obtain_batch_bandit_feedback(
n_rounds=2,
return_pscore_item_position=return_pscore_item_position,
)
behavior_and_evaluation_policy_logit_ = dataset.behavior_policy_function(
context=bandit_feedback["context"],
action_context=bandit_feedback["action_context"],
random_state=dataset.random_state,
)
(
evaluation_policy_pscore,
evaluation_policy_pscore_item_position,
evaluation_policy_pscore_cascade,
) = dataset.obtain_pscore_given_evaluation_policy_logit(
action=bandit_feedback["action"],
evaluation_policy_logit_=behavior_and_evaluation_policy_logit_,
return_pscore_item_position=return_pscore_item_position,
)
print(bandit_feedback["pscore"])
print(evaluation_policy_pscore)
assert np.allclose(bandit_feedback["pscore"], evaluation_policy_pscore)
assert np.allclose(
bandit_feedback["pscore_cascade"], evaluation_policy_pscore_cascade
)
assert (
np.allclose(
bandit_feedback["pscore_item_position"],
evaluation_policy_pscore_item_position,
)
if return_pscore_item_position
else bandit_feedback["pscore_item_position"]
== evaluation_policy_pscore_item_position
)
# n_unique_action, len_list, all_slate_actions, policy_logit_i_, true_pscores, description
valid_input_of_calc_pscore_given_policy_logit = [
(
5,
3,
np.array([[0, 1, 2], [3, 1, 0]]),
np.arange(5),
np.array(
[
[
np.exp(0) / np.exp([0, 1, 2, 3, 4]).sum(),
np.exp(1) / np.exp([1, 2, 3, 4]).sum(),
np.exp(2) / np.exp([2, 3, 4]).sum(),
],
[
np.exp(3) / np.exp([0, 1, 2, 3, 4]).sum(),
np.exp(1) / np.exp([0, 1, 2, 4]).sum(),
np.exp(0) / np.exp([0, 2, 4]).sum(),
],
]
).prod(axis=1),
"calc pscores of several slate actions",
),
]
@pytest.mark.parametrize(
"n_unique_action, len_list, all_slate_actions, policy_logit_i_, true_pscores, description",
valid_input_of_calc_pscore_given_policy_logit,
)
def test_calc_pscore_given_policy_logit_using_valid_input_data(
n_unique_action,
len_list,
all_slate_actions,
policy_logit_i_,
true_pscores,
description,
) -> None:
# set parameters
dim_context = 2
reward_type = "binary"
random_state = 12345
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
base_reward_function=logistic_reward_function,
)
pscores = dataset._calc_pscore_given_policy_logit(
all_slate_actions, policy_logit_i_
)
assert np.allclose(true_pscores, pscores)
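
# Editor's sketch (not part of the original file): _calc_pscore_given_policy_logit
# treats a slate as sequential softmax sampling without replacement (a
# Plackett-Luce model). For logits l_a, the probability of slate (a_1, ..., a_K)
# is  prod_k exp(l_{a_k}) / sum over not-yet-chosen a of exp(l_a),
# which is exactly how the expected values above are written out; the
# item-position marginal used elsewhere sums these joints over all slates that
# place the action at the given slot.
def _plackett_luce_slate_prob(slate, logits):
    remaining = list(range(len(logits)))
    prob = 1.0
    for a in slate:
        prob *= np.exp(logits[a]) / np.exp([logits[r] for r in remaining]).sum()
        remaining.remove(a)
    return prob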
# n_unique_action, len_list, evaluation_policy_logit_, action, true_pscores, true_pscores_cascade, true_pscores_item_position, description
mock_input_of_obtain_pscore_given_evaluation_policy_logit = [
(
3,
2,
np.array([[0, 1, 2], [2, 1, 0]]),
np.array([2, 1, 2, 0]),
np.repeat(
np.array(
[
[
np.exp(2) / np.exp([0, 1, 2]).sum(),
np.exp(1) / np.exp([0, 1]).sum(),
],
[
np.exp(0) / np.exp([0, 1, 2]).sum(),
np.exp(2) / np.exp([1, 2]).sum(),
],
]
).prod(axis=1),
2,
),
np.array(
[
[
np.exp(2) / np.exp([0, 1, 2]).sum(),
np.exp(1) / np.exp([0, 1]).sum(),
],
[
np.exp(0) / np.exp([0, 1, 2]).sum(),
np.exp(2) / np.exp([1, 2]).sum(),
],
]
)
.cumprod(axis=1)
.flatten(),
np.array(
[
[
np.exp(2)
/ np.exp([0, 1, 2]).sum()
* np.exp(1)
/ np.exp([0, 1]).sum(),
np.exp(2)
/ np.exp([0, 1, 2]).sum()
* np.exp(0)
/ np.exp([0, 1]).sum(),
],
[
np.exp(2)
/ np.exp([0, 1, 2]).sum()
* np.exp(1)
/ np.exp([0, 1]).sum(),
np.exp(0)
/ np.exp([0, 1, 2]).sum()
* np.exp(1)
/ np.exp([1, 2]).sum(),
],
[
np.exp(0)
/ np.exp([0, 1, 2]).sum()
* np.exp(1)
/ np.exp([1, 2]).sum(),
np.exp(0)
/ np.exp([0, 1, 2]).sum()
* np.exp(2)
/ np.exp([1, 2]).sum(),
],
[
np.exp(1)
/ np.exp([0, 1, 2]).sum()
* np.exp(2)
/ np.exp([0, 2]).sum(),
np.exp(0)
/ np.exp([0, 1, 2]).sum()
* np.exp(2)
/ np.exp([1, 2]).sum(),
],
]
).sum(axis=1),
"calc three pscores using mock data",
),
]
@pytest.mark.parametrize(
"n_unique_action, len_list, evaluation_policy_logit_, action, true_pscores, true_pscores_cascade, true_pscores_item_position,description",
mock_input_of_obtain_pscore_given_evaluation_policy_logit,
)
def test_obtain_pscore_given_evaluation_policy_logit_using_mock_input_data(
n_unique_action,
len_list,
evaluation_policy_logit_,
action,
true_pscores,
true_pscores_cascade,
true_pscores_item_position,
description,
) -> None:
# set parameters
dim_context = 2
reward_type = "binary"
random_state = 12345
dataset = SyntheticSlateBanditDataset(
n_unique_action=n_unique_action,
len_list=len_list,
dim_context=dim_context,
reward_type=reward_type,
random_state=random_state,
base_reward_function=logistic_reward_function,
)
(
evaluation_policy_pscore,
evaluation_policy_pscore_item_position,
evaluation_policy_pscore_cascade,
) = dataset.obtain_pscore_given_evaluation_policy_logit(
action, evaluation_policy_logit_, return_pscore_item_position=True
)
assert np.allclose(true_pscores, evaluation_policy_pscore)
assert np.allclose(true_pscores_cascade, evaluation_policy_pscore_cascade)
assert np.allclose(
true_pscores_item_position, evaluation_policy_pscore_item_position
)
# ============================================================================
# examples/Navigation.py | repo: idlebear/Py2D | licenses: BSD-2-Clause, Unlicense
# ============================================================================
] | 6 | 2015-04-03T08:44:27.000Z | 2020-10-27T19:00:45.000Z | import pygame
from pygame.locals import *
from py2d.Math import *
from examples import Example
from py2d.Navigation import *
class Mesh(Example):
"""Navigation mesh generation sample
Draw a polygon and holes and observe the generated navigation mesh.
The generated mesh will be colored light gray with the connectivity shown in cyan.
You can switch active polygons with the number keys 0-9.
The polygons are numbered as follows:
0 The Main Polygon (color: green)
1-9 Holes in the Main polygon (color: red)
    The computed path between the start and end points will be shown in red.
Key mappings:
0-9: Switch active polygon
B: Set beginning of pathfinding at current mouse position
E: Set end of pathfinding at current mouse position
MOUSE1: Add new point to the end of the active polygon
BACKSPACE: Delete the last point of the active polygon
Have fun!
"""
def __init__(self, runner):
self.runner = runner
self.title = "Navigation Mesh"
self.polys = [Polygon() for i in range(10)]
self.active_poly = 0
self.beginning = None
self.end = None
self.debug = False
self.fill = False
self.update_mesh()
self.update_nav()
self.mouse_pos = None
def update(self, time_elapsed):
if self.runner.keys[K_BACKSPACE]:
self.runner.keys[K_BACKSPACE] = False
if self.polys[self.active_poly].points: del(self.polys[self.active_poly].points[-1])
self.update_mesh()
for i in range(10):
key = ord(str(i))
if self.runner.keys[key]:
self.runner.keys[key] = False
self.active_poly = i
if self.runner.keys[K_d]:
self.runner.keys[K_d] = False
self.debug = not self.debug
if self.runner.keys[K_f]:
self.runner.keys[K_f] = False
self.fill = not self.fill
if self.runner.keys[K_b]:
self.runner.keys[K_b] = False
self.beginning = Vector(self.mouse_pos[0], self.mouse_pos[1])
self.update_nav()
if self.runner.keys[K_e]:
self.runner.keys[K_e] = False
self.end = Vector(self.mouse_pos[0], self.mouse_pos[1])
self.update_nav()
def render(self):
self.draw_poly(self.polys[0], 0x00ff00, False)
for h in self.polys[1:]:
self.draw_poly(h, 0xff0000, False)
if self.mesh:
for i, p in enumerate(self.mesh.polygons):
self.draw_poly(p, 0xcccccc, self.fill)
if self.fill: self.draw_poly(p, 0x000000, False)
center = p.get_centerpoint()
if self.debug: self.runner.screen.blit(self.runner.font.render(str(i), True, (0,0,0)), center.as_tuple())
                for n, _dist in p.neighbors.items():  # items(): iteritems() is Python 2 only
pygame.draw.line(self.runner.screen, 0x00ff00, center.as_tuple(), n.get_centerpoint().as_tuple(), 3)
if self.path:
for a, b in zip(self.path.polygons, self.path.polygons[1:]):
pygame.draw.line(self.runner.screen, 0xff0000, a.get_centerpoint().as_tuple(), b.get_centerpoint().as_tuple(), 5)
if self.beginning:
pygame.draw.circle(self.runner.screen, 0x00ff00, self.beginning.as_tuple(),2)
if self.end:
pygame.draw.circle(self.runner.screen, 0xff0000, self.end.as_tuple(),2)
def draw_poly(self, poly, color, fill):
if len(poly) > 1:
if fill and len(poly) > 2:
pygame.draw.polygon(self.runner.screen, color, poly.as_tuple_list())
pygame.draw.lines(self.runner.screen, color, True, poly.as_tuple_list())
elif poly.points:
pygame.draw.circle(self.runner.screen, color, poly.points[0].as_tuple(),2)
def mouse_down(self, pos, button):
if button == 1:
self.polys[self.active_poly].add_point(Vector(pos[0], pos[1]))
self.update_mesh()
def mouse_move(self, pos, rel, buttons):
self.mouse_pos = pos
def update_mesh(self):
self.debug_points = []
if len(self.polys[0]) > 2:
holes = [h for h in self.polys[1:] if len(h) > 2]
self.mesh = NavMesh.generate(self.polys[0], holes)
else:
self.mesh = None
self.update_nav()
def update_nav(self):
if self.mesh:
self.path = self.mesh.get_path(self.beginning, self.end)
else:
self.path = None
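# A minimal non-interactive sketch (assembled from the calls these examples
# make; not part of the original module) of the Py2D API in use: build an
# outer polygon with one hole, generate a navigation mesh, then query a path.
#
#     outer, hole = Polygon(), Polygon()
#     for x, y in [(0, 0), (200, 0), (200, 200), (0, 200)]:
#         outer.add_point(Vector(x, y))
#     for x, y in [(80, 80), (120, 80), (120, 120), (80, 120)]:
#         hole.add_point(Vector(x, y))
#     mesh = NavMesh.generate(outer, [hole])
#     path = mesh.get_path(Vector(10, 10), Vector(190, 190))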
class Walker(Example):
"""Navigation walker sample
    Draw a polygon and its holes and observe the generated navigation mesh.
    The generated mesh is colored light gray, with the connectivity between
    mesh polygons shown in green.
    You can switch active polygons with the number keys 0-9.
    Then, use B and E to set the start and end positions for a walker object.
The polygons are numbered as follows:
0 The Main Polygon (color: green)
1-9 Holes in the Main polygon (color: red)
    The computed path will be shown in red; the walker's current move segment
    is drawn in magenta.
Key mappings:
0-9: Switch active polygon
B: Set beginning of pathfinding at current mouse position
E: Set end of pathfinding at current mouse position
D: Toggle debug labels
F: Toggle polygon filling
M: Toggle drawing of polygon mesh in filled mode
N: Toggle drawing of neighbor info and path solution
MOUSE1: Add new point to the end of the active polygon
BACKSPACE: Delete the last point of the active polygon
Have fun!
"""
def __init__(self, runner):
self.runner = runner
self.title = "Navigation Mesh"
self.polys = [Polygon() for i in range(10)]
self.active_poly = 0
self.beginning = None
self.end = None
self.move_to = None
self.debug = False
self.fill = False
self.draw_mesh = True
self.draw_neighbors = True
self.update_mesh()
self.update_nav()
self.mouse_pos = None
def update(self, time_elapsed):
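        # Walker motion: when no waypoint is pending and the goal has not been
        # reached, ask the path for the next waypoint, then step toward it at
        # a speed proportional to the elapsed time, snapping once close enough.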
if self.beginning and self.path:
if not self.move_to and (self.beginning - self.end).length_squared > 0.1:
self.move_to = self.path.get_next_move_to(self.beginning, self.end)
if self.move_to:
self.beginning += (self.move_to - self.beginning).clamp() * (time_elapsed * 0.1)
if (self.beginning - self.move_to).length_squared < 0.0001: self.move_to = None
if self.runner.keys[K_BACKSPACE]:
self.runner.keys[K_BACKSPACE] = False
if self.polys[self.active_poly].points: del(self.polys[self.active_poly].points[-1])
self.update_mesh()
for i in range(10):
key = ord(str(i))
if self.runner.keys[key]:
self.runner.keys[key] = False
self.active_poly = i
if self.runner.keys[K_d]:
self.runner.keys[K_d] = False
self.debug = not self.debug
if self.runner.keys[K_f]:
self.runner.keys[K_f] = False
self.fill = not self.fill
if self.runner.keys[K_m]:
self.runner.keys[K_m] = False
self.draw_mesh = not self.draw_mesh
if self.runner.keys[K_n]:
self.runner.keys[K_n] = False
self.draw_neighbors = not self.draw_neighbors
if self.runner.keys[K_b]:
self.runner.keys[K_b] = False
self.beginning = Vector(self.mouse_pos[0], self.mouse_pos[1])
self.update_nav()
if self.runner.keys[K_e]:
self.runner.keys[K_e] = False
self.end = Vector(self.mouse_pos[0], self.mouse_pos[1])
self.update_nav()
def render(self):
self.draw_poly(self.polys[0], 0x00ff00, False)
for h in self.polys[1:]:
self.draw_poly(h, 0xff0000, False)
if self.mesh:
for i, p in enumerate(self.mesh.polygons):
self.draw_poly(p, 0xcccccc, self.fill)
if self.fill and self.draw_mesh: self.draw_poly(p, 0x000000, False)
center = p.get_centerpoint()
if self.debug: self.runner.screen.blit(self.runner.font.render(str(i), True, (0,0,0)), center.as_tuple())
if self.draw_neighbors:
for n,dist in p.neighbors.iteritems():
pygame.draw.line(self.runner.screen, 0x00ff00, center.as_tuple(), n.get_centerpoint().as_tuple(), 3)
if self.path:
if self.draw_neighbors:
for a, b in zip(self.path.polygons, self.path.polygons[1:]):
pygame.draw.line(self.runner.screen, 0xff0000, a.get_centerpoint().as_tuple(), b.get_centerpoint().as_tuple(), 5)
if self.beginning and self.move_to:
pygame.draw.line(self.runner.screen, 0xff00ff, self.beginning.as_tuple(), self.move_to.as_tuple(), 5)
if self.end:
pygame.draw.circle(self.runner.screen, 0xff0000, self.end.as_tuple(),2)
if self.beginning:
pygame.draw.ellipse(self.runner.screen, 0x00ff00, pygame.Rect(self.beginning.x - 4, self.beginning.y - 4, 8,8))
def draw_poly(self, poly, color, fill):
if len(poly) > 1:
if fill and len(poly) > 2:
pygame.draw.polygon(self.runner.screen, color, poly.as_tuple_list())
pygame.draw.lines(self.runner.screen, color, True, poly.as_tuple_list())
elif poly.points:
pygame.draw.circle(self.runner.screen, color, poly.points[0].as_tuple(),2)
def mouse_down(self, pos, button):
if button == 1:
self.polys[self.active_poly].add_point(Vector(pos[0], pos[1]))
self.update_mesh()
def mouse_move(self, pos, rel, buttons):
self.mouse_pos = pos
def update_mesh(self):
self.debug_points = []
if len(self.polys[0]) > 2:
holes = [h for h in self.polys[1:] if len(h) > 2]
self.mesh = NavMesh.generate(self.polys[0], holes)
else:
self.mesh = None
self.update_nav()
def update_nav(self):
self.move_to = None
if self.mesh:
self.path = self.mesh.get_path(self.beginning, self.end)
else:
self.path = None
| 26.093842 | 118 | 0.698134 | 1,462 | 8,898 | 4.145007 | 0.117647 | 0.084158 | 0.064686 | 0.059406 | 0.877888 | 0.837459 | 0.822607 | 0.822607 | 0.813036 | 0.813036 | 0 | 0.023722 | 0.175657 | 8,898 | 340 | 119 | 26.170588 | 0.802454 | 0.184761 | 0 | 0.857143 | 0 | 0 | 0.004164 | 0 | 0 | 0 | 0.018876 | 0 | 0 | 1 | 0.084656 | false | 0 | 0.026455 | 0 | 0.121693 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4dc304a68008ecaf3bfdf07f7ce7e60bc7f18ab9 | 24,488 | py | Python | api/apps/boxes/tests/test_api.py | polart/vagrant-registry | 47fa53a93d506f2501f333a256ccf36e49970789 | [
"MIT"
] | 8 | 2020-03-16T21:41:08.000Z | 2021-12-16T05:44:04.000Z | api/apps/boxes/tests/test_api.py | polart/vagrant-registry | 47fa53a93d506f2501f333a256ccf36e49970789 | [
"MIT"
] | 6 | 2020-03-21T11:23:18.000Z | 2022-02-27T01:16:18.000Z | api/apps/boxes/tests/test_api.py | polart/vagrant-registry | 47fa53a93d506f2501f333a256ccf36e49970789 | [
"MIT"
] | null | null | null | from django.db import transaction
from rest_framework import status
from rest_framework.test import (
APITestCase, APIRequestFactory, force_authenticate)
from apps.boxes.api_views import BoxViewSet
from apps.boxes.models import BoxUpload, Box, BoxMember, BoxProvider
from apps.factories import (
BoxUploadFactory, BoxProviderFactory, StaffFactory, UserFactory,
BoxFactory, BoxVersionFactory, EmptyBoxProviderFactory)
from vagrant_registry import urls
class BoxViewSetTestCase(APITestCase):
def setUp(self):
self.factory = APIRequestFactory()
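        # Bind GET requests to the ViewSet's list() action (standard DRF
        # ViewSet.as_view() action mapping).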
self.view = BoxViewSet.as_view({
'get': 'list',
})
def test_list_boxes(self):
user = UserFactory()
b1 = BoxFactory(visibility=Box.PRIVATE)
b2 = BoxFactory(visibility=Box.PRIVATE)
b2.share_with(user, BoxMember.PERM_R)
request = self.factory.get('/url/')
force_authenticate(request, user=user)
response = self.view(request)
self.assertEqual(response.status_code, status.HTTP_200_OK)
# only b2; b1 not shared with user
self.assertEqual(response.data['count'], 1)
self.assertEqual(response.data['results'][0]['name'], b2.name)
class UserBoxViewSetTestCase(APITestCase):
def setUp(self):
self.factory = APIRequestFactory()
self.view_list = urls.box_list
self.view_detail = urls.box_detail
def test_list_user_boxes(self):
user = UserFactory()
user1 = UserFactory()
b1 = BoxFactory(visibility=Box.PRIVATE, owner=user1)
b2 = BoxFactory(visibility=Box.PRIVATE, owner=user1)
b2.share_with(user, BoxMember.PERM_R)
request = self.factory.get('/url/')
force_authenticate(request, user=user)
response = self.view_list(request, username=user1.username)
self.assertEqual(response.status_code, status.HTTP_200_OK)
        # only b2; b1 is not shared with user
self.assertEqual(response.data['count'], 1)
self.assertEqual(response.data['results'][0]['name'], b2.name)
def test_user_creates_own_box(self):
user = UserFactory()
data = {
'name': 'testbox1',
'description': 'some description',
'short_description': 'Test box',
'visibility': Box.PRIVATE,
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_list(request, username=user.username)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(Box.objects.count(), 1)
self.assertTrue(Box.objects.filter(**data).exists())
def test_user_updates_own_box(self):
user = UserFactory()
box = BoxFactory(owner=user, visibility=Box.PRIVATE)
data = {
'name': 'testbox1',
'description': 'some description',
'short_description': 'Test box',
'visibility': Box.PUBLIC,
}
request = self.factory.patch('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_detail(request, username=user.username,
box_name=box.name)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(Box.objects.count(), 1)
self.assertTrue(Box.objects.filter(**data).exists())
def test_user_creates_box_with_the_same_name(self):
user = UserFactory()
box = BoxFactory(owner=user, visibility=Box.PRIVATE)
data = {
'name': box.name,
'description': 'some description',
'short_description': 'Test box',
'visibility': Box.PUBLIC,
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
        # Wrap in an atomic transaction because of the UNIQUE-constraint DB error
with transaction.atomic():
response = self.view_list(request, username=user.username)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(Box.objects.count(), 1)
def test_user_cant_create_box_for_other_user(self):
user = UserFactory()
user1 = UserFactory()
data = {
'name': 'testbox1',
'description': 'some description',
'short_description': 'Test box',
'visibility': Box.PRIVATE,
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_list(request, username=user1.username)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(Box.objects.count(), 0)
class UserBoxMemberViewSetTestCase(APITestCase):
def setUp(self):
self.factory = APIRequestFactory()
self.view_list = urls.box_member_list
self.view_detail = urls.box_member_detail
def test_box_owner_can_add_box_member(self):
user = UserFactory()
box = BoxFactory(owner=user)
user1 = UserFactory()
data = {
'permissions': BoxMember.PERM_RW,
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_detail(
request,
username=user.username,
box_name=box.name,
member_username=user1.username)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertListEqual(list(box.shared_with.all()), [user1])
def test_box_owner_cannot_add_already_added_box_member(self):
user = UserFactory()
box = BoxFactory(owner=user)
user1 = UserFactory()
box.share_with(user1, BoxMember.PERM_RW)
data = {
'permissions': BoxMember.PERM_RW,
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
        # Wrap in an atomic transaction because of the UNIQUE-constraint DB error
with transaction.atomic():
response = self.view_detail(
request,
username=user.username,
box_name=box.name,
member_username=user1.username)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertListEqual(list(box.shared_with.all()), [user1])
def test_box_owner_can_view_box_members(self):
user = UserFactory()
box = BoxFactory(owner=user)
user1 = UserFactory()
box.share_with(user1, BoxMember.PERM_RW)
user2 = UserFactory()
box.share_with(user2, BoxMember.PERM_R)
request = self.factory.get('/url/')
force_authenticate(request, user=user)
response = self.view_list(
request,
username=user.username,
box_name=box.name)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data['count'], 2)
def test_user_with_permissions_cannot_add_box_member(self):
user = UserFactory()
box = BoxFactory()
box.share_with(user, BoxMember.PERM_RW)
user1 = UserFactory()
data = {
'permissions': BoxMember.PERM_RW,
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_detail(
request,
username=box.owner.username,
box_name=box.name,
member_username=user1.username)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertListEqual(list(box.shared_with.all()), [user])
def test_user_with_permissions_cannot_read_box_members(self):
user = UserFactory()
box = BoxFactory()
box.share_with(user, BoxMember.PERM_RW)
request = self.factory.get('/url/')
force_authenticate(request, user=user)
response = self.view_list(
request,
username=box.owner.username,
box_name=box.name)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data['count'], 0)
self.assertListEqual(list(box.shared_with.all()), [user])
class UserBoxVersionViewSetTestCase(APITestCase):
def setUp(self):
self.factory = APIRequestFactory()
self.view_list = urls.box_version_list
self.view_detail = urls.box_version_detail
def test_box_owner_can_create_version(self):
user = UserFactory()
box = BoxFactory(owner=user)
data = {
'version': '1.0.1',
'changes': 'Initial release',
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_list(
request,
username=box.owner.username,
box_name=box.name)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertTrue(box.versions.filter(**data).exists())
def test_user_with_permissions_can_create_version(self):
user = UserFactory()
box = BoxFactory()
box.share_with(user, BoxMember.PERM_RW)
data = {
'version': '1.0.1',
'changes': 'Initial release',
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_list(
request,
username=box.owner.username,
box_name=box.name)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertTrue(box.versions.filter(**data).exists())
def test_user_with_permissions_can_view_versions(self):
user = UserFactory()
box = BoxFactory()
box.share_with(user, BoxMember.PERM_R)
BoxVersionFactory(box=box, version='1.0.0')
BoxVersionFactory(box=box, version='1.0.1')
request = self.factory.get('/url/')
force_authenticate(request, user=user)
response = self.view_list(
request,
username=box.owner.username,
box_name=box.name)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data['count'], 2)
class UserBoxProviderViewSetTestCase(APITestCase):
def setUp(self):
self.factory = APIRequestFactory()
self.view_list = urls.box_provider_list
self.view_detail = urls.box_provider_detail
def test_user_with_permissions_can_view_providers(self):
user = UserFactory()
box = BoxFactory()
box.share_with(user, BoxMember.PERM_R)
version = BoxVersionFactory(box=box)
BoxProviderFactory(version=version, provider='virtualbox')
BoxProviderFactory(version=version, provider='vmware')
request = self.factory.get('/url/')
force_authenticate(request, user=user)
response = self.view_list(
request,
username=box.owner.username,
box_name=box.name,
version=version.version)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data['count'], 2)
def test_user_with_permissions_cannot_delete_provider(self):
user = UserFactory()
box = BoxFactory()
box.share_with(user, BoxMember.PERM_R)
version = BoxVersionFactory(box=box)
provider = BoxProviderFactory(version=version)
request = self.factory.delete('/url/')
force_authenticate(request, user=user)
response = self.view_detail(
request,
username=box.owner.username,
box_name=box.name,
version=version.version,
provider=provider.provider)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
class UserBoxMetadataViewSetTestCase(APITestCase):
def setUp(self):
self.factory = APIRequestFactory()
self.view_detail = urls.box_metadata_detail
def test_user_with_permissions_can_view_metadata(self):
user = UserFactory()
box = BoxFactory()
box.share_with(user, BoxMember.PERM_R)
version = BoxVersionFactory(box=box)
BoxProviderFactory(version=version, provider='virtualbox')
BoxProviderFactory(version=version, provider='vmware')
BoxProviderFactory()
request = self.factory.get('/url/')
force_authenticate(request, user=user)
response = self.view_detail(
request,
username=box.owner.username,
box_name=box.name)
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_anonymous_can_view_public_box_metadata(self):
box = BoxFactory(visibility=Box.PUBLIC)
version = BoxVersionFactory(box=box)
BoxProviderFactory(version=version, provider='virtualbox')
BoxProviderFactory(version=version, provider='vmware')
BoxProviderFactory()
request = self.factory.get('/url/')
response = self.view_detail(
request,
username=box.owner.username,
box_name=box.name)
self.assertEqual(response.status_code, status.HTTP_200_OK)
class UserBoxUploadViewSetTestCase(APITestCase):
def setUp(self):
self.factory = APIRequestFactory()
self.view_list = urls.box_upload_list
def test_box_owner_can_initiate_upload(self):
user = UserFactory()
box = BoxFactory(owner=user)
box_version = BoxVersionFactory(box=box)
box_provider = EmptyBoxProviderFactory(version=box_version)
data = {
'file_size': 100,
'checksum_type': BoxProvider.SHA256,
'checksum': 'asdf',
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_list(
request,
username=box.owner.username,
box_name=box.name,
version=box_version.version,
provider=box_provider.provider,
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertTrue(box_provider.uploads.filter(**data).exists())
def test_user_with_permissions_can_initiate_upload(self):
user = UserFactory()
box = BoxFactory()
box.share_with(user, BoxMember.PERM_RW)
box_version = BoxVersionFactory(box=box)
box_provider = EmptyBoxProviderFactory(version=box_version)
data = {
'file_size': 100,
'checksum_type': BoxProvider.SHA256,
'checksum': 'asdf',
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_list(
request,
username=box.owner.username,
box_name=box.name,
version=box_version.version,
provider=box_provider.provider,
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertTrue(box_provider.uploads.filter(**data).exists())
def test_user_with_permissions_can_view_uploads(self):
user = UserFactory()
box = BoxFactory()
box.share_with(user, BoxMember.PERM_R)
version = BoxVersionFactory(box=box)
provider = BoxProviderFactory(version=version)
BoxUploadFactory(provider=provider)
BoxUploadFactory(provider=provider, file_content=b'test2')
request = self.factory.get('/url/')
force_authenticate(request, user=user)
response = self.view_list(
request,
username=box.owner.username,
box_name=box.name,
version=version.version,
provider=provider.provider,
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data['count'], 2)
def test_upload_cannot_be_initiated_for_completed_provider(self):
user = UserFactory()
box = BoxFactory(owner=user)
box_version = BoxVersionFactory(box=box)
box_provider = BoxProviderFactory(version=box_version)
data = {
'file_size': 100,
'checksum_type': BoxProvider.SHA256,
'checksum': 'asdf',
}
request = self.factory.post('/url/', data=data)
force_authenticate(request, user=user)
response = self.view_list(
request,
username=box.owner.username,
box_name=box.name,
version=box_version.version,
provider=box_provider.provider,
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
class UserBoxUploadHandlerViewSetTestCase(APITestCase):
@classmethod
def setUpTestData(cls):
# Use staff user, so there is no need to assign permissions
cls.user = StaffFactory()
def setUp(self):
self.factory = APIRequestFactory()
self.view = urls.box_upload_detail
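    # Helper building a chunked-upload PUT request. content_range is a
    # (first_byte, last_byte, total_length) triple rendered as, e.g.,
    # "Content-Range: bytes 0-3/4".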
def get_request(self, data, content_range):
return self.factory.put(
'/url/', data,
content_type='application/octet-stream',
HTTP_CONTENT_RANGE='bytes {c[0]}-{c[1]}/{c[2]}'.format(
c=content_range)
)
    def get_file_length(self, file_data):
        # Encode once and reuse: returns the bytes payload and its length.
        data = file_data.encode()
        return data, len(data)
def force_auth(self, request, user):
force_authenticate(request, user=user)
def get_response(self, request, bu_factory):
self.force_auth(request, bu_factory.box.owner)
return self.view(
request,
username=bu_factory.box.owner.username,
box_name=bu_factory.box.name,
version=bu_factory.version.version,
provider=bu_factory.provider.provider,
pk=bu_factory.pk,
checksum=bu_factory.checksum,
)
def test_unsupported_media_type(self):
bu_factory = BoxUploadFactory(provider__version__box__owner=self.user)
request = self.factory.put('/url/', data='data')
response = self.get_response(request, bu_factory)
self.assertEqual(response.status_code,
status.HTTP_415_UNSUPPORTED_MEDIA_TYPE)
def test_content_range_header_is_required(self):
bu_factory = BoxUploadFactory(provider__version__box__owner=self.user)
request = self.factory.put('/url/', 'test',
content_type='application/octet-stream',)
response = self.get_response(request, bu_factory)
response.render()
self.assertEqual(response.status_code,
status.HTTP_416_REQUESTED_RANGE_NOT_SATISFIABLE)
self.assertIn('Content-Range', str(response.content))
def test_invalid_content_range_header_not_accepted(self):
bu_factory = BoxUploadFactory(provider__version__box__owner=self.user)
request = self.get_request('test', (1, 'a', None))
response = self.get_response(request, bu_factory)
response.render()
self.assertEqual(response.status_code,
status.HTTP_416_REQUESTED_RANGE_NOT_SATISFIABLE)
self.assertIn('Content-Range', str(response.content))
def test_invalid_offset_in_content_range_header_not_accepted(self):
file_data, file_len = self.get_file_length('test content')
bu_factory = BoxUploadFactory(provider__version__box__owner=self.user,
file_content=file_data, offset=5)
request = self.get_request(file_data, (2, 2 + file_len, file_len))
response = self.get_response(request, bu_factory)
self.assertEqual(response.status_code,
status.HTTP_416_REQUESTED_RANGE_NOT_SATISFIABLE)
def test_invalid_complete_length_in_content_range_header_not_accepted(self):
file_data, file_len = self.get_file_length('test content')
bu_factory = BoxUploadFactory(provider__version__box__owner=self.user,
file_content=file_data)
request = self.get_request(file_data, (0, file_len - 1, file_len + 10))
response = self.get_response(request, bu_factory)
self.assertEqual(response.status_code,
status.HTTP_416_REQUESTED_RANGE_NOT_SATISFIABLE)
def test_invalid_last_byte_in_content_range_header_not_accepted(self):
file_data, file_len = self.get_file_length('test content')
bu_factory = BoxUploadFactory(provider__version__box__owner=self.user,
file_content=file_data)
request = self.get_request(file_data, (0, file_len + 10, file_len))
response = self.get_response(request, bu_factory)
self.assertEqual(response.status_code,
status.HTTP_416_REQUESTED_RANGE_NOT_SATISFIABLE)
def test_invalid_content_length_in_content_range_header_not_accepted(self):
file_data, file_len = self.get_file_length('test content')
bu_factory = BoxUploadFactory(offset=2,
provider__version__box__owner=self.user,
file_content=file_data)
request = self.get_request(file_data, (2, file_len - 1, file_len))
response = self.get_response(request, bu_factory)
self.assertEqual(response.status_code,
status.HTTP_416_REQUESTED_RANGE_NOT_SATISFIABLE)
def test_invalid_content_not_accepted(self):
file_data, file_len = self.get_file_length('test')
bu_factory = BoxUploadFactory(provider__version__box__owner=self.user,
file_content=file_data)
request = self.get_request('poop', (0, file_len - 1, file_len))
response = self.get_response(request, bu_factory)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_empty_file_data_not_allowed(self):
file_data, file_len = self.get_file_length('')
bu_factory = BoxUploadFactory(
provider__version__box__owner=self.user,
file_content=file_data,
)
request = self.get_request(file_data, (0, file_len - 1, file_len))
response = self.get_response(request, bu_factory)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_box_uploaded_successfully_at_once(self):
file_data, file_len = self.get_file_length('Ї12\n345\t6789')
bu_factory = BoxUploadFactory(
provider__version__box__owner=self.user,
file_content=file_data,
)
request = self.get_request(file_data, (0, file_len - 1, file_len))
response = self.get_response(request, bu_factory)
self.assertEqual(response.status_code,
status.HTTP_201_CREATED)
box_upload = BoxUpload.objects.get(pk=bu_factory.pk)
self.assertEqual(box_upload.file.read(), file_data)
self.assertEqual(box_upload.status, BoxUpload.COMPLETED)
self.assertNotEqual(box_upload.date_completed, None)
self.assertEqual(box_upload.provider.file.read(), file_data)
def test_box_uploaded_successfully_in_chunks(self):
chunk = 'Ї12\n345\t6789\n'
chunks_num = 5
_, chunk_len = self.get_file_length(chunk)
file_data, file_len = self.get_file_length(''.join([chunk]*chunks_num))
bu_factory = BoxUploadFactory(provider__version__box__owner=self.user,
file_content=file_data)
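        # Send the file in 5 equal chunks: each intermediate chunk is
        # acknowledged with 202 ACCEPTED and the final chunk with 201 CREATED.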
for i in range(chunks_num):
request = self.get_request(
chunk, (chunk_len * i, chunk_len * (i + 1) - 1, file_len))
response = self.get_response(request, bu_factory)
if i == chunks_num - 1:
check_status = status.HTTP_201_CREATED
else:
check_status = status.HTTP_202_ACCEPTED
self.assertEqual(response.status_code, check_status)
box_upload = BoxUpload.objects.get(pk=bu_factory.pk)
self.assertEqual(box_upload.file.read(), file_data)
self.assertEqual(box_upload.status, BoxUpload.COMPLETED)
self.assertNotEqual(box_upload.date_completed, None)
self.assertEqual(box_upload.provider.file.read(), file_data)
| 36.603886 | 80 | 0.643866 | 2,712 | 24,488 | 5.550516 | 0.084071 | 0.051817 | 0.064173 | 0.063575 | 0.854979 | 0.826613 | 0.802099 | 0.792666 | 0.769614 | 0.75553 | 0 | 0.012424 | 0.257146 | 24,488 | 668 | 81 | 36.658683 | 0.815073 | 0.010332 | 0 | 0.702652 | 0 | 0 | 0.038881 | 0.001981 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.087121 | false | 0 | 0.013258 | 0.001894 | 0.121212 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4de02e61e24ac39f6407a5451f06e2608fd52294 | 6,814 | py | Python | django_namespaced_cache/test.py | pardo/namespaced-cache | a14fb224e57dce4e5c75b8c09780ba2a3d1b6c46 | [
"MIT"
] | null | null | null | django_namespaced_cache/test.py | pardo/namespaced-cache | a14fb224e57dce4e5c75b8c09780ba2a3d1b6c46 | [
"MIT"
] | null | null | null | django_namespaced_cache/test.py | pardo/namespaced-cache | a14fb224e57dce4e5c75b8c09780ba2a3d1b6c46 | [
"MIT"
] | null | null | null | import unittest
from namespaced_cache import NamespacedCache, MockCache
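# These tests exercise NamespacedCache's dotted-key semantics: keys such as
# "a.b.c" live in nested namespaces, and get_keys()/delete_keys() operate on
# whole sub-namespaces by prefix. MockCache is assumed to be an in-memory
# stand-in for a real Django cache backend, used only for testing.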
class TestNamespacedCache(unittest.TestCase):
def setUp(self):
cache = MockCache()
self.cache = NamespacedCache()
self.cache.set_cache(cache)
def test_get_set(self):
self.cache.set("a", 1)
self.cache.set("a.b", 2)
self.cache.set("a.b.c", 3)
self.assertEqual(self.cache.get("a"), 1)
self.assertEqual(self.cache.get("a.b"), 2)
self.assertEqual(self.cache.get("a.b.c"), 3)
def test_delete(self):
self.cache.set("a", 1)
self.cache.set("a.b", 2)
self.cache.set("a.b.c", 3)
self.assertTrue(self.cache.has_key("a"))
self.assertTrue(self.cache.has_key("a.b"))
self.assertTrue(self.cache.has_key("a.b.c"))
self.cache.delete("a")
self.assertFalse(self.cache.has_key("a"))
self.assertTrue(self.cache.has_key("a.b"))
self.assertTrue(self.cache.has_key("a.b.c"))
self.cache.delete("a.b.c")
self.assertFalse(self.cache.has_key("a"))
self.assertTrue(self.cache.has_key("a.b"))
self.assertFalse(self.cache.has_key("a.b.c"))
def test_clear(self):
self.cache.set("a", 1)
self.cache.set("a.b", 2)
self.cache.set("a.b.c", 3)
self.cache.clear()
self.assertFalse(self.cache.has_key("a"))
self.assertFalse(self.cache.has_key("a.b"))
self.assertFalse(self.cache.has_key("a.b.c"))
def test_has_key(self):
self.cache.set("a", 1)
self.cache.set("a.b", 2)
self.cache.set("a.b.c", 3)
self.assertTrue(self.cache.has_key("a"))
self.assertTrue(self.cache.has_key("a.b"))
self.assertTrue(self.cache.has_key("a.b.c"))
self.assertFalse(self.cache.has_key("a.b.c.d"))
self.assertFalse(self.cache.has_key("a.c.d"))
self.assertFalse(self.cache.has_key("a.d"))
self.assertFalse(self.cache.has_key("d"))
def test_set_many(self):
data = {
"a": 1,
"a.b": 2,
"a.b.c": 3
}
self.cache.set_many(data)
self.assertEqual(self.cache.get("a"), 1)
self.assertEqual(self.cache.get("a.b"), 2)
self.assertEqual(self.cache.get("a.b.c"), 3)
def test_get_many(self):
self.cache.set("a", 1)
self.cache.set("a.b", 2)
self.cache.set("a.b.c", 3)
data = self.cache.get_many(["a", "a.b", "a.b.c"])
self.assertEqual(data, {
"a": 1,
"a.b": 2,
"a.b.c": 3
})
def test_delete_many(self):
self.cache.set("a", 1)
self.cache.set("a.b", 2)
self.cache.set("a.b.c", 3)
self.cache.delete_many([
"a",
"a.b.c"
])
self.assertTrue(self.cache.has_key("a.b"))
self.assertFalse(self.cache.has_key("a.b.c"))
self.assertFalse(self.cache.has_key("a"))
def test_get_keys(self):
        # namespaced feature: get_keys() lists stored keys, optionally filtered by prefix
self.cache.set("a", 1)
self.cache.set("a.b", 2)
self.cache.set("a.b.c", 3)
self.cache.set("b", 1)
self.cache.set("b.b", 2)
self.cache.set("b.b.c", 3)
self.cache.set("c.a", 1)
self.cache.set("c.b", 2)
self.cache.set("c.c", 3)
self.cache.set("c.d", 4)
self.cache.set("d", 1)
self.cache.set("d.a", 2)
self.cache.set("d.a.a", 3)
self.cache.set("d.a.b", 4)
self.cache.set("d.a.c", 5)
self.cache.set("d.a.c.a.b", 5)
all_keys = [
"a",
"a.b",
"a.b.c",
"b",
"b.b",
"b.b.c",
"c.a",
"c.b",
"c.c",
"c.d",
"d",
"d.a",
"d.a.a",
"d.a.b",
"d.a.c",
"d.a.c.a.b"
]
for key in self.cache.get_keys():
self.assertTrue(key in all_keys, "Key not found '%s' " % key)
expected_keys = [
"c.a",
"c.b",
"c.c",
"c.d",
]
for key in self.cache.get_keys("c"):
self.assertTrue(key in expected_keys, "Key not found '%s' " % key)
expected_keys = [
"c.a",
]
for key in self.cache.get_keys("c.a"):
self.assertTrue(key in expected_keys, "Key not found '%s' " % key)
expected_keys = [
"d",
"d.a",
"d.a.a",
"d.a.b",
"d.a.c",
"d.a.c.a.b"
]
for key in self.cache.get_keys("d"):
self.assertTrue(key in expected_keys, "Key not found '%s' " % key)
expected_keys = [
"d.a",
"d.a.a",
"d.a.b",
"d.a.c",
"d.a.c.a.b"
]
for key in self.cache.get_keys("d."):
self.assertTrue(key in expected_keys, "Key not found '%s' " % key)
expected_keys = [
"d.a",
"d.a.a",
"d.a.b",
"d.a.c",
"d.a.c.a.b"
]
for key in self.cache.get_keys("d.a"):
self.assertTrue(key in expected_keys, "Key not found '%s' " % key)
expected_keys = [
"d.a.a",
"d.a.b",
"d.a.c",
"d.a.c.a.b"
]
for key in self.cache.get_keys("d.a."):
self.assertTrue(key in expected_keys, "Key not found '%s' " % key)
expected_keys = [
"d.a.c",
"d.a.c.a.b"
]
for key in self.cache.get_keys("d.a.c"):
self.assertTrue(key in expected_keys, "Key not found '%s' " % key)
expected_keys = [
"a",
"a.b",
"a.b.c",
# "b", deleted
# "b.b", deleted
"b.b.c",
"c.a",
# "c.b", deleted
"c.c",
"c.d",
"d",
"d.a",
"d.a.a",
"d.a.b",
# "d.a.c", deleted
"d.a.c.a.b"
]
self.cache.delete("b")
self.cache.delete("b.b")
self.cache.delete("c.b")
self.cache.delete("d.a.c")
for key in self.cache.get_keys():
self.assertTrue(key in expected_keys, "Key not found '%s' " % key)
expected_keys = [
"a",
"a.b",
"a.b.c",
# "b.b", deleted
"b.b.c",
"d"
]
self.cache.delete_keys("c")
self.cache.delete_keys("d.a")
for key in self.cache.get_keys():
self.assertTrue(key in expected_keys, "Key not found '%s' " % key)
if __name__ == '__main__':
unittest.main()
| 24.868613 | 78 | 0.459495 | 981 | 6,814 | 3.109072 | 0.04893 | 0.256721 | 0.141639 | 0.108197 | 0.84623 | 0.796393 | 0.768197 | 0.743607 | 0.708852 | 0.682623 | 0 | 0.010517 | 0.358086 | 6,814 | 273 | 79 | 24.959707 | 0.686786 | 0.013648 | 0 | 0.629268 | 0 | 0 | 0.108116 | 0 | 0 | 0 | 0 | 0 | 0.190244 | 1 | 0.043902 | false | 0 | 0.014634 | 0 | 0.063415 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4de4d2d255acdc45b71d91d664c528b640b0cef2 | 27,282 | py | Python | beartype/_util/cls/pep/utilpep3119.py | posita/beartype | e56399686e1f2ffd5128a4030b19314504e32450 | [
"MIT"
] | null | null | null | beartype/_util/cls/pep/utilpep3119.py | posita/beartype | e56399686e1f2ffd5128a4030b19314504e32450 | [
"MIT"
] | null | null | null | beartype/_util/cls/pep/utilpep3119.py | posita/beartype | e56399686e1f2ffd5128a4030b19314504e32450 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# --------------------( LICENSE )--------------------
# Copyright (c) 2014-2021 Beartype authors.
# See "LICENSE" for further details.
'''
Project-wide :pep:`3119`-compliant **class tester** (i.e., callable testing
various properties of arbitrary classes first standardized by :pep:`3119`)
utilities.
This private submodule is *not* intended for importation by downstream callers.
'''
# ....................{ IMPORTS }....................
from beartype.roar import BeartypeDecorHintPep3119Exception
from beartype._data.datatyping import (
TypeException,
TypeOrTupleTypes,
)
# ....................{ VALIDATORS ~ instance }....................
def die_unless_type_isinstanceable(
# Mandatory parameters.
cls: type,
# Optional parameters.
exception_cls: TypeException = BeartypeDecorHintPep3119Exception,
exception_prefix: str = '',
) -> None:
'''
Raise an exception of the passed type unless the passed object is an
**isinstanceable class** (i.e., class whose metaclass does *not* define an
``__instancecheck__()`` dunder method that raises an exception).
Classes that are *not* isinstanceable include most PEP-compliant type
hints, notably:
* **Generic aliases** (i.e., subscriptable classes overriding the
``__class_getitem__()`` class dunder method standardized by :pep:`560`
subscripted by an arbitrary object) under Python >= 3.9, whose
metaclasses define an ``__instancecheck__()`` dunder method to
unconditionally raise an exception. Generic aliases include:
* :pep:`484`-compliant **subscripted generics.**
* :pep:`585`-compliant type hints.
* User-defined classes whose metaclasses define an ``__instancecheck__()``
dunder method to unconditionally raise an exception, including:
* :pep:`544`-compliant protocols *not* decorated by the
:func:`typing.runtime_checkable` decorator.
Motivation
----------
When a class whose metaclass defines an ``__instancecheck__()`` dunder
method is passed as the second parameter to the :func:`isinstance` builtin,
that builtin defers to that method rather than testing whether the first
parameter passed to that builtin is an instance of that class. If that
method raises an exception, that builtin raises the same exception,
preventing callers from deciding whether arbitrary objects are instances
of that class. For brevity, we refer to that class as "non-isinstanceable."
Most classes are isinstanceable, because deciding whether arbitrary objects
are instances of those classes is a core prerequisite for object-oriented
programming. Most classes that are also PEP-compliant type hints, however,
are *not* isinstanceable, because they're *never* intended to be
instantiated into objects (and typically prohibit instantiation in various
ways); they're only intended to be referenced as type hints annotating
callables, an arguably crude form of callable markup.
:mod:`beartype`-decorated callables typically check the types of arbitrary
objects at runtime by passing those objects and types as the first and
second parameters to the :func:`isinstance` builtin. If those types are
non-isinstanceable, those type-checks will typically raise
non-human-readable exceptions (e.g., ``"TypeError: isinstance() argument 2
cannot be a parameterized generic"`` for :pep:`585`-compliant type hints).
This is non-ideal both because those exceptions are non-human-readable
*and* because those exceptions are raised at call rather than decoration
time, where users expect the :mod:`beartype.beartype` decorator to raise
exceptions for erroneous type hints.
Thus the existence of this function, which the :mod:`beartype.beartype`
decorator calls to validate the usability of type hints that are classes
*before* checking objects against those classes at call time.
Parameters
----------
cls : object
Object to be validated.
exception_cls : TypeException, optional
Type of exception to be raised. Defaults to
:exc:`BeartypeDecorHintPep3119Exception`.
exception_prefix : str, optional
Human-readable label prefixing the representation of this object in the
exception message. Defaults to the empty string.
Raises
----------
:exc:`BeartypeDecorHintPep3119Exception`
If this object is *not* an isinstanceable class.
See Also
----------
:func:`die_unless_type_isinstanceable`
Further details.
'''
# Avoid circular import dependencies.
from beartype._util.cls.utilclstest import die_unless_type
# If this object is *NOT* a class, raise an exception.
die_unless_type(
cls=cls,
exception_cls=exception_cls,
exception_prefix=exception_prefix,
)
# Else, this object is a class.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# CAUTION: Synchronize with the is_type_isinstanceable() tester.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# If this class is *NOT* isinstanceable, raise an exception.
try:
isinstance(None, cls) # type: ignore[arg-type]
except Exception as exception:
assert isinstance(exception_cls, type), (
f'{repr(exception_cls)} not exception class.')
assert isinstance(exception_prefix, str), (
f'{repr(exception_prefix)} not string.')
#FIXME: Uncomment after we uncover why doing so triggers an
#infinite circular exception chain when "hint" is a "GenericAlias".
#It's clearly the is_hint_pep544_protocol() call, but why? In any
#case, the simplest workaround would just be to inline the logic of
#is_hint_pep544_protocol() here directly. Yes, we know. *shrug*
# # Human-readable exception message to be raised as either...
# exception_message = (
# # If this class is a PEP 544-compliant protocol, a message
# # documenting this exact issue and how to resolve it;
# (
# f'{exception_prefix}PEP 544 protocol {hint} '
# f'uncheckable at runtime (i.e., '
# f'not decorated by @typing.runtime_checkable).'
# )
# if is_hint_pep544_protocol(hint) else
# # Else, a fallback message documenting this general issue.
# (
# f'{exception_prefix}type {hint} uncheckable at runtime (i.e., '
# f'not passable as second parameter to isinstance() '
# f'due to raising "{exception}" from metaclass '
# f'__instancecheck__() method).'
# )
# )
# Exception message to be raised.
exception_message = (
f'{exception_prefix}{repr(cls)} uncheckable at runtime '
f'(i.e., not passable as second parameter to isinstance(), '
f'due to raising "{exception}" from metaclass '
f'__instancecheck__() method).'
)
# Raise this exception chained onto this lower-level exception.
raise exception_cls(exception_message) from exception
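# Illustrative usage (an assumption, not from the original module):
#
#     die_unless_type_isinstanceable(str)        # silently passes
#     die_unless_type_isinstanceable(list[int])  # raises
#     # BeartypeDecorHintPep3119Exception under Python >= 3.9, since the
#     # metaclass of this generic alias prohibits isinstance() checks.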
#FIXME: Unit test us up.
def die_unless_type_or_types_isinstanceable(
# Mandatory parameters.
type_or_types: TypeOrTupleTypes,
# Optional parameters.
exception_cls: TypeException = BeartypeDecorHintPep3119Exception,
exception_prefix: str = '',
) -> None:
'''
Raise an exception of the passed type unless the passed object is either an
**isinstanceable class** (i.e., class whose metaclass does *not* define an
``__instancecheck__()`` dunder method that raises an exception) *or* tuple
of one or more isinstanceable classes.
Parameters
----------
type_or_types : object
Object to be validated.
exception_cls : TypeException, optional
Type of exception to be raised. Defaults to
:exc:`BeartypeDecorHintPep3119Exception`.
exception_prefix : str, optional
Human-readable label prefixing the representation of this object in the
exception message. Defaults to the empty string.
Raises
----------
:exc:`BeartypeDecorHintPep3119Exception`
If this object is neither:
* An isinstanceable class.
* A tuple containing only isinstanceable classes.
'''
# Avoid circular import dependencies.
from beartype._util.cls.utilclstest import die_unless_type_or_types
# If this object is neither a class nor tuple of classes, raise an
# exception.
die_unless_type_or_types(
type_or_types=type_or_types,
exception_cls=exception_cls,
exception_prefix=exception_prefix,
)
# Else, this object is either a class or tuple of classes.
# If this object is a class...
if isinstance(type_or_types, type):
# If this class is *NOT* isinstanceable, raise an exception.
die_unless_type_isinstanceable(
cls=type_or_types,
exception_cls=exception_cls,
exception_prefix=exception_prefix,
)
# Else, this class is isinstanceable.
# Else, this object *MUST* (by process of elimination and the above
# validation) be a tuple of classes. In this case...
else:
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# CAUTION: Synchronize with the is_type_isinstanceable() tester.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# If this tuple of classes is *NOT* isinstanceable, raise an exception.
try:
isinstance(None, type_or_types) # type: ignore[arg-type]
except Exception as exception:
assert isinstance(exception_cls, type), (
f'{repr(exception_cls)} not exception class.')
assert isinstance(exception_prefix, str), (
f'{repr(exception_prefix)} not string.')
# Exception message to be raised.
exception_message = (
f'{exception_prefix}{repr(type_or_types)} '
f'uncheckable at runtime'
)
# For the 0-based index of each tuple class and that class...
for cls_index, cls in enumerate(type_or_types):
# If this class is *NOT* isinstanceable, raise an exception.
die_unless_type_isinstanceable(
cls=cls,
exception_cls=exception_cls,
exception_prefix=(
f'{exception_message}, as tuple item {cls_index} '),
)
# Else, this class is isinstanceable. Continue to the next.
# Raise this exception chained onto this lower-level exception.
# Although this should *NEVER* happen (as we should have already
# raised an exception above), we nonetheless do so for safety.
raise exception_cls(f'{exception_message}.') from exception
# ....................{ VALIDATORS ~ subclass }....................
def die_unless_type_issubclassable(
# Mandatory parameters.
cls: type,
# Optional parameters.
exception_cls: TypeException = BeartypeDecorHintPep3119Exception,
exception_prefix: str = '',
) -> None:
'''
Raise an exception of the passed type unless the passed object is an
**issubclassable class** (i.e., class whose metaclass does *not* define a
``__subclasscheck__()`` dunder method that raises an exception).
Classes that are *not* issubclassable include most PEP-compliant type
hints, notably:
* **Generic aliases** (i.e., subscriptable classes overriding the
``__class_getitem__()`` class dunder method standardized by :pep:`560`
subscripted by an arbitrary object) under Python >= 3.9, whose
metaclasses define an ``__subclasscheck__()`` dunder method to
unconditionally raise an exception. Generic aliases include:
* :pep:`484`-compliant **subscripted generics.**
* :pep:`585`-compliant type hints.
* User-defined classes whose metaclasses define a ``__subclasscheck__()``
dunder method to unconditionally raise an exception, including:
* :pep:`544`-compliant protocols *not* decorated by the
:func:`typing.runtime_checkable` decorator.
Motivation
----------
When a class whose metaclass defines a ``__subclasscheck__()`` dunder
method is passed as the second parameter to the :func:`issubclass` builtin,
that builtin defers to that method rather than testing whether the first
    parameter passed to that builtin is a subclass of that class. If that
method raises an exception, that builtin raises the same exception,
preventing callers from deciding whether arbitrary objects are subclasses
of that class. For brevity, we refer to that class as "non-issubclassable."
Most classes are issubclassable, because deciding whether arbitrary classes
are subclasses of those classes is a core prerequisite for object-oriented
programming. Most classes that are also PEP-compliant type hints, however,
are *not* issubclassable, because they're *never* intended to be
instantiated into objects (and typically prohibit instantiation in various
ways); they're only intended to be referenced as type hints annotating
callables, an arguably crude form of callable markup.
:mod:`beartype`-decorated callables typically check the superclasses of
arbitrary classes at runtime by passing those classes and superclasses as
the first and second parameters to the :func:`issubclass` builtin. If those
types are non-issubclassable, those type-checks will typically raise
non-human-readable exceptions (e.g., ``"TypeError: issubclass() argument 2
cannot be a parameterized generic"`` for :pep:`585`-compliant type hints).
This is non-ideal both because those exceptions are non-human-readable
*and* because those exceptions are raised at call rather than decoration
time, where users expect the :mod:`beartype.beartype` decorator to raise
exceptions for erroneous type hints.
Thus the existence of this function, which the :mod:`beartype.beartype`
decorator calls to validate the usability of type hints that are classes
*before* checking objects against those classes at call time.
Parameters
----------
cls : object
Object to be validated.
exception_cls : TypeException, optional
Type of exception to be raised. Defaults to
:exc:`BeartypeDecorHintPep3119Exception`.
exception_prefix : str, optional
Human-readable label prefixing the representation of this object in the
exception message. Defaults to the empty string.
Raises
----------
:exc:`BeartypeDecorHintPep3119Exception`
If this object is *not* an issubclassable class.
'''
# Avoid circular import dependencies.
from beartype._util.cls.utilclstest import die_unless_type
# If this hint is *NOT* a class, raise an exception.
die_unless_type(
cls=cls,
exception_cls=exception_cls,
exception_prefix=exception_prefix,
)
# Else, this hint is a class.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# CAUTION: Synchronize with the is_type_issubclassable() tester.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
try:
issubclass(type, cls) # type: ignore[arg-type]
except Exception as exception:
assert isinstance(exception_cls, type), (
f'{repr(exception_cls)} not exception class.')
assert isinstance(exception_prefix, str), (
f'{repr(exception_prefix)} not string.')
# Exception message to be raised.
exception_message = (
f'{exception_prefix}{repr(cls)} uncheckable at runtime '
f'(i.e., not passable as second parameter to issubclass(), '
f'due to raising "{exception}" from metaclass '
f'__subclasscheck__() method).'
)
# Raise this exception chained onto this lower-level exception.
raise exception_cls(exception_message) from exception
#FIXME: Unit test us up.
def die_unless_type_or_types_issubclassable(
# Mandatory parameters.
type_or_types: TypeOrTupleTypes,
# Optional parameters.
exception_cls: TypeException = BeartypeDecorHintPep3119Exception,
exception_prefix: str = '',
) -> None:
'''
Raise an exception of the passed type unless the passed object is either an
**issubclassable class** (i.e., class whose metaclass does *not* define an
``__subclasscheck__()`` dunder method that raises an exception) *or* tuple
of one or more issubclassable classes.
Parameters
----------
type_or_types : object
Object to be validated.
exception_cls : TypeException, optional
Type of exception to be raised. Defaults to
:exc:`BeartypeDecorHintPep3119Exception`.
exception_prefix : str, optional
Human-readable label prefixing the representation of this object in the
exception message. Defaults to the empty string.
Raises
----------
:exc:`BeartypeDecorHintPep3119Exception`
If this object is neither:
* An issubclassable class.
* A tuple containing only issubclassable classes.
'''
# Avoid circular import dependencies.
from beartype._util.cls.utilclstest import die_unless_type_or_types
# If this object is neither a class nor tuple of classes, raise an
# exception.
die_unless_type_or_types(
type_or_types=type_or_types,
exception_cls=exception_cls,
exception_prefix=exception_prefix,
)
# Else, this object is either a class or tuple of classes.
# If this object is a class...
if isinstance(type_or_types, type):
# If this class is *NOT* issubclassable, raise an exception.
die_unless_type_issubclassable(
cls=type_or_types,
exception_cls=exception_cls,
exception_prefix=exception_prefix,
)
# Else, this class is issubclassable.
# Else, this object *MUST* (by process of elimination and the above
# validation) be a tuple of classes. In this case...
else:
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# CAUTION: Synchronize with the is_type_issubclassable() tester.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# If this tuple of classes is *NOT* issubclassable, raise an exception.
try:
issubclass(type, type_or_types) # type: ignore[arg-type]
except Exception as exception:
assert isinstance(exception_cls, type), (
f'{repr(exception_cls)} not exception class.')
assert isinstance(exception_prefix, str), (
f'{repr(exception_prefix)} not string.')
# Exception message to be raised.
exception_message = (
f'{exception_prefix}{repr(type_or_types)} '
f'uncheckable at runtime'
)
# For the 0-based index of each tuple class and that class...
for cls_index, cls in enumerate(type_or_types):
# If this class is *NOT* issubclassable, raise an exception.
die_unless_type_issubclassable(
cls=cls,
exception_cls=exception_cls,
exception_prefix=(
f'{exception_message}, as tuple item {cls_index} '),
)
# Else, this class is issubclassable. Continue to the next.
# Raise this exception chained onto this lower-level exception.
# Although this should *NEVER* happen (as we should have already
# raised an exception above), we nonetheless do so for safety.
raise exception_cls(f'{exception_message}.') from exception
# ....................{ TESTERS }....................
def is_type_isinstanceable(cls: object) -> bool:
'''
``True`` only if the passed object is either an **isinstanceable class**
(i.e., class whose metaclass does *not* define an ``__instancecheck__()``
dunder method that raises an exception) *or* tuple containing only
isinstanceable classes.
This tester is intentionally *not* memoized (e.g., by the
:func:`callable_cached` decorator). Although the implementation does *not*
trivially reduce to an efficient one-liner, the inefficient branch of this
implementation *only* applies to erroneous edge cases resulting in raised
exceptions and is thus largely ignorable.
Caveats
----------
**This tester may return false positives in unlikely edge cases.**
Internally, this tester tests whether this class is isinstanceable by
detecting whether passing the ``None`` singleton and this class to the
:func:`isinstance` builtin raises an exception. If that call raises *no*
exception, this class is probably but *not* necessarily isinstanceable.
Since the metaclass of this class could define an ``__instancecheck__()``
dunder method to conditionally raise exceptions except when passed the
``None`` singleton, there exists *no* means of ascertaining whether a class
is fully isinstanceable in the general case. Since most classes that are
*not* isinstanceable are unconditionally isinstanceable (i.e., the
metaclasses of those classes define an ``__instancecheck__()`` dunder
method to unconditionally raise exceptions), this distinction is generally
meaningless in the real world. This test thus generally suffices.
Parameters
----------
cls : object
Object to be tested.
Returns
----------
bool
``True`` only if this object is either:
* An isinstanceable class.
* A tuple containing only isinstanceable classes.
See Also
----------
:func:`die_unless_type_isinstanceable`
Further details.
'''
# If this object is *NOT* a class, immediately return false.
if not isinstance(cls, type):
return False
# Else, this object is a class.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# CAUTION: Synchronize with die_unless_type_isinstanceable().
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# Attempt to pass this class as the second parameter to the isinstance()
# builtin to decide whether or not this class is safely usable as a
# standard class or not.
#
# Note that this leverages an EAFP (i.e., "It is easier to ask forgiveness
# than permission") approach and thus imposes a minor performance penalty,
# but that there exists *NO* faster alternative applicable to arbitrary
# user-defined classes, whose metaclasses may define an __instancecheck__()
# dunder method to raise exceptions and thus prohibit being passed as the
# second parameter to the isinstance() builtin, the primary means employed
# by @beartype wrapper functions to check arbitrary types.
try:
isinstance(None, cls) # type: ignore[arg-type]
# If the prior function call raised *NO* exception, this class is
# probably but *NOT* necessarily isinstanceable.
return True
# If the prior function call raised an exception, this class is *NOT*
# isinstanceable. In this case, return false.
except:
return False
def is_type_issubclassable(cls: object) -> bool:
'''
``True`` only if the passed object is either an **issubclassable class**
(i.e., class whose metaclass does *not* define a ``__subclasscheck__()``
dunder method that raises an exception) *or* tuple containing only
issubclassable classes.
This tester is intentionally *not* memoized (e.g., by the
:func:`callable_cached` decorator). Although the implementation does *not*
trivially reduce to an efficient one-liner, the inefficient branch of this
implementation *only* applies to erroneous edge cases resulting in raised
exceptions and is thus largely ignorable.
Caveats
----------
**This tester may return false positives in unlikely edge cases.**
Internally, this tester tests whether this class is issubclassable by
detecting whether passing the :class:`type` superclass and this class to
the :func:`issubclass` builtin raises an exception. If that call raises
*no* exception, this class is probably but *not* necessarily
issubclassable. Since the metaclass of this class could define a
``__subclasscheck__()`` dunder method to conditionally raise exceptions
except when passed the :class:`type` superclass, there exists *no* means of
ascertaining whether a class is fully issubclassable in the general case.
Since most classes that are *not* issubclassable are unconditionally
issubclassable (i.e., the metaclasses of those classes define an
``__subclasscheck__()`` dunder method to unconditionally raise exceptions),
this distinction is generally meaningless in the real world. This test thus
generally suffices.
Parameters
----------
cls : object
Object to be tested.
Returns
----------
bool
``True`` only if this object is either:
* An issubclassable class.
* A tuple containing only issubclassable classes.
See Also
----------
:func:`die_unless_type_issubclassable`
Further details.
'''
# If this object is *NOT* a class, immediately return false.
if not isinstance(cls, type):
return False
# Else, this object is a class.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# CAUTION: Synchronize with die_unless_type_issubclassable().
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# Attempt to pass this class as the second parameter to the issubclass()
# builtin to decide whether or not this class is safely usable as a
# standard class or not.
#
# Note that this leverages an EAFP (i.e., "It is easier to ask forgiveness
# than permission") approach and thus imposes a minor performance penalty,
# but that there exists *NO* faster alternative applicable to arbitrary
# user-defined classes, whose metaclasses may define a __subclasscheck__()
# dunder method to raise exceptions and thus prohibit being passed as the
# second parameter to the issubclass() builtin, the primary means employed
# by @beartype wrapper functions to check arbitrary types.
try:
issubclass(type, cls) # type: ignore[arg-type]
# If the prior function call raised *NO* exception, this class is
# probably but *NOT* necessarily issubclassable.
return True
# If the prior function call raised an exception, this class is *NOT*
# issubclassable. In this case, return false.
except:
return False
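# ....................{ DEMO                              }....................
# Minimal illustrative sketch (not part of the original module) exercising both
# testers against a deliberately unusable class; the names below are
# hypothetical.
if __name__ == '__main__':
    class _RaisingMeta(type):
        '''Metaclass whose checks unconditionally raise, mimicking PEP 585
        generic aliases and non-@runtime_checkable PEP 544 protocols.'''
        def __instancecheck__(cls, obj):
            raise TypeError(f'{cls!r} is not isinstanceable')
        def __subclasscheck__(cls, subcls):
            raise TypeError(f'{cls!r} is not issubclassable')
    class _Unusable(metaclass=_RaisingMeta):
        pass
    # Ordinary classes are checkable; _Unusable is not.
    assert is_type_isinstanceable(str)
    assert not is_type_isinstanceable(_Unusable)
    assert is_type_issubclassable(str)
    assert not is_type_issubclassable(_Unusable)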
| 43.442675 | 81 | 0.651382 | 3,171 | 27,282 | 5.504888 | 0.122674 | 0.024748 | 0.015124 | 0.010426 | 0.895795 | 0.875401 | 0.865032 | 0.857528 | 0.834269 | 0.797892 | 0 | 0.005873 | 0.238619 | 27,282 | 627 | 82 | 43.511962 | 0.834489 | 0.71637 | 0 | 0.8 | 0 | 0 | 0.142356 | 0.048163 | 0 | 0 | 0 | 0.00319 | 0.055172 | 1 | 0.041379 | false | 0.013793 | 0.041379 | 0 | 0.124138 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
150b4eb0b32d39941e9d2701355f50b2fbc7131e | 16,795 | py | Python | bandits_to_rank/environment.py | gaudel/ranking_bandits | 1fe4a38b17a3bb7ccab3ae0f4d0afb70fe54dbc9 | [
"MIT"
] | 3 | 2021-07-22T14:46:01.000Z | 2021-07-23T08:55:01.000Z | bandits_to_rank/environment.py | gaudel/ranking_bandits | 1fe4a38b17a3bb7ccab3ae0f4d0afb70fe54dbc9 | [
"MIT"
] | null | null | null | bandits_to_rank/environment.py | gaudel/ranking_bandits | 1fe4a38b17a3bb7ccab3ae0f4d0afb70fe54dbc9 | [
"MIT"
] | null | null | null | ### Environment
## Packages
from random import random
import random as rd
import numpy as np
from enum import Enum, auto
# from bandits import maximum_K_index, maximum_K
from bandits_to_rank.tools.tools import order_theta_according_to_kappa_index, maximum_K_index, maximum_K
### Helper Functions
## Environment
class PositionsRanking(Enum):
FIXED = auto()
DECREASING = auto()
SHUFFLE = auto()
SHUFFLE_EXCEPT_FIRST = auto()
INCREASING = auto()
INCREASING_EXCEPT_FIRST = auto()
class Environment_PBM:
"""
    Describes the behaviour of a user presented with a list of items.
    Returns a list of rewards: r_k = 1 with probability theta_k * kappa_k and 0 otherwise
"""
def __init__(self, thetas, kappas, label=None):
self.thetas = np.array(thetas)
self.kappas = np.array(kappas)
self.label = label
self.rng = np.random.default_rng()
def shuffle(self, positions_ranking=PositionsRanking.FIXED):
"""Shuffle items and positions
>>> from GRAB.bandits_to_rank.environment import Environment_PBM, PositionsRanking
>>> import random
>>> import numpy as np
>>> np.set_printoptions(precision=2)
>>> thetas = [0.9, 0.8, 0.5, 0.4, 0.3, 0.2, 0.1]
>>> kappas = [1, 0.7, 0.5, 0.4, 0.3]
>>> env = Environment_PBM(thetas, kappas)
>>> env.get_best_index_decrease()
array([0, 1, 2, 3, 4])
>>> env.get_best_index()
array([0, 1, 2, 3, 4])
>>> env.rng = np.random.default_rng(1)
>>> env.shuffle(PositionsRanking.SHUFFLE_EXCEPT_FIRST)
>>> env.thetas
array([0.2, 0.9, 0.8, 0.3, 0.5, 0.1, 0.4])
>>> env.get_best_index_decrease()
array([1, 2, 4, 6, 3])
>>> env.kappas
array([1. , 0.4, 0.3, 0.5, 0.7])
>>> env.get_best_index()
array([1, 6, 3, 4, 2])
>>> env.shuffle(PositionsRanking.DECREASING)
>>> env.thetas
array([0.9, 0.2, 0.4, 0.1, 0.5, 0.8, 0.3])
>>> env.kappas
array([1. , 0.7, 0.5, 0.4, 0.3])
>>> env.shuffle(PositionsRanking.SHUFFLE)
>>> env.thetas
array([0.2, 0.1, 0.9, 0.3, 0.5, 0.8, 0.4])
>>> env.kappas
array([0.3, 0.4, 0.7, 1. , 0.5])
>>> env.shuffle(PositionsRanking.INCREASING)
>>> env.thetas
array([0.2, 0.9, 0.8, 0.1, 0.3, 0.5, 0.4])
>>> env.kappas
array([0.3, 0.4, 0.5, 0.7, 1. ])
>>> env.shuffle(PositionsRanking.INCREASING_EXCEPT_FIRST)
>>> env.thetas
array([0.2, 0.1, 0.9, 0.8, 0.4, 0.3, 0.5])
>>> env.kappas
array([1. , 0.3, 0.4, 0.5, 0.7])
"""
# thetas
self.rng.shuffle(self.thetas)
# kappas
if positions_ranking is PositionsRanking.FIXED:
pass
elif positions_ranking is PositionsRanking.DECREASING:
self.kappas.sort()
self.kappas = self.kappas[::-1]
elif positions_ranking is PositionsRanking.SHUFFLE:
self.rng.shuffle(self.kappas)
elif positions_ranking is PositionsRanking.SHUFFLE_EXCEPT_FIRST:
self.kappas.sort()
self.kappas = self.kappas[::-1]
self.rng.shuffle(self.kappas[1:])
elif positions_ranking is PositionsRanking.INCREASING:
self.kappas.sort()
elif positions_ranking is PositionsRanking.INCREASING_EXCEPT_FIRST:
self.kappas.sort()
self.kappas = self.kappas[::-1]
self.kappas[1:].sort()
else:
raise ValueError(f'unhandled ranking on positions: {positions_ranking}')
def get_reward(self, propositions):
        return np.array(self.rng.random() < self.thetas[propositions] * self.kappas, dtype=int)
def _kappas(self):
return self.kappas
def _thetas(self):
return self.thetas
def get_setting(self):
return len(self.thetas), len(self.kappas)
# def get_best(self):
# return ordonne_theta_function_kappa(self.thetas,self.kappas)
def get_best_index(self):
return order_theta_according_to_kappa_index(self.thetas, self.kappas)
def get_best_decrease(self):
nb_position = len(self.kappas)
return maximum_K(self.thetas, nb_position)
def get_best_index_decrease(self):
nb_position = len(self.kappas)
return maximum_K_index(self.thetas, nb_position)
def get_expected_reward(self, propositions):
return self.kappas * self.thetas[propositions]
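    # Worked example (illustrative; the values are made up): with
    # thetas = [0.9, 0.8] and kappas = [1.0, 0.5], proposing items (0, 1)
    # yields expected click probabilities kappas * thetas[propositions]
    # = [1.0 * 0.9, 0.5 * 0.8] = [0.9, 0.4].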
def get_params(self):
return {"label": self.label, "thetas": self.thetas, "kappas": self.kappas}
class Environment_multirequest_PBM:
"""
    Describes the behaviour of a user presented with a list of items.
    Returns a list of rewards: r_k = 1 with probability theta_k * kappa_k and 0 otherwise
"""
def __init__(self, thetas, kappas):
self.thetas = thetas
self.kappas = np.array(kappas)
self.rng = np.random.default_rng()
def shuffle(self, positions_ranking=PositionsRanking.FIXED):
"""Shuffle items and positions
>>> from GRAB.bandits_to_rank.environment import Environment_multirequest_PBM, PositionsRanking
>>> import random
>>> import numpy as np
>>> np.set_printoptions(precision=2)
        >>> thetas = {1:[0.9, 0.8, 0.5, 0.4, 0.3, 0.2, 0.1],
        ...     2:[0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.01],
        ...     3:[0.19, 0.8, 0.35, 0.4, 0.23, 0.2, 0.61]}
>>> kappas = [1, 0.7, 0.5, 0.4, 0.3]
>>> env = Environment_multirequest_PBM(thetas, kappas)
>>> env.get_best_index_decrease(1)
array([0, 1, 2, 3, 4])
>>> env.get_best_index(1)
array([0, 1, 2, 3, 4])
>>> random.seed(1)
>>> env.shuffle(1,fixed_kappa=True)
>>> env.thetas[1]
[0.8, 0.3, 0.9, 0.5, 0.2, 0.1, 0.4]
>>> env.get_best_index_decrease(1)
array([2, 0, 3, 6, 1])
>>> env.kappas
[1, 0.7, 0.5, 0.4, 0.3]
>>> env.get_best_index(1)
array([2, 0, 3, 6, 1])
>>> env.shuffle(1)
>>> env.thetas
[0.5, 0.1, 0.4, 0.3, 0.8, 0.2, 0.9]
>>> env.get_best_index_decrease(1)
array([6, 4, 0, 2, 3])
>>> env.kappas
[1, 0.3, 0.5, 0.7, 0.4]
>>> env.get_best_index(1)
array([6, 3, 0, 4, 2])
"""
raise NotImplementedError()
def get_reward(self, propositions, query):
        return np.array(self.rng.random() < self.thetas[query][propositions] * self.kappas, dtype=int)
def _kappas(self):
return self.kappas
def _thetas(self):
return self.thetas
def _thetas_query(self, query):
return self.thetas[query]
def _query_nb(self):
return len(self.thetas.keys())
def _query_list(self):
return self.thetas.keys()
def get_setting(self, query):
return len(self.thetas[query]), len(self.kappas)
def get_next_query(self):
return rd.choice(list(self._query_list()))
# def get_best(self):
# return ordonne_theta_function_kappa(self.thetas,self.kappas)
def get_best_index(self, query):
return order_theta_according_to_kappa_index(self.thetas[query], self.kappas)
def get_best_decrease(self, query):
nb_position = len(self.kappas)
return maximum_K(self.thetas[query], nb_position)
def get_best_index_decrease(self, query):
nb_position = len(self.kappas)
return maximum_K_index(self.thetas[query], nb_position)
def get_expected_reward(self, propositions, query):
return self.kappas * self.thetas[query][propositions]
def get_params(self):
return {"thetas": self.thetas, "kappas": self.kappas}
class Environment_Cascade:
"""
    Describes the behaviour of a user presented with a list of items.
    Returns a list of rewards: r_k = 1 with probability theta_k given that position k is observed, and 0 otherwise
Examples
--------
>>> import numpy as np
>>> np.set_printoptions(precision=3)
>>> thetas = [0.1, 0.5, 0.7, 0.3]
>>> env = Environment_Cascade(thetas, np.arange(3))
>>> env_dec = Environment_Cascade(thetas, np.arange(2,-1,-1))
>>> env.position_index_to_view_index
array([0, 1, 2])
>>> env_dec.position_index_to_view_index
array([2, 1, 0])
>>> Environment_Cascade(thetas, np.array([0, 3, 1, 2])).position_index_to_view_index
array([0, 2, 3, 1])
>>> arm = np.array([0, 1, 2])
>>> round(env_dec.get_expected_reward(arm).sum(),3), env.get_expected_reward(arm)
(0.865, array([0.1 , 0.45 , 0.315]))
>>> arm = np.array([2, 1, 0])
>>> round(env_dec.get_expected_reward(arm).sum(),3), env_dec.get_expected_reward(arm)
(0.865, array([0.315, 0.45 , 0.1 ]))
>>> arm = np.array([2, 3, 1])
>>> round(env_dec.get_expected_reward(arm).sum(),3), env.get_expected_reward(arm)
(0.895, array([0.7 , 0.09 , 0.105]))
>>> arm = np.array([1, 3, 2])
>>> round(env_dec.get_expected_reward(arm).sum(),3), env_dec.get_expected_reward(arm)
(0.895, array([0.105, 0.09 , 0.7 ]))
>>> arm = np.array([1, 2, 3])
>>> round(env_dec.get_expected_reward(arm).sum(),3), env.get_expected_reward(arm)
(0.895, array([0.5 , 0.35 , 0.045]))
"""
def __init__(self, thetas, order_view, label=None):
self.thetas = np.array(thetas)
self.nb_position = len(order_view)
self.label = label
self.rng = np.random.default_rng()
self.set_order_view(order_view)
def set_order_view(self, order_view):
self.view_index_to_position_index = order_view
self.position_index_to_view_index = np.argsort(order_view)
def shuffle(self, positions_ranking=PositionsRanking.FIXED):
"""Shuffle items and positions
>>> from GRAB.bandits_to_rank.environment import Environment_Cascade, PositionsRanking
>>> import random
>>> import numpy as np
>>> np.set_printoptions(precision=2)
>>> thetas = [0.9, 0.8, 0.5, 0.4, 0.3, 0.2, 0.1]
>>> env = Environment_Cascade(thetas, np.arange(5))
>>> env.get_best_index_decrease()
array([0, 1, 2, 3, 4])
>>> env.get_best_index()
array([0, 1, 2, 3, 4])
>>> env.rng = np.random.default_rng(1)
>>> env.shuffle(PositionsRanking.SHUFFLE_EXCEPT_FIRST)
>>> env.thetas
array([0.2, 0.9, 0.8, 0.3, 0.5, 0.1, 0.4])
>>> env.get_best_index_decrease()
array([1, 2, 4, 6, 3])
>>> env.view_index_to_position_index
array([0, 3, 4, 2, 1])
>>> env.get_best_index()
array([1, 6, 3, 4, 2])
>>> env.shuffle(PositionsRanking.DECREASING)
>>> env.thetas
array([0.9, 0.2, 0.4, 0.1, 0.5, 0.8, 0.3])
>>> env.view_index_to_position_index
array([0, 1, 2, 3, 4])
>>> env.shuffle(PositionsRanking.SHUFFLE)
>>> env.thetas
array([0.2, 0.1, 0.9, 0.3, 0.5, 0.8, 0.4])
>>> env.view_index_to_position_index
array([4, 3, 1, 0, 2])
>>> env.shuffle(PositionsRanking.INCREASING)
>>> env.thetas
array([0.2, 0.9, 0.8, 0.1, 0.3, 0.5, 0.4])
>>> env.view_index_to_position_index
array([4, 3, 2, 1, 0])
>>> env.shuffle(PositionsRanking.INCREASING_EXCEPT_FIRST)
>>> env.thetas
array([0.2, 0.1, 0.9, 0.8, 0.4, 0.3, 0.5])
>>> env.view_index_to_position_index
array([0, 4, 3, 2, 1])
"""
# thetas
self.rng.shuffle(self.thetas)
# positions
if positions_ranking is PositionsRanking.FIXED:
pass
elif positions_ranking is PositionsRanking.DECREASING:
self.view_index_to_position_index = np.arange(self.nb_position)
elif positions_ranking is PositionsRanking.SHUFFLE:
self.rng.shuffle(self.view_index_to_position_index)
elif positions_ranking is PositionsRanking.SHUFFLE_EXCEPT_FIRST:
self.view_index_to_position_index.sort()
self.rng.shuffle(self.view_index_to_position_index[1:])
elif positions_ranking is PositionsRanking.INCREASING:
self.view_index_to_position_index = np.arange(self.nb_position - 1, -1, -1)
elif positions_ranking is PositionsRanking.INCREASING_EXCEPT_FIRST:
self.view_index_to_position_index = np.arange(self.nb_position, 0, -1)
self.view_index_to_position_index[0] = 0
else:
raise ValueError(f'unhandled ranking on positions: {positions_ranking}')
self.position_index_to_view_index = np.argsort(self.view_index_to_position_index)
def get_reward(self, propositions):
"""
        Sample a click vector (at most one click under the cascade model)
        for the given propositions, using the observation probabilities
        $P(o_i) = \prod_{l=0}^{i-1} (1-\theta_l)$ taken in view order
Parameters
----------
propositions
Returns
-------
Examples
--------
>>> import numpy as np
>>> np.set_printoptions(precision=2)
>>> thetas = [0.1, 0.5, 0.6, 0.3]
>>> env = Environment_Cascade(thetas, np.arange(3))
>>> env_dec = Environment_Cascade(thetas, np.arange(2,-1,-1))
>>> propositions = np.array([0, 1, 2])
>>> env.get_expected_reward(propositions)
array([0.1 , 0.45, 0.27])
>>> n = 100000
>>> stats = np.zeros(len(propositions))
>>> for _ in range(n): stats += env.get_reward(propositions)
>>> stats / n
array([0.1 , 0.45, 0.27])
>>> propositions = np.array([2, 1, 0])
>>> env_dec.get_expected_reward(propositions)
array([0.27, 0.45, 0.1 ])
>>> stats = np.zeros(len(propositions))
>>> for _ in range(n): stats += env_dec.get_reward(propositions)
>>> stats / n
array([0.27, 0.45, 0.1 ])
"""
click_probabilities = np.concatenate((self.get_expected_reward(propositions), np.zeros(1)))
return self.rng.multinomial(1, click_probabilities)[:-1]
def _thetas(self):
return self.thetas
def _kappas(self):
return np.array([0. for i in range(self.nb_position)])
def get_setting(self):
return len(self.thetas), self.nb_position
def get_best_index(self):
return np.array(self.thetas).argsort()[::-1][self.view_index_to_position_index]
def get_best_decrease(self):
theta_ordered = np.sort(np.array(self.thetas))
return theta_ordered[::-1]
def get_best_index_decrease(self):
return maximum_K_index(self.thetas, self.nb_position)
def get_expected_reward(self, propositions):
"""
        Get the vector of expected rewards (click probabilities), one per
        position: the probability of observing a position, $P(o_i) =
        \prod_{l=0}^{i-1} (1-\theta_l)$, times the theta of the item placed there
Parameters
----------
propositions
Returns
-------
Examples
--------
>>> import numpy as np
>>> np.set_printoptions(precision=3)
>>> thetas = [0.1, 0.5, 0.7, 0.3]
>>> env = Environment_Cascade(thetas, np.arange(3))
>>> env_dec = Environment_Cascade(thetas, np.arange(2,-1,-1))
>>> arm = np.array([0, 1, 2])
>>> env.get_expected_reward(arm)
array([0.1 , 0.45 , 0.315])
>>> arm = np.array([2, 1, 0])
>>> env_dec.get_expected_reward(arm)
array([0.315, 0.45 , 0.1 ])
"""
return self.observation_probabilities(propositions) * self.thetas[propositions]
def observation_probabilities(self, propositions):
"""
        Get the vector of probabilities of observing each position, given the
        items placed in each position
$P(o_i) = \prod_{l=0}^{i-1} (1-\theta_l)$, for each $i$ in $\{0, ..., L-1\}$
Parameters
----------
propositions
Returns
-------
Examples
--------
>>> import numpy as np
>>> np.set_printoptions(precision=3)
>>> thetas = [0.1, 0.5, 0.7, 0.3]
>>> env = Environment_Cascade(thetas, np.arange(3))
>>> env_dec = Environment_Cascade(thetas, np.arange(2,-1,-1))
>>> arm = np.array([0, 1, 2])
>>> env.observation_probabilities(arm)
array([1. , 0.9 , 0.45])
>>> arm = np.array([2, 1, 0])
>>> env_dec.observation_probabilities(arm)
array([0.45, 0.9 , 1. ])
"""
res = np.ones(self.nb_position)
np.cumprod((1 - self.thetas[propositions])[self.view_index_to_position_index[:-1]], out=res[1:])
return res[self.position_index_to_view_index]
def get_params(self):
return {"label": self.label, "thetas": self.thetas, "order_view": self.view_index_to_position_index}
| 33.929293 | 108 | 0.587973 | 2,379 | 16,795 | 3.979823 | 0.068937 | 0.008661 | 0.008555 | 0.034115 | 0.841994 | 0.811787 | 0.770701 | 0.719159 | 0.612695 | 0.553654 | 0 | 0.057863 | 0.260137 | 16,795 | 494 | 109 | 33.997976 | 0.704088 | 0.473177 | 0 | 0.469388 | 0 | 0 | 0.021369 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.265306 | false | 0.013605 | 0.034014 | 0.163265 | 0.585034 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
12c8b957d67c1f799d4d06c0529857c38549c921 | 19,087 | py | Python | packetbeat/tests/system/test_0062_cassandra.py | IzekChen/beats | 0e52267c104181eb4c4422a4d939f68c9e994ced | [
"ECL-2.0",
"Apache-2.0"
] | 8 | 2019-01-14T14:49:09.000Z | 2020-07-24T18:32:06.000Z | packetbeat/tests/system/test_0062_cassandra.py | IzekChen/beats | 0e52267c104181eb4c4422a4d939f68c9e994ced | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2019-11-26T22:32:53.000Z | 2019-11-28T03:11:30.000Z | packetbeat/tests/system/test_0062_cassandra.py | IzekChen/beats | 0e52267c104181eb4c4422a4d939f68c9e994ced | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-11-04T06:56:58.000Z | 2020-11-04T06:56:58.000Z | from packetbeat import BaseTest
"""
Tests for the Cassandra protocol analyzer
"""
class Test(BaseTest):
def test_create_keyspace(self):
"""
Should correctly create a keyspace in Cassandra
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_create_keyspace.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o[
"cassandra.request.query"] == "CREATE KEYSPACE mykeyspace WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };"
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "QUERY"
assert o["cassandra.request.headers.length"] == 124
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 20
assert o["cassandra.response.result.type"] == "schemaChanged"
assert o["cassandra.response.result.schema_change.change"] == "CREATED"
assert o["cassandra.response.result.schema_change.keyspace"] == "mykeyspace"
assert o["cassandra.response.result.schema_change.target"] == "KEYSPACE"
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 35
assert o["cassandra.response.headers.op"] == "RESULT"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 20
def test_create_table(self):
"""
Should correctly create a table in Cassandra
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_create_table.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o[
"cassandra.request.query"] == "CREATE TABLE users (\n user_id int PRIMARY KEY,\n fname text,\n lname text\n);"
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "QUERY"
assert o["cassandra.request.headers.length"] == 98
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 49
assert o["cassandra.response.result.type"] == "schemaChanged"
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 39
assert o["cassandra.response.headers.op"] == "RESULT"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 49
def test_insert_data(self):
"""
        Should correctly insert a record into a table in Cassandra
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_insert.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
print(o)
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o[
"cassandra.request.query"] == "INSERT INTO users (user_id, fname, lname)\n VALUES (1745, 'john', 'smith');"
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "QUERY"
assert o["cassandra.request.headers.length"] == 97
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 252
assert o["cassandra.response.result.type"] == "void"
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 4
assert o["cassandra.response.headers.op"] == "RESULT"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 252
def test_select_data(self):
"""
        Should correctly select a record from a table in Cassandra
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_select.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["cassandra.request.query"] == "SELECT * FROM users;"
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "QUERY"
assert o["cassandra.request.headers.length"] == 41
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 253
assert o["cassandra.response.result.type"] == "rows"
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 89
assert o["cassandra.response.headers.op"] == "RESULT"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 253
def test_create_index(self):
"""
Should correctly create index of table in Cassandra
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_create_index.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["cassandra.request.query"] == "CREATE INDEX ON users (lname);"
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "QUERY"
assert o["cassandra.request.headers.length"] == 51
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 92
assert o["cassandra.response.result.type"] == "schemaChanged"
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 39
assert o["cassandra.response.headers.op"] == "RESULT"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 92
def test_trace_error(self):
"""
        Should correctly catch an error message when the trace flag is enabled
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_trace_err.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["bytes_in"] == 55
assert o["bytes_out"] == 62
assert o["cassandra.request.query"] == "DROP KEYSPACE mykeyspace;"
print(o)
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "QUERY"
assert o["cassandra.request.headers.length"] == 46
assert o["cassandra.request.headers.flags"] == "Tracing"
assert o["cassandra.request.headers.stream"] == 275
assert o["cassandra.response.error.code"] == 8960
assert o["cassandra.response.error.msg"] == "Cannot drop non existing keyspace 'mykeyspace'."
assert o["cassandra.response.error.type"] == "errConfig"
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 53
assert o["cassandra.response.headers.op"] == "ERROR"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 275
def test_select_use_index(self):
"""
        Should correctly select a record from a table (using an index) in Cassandra
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_select_via_index.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
print(o)
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["cassandra.request.query"] == "SELECT * FROM users WHERE lname = 'smith';"
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "QUERY"
assert o["cassandra.request.headers.length"] == 63
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 262
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 89
assert o["cassandra.response.headers.op"] == "RESULT"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 262
assert o["cassandra.response.result.type"] == "rows"
def test_ops_mixed(self):
"""
        Should correctly handle mixed operations in Cassandra
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_mixed_frame.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
print(o)
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["bytes_in"] == 9
assert o["bytes_out"] == 61
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "OPTIONS"
assert o["cassandra.request.headers.length"] == 0
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 0
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 52
assert o["cassandra.response.headers.op"] == "SUPPORTED"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 0
o = objs[1]
print(o)
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["bytes_in"] == 31
assert o["bytes_out"] == 9
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "STARTUP"
assert o["cassandra.request.headers.length"] == 22
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 1
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 0
assert o["cassandra.response.headers.op"] == "READY"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 1
o = objs[2]
print(o)
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["bytes_in"] == 58
assert o["bytes_out"] == 9
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "REGISTER"
assert o["cassandra.request.headers.length"] == 49
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 2
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 0
assert o["cassandra.response.headers.op"] == "READY"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 2
def test_ops_ignored(self):
"""
        Should correctly ignore OPTIONS and REGISTER operations
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
cassandra_ignored_ops=["OPTIONS", "REGISTER"]
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_mixed_frame.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["bytes_in"] == 31
assert o["bytes_out"] == 9
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "STARTUP"
assert o["cassandra.request.headers.length"] == 22
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 1
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 0
assert o["cassandra.response.headers.op"] == "READY"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 1
o = objs[1]
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["bytes_in"] == 101
assert o["bytes_out"] == 116
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "QUERY"
assert o["cassandra.request.headers.length"] == 92
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 3
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 107
assert o["cassandra.response.headers.op"] == "RESULT"
assert o["cassandra.response.headers.flags"] == "Default"
assert o["cassandra.response.headers.stream"] == 3
def test_compressed_frame(self):
"""
        Should correctly handle compressed frames in Cassandra
"""
self.render_config_template(
cassandra_ports=[9042],
cassandra_send_request=True,
cassandra_send_response=True,
cassandra_send_request_header=True,
cassandra_send_response_header=True,
cassandra_compressor="snappy",
)
self.run_packetbeat(pcap="cassandra/v4/cassandra_compressed.pcap", debug_selectors=["*"])
objs = self.read_output()
o = objs[0]
print(o)
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["bytes_in"] == 52
assert o["bytes_out"] == 10
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "STARTUP"
assert o["cassandra.request.headers.length"] == 43
assert o["cassandra.request.headers.flags"] == "Default"
assert o["cassandra.request.headers.stream"] == 0
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 1
assert o["cassandra.response.headers.op"] == "READY"
assert o["cassandra.response.headers.flags"] == "Compress"
assert o["cassandra.response.headers.stream"] == 0
o = objs[1]
print(o)
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["bytes_in"] == 53
assert o["bytes_out"] == 10
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "REGISTER"
assert o["cassandra.request.headers.length"] == 44
assert o["cassandra.request.headers.flags"] == "Compress"
assert o["cassandra.request.headers.stream"] == 64
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 1
assert o["cassandra.response.headers.op"] == "READY"
assert o["cassandra.response.headers.flags"] == "Compress"
assert o["cassandra.response.headers.stream"] == 64
o = objs[2]
print(o)
assert o["type"] == "cassandra"
assert o["server.port"] == 9042
assert o["bytes_in"] == 62
assert o["bytes_out"] == 165
assert o["cassandra.request.query"] == "SELECT * FROM system.local WHERE key='local'"
assert o["cassandra.request.headers.version"] == "4"
assert o["cassandra.request.headers.op"] == "QUERY"
assert o["cassandra.request.headers.length"] == 53
assert o["cassandra.request.headers.flags"] == "Compress"
assert o["cassandra.request.headers.stream"] == 0
assert o["cassandra.response.headers.version"] == "4"
assert o["cassandra.response.headers.length"] == 156
assert o["cassandra.response.headers.op"] == "RESULT"
assert o["cassandra.response.headers.flags"] == "Compress"
assert o["cassandra.response.headers.stream"] == 64
assert o["cassandra.response.result.type"] == "rows"
assert o["cassandra.response.result.rows.num_rows"] == 290917
assert o["cassandra.response.result.rows.meta.col_count"] == 9
assert o["cassandra.response.result.rows.meta.flags"] == "GlobalTableSpec"
assert o["cassandra.response.result.rows.meta.keyspace"] == "system"
assert o["cassandra.response.result.rows.meta.table"] == "peers"
| 42.321508 | 147 | 0.61796 | 2,150 | 19,087 | 5.38 | 0.08186 | 0.135558 | 0.243451 | 0.192963 | 0.898331 | 0.885969 | 0.861503 | 0.822253 | 0.822253 | 0.822253 | 0 | 0.021827 | 0.239116 | 19,087 | 450 | 148 | 42.415556 | 0.774633 | 0.030125 | 0 | 0.704225 | 0 | 0.002817 | 0.411758 | 0.326965 | 0 | 0 | 0 | 0 | 0.630986 | 1 | 0.028169 | false | 0 | 0.002817 | 0 | 0.033803 | 0.025352 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
4216480de819729f1179c290a3defeed472d2059 | 18,682 | py | Python | pyidf/exterior_equipment.py | marcelosalles/pyidf | c2f744211572b5e14e29522aac1421ba88addb0e | [
"Apache-2.0"
] | 19 | 2015-12-08T23:33:51.000Z | 2022-01-31T04:41:10.000Z | pyidf/exterior_equipment.py | marcelosalles/pyidf | c2f744211572b5e14e29522aac1421ba88addb0e | [
"Apache-2.0"
] | 2 | 2019-10-04T10:57:00.000Z | 2021-10-01T06:46:17.000Z | pyidf/exterior_equipment.py | marcelosalles/pyidf | c2f744211572b5e14e29522aac1421ba88addb0e | [
"Apache-2.0"
] | 7 | 2015-11-04T02:25:01.000Z | 2021-12-08T03:14:28.000Z | """ Data objects in group "Exterior Equipment"
"""
from collections import OrderedDict
import logging
from pyidf.helper import DataObject
logger = logging.getLogger("pyidf")
logger.addHandler(logging.NullHandler())
class ExteriorLights(DataObject):
""" Corresponds to IDD object `Exterior:Lights`
only used for Meter type reporting, does not affect building loads
"""
_schema = {'extensible-fields': OrderedDict(),
'fields': OrderedDict([(u'name',
{'name': u'Name',
'pyname': u'name',
'required-field': True,
'autosizable': False,
'autocalculatable': False,
'type': u'alpha'}),
(u'schedule name',
{'name': u'Schedule Name',
'pyname': u'schedule_name',
'required-field': True,
'autosizable': False,
'autocalculatable': False,
'type': u'object-list'}),
(u'design level',
{'name': u'Design Level',
'pyname': u'design_level',
'required-field': True,
'autosizable': False,
'minimum': 0.0,
'autocalculatable': False,
'type': u'real',
'unit': u'W'}),
(u'control option',
{'name': u'Control Option',
'pyname': u'control_option',
'required-field': False,
'autosizable': False,
'accepted-values': [u'ScheduleNameOnly',
u'AstronomicalClock'],
'autocalculatable': False,
'type': 'alpha'}),
(u'end-use subcategory',
{'name': u'End-Use Subcategory',
'pyname': u'enduse_subcategory',
'default': u'General',
'required-field': False,
'autosizable': False,
'autocalculatable': False,
'type': u'alpha'})]),
'format': None,
'group': u'Exterior Equipment',
'min-fields': 0,
'name': u'Exterior:Lights',
'pyname': u'ExteriorLights',
'required-object': False,
'unique-object': False}
@property
def name(self):
"""field `Name`
Args:
value (str): value for IDD Field `Name`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `name` or None if not set
"""
return self["Name"]
@name.setter
def name(self, value=None):
"""Corresponds to IDD field `Name`"""
self["Name"] = value
@property
def schedule_name(self):
"""field `Schedule Name`
| units in schedule should be fraction applied to capacity of the exterior lights equipment, generally (0.0 - 1.0)
Args:
value (str): value for IDD Field `Schedule Name`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `schedule_name` or None if not set
"""
return self["Schedule Name"]
@schedule_name.setter
def schedule_name(self, value=None):
"""Corresponds to IDD field `Schedule Name`"""
self["Schedule Name"] = value
@property
def design_level(self):
"""field `Design Level`
| Units: W
| IP-Units: W
Args:
value (float): value for IDD Field `Design Level`
Raises:
ValueError: if `value` is not a valid value
Returns:
float: the value of `design_level` or None if not set
"""
return self["Design Level"]
@design_level.setter
def design_level(self, value=None):
"""Corresponds to IDD field `Design Level`"""
self["Design Level"] = value
@property
def control_option(self):
"""field `Control Option`
| Astronomical Clock option overrides schedule to turn lights off when sun is up
Args:
value (str): value for IDD Field `Control Option`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `control_option` or None if not set
"""
return self["Control Option"]
@control_option.setter
def control_option(self, value=None):
"""Corresponds to IDD field `Control Option`"""
self["Control Option"] = value
@property
def enduse_subcategory(self):
"""field `End-Use Subcategory`
| Default value: General
Args:
value (str): value for IDD Field `End-Use Subcategory`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `enduse_subcategory` or None if not set
"""
return self["End-Use Subcategory"]
@enduse_subcategory.setter
def enduse_subcategory(self, value="General"):
""" Corresponds to IDD field `End-Use Subcategory`
"""
self["End-Use Subcategory"] = value
class ExteriorFuelEquipment(DataObject):
""" Corresponds to IDD object `Exterior:FuelEquipment`
only used for Meter type reporting, does not affect building loads
"""
_schema = {'extensible-fields': OrderedDict(),
'fields': OrderedDict([(u'name',
{'name': u'Name',
'pyname': u'name',
'required-field': True,
'autosizable': False,
'autocalculatable': False,
'type': u'alpha'}),
(u'fuel use type',
{'name': u'Fuel Use Type',
'pyname': u'fuel_use_type',
'required-field': True,
'autosizable': False,
'accepted-values': [u'Electricity',
u'NaturalGas',
u'PropaneGas',
u'FuelOil#1',
u'FuelOil#2',
u'Diesel',
u'Gasoline',
u'Coal',
u'OtherFuel1',
u'OtherFuel2',
u'Steam',
u'DistrictHeating',
u'DistrictCooling'],
'autocalculatable': False,
'type': 'alpha'}),
(u'schedule name',
{'name': u'Schedule Name',
'pyname': u'schedule_name',
'required-field': True,
'autosizable': False,
'autocalculatable': False,
'type': u'object-list'}),
(u'design level',
{'name': u'Design Level',
'pyname': u'design_level',
'required-field': True,
'autosizable': False,
'minimum': 0.0,
'autocalculatable': False,
'type': u'real',
'unit': u'W'}),
(u'end-use subcategory',
{'name': u'End-Use Subcategory',
'pyname': u'enduse_subcategory',
'default': u'General',
'required-field': False,
'autosizable': False,
'autocalculatable': False,
'type': u'alpha'})]),
'format': None,
'group': u'Exterior Equipment',
'min-fields': 0,
'name': u'Exterior:FuelEquipment',
'pyname': u'ExteriorFuelEquipment',
'required-object': False,
'unique-object': False}
@property
def name(self):
"""field `Name`
Args:
value (str): value for IDD Field `Name`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `name` or None if not set
"""
return self["Name"]
@name.setter
def name(self, value=None):
"""Corresponds to IDD field `Name`"""
self["Name"] = value
@property
def fuel_use_type(self):
"""field `Fuel Use Type`
Args:
value (str): value for IDD Field `Fuel Use Type`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `fuel_use_type` or None if not set
"""
return self["Fuel Use Type"]
@fuel_use_type.setter
def fuel_use_type(self, value=None):
"""Corresponds to IDD field `Fuel Use Type`"""
self["Fuel Use Type"] = value
@property
def schedule_name(self):
"""field `Schedule Name`
| units in schedule should be fraction applied to capacity of the exterior fuel equipment, generally (0.0 - 1.0)
Args:
value (str): value for IDD Field `Schedule Name`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `schedule_name` or None if not set
"""
return self["Schedule Name"]
@schedule_name.setter
def schedule_name(self, value=None):
"""Corresponds to IDD field `Schedule Name`"""
self["Schedule Name"] = value
@property
def design_level(self):
"""field `Design Level`
| Units: W
| IP-Units: W
Args:
value (float): value for IDD Field `Design Level`
Raises:
ValueError: if `value` is not a valid value
Returns:
float: the value of `design_level` or None if not set
"""
return self["Design Level"]
@design_level.setter
def design_level(self, value=None):
"""Corresponds to IDD field `Design Level`"""
self["Design Level"] = value
@property
def enduse_subcategory(self):
"""field `End-Use Subcategory`
| Default value: General
Args:
value (str): value for IDD Field `End-Use Subcategory`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `enduse_subcategory` or None if not set
"""
return self["End-Use Subcategory"]
@enduse_subcategory.setter
def enduse_subcategory(self, value="General"):
""" Corresponds to IDD field `End-Use Subcategory`
"""
self["End-Use Subcategory"] = value
class ExteriorWaterEquipment(DataObject):
""" Corresponds to IDD object `Exterior:WaterEquipment`
only used for Meter type reporting, does not affect building loads
"""
_schema = {'extensible-fields': OrderedDict(),
'fields': OrderedDict([(u'name',
{'name': u'Name',
'pyname': u'name',
'required-field': True,
'autosizable': False,
'autocalculatable': False,
'type': u'alpha'}),
(u'fuel use type',
{'name': u'Fuel Use Type',
'pyname': u'fuel_use_type',
'default': u'Water',
'required-field': False,
'autosizable': False,
'accepted-values': [u'Water'],
'autocalculatable': False,
'type': 'alpha'}),
(u'schedule name',
{'name': u'Schedule Name',
'pyname': u'schedule_name',
'required-field': True,
'autosizable': False,
'autocalculatable': False,
'type': u'object-list'}),
(u'design level',
{'name': u'Design Level',
'pyname': u'design_level',
'required-field': True,
'autosizable': False,
'minimum': 0.0,
'autocalculatable': False,
'type': u'real',
'unit': u'm3/s'}),
(u'end-use subcategory',
{'name': u'End-Use Subcategory',
'pyname': u'enduse_subcategory',
'default': u'General',
'required-field': False,
'autosizable': False,
'autocalculatable': False,
'type': u'alpha'})]),
'format': None,
'group': u'Exterior Equipment',
'min-fields': 0,
'name': u'Exterior:WaterEquipment',
'pyname': u'ExteriorWaterEquipment',
'required-object': False,
'unique-object': False}
@property
def name(self):
"""field `Name`
Args:
value (str): value for IDD Field `Name`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `name` or None if not set
"""
return self["Name"]
@name.setter
def name(self, value=None):
"""Corresponds to IDD field `Name`"""
self["Name"] = value
@property
def fuel_use_type(self):
"""field `Fuel Use Type`
| Default value: Water
Args:
value (str): value for IDD Field `Fuel Use Type`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `fuel_use_type` or None if not set
"""
return self["Fuel Use Type"]
@fuel_use_type.setter
def fuel_use_type(self, value="Water"):
"""Corresponds to IDD field `Fuel Use Type`"""
self["Fuel Use Type"] = value
@property
def schedule_name(self):
"""field `Schedule Name`
| units in Schedule should be fraction applied to capacity of the exterior water equipment, generally (0.0 - 1.0)
Args:
value (str): value for IDD Field `Schedule Name`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `schedule_name` or None if not set
"""
return self["Schedule Name"]
@schedule_name.setter
def schedule_name(self, value=None):
"""Corresponds to IDD field `Schedule Name`"""
self["Schedule Name"] = value
@property
def design_level(self):
"""field `Design Level`
| Units: m3/s
Args:
value (float): value for IDD Field `Design Level`
Raises:
ValueError: if `value` is not a valid value
Returns:
float: the value of `design_level` or None if not set
"""
return self["Design Level"]
@design_level.setter
def design_level(self, value=None):
"""Corresponds to IDD field `Design Level`"""
self["Design Level"] = value
@property
def enduse_subcategory(self):
"""field `End-Use Subcategory`
| Default value: General
Args:
value (str): value for IDD Field `End-Use Subcategory`
Raises:
ValueError: if `value` is not a valid value
Returns:
str: the value of `enduse_subcategory` or None if not set
"""
return self["End-Use Subcategory"]
@enduse_subcategory.setter
def enduse_subcategory(self, value="General"):
""" Corresponds to IDD field `End-Use Subcategory`
"""
self["End-Use Subcategory"] = value
| 34.919626 | 123 | 0.421422 | 1,574 | 18,682 | 4.955527 | 0.083863 | 0.055385 | 0.033846 | 0.030769 | 0.889615 | 0.879615 | 0.864231 | 0.852692 | 0.840128 | 0.840128 | 0 | 0.002808 | 0.48528 | 18,682 | 534 | 124 | 34.985019 | 0.80834 | 0.251526 | 0 | 0.833992 | 0 | 0 | 0.201579 | 0.006881 | 0 | 0 | 0 | 0 | 0 | 1 | 0.118577 | false | 0 | 0.011858 | 0 | 0.213439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
423af41e8a8738661e2119aa4d36599623ec2ec3 | 246 | py | Python | set1/challenge1.py | sparkhom/crypto | 32180394e977b3bbd316ed33a461c1dccb44a741 | [
"0BSD"
] | null | null | null | set1/challenge1.py | sparkhom/crypto | 32180394e977b3bbd316ed33a461c1dccb44a741 | [
"0BSD"
] | null | null | null | set1/challenge1.py | sparkhom/crypto | 32180394e977b3bbd316ed33a461c1dccb44a741 | [
"0BSD"
] | null | null | null | import base64
def challenge1(hexdata):
return base64.b64encode(bytearray.fromhex(hexdata))
if __name__ == '__main__':
print(challenge1("49276d206b696c6c696e6720796f757220627261696e206c696b65206120706f69736f6e6f7573206d757368726f6f6d"))
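    # Expected output (Cryptopals Set 1, Challenge 1):
    # b'SSdtIGtpbGxpbmcgeW91ciBicmFpbiBsaWtlIGEgcG9pc29ub3VzIG11c2hyb29t'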
| 30.75 | 121 | 0.833333 | 17 | 246 | 11.588235 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.386667 | 0.085366 | 246 | 7 | 122 | 35.142857 | 0.488889 | 0 | 0 | 0 | 0 | 0 | 0.422764 | 0.390244 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0.2 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
42614d8e3f3ce3e806204e73e3505e50fc53a68a | 3,941 | py | Python | chexpert/metrics.py | PJansson/Chexpert | 94b20deedbca9261eaf82b9a4309bb74d42ea8b0 | [
"Apache-2.0"
] | null | null | null | chexpert/metrics.py | PJansson/Chexpert | 94b20deedbca9261eaf82b9a4309bb74d42ea8b0 | [
"Apache-2.0"
] | null | null | null | chexpert/metrics.py | PJansson/Chexpert | 94b20deedbca9261eaf82b9a4309bb74d42ea8b0 | [
"Apache-2.0"
] | null | null | null | import torch
from sklearn.metrics import roc_auc_score
class AUROC:
def __init__(self, class_scores=False):
self.class_scores = class_scores
self.y_true = []
self.y_score = []
def update(self, x, y):
self.y_true.append(y.cpu())
self.y_score.append(x.cpu())
def compute(self):
y_true = torch.cat(self.y_true)
y_score = torch.cat(self.y_score)
        # Masks out classes whose labels are all 0 or all 1 (AUROC undefined)
mask = ~((y_true == 0).all(0) | (y_true == 1).all(0))
y_true = y_true[:, mask]
y_score = y_score[:, mask]
scores = roc_auc_score(y_true, y_score, average=None)
        if self.class_scores:
            unmasked_scores = []
            i = 0
            for m in mask:
                score = scores[i] if m else 0.5
                i = i + 1 if m else i
                unmasked_scores.append(score)
            return scores.mean(), unmasked_scores
        return scores.mean()
def compute_mean_only(self):
y_true = torch.cat(self.y_true)
y_score = torch.cat(self.y_score)
mask = ~((y_true == 0).all(0) | (y_true == 1).all(0))
y_true = y_true[:, mask]
y_score = y_score[:, mask]
score = roc_auc_score(y_true, y_score)
return score
def reset(self):
self.y_true = []
self.y_score = []
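# Minimal usage sketch (illustrative; `scores` and `labels` are assumed to be
# per-batch tensors of shape [batch_size, num_classes]):
#
#     metric = AUROC()
#     for scores, labels in batches:
#         metric.update(scores, labels)
#     mean_auroc = metric.compute()
#     metric.reset()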
class PerStudyAUROC:
def __init__(self, dataset, class_scores=False, aggregation="mean"):
self.df = dataset.df
self.df["Patient"] = self.df["Path"].apply(lambda x: x.rsplit("/", 3)[1])
self.df["Study"] = self.df["Path"].apply(lambda x: x.rsplit("/", 3)[2])
self.classes = dataset.classes
self.classes_pred = [c + " Pred" for c in self.classes]
self.aggregation = {
**{"Path": list, "Frontal/Lateral": list},
**{c: "first" for c in self.classes},
**{c: aggregation for c in self.classes_pred},
}
self.class_scores = class_scores
self.y_true = []
self.y_score = []
def update(self, x, y):
self.y_true.append(y.cpu())
self.y_score.append(x.cpu())
def compute_auroc(self, y_true, y_score):
        # Masks out classes whose labels are all 0 or all 1 (AUROC undefined)
mask = ~((y_true == 0).all(0) | (y_true == 1).all(0))
y_true = y_true[:, mask]
y_score = y_score[:, mask]
scores = roc_auc_score(y_true, y_score, average=None)
return scores, mask
def compute(self):
y_true = torch.cat(self.y_true)
y_score = torch.cat(self.y_score)
per_image_auroc, _ = self.compute_auroc(y_true, y_score)
self.df[self.classes] = y_true
self.df[self.classes_pred] = y_score
grouped = self.df.groupby(["Patient", "Study"]).agg(self.aggregation)
y_true = grouped[self.classes].values
y_score = grouped[self.classes_pred].values
scores, mask = self.compute_auroc(y_true, y_score)
        if self.class_scores:
            unmasked_scores = []
            i = 0
            for m in mask:
                score = scores[i] if m else 0.5
                i = i + 1 if m else i
                unmasked_scores.append(score)
            return per_image_auroc.mean(), scores.mean(), unmasked_scores
        return per_image_auroc.mean(), scores.mean()
def compute_mean_only(self):
y_true = torch.cat(self.y_true)
y_score = torch.cat(self.y_score)
self.df[self.classes] = y_true
self.df[self.classes_pred] = y_score
grouped = self.df.groupby(["Patient", "Study"]).agg(self.aggregation)
y_true = grouped[self.classes].values
y_score = grouped[self.classes_pred].values
        scores, _ = self.compute_auroc(y_true, y_score)
return scores.mean()
def reset(self):
self.y_true = []
self.y_score = []
| 30.315385 | 81 | 0.571682 | 553 | 3,941 | 3.855335 | 0.128391 | 0.086773 | 0.063321 | 0.056754 | 0.834428 | 0.782833 | 0.782833 | 0.762664 | 0.736398 | 0.679174 | 0 | 0.00942 | 0.29967 | 3,941 | 129 | 82 | 30.550388 | 0.763043 | 0.020553 | 0 | 0.747368 | 0 | 0 | 0.020482 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115789 | false | 0 | 0.021053 | 0 | 0.231579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
42ac5539eaded77a39d2ea4dd4083814eef6958c | 7,842 | py | Python | tests/test_clones.py | delfick/photons-messages-generator | 54c72eb72f0e7686afe01c4834fef3f297cedbaa | [
"MIT"
] | null | null | null | tests/test_clones.py | delfick/photons-messages-generator | 54c72eb72f0e7686afe01c4834fef3f297cedbaa | [
"MIT"
] | null | null | null | tests/test_clones.py | delfick/photons-messages-generator | 54c72eb72f0e7686afe01c4834fef3f297cedbaa | [
"MIT"
] | null | null | null | # coding: spec
from photons_messages_generator import test_helpers as thp
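# The "# coding: spec" declaration above enables noseOfYeti's spec codec,
# which rewrites the describe/it blocks below into ordinary test classes at
# import time.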
describe "clones":
it "uses the clone instead of original struct":
src = """
fields:
SomeParams:
size_bytes: 5
fields:
- name: "One"
type: "uint8"
size_bytes: 1
- name: "Two"
type: "uint8"
size_bytes: 1
- name: "Three"
type: "uint8"
size_bytes: 1
- name: "Four"
type: "uint8"
size_bytes: 1
- name: "Five"
type: "uint8"
size_bytes: 1
packets:
one:
OnePacketExample:
pkt_type: 1
size_bytes: 5
fields:
- name: "One"
type: "<SomeParams>"
size_bytes: 5
OneOtherExample:
pkt_type: 2
size_bytes: 5
fields:
- name: "One"
type: "<SomeParams>"
size_bytes: 5
OneAnotherExample:
pkt_type: 3
size_bytes: 15
fields:
- name: "Params"
type: "[3]<SomeParams>"
size_bytes: 15
"""
adjustments = """
num_reserved_fields_in_frame: 3
clones:
some_params_with_optionals:
cloning: SomeParams
multi_options:
name: ParamsOptionals
fields:
One:
more_extras: ["optional()"]
Two:
remove_default: true
more_extras: ["optional()"]
Four:
remove_default: true
changes:
SomeParams:
fields:
One:
default: "0"
extras: "transform()"
Two:
default: "20"
extras: "transform()"
Three:
default: "30"
Four:
default: "30"
extras: "dynamic()"
Five:
extras: "other()"
OneOtherExample:
fields:
One:
override_struct: some_params_with_optionals
OneAnotherExample:
fields:
Params:
override_struct: some_params_with_optionals
"""
with thp.generate(src, adjustments) as output:
expected_fields = """
# fmt: off
some_params_with_optionals = [
("one", T.Uint8.default(0).transform().optional())
, ("two", T.Uint8.transform().optional())
, ("three", T.Uint8.default(30))
, ("four", T.Uint8.dynamic())
, ("five", T.Uint8.other())
]
class ParamsOptionals(dictobj.PacketSpec):
fields = some_params_with_optionals
some_params = [
("one", T.Uint8.default(0).transform())
, ("two", T.Uint8.default(20).transform())
, ("three", T.Uint8.default(30))
, ("four", T.Uint8.default(30).dynamic())
, ("five", T.Uint8.other())
]
# fmt: on
"""
expected_messages = """
# fmt: off
########################
### ONE
########################
class OneMessages(Messages):
PacketExample = msg(1
, *fields.some_params
)
OtherExample = msg(2
, *fields.some_params_with_optionals
)
AnotherExample = msg(3
, ("params", T.Bytes(40).multiple(3, kls=fields.ParamsOptionals))
)
# fmt: on
__all__ = ["OneMessages"]
"""
output.assertFileContents("fields.py", expected_fields)
output.assertFileContents("messages.py", expected_messages)
it "can use the clone in other fields":
src = """
fields:
SomeParams:
size_bytes: 5
fields:
- name: "One"
type: "uint8"
size_bytes: 1
- name: "Two"
type: "uint8"
size_bytes: 1
- name: "Three"
type: "uint8"
size_bytes: 1
- name: "Four"
type: "uint8"
size_bytes: 1
- name: "Five"
type: "uint8"
size_bytes: 1
AnotherParams:
size_bytes: 5
fields:
- name: "Params"
type: "<SomeParams>"
size_bytes: 5
MoreParams:
size_bytes: 15
fields:
- name: "Params"
type: "[3]<SomeParams>"
size_bytes: 15
"""
adjustments = """
num_reserved_fields_in_frame: 3
clones:
some_params_with_optionals:
cloning: SomeParams
multi_options:
name: ParamsOptionals
fields:
One:
more_extras: ["optional()"]
Two:
remove_default: true
more_extras: ["optional()"]
Four:
remove_default: true
changes:
SomeParams:
fields:
One:
default: "0"
extras: "transform()"
Two:
default: "20"
extras: "transform()"
Three:
default: "30"
Four:
default: "30"
extras: "dynamic()"
Five:
extras: "other()"
AnotherParams:
fields:
Params:
override_struct: some_params_with_optionals
MoreParams:
fields:
Params:
override_struct: some_params_with_optionals
"""
with thp.generate(src, adjustments) as output:
expected_fields = """
# fmt: off
some_params_with_optionals = [
("one", T.Uint8.default(0).transform().optional())
, ("two", T.Uint8.transform().optional())
, ("three", T.Uint8.default(30))
, ("four", T.Uint8.dynamic())
, ("five", T.Uint8.other())
]
class ParamsOptionals(dictobj.PacketSpec):
fields = some_params_with_optionals
some_params = [
("one", T.Uint8.default(0).transform())
, ("two", T.Uint8.default(20).transform())
, ("three", T.Uint8.default(30))
, ("four", T.Uint8.default(30).dynamic())
, ("five", T.Uint8.other())
]
another_params = [
*some_params_with_optionals
]
more_params = [
("params", T.Bytes(40).multiple(3, kls=ParamsOptionals))
]
# fmt: on
"""
output.assertFileContents("fields.py", expected_fields)
| 28.937269 | 85 | 0.389697 | 570 | 7,842 | 5.184211 | 0.170175 | 0.067005 | 0.056853 | 0.093401 | 0.799662 | 0.774958 | 0.731303 | 0.713706 | 0.697124 | 0.697124 | 0 | 0.027662 | 0.511349 | 7,842 | 270 | 86 | 29.044444 | 0.743476 | 0.00153 | 0 | 0.852174 | 0 | 0 | 0.922458 | 0.121359 | 0 | 0 | 0 | 0 | 0.013043 | 0 | null | null | 0 | 0.004348 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c417911fe5861c874661dab536796485fe6a2bb5 | 19,603 | py | Python | src/tests/base/test_cancelevent.py | tcatm/pretix | a76f74b161e140f4445568b97cb26fc57247e0d2 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-04-25T00:11:00.000Z | 2020-04-25T00:11:00.000Z | src/tests/base/test_cancelevent.py | tcatm/pretix | a76f74b161e140f4445568b97cb26fc57247e0d2 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | src/tests/base/test_cancelevent.py | tcatm/pretix | a76f74b161e140f4445568b97cb26fc57247e0d2 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | from datetime import timedelta
from decimal import Decimal
from django.core import mail as djmail
from django.test import TestCase
from django.utils.timezone import now
from django_scopes import scope
from pretix.base.models import (
Event, Item, Order, OrderPosition, Organizer, Voucher, WaitingListEntry,
)
from pretix.base.models.orders import OrderFee, OrderPayment, OrderRefund
from pretix.base.services.cancelevent import cancel_event
from pretix.base.services.invoices import generate_invoice
from pretix.testutils.scope import classscope
class EventCancelTests(TestCase):
def setUp(self):
super().setUp()
self.o = Organizer.objects.create(name='Dummy', slug='dummy')
with scope(organizer=self.o):
self.event = Event.objects.create(organizer=self.o, name='Dummy', slug='dummy', date_from=now(),
plugins='tests.testdummy')
self.order = Order.objects.create(
code='FOO', event=self.event, email='dummy@dummy.test',
status=Order.STATUS_PENDING, locale='en',
datetime=now(), expires=now() + timedelta(days=10),
total=Decimal('46.00'),
)
self.ticket = Item.objects.create(event=self.event, name='Early-bird ticket',
default_price=Decimal('23.00'), admission=True)
self.op1 = OrderPosition.objects.create(
order=self.order, item=self.ticket, variation=None,
price=Decimal("23.00"), attendee_name_parts={'full_name': "Peter"}, positionid=1
)
self.op2 = OrderPosition.objects.create(
order=self.order, item=self.ticket, variation=None,
price=Decimal("23.00"), attendee_name_parts={'full_name': "Dieter"}, positionid=2
)
generate_invoice(self.order)
djmail.outbox = []
@classscope(attr='o')
def test_cancel_send_mail(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
self.order.payments.create(
amount=Decimal('46.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=None,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=True, send_subject="Event canceled", send_message="Event canceled :-( {refund_amount}",
user=None
)
assert len(djmail.outbox) == 1
self.order.refresh_from_db()
assert self.order.status == Order.STATUS_CANCELED
assert '46.00' in djmail.outbox[0].body
@classscope(attr='o')
def test_cancel_send_mail_attendees(self):
self.op1.attendee_email = 'foo@example.com'
self.op1.save()
cancel_event(
self.event.pk, subevent=None,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=True, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
assert len(djmail.outbox) == 2
self.order.refresh_from_db()
assert self.order.status == Order.STATUS_CANCELED
@classscope(attr='o')
def test_cancel_auto_refund(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
p1 = self.order.payments.create(
amount=Decimal('46.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=None,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=True, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
r = self.order.refunds.get()
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('46.00')
assert r.source == OrderRefund.REFUND_SOURCE_ADMIN
assert r.payment == p1
assert self.order.all_logentries().filter(action_type='pretix.event.order.refund.created').exists()
assert not self.order.all_logentries().filter(action_type='pretix.event.order.refund.requested').exists()
assert gc.value == Decimal('46.00')
@classscope(attr='o')
def test_cancel_do_not_refund(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
self.order.payments.create(
amount=Decimal('46.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=None,
auto_refund=False, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=True, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
self.order.refresh_from_db()
assert self.order.status == Order.STATUS_CANCELED
assert not self.order.refunds.exists()
@classscope(attr='o')
def test_cancel_refund_paid_with_fees(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
p1 = self.order.payments.create(
amount=Decimal('46.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=None,
auto_refund=True, keep_fee_fixed="10.00", keep_fee_percentage="10.00",
send=False, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
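        # Kept fee: 10.00 fixed + 10% of the 46.00 position total (4.60) = 14.60,
        # so 46.00 - 14.60 = 31.40 is refunded to the gift card.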
r = self.order.refunds.get()
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('31.40')
assert r.source == OrderRefund.REFUND_SOURCE_ADMIN
assert r.payment == p1
assert self.order.all_logentries().filter(action_type='pretix.event.order.refund.created').exists()
assert not self.order.all_logentries().filter(action_type='pretix.event.order.refund.requested').exists()
assert gc.value == Decimal('31.40')
@classscope(attr='o')
def test_cancel_refund_partially_paid_with_fees(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
self.order.payments.create(
amount=Decimal('12.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.order.status = Order.STATUS_PENDING
self.order.save()
cancel_event(
self.event.pk, subevent=None,
auto_refund=True, keep_fee_fixed="10.00", keep_fee_percentage="10.00",
send=False, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
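        # Only 12.00 of the 46.00 total was paid: no refund is created; instead the
        # order is reduced to the 12.00 already paid and marked as paid.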
assert not self.order.refunds.exists()
self.order.refresh_from_db()
assert self.order.total == Decimal('12.00')
assert self.order.status == Order.STATUS_PAID
assert self.order.positions.count() == 0
@classscope(attr='o')
def test_cancel_keep_fees(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
p1 = self.order.payments.create(
amount=Decimal('46.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.op1.price -= Decimal('5.00')
self.op1.save()
self.order.fees.create(
fee_type=OrderFee.FEE_TYPE_PAYMENT,
value=Decimal('5.00'),
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=None,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="10.00", keep_fees=[OrderFee.FEE_TYPE_PAYMENT],
send=False, send_subject="Event canceled", send_message="Event canceled :-(", user=None
)
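        # Kept: the 5.00 payment fee (via keep_fees) plus 10% of the remaining
        # 41.00 position value (4.10), leaving a refund of 46.00 - 9.10 = 36.90.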
r = self.order.refunds.get()
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('36.90')
assert r.source == OrderRefund.REFUND_SOURCE_ADMIN
assert r.payment == p1
assert self.order.all_logentries().filter(action_type='pretix.event.order.refund.created').exists()
assert not self.order.all_logentries().filter(action_type='pretix.event.order.refund.requested').exists()
assert gc.value == Decimal('36.90')
@classscope(attr='o')
def test_cancel_keep_some_fees(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
self.order.payments.create(
amount=Decimal('46.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.op1.price -= Decimal('5.00')
self.op1.save()
self.order.fees.create(
fee_type=OrderFee.FEE_TYPE_PAYMENT,
value=Decimal('2.50'),
)
self.order.fees.create(
fee_type=OrderFee.FEE_TYPE_SHIPPING,
value=Decimal('2.50'),
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=None,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="10.00", keep_fees=[OrderFee.FEE_TYPE_PAYMENT],
send=False, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
r = self.order.refunds.get()
assert r.amount == Decimal('39.40')
assert self.order.all_fees.get(fee_type=OrderFee.FEE_TYPE_SHIPPING).canceled
assert not self.order.all_fees.get(fee_type=OrderFee.FEE_TYPE_PAYMENT).canceled
assert self.order.all_fees.get(fee_type=OrderFee.FEE_TYPE_CANCELLATION).value == Decimal('4.10')
@classscope(attr='o')
def test_cancel_refund_paid_partial_to_manual(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
p1 = self.order.payments.create(
amount=Decimal('20.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.order.payments.create(
amount=Decimal('26.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='manual',
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=None, manual_refund=True,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=False, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
assert self.order.refunds.count() == 2
r = self.order.refunds.get(provider='giftcard')
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('20.00')
assert r.source == OrderRefund.REFUND_SOURCE_ADMIN
assert r.payment == p1
r = self.order.refunds.get(provider='manual')
assert r.state == OrderRefund.REFUND_STATE_CREATED
assert r.amount == Decimal('26.00')
assert r.source == OrderRefund.REFUND_SOURCE_ADMIN
assert r.payment is None
@classscope(attr='o')
def test_cancel_refund_paid_partial_no_manual(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
p1 = self.order.payments.create(
amount=Decimal('20.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.order.payments.create(
amount=Decimal('26.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='manual',
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=None, manual_refund=False,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=False, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
assert self.order.refunds.count() == 1
r = self.order.refunds.get(provider='giftcard')
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('20.00')
assert r.source == OrderRefund.REFUND_SOURCE_ADMIN
assert r.payment == p1
class SubEventCancelTests(TestCase):
def setUp(self):
super().setUp()
self.o = Organizer.objects.create(name='Dummy', slug='dummy')
with scope(organizer=self.o):
self.event = Event.objects.create(organizer=self.o, name='Dummy', slug='dummy', date_from=now(),
plugins='tests.testdummy', has_subevents=True)
self.se1 = self.event.subevents.create(name='One', date_from=now())
self.se2 = self.event.subevents.create(name='Two', date_from=now())
self.order = Order.objects.create(
code='FOO', event=self.event, email='dummy@dummy.test',
status=Order.STATUS_PENDING, locale='en',
datetime=now(), expires=now() + timedelta(days=10),
total=Decimal('46.00'),
)
self.ticket = Item.objects.create(event=self.event, name='Early-bird ticket',
default_price=Decimal('23.00'), admission=True)
self.op1 = OrderPosition.objects.create(
order=self.order, item=self.ticket, variation=None, subevent=self.se1,
price=Decimal("23.00"), attendee_name_parts={'full_name': "Peter"}, positionid=1
)
self.op2 = OrderPosition.objects.create(
order=self.order, item=self.ticket, variation=None, subevent=self.se2,
price=Decimal("23.00"), attendee_name_parts={'full_name': "Dieter"}, positionid=2
)
generate_invoice(self.order)
djmail.outbox = []
@classscope(attr='o')
def test_cancel_partially_send_mail_attendees(self):
self.op1.attendee_email = 'foo@example.com'
self.op1.save()
self.op2.attendee_email = 'foo@example.org'
self.op2.save()
cancel_event(
self.event.pk, subevent=self.se1.pk,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=True, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
assert len(djmail.outbox) == 2
self.order.refresh_from_db()
assert self.order.status == Order.STATUS_PENDING
assert self.order.positions.count() == 1
@classscope(attr='o')
def test_cancel_simple_order(self):
self.op2.subevent = self.se1
self.op2.save()
cancel_event(
self.event.pk, subevent=self.se1.pk,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=True, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
self.order.refresh_from_db()
assert self.order.status == Order.STATUS_CANCELED
@classscope(attr='o')
def test_cancel_all_subevents(self):
cancel_event(
self.event.pk, subevent=None,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=True, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
self.order.refresh_from_db()
assert self.order.status == Order.STATUS_CANCELED
@classscope(attr='o')
def test_cancel_mixed_order(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
self.order.payments.create(
amount=Decimal('46.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=self.se1.pk,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=True, send_subject="Event canceled", send_message="Event canceled :-( {refund_amount}",
user=None
)
self.order.refresh_from_db()
assert self.order.status == Order.STATUS_PAID
assert '23.00' in djmail.outbox[0].body
@classscope(attr='o')
def test_cancel_partially_keep_fees(self):
gc = self.o.issued_gift_cards.create(currency="EUR")
p1 = self.order.payments.create(
amount=Decimal('46.00'),
state=OrderPayment.PAYMENT_STATE_CONFIRMED,
provider='giftcard',
info='{"gift_card": %d}' % gc.pk
)
self.op1.price -= Decimal('5.00')
self.op1.save()
self.order.fees.create(
fee_type=OrderFee.FEE_TYPE_PAYMENT,
value=Decimal('5.00'),
)
self.order.status = Order.STATUS_PAID
self.order.save()
cancel_event(
self.event.pk, subevent=self.se1.pk,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="10.00",
send=False, send_subject="Event canceled", send_message="Event canceled :-(",
user=None
)
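        # Only the se1 position (18.00) is canceled; a 10% cancellation fee (1.80)
        # is kept, so 18.00 - 1.80 = 16.20 is refunded.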
r = self.order.refunds.get()
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('16.20')
assert r.source == OrderRefund.REFUND_SOURCE_ADMIN
assert r.payment == p1
assert self.order.all_logentries().filter(action_type='pretix.event.order.refund.created').exists()
assert not self.order.all_logentries().filter(action_type='pretix.event.order.refund.requested').exists()
assert gc.value == Decimal('16.20')
assert self.order.positions.filter(subevent=self.se2).count() == 1
assert self.order.positions.filter(subevent=self.se1).count() == 0
f = self.order.fees.get(fee_type=OrderFee.FEE_TYPE_CANCELLATION)
assert f.value == Decimal('1.80')
@classscope(attr='o')
def test_cancel_send_mail_waitinglist(self):
v = Voucher.objects.create(event=self.event, block_quota=True, redeemed=1)
WaitingListEntry.objects.create(
event=self.event, item=self.ticket, variation=None, email='foo@bar.com', voucher=v
)
WaitingListEntry.objects.create(
event=self.event, item=self.ticket, variation=None, email='foo@example.org'
)
cancel_event(
self.event.pk, subevent=None,
auto_refund=True, keep_fee_fixed="0.00", keep_fee_percentage="0.00",
send=False, send_subject="Event canceled", send_message="Event canceled :-(",
send_waitinglist=True, send_waitinglist_message="Event canceled", send_waitinglist_subject=":(",
user=None
)
assert len(djmail.outbox) == 1
assert djmail.outbox[0].to == ['foo@example.org']
| 42.339093 | 120 | 0.61312 | 2,366 | 19,603 | 4.909129 | 0.081572 | 0.071287 | 0.027723 | 0.032716 | 0.912096 | 0.894705 | 0.875506 | 0.854154 | 0.835213 | 0.823935 | 0 | 0.023208 | 0.259246 | 19,603 | 462 | 121 | 42.430736 | 0.776668 | 0 | 0 | 0.715294 | 0 | 0 | 0.09371 | 0.013875 | 0 | 0 | 0 | 0 | 0.162353 | 1 | 0.042353 | false | 0 | 0.025882 | 0 | 0.072941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c483fe5648c0b20003c155c2e07221be19fc9e7f | 62,338 | py | Python | larq/layers.py | v-i-s-h/larq | f80db7eb18759a154a64204ab9396f0cd2a7d9bf | [
"Apache-2.0"
] | 2 | 2020-12-27T15:30:07.000Z | 2021-03-31T01:52:37.000Z | larq/layers.py | v-i-s-h/larq | f80db7eb18759a154a64204ab9396f0cd2a7d9bf | [
"Apache-2.0"
] | null | null | null | larq/layers.py | v-i-s-h/larq | f80db7eb18759a154a64204ab9396f0cd2a7d9bf | [
"Apache-2.0"
] | 1 | 2022-03-19T13:28:24.000Z | 2022-03-19T13:28:24.000Z | """Each Quantized Layer requires a `input_quantizer` and `kernel_quantizer` that
describes the way of quantizing the activation of the previous layer and the weights
respectively.
If both `input_quantizer` and `kernel_quantizer` are `None` the layer
is equivalent to a full precision layer.
"""
import tensorflow as tf
from larq import utils
from larq.layers_base import (
QuantizerBase,
QuantizerDepthwiseBase,
QuantizerSeparableBase,
)
@utils.register_keras_custom_object
class QuantDense(QuantizerBase, tf.keras.layers.Dense):
"""Just your regular densely-connected quantized NN layer.
`QuantDense` implements the operation:
`output = activation(dot(input_quantizer(input), kernel_quantizer(kernel)) + bias)`,
where `activation` is the element-wise activation function passed as the
`activation` argument, `kernel` is a weights matrix created by the layer, and `bias`
is a bias vector created by the layer (only applicable if `use_bias` is `True`).
`input_quantizer` and `kernel_quantizer` are the element-wise quantization
functions to use. If both quantization functions are `None` this layer is
equivalent to `Dense`.
!!! note ""
If the input to the layer has a rank greater than 2, then it is flattened
prior to the initial dot product with `kernel`.
!!! example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(
QuantDense(
32,
input_quantizer="ste_sign",
kernel_quantizer="ste_sign",
kernel_constraint="weight_clip",
input_shape=(16,),
)
)
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(
QuantDense(
32,
input_quantizer="ste_sign",
kernel_quantizer="ste_sign",
kernel_constraint="weight_clip",
)
)
```
# Arguments
units: Positive integer, dimensionality of the output space.
activation: Activation function to use. If you don't specify anything,
no activation is applied (`a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
kernel_quantizer: Quantization function applied to the `kernel` weights matrix.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the `kernel` weights matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
# Input shape
N-D tensor with shape: `(batch_size, ..., input_dim)`. The most common situation
would be a 2D input with shape `(batch_size, input_dim)`.
# Output shape
N-D tensor with shape: `(batch_size, ..., units)`. For instance, for a 2D input with
shape `(batch_size, input_dim)`, the output would have shape `(batch_size, units)`.
"""
def __init__(
self,
units,
activation=None,
use_bias=True,
input_quantizer=None,
kernel_quantizer=None,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
metrics=None,
**kwargs,
):
super().__init__(
units,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
kernel_quantizer=kernel_quantizer,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
**kwargs,
)
@utils.register_keras_custom_object
class QuantConv1D(QuantizerBase, tf.keras.layers.Conv1D):
"""1D quantized convolution layer (e.g. temporal convolution).
This layer creates a convolution kernel that is convolved with the layer input
over a single spatial (or temporal) dimension to produce a tensor of outputs.
`input_quantizer` and `kernel_quantizer` are the element-wise quantization
functions to use. If both quantization functions are `None` this layer is
equivalent to `Conv1D`.
If `use_bias` is True, a bias vector is created and added to the outputs.
Finally, if `activation` is not `None`, it is applied to the outputs as well.
When using this layer as the first layer in a model, provide an `input_shape`
argument (tuple of integers or `None`, e.g. `(10, 128)` for sequences of
10 vectors of 128-dimensional vectors, or `(None, 128)` for variable-length
sequences of 128-dimensional vectors).
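!!! example
A minimal usage sketch; as in the `QuantDense` example above, `ste_sign` and
`weight_clip` merely stand in for whichever registered quantizer and constraint
you actually want to use.
```python
model = tf.keras.models.Sequential()
model.add(
    QuantConv1D(
        32,
        3,
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        input_shape=(10, 128),  # sequences of 10 timesteps, 128 features each
    )
)
# output shape: (batch_size, 8, 32) with the default "valid" padding
```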
# Arguments
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of a single integer,
specifying the length of the 1D convolution window.
strides: An integer or tuple/list of a single integer, specifying the stride
length of the convolution. Specifying any stride value != 1 is incompatible
with specifying any `dilation_rate` value != 1.
padding: One of `"valid"`, `"causal"` or `"same"` (case-insensitive). `"causal"`
results in causal (dilated) convolutions, e.g. output[t] does not depend on
input[t+1:]. Useful when modeling temporal data where the model should not
violate the temporal order. See [WaveNet: A Generative Model for Raw Audio,
section 2.1](https://arxiv.org/abs/1609.03499).
data_format: A string, one of `channels_last` (default) or `channels_first`.
dilation_rate: an integer or tuple/list of a single integer, specifying the dilation
rate to use for dilated convolution. Currently, specifying any `dilation_rate`
value != 1 is incompatible with specifying any `strides` value != 1.
activation: Activation function to use. If you don't specify anything, no activation
is applied (`a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
kernel_quantizer: Quantization function applied to the `kernel` weights matrix.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
# Input shape
3D tensor with shape: `(batch_size, steps, input_dim)`
# Output shape
3D tensor with shape: `(batch_size, new_steps, filters)`.
`steps` value might have changed due to padding or strides.
"""
def __init__(
self,
filters,
kernel_size,
strides=1,
padding="valid",
data_format="channels_last",
dilation_rate=1,
activation=None,
use_bias=True,
input_quantizer=None,
kernel_quantizer=None,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
metrics=None,
**kwargs,
):
super().__init__(
filters,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
kernel_quantizer=kernel_quantizer,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
**kwargs,
)
@utils.register_keras_custom_object
class QuantConv2D(QuantizerBase, tf.keras.layers.Conv2D):
"""2D quantized convolution layer (e.g. spatial convolution over images).
This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of outputs.
`input_quantizer` and `kernel_quantizer` are the element-wise quantization
functions to use. If both quantization functions are `None` this layer is
equivalent to `Conv2D`. If `use_bias` is True, a bias vector is created
and added to the outputs. Finally, if `activation` is not `None`,
it is applied to the outputs as well.
When using this layer as the first layer in a model, provide the keyword argument
`input_shape` (tuple of integers, does not include the sample axis),
e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures in
`data_format="channels_last"`.
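!!! example
A minimal usage sketch (the quantizer and constraint names follow the
`QuantDense` example and can be swapped for any registered alternative):
```python
model = tf.keras.models.Sequential()
model.add(
    QuantConv2D(
        32,
        (3, 3),
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        input_shape=(128, 128, 3),  # 128x128 RGB images, channels_last
    )
)
# output shape: (batch_size, 126, 126, 32) with the default "valid" padding
```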
# Arguments
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of 2 integers, specifying the
height and width of the 2D convolution window. Can be a single integer
to specify the same value for all spatial dimensions.
strides: An integer or tuple/list of 2 integers, specifying the strides of
the convolution along the height and width. Can be a single integer to
specify the same value for all spatial dimensions. Specifying any stride
value != 1 is incompatible with specifying any `dilation_rate` value != 1.
padding: one of `"valid"` or `"same"` (case-insensitive).
data_format: A string, one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs. `channels_last` corresponds to
inputs with shape `(batch, height, width, channels)` while `channels_first`
corresponds to inputs with shape `(batch, channels, height, width)`. It defaults
to the `image_data_format` value found in your Keras config file at
`~/.keras/keras.json`. If you never set it, then it will be "channels_last".
dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate
to use for dilated convolution. Can be a single integer to specify the same
value for all spatial dimensions. Currently, specifying any `dilation_rate`
value != 1 is incompatible with specifying any stride value != 1.
activation: Activation function to use. If you don't specify anything,
no activation is applied (`a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
kernel_quantizer: Quantization function applied to the `kernel` weights matrix.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
# Input shape
4D tensor with shape:
`(samples, channels, rows, cols)` if data_format='channels_first'
or 4D tensor with shape:
`(samples, rows, cols, channels)` if data_format='channels_last'.
# Output shape
4D tensor with shape:
`(samples, filters, new_rows, new_cols)` if data_format='channels_first'
or 4D tensor with shape:
`(samples, new_rows, new_cols, filters)` if data_format='channels_last'.
`rows` and `cols` values might have changed due to padding.
"""
def __init__(
self,
filters,
kernel_size,
strides=(1, 1),
padding="valid",
data_format=None,
dilation_rate=(1, 1),
activation=None,
use_bias=True,
input_quantizer=None,
kernel_quantizer=None,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
metrics=None,
**kwargs,
):
super().__init__(
filters,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
kernel_quantizer=kernel_quantizer,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
**kwargs,
)
@utils.register_keras_custom_object
class QuantConv3D(QuantizerBase, tf.keras.layers.Conv3D):
"""3D convolution layer (e.g. spatial convolution over volumes).
This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of
outputs. `input_quantizer` and `kernel_quantizer` are the element-wise quantization
functions to use. If both quantization functions are `None` this layer is
equivalent to `Conv3D`. If `use_bias` is True, a bias vector is created and
added to the outputs. Finally, if `activation` is not `None`,
it is applied to the outputs as well.
When using this layer as the first layer in a model, provide the keyword argument
`input_shape` (tuple of integers, does not include the sample axis),
e.g. `input_shape=(128, 128, 128, 1)` for 128x128x128 volumes
with a single channel, in `data_format="channels_last"`.
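!!! example
A minimal usage sketch for volumetric data; quantizer and constraint names are
placeholders as in the examples above.
```python
model = tf.keras.models.Sequential()
model.add(
    QuantConv3D(
        16,
        (3, 3, 3),
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        input_shape=(128, 128, 128, 1),  # single-channel volumes
    )
)
```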
# Arguments
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of 3 integers, specifying the
depth, height and width of the 3D convolution window. Can be a single
integer to specify the same value for all spatial dimensions.
strides: An integer or tuple/list of 3 integers, specifying the strides of the
convolution along each spatial dimension. Can be a single integer to specify the
same value for all spatial dimensions. Specifying any stride value != 1 is
incompatible with specifying any `dilation_rate` value != 1.
padding: one of `"valid"` or `"same"` (case-insensitive).
data_format: A string, one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs. `channels_last` corresponds to
inputs with shape `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`
while `channels_first` corresponds to inputs with shape
`(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`. It defaults to
the `image_data_format` value found in your Keras config file at
`~/.keras/keras.json`. If you never set it, then it will be "channels_last".
dilation_rate: an integer or tuple/list of 3 integers, specifying the dilation rate
to use for dilated convolution. Can be a single integer to specify the same
value for all spatial dimensions. Currently, specifying any `dilation_rate`
value != 1 is incompatible with specifying any stride value != 1.
activation: Activation function to use. If you don't specify anything,
no activation is applied (`a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
kernel_quantizer: Quantization function applied to the `kernel` weights matrix.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
# Input shape
5D tensor with shape:
`(samples, channels, conv_dim1, conv_dim2, conv_dim3)` if
data_format='channels_first'
or 5D tensor with shape:
`(samples, conv_dim1, conv_dim2, conv_dim3, channels)` if
data_format='channels_last'.
# Output shape
5D tensor with shape:
`(samples, filters, new_conv_dim1, new_conv_dim2, new_conv_dim3)` if
data_format='channels_first'
or 5D tensor with shape:
`(samples, new_conv_dim1, new_conv_dim2, new_conv_dim3, filters)` if
data_format='channels_last'.
`new_conv_dim1`, `new_conv_dim2` and `new_conv_dim3` values might have
changed due to padding.
"""
def __init__(
self,
filters,
kernel_size,
strides=(1, 1, 1),
padding="valid",
data_format=None,
dilation_rate=(1, 1, 1),
activation=None,
use_bias=True,
input_quantizer=None,
kernel_quantizer=None,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
metrics=None,
**kwargs,
):
super().__init__(
filters,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
kernel_quantizer=kernel_quantizer,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
**kwargs,
)
@utils.register_keras_custom_object
class QuantDepthwiseConv2D(QuantizerDepthwiseBase, tf.keras.layers.DepthwiseConv2D):
"""Quantized depthwise separable 2D convolution.
Depthwise separable convolutions consist of performing just the first step of a
depthwise spatial convolution (which acts on each input channel separately).
The `depth_multiplier` argument controls how many output channels are generated per
input channel in the depthwise step.
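!!! example
A minimal usage sketch; note that the depthwise kernel has its own
`depthwise_quantizer` argument, and that there is no `filters` argument since
the number of output channels is `input_channels * depth_multiplier`. The
quantizer and constraint names are placeholders as in the examples above.
```python
model = tf.keras.models.Sequential()
model.add(
    QuantDepthwiseConv2D(
        (3, 3),
        depth_multiplier=2,  # 2 output channels per input channel
        input_quantizer="ste_sign",
        depthwise_quantizer="ste_sign",
        depthwise_constraint="weight_clip",
        input_shape=(128, 128, 3),  # output has 3 * 2 = 6 channels
    )
)
```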
# Arguments
kernel_size: An integer or tuple/list of 2 integers, specifying the height and width
of the 2D convolution window. Can be a single integer to specify the same value
for all spatial dimensions.
strides: An integer or tuple/list of 2 integers, specifying the strides of the
convolution along the height and width. Can be a single integer to specify the
same value for all spatial dimensions. Specifying any stride value != 1 is
incompatible with specifying any `dilation_rate` value != 1.
padding: one of `'valid'` or `'same'` (case-insensitive).
depth_multiplier: The number of depthwise convolution output channels for each input
channel. The total number of depthwise convolution output channels will be equal
to `filters_in * depth_multiplier`.
data_format: A string, one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs. `channels_last` corresponds to
inputs with shape `(batch, height, width, channels)` while `channels_first`
corresponds to inputs with shape `(batch, channels, height, width)`.
It defaults to the `image_data_format` value found in your
Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be 'channels_last'.
activation: Activation function to use.
If you don't specify anything, no activation is applied (ie. `a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
depthwise_quantizer: Quantization function applied to the `depthwise_kernel`
weights matrix.
depthwise_initializer: Initializer for the depthwise kernel matrix.
bias_initializer: Initializer for the bias vector.
depthwise_regularizer: Regularizer function applied to the depthwise kernel matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its 'activation').
depthwise_constraint: Constraint function applied to the depthwise kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
# Input shape
4D tensor with shape:
`[batch, channels, rows, cols]` if data_format='channels_first'
or 4D tensor with shape:
`[batch, rows, cols, channels]` if data_format='channels_last'.
# Output shape
4D tensor with shape:
`[batch, filters, new_rows, new_cols]` if data_format='channels_first'
or 4D tensor with shape:
`[batch, new_rows, new_cols, filters]` if data_format='channels_last'.
`rows` and `cols` values might have changed due to padding.
"""
def __init__(
self,
kernel_size,
strides=(1, 1),
padding="valid",
depth_multiplier=1,
data_format=None,
activation=None,
use_bias=True,
input_quantizer=None,
depthwise_quantizer=None,
depthwise_initializer="glorot_uniform",
bias_initializer="zeros",
depthwise_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
depthwise_constraint=None,
bias_constraint=None,
metrics=None,
**kwargs,
):
super().__init__(
kernel_size=kernel_size,
strides=strides,
padding=padding,
depth_multiplier=depth_multiplier,
data_format=data_format,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
depthwise_quantizer=depthwise_quantizer,
depthwise_initializer=depthwise_initializer,
bias_initializer=bias_initializer,
depthwise_regularizer=depthwise_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
depthwise_constraint=depthwise_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
**kwargs,
)
@utils.register_keras_custom_object
class QuantSeparableConv1D(QuantizerSeparableBase, tf.keras.layers.SeparableConv1D):
"""Depthwise separable 1D quantized convolution.
This layer performs a depthwise convolution that acts separately on channels,
followed by a pointwise convolution that mixes channels.
`input_quantizer`, `depthwise_quantizer` and `pointwise_quantizer` are the
element-wise quantization functions to use. If all quantization functions are `None`
this layer is equivalent to `SeparableConv1D`. If `use_bias` is True and
a bias initializer is provided, it adds a bias vector to the output.
It then optionally applies an activation function to produce the final output.
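!!! example
A minimal usage sketch showing that the depthwise and pointwise kernels are
quantized and constrained independently; the names are placeholders as above.
```python
model = tf.keras.models.Sequential()
model.add(
    QuantSeparableConv1D(
        32,
        3,
        input_quantizer="ste_sign",
        depthwise_quantizer="ste_sign",
        pointwise_quantizer="ste_sign",
        depthwise_constraint="weight_clip",
        pointwise_constraint="weight_clip",
        input_shape=(10, 128),
    )
)
```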
# Arguments
filters: Integer, the dimensionality of the output space (i.e. the number
of filters in the convolution).
kernel_size: A single integer specifying the spatial dimensions of the filters.
strides: A single integer specifying the strides of the convolution.
Specifying any `stride` value != 1 is incompatible with specifying
any `dilation_rate` value != 1.
padding: One of `"valid"`, `"same"`, or `"causal"` (case-insensitive).
data_format: A string, one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs. `channels_last` corresponds
to inputs with shape `(batch, length, channels)` while `channels_first`
corresponds to inputs with shape `(batch, channels, length)`.
dilation_rate: A single integer, specifying the dilation rate to use for dilated
convolution. Currently, specifying any `dilation_rate` value != 1 is
incompatible with specifying any stride value != 1.
depth_multiplier: The number of depthwise convolution output channels for
each input channel. The total number of depthwise convolution output
channels will be equal to `num_filters_in * depth_multiplier`.
activation: Activation function. Set it to None to maintain a linear activation.
use_bias: Boolean, whether the layer uses a bias.
input_quantizer: Quantization function applied to the input of the layer.
depthwise_quantizer: Quantization function applied to the depthwise kernel.
pointwise_quantizer: Quantization function applied to the pointwise kernel.
depthwise_initializer: An initializer for the depthwise convolution kernel.
pointwise_initializer: An initializer for the pointwise convolution kernel.
bias_initializer: An initializer for the bias vector. If None, the default
initializer will be used.
depthwise_regularizer: Optional regularizer for the depthwise convolution kernel.
pointwise_regularizer: Optional regularizer for the pointwise convolution kernel.
bias_regularizer: Optional regularizer for the bias vector.
activity_regularizer: Optional regularizer function for the output.
depthwise_constraint: Optional projection function to be applied to the
depthwise kernel after being updated by an `Optimizer`
(e.g. used for norm constraints or value constraints for layer weights).
The function must take as input the unprojected variable and must return
the projected variable (which must have the same shape). Constraints are
not safe to use when doing asynchronous distributed training.
pointwise_constraint: Optional projection function to be applied to the
pointwise kernel after being updated by an `Optimizer`.
bias_constraint: Optional projection function to be applied to the
bias after being updated by an `Optimizer`.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
trainable: Boolean, if `True` the weights of this layer will be marked as
trainable (and listed in `layer.trainable_weights`).
name: A string, the name of the layer.
"""
def __init__(
self,
filters,
kernel_size,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
depth_multiplier=1,
activation=None,
use_bias=True,
input_quantizer=None,
depthwise_quantizer=None,
pointwise_quantizer=None,
depthwise_initializer="glorot_uniform",
pointwise_initializer="glorot_uniform",
bias_initializer="zeros",
depthwise_regularizer=None,
pointwise_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
depthwise_constraint=None,
pointwise_constraint=None,
bias_constraint=None,
metrics=None,
**kwargs,
):
super().__init__(
filters,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
depth_multiplier=depth_multiplier,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
depthwise_quantizer=depthwise_quantizer,
pointwise_quantizer=pointwise_quantizer,
depthwise_initializer=depthwise_initializer,
pointwise_initializer=pointwise_initializer,
bias_initializer=bias_initializer,
depthwise_regularizer=depthwise_regularizer,
pointwise_regularizer=pointwise_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
depthwise_constraint=depthwise_constraint,
pointwise_constraint=pointwise_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
**kwargs,
)
@utils.register_keras_custom_object
class QuantSeparableConv2D(QuantizerSeparableBase, tf.keras.layers.SeparableConv2D):
"""Depthwise separable 2D convolution.
Separable convolutions consist of first performing a depthwise spatial convolution
(which acts on each input channel separately) followed by a pointwise convolution
which mixes together the resulting output channels. The `depth_multiplier` argument
controls how many output channels are generated per input channel
in the depthwise step.
`input_quantizer`, `depthwise_quantizer` and `pointwise_quantizer` are the
element-wise quantization functions to use. If all quantization functions are `None`
this layer is equivalent to `SeparableConv2D`. If `use_bias` is True and
a bias initializer is provided, it adds a bias vector to the output.
It then optionally applies an activation function to produce the final output.
Intuitively, separable convolutions can be understood as a way to factorize a
convolution kernel into two smaller kernels,
or as an extreme version of an Inception block.
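!!! example
A minimal usage sketch; as above, each of the three quantizers can be set
independently, and the names shown are placeholders for any registered
quantizer or constraint.
```python
model = tf.keras.models.Sequential()
model.add(
    QuantSeparableConv2D(
        32,
        (3, 3),
        input_quantizer="ste_sign",
        depthwise_quantizer="ste_sign",
        pointwise_quantizer="ste_sign",
        depthwise_constraint="weight_clip",
        pointwise_constraint="weight_clip",
        input_shape=(128, 128, 3),
    )
)
```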
# Arguments
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of 2 integers, specifying the height and
width of the 2D convolution window. Can be a single integer to specify the
same value for all spatial dimensions.
strides: An integer or tuple/list of 2 integers, specifying the strides of the
convolution along the height and width. Can be a single integer to specify
the same value for all spatial dimensions. Specifying any stride value != 1
is incompatible with specifying any `dilation_rate` value != 1.
padding: one of `"valid"` or `"same"` (case-insensitive).
data_format: A string, one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs. `channels_last` corresponds to
inputs with shape `(batch, height, width, channels)` while `channels_first`
corresponds to inputs with shape `(batch, channels, height, width)`. It
defaults to the `image_data_format` value found in your Keras config file at
`~/.keras/keras.json`. If you never set it, then it will be "channels_last".
dilation_rate: An integer or tuple/list of 2 integers, specifying the dilation rate
to use for dilated convolution. Currently, specifying any `dilation_rate`
value != 1 is incompatible with specifying any `strides` value != 1.
depth_multiplier: The number of depthwise convolution output channels for each
input channel. The total number of depthwise convolution output channels
will be equal to `filters_in * depth_multiplier`.
activation: Activation function to use. If you don't specify anything,
no activation is applied (`a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
depthwise_quantizer: Quantization function applied to the depthwise kernel matrix.
pointwise_quantizer: Quantization function applied to the pointwise kernel matrix.
depthwise_initializer: Initializer for the depthwise kernel matrix.
pointwise_initializer: Initializer for the pointwise kernel matrix.
bias_initializer: Initializer for the bias vector.
depthwise_regularizer: Regularizer function applied to the depthwise kernel matrix.
pointwise_regularizer: Regularizer function applied to the pointwise kernel matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
depthwise_constraint: Constraint function applied to the depthwise kernel matrix.
pointwise_constraint: Constraint function applied to the pointwise kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
# Input shape
4D tensor with shape:
`(batch, channels, rows, cols)` if data_format='channels_first'
or 4D tensor with shape:
`(batch, rows, cols, channels)` if data_format='channels_last'.
# Output shape
4D tensor with shape:
`(batch, filters, new_rows, new_cols)` if data_format='channels_first'
or 4D tensor with shape:
`(batch, new_rows, new_cols, filters)` if data_format='channels_last'.
`rows` and `cols` values might have changed due to padding.
"""
def __init__(
self,
filters,
kernel_size,
strides=(1, 1),
padding="valid",
data_format=None,
dilation_rate=(1, 1),
depth_multiplier=1,
activation=None,
use_bias=True,
input_quantizer=None,
depthwise_quantizer=None,
pointwise_quantizer=None,
depthwise_initializer="glorot_uniform",
pointwise_initializer="glorot_uniform",
bias_initializer="zeros",
depthwise_regularizer=None,
pointwise_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
depthwise_constraint=None,
pointwise_constraint=None,
bias_constraint=None,
metrics=None,
**kwargs,
):
super().__init__(
filters,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
depth_multiplier=depth_multiplier,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
depthwise_quantizer=depthwise_quantizer,
pointwise_quantizer=pointwise_quantizer,
depthwise_initializer=depthwise_initializer,
pointwise_initializer=pointwise_initializer,
bias_initializer=bias_initializer,
depthwise_regularizer=depthwise_regularizer,
pointwise_regularizer=pointwise_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
depthwise_constraint=depthwise_constraint,
pointwise_constraint=pointwise_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
**kwargs,
)
@utils.register_keras_custom_object
class QuantConv2DTranspose(QuantizerBase, tf.keras.layers.Conv2DTranspose):
"""Transposed quantized convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to use a
transformation going in the opposite direction of a normal convolution, i.e.,
from something that has the shape of the output of some convolution to something
that has the shape of its input while maintaining a connectivity pattern
that is compatible with said convolution. `input_quantizer` and `kernel_quantizer`
are the element-wise quantization functions to use. If both quantization functions
are `None` this layer is equivalent to `Conv2DTranspose`.
When using this layer as the first layer in a model, provide the keyword argument
`input_shape` (tuple of integers, does not include the sample axis), e.g.
`input_shape=(128, 128, 3)` for 128x128 RGB pictures in
`data_format="channels_last"`.
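!!! example
A minimal upsampling sketch: with `strides=(2, 2)` and `padding="same"` the
spatial resolution doubles. Quantizer and constraint names are placeholders as
in the examples above.
```python
model = tf.keras.models.Sequential()
model.add(
    QuantConv2DTranspose(
        16,
        (3, 3),
        strides=(2, 2),
        padding="same",
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        input_shape=(64, 64, 32),  # upsampled to (128, 128, 16)
    )
)
```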
# Arguments
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of 2 integers, specifying the
height and width of the 2D convolution window. Can be a single integer
to specify the same value for all spatial dimensions.
strides: An integer or tuple/list of 2 integers, specifying the strides of
the convolution along the height and width. Can be a single integer to
specify the same value for all spatial dimensions. Specifying any stride
value != 1 is incompatible with specifying any `dilation_rate` value != 1.
padding: one of `"valid"` or `"same"` (case-insensitive).
output_padding: An integer or tuple/list of 2 integers, specifying the amount
of padding along the height and width of the output tensor. Can be a single
integer to specify the same value for all spatial dimensions. The amount of
output padding along a given dimension must be lower than the stride along
that same dimension.
If set to `None` (default), the output shape is inferred.
data_format: A string, one of `channels_last` (default) or `channels_first`. The
ordering of the dimensions in the inputs. `channels_last` corresponds to inputs
with shape `(batch, height, width, channels)` while `channels_first` corresponds
to inputs with shape `(batch, channels, height, width)`. It defaults to the
`image_data_format` value found in your Keras config file at
`~/.keras/keras.json`. If you never set it, then it will be "channels_last".
dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate
to use for dilated convolution. Can be a single integer to specify the same
value for all spatial dimensions. Currently, specifying any `dilation_rate`
value != 1 is incompatible with specifying any stride value != 1.
activation: Activation function to use. If you don't specify anything,
no activation is applied (`a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
kernel_quantizer: Quantization function applied to the `kernel` weights matrix.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
# Input shape
4D tensor with shape:
`(batch, channels, rows, cols)` if data_format='channels_first'
or 4D tensor with shape:
`(batch, rows, cols, channels)` if data_format='channels_last'.
# Output shape
4D tensor with shape:
`(batch, filters, new_rows, new_cols)` if data_format='channels_first'
or 4D tensor with shape:
`(batch, new_rows, new_cols, filters)` if data_format='channels_last'.
`rows` and `cols` values might have changed due to padding.
# References
- [A guide to convolution arithmetic for deep
learning](https://arxiv.org/abs/1603.07285v1)
- [Deconvolutional
Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)
"""
def __init__(
self,
filters,
kernel_size,
strides=(1, 1),
padding="valid",
output_padding=None,
data_format=None,
dilation_rate=(1, 1),
activation=None,
use_bias=True,
input_quantizer=None,
kernel_quantizer=None,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
metrics=None,
**kwargs,
):
super().__init__(
filters,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
kernel_quantizer=kernel_quantizer,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
**kwargs,
)
@utils.register_keras_custom_object
class QuantConv3DTranspose(QuantizerBase, tf.keras.layers.Conv3DTranspose):
"""Transposed quantized convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution. `input_quantizer` and `kernel_quantizer`
are the element-wise quantization functions to use. If both quantization functions
are `None` this layer is equivalent to `Conv3DTranspose`.
When using this layer as the first layer in a model, provide the keyword argument
`input_shape` (tuple of integers, does not include the sample axis),
e.g. `input_shape=(128, 128, 128, 3)` for a 128x128x128 volume with 3 channels
if `data_format="channels_last"`.
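!!! example
A minimal volumetric upsampling sketch, analogous to the 2D case; the quantizer
and constraint names are placeholders as above.
```python
model = tf.keras.models.Sequential()
model.add(
    QuantConv3DTranspose(
        16,
        (3, 3, 3),
        strides=(2, 2, 2),
        padding="same",
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        input_shape=(16, 16, 16, 32),  # upsampled to (32, 32, 32, 16)
    )
)
```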
# Arguments
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of 3 integers, specifying the depth, height
and width of the 3D convolution window. Can be a single integer to specify the
same value for all spatial dimensions.
strides: An integer or tuple/list of 3 integers, specifying the strides of the
convolution along the depth, height and width. Can be a single integer to
specify the same value for all spatial dimensions. Specifying any stride
value != 1 is incompatible with specifying any `dilation_rate` value != 1.
padding: one of `"valid"` or `"same"` (case-insensitive).
output_padding: An integer or tuple/list of 3 integers, specifying the amount
of padding along the depth, height, and width. Can be a single integer to
specify the same value for all spatial dimensions. The amount of output
padding along a given dimension must be lower than the stride along that
same dimension. If set to `None` (default), the output shape is inferred.
data_format: A string, one of `channels_last` (default) or `channels_first`. The
ordering of the dimensions in the inputs. `channels_last` corresponds to inputs
with shape `(batch, depth, height, width, channels)` while `channels_first`
corresponds to inputs with shape `(batch, channels, depth, height, width)`.
It defaults to the `image_data_format` value found in your Keras config file at
`~/.keras/keras.json`. If you never set it, then it will be "channels_last".
dilation_rate: an integer or tuple/list of 3 integers, specifying the dilation
rate to use for dilated convolution. Can be a single integer to specify the
same value for all spatial dimensions. Currently, specifying any `dilation_rate`
value != 1 is incompatible with specifying any stride value != 1.
activation: Activation function to use. If you don't specify anything,
no activation is applied (`a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
kernel_quantizer: Quantization function applied to the `kernel` weights matrix.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
# Input shape
5D tensor with shape:
`(batch, channels, depth, rows, cols)` if data_format='channels_first'
or 5D tensor with shape:
`(batch, depth, rows, cols, channels)` if data_format='channels_last'.
# Output shape
5D tensor with shape:
`(batch, filters, new_depth, new_rows, new_cols)` if data_format='channels_first'
or 5D tensor with shape:
`(batch, new_depth, new_rows, new_cols, filters)` if data_format='channels_last'.
`depth`, `rows` and `cols` values might have changed due to padding.
# References
- [A guide to convolution arithmetic for deep
learning](https://arxiv.org/abs/1603.07285v1)
- [Deconvolutional
Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)
"""
def __init__(
self,
filters,
kernel_size,
strides=(1, 1, 1),
padding="valid",
output_padding=None,
data_format=None,
activation=None,
use_bias=True,
input_quantizer=None,
kernel_quantizer=None,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
metrics=None,
**kwargs,
):
super().__init__(
filters,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
kernel_quantizer=kernel_quantizer,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
**kwargs,
)
@utils.register_keras_custom_object
class QuantLocallyConnected1D(QuantizerBase, tf.keras.layers.LocallyConnected1D):
"""Locally-connected quantized layer for 1D inputs.
The `QuantLocallyConnected1D` layer works similarly to the `QuantConv1D` layer,
except that weights are unshared, that is, a different set of filters is applied
at each different patch of the input. `input_quantizer` and `kernel_quantizer`
are the element-wise quantization functions to use. If both quantization functions
are `None` this layer is equivalent to `LocallyConnected1D`.
!!! example
```python
# apply an unshared weight 1D convolution of length 3 to a sequence with
# 10 timesteps, with 64 output filters
model = Sequential()
model.add(QuantLocallyConnected1D(64, 3, input_shape=(10, 32)))
# now model.output_shape == (None, 8, 64)
# add a new conv1d on top
model.add(QuantLocallyConnected1D(32, 3))
# now model.output_shape == (None, 6, 32)
```
# Arguments
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of a single integer,
specifying the length of the 1D convolution window.
strides: An integer or tuple/list of a single integer, specifying the stride
length of the convolution. Specifying any stride value != 1 is incompatible
with specifying any `dilation_rate` value != 1.
padding: Currently only supports `"valid"` (case-insensitive).
`"same"` may be supported in the future.
data_format: A string, one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs. `channels_last` corresponds
to inputs with shape `(batch, length, channels)` while `channels_first`
corresponds to inputs with shape `(batch, channels, length)`. It defaults
to the `image_data_format` value found in your Keras config file at
`~/.keras/keras.json`. If you never set it, then it will be "channels_last".
activation: Activation function to use. If you don't specify anything,
no activation is applied (`a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
kernel_quantizer: Quantization function applied to the `kernel` weights matrix.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
implementation: implementation mode, either `1` or `2`.
`1` loops over input spatial locations to perform the forward pass.
It is memory-efficient but performs a lot of (small) ops.
`2` stores layer weights in a dense but sparsely-populated 2D matrix
and implements the forward pass as a single matrix-multiply. It uses
a lot of RAM but performs few (large) ops.
Depending on the inputs, layer parameters, hardware, and
`tf.executing_eagerly()` one implementation can be dramatically faster
(e.g. 50X) than another.
It is recommended to benchmark both in the setting of interest to pick
the most efficient one (in terms of speed and memory usage).
Following scenarios could benefit from setting `implementation=2`:
- eager execution;
- inference;
- running on CPU;
- large amount of RAM available;
- small models (few filters, small kernel);
- using `padding=same` (only possible with `implementation=2`; see the sketch below).
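!!! example
```python
# hedged sketch (values are illustrative): pick the matmul-based
# implementation described above when RAM is plentiful
layer = QuantLocallyConnected1D(8, 3, implementation=2, input_shape=(10, 4))
```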
# Input shape
3D tensor with shape: `(batch_size, steps, input_dim)`
# Output shape
3D tensor with shape: `(batch_size, new_steps, filters)`
`steps` value might have changed due to padding or strides.
"""
def __init__(
self,
filters,
kernel_size,
strides=1,
padding="valid",
data_format=None,
activation=None,
use_bias=True,
input_quantizer=None,
kernel_quantizer=None,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
metrics=None,
implementation=1,
**kwargs,
):
super().__init__(
filters,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
kernel_quantizer=kernel_quantizer,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
metrics=metrics,
implementation=implementation,
**kwargs,
)
@utils.register_keras_custom_object
class QuantLocallyConnected2D(QuantizerBase, tf.keras.layers.LocallyConnected2D):
"""Locally-connected quantized layer for 2D inputs.
The `QuantLocallyConnected2D` layer works similarly to the `QuantConv2D` layer,
except that weights are unshared, that is, a different set of filters is applied
at each different patch of the input. `input_quantizer` and `kernel_quantizer`
are the element-wise quantization functions to use. If both quantization functions
are `None` this layer is equivalent to `LocallyConnected2D`.
!!! example
```python
# apply a 3x3 unshared weights convolution with 64 output filters on a
# 32x32 image with `data_format="channels_last"`:
model = Sequential()
model.add(QuantLocallyConnected2D(64, (3, 3), input_shape=(32, 32, 3)))
# now model.output_shape == (None, 30, 30, 64)
# notice that this layer will consume (30*30)*(3*3*3*64) + (30*30)*64
# parameters
# add a 3x3 unshared weights convolution on top, with 32 output filters:
model.add(QuantLocallyConnected2D(32, (3, 3)))
# now model.output_shape == (None, 28, 28, 32)
```
# Arguments
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of 2 integers, specifying the
width and height of the 2D convolution window. Can be a single integer to
specify the same value for all spatial dimensions.
strides: An integer or tuple/list of 2 integers, specifying the strides of the
convolution along the width and height. Can be a single integer to specify
the same value for all spatial dimensions.
padding: Currently only supports `"valid"` (case-insensitive).
`"same"` may be supported in the future.
data_format: A string, one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs. `channels_last` corresponds to
inputs with shape `(batch, height, width, channels)` while `channels_first`
corresponds to inputs with shape `(batch, channels, height, width)`. It
defaults to the `image_data_format` value found in your Keras config file at
`~/.keras/keras.json`. If you never set it, then it will be "channels_last".
activation: Activation function to use. If you don't specify anything,
no activation is applied (`a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
input_quantizer: Quantization function applied to the input of the layer.
kernel_quantizer: Quantization function applied to the `kernel` weights matrix.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
metrics: An array of metrics to add to the layer. If `None` the metrics set in
`larq.metrics.scope` are used.
Currently only the `flip_ratio` metric is available.
implementation: implementation mode, either `1` or `2`.
`1` loops over input spatial locations to perform the forward pass.
It is memory-efficient but performs a lot of (small) ops.
`2` stores layer weights in a dense but sparsely-populated 2D matrix
and implements the forward pass as a single matrix-multiply. It uses
a lot of RAM but performs few (large) ops.
Depending on the inputs, layer parameters, hardware, and
`tf.executing_eagerly()` one implementation can be dramatically faster
(e.g. 50X) than another.
It is recommended to benchmark both in the setting of interest to pick
the most efficient one (in terms of speed and memory usage).
Following scenarios could benefit from setting `implementation=2`:
- eager execution;
- inference;
- running on CPU;
- large amount of RAM available;
- small models (few filters, small kernel);
- using `padding=same` (only possible with `implementation=2`).
# Input shape
4D tensor with shape:
`(samples, channels, rows, cols)` if data_format='channels_first'
or 4D tensor with shape:
`(samples, rows, cols, channels)` if data_format='channels_last'.
# Output shape
4D tensor with shape:
`(samples, filters, new_rows, new_cols)` if data_format='channels_first'
or 4D tensor with shape:
`(samples, new_rows, new_cols, filters)` if data_format='channels_last'.
`rows` and `cols` values might have changed due to padding.
"""
def __init__(
self,
filters,
kernel_size,
strides=(1, 1),
padding="valid",
data_format=None,
activation=None,
use_bias=True,
input_quantizer=None,
kernel_quantizer=None,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
metrics=None,
implementation=1,
**kwargs,
):
super().__init__(
filters,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
activation=activation,
use_bias=use_bias,
input_quantizer=input_quantizer,
kernel_quantizer=kernel_quantizer,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
implementation=implementation,
metrics=metrics,
**kwargs,
)
| 46.590433 | 88 | 0.692756 | 7,767 | 62,338 | 5.428479 | 0.06283 | 0.013045 | 0.023338 | 0.036051 | 0.896853 | 0.879776 | 0.864075 | 0.855964 | 0.85122 | 0.834594 | 0 | 0.008769 | 0.242661 | 62,338 | 1,337 | 89 | 46.62528 | 0.884307 | 0.682409 | 0 | 0.937255 | 0 | 0 | 0.017763 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021569 | false | 0 | 0.005882 | 0 | 0.04902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
674a68cd9d3072172408a4ca22a05b0ec4808fae | 130 | py | Python | tests/test_shortcuts.py | gundotio/worf | 45268e3d04ba5a2549d3a4f511d876622c9e0cad | [
"MIT"
] | null | null | null | tests/test_shortcuts.py | gundotio/worf | 45268e3d04ba5a2549d3a4f511d876622c9e0cad | [
"MIT"
] | 33 | 2021-03-05T05:20:30.000Z | 2022-03-16T02:01:45.000Z | tests/test_shortcuts.py | gundotio/worf | 45268e3d04ba5a2549d3a4f511d876622c9e0cad | [
"MIT"
] | null | null | null | from worf.shortcuts import get_current_version
def test_get_current_version():
assert get_current_version().startswith("v")
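# hedged companion check: `startswith("v")` above already presumes a string
# return value, so this test just makes that assumption explicit
def test_get_current_version_is_string():
    assert isinstance(get_current_version(), str)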
| 21.666667 | 48 | 0.807692 | 18 | 130 | 5.444444 | 0.666667 | 0.306122 | 0.520408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107692 | 130 | 5 | 49 | 26 | 0.844828 | 0 | 0 | 0 | 0 | 0 | 0.007692 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
675371282a6cabc8360814a84a2886a8f7fcae3a | 50,311 | py | Python | networking_cisco/tests/unit/ml2/drivers/cisco/nexus/test_cisco_nexus_restapi_events.py | Tehsmash/networking-cisco | fdbd79a832fe090f3c4c7bd7a4f0ec0c349d4d16 | [
"Apache-2.0"
] | null | null | null | networking_cisco/tests/unit/ml2/drivers/cisco/nexus/test_cisco_nexus_restapi_events.py | Tehsmash/networking-cisco | fdbd79a832fe090f3c4c7bd7a4f0ec0c349d4d16 | [
"Apache-2.0"
] | null | null | null | networking_cisco/tests/unit/ml2/drivers/cisco/nexus/test_cisco_nexus_restapi_events.py | Tehsmash/networking-cisco | fdbd79a832fe090f3c4c7bd7a4f0ec0c349d4d16 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2017 Cisco Systems, Inc.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Basic Test Classes using RESTAPI Driver to test Cisco Nexus platforms.
These classes are based on the original ssh event driver, so the same
tests run with the same configuration. What differs between
the tests is the resulting driver output, which is what
the tests in this class present to their parent class.
You will notice in this file there are test methods which
are skipped by using 'pass'. This is because these tests
apply to ssh only OR because rerunning the test would be
redundant.
"""
import mock
from oslo_config import cfg
import six
from networking_cisco.plugins.ml2.drivers.cisco.nexus import (
constants as const)
from networking_cisco.plugins.ml2.drivers.cisco.nexus import (
exceptions)
from networking_cisco.plugins.ml2.drivers.cisco.nexus import (
nexus_db_v2 as nxos_db)
from networking_cisco.plugins.ml2.drivers.cisco.nexus import (
nexus_restapi_snippets as snipp)
from networking_cisco.tests.unit.ml2.drivers.cisco.nexus import (
test_cisco_nexus_base as base)
from networking_cisco.tests.unit.ml2.drivers.cisco.nexus import (
test_cisco_nexus_events)
class TestCiscoNexusRestDeviceResults(base.TestCiscoNexusBaseResults):
"""Unit tests driver results for Cisco ML2 Nexus."""
test_results = {
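# Each expected driver call below is a 4-element list:
# [restapi path, switch ip address, request body, http operation].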
'duplicate_add_port_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '+267')),
base.POST]
],
'duplicate_del_port_driver_result': [
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '-267')),
base.POST],
[(snipp.PATH_VLAN % '267'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE]
],
'add_port2_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 265),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/20]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '+265')),
base.POST]
],
'delete_port2_driver_result': [
[(snipp.PATH_IF % 'phys-[eth1/20]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '-265')),
base.POST],
[(snipp.PATH_VLAN % '265'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE]
],
'add_port2_driver_result2': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_8,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/20]'),
base.NEXUS_IP_ADDRESS_8,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '+267')),
base.POST]
],
'delete_port2_driver_result2': [
[(snipp.PATH_IF % 'phys-[eth1/20]'),
base.NEXUS_IP_ADDRESS_8,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '-267')),
base.POST]
],
'add_port2_driver_result3': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_6,
(snipp.BODY_VLAN_ADD % 268),
base.POST],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_6,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '+268')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_7,
(snipp.BODY_VLAN_ADD % 268),
base.POST],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_7,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '+268')),
base.POST]
],
'delete_port2_driver_result3': [
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_6,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '-268')),
base.POST],
[(snipp.PATH_VLAN % '268'),
base.NEXUS_IP_ADDRESS_6,
'',
base.DELETE],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_7,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '-268')),
base.POST],
[(snipp.PATH_VLAN % '268'),
base.NEXUS_IP_ADDRESS_7,
'',
base.DELETE]
],
'add_port_channel_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_VLAN_ADD % 268),
base.POST],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '+268')),
base.POST]
],
'delete_port_channel_driver_result': [
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '-268')),
base.POST],
[(snipp.PATH_VLAN % '268'),
base.NEXUS_IP_ADDRESS_2,
'',
base.DELETE]
],
'dual_add_port_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_DUAL,
(snipp.BODY_VLAN_ADD % 269),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/3]'),
base.NEXUS_IP_ADDRESS_DUAL,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '+269')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_DUAL,
(snipp.BODY_VLAN_ADD % 269),
base.POST],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_DUAL,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '+269')),
base.POST]
],
'dual_delete_port_driver_result': [
[(snipp.PATH_IF % 'phys-[eth1/3]'),
base.NEXUS_IP_ADDRESS_DUAL,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '-269')),
base.POST],
[(snipp.PATH_VLAN % '269'),
base.NEXUS_IP_ADDRESS_DUAL,
'',
base.DELETE],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_DUAL,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '-269')),
base.POST],
],
'add_port_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_8,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_8,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '+267')),
base.POST]
],
'del_port_driver_result': [
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_8,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '-267')),
base.POST],
[(snipp.PATH_VLAN % '267'),
base.NEXUS_IP_ADDRESS_8,
'',
base.DELETE]
],
'migrate_add_host2_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_3,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/20]'),
base.NEXUS_IP_ADDRESS_3,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '+267')),
base.POST]
],
}
class TestCiscoNexusRestDevice(test_cisco_nexus_events.TestCiscoNexusDevice):
"""Unit tests for Cisco ML2 Nexus restapi device driver"""
def setUp(self):
cfg.CONF.set_override('switch_heartbeat_time', 0, 'ml2_cisco')
# Call Grandfather's setUp(); otherwise parent will set driver to
# 'ncclient' instead of 'restapi'.
super(test_cisco_nexus_events.TestCiscoNexusDevice, self).setUp()
self.mock_ncclient.reset_mock()
self.results = TestCiscoNexusRestDeviceResults()
def test_create_delete_duplicate_ports(self):
(super(TestCiscoNexusRestDevice, self).
test_create_delete_duplicate_ports())
def test_create_delete_duplicate_port_transaction(self):
(super(TestCiscoNexusRestDevice, self).
test_create_delete_duplicate_port_transaction())
def test_create_delete_same_switch_diff_hosts_diff_vlan(self):
(super(TestCiscoNexusRestDevice, self).
test_create_delete_same_switch_diff_hosts_diff_vlan())
def test_create_delete_same_switch_diff_hosts_same_vlan(self):
(super(TestCiscoNexusRestDevice, self).
test_create_delete_same_switch_diff_hosts_same_vlan())
def test_create_delete_diff_switch_same_host(self):
(super(TestCiscoNexusRestDevice, self).
test_create_delete_diff_switch_same_host())
def test_create_delete_portchannel(self):
super(TestCiscoNexusRestDevice, self).test_create_delete_portchannel()
def test_create_delete_dual(self):
super(TestCiscoNexusRestDevice, self).test_create_delete_dual()
def test_create_delete_dhcp(self):
super(TestCiscoNexusRestDevice, self).test_create_delete_dhcp()
def test_create_delete_router_ha_intf(self):
(super(TestCiscoNexusRestDevice, self).
test_create_delete_router_ha_intf())
def test_nexus_vm_migration(self):
super(TestCiscoNexusRestDevice, self).test_nexus_vm_migration()
class TestCiscoNexusRestInitResults(base.TestCiscoNexusBaseResults):
"""Unit tests driver results for Cisco ML2 Nexus."""
test_results = {
# set 1 - switch 1.1.1.1 sets eth 1/10 & 1/20 to None
# set 2 - switch 8.8.8.8 sets eth 1/10 & 1/20 to None
# set 3 - switch 4.4.4.4 sets eth 1/3 & portchannel 2 to None
# set 4 - switch 2.2.2.2 sets portchannel 2 to None
# set 5 - switch 6.6.6.6 sets portchannel 2 to None
# set 6 - switch 7.7.7.7 sets portchannel 2 to None
'duplicate_init_port_driver_result1': [
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/20]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/20]'),
base.NEXUS_IP_ADDRESS_3,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/3]'),
base.NEXUS_IP_ADDRESS_DUAL,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_DUAL,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_6,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po2]'),
base.NEXUS_IP_ADDRESS_7,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_8,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/20]'),
base.NEXUS_IP_ADDRESS_8,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf',
snipp.BODY_PORT_CH_MODE, '')),
base.POST],
],
}
GET_INTERFACE_NO_TRUNK_RESPONSE = {
"totalCount": "1",
"imdata": [
{
"l1PhysIf": {
"attributes": {
"trunkVlans": "1-4094"
}
}
}
]
}
GET_INTERFACE_PCHAN_NO_TRUNK_RESPONSE = {
"totalCount": "1",
"imdata": [
{
"pcAggrIf": {
"attributes": {
"trunkVlans": "1-4094"
}
}
}
]
}
# Skipped inheriting event class TestCiscoNexusDeviceFailure
# since some tests are generic and need not be executed twice
# and some apply only to SSH driver.
class TestCiscoNexusRestDeviceInit(
test_cisco_nexus_events.TestCiscoNexusDeviceInit):
"""Verifies interface vlan allowed none is set when missing."""
def get_init_side_effect(
self, action, ipaddr=None, body=None, headers=None):
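# Mocked rest_get dispatcher: return a canned Nexus response keyed off
# the requested URI so driver initialization sees a predictable switch.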
eth_path = 'api/mo/sys/intf/phys-'
port_chan_path = 'api/mo/sys/intf/aggr-'
if action == snipp.PATH_GET_NEXUS_TYPE:
return base.GET_NEXUS_TYPE_RESPONSE
elif action in snipp.PATH_GET_PC_MEMBERS:
return base.GET_NO_PORT_CH_RESPONSE
elif eth_path in action:
return GET_INTERFACE_NO_TRUNK_RESPONSE
elif port_chan_path in action:
return GET_INTERFACE_PCHAN_NO_TRUNK_RESPONSE
return {}
def restapi_mock_init(self):
# this is to prevent interface initialization from occurring
# which adds unnecessary noise to the results.
data_json = {'rest_get.side_effect':
self.get_init_side_effect}
self.mock_ncclient.configure_mock(**data_json)
def setUp(self):
"""Sets up mock ncclient, and switch and credentials dictionaries."""
cfg.CONF.set_override('switch_heartbeat_time', 0, 'ml2_cisco')
# Call Grandfather's setUp(); otherwise parent will set driver to
# 'ncclient' instead of 'restapi'.
super(test_cisco_nexus_events.TestCiscoNexusDeviceInit, self).setUp()
self.results = TestCiscoNexusRestInitResults()
def test_verify_initialization(self):
self._verify_results(
self.results.get_test_results(
'duplicate_init_port_driver_result1'))
class TestCiscoNexusRestBaremetalResults(base.TestCiscoNexusBaseResults):
"""Unit tests driver results for Cisco ML2 Nexus."""
test_results = {
'add_port_ethernet_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_NATIVE_TRUNKVLAN % (
'l1PhysIf', '', '+267', 'vlan-267')),
base.POST]
],
'delete_port_ethernet_driver_result': [
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_NATIVE_TRUNKVLAN % ('l1PhysIf', '', '-267', '')),
base.POST],
[(snipp.PATH_VLAN % '267'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE]
],
'add_vm_port_ethernet_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 265),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '+265')),
base.POST]
],
'delete_vm_port_ethernet_driver_result': [
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('l1PhysIf', '', '-265')),
base.POST],
[(snipp.PATH_VLAN % '265'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE]
],
'add_port_channel_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '+267')),
base.POST]
],
'delete_port_channel_driver_result': [
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '-267')),
base.POST],
[(snipp.PATH_VLAN % '267'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE]
],
'add_port_ethernet_native_driver_result': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 265),
base.POST],
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_NATIVE_TRUNKVLAN % (
'l1PhysIf', '', '+265', 'vlan-265')),
base.POST]
],
'delete_port_ethernet_native_driver_result': [
[(snipp.PATH_IF % 'phys-[eth1/10]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_NATIVE_TRUNKVLAN % ('l1PhysIf', '', '-265', '')),
base.POST],
[(snipp.PATH_VLAN % '265'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE]
],
'driver_result_unique_vPC_add1': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_NATIVE_TRUNKVLAN % (
'pcAggrIf', '', '+267', 'vlan-267')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_NATIVE_TRUNKVLAN % (
'pcAggrIf', '', '+267', 'vlan-267')),
base.POST]
],
'driver_result_unique_vPC_del1': [
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_NATIVE_TRUNKVLAN % ('pcAggrIf', '', '-267', '')),
base.POST],
[(snipp.PATH_VLAN % '267'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE],
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_NATIVE_TRUNKVLAN % ('pcAggrIf', '', '-267', '')),
base.POST],
[(snipp.PATH_VLAN % '267'),
base.NEXUS_IP_ADDRESS_2,
'',
base.DELETE]
],
'driver_result_unique_vPC_add1_vm': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 265),
base.POST],
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '+265')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_VLAN_ADD % 265),
base.POST],
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '+265')),
base.POST]
],
'driver_result_unique_vPC_del1_vm': [
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '-265')),
base.POST],
[(snipp.PATH_VLAN % '265'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE],
[(snipp.PATH_IF % 'aggr-[po469]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '-265')),
base.POST],
[(snipp.PATH_VLAN % '265'),
base.NEXUS_IP_ADDRESS_2,
'',
base.DELETE]
],
'driver_result_unique_auto_vPC_vm_add1': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 265),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '+265')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_VLAN_ADD % 265),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '+265')),
base.POST]
],
'driver_result_unique_auto_vPC_vm_del1': [
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '-265')),
base.POST],
[(snipp.PATH_VLAN % '265'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_TRUNKVLAN % ('pcAggrIf', '', '-265')),
base.POST],
[(snipp.PATH_VLAN % '265'),
base.NEXUS_IP_ADDRESS_2,
'',
base.DELETE]
],
'driver_result_unique_auto_vPC_add1': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_ADD_PORT_CH % (1001, 1001, 1001)),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_ADD_PORT_CH_P2 % (1001, 1001)),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_ADD_CH_GRP % (1001, 1001, 'phys-[eth1/10]')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % (
'pcAggrIf', snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_ADD_PORT_CH % (1001, 1001, 1001)),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_ADD_PORT_CH_P2 % (1001, 1001)),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_ADD_CH_GRP % (1001, 1001, 'phys-[eth1/20]')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_TRUNKVLAN % (
'pcAggrIf', snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_NATIVE_TRUNKVLAN % (
'pcAggrIf', '', '+267', 'vlan-267')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_NATIVE_TRUNKVLAN % (
'pcAggrIf', '', '+267', 'vlan-267')),
base.POST]
],
'driver_result_unique_auto_vPC_del1': [
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_NATIVE_TRUNKVLAN % ('pcAggrIf', '', '-267', '')),
base.POST],
[(snipp.PATH_VLAN % '267'),
base.NEXUS_IP_ADDRESS_1,
'',
base.DELETE],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_DEL_CH_GRP % ('1001', 'phys-[eth1/10]')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_DEL_PORT_CH % ('1001')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_NATIVE_TRUNKVLAN % ('pcAggrIf', '', '-267', '')),
base.POST],
[(snipp.PATH_VLAN % '267'),
base.NEXUS_IP_ADDRESS_2,
'',
base.DELETE],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_DEL_CH_GRP % ('1001', 'phys-[eth1/20]')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_DEL_PORT_CH % ('1001')),
base.POST]
],
'driver_result_unique_auto_vPC_inconsistency_failure': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_ADD_PORT_CH % (1001, 1001, 1001)),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_ADD_PORT_CH_P2 % (1001, 1001)),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_ADD_CH_GRP % (1001, 1001, 'phys-[eth1/10]')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % (
'pcAggrIf', snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_DEL_CH_GRP % ('1001', 'phys-[eth1/10]')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_DEL_PORT_CH % ('1001')),
base.POST]
],
'driver_result_unique_auto_vPC_add_usr_cmd_rest': [
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_ADD_PORT_CH % (1001, 1001, 1001)),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_ADD_CH_GRP % (1001, 1001, 'phys-[eth1/10]')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_TRUNKVLAN % (
'pcAggrIf', snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_ADD_PORT_CH % (1001, 1001, 1001)),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_ADD_CH_GRP % (1001, 1001, 'phys-[eth1/20]')),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_TRUNKVLAN % (
'pcAggrIf', snipp.BODY_PORT_CH_MODE, '')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_1,
(snipp.BODY_NATIVE_TRUNKVLAN % (
'pcAggrIf', '', '+267', 'vlan-267')),
base.POST],
[snipp.PATH_ALL,
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_VLAN_ADD % 267),
base.POST],
[(snipp.PATH_IF % 'aggr-[po1001]'),
base.NEXUS_IP_ADDRESS_2,
(snipp.BODY_NATIVE_TRUNKVLAN % (
'pcAggrIf', '', '+267', 'vlan-267')),
base.POST],
],
'driver_result_unique_auto_vPC_add_usr_cmd_nxapi_cli': [
[snipp.PATH_USER_CMDS,
base.NEXUS_IP_ADDRESS_1,
"int port-channel 1001 ;spanning-tree port type edge trunk "
";no lacp suspend-individual",
base.POST],
[snipp.PATH_USER_CMDS,
base.NEXUS_IP_ADDRESS_2,
"int port-channel 1001 ;spanning-tree port type edge trunk "
";no lacp suspend-individual",
base.POST],
],
}
GET_PORT_CH_RESPONSE = {
"totalCount": "4",
"imdata": [
{
"pcRsMbrIfs": {
"attributes": {
"parentSKey": "po1",
"tSKey": "eth1/11",
}
}
},
{
"pcRsMbrIfs": {
"attributes": {
"parentSKey": "po469",
"tSKey": "eth1/10",
}
}
},
{
"pcRsMbrIfs": {
"attributes": {
"parentSKey": "po2",
"tSKey": "eth1/12",
}
}
},
{
"pcRsMbrIfs": {
"attributes": {
"parentSKey": "po470",
"tSKey": "eth1/20",
}
}
}
]
}
class TestCiscoNexusRestBaremetalDevice(
test_cisco_nexus_events.TestCiscoNexusBaremetalDevice):
"""Tests for Cisco ML2 Nexus baremetal RESTAPI device driver."""
def get_init_side_effect(
self, action, ipaddr=None, body=None, headers=None):
eth_path = 'api/mo/sys/intf/phys-'
port_chan_path = 'api/mo/sys/intf/aggr-'
if action == snipp.PATH_GET_NEXUS_TYPE:
return base.GET_NEXUS_TYPE_RESPONSE
elif action in snipp.PATH_GET_PC_MEMBERS:
return GET_PORT_CH_RESPONSE
elif eth_path in action:
return base.GET_INTERFACE_RESPONSE
elif port_chan_path in action:
return base.GET_INTERFACE_PCHAN_RESPONSE
return {}
def get_init_side_effect2(
self, action, ipaddr=None, body=None, headers=None):
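# Variant dispatcher: reports no learned port-channel members and
# untrunked port-channels, so tests can exercise vPC auto-creation.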
eth_path = 'api/mo/sys/intf/phys-'
port_chan_path = 'api/mo/sys/intf/aggr-'
if action == snipp.PATH_GET_NEXUS_TYPE:
return base.GET_NEXUS_TYPE_RESPONSE
elif action in snipp.PATH_GET_PC_MEMBERS:
return base.GET_NO_PORT_CH_RESPONSE
elif eth_path in action:
return base.GET_INTERFACE_RESPONSE
elif port_chan_path in action:
return GET_INTERFACE_PCHAN_NO_TRUNK_RESPONSE
return {}
def _init_port_channel(self, which=1):
# Point the selected learned member at port-channel po469, then install
# the canned GET responses used during driver initialization.
GET_PORT_CH_RESPONSE['imdata'][which]['pcRsMbrIfs'][
'attributes']['parentSKey'] = 'po469'
data_json = {'rest_get.side_effect':
self.get_init_side_effect}
self.mock_ncclient.configure_mock(**data_json)
def setUp(self):
"""Sets up mock ncclient, and switch and credentials dictionaries."""
original_intersect = nxos_db._get_free_vpcids_on_switches
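# Wrap the free-vpcid lookup to return a sorted list, making vpc id
# allocation order deterministic across test runs.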
def new_get_free_vpcids_on_switches(nexus_ips):
intersect = list(original_intersect(nexus_ips))
intersect.sort()
return intersect
mock.patch.object(nxos_db,
'_get_free_vpcids_on_switches',
new=new_get_free_vpcids_on_switches).start()
cfg.CONF.set_override('switch_heartbeat_time', 0, 'ml2_cisco')
# Call Grandfather's setUp(); otherwise parent will set driver to
# 'ncclient' instead of 'restapi'.
super(test_cisco_nexus_events.TestCiscoNexusBaremetalDevice,
self).setUp()
self.results = TestCiscoNexusRestBaremetalResults()
def test_create_delete_basic_bm_ethernet_port_and_vm(self):
(super(TestCiscoNexusRestBaremetalDevice, self).
test_create_delete_basic_bm_ethernet_port_and_vm())
def test_create_delete_basic_port_channel(self):
"""Basic creation and deletion test of 1 learned port-channel."""
(super(TestCiscoNexusRestBaremetalDevice, self).
test_create_delete_basic_port_channel())
def test_create_delete_learn_vpc_and_vm(self):
(super(TestCiscoNexusRestBaremetalDevice, self).
test_create_delete_learn_vpc_and_vm())
def test_create_delete_basic_eth_port_is_native(self):
(super(TestCiscoNexusRestBaremetalDevice, self).
test_create_delete_basic_eth_port_is_native())
def test_create_delete_switch_ip_not_defined(self):
(super(TestCiscoNexusRestBaremetalDevice, self).
test_create_delete_switch_ip_not_defined())
def test_automated_port_channel_creation_deletion(self):
"""Basic creation and deletion test of 1 auto port-channel."""
data_json = {'rest_get.side_effect':
self.get_init_side_effect2}
self.mock_ncclient.configure_mock(**data_json)
switch_list = ['1.1.1.1', '2.2.2.2']
for switch_ip in switch_list:
cfg.CONF.set_override(
const.VPCPOOL, ('1001-1025, 1030'),
cfg.CONF.ml2_cisco.nexus_switches.get(switch_ip)._group)
self._cisco_mech_driver._initialize_vpc_alloc_pools()
self._basic_create_verify_port_vlan(
'test_config_vPC',
self.results.get_test_results(
'driver_result_unique_auto_vPC_add1'),
nbr_of_bindings=2)
for switch_ip in switch_list:
self.assertEqual(
25, len(nxos_db.get_free_switch_vpc_allocs(switch_ip)))
# Clean all the ncclient mock_calls so we can evaluate
# results of delete operations.
self.mock_ncclient.reset_mock()
self._basic_delete_verify_port_vlan(
'test_config_vPC',
self.results.get_test_results(
'driver_result_unique_auto_vPC_del1'))
for switch_ip in switch_list:
self.assertEqual(
26, len(nxos_db.get_free_switch_vpc_allocs(switch_ip)))
def test_create_delete_automated_vpc_and_vm(self):
"""Basic creation and deletion test of 2 auto port-channel and vm."""
data_json = {'rest_get.side_effect':
self.get_init_side_effect2}
self.mock_ncclient.configure_mock(**data_json)
switch_list = ['1.1.1.1', '2.2.2.2']
for switch_ip in switch_list:
cfg.CONF.set_override(
const.VPCPOOL, ('1001-1025, 1030'),
cfg.CONF.ml2_cisco.nexus_switches.get(switch_ip)._group)
self._cisco_mech_driver._initialize_vpc_alloc_pools()
self._basic_create_verify_port_vlan(
'test_config_vPC',
self.results.get_test_results(
'driver_result_unique_auto_vPC_add1'),
nbr_of_bindings=2)
# Clean all the ncclient mock_calls so we can evaluate
# results of delete operations.
self.mock_ncclient.reset_mock()
self._basic_create_verify_port_vlan(
'test_config_vm',
self.results.get_test_results(
'driver_result_unique_auto_vPC_vm_add1'),
nbr_of_bindings=4)
for switch_ip in switch_list:
self.assertEqual(
25, len(nxos_db.get_free_switch_vpc_allocs(switch_ip)))
self._basic_delete_verify_port_vlan(
'test_config_vm',
self.results.get_test_results(
'driver_result_unique_auto_vPC_vm_del1'),
nbr_of_bindings=2)
self._basic_delete_verify_port_vlan(
'test_config_vPC',
self.results.get_test_results(
'driver_result_unique_auto_vPC_del1'))
for switch_ip in switch_list:
self.assertEqual(
26, len(nxos_db.get_free_switch_vpc_allocs(switch_ip)))
def test_automated_port_channel_w_user_cfg(self):
"""Basic creation and deletion test of 1 auto port-channel."""
data_json = {'rest_get.side_effect':
self.get_init_side_effect2}
self.mock_ncclient.configure_mock(**data_json)
switch_list = ['1.1.1.1', '2.2.2.2']
for switch_ip in switch_list:
cfg.CONF.set_override(
const.VPCPOOL, ('1001-1025'),
cfg.CONF.ml2_cisco.nexus_switches.get(switch_ip)._group)
self._cisco_mech_driver._initialize_vpc_alloc_pools()
self._cfg_vPC_user_commands(
switch_list, "spanning-tree port type edge trunk ;no lacp "
"suspend-individual")
self._basic_create_verify_port_vlan(
'test_config_vPC',
self.results.get_test_results(
'driver_result_unique_auto_vPC_add_usr_cmd_rest'),
nbr_of_bindings=2)
self._verify_nxapi_results(
self.results.get_test_results(
'driver_result_unique_auto_vPC_add_usr_cmd_nxapi_cli'))
# Clean all the ncclient mock_calls so we can evaluate
# results of delete operations.
self.mock_ncclient.reset_mock()
self._basic_delete_verify_port_vlan(
'test_config_vPC',
self.results.get_test_results(
'driver_result_unique_auto_vPC_del1'))
for switch_ip in switch_list:
self.assertEqual(
25, len(nxos_db.get_free_switch_vpc_allocs(switch_ip)))
def test_failure_inconsistent_learned_chgrp(self):
"""Learning chgrp but different on both eth interfaces."""
# Clean all the ncclient mock_calls to clear exception
# and other mock_call history.
self.mock_ncclient.reset_mock()
LOCAL_GET_PORT_CH_RESPONSE = {
"totalCount": "2",
"imdata": [
{
"pcRsMbrIfs": {
"attributes": {
"parentSKey": "po469",
"tSKey": "eth1/10",
}
}
},
{
"pcRsMbrIfs": {
"attributes": {
"parentSKey": "po470",
"tSKey": "eth1/20",
}
}
}
]
}
def local_get_init_side_effect(
action, ipaddr=None, body=None, headers=None):
eth_path = 'api/mo/sys/intf/phys-'
port_chan_path = 'api/mo/sys/intf/aggr-'
if action == snipp.PATH_GET_NEXUS_TYPE:
return base.GET_NEXUS_TYPE_RESPONSE
elif action in snipp.PATH_GET_PC_MEMBERS:
return LOCAL_GET_PORT_CH_RESPONSE
elif eth_path in action:
return base.GET_INTERFACE_RESPONSE
elif port_chan_path in action:
return GET_INTERFACE_PCHAN_NO_TRUNK_RESPONSE
return {}
# Substitute init_port_channel() with the following
# since this is a one-time test scenario.
data_json = {'rest_get.side_effect':
local_get_init_side_effect}
self.mock_ncclient.configure_mock(**data_json)
e = self.assertRaises(exceptions.NexusVPCLearnedNotConsistent,
self._create_port,
self.test_configs[
'test_config_vPC'])
x = six.u(str(e))
self.assertIn("first interface 1.1.1.1, ethernet:1/10, vpc=469", x)
self.assertIn("second interface 2.2.2.2, ethernet:1/20, vpc=470", x)
def test_failure_inconsistent_new_chgrp(self):
"""Started as newly created chgrp but one if had chgrp configured."""
# For the first interface, Nexus reports no ch_grp,
# so treat this as a port-channel create.
# For the second interface, Nexus reports a ch_grp, so run the
# reset procedure, which checks that:
# - port-channel deleted from Nexus for first interface
# - ch_grp removed from Nexus on first interface
# - free-up vpcid allocated on first interface
# - raised cexc.NexusVPCExpectedNoChgrp
LOCAL_GET_PORT_CH_RESPONSE = {
"totalCount": "1",
"imdata": [
{
"pcRsMbrIfs": {
"attributes": {
"parentSKey": "po470",
"tSKey": "eth1/20",
}
}
}
]
}
def local_get_init_side_effect(
action, ipaddr=None, body=None, headers=None):
eth_path = 'api/mo/sys/intf/phys-'
port_chan_path = 'api/mo/sys/intf/aggr-'
if action == snipp.PATH_GET_NEXUS_TYPE:
return base.GET_NEXUS_TYPE_RESPONSE
elif action in snipp.PATH_GET_PC_MEMBERS:
return LOCAL_GET_PORT_CH_RESPONSE
elif eth_path in action:
return base.GET_INTERFACE_RESPONSE
elif port_chan_path in action:
return GET_INTERFACE_PCHAN_NO_TRUNK_RESPONSE
return {}
# Substitute init_port_channel() with the following
# since this is a one-time test scenario.
data_json = {'rest_get.side_effect':
local_get_init_side_effect}
self.mock_ncclient.configure_mock(**data_json)
switch_list = ['1.1.1.1', '2.2.2.2']
for switch_ip in switch_list:
nxos_db.init_vpc_entries(switch_ip,
self._make_vpc_list(1001, 1025))
allocs = nxos_db.get_free_switch_vpc_allocs(switch_ip)
self.assertEqual(len(allocs), 25)
e = self.assertRaises(exceptions.NexusVPCExpectedNoChgrp,
self._create_port,
self.test_configs[
'test_config_vPC'])
# Check that the appropriate strings appear in the exception message
x = six.u(str(e))
self.assertIn("first interface 1.1.1.1, ethernet:1/10, vpc=None", x)
self.assertIn("second interface 2.2.2.2, ethernet:1/20, vpc=470", x)
# Verify vpcid initially allocated is now free
for switch_ip in switch_list:
allocs = nxos_db.get_free_switch_vpc_allocs(switch_ip)
self.assertEqual(len(allocs), 25)
# Verify no attempt to create port-channels
self._verify_results([])
def test_vpcids_depleted_failure(self):
"""Verifies exception when failed to get vpcid."""
# Clean all the ncclient mock_calls to clear exception
# and other mock_call history.
self.mock_ncclient.reset_mock()
def new_alloc_vpcid(nexus_ip_list):
return 0
mock.patch.object(nxos_db,
'alloc_vpcid',
new=new_alloc_vpcid).start()
e = self.assertRaises(exceptions.NexusVPCAllocFailure,
self._create_port,
self.test_configs[
'test_config_vPC'])
x = six.u(str(e))
self.assertIn("switches=1.1.1.1,2.2.2.2", x)
# Clean all the ncclient mock_calls to clear exception
# and other mock_call history.
self.mock_ncclient.reset_mock()
class TestCiscoNexusBaremetalVPCConfig(base.TestCiscoNexusBase,
test_cisco_nexus_events.
TestCiscoNexusDeviceConfig,
TestCiscoNexusRestDeviceResults):
"""Unit tests for Cisco ML2 Nexus baremetal VPC Config.
The purpose of this test case is to validate vpc pool initialization.
If vpc-pool is configured, it will be compared with what currently
exists in the vpc pool data base. Adds and removals of the data base
will occur. Removal will not occur if the entry is active.
"""
def setUp(self):
super(TestCiscoNexusBaremetalVPCConfig, self).setUp()
self.mock_ncclient.reset_mock()
def _run_vpc_config_test(self, switch_ip, config, count_in,
min_in, max_in):
"""Config vpc-pool config with garbage. log & no db entries."""
cfg.CONF.set_override(
const.VPCPOOL, config,
cfg.CONF.ml2_cisco.nexus_switches.get(switch_ip)._group)
self._cisco_mech_driver._initialize_vpc_alloc_pools()
# Verify get_switch_vpc_count_min_max() returns correct
# count, min, max values for switches.
count, min, max = nxos_db.get_switch_vpc_count_min_max(
switch_ip)
self.assertEqual(count, count_in)
self.assertEqual(min, min_in)
self.assertEqual(max, max_in)
def test_vpc_config_db_results_bad_config1(self):
"""Config vpc-pool config with garbage. log & no db entries."""
self._run_vpc_config_test('1.1.1.1', 'blahblahblah', 0, None, None)
def test_vpc_config_db_results_bad_config2(self):
"""Config vpc-pool config with bad range. log & no db entries."""
self._run_vpc_config_test('1.1.1.1', '5-7-9,1', 0, None, None)
def test_vpc_config_db_results_bad_config3(self):
"""Config vpc-pool config with bad digits. log & no db entries."""
self._run_vpc_config_test('1.1.1.1', '5-abc,1', 0, None, None)
def test_vpc_config_db_results_bad_vpc_range(self):
"""Config vpc-pool config with bad min/max values."""
# bad_min = 0-5
bad_min = str(const.MINVPC - 1) + '-5'
self._run_vpc_config_test('1.1.1.1', bad_min, 0, None, None)
# bad_max = 4096-4097
bad_max = str(const.MAXVPC) + '-' + str(const.MAXVPC + 1)
self._run_vpc_config_test('1.1.1.1', bad_max, 0, None, None)
def test_vpc_config_db_results_bad_config_keep_old(self):
"""Verify on config error, existing db entries stay intact."""
old_list = [1, 6, 8, 11]
# Pretend these already existed and make 8 active
nxos_db.init_vpc_entries('1.1.1.1', old_list)
nxos_db.update_vpc_entry(['1.1.1.1'], 8, True, True)
# valid port-channel values are 1-4096 on Nexus 9K
# ERROR: range starts with 0
bad_min = str(const.MINVPC - 1) + '-1001, 1002'
self._run_vpc_config_test('1.1.1.1', bad_min, 4, 1, 11)
def test_vpc_config_db_results_removal(self):
"""Allow user to remove config but only non-active."""
# 1 no add, already exists
# 6 remove not active
# 8 no remove, ACTIVE
# 11 no add, already exists
old_list = [1, 6, 8, 11]
# Pretend these already existed and make 8 active
nxos_db.init_vpc_entries('1.1.1.1', old_list)
nxos_db.update_vpc_entry(['1.1.1.1'], 8, True, True)
self._run_vpc_config_test('1.1.1.1', '', 1, 8, 8)
# Make 8 inactive and try again.
nxos_db.update_vpc_entry(['1.1.1.1'], 8, False, False)
self._run_vpc_config_test('1.1.1.1', '', 0, None, None)
def test_vpc_config_db_results_good_config_not_range(self):
"""Config valid vpc-pool not range config. """
self._run_vpc_config_test('1.1.1.1', '1,3,5', 3, 1, 5)
def test_vpc_config_db_results_good_config_range(self):
"""Config valid vpc-pool range config. """
self._run_vpc_config_test('1.1.1.1', '1-5', 5, 1, 5)
def test_vpc_config_db_results_good_config_all(self):
"""Config valid vpc-pool range config. Test Min/Max vpc value."""
# test_range_limits = 1-5,4096
test_range_limits = str(const.MINVPC) + '-5,' + str(const.MAXVPC)
self._run_vpc_config_test('1.1.1.1', test_range_limits,
6, const.MINVPC, const.MAXVPC)
def test_vpc_config_db_results_with_old_config1(self):
"""Config valid vpc-pool compare with pre-existing entries."""
# 1 will be removed,
# 3 no add, already exists
# 4 no add, already exists
# 11 will not be removed since active
old_list = [1, 3, 4, 11]
# Pretend these already existed and make 11 active
nxos_db.init_vpc_entries('1.1.1.1', old_list)
nxos_db.update_vpc_entry(['1.1.1.1'], 11, True, True)
self._run_vpc_config_test('1.1.1.1', '2-5, 8', 6, 2, 11)
def test_vpc_config_db_results_with_old_config2(self):
"""Config valid vpc-pool compare with pre-existing entries."""
# 1 no add, already exists
# 6 remove not active
# 8 no remove, ACTIVE
# 11 no add, already exists
old_list = [1, 6, 8, 11]
# Pretend these already existed and make 8 active
nxos_db.init_vpc_entries('1.1.1.1', old_list)
nxos_db.update_vpc_entry(['1.1.1.1'], 8, True, True)
self._run_vpc_config_test('1.1.1.1', '1-4, 9, 11', 7, 1, 11)
def test_vpc_config_db_results_with_old_config3(self):
"""Config valid vpc-pool compare with pre-existing entries."""
# 1 no add, already exists
# 11 no add, already exists
old_list = [1, 6, 8, 11]
# Pretend these already existed and make 8 active
nxos_db.init_vpc_entries('1.1.1.1', old_list)
self._run_vpc_config_test('1.1.1.1', '1-4, 6-9, 11', 9, 1, 11)
# Skipped inheriting event class TestCiscoNexusNonCacheSshDevice
# since it does not apply to REST API
| 35.936429 | 78 | 0.558248 | 5,824 | 50,311 | 4.490213 | 0.080872 | 0.046117 | 0.052159 | 0.08535 | 0.822187 | 0.793239 | 0.766816 | 0.722879 | 0.704141 | 0.678636 | 0 | 0.039697 | 0.330067 | 50,311 | 1,399 | 79 | 35.962116 | 0.736182 | 0.120173 | 0 | 0.762984 | 0 | 0 | 0.116583 | 0.041679 | 0 | 0 | 0 | 0 | 0.016997 | 1 | 0.045326 | false | 0 | 0.008499 | 0.000944 | 0.088763 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
675e3781b1312ede8872e02713bda2439e898125 | 171 | py | Python | snapx/snapx/algorithms/__init__.py | ruth-ann/snap-python | fe98de7b5697b3d60eb3497893e24801ae1916f9 | [
"BSD-3-Clause"
] | 242 | 2015-01-01T08:40:28.000Z | 2022-03-18T05:22:09.000Z | snapx/snapx/algorithms/__init__.py | ruth-ann/snap-python | fe98de7b5697b3d60eb3497893e24801ae1916f9 | [
"BSD-3-Clause"
] | 99 | 2015-01-24T07:55:27.000Z | 2021-10-30T18:20:13.000Z | snapx/snapx/algorithms/__init__.py | ruth-ann/snap-python | fe98de7b5697b3d60eb3497893e24801ae1916f9 | [
"BSD-3-Clause"
] | 105 | 2015-03-03T06:45:17.000Z | 2022-02-24T15:52:40.000Z | from snapx.algorithms.centrality import *
from snapx.algorithms.community import *
from snapx.algorithms.components import *
from snapx.algorithms.shortest_paths import *
| 34.2 | 45 | 0.836257 | 21 | 171 | 6.761905 | 0.428571 | 0.253521 | 0.535211 | 0.528169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093567 | 171 | 4 | 46 | 42.75 | 0.916129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
c03050115a24fd33ec99b794dc39a3c1c5298fb0 | 3,092 | py | Python | L1Trigger/L1THGCalUtilities/python/clustering2d.py | bisnupriyasahu/cmssw | 6cf37ca459246525be0e8a6f5172c6123637d259 | [
"Apache-2.0"
] | 1 | 2019-08-09T08:42:11.000Z | 2019-08-09T08:42:11.000Z | L1Trigger/L1THGCalUtilities/python/clustering2d.py | bisnupriyasahu/cmssw | 6cf37ca459246525be0e8a6f5172c6123637d259 | [
"Apache-2.0"
] | null | null | null | L1Trigger/L1THGCalUtilities/python/clustering2d.py | bisnupriyasahu/cmssw | 6cf37ca459246525be0e8a6f5172c6123637d259 | [
"Apache-2.0"
] | null | null | null | import FWCore.ParameterSet.Config as cms
def create_distance(process, inputs,
distance=6.,# cm
seed_threshold=5.,# MipT
cluster_threshold=2.# MipT
):
producer = process.hgcalBackEndLayer1Producer.clone()
producer.ProcessorParameters.C2d_parameters.seeding_threshold_silicon = cms.double(seed_threshold)
producer.ProcessorParameters.C2d_parameters.seeding_threshold_scintillator = cms.double(seed_threshold)
producer.ProcessorParameters.C2d_parameters.clustering_threshold_silicon = cms.double(cluster_threshold)
producer.ProcessorParameters.C2d_parameters.clustering_threshold_scintillator = cms.double(cluster_threshold)
producer.ProcessorParameters.C2d_parameters.dR_cluster = cms.double(distance)
producer.ProcessorParameters.C2d_parameters.clusterType = cms.string('dRC2d')
producer.InputTriggerCells = cms.InputTag('{}:HGCalConcentratorProcessorSelection'.format(inputs))
return producer
def create_topological(process, inputs,
seed_threshold=5.,# MipT
cluster_threshold=2.# MipT
):
producer = process.hgcalBackEndLayer1Producer.clone()
producer.ProcessorParameters.C2d_parameters.seeding_threshold_silicon = cms.double(seed_threshold) # MipT
producer.ProcessorParameters.C2d_parameters.seeding_threshold_scintillator = cms.double(seed_threshold) # MipT
producer.ProcessorParameters.C2d_parameters.clustering_threshold_silicon = cms.double(cluster_threshold) # MipT
producer.ProcessorParameters.C2d_parameters.clustering_threshold_scintillator = cms.double(cluster_threshold) # MipT
producer.ProcessorParameters.C2d_parameters.clusterType = cms.string('NNC2d')
producer.InputTriggerCells = cms.InputTag('{}:HGCalConcentratorProcessorSelection'.format(inputs))
return producer
def create_constrainedtopological(process, inputs,
distance=6.,# cm
seed_threshold=5.,# MipT
cluster_threshold=2.# MipT
):
producer = process.hgcalBackEndLayer1Producer.clone()
producer.ProcessorParameters.C2d_parameters.seeding_threshold_silicon = cms.double(seed_threshold) # MipT
producer.ProcessorParameters.C2d_parameters.seeding_threshold_scintillator = cms.double(seed_threshold) # MipT
producer.ProcessorParameters.C2d_parameters.clustering_threshold_silicon = cms.double(cluster_threshold) # MipT
producer.ProcessorParameters.C2d_parameters.clustering_threshold_scintillator = cms.double(cluster_threshold) # MipT
producer.ProcessorParameters.C2d_parameters.dR_cluster = cms.double(distance) # cm
producer.ProcessorParameters.C2d_parameters.clusterType = cms.string('dRNNC2d')
producer.InputTriggerCells = cms.InputTag('{}:HGCalConcentratorProcessorSelection'.format(inputs))
return producer
def create_dummy(process, inputs):
producer = process.hgcalBackEndLayer1Producer.clone()
producer.ProcessorParameters.C2d_parameters.clusterType = cms.string('dummyC2d')
producer.InputTriggerCells = cms.InputTag('{}:HGCalConcentratorProcessorSelection'.format(inputs))
return producer
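# Hedged usage sketch (the module label 'hgcalConcentratorProducer' and the
# attribute being replaced are assumptions, not taken from this file):
#
#   process.hgcalBackEndLayer1Producer = create_distance(
#       process, 'hgcalConcentratorProducer', distance=3., seed_threshold=10.)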
| 59.461538 | 120 | 0.796572 | 299 | 3,092 | 8.016722 | 0.150502 | 0.202753 | 0.225282 | 0.300375 | 0.940342 | 0.940342 | 0.940342 | 0.873592 | 0.837714 | 0.758865 | 0 | 0.012463 | 0.117723 | 3,092 | 51 | 121 | 60.627451 | 0.866202 | 0.025226 | 0 | 0.804348 | 0 | 0 | 0.059079 | 0.050734 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.021739 | 0 | 0.195652 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
c048a9a19d10f98ee496ba046c0cceda35eace71 | 43,379 | py | Python | tests/test_nested_containers.py | gwenzek/omegaconf | 0ff8a401739d00b01d88408c262a0f061ff3be68 | [
"BSD-3-Clause"
] | null | null | null | tests/test_nested_containers.py | gwenzek/omegaconf | 0ff8a401739d00b01d88408c262a0f061ff3be68 | [
"BSD-3-Clause"
] | null | null | null | tests/test_nested_containers.py | gwenzek/omegaconf | 0ff8a401739d00b01d88408c262a0f061ff3be68 | [
"BSD-3-Clause"
] | null | null | null | import copy
import re
from typing import Any, Dict, List, Optional, Union
from pytest import mark, param, raises
from omegaconf import (
MISSING,
Container,
DictConfig,
IntegerNode,
KeyValidationError,
ListConfig,
Node,
OmegaConf,
ValidationError,
)
from omegaconf._utils import (
ValueKind,
_ensure_container,
_resolve_optional,
get_value_kind,
is_dict_annotation,
is_list_annotation,
is_structured_config,
)
from tests import ConcretePlugin, Plugin
def check_node_metadata(
node: Container,
type_hint: Any,
key_type: Any,
elt_type: Any,
obj_type: Any,
) -> None:
value_optional, value_ref_type = _resolve_optional(type_hint)
assert node._metadata.optional == value_optional
assert node._metadata.ref_type == value_ref_type
assert node._metadata.key_type == key_type
assert node._metadata.element_type == elt_type
assert node._metadata.object_type == obj_type
if is_dict_annotation(value_ref_type) or is_structured_config(value_ref_type):
assert isinstance(node, DictConfig)
elif is_list_annotation(value_ref_type):
assert isinstance(node, ListConfig)
def check_subnode(
cfg: Container,
key: Any,
value: Any,
type_hint: Any,
key_type: Any,
elt_type: Any,
obj_type: Any,
) -> None:
"Validate that `cfg[key] == value` and that subnode `cfg._get_node(key)._metadata` is correct."
node = cfg._get_node(key)
assert isinstance(node, (ListConfig, DictConfig))
vk = get_value_kind(node)
if vk in (ValueKind.MANDATORY_MISSING, ValueKind.INTERPOLATION):
if isinstance(value, Node):
value = value._value()
assert node._value() == value
else:
assert cfg[key] == value
check_node_metadata(node, type_hint, key_type, elt_type, obj_type)
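# Example (a sketch): with cfg = DictConfig({"foo": [1]}, element_type=List[int]),
# check_subnode(cfg, "foo", [1], List[int], int, int, list) passes: the subnode
# at "foo" is a ListConfig holding int elements.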
@mark.parametrize(
"cfg, type_hint, key_type, elt_type, obj_type",
[
param(
ListConfig([[[456]]], element_type=List[List[int]]),
List[List[int]],
int,
List[int],
list,
id="list-list-list",
),
param(
ListConfig([{"foo": {"bar": 456}}], element_type=Dict[str, Dict[str, int]]),
Dict[str, Dict[str, int]],
str,
Dict[str, int],
dict,
id="list-dict-dict",
),
param(
ListConfig([[123], None], element_type=Optional[List[int]]),
Optional[List[int]],
int,
int,
list,
id="list-optional-list",
),
param(
ListConfig([[123], [None]], element_type=List[Optional[int]]),
List[Optional[int]],
int,
Optional[int],
list,
id="list-list-optional",
),
param(
ListConfig([{"bar": 456}, None], element_type=Optional[Dict[str, int]]),
Optional[Dict[str, int]],
str,
int,
dict,
id="list-optional-dict",
),
param(
ListConfig(
[{"foo": 456}, {"bar": None}], element_type=Dict[str, Optional[int]]
),
Dict[str, Optional[int]],
str,
Optional[int],
dict,
id="list-dict-optional",
),
param(
DictConfig({"foo": [[456]]}, element_type=List[List[int]]),
List[List[int]],
int,
List[int],
list,
id="dict-list-list",
),
param(
DictConfig(
{"foo": {"bar": {"baz": 456}}}, element_type=Dict[str, Dict[str, int]]
),
Dict[str, Dict[str, int]],
str,
Dict[str, int],
dict,
id="dict-dict-dict",
),
param(
DictConfig({"foo": [123], "bar": None}, element_type=Optional[List[int]]),
Optional[List[int]],
int,
int,
list,
id="dict-optional-list",
),
param(
DictConfig({"foo": [123], "bar": [None]}, element_type=List[Optional[int]]),
List[Optional[int]],
int,
Optional[int],
list,
id="dict-list-optional",
),
param(
DictConfig(
{"foo": {"bar": 456}, "baz": None},
element_type=Optional[Dict[str, int]],
),
Optional[Dict[str, int]],
str,
int,
dict,
id="dict-optional-dict",
),
param(
DictConfig(
{"foo": {"bar": 456}, "baz": {"qux": None}},
element_type=Dict[str, Optional[int]],
),
Dict[str, Optional[int]],
str,
Optional[int],
dict,
id="dict-dict-optional",
),
param(
DictConfig(
{"foo": {"bar": ConcretePlugin()}}, element_type=Dict[str, Plugin]
),
Dict[str, Plugin],
str,
Plugin,
dict,
id="dict-of-plugin",
),
param(
DictConfig({"foo": [ConcretePlugin()]}, element_type=List[Plugin]),
List[Plugin],
int,
Plugin,
list,
id="list-of-plugin",
),
],
)
def test_container_nested_element(
cfg: Union[DictConfig, ListConfig],
type_hint: Any,
key_type: Any,
elt_type: Any,
obj_type: Any,
) -> None:
"""Ensure metadata and contents of container-typed subnode are correct"""
cfg = copy.deepcopy(cfg)
keys: Any = range(len(cfg)) if isinstance(cfg, ListConfig) else cfg.keys()
for key in keys:
value = cfg[key]
check_subnode(
cfg,
key,
value,
type_hint,
key_type,
elt_type,
obj_type if value is not None else None,
)
@mark.parametrize(
"cfg, value, type_hint, key_type, elt_type, obj_type",
[
param(
ListConfig([[[456]]], element_type=List[List[int]]),
[[123]],
List[List[int]],
int,
List[int],
list,
id="assign-to-list-element",
),
param(
ListConfig([{"foo": {"bar": 456}}], element_type=Dict[str, Dict[str, int]]),
{"baz": {"qux": 123}},
Dict[str, Dict[str, int]],
str,
Dict[str, int],
dict,
id="assign-to-dict-element",
),
param(
ListConfig([[123], None], element_type=Optional[List[int]]),
[456],
Optional[List[int]],
int,
int,
list,
id="assign-list-to-optional-list",
),
param(
ListConfig([{"foo": 456}, None], element_type=Optional[Dict[str, int]]),
{"bar": 123},
Optional[Dict[str, int]],
str,
int,
dict,
id="assign-dict-to-optional-dict",
),
param(
ListConfig([[123], [None]], element_type=List[Optional[int]]),
[456],
List[Optional[int]],
int,
Optional[int],
list,
id="assign-list-to-list-optional",
),
param(
ListConfig([[123], [None]], element_type=List[Optional[int]]),
[None],
List[Optional[int]],
int,
Optional[int],
list,
id="assign-list-none-to-list-optional",
),
param(
ListConfig(
[{"foo": 456}, {"bar": None}], element_type=Dict[str, Optional[int]]
),
{"baz": 123},
Dict[str, Optional[int]],
str,
Optional[int],
dict,
id="assign-dict-to-dict-optional",
),
param(
ListConfig(
[{"foo": 456}, {"bar": None}], element_type=Dict[str, Optional[int]]
),
{"baz": None},
Dict[str, Optional[int]],
str,
Optional[int],
dict,
id="assign-dict-none-to-dict-optional",
),
param(
ListConfig([{"foo": ConcretePlugin()}], element_type=Dict[str, Plugin]),
{"bar": ConcretePlugin()},
Dict[str, Plugin],
str,
Plugin,
dict,
id="assign-dict-plugin",
),
param(
ListConfig([[ConcretePlugin()]], element_type=List[Plugin]),
[ConcretePlugin()],
List[Plugin],
int,
Plugin,
list,
id="assign-list-plugin",
),
],
)
@mark.parametrize(
"ensure_container",
[
param(True, id="container"),
param(False, id="no_container"),
],
)
def test_list_assign_to_container_typed_element(
cfg: ListConfig,
value: Any,
type_hint: Any,
key_type: Any,
elt_type: Any,
obj_type: Any,
ensure_container: bool,
) -> None:
cfg = copy.deepcopy(cfg)
if ensure_container:
value = _ensure_container(value)
n = len(cfg)
for idx in range(n):
cfg[idx] = value
check_subnode(cfg, idx, value, type_hint, key_type, elt_type, obj_type)
cfg.append(value)
check_subnode(cfg, n, value, type_hint, key_type, elt_type, obj_type)
@mark.parametrize(
"cfg, type_hint, key_type, elt_type",
[
param(
ListConfig([[123], None], element_type=Optional[List[int]]),
Optional[List[int]],
int,
int,
id="assign-to-optional-list",
),
param(
ListConfig([{"bar": 456}, None], element_type=Optional[Dict[str, int]]),
Optional[Dict[str, int]],
str,
int,
id="assign-to-optional-dict",
),
param(
ListConfig([[ConcretePlugin()], None], element_type=Optional[List[Plugin]]),
Optional[List[Plugin]],
int,
Plugin,
id="assign-to-optional-plugin-list",
),
param(
ListConfig(
[{"bar": ConcretePlugin()}, None],
element_type=Optional[Dict[str, Plugin]],
),
Optional[Dict[str, Plugin]],
str,
Plugin,
id="assign-to-optional-plugin-dict",
),
],
)
@mark.parametrize(
"value",
[
param(None, id="none"),
param(MISSING, id="missing"),
param("${interp}", id="interp"),
],
)
def test_list_assign_to_container_typed_element_special(
cfg: ListConfig,
value: Any,
type_hint: Any,
key_type: Any,
elt_type: Any,
) -> None:
cfg = copy.deepcopy(cfg)
n = len(cfg)
for idx in range(n):
cfg[idx] = value
check_subnode(cfg, idx, value, type_hint, key_type, elt_type, None)
cfg.append(value)
check_subnode(cfg, n, value, type_hint, key_type, elt_type, None)
@mark.parametrize(
"ensure_container",
[
param(True, id="container"),
param(False, id="no_container"),
],
)
@mark.parametrize(
"cfg, value, type_hint, key_type, elt_type, obj_type",
[
param(
DictConfig({"foo": [[456]]}, element_type=List[List[int]]),
[[123]],
List[List[int]],
int,
List[int],
list,
id="assign-to-list-element",
),
param(
DictConfig(
{"foo": {"bar": {"baz": 456}}}, element_type=Dict[str, Dict[str, int]]
),
{"qux": {"frob": 123}},
Dict[str, Dict[str, int]],
str,
Dict[str, int],
dict,
id="assign-to-dict-element",
),
param(
DictConfig({"foo": [123], "bar": None}, element_type=Optional[List[int]]),
[456],
Optional[List[int]],
int,
int,
list,
id="assign-list-to-optional-list",
),
param(
DictConfig(
{"foo": {"bar": 456}, "baz": None},
element_type=Optional[Dict[str, int]],
),
{"qux": 123},
Optional[Dict[str, int]],
str,
int,
dict,
id="assign-dict-to-optional-dict",
),
param(
DictConfig({"foo": [123], "bar": [None]}, element_type=List[Optional[int]]),
[456],
List[Optional[int]],
int,
Optional[int],
list,
id="assign-list-to-list-optional",
),
param(
DictConfig({"foo": [123], "bar": [None]}, element_type=List[Optional[int]]),
[None],
List[Optional[int]],
int,
Optional[int],
list,
id="assign-list-none-to-list-optional",
),
param(
DictConfig(
{"foo": {"bar": 456}, "baz": {"qux": None}},
element_type=Dict[str, Optional[int]],
),
{"frob": 123},
Dict[str, Optional[int]],
str,
Optional[int],
dict,
id="assign-dict-to-dict-optional",
),
param(
DictConfig(
{"foo": {"bar": 456}, "baz": {"qux": None}},
element_type=Dict[str, Optional[int]],
),
{"frob": None},
Dict[str, Optional[int]],
str,
Optional[int],
dict,
id="assign-dict-none-to-dict-optional",
),
param(
DictConfig({"foo": [ConcretePlugin()]}, element_type=List[Plugin]),
[ConcretePlugin()],
List[Plugin],
int,
Plugin,
list,
id="assign-to-list-of-plugins",
),
param(
DictConfig(
{"foo": {"bar": ConcretePlugin()}}, element_type=Dict[str, Plugin]
),
{"baz": ConcretePlugin()},
Dict[str, Plugin],
str,
Plugin,
dict,
id="assign-to-dict-of-plugins",
),
param(
DictConfig({"key": []}, element_type=List[int]),
DictConfig("${interp}"),
List[int],
int,
int,
None,
id="coerce-dictconfig-interp-to-listconfig",
),
param(
DictConfig({"key": {}}, element_type=Dict[str, int]),
ListConfig("${interp}"),
Dict[str, int],
str,
int,
None,
id="coerce-listconfig-interp-to-dictconfig",
),
param(
DictConfig({"key": []}, element_type=List[int]),
DictConfig("${interp}", ref_type=Dict[str, int]),
List[int],
int,
int,
None,
id="coerce-dictconfig-interp_with_ref-to-listconfig",
),
param(
DictConfig({"key": {}}, element_type=Dict[str, int]),
ListConfig("${interp}", ref_type=List[int]),
Dict[str, int],
str,
int,
None,
id="coerce-listconfig-interp_with_ref-to-dictconfig",
),
param(
DictConfig({"key": []}, element_type=List[int]),
DictConfig(MISSING),
List[int],
int,
int,
None,
id="coerce-dictconfig-missing-to-listconfig",
),
param(
DictConfig({"key": {}}, element_type=Dict[str, int]),
ListConfig(MISSING),
Dict[str, int],
str,
int,
None,
id="coerce-listconfig-missing-to-dictconfig",
),
param(
DictConfig({"key": []}, element_type=List[int]),
DictConfig(MISSING, ref_type=Optional[Dict[str, int]]),
List[int],
int,
int,
None,
id="coerce-dictconfig-missing_with_ref-to-listconfig",
),
param(
DictConfig({"key": {}}, element_type=Dict[str, int]),
ListConfig(MISSING, ref_type=Optional[List[int]]),
Dict[str, int],
str,
int,
None,
id="coerce-listconfig-missing_with_ref-to-dictconfig",
),
param(
DictConfig({"key": []}, element_type=Optional[List[int]]),
DictConfig(None),
Optional[List[int]],
int,
int,
None,
id="coerce-dictconfig-none-to-listconfig",
),
param(
DictConfig({"key": {}}, element_type=Optional[Dict[str, int]]),
ListConfig(None),
Optional[Dict[str, int]],
str,
int,
None,
id="coerce-listconfig-none-to-dictconfig",
),
param(
DictConfig({"key": []}, element_type=Optional[List[int]]),
DictConfig(None, ref_type=Optional[Dict[str, int]]),
Optional[List[int]],
int,
int,
None,
id="coerce-dictconfig-none_with_ref-to-listconfig",
),
param(
DictConfig({"key": {}}, element_type=Optional[Dict[str, int]]),
ListConfig(None, ref_type=Optional[List[int]]),
Optional[Dict[str, int]],
str,
int,
None,
id="coerce-listconfig-none_with_ref-to-dictconfig",
),
],
)
def test_dict_assign_to_container_typed_element(
cfg: DictConfig,
value: Any,
type_hint: Any,
key_type: Any,
elt_type: Any,
obj_type: Any,
ensure_container: bool,
) -> None:
cfg = copy.deepcopy(cfg)
if ensure_container:
value = _ensure_container(value)
for key in cfg:
cfg[key] = value
check_subnode(cfg, key, value, type_hint, key_type, elt_type, obj_type)
cfg["_new_key"] = value
check_subnode(cfg, "_new_key", value, type_hint, key_type, elt_type, obj_type)
@mark.parametrize(
"dc,value",
[
param(DictConfig({"key": 123}, element_type=int), 456, id="int"),
param(DictConfig({"key": [123]}, element_type=List[int]), [456], id="list"),
param(
DictConfig({"key": {"foo": 123}}, element_type=Dict[str, int]),
{"baz": 456},
id="dict",
),
],
)
@mark.parametrize("overwrite_preexisting_key", [True, False])
def test_setitem_valid_element_type(
dc: DictConfig, value: Any, overwrite_preexisting_key: bool
) -> None:
dc = copy.deepcopy(dc)
if not overwrite_preexisting_key:
del dc["key"]
dc["key"] = value
assert dc["key"] == value
@mark.parametrize(
"cfg, type_hint, key_type, elt_type",
[
param(
DictConfig({"foo": [123], "bar": None}, element_type=Optional[List[int]]),
Optional[List[int]],
int,
int,
id="assign-to-optional-list",
),
param(
DictConfig(
{"foo": {"bar": 456}, "baz": None},
element_type=Optional[Dict[str, int]],
),
Optional[Dict[str, int]],
str,
int,
id="assign-to-optional-dict",
),
],
)
@mark.parametrize(
"value",
[
param(None, id="none"),
param(MISSING, id="missing"),
param("${interp}", id="interp"),
],
)
def test_dict_assign_to_container_typed_element_special(
cfg: DictConfig,
value: Any,
type_hint: Any,
key_type: Any,
elt_type: Any,
) -> None:
cfg = copy.deepcopy(cfg)
for key in cfg:
cfg[key] = value
check_subnode(cfg, key, value, type_hint, key_type, elt_type, None)
cfg["_new_key"] = value
check_subnode(cfg, "_new_key", value, type_hint, key_type, elt_type, None)
@mark.parametrize(
"ensure_container",
[
param(True, id="container"),
param(False, id="no_container"),
],
)
@mark.parametrize(
"overwrite_preexisting_key",
[
param(True, id="overwrite"),
param(False, id="no_overwrite"),
],
)
@mark.parametrize(
"dc,value,err_msg",
[
param(
DictConfig({"key": 123}, element_type=int),
"foo",
re.escape("Value 'foo' of type 'str' could not be converted to Integer"),
id="assign-str-to-int",
),
param(
DictConfig({"key": [123]}, element_type=List[int]),
"foo",
re.escape("Invalid value assigned: str is not a ListConfig, list or tuple"),
id="assign-str-to-list[int]",
),
param(
DictConfig({"key": {"foo": 123}}, element_type=Dict[str, int]),
"bar",
re.escape("Cannot assign str to Dict[str, int]"),
id="assign-str-to-list[int]",
),
param(
DictConfig({"key": [123]}, element_type=List[int]),
None,
re.escape("field 'key' is not Optional"),
id="assign-none-to-list[int]",
),
param(
DictConfig({"key": [123]}, element_type=List[int]),
456,
re.escape("Invalid value assigned: int is not a ListConfig, list or tuple"),
id="assign-int-to-list[int]",
),
param(
DictConfig({"key": [123]}, element_type=List[int]),
["foo"],
r"(Value 'foo' of type 'str' could not be converted to Integer)"
+ r"|(Value 'foo' \(str\) is incompatible with type hint 'int')",
id="assign-list[str]-to-list[int]",
),
param(
DictConfig({"key": [123]}, element_type=List[int]),
[None],
r"(Invalid type assigned: NoneType is not a subclass of int)"
+ r"|(Incompatible value 'None' for field of type 'int')",
id="assign-list[none]-to-list[int]",
),
param(
DictConfig({"key": [123]}, element_type=List[int]),
{"baz": 456},
r"(Invalid value assigned: dict is not a ListConfig, list or tuple)"
+ r"|(Invalid value assigned: DictConfig is not a ListConfig, list or tuple)"
+ r"|(Invalid value assigned: dict does not match type hint typing\.List\[int\])"
+ r"|('DictConfig' is incompatible with type hint 'typing\.List\[int\]')",
id="assign-dict[str-int]-to-list[int]]",
),
param(
DictConfig({"key": {"key2": 123}}, element_type=Dict[str, int]),
{"key2": "foo"},
r"(Value 'foo' \(str\) is incompatible with type hint 'int')"
+ r"|(Value 'foo' of type 'str' could not be converted to Integer)",
id="assign_dict[str_str]_to_dict[str_int]",
),
param(
DictConfig({"key": {"key2": 123}}, element_type=Dict[str, int]),
[],
r"(Cannot assign list to Dict\[str, int\])"
+ r"|('ListConfig' is incompatible with type hint 'typing.Dict\[str, int\]')",
id="assign_list_to_dict[str_int]",
),
param(
DictConfig({"key": [[123]]}, element_type=List[List[int]]),
[[456.789]],
r"(Value 456\.789 \(float\) is incompatible with type hint 'int')"
+ r"|(Value '456\.789' of type 'float' could not be converted to Integer)",
id="assign-list[list[float]]-to-list[list[int]]",
),
param(
DictConfig({"key": [[123]]}, element_type=List[List[int]]),
[[None]],
r"(Invalid type assigned: NoneType is not a subclass of int)"
+ r"|(Incompatible value 'None' for field of type 'int')",
id="assign-list[list[none]]-to-list[list[int]]",
),
param(
DictConfig({"key": [[123]]}, element_type=List[List[int]]),
[[IntegerNode(None)]],
r"(Value None \(NoneType\) is incompatible with type hint 'int')"
+ r"|(Incompatible value 'None' for field of type 'int')",
id="assign-list[list[typed-none]]-to-list[list[int]]",
),
param(
DictConfig({"key": [[123.456]]}, element_type=List[List[float]]),
[[IntegerNode(789)]],
re.escape("Value 789 (int) is incompatible with type hint 'float'"),
id="assign-list[list[typed-int]]-to-list[list[float]]",
),
param(
DictConfig(
{"key": {"foo": {"bar": 123}}}, element_type=Dict[str, Dict[str, int]]
),
{"foo": {"bar": 456.789}},
r"(Value 456\.789 \(float\) is incompatible with type hint 'int')"
+ r"|(Value '456\.789' of type 'float' could not be converted to Integer)",
id="assign-dict[str-[dict[str-float]]]-to-dict[str[dict[str-int]]]",
),
param(
DictConfig(
{"key": {"foo": {"bar": 123}}}, element_type=Dict[str, Dict[str, int]]
),
{"foo": {"bar2": 456.789}},
r"(Value 456\.789 \(float\) is incompatible with type hint 'int')"
+ r"|(Value '456\.789' of type 'float' could not be converted to Integer)",
id="assign-dict[str-[dict[str-float]]]-to-dict[str[dict[str-int]]]-2",
),
param(
DictConfig(
{"key": {"foo": {"bar": 123}}}, element_type=Dict[str, Dict[str, int]]
),
{"foo": {123: 456}},
r"(Key 123 \(int\) is incompatible with \(str\))"
+ r"|(Key 123 \(int\) is incompatible with key type hint 'str')",
id="assign-dict[str_[dict[int_int]]]-to-dict[str[dict[str_int]]]",
),
param(
DictConfig(
{"key": {"foo": {"bar": 123}}}, element_type=Dict[str, Dict[str, int]]
),
{"foo": {456: 789}},
r"(Key 456 \(int\) is incompatible with \(str\))"
+ r"|(Key 456 \(int\) is incompatible with key type hint 'str')",
id="assign-dict[str_[dict[int-int]]]-to-dict[str[dict[str_int]]]",
),
param(
DictConfig(
{"key": {"foo": {"bar": 123}}}, element_type=Dict[str, Dict[str, int]]
),
{"foo": {456: IntegerNode(None)}},
r"(Key 456 \(int\) is incompatible with \(str\))"
+ r"|(Key 456 \(int\) is incompatible with key type hint 'str')",
id="assign-dict[str_[dict[int-typed_none]]]-to-dict[str[dict[str_int]]]",
),
param(
DictConfig(
{"key": {"foo": {"bar": 123.456}}},
element_type=Dict[str, Dict[str, float]],
),
{"foo": {"bar": IntegerNode(789)}},
re.escape("Value 789 (int) is incompatible with type hint 'float'"),
id="assign-dict[str-[dict[int_typed-int]]]-to-dict[str[dict[str-float]]]",
),
param(
DictConfig(
{"key": {"foo": {"bar": 123.456}}},
element_type=Dict[str, Dict[str, float]],
),
{"foo": {"bar2": IntegerNode(789)}},
re.escape("Value 789 (int) is incompatible with type hint 'float'"),
id="assign-dict[str-[dict[int_typed-int]]]-to-dict[str[dict[str_float]]]-2",
),
],
)
def test_dict_setitem_invalid_element_type(
dc: DictConfig,
value: Any,
err_msg: str,
ensure_container: bool,
overwrite_preexisting_key: bool,
) -> None:
dc_orig = dc
dc = copy.deepcopy(dc)
if ensure_container:
if isinstance(value, (dict, list)):
value = _ensure_container(value)
else:
return # skip
if overwrite_preexisting_key:
with raises((ValidationError, KeyValidationError), match=err_msg):
dc["key"] = value
assert dc == dc_orig
else:
del dc["key"]
with raises((ValidationError, KeyValidationError), match=err_msg):
dc["key"] = value
assert dc == {}
@mark.parametrize(
"lc,index,value,err_msg",
[
param(
ListConfig([123], element_type=int),
0,
"foo",
"Value 'foo' of type 'str' could not be converted to Integer",
id="assign_str_to_int",
),
param(
ListConfig([123], element_type=int),
0,
None,
re.escape("[0] is not optional and cannot be assigned None"),
id="assign_none_to_int",
),
param(
ListConfig([[123]], element_type=List[int]),
0,
"foo",
"Invalid value assigned: str is not a ListConfig, list or tuple",
id="assign_str_to_list[int]",
),
param(
ListConfig([{"key": 123}], element_type=Dict[str, int]),
0,
"foo",
re.escape("Cannot assign str to Dict[str, int]"),
id="assign_str_to_dict[str, int]",
),
param(
ListConfig([[123]], element_type=List[int]),
0,
None,
re.escape("[0] is not optional and cannot be assigned None"),
id="assign_none_to_list[int]",
),
param(
ListConfig([[123]], element_type=List[int]),
0,
456,
"Invalid value assigned: int is not a ListConfig, list or tuple",
id="assign_int_to_list[int]",
),
param(
ListConfig([[123]], element_type=List[int]),
0,
["foo"],
"Value 'foo' of type 'str' could not be converted to Integer",
id="assign_list[str]_to_list[int]",
),
param(
ListConfig([[123]], element_type=List[int]),
0,
[None],
re.escape("Invalid type assigned: NoneType is not a subclass of int"),
id="assign_list[none]_to_list[int]",
),
param(
ListConfig([[123]], element_type=List[int]),
0,
{"baz": 456},
"Invalid value assigned: dict",
id="assign_dict[str,int]_to_list[int]]",
),
param(
ListConfig([{"key": 123}], element_type=Dict[str, int]),
0,
{"key2": "foo"},
"Value 'foo' of type 'str' could not be converted to Integer",
id="assign_dict[str,str]_to_dict[str,int]",
),
param(
ListConfig([{"key2": 123}], element_type=Dict[str, int]),
0,
{"key2": "foo"},
"Value 'foo' of type 'str' could not be converted to Integer",
id="assign_dict[str,str]_to_dict[str,int]",
),
param(
ListConfig([], element_type=int),
None,
"foo",
"Value 'foo' of type 'str' could not be converted to Integer",
id="append_str_to_int",
),
param(
ListConfig([], element_type=int),
None,
None,
"Invalid type assigned: NoneType is not a subclass of int",
id="append_none_to_int",
),
param(
ListConfig([], element_type=List[int]),
None,
"foo",
"Invalid value assigned: str is not a ListConfig, list or tuple",
id="append_str_to_list[int]",
),
param(
ListConfig([], element_type=Dict[str, int]),
None,
"foo",
re.escape("Cannot assign str to Dict[str, int]"),
id="append_str_to_dict[str, int]",
),
param(
ListConfig([], element_type=List[int]),
None,
None,
re.escape("Invalid type assigned: NoneType is not a subclass of List[int]"),
id="append_none_to_list[int]",
),
param(
ListConfig([], element_type=List[int]),
None,
456,
"Invalid value assigned: int is not a ListConfig, list or tuple",
id="append_int_to_list[int]",
),
param(
ListConfig([], element_type=List[int]),
None,
["foo"],
"Value 'foo' of type 'str' could not be converted to Integer",
id="append_list[str]_to_list[int]",
),
param(
ListConfig([], element_type=List[int]),
None,
[None],
re.escape("Invalid type assigned: NoneType is not a subclass of int"),
id="append_list[none]_to_list[int]",
),
param(
ListConfig([], element_type=List[int]),
None,
{"baz": 456},
"Invalid value assigned: dict",
id="append_dict[str,int]_to_list[int]]",
),
param(
ListConfig([], element_type=Dict[str, int]),
None,
{"key2": "foo"},
"Value 'foo' of type 'str' could not be converted to Integer",
id="append_dict[str,str]_to_dict[str,int]",
),
param(
ListConfig(
[{"key2": ConcretePlugin()}], element_type=Dict[str, ConcretePlugin]
),
0,
{"key": Plugin()},
"Invalid type assigned: Plugin is not a subclass of ConcretePlugin",
id="append_dict[str,str]_to_dict[str,int]",
),
param(
ListConfig([[ConcretePlugin()]], element_type=List[ConcretePlugin]),
0,
[Plugin()],
"Invalid type assigned: Plugin is not a subclass of ConcretePlugin",
id="append_dict[str,str]_to_dict[str,int]",
),
param(
ListConfig([], element_type=Dict[str, ConcretePlugin]),
None,
{"key": Plugin()},
"Invalid type assigned: Plugin is not a subclass of ConcretePlugin",
id="append_dict[str,str]_to_dict[str,int]",
),
param(
ListConfig([], element_type=List[ConcretePlugin]),
None,
[Plugin()],
"Invalid type assigned: Plugin is not a subclass of ConcretePlugin",
id="append_dict[str,str]_to_dict[str,int]",
),
],
)
def test_list_setitem_invalid_element_type(
lc: ListConfig,
index: Optional[int],
value: Any,
err_msg: str,
) -> None:
lc_orig = lc
lc = copy.deepcopy(lc)
with raises(ValidationError, match=err_msg):
if index is None:
lc.append(value)
else:
lc[index] = value
assert lc == lc_orig
@mark.parametrize(
"dc1, dc2, value, type_hint, key_type, elt_type, obj_type",
[
param(
DictConfig({"key": {"key2": Plugin()}}, element_type=Dict[str, Plugin]),
DictConfig({"key": {"key2": ConcretePlugin()}}, element_type=Any),
ConcretePlugin(),
Plugin,
Any,
Any,
ConcretePlugin,
id="any-plugin-into-typed-plugin",
),
param(
DictConfig({"key": {"key2": Plugin()}}, element_type=Any),
DictConfig(
{"key": {"key2": ConcretePlugin()}}, element_type=Dict[str, Plugin]
),
ConcretePlugin(),
Plugin,
Any,
Any,
ConcretePlugin,
id="typed-plugin-into-any-plugin",
),
param(
DictConfig({"key": {"key2": Plugin()}}, element_type=Dict[str, Plugin]),
DictConfig(
{"key": {"key2": ConcretePlugin()}},
element_type=Dict[str, ConcretePlugin],
),
ConcretePlugin(),
Plugin,
Any,
Any,
ConcretePlugin,
id="typed-concrete-plugin-into-typed-plugin",
),
param(
DictConfig({"key": {"key2": {}}}),
DictConfig({"key": {"key2": Plugin()}}, element_type=Dict[str, Plugin]),
Plugin(),
Plugin,
Any,
Any,
Plugin,
id="typed-plugin-into-any",
),
],
)
def test_merge_nested_dict_promotion(
dc1: DictConfig,
dc2: DictConfig,
value: Any,
type_hint: Any,
key_type: Any,
elt_type: Any,
obj_type: Any,
) -> None:
cfg = OmegaConf.merge(dc1, dc2)
check_subnode(
cfg.key,
key="key2",
value=value,
type_hint=type_hint,
key_type=key_type,
elt_type=elt_type,
obj_type=obj_type,
)
@mark.parametrize(
"configs, keys, value, type_hint, key_type, elt_type, obj_type",
[
param(
[
DictConfig({}, element_type=Dict[str, List[int]]),
DictConfig({"foo": {"bar": "${interp}"}}, element_type=Dict[str, Any]),
],
["foo", "bar"],
"${interp}",
List[int],
int,
int,
None,
id="merge-interp-into-list",
),
param(
[
DictConfig({}, element_type=Dict[str, Optional[List[int]]]),
DictConfig({"foo": {"bar": None}}, element_type=Dict[str, Any]),
],
["foo", "bar"],
None,
Optional[List[int]],
int,
int,
None,
id="merge-none-into-list",
),
param(
[
DictConfig({}, element_type=Dict[str, Dict[str, int]]),
DictConfig({"foo": {"bar": "${interp}"}}, element_type=Dict[str, Any]),
],
["foo", "bar"],
"${interp}",
Dict[str, int],
str,
int,
None,
id="merge-interp-into-dict",
),
param(
[
DictConfig({}, element_type=Dict[str, Optional[Dict[str, int]]]),
DictConfig({"foo": {"bar": None}}, element_type=Dict[str, Any]),
],
["foo", "bar"],
None,
Optional[Dict[str, int]],
str,
int,
None,
id="merge-none-into-dict",
),
],
)
def test_merge_nested(
configs: List[Any],
keys: List[Any],
value: Any,
type_hint: Any,
key_type: Any,
elt_type: Any,
obj_type: Any,
) -> None:
"""Ensure metadata and contents of container-typed subnode are correct"""
cfg = OmegaConf.merge(*configs)
for key in keys[:-1]:
cfg = cfg._get_node(key) # type: ignore
key = keys[-1]
check_subnode(
cfg,
key,
value,
type_hint,
key_type,
elt_type,
obj_type,
)
@mark.parametrize(
"dc1, dc2, value, type_hint, key_type, elt_type, obj_type",
[
param(
DictConfig({"key": {}}),
DictConfig({"key": "${interp}"}, element_type=Dict[str, int]),
"${interp}",
Dict[str, int],
str,
int,
None,
id="dict-interp-into-any",
),
param(
DictConfig({"key": {}}),
DictConfig({"key": None}, element_type=Optional[Dict[str, int]]),
None,
Optional[Dict[str, int]],
str,
int,
None,
id="none-interp-into-any",
),
param(
DictConfig({"key": {"foo": 123}}, element_type=Dict[str, Any]),
DictConfig({"key": {"bar": 456.789}}, element_type=Dict[str, float]),
{"foo": 123, "bar": 456.789},
Dict[str, Any],
str,
Any,
dict,
id="dict[str,float]-into-dict[str,any]",
),
param(
DictConfig({"key": {}}, element_type=Dict[str, int]),
DictConfig({"key": "${interp}"}),
"${interp}",
Dict[str, int],
str,
int,
None,
id="interp-into-dict",
),
param(
DictConfig({"key": []}),
DictConfig({"key": "${interp}"}, element_type=List[int]),
"${interp}",
Any,
int,
Any,
None,
id="list-interp-into-any",
),
param(
DictConfig({"key": []}, element_type=List[int]),
DictConfig({"key": "${interp}"}),
"${interp}",
List[int],
int,
int,
None,
id="any-interp-into-list-int",
),
param(
DictConfig({"key": []}, element_type=List[float]),
DictConfig({"key": ["${interp}"]}, element_type=List[int]),
["${interp}"],
List[float],
int,
float,
list,
id="any-interp_list-into-list-list-int",
),
],
)
def test_merge_interpolation_with_container_type(
dc1: DictConfig,
dc2: DictConfig,
value: Any,
type_hint: Any,
key_type: Any,
elt_type: Any,
obj_type: Any,
) -> None:
cfg = OmegaConf.merge(dc1, dc2)
check_subnode(
cfg,
key="key",
value=value,
type_hint=type_hint,
key_type=key_type,
elt_type=elt_type,
obj_type=obj_type,
)
def test_merge_nested_list_promotion() -> None:
dc1 = DictConfig({"key": [Plugin]}, element_type=List[Plugin])
dc2 = DictConfig({"key": [ConcretePlugin]})
cfg = OmegaConf.merge(dc1, dc2)
check_subnode(
cfg.key,
key=0,
value=ConcretePlugin(),
type_hint=Plugin,
key_type=Any,
elt_type=Any,
obj_type=ConcretePlugin,
)
@mark.parametrize(
"configs, err_msg",
[
param(
[DictConfig({}, element_type=int), {"foo": "abc"}],
"Value 'abc' (str) is incompatible with type hint 'int'",
),
param(
[DictConfig({}, element_type=Dict[str, int]), {"foo": 123}],
"Value 123 (int) is incompatible with type hint 'typing.Dict[str, int]'",
id="merge-int-into-dict",
),
param(
[
DictConfig({}, element_type=Dict[str, Dict[str, int]]),
DictConfig(
{"foo": {"bar": None}}, element_type=Dict[str, Optional[int]]
),
],
"field 'foo.bar' is not Optional",
id="merge-none_typed-into-int",
),
],
)
def test_merge_bad_element_type(configs: Any, err_msg: Any) -> None:
with raises(
ValidationError,
match=re.escape(err_msg),
):
OmegaConf.merge(*configs)
| 30.292598 | 99 | 0.483114 | 4,501 | 43,379 | 4.529882 | 0.033326 | 0.061111 | 0.045122 | 0.049438 | 0.847957 | 0.814998 | 0.778655 | 0.74604 | 0.700868 | 0.668498 | 0 | 0.016687 | 0.367274 | 43,379 | 1,431 | 100 | 30.313767 | 0.726163 | 0.005717 | 0 | 0.740607 | 0 | 0.005058 | 0.205239 | 0.071853 | 0 | 0 | 0 | 0 | 0.010116 | 1 | 0.010838 | false | 0 | 0.005058 | 0 | 0.016619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c05333cdbf2368c556f966c0fe00b71d0ad41613 | 98 | py | Python | crypton/block/__init__.py | batuhaninan/Crypton | cb3de3dccb79c49524b594a23709a8ae0c8fd555 | [
"MIT"
] | null | null | null | crypton/block/__init__.py | batuhaninan/Crypton | cb3de3dccb79c49524b594a23709a8ae0c8fd555 | [
"MIT"
] | null | null | null | crypton/block/__init__.py | batuhaninan/Crypton | cb3de3dccb79c49524b594a23709a8ae0c8fd555 | [
"MIT"
] | null | null | null | from .src.block_encrypt import *
from .src.block_decrypt import *
from .src.block_helper import *
| 24.5 | 32 | 0.785714 | 15 | 98 | 4.933333 | 0.466667 | 0.283784 | 0.486486 | 0.486486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122449 | 98 | 3 | 33 | 32.666667 | 0.860465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
fbf35be8430a0073aff6e026559fb9973bf4b798 | 11,881 | py | Python | mp_script/consensus_no_ref.py | YixinXu-OSU/CLAE_xyx | 8b70fc18a8b3baaa52487fbf413cb695f4a3dc35 | [
"BSD-3-Clause"
] | null | null | null | mp_script/consensus_no_ref.py | YixinXu-OSU/CLAE_xyx | 8b70fc18a8b3baaa52487fbf413cb695f4a3dc35 | [
"BSD-3-Clause"
] | null | null | null | mp_script/consensus_no_ref.py | YixinXu-OSU/CLAE_xyx | 8b70fc18a8b3baaa52487fbf413cb695f4a3dc35 | [
"BSD-3-Clause"
] | null | null | null | import pandas as pd
import numpy as np
import os
import threading
from tqdm import tqdm
import sys
from Threshold_Consensus import muscle_generation
def no_ref_consensus_finding_sparc(group_id: int):
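    """Sparc consensus without an external reference.
    For every query id in ``temp_df/df_w_seqs_<group_id>.csv`` (falling back to
    the ``no_blat`` variant), all member reads are aligned to the longest read
    with blasr and polished into a consensus with Sparc; per-id results
    (qseqid, depth, seq) go to ``results/no_ref/Result_sparc_<group_id>.csv``.
    """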
try:
df_seq_extract = pd.read_csv('temp_df/df_w_seqs_' + str(group_id) + '.csv')
except FileNotFoundError:
df_seq_extract = pd.read_csv('temp_df/df_w_seqs_no_blat_' + str(group_id) + '.csv')
unique_id = np.unique(df_seq_extract['qseqid'])
sp_result_df = pd.DataFrame()
sp_dummy_counter = 0
for seqid in tqdm(unique_id):
temp = df_seq_extract[df_seq_extract['qseqid'] == seqid]
if temp.shape[0] > 0:
file_name = str(seqid) + '.fasta'
ref_file_name = 'ref_seq_' + str(seqid) + '.fasta'
f = open(file_name, mode='w+')
ref_f = open(ref_file_name, mode='w+')
longest = temp.iloc[np.argmax(temp['corr_seq'].apply(len))]
depth = temp.shape[0]
ref_f.write('>' + str(longest['name']) + '\n')
ref_f.write(str(longest['corr_seq']) + '\n')
ref_f.close()
for index, row in temp.iterrows():
f.write('>' + str(row['name']) + '\n')
f.write(str(row['corr_seq']) + '\n')
f.close()
thread_id = str(threading.get_ident())
thread_id += file_name
if depth == 1:
sp_result_df.at[sp_dummy_counter, 'qseqid'] = seqid
sp_result_df.at[sp_dummy_counter, 'depth'] = depth
sp_result_df.at[sp_dummy_counter, 'seq'] = temp.iloc[0]['corr_seq']
sp_dummy_counter += 1
elif depth > 1:
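                # Align every read in the group to the longest read, which
                # serves as the draft backbone for the consensus.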
os.system('blasr ' + file_name + ' ' + ref_file_name + ' --bestn 1 --minMatch 5 --placeGapConsistently -m 5 --out mapped' + str(thread_id) + '.m5 --nproc ' + str(os.cpu_count()))
                # Start of Sparc consensus generation
ret_val = os.system('Sparc b ' + ref_file_name + ' m mapped' + str(thread_id) + '.m5 c 2 k 2 g 2 o ' + str(thread_id))
if os.path.exists(str(thread_id) + '.consensus.fasta'):
consensus_file = open(str(thread_id) + '.consensus.fasta', mode='r')
seq = consensus_file.readlines()
if len(seq) > 1 and ret_val == 0:
seq = seq[1][:-1]
else:
seq = ''
consensus_file.close()
sp_result_df.at[sp_dummy_counter, 'qseqid'] = seqid
sp_result_df.at[sp_dummy_counter, 'depth'] = depth
sp_result_df.at[sp_dummy_counter, 'seq'] = seq
sp_dummy_counter += 1
os.remove(str(thread_id) + '.consensus.fasta')
os.remove('mapped' + str(thread_id) + '.m5')
os.remove(file_name)
os.remove(ref_file_name)
sp_result_df.to_csv('results/no_ref/Result_sparc_' + str(group_id) + '.csv')
print('Group ' + str(group_id) + ' no ref Sparc Consensus Finding Finished.')
def no_ref_consensus_finding_sparc_lseq(group_id: int):
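    """Variant of no_ref_consensus_finding_sparc that consumes the
    ``lseq``/``lname`` columns of ``temp_df/lseqs_df_<group_id>.csv`` and writes
    ``results/no_ref/Result_Lseq_sparc_<group_id>.csv``.
    """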
try:
df_seq_extract = pd.read_csv('temp_df/lseqs_df_' + str(group_id) + '.csv')
except FileNotFoundError:
print('No LSEQ file found')
return
unique_id = np.unique(df_seq_extract['qseqid'])
sp_result_df = pd.DataFrame()
sp_dummy_counter = 0
for seqid in tqdm(unique_id):
temp = df_seq_extract[df_seq_extract['qseqid'] == seqid]
if temp.shape[0] > 0:
file_name = str(seqid) + '_l.fasta'
ref_file_name = 'ref_seq_' + str(seqid) + '_l.fasta'
f = open(file_name, mode='w+')
ref_f = open(ref_file_name, mode='w+')
longest = temp.iloc[np.argmax(temp['lseq'].apply(len))]
depth = temp.shape[0]
ref_f.write('>' + str(longest['lname']) + '\n')
ref_f.write(str(longest['lseq']) + '\n')
ref_f.close()
for index, row in temp.iterrows():
f.write('>' + str(row['lname']) + '\n')
f.write(str(row['lseq']) + '\n')
f.close()
thread_id = str(threading.get_ident())
thread_id += file_name
if depth == 1:
sp_result_df.at[sp_dummy_counter, 'qseqid'] = seqid
sp_result_df.at[sp_dummy_counter, 'depth'] = depth
sp_result_df.at[sp_dummy_counter, 'seq'] = temp.iloc[0]['lseq']
sp_dummy_counter += 1
elif depth > 1:
os.system('blasr ' + file_name + ' ' + ref_file_name + ' --bestn 1 --minMatch 5 --placeGapConsistently -m 5 --out mapped' + str(thread_id) + '.m5 --nproc ' + str(os.cpu_count()))
                # Start of Sparc consensus generation
ret_val = os.system('Sparc b ' + ref_file_name + ' m mapped' + str(thread_id) + '.m5 c 2 k 2 g 1 o ' + str(thread_id))
if os.path.exists(str(thread_id) + '.consensus.fasta'):
consensus_file = open(str(thread_id) + '.consensus.fasta', mode='r')
seq = consensus_file.readlines()
if len(seq) > 1 and ret_val == 0:
seq = seq[1][:-1]
else:
seq = ''
consensus_file.close()
sp_result_df.at[sp_dummy_counter, 'qseqid'] = seqid
sp_result_df.at[sp_dummy_counter, 'depth'] = depth
sp_result_df.at[sp_dummy_counter, 'seq'] = seq
sp_dummy_counter += 1
os.remove(str(thread_id) + '.consensus.fasta')
os.remove('mapped' + str(thread_id) + '.m5')
os.remove(file_name)
os.remove(ref_file_name)
sp_result_df.to_csv('results/no_ref/Result_Lseq_sparc_' + str(group_id) + '.csv')
print('Group ' + str(group_id) + ' no ref lseq Sparc Consensus Finding Finished.')
def no_ref_consensus_finding_pbdagcon(group_id: int):
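    """Variant of no_ref_consensus_finding_sparc that polishes the blasr
    alignments with pbdagcon instead of Sparc; results are written to
    ``results/no_ref/Result_pbdagcon_<group_id>.csv``.
    """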
try:
df_seq_extract = pd.read_csv('temp_df/df_w_seqs_' + str(group_id) + '.csv')
except FileNotFoundError:
df_seq_extract = pd.read_csv('temp_df/df_w_seqs_no_blat_' + str(group_id) + '.csv')
unique_id = np.unique(df_seq_extract['qseqid'])
pb_result_df = pd.DataFrame()
pb_dummy_counter = 0
for seqid in tqdm(unique_id):
temp = df_seq_extract[df_seq_extract['qseqid'] == seqid]
if temp.shape[0] > 0:
file_name = str(seqid) + '.fasta'
ref_file_name = 'ref_seq_' + str(seqid) + '.fasta'
f = open(file_name, mode='w+')
ref_f = open(ref_file_name, mode='w+')
longest = temp.iloc[np.argmax(temp['corr_seq'].apply(len))]
depth = temp.shape[0]
ref_f.write('>' + str(longest['name']) + '\n')
ref_f.write(str(longest['corr_seq']) + '\n')
ref_f.close()
for index, row in temp.iterrows():
f.write('>' + str(row['name']) + '\n')
f.write(str(row['corr_seq']) + '\n')
f.close()
thread_id = str(threading.get_ident())
thread_id += file_name
if depth == 1:
pb_result_df.at[pb_dummy_counter, 'qseqid'] = seqid
pb_result_df.at[pb_dummy_counter, 'depth'] = depth
pb_result_df.at[pb_dummy_counter, 'seq'] = temp.iloc[0]['corr_seq']
pb_dummy_counter += 1
elif depth > 1:
os.system('blasr ' + file_name + ' ' + ref_file_name + ' --bestn 1 --minMatch 5 --placeGapConsistently -m 5 --out mapped' + str(thread_id) + '.m5 --nproc ' + str(os.cpu_count()))
                # Start of pbdagcon consensus generation
os.system('pbdagcon --min-coverage 4 --min-length 100 --threads 20 mapped' + str(thread_id) + '.m5 > consensus' + str(thread_id) + '.fasta')
consensus_file = open('consensus' + str(thread_id) + '.fasta', mode='r')
seq = consensus_file.readlines()
if len(seq) > 1:
seq = seq[1][:-1]
else:
seq = ''
consensus_file.close()
pb_result_df.at[pb_dummy_counter, 'qseqid'] = seqid
pb_result_df.at[pb_dummy_counter, 'depth'] = depth
pb_result_df.at[pb_dummy_counter, 'seq'] = seq
pb_dummy_counter += 1
os.remove('consensus' + str(thread_id) + '.fasta')
os.remove('mapped' + str(thread_id) + '.m5')
os.remove(file_name)
os.remove(ref_file_name)
pb_result_df.to_csv('results/no_ref/Result_pbdagcon_' + str(group_id) + '.csv')
print('Group ' + str(group_id) + ' no ref pbdagcon Consensus Finding Finished.')
def no_ref_consensus_finding_chris(group_id: int):
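    """MSA-based consensus: all reads of a query id are written to one FASTA
    (every record under the group's id as header) and passed to
    ``muscle_generation``; the dash-free MUSCLE consensus is collected into
    ``results/no_ref/Result_chris_<group_id>.csv``.
    """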
try:
df_seq_extract = pd.read_csv('temp_df/df_w_seqs_' + str(group_id) + '.csv')
except FileNotFoundError:
df_seq_extract = pd.read_csv('temp_df/df_w_seqs_no_blat_' + str(group_id) + '.csv')
unique_id = np.unique(df_seq_extract['qseqid'])
result_df = pd.DataFrame()
dummy_counter = 0
for seqid in tqdm(unique_id):
temp = df_seq_extract[df_seq_extract['qseqid'] == seqid]
if temp.shape[0] > 0:
file_name = str(seqid) + '.fasta'
f = open(file_name, mode='w+')
depth = temp.shape[0]
"""
if temp.shape[0] > 70:
temp = temp.loc[list(temp['corr_seq'].apply(len).sort_values(ascending=True).iloc[:70].index)]
"""
for index, row in temp.iterrows():
seq = row['corr_seq']
f.write(">" + str(seqid) + '\n')
f.write(str(seq) + '\n')
f.close()
if depth == 1:
result_df.at[dummy_counter, 'qseqid'] = seqid
result_df.at[dummy_counter, 'depth'] = depth
result_df.at[dummy_counter, 'seq'] = seq
dummy_counter += 1
os.remove(file_name)
elif depth == 0:
os.remove(file_name)
else:
muscle_generation(file_name)
consensus_file = open("Consensus_No_Dashes_" + file_name, mode='r')
lines = consensus_file.readlines()
if len(lines) == 1:
seq = lines[0]
else:
seq = lines[1]
consensus_file.close()
result_df.at[dummy_counter, 'qseqid'] = seqid
result_df.at[dummy_counter, 'depth'] = depth
result_df.at[dummy_counter, 'seq'] = seq
dummy_counter += 1
os.remove(file_name)
os.remove("Consensus_No_Dashes_" + file_name)
os.remove("Consensus_" + file_name)
os.remove("MSA_Consensus_" + file_name)
# os.rename("MSA_Consensus_" + file_name, "MSA/MSA_Consensus_" + file_name)
result_df.to_csv('results/no_ref/Result_chris_' + str(group_id) + '.csv')
print('Group ' + str(group_id) + ' no ref Chris Consensus Finding Finished.') | 40.549488 | 194 | 0.513677 | 1,446 | 11,881 | 3.938451 | 0.094053 | 0.057594 | 0.042142 | 0.025285 | 0.868832 | 0.827041 | 0.805443 | 0.805443 | 0.776822 | 0.749254 | 0 | 0.011011 | 0.357882 | 11,881 | 293 | 195 | 40.549488 | 0.735483 | 0.015487 | 0 | 0.732057 | 0 | 0 | 0.134197 | 0.022961 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019139 | false | 0 | 0.033493 | 0 | 0.057416 | 0.023923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3f4c28dda70a149b565880fe11cff5faece46355 | 18,831 | py | Python | tests/test_androidtv.py | diosdog/python-androidtv | 8cc6a7654c9bd8f3acd08601c51c165aada49f78 | [
"MIT"
] | null | null | null | tests/test_androidtv.py | diosdog/python-androidtv | 8cc6a7654c9bd8f3acd08601c51c165aada49f78 | [
"MIT"
] | null | null | null | tests/test_androidtv.py | diosdog/python-androidtv | 8cc6a7654c9bd8f3acd08601c51c165aada49f78 | [
"MIT"
] | null | null | null | import sys
import unittest
sys.path.insert(0, '..')
from androidtv import constants
from androidtv.androidtv import AndroidTV
# `adb shell dumpsys audio`
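# Note: "hmdi_arc" below is reproduced exactly as captured from the device's
# dumpsys output; the parser under test matches this raw spelling, so it must
# not be "corrected" to "hdmi_arc".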
DUMPSYS_AUDIO_OFF = """MediaFocusControl dump time: 9:00:59 AM
Audio Focus stack entries (last is top of stack):
source:android.os.BinderProxy@bd99735 -- pack: org.droidtv.playtv -- client: android.media.AudioManager@d4df3dforg.droidtv.playtv.PlayTvActivity@bfb901f -- gain: GAIN -- flags: DELAY_OK|PAUSES_ON_DUCKABLE_LOSS -- loss: none -- notified: true -- uid: 1000 -- attr: AudioAttributes: usage=1 content=3 flags=0x0 tags= bundle=null -- sdk:26
No external focus policy
Notify on duck: true
In ring or call: false
Stream volumes (device: index)
- STREAM_VOICE_CALL:
Muted: true
Min: 1
Max: 5
Current: 2 (speaker): 2, 40000 (hmdi_arc): 2, 40000000 (default): 1
Devices: speaker
- STREAM_SYSTEM:
Muted: true
Min: 0
Max: 7
Current: 2 (speaker): 2, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: speaker
- STREAM_RING:
Muted: true
Min: 0
Max: 7
Current: 2 (speaker): 3, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: speaker
- STREAM_MUSIC:
Muted: false
Min: 0
Max: 60
Current: 2 (speaker): 20, 40000 (hmdi_arc): 27, 40000000 (default): 15
Devices: speaker
- STREAM_ALARM:
Muted: true
Min: 0
Max: 7
Current: 2 (speaker): 3, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: speaker
- STREAM_NOTIFICATION:
Muted: true
Min: 0
Max: 7
Current: 2 (speaker): 3, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: speaker
- STREAM_BLUETOOTH_SCO:
Muted: true
Min: 0
Max: 15
Current: 2 (speaker): 7, 40000 (hmdi_arc): 7, 40000000 (default): 4
Devices: speaker
- STREAM_SYSTEM_ENFORCED:
Muted: true
Min: 0
Max: 7
Current: 2 (speaker): 3, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: speaker
- STREAM_DTMF:
Muted: true
Min: 0
Max: 15
Current: 2 (speaker): 5, 40000 (hmdi_arc): 7, 40000000 (default): 4
Devices: speaker
- STREAM_TTS:
Muted: true
Min: 0
Max: 15
Current: 2 (speaker): 7, 40000 (hmdi_arc): 7, 40000000 (default): 4
Devices: speaker
- STREAM_ACCESSIBILITY:
Muted: true
Min: 0
Max: 15
Current: 2 (speaker): 5, 40000 (hmdi_arc): 7, 40000000 (default): 4
Devices: speaker
- mute affected streams = 0x2e
Ringer mode:
- mode (internal) = NORMAL
- mode (external) = NORMAL
- ringer mode affected streams = 0x80 (STREAM_SYSTEM_ENFORCED)
- ringer mode muted streams = 0x0
- delegate = ZenModeHelper
Audio routes:
mMainType=0x0
mBluetoothName=null
Other state:
mVolumeController=VolumeController(android.os.BinderProxy@fb5b7ca,mVisible=false)
mSafeMediaVolumeState=SAFE_MEDIA_VOLUME_ACTIVE
mSafeMediaVolumeIndex=250
sIndependentA11yVolume=false
mPendingVolumeCommand=null
mMusicActiveMs=0
mMcc=0
mCameraSoundForced=false
mHasVibrator=false
mVolumePolicy=VolumePolicy[volumeDownToEnterSilent=true,volumeUpToExitSilent=true,doNotDisturbWhenSilent=true,vibrateToSilentDebounce=400]
mAvrcpAbsVolSupported=false
Audio policies:
PlaybackActivityMonitor dump time: 9:00:59 AM
ID:23 -- type:android.media.SoundPool -- u/pid:10025/1934 -- state:idle -- attr:AudioAttributes: usage=13 content=4 flags=0x0 tags= bundle=null
ID:55 -- type:android.media.MediaPlayer -- u/pid:1000/2283 -- state:idle -- attr:AudioAttributes: usage=0 content=0 flags=0x0 tags= bundle=null
ID:15 -- type:android.media.SoundPool -- u/pid:1000/1723 -- state:idle -- attr:AudioAttributes: usage=13 content=4 flags=0x0 tags= bundle=null
ID:31 -- type:android.media.MediaPlayer -- u/pid:1000/2010 -- state:idle -- attr:AudioAttributes: usage=0 content=0 flags=0x0 tags= bundle=null
ID:143 -- type:android.media.SoundPool -- u/pid:10018/15178 -- state:idle -- attr:AudioAttributes: usage=13 content=4 flags=0x0 tags= bundle=null
ducked players:
muted player piids:"""
DUMPSYS_AUDIO_ON = """MediaFocusControl dump time: 9:03:06 AM
Audio Focus stack entries (last is top of stack):
source:android.os.BinderProxy@bd99735 -- pack: org.droidtv.playtv -- client: android.media.AudioManager@d4df3dforg.droidtv.playtv.PlayTvActivity@bfb901f -- gain: GAIN -- flags: DELAY_OK|PAUSES_ON_DUCKABLE_LOSS -- loss: none -- notified: true -- uid: 1000 -- attr: AudioAttributes: usage=1 content=3 flags=0x0 tags= bundle=null -- sdk:26
No external focus policy
Notify on duck: true
In ring or call: false
Stream volumes (device: index)
- STREAM_VOICE_CALL:
Muted: false
Min: 1
Max: 5
Current: 2 (speaker): 2, 40000 (hmdi_arc): 2, 40000000 (default): 1
Devices: speaker
- STREAM_SYSTEM:
Muted: false
Min: 0
Max: 7
Current: 2 (speaker): 2, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: hmdi_arc
- STREAM_RING:
Muted: false
Min: 0
Max: 7
Current: 2 (speaker): 3, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: speaker
- STREAM_MUSIC:
Muted: false
Min: 0
Max: 60
Current: 2 (speaker): 20, 40000 (hmdi_arc): 22, 40000000 (default): 15
Devices: hmdi_arc
- STREAM_ALARM:
Muted: false
Min: 0
Max: 7
Current: 2 (speaker): 3, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: speaker
- STREAM_NOTIFICATION:
Muted: false
Min: 0
Max: 7
Current: 2 (speaker): 3, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: speaker
- STREAM_BLUETOOTH_SCO:
Muted: false
Min: 0
Max: 15
Current: 2 (speaker): 6, 40000 (hmdi_arc): 6, 40000000 (default): 4
Devices: speaker
- STREAM_SYSTEM_ENFORCED:
Muted: false
Min: 0
Max: 7
Current: 2 (speaker): 3, 40000 (hmdi_arc): 3, 40000000 (default): 2
Devices: speaker
- STREAM_DTMF:
Muted: false
Min: 0
Max: 15
Current: 2 (speaker): 5, 40000 (hmdi_arc): 6, 40000000 (default): 4
Devices: hmdi_arc
- STREAM_TTS:
Muted: false
Min: 0
Max: 15
Current: 2 (speaker): 6, 40000 (hmdi_arc): 6, 40000000 (default): 4
Devices: speaker
- STREAM_ACCESSIBILITY:
Muted: false
Min: 0
Max: 15
Current: 2 (speaker): 5, 40000 (hmdi_arc): 6, 40000000 (default): 4
Devices: hmdi_arc
- mute affected streams = 0x2e
Ringer mode:
- mode (internal) = NORMAL
- mode (external) = NORMAL
- ringer mode affected streams = 0x80 (STREAM_SYSTEM_ENFORCED)
- ringer mode muted streams = 0x0
- delegate = ZenModeHelper
Audio routes:
mMainType=0x8
mBluetoothName=null
Other state:
mVolumeController=VolumeController(android.os.BinderProxy@fb5b7ca,mVisible=false)
mSafeMediaVolumeState=SAFE_MEDIA_VOLUME_ACTIVE
mSafeMediaVolumeIndex=250
sIndependentA11yVolume=false
mPendingVolumeCommand=null
mMusicActiveMs=0
mMcc=0
mCameraSoundForced=false
mHasVibrator=false
mVolumePolicy=VolumePolicy[volumeDownToEnterSilent=true,volumeUpToExitSilent=true,doNotDisturbWhenSilent=true,vibrateToSilentDebounce=400]
mAvrcpAbsVolSupported=false
Audio policies:
PlaybackActivityMonitor dump time: 9:03:06 AM
ID:23 -- type:android.media.SoundPool -- u/pid:10025/1934 -- state:idle -- attr:AudioAttributes: usage=13 content=4 flags=0x0 tags= bundle=null
ID:55 -- type:android.media.MediaPlayer -- u/pid:1000/2283 -- state:idle -- attr:AudioAttributes: usage=0 content=0 flags=0x0 tags= bundle=null
ID:15 -- type:android.media.SoundPool -- u/pid:1000/1723 -- state:idle -- attr:AudioAttributes: usage=13 content=4 flags=0x0 tags= bundle=null
ID:31 -- type:android.media.MediaPlayer -- u/pid:1000/2010 -- state:idle -- attr:AudioAttributes: usage=0 content=0 flags=0x0 tags= bundle=null
ID:143 -- type:android.media.SoundPool -- u/pid:10018/15178 -- state:idle -- attr:AudioAttributes: usage=13 content=4 flags=0x0 tags= bundle=null
ducked players:
muted player piids:"""
# `dumpsys power | grep 'Display Power' | grep -q 'state=ON' && echo -e '1\c' && dumpsys power | grep mWakefulness | grep -q Awake && echo -e '1\c' && dumpsys power | grep Locks | grep 'size=' && CURRENT_APP=$(dumpsys window windows | grep mCurrentFocus) && CURRENT_APP=${CURRENT_APP#*{* * } && CURRENT_APP=${CURRENT_APP%%/*} && echo $CURRENT_APP && (dumpsys media_session | grep -A 100 'Sessions Stack' | grep -A 100 $CURRENT_APP | grep -m 1 'state=PlaybackState {' || echo) && dumpsys audio`
GET_PROPERTIES_OUTPUT1 = ""
GET_PROPERTIES_DICT1 = {'screen_on': False,
'awake': False,
'wake_lock_size': -1,
'media_session_state': None,
'current_app': None,
'audio_state': None,
'device': None,
'is_volume_muted': None,
'volume': None}
STATE1 = (constants.STATE_OFF, None, None, None, None)
# `dumpsys power | grep 'Display Power' | grep -q 'state=ON' && echo -e '1\c' && dumpsys power | grep mWakefulness | grep -q Awake && echo -e '1\c' && dumpsys power | grep Locks | grep 'size=' && CURRENT_APP=$(dumpsys window windows | grep mCurrentFocus) && CURRENT_APP=${CURRENT_APP#*{* * } && CURRENT_APP=${CURRENT_APP%%/*} && echo $CURRENT_APP && (dumpsys media_session | grep -A 100 'Sessions Stack' | grep -A 100 $CURRENT_APP | grep -m 1 'state=PlaybackState {' || echo) && dumpsys audio`
GET_PROPERTIES_OUTPUT2 = "1"
GET_PROPERTIES_DICT2 = {'screen_on': True,
'awake': False,
'wake_lock_size': -1,
'media_session_state': None,
'current_app': None,
'audio_state': None,
'device': None,
'is_volume_muted': None,
'volume': None}
STATE2 = (constants.STATE_IDLE, None, None, None, None)
# `dumpsys power | grep 'Display Power' | grep -q 'state=ON' && echo -e '1\c' && dumpsys power | grep mWakefulness | grep -q Awake && echo -e '1\c' && dumpsys power | grep Locks | grep 'size=' && CURRENT_APP=$(dumpsys window windows | grep mCurrentFocus) && CURRENT_APP=${CURRENT_APP#*{* * } && CURRENT_APP=${CURRENT_APP%%/*} && echo $CURRENT_APP && (dumpsys media_session | grep -A 100 'Sessions Stack' | grep -A 100 $CURRENT_APP | grep -m 1 'state=PlaybackState {' || echo) && dumpsys audio`
GET_PROPERTIES_OUTPUT3 = """11Wake Locks: size=2
com.amazon.tv.launcher
""" + DUMPSYS_AUDIO_ON
GET_PROPERTIES_DICT3 = {'screen_on': True,
'awake': True,
'wake_lock_size': 2,
'media_session_state': None,
'current_app': 'com.amazon.tv.launcher',
'audio_state': constants.STATE_IDLE,
'device': 'hmdi_arc',
'is_volume_muted': False,
'volume': 22}
STATE3 = (constants.STATE_PLAYING, 'com.amazon.tv.launcher', 'hmdi_arc', False, 22/60.)
GET_PROPERTIES_DICT_NONE = {'screen_on': None,
'awake': None,
'wake_lock_size': None,
'media_session_state': None,
'current_app': None,
'audio_state': None,
'device': None,
'is_volume_muted': None,
'volume': None}
def _adb_shell_patched(self):
def _adb_shell_method(cmd):
self.adb_shell_cmd = cmd
return self.adb_shell_output
return _adb_shell_method
class TestAndroidTV(unittest.TestCase):
def setUp(self):
self.atv = AndroidTV('127.0.0.1:5555')
# patch ADB-related methods
self.atv.adb_shell = _adb_shell_patched(self.atv)
self.atv._adb = True
self.atv._available = True
self.atv.adb_shell_output = None
def test_device(self):
"""Check that the ``device`` property works correctly.
"""
self.atv.adb_shell_output = None
device = self.atv.device
self.assertIsNone(device)
self.atv.adb_shell_output = ''
device = self.atv.device
self.assertIsNone(device)
self.atv.adb_shell_output = DUMPSYS_AUDIO_OFF
device = self.atv.device
self.assertEqual('speaker', device)
self.atv.adb_shell_output = DUMPSYS_AUDIO_ON
device = self.atv.device
self.assertEqual('hmdi_arc', device)
def test_volume(self):
"""Check that the ``volume`` property works correctly.
"""
self.atv.adb_shell_output = None
volume = self.atv.volume
self.assertIsNone(volume)
self.atv.adb_shell_output = ''
volume = self.atv.volume
self.assertIsNone(volume)
self.atv.adb_shell_output = DUMPSYS_AUDIO_OFF
volume = self.atv.volume
self.assertEqual(volume, 20)
self.assertEqual(self.atv.max_volume, 60.)
self.atv.adb_shell_output = DUMPSYS_AUDIO_ON
volume = self.atv.volume
self.assertEqual(volume, 22)
self.assertEqual(self.atv.max_volume, 60.)
def test_is_volume_muted(self):
"""Check that the ``is_volume_muted`` property works correctly.
"""
self.atv.adb_shell_output = None
is_volume_muted = self.atv.is_volume_muted
self.assertIsNone(is_volume_muted)
self.atv.adb_shell_output = ''
is_volume_muted = self.atv.is_volume_muted
self.assertIsNone(is_volume_muted)
self.atv.adb_shell_output = DUMPSYS_AUDIO_OFF
is_volume_muted = self.atv.is_volume_muted
self.assertFalse(is_volume_muted)
def test_get_properties(self):
"""Check that ``get_properties()`` works correctly.
"""
self.atv.adb_shell_output = None
properties = self.atv.get_properties_dict(lazy=True)
self.assertEqual(properties, GET_PROPERTIES_DICT_NONE)
self.atv.adb_shell_output = GET_PROPERTIES_OUTPUT1
properties = self.atv.get_properties_dict(lazy=True)
self.assertEqual(properties, GET_PROPERTIES_DICT1)
self.atv.adb_shell_output = GET_PROPERTIES_OUTPUT2
properties = self.atv.get_properties_dict(lazy=True)
self.assertEqual(properties, GET_PROPERTIES_DICT2)
self.atv.adb_shell_output = GET_PROPERTIES_OUTPUT3
properties = self.atv.get_properties_dict(lazy=True)
self.assertEqual(properties, GET_PROPERTIES_DICT3)
def test_update(self):
"""Check that the ``update`` method works correctly.
"""
self.atv.adb_shell_output = GET_PROPERTIES_OUTPUT1
state = self.atv.update()
self.assertTupleEqual(state, STATE1)
self.atv.adb_shell_output = GET_PROPERTIES_OUTPUT2
state = self.atv.update()
self.assertTupleEqual(state, STATE2)
self.atv.adb_shell_output = GET_PROPERTIES_OUTPUT3
state = self.atv.update()
self.assertTupleEqual(state, STATE3)
def test_set_volume_level(self):
"""Check that the ``set_volume_level`` method works correctly.
"""
self.atv.adb_shell_output = None
new_volume_level = self.atv.set_volume_level(0.5)
self.assertIsNone(new_volume_level)
self.atv.adb_shell_output = ''
new_volume_level = self.atv.set_volume_level(0.5)
self.assertIsNone(new_volume_level)
self.atv.adb_shell_output = DUMPSYS_AUDIO_ON
new_volume_level = self.atv.set_volume_level(0.5)
self.assertEqual(new_volume_level, 0.5)
self.assertEqual(self.atv.adb_shell_cmd, "(input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24) &")
self.atv.adb_shell_output = ''
new_volume_level = self.atv.set_volume_level(0.5, 22./60)
self.assertEqual(new_volume_level, 0.5)
self.assertEqual(self.atv.adb_shell_cmd, "(input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24 && sleep 1 && input keyevent 24) &")
def test_volume_up(self):
"""Check that the ``volume_up`` method works correctly.
"""
self.atv.adb_shell_output = None
new_volume_level = self.atv.volume_up()
self.assertIsNone(new_volume_level)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 24")
self.atv.adb_shell_output = ''
new_volume_level = self.atv.volume_up()
self.assertIsNone(new_volume_level)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 24")
self.atv.adb_shell_output = DUMPSYS_AUDIO_ON
new_volume_level = self.atv.volume_up()
self.assertEqual(new_volume_level, 23./60)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 24")
new_volume_level = self.atv.volume_up(23./60)
self.assertEqual(new_volume_level, 24./60)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 24")
self.atv.adb_shell_output = DUMPSYS_AUDIO_OFF
new_volume_level = self.atv.volume_up()
self.assertEqual(new_volume_level, 21./60)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 24")
new_volume_level = self.atv.volume_up(21./60)
self.assertEqual(new_volume_level, 22./60)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 24")
def test_volume_down(self):
"""Check that the ``volume_down`` method works correctly.
"""
self.atv.adb_shell_output = None
new_volume_level = self.atv.volume_down()
self.assertIsNone(new_volume_level)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 25")
self.atv.adb_shell_output = ''
new_volume_level = self.atv.volume_down()
self.assertIsNone(new_volume_level)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 25")
self.atv.adb_shell_output = DUMPSYS_AUDIO_ON
new_volume_level = self.atv.volume_down()
self.assertEqual(new_volume_level, 21./60)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 25")
new_volume_level = self.atv.volume_down(21./60)
self.assertEqual(new_volume_level, 20./60)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 25")
self.atv.adb_shell_output = DUMPSYS_AUDIO_OFF
new_volume_level = self.atv.volume_down()
self.assertEqual(new_volume_level, 19./60)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 25")
new_volume_level = self.atv.volume_down(19./60)
self.assertEqual(new_volume_level, 18./60)
self.assertEqual(self.atv.adb_shell_cmd, "input keyevent 25")
if __name__ == "__main__":
unittest.main()
| 37.142012 | 493 | 0.663958 | 2,462 | 18,831 | 4.894801 | 0.110073 | 0.049954 | 0.039001 | 0.057257 | 0.884657 | 0.87918 | 0.858601 | 0.830305 | 0.792797 | 0.774956 | 0 | 0.061799 | 0.223196 | 18,831 | 506 | 494 | 37.215415 | 0.762032 | 0.108173 | 0 | 0.801527 | 0 | 0.035623 | 0.520781 | 0.11143 | 0 | 0 | 0.003822 | 0 | 0.127226 | 1 | 0.02799 | false | 0 | 0.010178 | 0 | 0.045802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3f5cd62f1f1ac3b2be8f07d82bdd0bb755656993 | 26,758 | py | Python | mlp/layers.py | orestis-z/mlpractical | f9201b4ab29506376a9edef9707ea32ea7e3f6c9 | [
"BSD-3-Clause"
] | null | null | null | mlp/layers.py | orestis-z/mlpractical | f9201b4ab29506376a9edef9707ea32ea7e3f6c9 | [
"BSD-3-Clause"
] | null | null | null | mlp/layers.py | orestis-z/mlpractical | f9201b4ab29506376a9edef9707ea32ea7e3f6c9 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""Layer definitions.
This module defines classes which encapsulate a single layer.
These layers map input activations to output activations with the `fprop`
method and map gradients with respect to outputs to gradients with respect to
their inputs with the `bprop` method.
Some layers will have learnable parameters and so will additionally define
methods for getting and setting parameters and for calculating gradients with
respect to the layer parameters.
"""
import numpy as np
from scipy import special
import mlp.initialisers as init
from mlp import DEFAULT_SEED
class Layer(object):
"""Abstract class defining the interface for a layer."""
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
raise NotImplementedError()
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
raise NotImplementedError()
class LayerWithParameters(Layer):
"""Abstract class defining the interface for a layer with parameters."""
def grads_wrt_params(self, inputs, grads_wrt_outputs):
"""Calculates gradients with respect to layer parameters.
Args:
inputs: Array of inputs to layer of shape (batch_size, input_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
List of arrays of gradients with respect to the layer parameters
with parameter gradients appearing in the same order as the
parameter values returned by the `params` property.
"""
raise NotImplementedError()
def params_penalty(self):
"""Returns the parameter dependent penalty term for this layer.
If no parameter-dependent penalty terms are set this returns
zero.
"""
raise NotImplementedError()
@property
def params(self):
"""Returns a list of parameters of layer.
Returns:
List of current parameter values. This list should be in the
corresponding order to the `values` argument of the `params` setter.
"""
raise NotImplementedError()
@params.setter
def params(self, values):
"""Sets layer parameters from a list of values.
Args:
values: List of values to set parameters to. This list should be
in the corresponding order to what is returned by the `params` property.
"""
raise NotImplementedError()
class StochasticLayer(Layer):
"""Specialised layer which uses a stochastic forward propagation."""
def __init__(self, rng=None):
"""Constructs a new StochasticLayer object.
Args:
rng (RandomState): Seeded random number generator object.
"""
if rng is None:
rng = np.random.RandomState(DEFAULT_SEED)
self.rng = rng
def fprop(self, inputs, stochastic=True):
"""Forward propagates activations through the layer transformation.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
stochastic: Flag allowing different deterministic
forward-propagation mode in addition to default stochastic
forward-propagation e.g. for use at test time. If False
a deterministic forward-propagation transformation
corresponding to the expected output of the stochastic
forward-propagation is applied.
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
raise NotImplementedError()
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs. This should correspond to
default stochastic forward-propagation.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
raise NotImplementedError()
class StochasticLayerWithParameters(Layer):
"""Specialised layer which uses a stochastic forward propagation."""
def __init__(self, rng=None):
"""Constructs a new StochasticLayer object.
Args:
rng (RandomState): Seeded random number generator object.
"""
if rng is None:
rng = np.random.RandomState(DEFAULT_SEED)
self.rng = rng
def fprop(self, inputs, stochastic=True):
"""Forward propagates activations through the layer transformation.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
stochastic: Flag allowing different deterministic
forward-propagation mode in addition to default stochastic
forward-propagation e.g. for use at test time. If False
a deterministic forward-propagation transformation
corresponding to the expected output of the stochastic
forward-propagation is applied.
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
raise NotImplementedError()
def grads_wrt_params(self, inputs, grads_wrt_outputs):
"""Calculates gradients with respect to layer parameters.
Args:
inputs: Array of inputs to layer of shape (batch_size, input_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
List of arrays of gradients with respect to the layer parameters
with parameter gradients appearing in the same order as the
parameter values returned by the `params` property.
"""
raise NotImplementedError()
def params_penalty(self):
"""Returns the parameter dependent penalty term for this layer.
If no parameter-dependent penalty terms are set this returns
zero.
"""
raise NotImplementedError()
@property
def params(self):
"""Returns a list of parameters of layer.
Returns:
List of current parameter values. This list should be in the
corresponding order to the `values` argument of the `params` setter.
"""
raise NotImplementedError()
@params.setter
def params(self, values):
"""Sets layer parameters from a list of values.
Args:
values: List of values to set parameters to. This list should be
in the corresponding order to what is returned by the `params` property.
"""
raise NotImplementedError()
class AffineLayer(LayerWithParameters):
"""Layer implementing an affine tranformation of its inputs.
This layer is parameterised by a weight matrix and bias vector.
"""
def __init__(self, input_dim, output_dim,
weights_initialiser=init.UniformInit(-0.1, 0.1),
biases_initialiser=init.ConstantInit(0.),
weights_penalty=None, biases_penalty=None):
"""Initialises a parameterised affine layer.
Args:
input_dim (int): Dimension of inputs to the layer.
output_dim (int): Dimension of the layer outputs.
weights_initialiser: Initialiser for the weight parameters.
biases_initialiser: Initialiser for the bias parameters.
weights_penalty: Weights-dependent penalty term (regulariser) or
None if no regularisation is to be applied to the weights.
biases_penalty: Biases-dependent penalty term (regulariser) or
None if no regularisation is to be applied to the biases.
"""
self.input_dim = input_dim
self.output_dim = output_dim
self.weights = weights_initialiser((self.output_dim, self.input_dim))
self.biases = biases_initialiser(self.output_dim)
self.weights_penalty = weights_penalty
self.biases_penalty = biases_penalty
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
For inputs `x`, outputs `y`, weights `W` and biases `b` the layer
corresponds to `y = W.dot(x) + b`.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
return self.weights.dot(inputs.T).T + self.biases
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
return grads_wrt_outputs.dot(self.weights)
def grads_wrt_params(self, inputs, grads_wrt_outputs):
"""Calculates gradients with respect to layer parameters.
Args:
inputs: Array of inputs to layer of shape (batch_size, input_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
List of arrays of gradients with respect to the layer parameters
`[grads_wrt_weights, grads_wrt_biases]`.
"""
grads_wrt_weights = np.dot(grads_wrt_outputs.T, inputs)
grads_wrt_biases = np.sum(grads_wrt_outputs, axis=0)
return [grads_wrt_weights, grads_wrt_biases]
def params_penalty(self):
"""Returns the parameter dependent penalty term for this layer.
If no parameter-dependent penalty terms are set this returns
zero.
"""
params_penalty = 0
return params_penalty
@property
def params(self):
"""A list of layer parameter values: `[weights, biases]`."""
return [self.weights, self.biases]
@params.setter
def params(self, values):
self.weights = values[0]
self.biases = values[1]
def __repr__(self):
return 'AffineLayer(input_dim={0}, output_dim={1})'.format(
self.input_dim, self.output_dim)
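# Illustrative sketch (not part of the original module): a minimal shape
# check showing how `fprop`, `bprop` and `grads_wrt_params` fit together
# for AffineLayer. The sizes are arbitrary example values.
def _demo_affine_layer():
    batch_size, input_dim, output_dim = 4, 3, 2
    layer = AffineLayer(input_dim, output_dim)
    inputs = np.random.randn(batch_size, input_dim)
    outputs = layer.fprop(inputs)  # shape (4, 2)
    grads_wrt_outputs = np.ones_like(outputs)
    grads_wrt_inputs = layer.bprop(inputs, outputs, grads_wrt_outputs)
    grads_wrt_weights, grads_wrt_biases = layer.grads_wrt_params(
        inputs, grads_wrt_outputs)
    assert outputs.shape == (batch_size, output_dim)
    assert grads_wrt_inputs.shape == inputs.shape
    assert grads_wrt_weights.shape == layer.weights.shape
    assert grads_wrt_biases.shape == layer.biases.shape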
class SigmoidLayer(Layer):
"""Layer implementing an element-wise logistic sigmoid transformation."""
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
For inputs `x` and outputs `y` this corresponds to
`y = 1 / (1 + exp(-x))`.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
return 1. / (1. + np.exp(-inputs))
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
return grads_wrt_outputs * outputs * (1. - outputs)
def __repr__(self):
return 'SigmoidLayer'
class TanhLayer(Layer):
"""Layer implementing an element-wise hyperbolic tangent transformation."""
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
For inputs `x` and outputs `y` this corresponds to `y = tanh(x)`.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
return np.tanh(inputs)
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
return (1. - outputs**2) * grads_wrt_outputs
def __repr__(self):
return 'TanhLayer'
class SoftmaxLayer(Layer):
"""Layer implementing a softmax transformation."""
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
For inputs `x` and outputs `y` this corresponds to
`y = exp(x) / sum(exp(x))`.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
# subtract max inside exponential to improve numerical stability -
# when we divide through by sum this term cancels
exp_inputs = np.exp(inputs - inputs.max(-1)[:, None])
return exp_inputs / exp_inputs.sum(-1)[:, None]
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
return outputs * (grads_wrt_outputs -
(grads_wrt_outputs * outputs).sum(-1)[:, None])
def __repr__(self):
return 'SoftmaxLayer'
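# Illustrative sketch (not part of the original module): each softmax row
# is a probability distribution, and the max-subtraction in `fprop` makes
# the outputs invariant to adding a constant to every logit in a row.
def _demo_softmax_layer():
    layer = SoftmaxLayer()
    logits = np.array([[1., 2., 3.], [1001., 1002., 1003.]])
    probs = layer.fprop(logits)
    assert np.allclose(probs.sum(-1), 1.)
    assert np.allclose(probs[0], probs[1])  # rows differ only by a shift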
class ReshapeLayer(Layer):
"""Layer which reshapes dimensions of inputs."""
def __init__(self, output_shape=None):
"""Create a new reshape layer object.
Args:
output_shape: Tuple specifying shape each input in batch should
be reshaped to in outputs. This **excludes** the batch size
so the shape of the final output array will be
(batch_size, ) + output_shape
Similarly to numpy.reshape, one shape dimension can be -1. In
this case, the value is inferred from the size of the input
array and remaining dimensions. The shape specified must be
compatible with the input array shape - i.e. the total number
of values in the array cannot be changed. If set to `None` the
output shape will be set to
(batch_size, -1)
which will flatten all the inputs to vectors.
"""
self.output_shape = (-1,) if output_shape is None else output_shape
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
return inputs.reshape((inputs.shape[0],) + self.output_shape)
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
return grads_wrt_outputs.reshape(inputs.shape)
def __repr__(self):
return 'ReshapeLayer(output_shape={0})'.format(self.output_shape)
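# Illustrative sketch (not part of the original module): with the default
# `output_shape=None` the layer flattens each example to a vector, and
# `bprop` restores the original input shape.
def _demo_reshape_layer():
    layer = ReshapeLayer()
    images = np.zeros((4, 3, 8, 8))
    flat = layer.fprop(images)
    assert flat.shape == (4, 192)
    assert layer.bprop(images, flat, flat).shape == images.shape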
class ReluLayer(Layer):
"""Layer implementing an element-wise rectified linear transformation."""
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
For inputs `x` and outputs `y` this corresponds to `y = max(0, x)`.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
return np.maximum(inputs, 0.)
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
return (outputs > 0) * grads_wrt_outputs
def __repr__(self):
return 'ReluLayer'
class EluLayer(Layer):
"""Layer implementing an element-wise exponential linear transformation as
described in https://arxiv.org/abs/1511.07289."""
def __init__(self, alpha=1):
self.alpha = alpha
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
result = inputs.copy()
mask = inputs <= 0
result[mask] = self.alpha * (np.exp(inputs[mask]) - 1)
return result
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
result = np.ones(inputs.shape)
mask = inputs <= 0
# For x <= 0 the derivative of alpha * (exp(x) - 1) is alpha * exp(x),
# evaluated at the layer inputs (this also keeps SeluLayer's inherited,
# scaled derivative consistent).
result[mask] = self.alpha * np.exp(inputs[mask])
return result * grads_wrt_outputs
def __repr__(self):
return f'EluLayer(alpha={self.alpha:.3f})'
class SeluLayer(EluLayer):
"""Layer implementing an element-wise scaled exponential linear
transformation as described in https://arxiv.org/abs/1706.02515."""
# pre-defined constants
LAMBDA = 1.05070098
ALPHA = 1.67326324
def __init__(self):
super().__init__(self.ALPHA)
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
return super().fprop(inputs) * self.LAMBDA
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
return super().bprop(inputs, outputs, grads_wrt_outputs) * self.LAMBDA
def __repr__(self):
return 'SeluLayer'
class GeluLayer(Layer):
"""Layer implementing an element-wise gaussian error linear transformation
as described in https://arxiv.org/abs/1606.08415."""
# Store some constants to avoid recomputing them
SQRT_2 = np.sqrt(2)
SQRT_2_DIV_PI = np.sqrt(2 / np.pi)
SQRT_2_TIMES_PI = np.sqrt(2 * np.pi)
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
For inputs `x` and outputs `y` this corresponds to
`y = 0.5 * x * (1 + erf(x / sqrt(2)))`.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
return 0.5 * inputs * (1 + special.erf(inputs / self.SQRT_2))
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
# d/dx GELU(x) = Phi(x) + x * phi(x), with Phi the standard normal CDF
# and phi its density, evaluated at the layer inputs.
return ((1 + special.erf(inputs / self.SQRT_2)) / 2 + inputs /
self.SQRT_2_TIMES_PI * np.exp(-inputs ** 2 / 2)) * grads_wrt_outputs
def __repr__(self):
return 'GeluLayer'
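# Illustrative sketch (not part of the original module): SQRT_2_DIV_PI is
# the constant appearing in the common tanh approximation to GELU from the
# same paper; the helper below shows that approximation for comparison
# with the exact erf-based `fprop` above (agreement is only approximate).
def _demo_gelu_tanh_approximation(inputs):
    return 0.5 * inputs * (1 + np.tanh(
        GeluLayer.SQRT_2_DIV_PI * (inputs + 0.044715 * inputs ** 3)))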
class IsrluLayer(Layer):
"""Layer implementing an element-wise inverse square root linear
transformation as described in https://arxiv.org/abs/1710.09967."""
def __init__(self, alpha=1):
self.alpha = alpha
def fprop(self, inputs):
"""Forward propagates activations through the layer transformation.
For inputs `x` and outputs `y` this corresponds to `y = x` for `x > 0`
and `y = x / sqrt(1 + alpha * x**2)` otherwise.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
result = inputs.copy()
mask = inputs <= 0
result[mask] = inputs[mask] / \
np.sqrt(1 + self.alpha * inputs[mask] ** 2)
return result
def bprop(self, inputs, outputs, grads_wrt_outputs):
"""Back propagates gradients through a layer.
Given gradients with respect to the outputs of the layer calculates the
gradients with respect to the layer inputs.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
outputs: Array of layer outputs calculated in forward pass of
shape (batch_size, output_dim).
grads_wrt_outputs: Array of gradients with respect to the layer
outputs of shape (batch_size, output_dim).
Returns:
Array of gradients with respect to the layer inputs of shape
(batch_size, input_dim).
"""
result = np.ones(inputs.shape)
mask = inputs <= 0
# For x <= 0 the derivative of x / sqrt(1 + alpha * x**2) is
# (1 + alpha * x**2) ** -1.5, evaluated at the layer inputs.
result[mask] = (1 + self.alpha * inputs[mask] ** 2) ** -1.5
return result * grads_wrt_outputs
def __repr__(self):
return f'IsrluLayer(alpha={self.alpha:.3f})'
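# Illustrative sketch (not part of the original module): a central
# finite-difference check that each activation layer's `bprop` matches the
# numerical gradient of its `fprop`.
if __name__ == '__main__':
    rng = np.random.RandomState(0)
    x = rng.randn(5, 7)
    eps = 1e-6
    for layer in [SigmoidLayer(), TanhLayer(), ReluLayer(), EluLayer(),
                  SeluLayer(), GeluLayer(), IsrluLayer()]:
        y = layer.fprop(x)
        analytic = layer.bprop(x, y, np.ones_like(y))
        numeric = (layer.fprop(x + eps) - layer.fprop(x - eps)) / (2 * eps)
        assert np.allclose(analytic, numeric, atol=1e-4), repr(layer)
    print('All activation bprop checks passed.')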
| 36.306649 | 86 | 0.63723 | 3,302 | 26,758 | 5.040279 | 0.087826 | 0.044884 | 0.057682 | 0.076909 | 0.797032 | 0.782551 | 0.763023 | 0.757015 | 0.748002 | 0.742114 | 0 | 0.006189 | 0.293557 | 26,758 | 736 | 87 | 36.355978 | 0.874253 | 0.624337 | 0 | 0.529412 | 0 | 0 | 0.027489 | 0.016937 | 0 | 0 | 0 | 0 | 0 | 1 | 0.317647 | false | 0 | 0.023529 | 0.058824 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
451cd6dc2e059f35fe9504ff637e7440cb62e5f2 | 10,691 | py | Python | registry/alembic/versions/3104643cd4e3_baseline.py | DhivakharVenkatachalam/snet-marketplace-service | 6aee606bc9b00d418caeae26c64deae03792e0ce | [
"MIT"
] | 14 | 2019-02-12T09:14:52.000Z | 2021-03-11T18:42:22.000Z | registry/alembic/versions/3104643cd4e3_baseline.py | prashantramangupta/snet-marketplace-service | 7c293054e4b0207deefecc46defd743c064472a4 | [
"MIT"
] | 1,079 | 2019-01-10T04:31:24.000Z | 2022-03-29T06:16:42.000Z | registry/alembic/versions/3104643cd4e3_baseline.py | prashantramangupta/snet-marketplace-service | 7c293054e4b0207deefecc46defd743c064472a4 | [
"MIT"
] | 20 | 2018-12-18T13:06:41.000Z | 2021-09-17T11:13:01.000Z | """baseline
Revision ID: 3104643cd4e3
Revises:
Create Date: 2020-03-16 21:32:37.522376
"""
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import mysql
# revision identifiers, used by Alembic.
revision = '3104643cd4e3'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('organization',
sa.Column('uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('name', mysql.VARCHAR(length=128), nullable=True),
sa.Column('org_id', mysql.VARCHAR(length=128), nullable=True),
sa.Column('org_type', mysql.VARCHAR(length=128), nullable=True),
sa.Column('origin', mysql.VARCHAR(length=128), nullable=True),
sa.Column('description', mysql.VARCHAR(length=1024), nullable=True),
sa.Column('short_description', mysql.VARCHAR(length=1024), nullable=True),
sa.Column('url', mysql.VARCHAR(length=512), nullable=True),
sa.Column('duns_no', mysql.VARCHAR(length=36), nullable=True),
sa.Column('contacts', mysql.JSON(), nullable=False),
sa.Column('assets', mysql.JSON(), nullable=False),
sa.Column('metadata_ipfs_uri', mysql.VARCHAR(length=255), nullable=True),
sa.PrimaryKeyConstraint('uuid')
)
op.create_table('organization_archive',
sa.Column('row_id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('uuid', mysql.VARCHAR(length=128), nullable=True),
sa.Column('name', mysql.VARCHAR(length=128), nullable=True),
sa.Column('org_id', mysql.VARCHAR(length=128), nullable=True),
sa.Column('org_type', mysql.VARCHAR(length=128), nullable=True),
sa.Column('origin', mysql.VARCHAR(length=128), nullable=True),
sa.Column('description', mysql.VARCHAR(length=1024), nullable=True),
sa.Column('short_description', mysql.VARCHAR(length=1024), nullable=True),
sa.Column('url', mysql.VARCHAR(length=512), nullable=True),
sa.Column('duns_no', mysql.VARCHAR(length=36), nullable=True),
sa.Column('contacts', mysql.JSON(), nullable=False),
sa.Column('assets', mysql.JSON(), nullable=False),
sa.Column('metadata_ipfs_uri', mysql.VARCHAR(length=255), nullable=True),
sa.Column('groups', mysql.JSON(), nullable=False),
sa.Column('org_state', mysql.JSON(), nullable=False),
sa.PrimaryKeyConstraint('row_id')
)
op.create_table('service_review_history',
sa.Column('row_id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('org_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('service_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('service_metadata', mysql.JSON(), nullable=False),
sa.Column('state', mysql.VARCHAR(length=64), nullable=False),
sa.Column('reviewed_by', mysql.VARCHAR(length=128), nullable=True),
sa.Column('reviewed_on', mysql.TIMESTAMP(), nullable=True),
sa.Column('created_on', mysql.TIMESTAMP(), nullable=False),
sa.Column('updated_on', mysql.TIMESTAMP(), nullable=False),
sa.PrimaryKeyConstraint('row_id')
)
op.create_table('group',
sa.Column('row_id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('name', mysql.VARCHAR(length=128), nullable=False),
sa.Column('id', mysql.VARCHAR(length=128), nullable=False),
sa.Column('org_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('payment_address', mysql.VARCHAR(length=128), nullable=True),
sa.Column('payment_config', mysql.JSON(), nullable=False),
sa.Column('status', mysql.VARCHAR(length=128), nullable=True),
sa.ForeignKeyConstraint(['org_uuid'], ['organization.uuid'], onupdate='CASCADE', ondelete='CASCADE'),
sa.PrimaryKeyConstraint('row_id')
)
op.create_table('org_member',
sa.Column('row_id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('invite_code', mysql.VARCHAR(length=128), nullable=True),
sa.Column('org_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('role', mysql.VARCHAR(length=128), nullable=True),
sa.Column('username', mysql.VARCHAR(length=128), nullable=True),
sa.Column('address', mysql.VARCHAR(length=128), nullable=True),
sa.Column('status', mysql.VARCHAR(length=128), nullable=True),
sa.Column('transaction_hash', mysql.VARCHAR(length=128), nullable=True),
sa.Column('invited_on', mysql.TIMESTAMP(), nullable=True),
sa.Column('created_on', mysql.TIMESTAMP(), nullable=True),
sa.Column('updated_on', mysql.TIMESTAMP(), nullable=True),
sa.ForeignKeyConstraint(['org_uuid'], ['organization.uuid'], onupdate='CASCADE', ondelete='CASCADE'),
sa.PrimaryKeyConstraint('row_id')
)
op.create_table('organization_address',
sa.Column('row_id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('org_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('address_type', mysql.VARCHAR(length=64), nullable=True),
sa.Column('street_address', mysql.VARCHAR(length=256), nullable=True),
sa.Column('apartment', mysql.VARCHAR(length=256), nullable=True),
sa.Column('city', mysql.VARCHAR(length=64), nullable=True),
sa.Column('pincode', mysql.VARCHAR(length=64), nullable=True),
sa.Column('state', mysql.VARCHAR(length=64), nullable=True),
sa.Column('country', mysql.VARCHAR(length=64), nullable=True),
sa.Column('created_on', mysql.TIMESTAMP(), nullable=True),
sa.Column('updated_on', mysql.TIMESTAMP(), nullable=True),
sa.ForeignKeyConstraint(['org_uuid'], ['organization.uuid'], onupdate='CASCADE', ondelete='CASCADE'),
sa.PrimaryKeyConstraint('row_id')
)
op.create_table('organization_state',
sa.Column('row_id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('org_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('state', mysql.VARCHAR(length=128), nullable=False),
sa.Column('transaction_hash', mysql.VARCHAR(length=128), nullable=True),
sa.Column('test_transaction_hash', mysql.VARCHAR(length=128), nullable=True),
sa.Column('user_address', mysql.VARCHAR(length=128), nullable=True),
sa.Column('created_by', mysql.VARCHAR(length=128), nullable=False),
sa.Column('created_on', mysql.TIMESTAMP(), nullable=True),
sa.Column('updated_by', mysql.VARCHAR(length=128), nullable=False),
sa.Column('updated_on', mysql.TIMESTAMP(), nullable=True),
sa.Column('approved_by', mysql.VARCHAR(length=128), nullable=True),
sa.Column('approved_on', mysql.TIMESTAMP(), nullable=True),
sa.ForeignKeyConstraint(['org_uuid'], ['organization.uuid'], onupdate='CASCADE', ondelete='CASCADE'),
sa.PrimaryKeyConstraint('row_id')
)
op.create_table('service',
sa.Column('org_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('display_name', mysql.VARCHAR(length=128), nullable=False),
sa.Column('service_id', mysql.VARCHAR(length=128), nullable=True),
sa.Column('metadata_uri', mysql.VARCHAR(length=255), nullable=True),
sa.Column('proto', mysql.JSON(), nullable=False),
sa.Column('short_description', mysql.VARCHAR(length=1024), nullable=False),
sa.Column('description', mysql.VARCHAR(length=1024), nullable=False),
sa.Column('project_url', mysql.VARCHAR(length=512), nullable=True),
sa.Column('assets', mysql.JSON(), nullable=False),
sa.Column('ratings', mysql.JSON(), nullable=False),
sa.Column('ranking', sa.Integer(), nullable=False),
sa.Column('contributors', mysql.JSON(), nullable=False),
sa.Column('tags', mysql.JSON(), nullable=False),
sa.Column('mpe_address', mysql.VARCHAR(length=128), nullable=False),
sa.Column('created_on', mysql.TIMESTAMP(), nullable=False),
sa.Column('updated_on', mysql.TIMESTAMP(), nullable=False),
sa.ForeignKeyConstraint(['org_uuid'], ['organization.uuid'], onupdate='CASCADE', ondelete='CASCADE'),
sa.PrimaryKeyConstraint('uuid')
)
op.create_table('service_group',
sa.Column('row_id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('org_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('service_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('group_id', mysql.VARCHAR(length=128), nullable=False),
sa.Column('group_name', mysql.VARCHAR(length=128), nullable=False),
sa.Column('pricing', mysql.JSON(), nullable=False),
sa.Column('endpoints', mysql.JSON(), nullable=False),
sa.Column('test_endpoints', mysql.JSON(), nullable=False),
sa.Column('daemon_address', mysql.JSON(), nullable=False),
sa.Column('free_calls', sa.Integer(), nullable=False),
sa.Column('free_call_signer_address', mysql.VARCHAR(length=128), nullable=True),
sa.Column('created_on', mysql.TIMESTAMP(), nullable=False),
sa.Column('updated_on', mysql.TIMESTAMP(), nullable=False),
sa.ForeignKeyConstraint(['service_uuid'], ['service.uuid'], onupdate='CASCADE', ondelete='CASCADE'),
sa.PrimaryKeyConstraint('row_id'),
sa.UniqueConstraint('org_uuid', 'service_uuid', 'group_id', name='uq_org_srvc_grp')
)
op.create_table('service_state',
sa.Column('row_id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('org_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('service_uuid', mysql.VARCHAR(length=128), nullable=False),
sa.Column('state', mysql.VARCHAR(length=128), nullable=False),
sa.Column('transaction_hash', mysql.VARCHAR(length=128), nullable=True),
sa.Column('test_transaction_hash', mysql.VARCHAR(length=128), nullable=True),
sa.Column('created_by', mysql.VARCHAR(length=128), nullable=False),
sa.Column('updated_by', mysql.VARCHAR(length=128), nullable=False),
sa.Column('approved_by', mysql.VARCHAR(length=128), nullable=True),
sa.Column('created_on', mysql.TIMESTAMP(), nullable=False),
sa.Column('updated_on', mysql.TIMESTAMP(), nullable=False),
sa.ForeignKeyConstraint(['service_uuid'], ['service.uuid'], onupdate='CASCADE', ondelete='CASCADE'),
sa.PrimaryKeyConstraint('row_id'),
sa.UniqueConstraint('org_uuid', 'service_uuid', name='uq_org_srvc'),
sa.UniqueConstraint('service_uuid')
)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('service_state')
op.drop_table('service_group')
op.drop_table('service')
op.drop_table('organization_state')
op.drop_table('organization_address')
op.drop_table('org_member')
op.drop_table('group')
op.drop_table('service_review_history')
op.drop_table('organization_archive')
op.drop_table('organization')
# ### end Alembic commands ###
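# Illustrative usage (not part of the original migration), assuming a
# standard Alembic setup with this revision on the default branch:
#
#   alembic upgrade 3104643cd4e3   # apply this baseline schema
#   alembic downgrade base         # drop every table created above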
| 54.825641 | 105 | 0.707324 | 1,372 | 10,691 | 5.403061 | 0.095481 | 0.127344 | 0.179684 | 0.164306 | 0.878322 | 0.868339 | 0.820585 | 0.810063 | 0.7414 | 0.638068 | 0 | 0.027218 | 0.113366 | 10,691 | 194 | 106 | 55.108247 | 0.754826 | 0.026003 | 0 | 0.508475 | 0 | 0 | 0.173477 | 0.010601 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011299 | false | 0 | 0.016949 | 0 | 0.028249 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
18969d871a82c36478798889ec697f7c28a62f4f | 110 | py | Python | code/models/road_extraction/BT_RoadNet_model/__init__.py | xueruoyao/FCN-pytorch | a5019da3943f47fa4f7baed3640cdbfeae2d677e | [
"MIT"
] | 1 | 2021-11-16T12:24:43.000Z | 2021-11-16T12:24:43.000Z | code/models/road_extraction/BT_RoadNet_model/__init__.py | xueruoyao/FCN-pytorch | a5019da3943f47fa4f7baed3640cdbfeae2d677e | [
"MIT"
] | null | null | null | code/models/road_extraction/BT_RoadNet_model/__init__.py | xueruoyao/FCN-pytorch | a5019da3943f47fa4f7baed3640cdbfeae2d677e | [
"MIT"
] | null | null | null | from .bt_roadnet import BT_RoadNet
def build_model(in_ch, k, out_ch):
return BT_RoadNet(in_ch, k, out_ch) | 27.5 | 39 | 0.763636 | 22 | 110 | 3.454545 | 0.545455 | 0.355263 | 0.131579 | 0.210526 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145455 | 110 | 4 | 39 | 27.5 | 0.808511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
189ac6f67d8806107be46061af0a43d8b5cf74a3 | 120 | py | Python | discord/enums.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | [
"MIT"
] | null | null | null | discord/enums.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | [
"MIT"
] | null | null | null | discord/enums.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | [
"MIT"
] | null | null | null | from disnake.enums import *
from disnake.enums import __dict__ as __original_dict__
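# Copy every attribute of ``disnake.enums`` (including names that a star
# import would skip) so this module stays a drop-in alias for it.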
locals().update(__original_dict__)
| 24 | 55 | 0.833333 | 16 | 120 | 5.375 | 0.5625 | 0.255814 | 0.372093 | 0.511628 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 120 | 4 | 56 | 30 | 0.796296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
7a16eba4d1a8bc18da88b4004e025495da06644c | 24,026 | py | Python | pystratis/api/coldstaking/coldstaking.py | TjadenFroyda/pyStratis | 9cc7620d7506637f8a2b84003d931eceb36ac5f2 | [
"MIT"
] | 8 | 2021-06-30T20:44:22.000Z | 2021-12-07T14:42:22.000Z | pystratis/api/coldstaking/coldstaking.py | TjadenFroyda/pyStratis | 9cc7620d7506637f8a2b84003d931eceb36ac5f2 | [
"MIT"
] | 2 | 2021-07-01T11:50:18.000Z | 2022-01-25T18:39:49.000Z | pystratis/api/coldstaking/coldstaking.py | TjadenFroyda/pyStratis | 9cc7620d7506637f8a2b84003d931eceb36ac5f2 | [
"MIT"
] | 4 | 2021-07-01T04:36:42.000Z | 2021-09-17T10:54:19.000Z | from decimal import Decimal
from typing import List, Union
from pystratis.api import APIRequest, EndpointRegister, endpoint
from pystratis.api.coldstaking.requestmodels import *
from pystratis.api.coldstaking.responsemodels import *
from pystratis.api.global_responsemodels import UtxoDescriptor, AddressDescriptor
from pystratis.core import ExtPubKey
from pystratis.core.types import Address, Money, hexstr
class ColdStaking(APIRequest, metaclass=EndpointRegister):
"""Implements the coldstaking api endpoints."""
route = '/api/coldstaking'
def __init__(self, **kwargs):
super().__init__(**kwargs)
@endpoint(f'{route}/cold-staking-info')
def info(self, wallet_name: str, **kwargs) -> InfoModel:
"""Gets general information related to cold staking.
Args:
wallet_name (str): The wallet name.
**kwargs: Extra keyword arguments.
Returns:
InfoModel: The cold staking account information for the given wallet.
Raises:
APIError: Error thrown by node API. See message for details.
"""
request_model = InfoRequest(wallet_name=wallet_name)
data = self.get(request_model, **kwargs)
return InfoModel(**data)
@endpoint(f'{route}/cold-staking-account')
def account(self,
wallet_name: str,
wallet_password: str,
is_cold_wallet_account: bool = False,
extpubkey: Union[ExtPubKey, str] = None,
**kwargs) -> AccountModel:
"""Create a cold staking account.
Args:
wallet_name (str): The wallet name.
wallet_password (str): The wallet password.
is_cold_wallet_account (bool, optional): If this account is for a cold wallet. Default=False.
extpubkey (ExtPubKey, str, optional): The extpubkey for the cold wallet.
**kwargs: Extra keyword arguments.
Returns:
AccountModel: Information about the cold staking account.
Raises:
APIError: Error thrown by node API. See message for details.
"""
if extpubkey is not None and isinstance(extpubkey, str):
extpubkey = ExtPubKey(extpubkey)
request_model = AccountRequest(
wallet_name=wallet_name,
wallet_password=wallet_password,
is_cold_wallet_account=is_cold_wallet_account,
extpubkey=extpubkey
)
data = self.post(request_model, **kwargs)
return AccountModel(**data)
@endpoint(f'{route}/cold-staking-address')
def address(self,
wallet_name: str,
is_cold_wallet_address: bool = False,
segwit: bool = False,
**kwargs) -> AddressModel:
"""Gets a cold staking address.
Args:
wallet_name (str): The wallet name.
is_cold_wallet_address (bool, optional): If this address is for a cold wallet. Default=False.
segwit (bool, optional): If this is a segwit address. Default=False.
**kwargs: Extra keyword arguments.
Returns:
AddressModel: Information about the cold staking address.
Raises:
APIError: Error thrown by node API. See message for details.
"""
request_model = AddressRequest(
wallet_name=wallet_name,
is_cold_wallet_address=is_cold_wallet_address,
segwit=segwit
)
data = self.get(request_model, **kwargs)
data['address'] = Address(address=data['address'], network=self._network)
return AddressModel(**data)
@endpoint(f'{route}/setup-cold-staking')
def setup(self,
cold_wallet_address: Union[Address, str],
hot_wallet_address: Union[Address, str],
wallet_name: str,
wallet_account: str,
wallet_password: str,
amount: Union[Money, int, float, Decimal],
fees: Union[Money, int, float, Decimal],
subtract_fee_from_amount: bool = True,
split_count: int = 1,
segwit_change_address: bool = False,
**kwargs) -> SetupModel:
"""Spends funds from a normal wallet addresses to the cold staking script.
Args:
cold_wallet_address (Address, str): The cold wallet address.
hot_wallet_address (Address, str): The hot wallet address.
wallet_name (str): The wallet name.
wallet_account (str): The wallet account.
wallet_password (str): The wallet password.
amount (Money, int, float, Decimal): The amount to send to the cold wallet.
fees (Money, int, float, Decimal): The transaction fee.
subtract_fee_from_amount (bool, optional): If fee should be subtracted from amount. Default=True.
split_count (int, optional): Number of transactions to split over. Default=1.
segwit_change_address (bool, optional): If change address is a segwit address. Default=False.
**kwargs: Extra keyword arguments.
Returns:
SetupModel: The transaction hex for the cold staking setup transaction.
Raises:
APIError: Error thrown by node API. See message for details.
"""
if isinstance(cold_wallet_address, str):
cold_wallet_address = Address(address=cold_wallet_address, network=self._network)
if isinstance(hot_wallet_address, str):
hot_wallet_address = Address(address=hot_wallet_address, network=self._network)
request_model = SetupRequest(
cold_wallet_address=cold_wallet_address,
hot_wallet_address=hot_wallet_address,
wallet_name=wallet_name,
wallet_account=wallet_account,
wallet_password=wallet_password,
amount=Money(amount),
fees=Money(fees),
subtract_fee_from_amount=subtract_fee_from_amount,
split_count=split_count,
segwit_change_address=segwit_change_address
)
data = self.post(request_model, **kwargs)
data['transactionHex'] = hexstr(data['transactionHex'])
return SetupModel(**data)
@endpoint(f'{route}/setup-offline-cold-staking')
def setup_offline(self,
cold_wallet_address: Union[Address, str],
hot_wallet_address: Union[Address, str],
wallet_name: str,
wallet_account: str,
amount: Union[Money, int, float, Decimal],
fees: Union[Money, int, float, Decimal],
subtract_fee_from_amount: bool = True,
split_count: int = 1,
segwit_change_address: bool = False,
**kwargs) -> BuildOfflineSignModel:
"""Creates a cold staking setup transaction in an unsigned state.
Args:
cold_wallet_address (Address, str): The cold wallet address.
hot_wallet_address (Address, str): The hot wallet address.
wallet_name (str): The wallet name.
wallet_account (str): The wallet account.
amount (Money, int, float, Decimal): The amount to send to the cold wallet.
fees (Money, int, float, Decimal): The transaction fee.
subtract_fee_from_amount (bool, optional): If fee should be subtracted from amount. Default=True.
split_count (int, optional): Number of transactions to split over. Default=1.
segwit_change_address (bool, optional): If change address is a segwit address. Default=False.
**kwargs: Extra keyword arguments.
Returns:
BuildOfflineSignModel: The built transaction for signing offline.
Raises:
APIError: Error thrown by node API. See message for details.
"""
if isinstance(cold_wallet_address, str):
cold_wallet_address = Address(address=cold_wallet_address, network=self._network)
if isinstance(hot_wallet_address, str):
hot_wallet_address = Address(address=hot_wallet_address, network=self._network)
request_model = SetupOfflineRequest(
cold_wallet_address=cold_wallet_address,
hot_wallet_address=hot_wallet_address,
wallet_name=wallet_name,
wallet_account=wallet_account,
amount=Money(amount),
fees=Money(fees),
subtract_fee_from_amount=subtract_fee_from_amount,
split_count=split_count,
segwit_change_address=segwit_change_address
)
data = self.post(request_model, **kwargs)
# Build the UtxoDescriptors
data['utxos'] = [UtxoDescriptor(**x) for x in data['utxos']]
# Build the AddressDescriptors
address_descriptors = []
for address_descriptor in data['addresses']:
address_descriptor['address'] = Address(address=address_descriptor['address'], network=self._network)
address_descriptors.append(address_descriptor)
data['addresses'] = [AddressDescriptor(**x) for x in address_descriptors]
return BuildOfflineSignModel(**data)
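    # A sketch of the intended flow (assuming the usual Stratis offline
    # signing workflow): the BuildOfflineSignModel returned above is taken
    # to the offline machine holding the cold wallet, signed there, and
    # the signed transaction hex is broadcast from any online node.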
@endpoint(f'{route}/estimate-cold-staking-setup-tx-fee')
def estimate_setup_tx_fee(self,
cold_wallet_address: Union[Address, str],
hot_wallet_address: Union[Address, str],
wallet_name: str,
wallet_account: str,
wallet_password: str,
amount: Union[Money, int, float, Decimal],
fees: Union[Money, int, float, Decimal],
subtract_fee_from_amount: bool = True,
split_count: int = 1,
segwit_change_address: bool = False,
**kwargs) -> Money:
"""Estimate the cold staking setup tx fee.
Args:
cold_wallet_address (Address, str): The cold wallet address.
hot_wallet_address (Address, str): The hot wallet address.
wallet_name (str): The wallet name.
wallet_account (str): The wallet account.
wallet_password (str): The wallet password.
amount (Money, int, float, Decimal): The amount to send to the cold wallet.
fees (Money, int, float, Decimal): The transaction fee.
subtract_fee_from_amount (bool, optional): If fee should be subtracted from amount. Default=True.
split_count (int, optional): Number of transactions to split over. Default=1.
segwit_change_address (bool, optional): If change address is a segwit address. Default=False.
**kwargs: Extra keyword arguments.
Returns:
Money: The cold staking fee estimate.
Raises:
APIError: Error thrown by node API. See message for details.
"""
if isinstance(cold_wallet_address, str):
cold_wallet_address = Address(address=cold_wallet_address, network=self._network)
if isinstance(hot_wallet_address, str):
hot_wallet_address = Address(address=hot_wallet_address, network=self._network)
request_model = SetupRequest(
cold_wallet_address=cold_wallet_address,
hot_wallet_address=hot_wallet_address,
wallet_name=wallet_name,
wallet_account=wallet_account,
wallet_password=wallet_password,
amount=Money(amount),
fees=Money(fees),
subtract_fee_from_amount=subtract_fee_from_amount,
split_count=split_count,
segwit_change_address=segwit_change_address
)
data = self.post(request_model, **kwargs)
return Money.from_satoshi_units(data)
@endpoint(f'{route}/estimate-offline-cold-staking-setup-tx-fee')
def estimate_offline_setup_tx_fee(self,
cold_wallet_address: Union[Address, str],
hot_wallet_address: Union[Address, str],
wallet_name: str,
wallet_account: str,
amount: Union[Money, int, float, Decimal],
fees: Union[Money, int, float, Decimal],
subtract_fee_from_amount: bool = True,
split_count: int = 1,
segwit_change_address: bool = False,
**kwargs) -> Money:
"""Estimate the cold staking offline setup tx fee.
Args:
cold_wallet_address (Address, str): The cold wallet address.
hot_wallet_address (Address, str): The hot wallet address.
wallet_name (str): The wallet name.
wallet_account (str): The wallet account.
amount (Money, int, float, Decimal): The amount to send to the cold wallet.
fees (Money, int, float, Decimal): The transaction fee.
subtract_fee_from_amount (bool, optional): If fee should be subtracted from amount. Default=True.
split_count (int, optional): Number of transactions to split over. Default=1.
segwit_change_address (bool, optional): If change address is a segwit address. Default=False.
**kwargs: Extra keyword arguments.
Returns:
Money: The offline cold staking fee estimate.
Raises:
APIError: Error thrown by node API. See message for details.
"""
if isinstance(cold_wallet_address, str):
cold_wallet_address = Address(address=cold_wallet_address, network=self._network)
if isinstance(hot_wallet_address, str):
hot_wallet_address = Address(address=hot_wallet_address, network=self._network)
request_model = SetupOfflineRequest(
cold_wallet_address=cold_wallet_address,
hot_wallet_address=hot_wallet_address,
wallet_name=wallet_name,
wallet_account=wallet_account,
amount=Money(amount),
fees=Money(fees),
subtract_fee_from_amount=subtract_fee_from_amount,
split_count=split_count,
segwit_change_address=segwit_change_address
)
data = self.post(request_model, **kwargs)
return Money.from_satoshi_units(data)
@endpoint(f'{route}/cold-staking-withdrawal')
def withdrawal(self,
receiving_address: Union[Address, str],
wallet_name: str,
wallet_password: str,
amount: Union[Money, int, float, Decimal],
fees: Union[Money, int, float, Decimal],
subtract_fee_from_amount: bool = True,
**kwargs) -> WithdrawalModel:
"""Spends funds from the cold staking wallet account back to a normal wallet account.
Args:
receiving_address (Address, str): The receiving address.
wallet_password (str): The wallet password.
wallet_name (str): The wallet name.
amount (Money, int, float, Decimal): The amount to withdraw to the receiving address.
fees (Money, int, float, Decimal): The amount paid in fees.
subtract_fee_from_amount (bool, optional): If fee should be subtracted from amount. Default=True.
**kwargs: Extra keyword arguments.
Returns:
WithdrawalModel: The withdrawal transaction model.
Raises:
APIError: Error thrown by node API. See message for details.
"""
if isinstance(receiving_address, str):
receiving_address = Address(address=receiving_address, network=self._network)
request_model = WithdrawalRequest(
wallet_name=wallet_name,
wallet_password=wallet_password,
receiving_address=receiving_address,
fees=Money(fees),
amount=Money(amount),
subtract_fee_from_amount=subtract_fee_from_amount
)
data = self.post(request_model, **kwargs)
return WithdrawalModel(**data)
@endpoint(f'{route}/offline-cold-staking-withdrawal')
def offline_withdrawal(self,
receiving_address: Union[Address, str],
wallet_name: str,
account_name: str,
amount: Union[Money, int, float, Decimal],
fees: Union[Money, int, float, Decimal],
subtract_fee_from_amount: bool = True,
**kwargs) -> BuildOfflineSignModel:
"""Builds a request to spend funds from a cold staking wallet account back to a normal wallet account.
Args:
receiving_address (Address, str): The receiving address.
wallet_name (str): The wallet name.
account_name (str): The account name.
amount (Money, int, float, Decimal): The amount to withdraw to the receiving address.
fees (Money, int, float, Decimal): The amount paid in fees.
subtract_fee_from_amount (bool, optional): If fee should be subtracted from amount. Default=True.
**kwargs: Extra keyword arguments.
Returns:
BuildOfflineSignModel: The built withdrawal transaction model for offline signing.
Raises:
APIError: Error thrown by node API. See message for details.
"""
if isinstance(receiving_address, str):
receiving_address = Address(address=receiving_address, network=self._network)
request_model = OfflineWithdrawalRequest(
wallet_name=wallet_name,
account_name=account_name,
receiving_address=receiving_address,
fees=Money(fees),
amount=Money(amount),
subtract_fee_from_amount=subtract_fee_from_amount
)
data = self.post(request_model, **kwargs)
# Build the UtxoDescriptors
for i in range(len(data['utxos'])):
data['utxos'][i]['amount'] = Money(data['utxos'][i]['amount'])
data['utxos'] = [UtxoDescriptor(**x) for x in data['utxos']]
# Build the AddressDescriptors
address_descriptors = []
for address_descriptor in data['addresses']:
address_descriptor['address'] = Address(address=address_descriptor['address'], network=self._network)
address_descriptors.append(address_descriptor)
data['addresses'] = [AddressDescriptor(**x) for x in address_descriptors]
return BuildOfflineSignModel(**data)
@endpoint(f'{route}/estimate-offline-cold-staking-withdrawal-tx-fee')
def estimate_offline_withdrawal_tx_fee(self,
wallet_name: str,
account_name: str,
receiving_address: Union[Address, str],
amount: Union[Money, int, float, Decimal],
subtract_fee_from_amount: bool = True,
**kwargs) -> Money:
"""Estimate the fee for an offline cold staking withdrawal transaction.
Args:
wallet_name (str): The wallet name.
account_name (str): The account name.
receiving_address (Address, str): The receiving address.
amount (Money, int, float, Decimal): The amount to withdraw to the receiving address.
subtract_fee_from_amount (bool, optional): If fee should be subtracted from amount. Default=True.
**kwargs: Extra keyword arguments.
Returns:
Money: The estimate for offline withdrawal transaction fee.
Raises:
APIError: Error thrown by node API. See message for details.
"""
if isinstance(receiving_address, str):
receiving_address = Address(address=receiving_address, network=self._network)
request_model = OfflineWithdrawalFeeEstimationRequest(
wallet_name=wallet_name,
account_name=account_name,
receiving_address=receiving_address,
amount=Money(amount),
subtract_fee_from_amount=subtract_fee_from_amount
)
data = self.post(request_model, **kwargs)
return Money.from_satoshi_units(data)
@endpoint(f'{route}/estimate-cold-staking-withdrawal-tx-fee')
def estimate_withdrawal_tx_fee(self,
receiving_address: Union[Address, str],
wallet_name: str,
wallet_password: str,
amount: Union[Money, int, float, Decimal],
fees: Union[Money, int, float, Decimal],
subtract_fee_from_amount: bool = True,
**kwargs) -> Money:
"""Estimate the fee for a cold staking withdrawal transaction.
Args:
receiving_address (Address, str): The receiving address.
wallet_password (str): The wallet password.
wallet_name (str): The wallet name.
amount (Money, int, float, Decimal): The amount to withdraw to the receiving address.
fees (Money, int, float, Decimal): The amount paid in fees.
subtract_fee_from_amount (bool, optional): If fee should be subtracted from amount. Default=True.
**kwargs: Extra keyword arguments.
Returns:
Money: The estimate for the withdrawal transaction fee.
Raises:
APIError: Error thrown by node API. See message for details.
"""
if isinstance(receiving_address, str):
receiving_address = Address(address=receiving_address, network=self._network)
request_model = WithdrawalRequest(
wallet_name=wallet_name,
wallet_password=wallet_password,
receiving_address=receiving_address,
fees=Money(fees),
amount=Money(amount),
subtract_fee_from_amount=subtract_fee_from_amount
)
data = self.post(request_model, **kwargs)
return Money.from_satoshi_units(data)
@endpoint(f'{route}/retrieve-filtered-utxos')
def retrieve_filtered_utxos(self,
wallet_name: str,
wallet_password: str,
wallet_account: str,
trx_hex: hexstr,
broadcast: bool = False,
**kwargs) -> List[hexstr]:
"""Estimate the fee for a cold staking withdrawal transaction.
Args:
wallet_name (str): The wallet name.
wallet_password (str): The wallet password.
wallet_account (str): The wallet account.
trx_hex (hexstr): The transaction id hex.
broadcast (bool): If true, broadcast the transaction to the network after being built. Default=False.
**kwargs: Extra keyword arguments.
Returns:
List[hexstr]: A list of hex encoded coldstaking transactions.
Raises:
APIError: Error thrown by node API. See message for details.
"""
request_model = RetrieveFilteredUTXOsRequest(
wallet_name=wallet_name,
wallet_password=wallet_password,
wallet_account=wallet_account,
trx_hex=trx_hex,
broadcast=broadcast
)
data = self.post(request_model, **kwargs)
return [hexstr(x) for x in data]
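# Illustrative usage (a sketch, not part of the module): in pystratis this
# route is normally reached through a node object; the node class and its
# default constructor arguments below are assumptions for illustration.
#
# >>> from pystratis.nodes import StraxNode
# >>> node = StraxNode()
# >>> info = node.coldstaking.info(wallet_name='mywallet')
# >>> addr = node.coldstaking.address(wallet_name='mywallet',
# ...                                 is_cold_wallet_address=True)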
| 46.834308 | 113 | 0.602722 | 2,536 | 24,026 | 5.513013 | 0.06664 | 0.063229 | 0.043774 | 0.048065 | 0.826193 | 0.804306 | 0.784493 | 0.750304 | 0.736714 | 0.733424 | 0 | 0.00049 | 0.32032 | 24,026 | 512 | 114 | 46.925781 | 0.855664 | 0.34113 | 0 | 0.70922 | 0 | 0 | 0.041767 | 0.0301 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046099 | false | 0.042553 | 0.028369 | 0 | 0.124113 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e19d7496e9e9faaf33022838bab85f52e7e8bd76 | 3,533 | py | Python | tests/test_chiral_rnns.py | raymondyeh07/chirality_nets | d9b4a0ba6347ab6a6126030b9434979fe35a795f | [
"MIT"
] | 28 | 2019-11-20T13:01:02.000Z | 2022-03-22T19:51:48.000Z | tests/test_chiral_rnns.py | raymondyeh07/chirality_nets | d9b4a0ba6347ab6a6126030b9434979fe35a795f | [
"MIT"
] | null | null | null | tests/test_chiral_rnns.py | raymondyeh07/chirality_nets | d9b4a0ba6347ab6a6126030b9434979fe35a795f | [
"MIT"
] | 1 | 2020-10-16T14:51:48.000Z | 2020-10-16T14:51:48.000Z | """Test for Chiral RNN layers, including stacked LSTM and GRU."""
import unittest
import torch
import torch.nn.functional as F
from tests.test_chiral_base import TestChiralBase
from chiral_layers.chiral_lstm import ChiralLstm
from chiral_layers.chiral_gru import ChiralGru
class TestChiralRnns(TestChiralBase):
"""Implements unittests for chiral conv1d layers."""
def test_lstm_equi_group(self):
"""Performs unittest for lstm equivariance."""
print('Tests equivariance of LSTM.')
batch_size = 2
time_size = 1
num_joints = 5
in_dim = 2
out_dim = 2
neg_dim_in = 1
neg_dim_out = 1
sym_groupings = ([2, 2, 1], [2, 2, 1])
# Generate chiral pairs.
x, x_chiral = self._get_input_pairs(batch_size, time_size, num_joints,
in_dim, neg_dim_in, sym_groupings)
# Permute to time in first index.
x = x.permute(-1, 0, 1, 2)
x_chiral = x_chiral.permute(-1, 0, 1, 2)
# Reshape to time x batch x channels.
x = x.view(x.shape[0], x.shape[1], -1)
x_chiral = x_chiral.view(x.shape[0], x.shape[1], -1)
chiral_model = ChiralLstm(num_joints*in_dim, num_joints*in_dim,
num_layers=2, bias=True,
dropout=0.,
sym_groupings=sym_groupings,
neg_dim_in=neg_dim_in, neg_dim_out=neg_dim_out)
y, _ = chiral_model(x)
y_chiral, _ = chiral_model(x_chiral)
# Reshape back to joints, dim representation.
y = y.view(y.shape[0], batch_size, num_joints, -1)
y_chiral = y_chiral.view(y_chiral.shape[0], batch_size, num_joints, -1)
# Permute time back to last dimension.
y = y.permute(1, 2, 3, 0)
y_chiral = y_chiral.permute(1, 2, 3, 0)
# Compare output.
self._checks_chiral_equivariant(y, y_chiral, num_joints, out_dim,
neg_dim_out, sym_groupings[1])
def test_gru_equi_group(self):
"""Performs unittest for gru equivariance."""
print('Tests equivariance of GRU.')
batch_size = 2
time_size = 1
num_joints = 5
in_dim = 2
out_dim = 2
neg_dim_in = 1
neg_dim_out = 1
sym_groupings = ([2, 2, 1], [2, 2, 1])
# Generate chiral pairs.
x, x_chiral = self._get_input_pairs(batch_size, time_size, num_joints,
in_dim, neg_dim_in, sym_groupings)
# Permute to time in first index.
x = x.permute(-1, 0, 1, 2)
x_chiral = x_chiral.permute(-1, 0, 1, 2)
# Reshape to time x batch x channels.
x = x.view(x.shape[0], x.shape[1], -1)
x_chiral = x_chiral.view(x.shape[0], x.shape[1], -1)
chiral_model = ChiralGru(num_joints*in_dim, num_joints*in_dim,
num_layers=2, bias=True,
dropout=0.,
sym_groupings=sym_groupings,
neg_dim_in=neg_dim_in, neg_dim_out=neg_dim_out)
y, _ = chiral_model(x)
y_chiral, _ = chiral_model(x_chiral)
# Reshape back to joints, dim representation.
y = y.view(y.shape[0], batch_size, num_joints, -1)
y_chiral = y_chiral.view(y_chiral.shape[0], batch_size, num_joints, -1)
# Permute time back to last dimension.
y = y.permute(1, 2, 3, 0)
y_chiral = y_chiral.permute(1, 2, 3, 0)
# Compare output.
self._checks_chiral_equivariant(y, y_chiral, num_joints, out_dim,
neg_dim_out, sym_groupings[1])
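# Usage note (an addition, not part of the original file): with the repository
# root on PYTHONPATH, a single equivariance test can be run directly, e.g.:
#   python -m unittest tests.test_chiral_rnns.TestChiralRnns.test_lstm_equi_group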
if __name__ == '__main__':
unittest.main()
| 35.33 | 77 | 0.611661 | 529 | 3,533 | 3.801512 | 0.153119 | 0.047737 | 0.031825 | 0.04177 | 0.81452 | 0.778717 | 0.746892 | 0.746892 | 0.746892 | 0.746892 | 0 | 0.033517 | 0.282196 | 3,533 | 99 | 78 | 35.686869 | 0.759464 | 0.159638 | 0 | 0.776119 | 0 | 0 | 0.020769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029851 | false | 0 | 0.089552 | 0 | 0.134328 | 0.029851 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e1b4866a36ac7a22505e269434f96bbf5103db21 | 143,961 | py | Python | plugins/P2PMarket/wxRavenP2PMarketDesign.py | sLiinuX/wxRaven | a513a029fa1ff2059ee262c524b4b2b45111f1a6 | [
"MIT"
] | 11 | 2021-12-20T15:32:17.000Z | 2022-03-16T03:54:02.000Z | plugins/P2PMarket/wxRavenP2PMarketDesign.py | sLiinuX/wxRaven | a513a029fa1ff2059ee262c524b4b2b45111f1a6 | [
"MIT"
] | 156 | 2021-12-31T21:01:31.000Z | 2022-03-20T21:57:31.000Z | plugins/P2PMarket/wxRavenP2PMarketDesign.py | sLiinuX/wxRaven | a513a029fa1ff2059ee262c524b4b2b45111f1a6 | [
"MIT"
] | 3 | 2022-01-21T14:52:43.000Z | 2022-02-12T05:32:19.000Z | # -*- coding: utf-8 -*-
###########################################################################
## Python code generated with wxFormBuilder (version 3.10.1-0-g8feb16b3)
## http://www.wxformbuilder.org/
##
## PLEASE DO *NOT* EDIT THIS FILE!
###########################################################################
import wx
import wx.xrc
from wxRavenGUI.application.wxcustom.CustomListCtrl import *
import wx.aui
import wx.adv
from wxRavenGUI.application.wxcustom import *
###########################################################################
## Class wxRavenP2PMarket_NewAdDialog
###########################################################################
class wxRavenP2PMarket_NewAdDialog ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 891,579 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer1 = wx.BoxSizer( wx.VERTICAL )
bSizer2 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap1 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer2.Add( self.m_bitmap1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText1 = wx.StaticText( self, wx.ID_ANY, u"Publish a new Ad on P2P Market :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1.Wrap( -1 )
self.m_staticText1.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer2.Add( self.m_staticText1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdFileIPFSHash = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_AdFileIPFSHash.Enable( False )
bSizer2.Add( self.m_AdFileIPFSHash, 1, wx.ALL|wx.EXPAND, 5 )
self.m_toggleAssistant = wx.ToggleButton( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_toggleAssistant.SetValue( True )
bSizer2.Add( self.m_toggleAssistant, 0, wx.ALL, 5 )
bSizer1.Add( bSizer2, 0, wx.EXPAND, 5 )
self.m_assistantPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer55 = wx.BoxSizer( wx.VERTICAL )
self.m_staticline1 = wx.StaticLine( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer55.Add( self.m_staticline1, 0, wx.EXPAND |wx.ALL, 5 )
bSizer3 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap33 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/ravencoin_marketplace_ultrasmall.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer3.Add( self.m_bitmap33, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_radioBox1Choices = [ u"I'm selling - You are offering an asset for sale", u"I want to find - You want to buy an asset", u"I want to trade - You want to exchange an asset for another asset" ]
self.m_radioBox1 = wx.RadioBox( self.m_assistantPanel, wx.ID_ANY, u"Ad Type :", wx.DefaultPosition, wx.DefaultSize, m_radioBox1Choices, 1, wx.RA_SPECIFY_COLS )
self.m_radioBox1.SetSelection( 0 )
bSizer3.Add( self.m_radioBox1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer55.Add( bSizer3, 0, wx.ALIGN_CENTER_HORIZONTAL, 5 )
self.m_staticline2 = wx.StaticLine( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer55.Add( self.m_staticline2, 0, wx.EXPAND |wx.ALL, 5 )
bSizer4 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap2 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/reflog.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer4.Add( self.m_bitmap2, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText2 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"Title :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText2.Wrap( -1 )
self.m_staticText2.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer4.Add( self.m_staticText2, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdTitle = wx.TextCtrl( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer4.Add( self.m_AdTitle, 1, wx.ALL|wx.EXPAND, 5 )
bSizer55.Add( bSizer4, 0, wx.EXPAND, 5 )
bSizer411 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap211 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/browser.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer411.Add( self.m_bitmap211, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText211 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"Website / Gallery / IPFS Page : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText211.Wrap( -1 )
self.m_staticText211.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer411.Add( self.m_staticText211, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdLink = wx.TextCtrl( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer411.Add( self.m_AdLink, 1, wx.ALL|wx.EXPAND, 5 )
bSizer55.Add( bSizer411, 0, wx.EXPAND, 5 )
self.m_staticline3 = wx.StaticLine( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer55.Add( self.m_staticline3, 0, wx.EXPAND |wx.ALL, 5 )
bSizer13 = wx.BoxSizer( wx.HORIZONTAL )
bSizer14 = wx.BoxSizer( wx.VERTICAL )
bSizer16 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap7 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/changelog_obj.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer16.Add( self.m_bitmap7, 0, wx.ALL, 5 )
self.m_staticText8 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"Description :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText8.Wrap( -1 )
self.m_staticText8.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer16.Add( self.m_staticText8, 0, wx.ALL, 5 )
bSizer14.Add( bSizer16, 0, wx.EXPAND, 5 )
self.m_AdDescription = wx.TextCtrl( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_MULTILINE )
self.m_AdDescription.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
self.m_AdDescription.SetMinSize( wx.Size( -1,100 ) )
bSizer14.Add( self.m_AdDescription, 1, wx.ALL|wx.EXPAND, 5 )
bSizer13.Add( bSizer14, 1, wx.EXPAND, 5 )
bSizer141 = wx.BoxSizer( wx.VERTICAL )
bSizer161 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap71 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/changelog_obj.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer161.Add( self.m_bitmap71, 0, wx.ALL, 5 )
self.m_staticText81 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"Tags / Categories / Keywords :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText81.Wrap( -1 )
self.m_staticText81.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer161.Add( self.m_staticText81, 0, wx.ALL, 5 )
bSizer141.Add( bSizer161, 0, wx.EXPAND, 5 )
self.m_AdKeyword = wx.TextCtrl( self.m_assistantPanel, wx.ID_ANY, u"Asset", wx.DefaultPosition, wx.DefaultSize, wx.TE_MULTILINE )
self.m_AdKeyword.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
self.m_AdKeyword.SetMinSize( wx.Size( -1,100 ) )
bSizer141.Add( self.m_AdKeyword, 1, wx.ALL|wx.EXPAND, 5 )
bSizer13.Add( bSizer141, 1, wx.EXPAND, 5 )
bSizer55.Add( bSizer13, 1, wx.EXPAND, 5 )
self.m_staticline31 = wx.StaticLine( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer55.Add( self.m_staticline31, 0, wx.EXPAND |wx.ALL, 5 )
bSizer121 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap20 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer121.Add( self.m_bitmap20, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText71 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"P2P Sell Method :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText71.Wrap( -1 )
self.m_staticText71.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer121.Add( self.m_staticText71, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_txMethodChoices = [ u"Atomic Swap", u"P2SH" ]
self.m_txMethod = wx.Choice( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_txMethodChoices, 0 )
self.m_txMethod.SetSelection( 0 )
bSizer121.Add( self.m_txMethod, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_toggleRawTxDatas = wx.ToggleButton( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 32,-1 ), 0 )
bSizer121.Add( self.m_toggleRawTxDatas, 0, wx.ALL, 5 )
self.m_bpButtonCreateUTXO = wx.BitmapButton( self.m_assistantPanel, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_bpButtonCreateUTXO.SetBitmap( wx.Bitmap( u"res/default_style/normal/new_utxo.png", wx.BITMAP_TYPE_ANY ) )
self.m_bpButtonCreateUTXO.SetToolTip( u"Pre-Create UTXOs (Recommended when trying to sell multiple orders)" )
bSizer121.Add( self.m_bpButtonCreateUTXO, 0, wx.ALL, 5 )
bSizer118 = wx.BoxSizer( wx.VERTICAL )
self.m_staticText56 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText56.Wrap( -1 )
bSizer118.Add( self.m_staticText56, 0, wx.ALL, 5 )
bSizer121.Add( bSizer118, 1, wx.EXPAND, 5 )
bSizer117 = wx.BoxSizer( wx.HORIZONTAL )
self.m_P2PmethodErrorText = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_P2PmethodErrorText.Wrap( -1 )
bSizer117.Add( self.m_P2PmethodErrorText, 0, wx.ALL, 5 )
self.m_bitmap38 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer117.Add( self.m_bitmap38, 0, wx.ALL, 5 )
bSizer121.Add( bSizer117, 0, wx.ALIGN_CENTER_VERTICAL, 5 )
bSizer55.Add( bSizer121, 0, wx.EXPAND, 5 )
self.m_txMethodPanel = wx.Panel( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.Size( -1,150 ), wx.TAB_TRAVERSAL )
self.m_txMethodPanel.SetMinSize( wx.Size( -1,150 ) )
self.m_txMethodPanel.SetMaxSize( wx.Size( -1,150 ) )
bSizer55.Add( self.m_txMethodPanel, 1, wx.EXPAND |wx.ALL, 5 )
self.m_assistantPanel.SetSizer( bSizer55 )
self.m_assistantPanel.Layout()
bSizer55.Fit( self.m_assistantPanel )
bSizer1.Add( self.m_assistantPanel, 1, wx.EXPAND |wx.ALL, 5 )
self.m_staticline3111 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer1.Add( self.m_staticline3111, 0, wx.EXPAND |wx.ALL, 5 )
bSizer4121 = wx.BoxSizer( wx.HORIZONTAL )
bSizer1111 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText2121 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText2121.Wrap( -1 )
self.m_staticText2121.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer1111.Add( self.m_staticText2121, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer4121.Add( bSizer1111, 3, wx.EXPAND, 5 )
bSizer1211 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap121 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon2.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer1211.Add( self.m_bitmap121, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText711 = wx.StaticText( self, wx.ID_ANY, u"P2P Channel Asset :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText711.Wrap( -1 )
self.m_staticText711.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer1211.Add( self.m_staticText711, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AdP2PChannelChoiceChoices = []
self.m_AdP2PChannelChoice = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AdP2PChannelChoiceChoices, 0 )
self.m_AdP2PChannelChoice.SetSelection( 0 )
bSizer1211.Add( self.m_AdP2PChannelChoice, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap16 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer1211.Add( self.m_bitmap16, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer4121.Add( bSizer1211, 2, wx.EXPAND, 5 )
bSizer1.Add( bSizer4121, 0, wx.EXPAND, 5 )
self.m_staticline311 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer1.Add( self.m_staticline311, 0, wx.EXPAND |wx.ALL, 5 )
bSizer22 = wx.BoxSizer( wx.HORIZONTAL )
self.m_PreviewAdBt = wx.Button( self, wx.ID_ANY, u"Preview Ad", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer22.Add( self.m_PreviewAdBt, 0, wx.ALL, 5 )
self.m_GeneraeteAdBt = wx.Button( self, wx.ID_ANY, u"Generate Ad", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_GeneraeteAdBt.Enable( False )
bSizer22.Add( self.m_GeneraeteAdBt, 0, wx.ALL, 5 )
bSizer1.Add( bSizer22, 0, wx.ALIGN_RIGHT, 5 )
self.SetSizer( bSizer1 )
self.Layout()
# Connect Events
self.m_toggleAssistant.Bind( wx.EVT_TOGGLEBUTTON, self.OnWizardButtonToggle )
self.m_radioBox1.Bind( wx.EVT_RADIOBOX, self.OnAdTypeChanged )
self.m_AdTitle.Bind( wx.EVT_TEXT, self.OnTitleChanged )
self.m_AdLink.Bind( wx.EVT_TEXT, self.OnLinkChanged )
self.m_AdDescription.Bind( wx.EVT_TEXT, self.OnDescriptionChanged )
self.m_AdKeyword.Bind( wx.EVT_TEXT, self.OnKeywordChanged )
self.m_txMethod.Bind( wx.EVT_CHOICE, self.OnTxMethodChanged )
self.m_toggleRawTxDatas.Bind( wx.EVT_TOGGLEBUTTON, self.OnToggleRawTxData )
self.m_bpButtonCreateUTXO.Bind( wx.EVT_BUTTON, self.OnCreateUTXODialogClicked )
self.m_AdP2PChannelChoice.Bind( wx.EVT_CHOICE, self.OnP2PChannelChanged )
self.m_PreviewAdBt.Bind( wx.EVT_BUTTON, self.OnPreviewAdButtonClick )
self.m_GeneraeteAdBt.Bind( wx.EVT_BUTTON, self.OnGenerateButtonClick )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnWizardButtonToggle( self, event ):
event.Skip()
def OnAdTypeChanged( self, event ):
event.Skip()
def OnTitleChanged( self, event ):
event.Skip()
def OnLinkChanged( self, event ):
event.Skip()
def OnDescriptionChanged( self, event ):
event.Skip()
def OnKeywordChanged( self, event ):
event.Skip()
def OnTxMethodChanged( self, event ):
event.Skip()
def OnToggleRawTxData( self, event ):
event.Skip()
def OnCreateUTXODialogClicked( self, event ):
event.Skip()
def OnP2PChannelChanged( self, event ):
event.Skip()
def OnPreviewAdButtonClick( self, event ):
event.Skip()
def OnGenerateButtonClick( self, event ):
event.Skip()
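# --- Subclassing sketch (an illustrative addition, not generated code; the
# class name below is an assumption) ---
# wxFormBuilder intends the virtual handlers above to be overridden in a
# derived class rather than by editing this generated file:
class _ExampleNewAdDialog( wxRavenP2PMarket_NewAdDialog ):

    def OnGenerateButtonClick( self, event ):
        # Read the user's input from the generated controls before publishing.
        title = self.m_AdTitle.GetValue()
        description = self.m_AdDescription.GetValue()
        # ... ad-generation logic would go here ...
        event.Skip()  # keep default event processing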
###########################################################################
## Class wxRavenAtomicSwapPanel
###########################################################################
class wxRavenAtomicSwapPanel ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 611,310 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer109 = wx.BoxSizer( wx.VERTICAL )
self.m_panelTxType = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer112 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap211 = wx.StaticBitmap( self.m_panelTxType, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/atomic_swap.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer112.Add( self.m_bitmap211, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText211 = wx.StaticText( self.m_panelTxType, wx.ID_ANY, u"Select a transaction type : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText211.Wrap( -1 )
self.m_staticText211.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer112.Add( self.m_staticText211, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AtomicSwapTypeChoices = [ u"sell", u"buy", u"trade" ]
self.m_AtomicSwapType = wx.Choice( self.m_panelTxType, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AtomicSwapTypeChoices, 0 )
self.m_AtomicSwapType.SetSelection( 0 )
bSizer112.Add( self.m_AtomicSwapType, 1, wx.ALL|wx.EXPAND, 5 )
self.m_panelTxType.SetSizer( bSizer112 )
self.m_panelTxType.Layout()
bSizer112.Fit( self.m_panelTxType )
bSizer109.Add( self.m_panelTxType, 0, wx.EXPAND |wx.ALL, 5 )
self.m_staticline19 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer109.Add( self.m_staticline19, 0, wx.EXPAND |wx.ALL, 5 )
self.m_atomicswapPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer56 = wx.BoxSizer( wx.VERTICAL )
bSizer41 = wx.BoxSizer( wx.HORIZONTAL )
self.m_assetSellPanel = wx.Panel( self.m_atomicswapPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer11 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap21 = wx.StaticBitmap( self.m_assetSellPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset_out.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer11.Add( self.m_bitmap21, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText21 = wx.StaticText( self.m_assetSellPanel, wx.ID_ANY, u"Select an Asset : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText21.Wrap( -1 )
self.m_staticText21.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer11.Add( self.m_staticText21, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AdAssetChoiceChoices = []
self.m_AdAssetChoice = wx.Choice( self.m_assetSellPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AdAssetChoiceChoices, 0 )
self.m_AdAssetChoice.SetSelection( 0 )
bSizer11.Add( self.m_AdAssetChoice, 1, wx.ALL|wx.EXPAND, 5 )
self.m_assetSellPanel.SetSizer( bSizer11 )
self.m_assetSellPanel.Layout()
bSizer11.Fit( self.m_assetSellPanel )
bSizer41.Add( self.m_assetSellPanel, 1, wx.EXPAND |wx.ALL, 0 )
bSizer12 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText7 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Quantity :", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText7.Wrap( -1 )
self.m_staticText7.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer12.Add( self.m_staticText7, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdAssetQt = wx.TextCtrl( self.m_atomicswapPanel, wx.ID_ANY, u"1", wx.DefaultPosition, wx.DefaultSize, wx.TE_RIGHT )
bSizer12.Add( self.m_AdAssetQt, 1, wx.ALL, 5 )
self.m_owningAssetVerifBitmap = wx.StaticBitmap( self.m_atomicswapPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer12.Add( self.m_owningAssetVerifBitmap, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer41.Add( bSizer12, 1, wx.EXPAND, 5 )
bSizer56.Add( bSizer41, 0, wx.EXPAND, 5 )
bSizer412 = wx.BoxSizer( wx.HORIZONTAL )
bSizer111 = wx.BoxSizer( wx.HORIZONTAL )
self.m_assetTradePanel = wx.Panel( self.m_atomicswapPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer113 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap212 = wx.StaticBitmap( self.m_assetTradePanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset_in.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer113.Add( self.m_bitmap212, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText212 = wx.StaticText( self.m_assetTradePanel, wx.ID_ANY, u"Select an Asset : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText212.Wrap( -1 )
self.m_staticText212.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer113.Add( self.m_staticText212, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_WantedAssetText = wx.TextCtrl( self.m_assetTradePanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer113.Add( self.m_WantedAssetText, 1, wx.ALL, 5 )
self.m_bitmap106 = wx.StaticBitmap( self.m_assetTradePanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer113.Add( self.m_bitmap106, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_assetTradePanel.SetSizer( bSizer113 )
self.m_assetTradePanel.Layout()
bSizer113.Fit( self.m_assetTradePanel )
bSizer111.Add( self.m_assetTradePanel, 1, wx.EXPAND |wx.ALL, 0 )
bSizer412.Add( bSizer111, 2, wx.EXPAND, 5 )
bSizer1212 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText712 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Price :", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText712.Wrap( -1 )
self.m_staticText712.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer1212.Add( self.m_staticText712, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdAssetPrice = wx.TextCtrl( self.m_atomicswapPanel, wx.ID_ANY, u"200", wx.DefaultPosition, wx.DefaultSize, wx.TE_RIGHT )
bSizer1212.Add( self.m_AdAssetPrice, 1, wx.ALL, 5 )
bSizer412.Add( bSizer1212, 1, wx.EXPAND, 5 )
bSizer56.Add( bSizer412, 0, wx.EXPAND, 5 )
bSizer144 = wx.BoxSizer( wx.HORIZONTAL )
bSizer145 = wx.BoxSizer( wx.VERTICAL )
self.m_staticText69 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText69.Wrap( -1 )
bSizer145.Add( self.m_staticText69, 0, wx.ALL, 5 )
bSizer144.Add( bSizer145, 1, wx.EXPAND, 5 )
bSizer146 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText70 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Order(s) :", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText70.Wrap( -1 )
self.m_staticText70.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer146.Add( self.m_staticText70, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_orderCount = wx.SpinCtrl( self.m_atomicswapPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.SP_ARROW_KEYS, 1, 1000, 1 )
bSizer146.Add( self.m_orderCount, 0, wx.ALL, 5 )
bSizer144.Add( bSizer146, 1, 0, 5 )
bSizer56.Add( bSizer144, 0, wx.EXPAND, 5 )
bSizer141 = wx.BoxSizer( wx.VERTICAL )
self.m_GenerateSwapTx = wx.Button( self.m_atomicswapPanel, wx.ID_ANY, u"Generate Atomic Swap !", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer141.Add( self.m_GenerateSwapTx, 0, wx.ALL, 5 )
bSizer56.Add( bSizer141, 1, wx.ALIGN_RIGHT, 5 )
self.m_atomicswapPanel.SetSizer( bSizer56 )
self.m_atomicswapPanel.Layout()
bSizer56.Fit( self.m_atomicswapPanel )
bSizer109.Add( self.m_atomicswapPanel, 0, wx.EXPAND |wx.ALL, 5 )
self.m_staticline18 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer109.Add( self.m_staticline18, 0, wx.EXPAND |wx.ALL, 5 )
self.m_detailsPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer142 = wx.BoxSizer( wx.VERTICAL )
self.m_txDatas = wx.TextCtrl( self.m_detailsPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_MULTILINE|wx.TE_READONLY )
self.m_txDatas.SetMinSize( wx.Size( -1,70 ) )
self.m_txDatas.SetMaxSize( wx.Size( -1,70 ) )
bSizer142.Add( self.m_txDatas, 1, wx.ALL|wx.EXPAND, 5 )
self.m_detailsPanel.SetSizer( bSizer142 )
self.m_detailsPanel.Layout()
bSizer142.Fit( self.m_detailsPanel )
bSizer109.Add( self.m_detailsPanel, 0, wx.EXPAND |wx.ALL, 5 )
self.SetSizer( bSizer109 )
self.Layout()
# Connect Events
self.m_AtomicSwapType.Bind( wx.EVT_CHOICE, self.OnSwapTypeChanged )
self.m_AdAssetChoice.Bind( wx.EVT_CHOICE, self.OnAssetChanged )
self.m_AdAssetQt.Bind( wx.EVT_TEXT, self.OnQuantityChanged )
self.m_WantedAssetText.Bind( wx.EVT_TEXT, self.OnWantedAssetChanged )
self.m_AdAssetPrice.Bind( wx.EVT_TEXT, self.OnPriceChanged )
self.m_orderCount.Bind( wx.EVT_SPINCTRL, self.OnOrderCountChange )
self.m_orderCount.Bind( wx.EVT_TEXT, self.OnOrderCountChange )
self.m_GenerateSwapTx.Bind( wx.EVT_BUTTON, self.OnGenerateAtomicSwap )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnSwapTypeChanged( self, event ):
event.Skip()
def OnAssetChanged( self, event ):
event.Skip()
def OnQuantityChanged( self, event ):
event.Skip()
def OnWantedAssetChanged( self, event ):
event.Skip()
def OnPriceChanged( self, event ):
event.Skip()
def OnOrderCountChange( self, event ):
event.Skip()
def OnGenerateAtomicSwap( self, event ):
event.Skip()
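# --- Handler sketch (an illustrative addition; the class name is an assumption) ---
# A derived class can read the panel's controls when the swap is generated:
class _ExampleAtomicSwapPanel( wxRavenAtomicSwapPanel ):

    def OnGenerateAtomicSwap( self, event ):
        swap_type = self.m_AtomicSwapType.GetStringSelection()  # 'sell' / 'buy' / 'trade'
        asset = self.m_AdAssetChoice.GetStringSelection()       # asset offered
        quantity = self.m_AdAssetQt.GetValue()                  # quantity (string)
        price = self.m_AdAssetPrice.GetValue()                  # price (string)
        orders = self.m_orderCount.GetValue()                   # order count (int)
        # ... build the swap transaction here, then display it:
        self.m_txDatas.SetValue("<raw signed partial tx would go here>")
        event.Skip()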
###########################################################################
## Class wxRavenAtomicSwapPanel_NoDetails
###########################################################################
class wxRavenAtomicSwapPanel_NoDetails ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 500,173 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer109 = wx.BoxSizer( wx.VERTICAL )
self.m_panelTxType = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer112 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap211 = wx.StaticBitmap( self.m_panelTxType, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/atomic_swap.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer112.Add( self.m_bitmap211, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText211 = wx.StaticText( self.m_panelTxType, wx.ID_ANY, u"Select a transaction type : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText211.Wrap( -1 )
self.m_staticText211.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer112.Add( self.m_staticText211, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AtomicSwapTypeChoices = [ u"sell", u"buy", u"trade" ]
self.m_AtomicSwapType = wx.Choice( self.m_panelTxType, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AtomicSwapTypeChoices, 0 )
self.m_AtomicSwapType.SetSelection( 0 )
bSizer112.Add( self.m_AtomicSwapType, 1, wx.ALL|wx.EXPAND, 5 )
self.m_panelTxType.SetSizer( bSizer112 )
self.m_panelTxType.Layout()
bSizer112.Fit( self.m_panelTxType )
bSizer109.Add( self.m_panelTxType, 0, wx.EXPAND |wx.ALL, 5 )
self.m_staticline19 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer109.Add( self.m_staticline19, 0, wx.EXPAND |wx.ALL, 5 )
self.m_atomicswapPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer56 = wx.BoxSizer( wx.VERTICAL )
bSizer41 = wx.BoxSizer( wx.HORIZONTAL )
bSizer229 = wx.BoxSizer( wx.HORIZONTAL )
self.m_assetSellPanel = wx.Panel( self.m_atomicswapPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer11 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap21 = wx.StaticBitmap( self.m_assetSellPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset_out.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer11.Add( self.m_bitmap21, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText21 = wx.StaticText( self.m_assetSellPanel, wx.ID_ANY, u"Select an Asset : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText21.Wrap( -1 )
self.m_staticText21.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer11.Add( self.m_staticText21, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AdAssetChoiceChoices = []
self.m_AdAssetChoice = wx.Choice( self.m_assetSellPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AdAssetChoiceChoices, 0 )
self.m_AdAssetChoice.SetSelection( 0 )
bSizer11.Add( self.m_AdAssetChoice, 1, wx.ALL|wx.EXPAND, 5 )
self.m_assetSellPanel.SetSizer( bSizer11 )
self.m_assetSellPanel.Layout()
bSizer11.Fit( self.m_assetSellPanel )
bSizer229.Add( self.m_assetSellPanel, 1, wx.EXPAND |wx.ALL, 0 )
bSizer41.Add( bSizer229, 1, wx.EXPAND, 5 )
bSizer12 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText7 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Quantity :", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText7.Wrap( -1 )
self.m_staticText7.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer12.Add( self.m_staticText7, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdAssetQt = wx.TextCtrl( self.m_atomicswapPanel, wx.ID_ANY, u"1", wx.DefaultPosition, wx.DefaultSize, wx.TE_RIGHT )
bSizer12.Add( self.m_AdAssetQt, 1, wx.ALL, 5 )
bSizer41.Add( bSizer12, 1, wx.EXPAND, 5 )
bSizer56.Add( bSizer41, 0, wx.EXPAND, 5 )
bSizer412 = wx.BoxSizer( wx.HORIZONTAL )
bSizer111 = wx.BoxSizer( wx.HORIZONTAL )
self.m_assetTradePanel = wx.Panel( self.m_atomicswapPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer113 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap212 = wx.StaticBitmap( self.m_assetTradePanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset_in.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer113.Add( self.m_bitmap212, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText212 = wx.StaticText( self.m_assetTradePanel, wx.ID_ANY, u"Select an Asset : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText212.Wrap( -1 )
self.m_staticText212.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer113.Add( self.m_staticText212, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_WantedAssetText = wx.TextCtrl( self.m_assetTradePanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer113.Add( self.m_WantedAssetText, 1, wx.ALL, 5 )
self.m_assetTradePanel.SetSizer( bSizer113 )
self.m_assetTradePanel.Layout()
bSizer113.Fit( self.m_assetTradePanel )
bSizer111.Add( self.m_assetTradePanel, 1, wx.EXPAND |wx.ALL, 0 )
bSizer412.Add( bSizer111, 2, wx.EXPAND, 5 )
bSizer1212 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText712 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Price :", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText712.Wrap( -1 )
self.m_staticText712.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer1212.Add( self.m_staticText712, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdAssetPrice = wx.TextCtrl( self.m_atomicswapPanel, wx.ID_ANY, u"200", wx.DefaultPosition, wx.DefaultSize, wx.TE_RIGHT )
bSizer1212.Add( self.m_AdAssetPrice, 1, wx.ALL, 5 )
bSizer412.Add( bSizer1212, 1, wx.EXPAND, 5 )
bSizer56.Add( bSizer412, 0, wx.EXPAND, 5 )
bSizer144 = wx.BoxSizer( wx.HORIZONTAL )
bSizer145 = wx.BoxSizer( wx.VERTICAL )
self.m_staticText69 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText69.Wrap( -1 )
bSizer145.Add( self.m_staticText69, 0, wx.ALL, 5 )
bSizer144.Add( bSizer145, 1, wx.EXPAND, 5 )
bSizer146 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText70 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Order(s) :", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText70.Wrap( -1 )
self.m_staticText70.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer146.Add( self.m_staticText70, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_orderCount = wx.SpinCtrl( self.m_atomicswapPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.SP_ARROW_KEYS, 1, 1, 1 )
self.m_orderCount.Enable( False )
bSizer146.Add( self.m_orderCount, 0, wx.ALL, 5 )
bSizer144.Add( bSizer146, 1, 0, 5 )
bSizer56.Add( bSizer144, 0, wx.EXPAND, 5 )
bSizer141 = wx.BoxSizer( wx.VERTICAL )
bSizer56.Add( bSizer141, 1, wx.ALIGN_RIGHT, 5 )
self.m_atomicswapPanel.SetSizer( bSizer56 )
self.m_atomicswapPanel.Layout()
bSizer56.Fit( self.m_atomicswapPanel )
bSizer109.Add( self.m_atomicswapPanel, 0, wx.EXPAND |wx.ALL, 5 )
self.SetSizer( bSizer109 )
self.Layout()
# Connect Events
self.m_AtomicSwapType.Bind( wx.EVT_CHOICE, self.OnSwapTypeChanged )
self.m_AdAssetChoice.Bind( wx.EVT_CHOICE, self.OnAssetChanged )
self.m_AdAssetQt.Bind( wx.EVT_TEXT, self.OnQuantityChanged )
self.m_WantedAssetText.Bind( wx.EVT_TEXT, self.OnWantedAssetChanged )
self.m_AdAssetPrice.Bind( wx.EVT_TEXT, self.OnPriceChanged )
self.m_orderCount.Bind( wx.EVT_SPINCTRL, self.OnOrderCountChange )
self.m_orderCount.Bind( wx.EVT_TEXT, self.OnOrderCountChange )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnSwapTypeChanged( self, event ):
event.Skip()
def OnAssetChanged( self, event ):
event.Skip()
def OnQuantityChanged( self, event ):
event.Skip()
def OnWantedAssetChanged( self, event ):
event.Skip()
def OnPriceChanged( self, event ):
event.Skip()
def OnOrderCountChange( self, event ):
event.Skip()
###########################################################################
## Class wxRavenRawTxPanel
###########################################################################
class wxRavenRawTxPanel ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 500,132 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer198 = wx.BoxSizer( wx.VERTICAL )
self.m_rawDatasText = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_MULTILINE )
bSizer198.Add( self.m_rawDatasText, 1, wx.ALL|wx.EXPAND, 5 )
self.SetSizer( bSizer198 )
self.Layout()
# Connect Events
self.m_rawDatasText.Bind( wx.EVT_TEXT, self.OnRawDataChanged )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnRawDataChanged( self, event ):
event.Skip()
###########################################################################
## Class wxRavenP2PMarket_MarketPlaceListingPanel
###########################################################################
class wxRavenP2PMarket_MarketPlaceListingPanel ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 723,537 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer30 = wx.BoxSizer( wx.VERTICAL )
self.m_infoCtrl1 = wx.InfoBar( self )
self.m_infoCtrl1.SetShowHideEffects( wx.SHOW_EFFECT_NONE, wx.SHOW_EFFECT_NONE )
self.m_infoCtrl1.SetEffectDuration( 500 )
bSizer30.Add( self.m_infoCtrl1, 0, wx.ALL|wx.EXPAND, 5 )
bSizer38_Top = wx.BoxSizer( wx.HORIZONTAL )
bSizer40 = wx.BoxSizer( wx.VERTICAL )
self.m_panel1 = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer40.Add( self.m_panel1, 1, wx.EXPAND |wx.ALL, 5 )
bSizer38_Top.Add( bSizer40, 1, wx.EXPAND, 5 )
bSizer37 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap13 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/ravencoin_marketplace_small.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer37.Add( self.m_bitmap13, 1, wx.ALL, 5 )
bSizer38_Top.Add( bSizer37, 0, 0, 5 )
bSizer39 = wx.BoxSizer( wx.VERTICAL )
self.m_panel2 = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer39.Add( self.m_panel2, 1, wx.EXPAND |wx.ALL, 5 )
bSizer38_Top.Add( bSizer39, 1, wx.EXPAND, 5 )
bSizer30.Add( bSizer38_Top, 0, wx.EXPAND, 5 )
bSizer31_Search = wx.BoxSizer( wx.HORIZONTAL )
bSizer33 = wx.BoxSizer( wx.VERTICAL )
bSizer34 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText35 = wx.StaticText( self, wx.ID_ANY, u"P2P Marketplace :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText35.Wrap( -1 )
self.m_staticText35.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer34.Add( self.m_staticText35, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_marketChoiceChoices = [ u"All Marketplaces" ]
self.m_marketChoice = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_marketChoiceChoices, 0 )
self.m_marketChoice.SetSelection( 0 )
bSizer34.Add( self.m_marketChoice, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_searchCtrl1 = wx.SearchCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_searchCtrl1.ShowSearchButton( True )
self.m_searchCtrl1.ShowCancelButton( False )
bSizer34.Add( self.m_searchCtrl1, 1, wx.ALL|wx.EXPAND, 5 )
self.m_toggleBtn2 = wx.ToggleButton( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 32,-1 ), 0 )
bSizer34.Add( self.m_toggleBtn2, 0, wx.ALL, 5 )
bSizer33.Add( bSizer34, 1, wx.EXPAND, 5 )
bSizer31_Search.Add( bSizer33, 1, 0, 5 )
bSizer30.Add( bSizer31_Search, 0, wx.EXPAND, 5 )
self.searchOptionsPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer35 = wx.BoxSizer( wx.HORIZONTAL )
bSizer85 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap26 = wx.StaticBitmap( self.searchOptionsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer85.Add( self.m_bitmap26, 0, wx.ALL, 5 )
self.m_staticText38 = wx.StaticText( self.searchOptionsPanel, wx.ID_ANY, u"Ad Type :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText38.Wrap( -1 )
self.m_staticText38.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer85.Add( self.m_staticText38, 0, wx.ALL, 5 )
m_adTypeFilterChoices = [u"Sell", u"Buy", u"Trade"]
self.m_adTypeFilter = wx.CheckListBox( self.searchOptionsPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_adTypeFilterChoices, 0 )
bSizer85.Add( self.m_adTypeFilter, 0, wx.ALL, 5 )
bSizer35.Add( bSizer85, 0, wx.EXPAND, 5 )
bSizer86 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap27 = wx.StaticBitmap( self.searchOptionsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon2.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer86.Add( self.m_bitmap27, 0, wx.ALL, 5 )
self.m_staticText39 = wx.StaticText( self.searchOptionsPanel, wx.ID_ANY, u"Transaction Type :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText39.Wrap( -1 )
self.m_staticText39.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer86.Add( self.m_staticText39, 0, wx.ALL, 5 )
m_txTypeFilterChoices = [u"Atomic Swap", u"P2SH"]
self.m_txTypeFilter = wx.CheckListBox( self.searchOptionsPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_txTypeFilterChoices, 0 )
bSizer86.Add( self.m_txTypeFilter, 0, wx.ALL, 5 )
bSizer35.Add( bSizer86, 0, wx.EXPAND, 5 )
bSizer861 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap271 = wx.StaticBitmap( self.searchOptionsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/changelog_obj.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer861.Add( self.m_bitmap271, 0, wx.ALL, 5 )
self.m_staticText391 = wx.StaticText( self.searchOptionsPanel, wx.ID_ANY, u"Search Fields :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText391.Wrap( -1 )
self.m_staticText391.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer861.Add( self.m_staticText391, 0, wx.ALL, 5 )
m_AdInformationsFilterChoices = [u"address", u"title", u"asset", u"price_asset", u"desc", u"keywords"]
self.m_AdInformationsFilter = wx.CheckListBox( self.searchOptionsPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AdInformationsFilterChoices, wx.LB_MULTIPLE|wx.LB_NEEDED_SB )
bSizer861.Add( self.m_AdInformationsFilter, 0, wx.ALL, 5 )
bSizer35.Add( bSizer861, 1, wx.EXPAND, 5 )
self.searchOptionsPanel.SetSizer( bSizer35 )
self.searchOptionsPanel.Layout()
bSizer35.Fit( self.searchOptionsPanel )
bSizer30.Add( self.searchOptionsPanel, 0, wx.EXPAND |wx.ALL, 5 )
bSizer101 = wx.BoxSizer( wx.HORIZONTAL )
self.m_feelLuckButton = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_feelLuckButton.SetBitmap( wx.Bitmap( u"res/default_style/normal/feel_lucky.png", wx.BITMAP_TYPE_ANY ) )
bSizer101.Add( self.m_feelLuckButton, 0, wx.ALL, 5 )
self.m_KawButton = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_KawButton.SetBitmap( wx.Bitmap( u"res/default_style/normal/kaw_button.png", wx.BITMAP_TYPE_ANY ) )
bSizer101.Add( self.m_KawButton, 0, wx.ALL, 5 )
bSizer30.Add( bSizer101, 0, wx.ALIGN_CENTER, 5 )
self.m_staticline17 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer30.Add( self.m_staticline17, 0, wx.EXPAND |wx.ALL, 5 )
self.m_marketViewScrollPanel = wx.ScrolledWindow( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.HSCROLL|wx.VSCROLL )
self.m_marketViewScrollPanel.SetScrollRate( 5, 5 )
bSizer57 = wx.BoxSizer( wx.VERTICAL )
self.m_listCtrl1 = wxRavenListCtrl( self.m_marketViewScrollPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LC_AUTOARRANGE|wx.LC_REPORT )
bSizer57.Add( self.m_listCtrl1, 1, wx.ALL|wx.EXPAND, 5 )
self.m_marketViewScrollPanel.SetSizer( bSizer57 )
self.m_marketViewScrollPanel.Layout()
bSizer57.Fit( self.m_marketViewScrollPanel )
bSizer30.Add( self.m_marketViewScrollPanel, 1, wx.EXPAND |wx.ALL, 5 )
self.SetSizer( bSizer30 )
self.Layout()
# Connect Events
self.m_marketChoice.Bind( wx.EVT_CHOICE, self.OnMarketplaceChanged )
self.m_toggleBtn2.Bind( wx.EVT_TOGGLEBUTTON, self.OnToggleFilterButtonClicked )
self.m_adTypeFilter.Bind( wx.EVT_CHECKLISTBOX, self.OnAdTypeFilterChanged )
self.m_txTypeFilter.Bind( wx.EVT_CHECKLISTBOX, self.OnAdTxMethodChanged )
self.m_AdInformationsFilter.Bind( wx.EVT_CHECKLISTBOX, self.OnAdTxMethodChanged )
self.m_feelLuckButton.Bind( wx.EVT_BUTTON, self.OnFeelLuck )
self.m_KawButton.Bind( wx.EVT_BUTTON, self.OnKaw )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnMarketplaceChanged( self, event ):
event.Skip()
def OnToggleFilterButtonClicked( self, event ):
event.Skip()
def OnAdTypeFilterChanged( self, event ):
event.Skip()
def OnAdTxMethodChanged( self, event ):
event.Skip()
def OnFeelLuck( self, event ):
event.Skip()
def OnKaw( self, event ):
event.Skip()
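# --- Population sketch (an illustrative addition; assumes wxRavenListCtrl keeps
# the standard wx.ListCtrl report-mode API) ---
class _ExampleMarketListing( wxRavenP2PMarket_MarketPlaceListingPanel ):

    def _populate( self, ads ):
        # `ads` is assumed to be an iterable of (title, asset, price) tuples.
        self.m_listCtrl1.ClearAll()
        for col, name in enumerate(("Title", "Asset", "Price")):
            self.m_listCtrl1.InsertColumn(col, name)
        for row, (title, asset, price) in enumerate(ads):
            idx = self.m_listCtrl1.InsertItem(row, title)
            self.m_listCtrl1.SetItem(idx, 1, asset)
            self.m_listCtrl1.SetItem(idx, 2, str(price))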
###########################################################################
## Class wxRavenDecodeTxPanel
###########################################################################
class wxRavenDecodeTxPanel ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 567,519 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer230 = wx.BoxSizer( wx.VERTICAL )
self.m_OrderNavigationPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer334 = wx.BoxSizer( wx.HORIZONTAL )
self.m_buttonPrevious = wx.BitmapButton( self.m_OrderNavigationPanel, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_buttonPrevious.SetBitmap( wx.Bitmap( u"res/default_style/normal/nav_backward.png", wx.BITMAP_TYPE_ANY ) )
bSizer334.Add( self.m_buttonPrevious, 0, wx.ALL, 5 )
self.m_staticText185 = wx.StaticText( self.m_OrderNavigationPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText185.Wrap( -1 )
bSizer334.Add( self.m_staticText185, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap133 = wx.StaticBitmap( self.m_OrderNavigationPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/order_icon30.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer334.Add( self.m_bitmap133, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText186 = wx.StaticText( self.m_OrderNavigationPanel, wx.ID_ANY, u"ORDER : 1/?", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText186.Wrap( -1 )
self.m_staticText186.SetFont( wx.Font( 11, wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer334.Add( self.m_staticText186, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText187 = wx.StaticText( self.m_OrderNavigationPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText187.Wrap( -1 )
bSizer334.Add( self.m_staticText187, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_buttonNext = wx.BitmapButton( self.m_OrderNavigationPanel, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_buttonNext.SetBitmap( wx.Bitmap( u"res/default_style/normal/nav_forward.png", wx.BITMAP_TYPE_ANY ) )
bSizer334.Add( self.m_buttonNext, 0, wx.ALL, 5 )
self.m_OrderNavigationPanel.SetSizer( bSizer334 )
self.m_OrderNavigationPanel.Layout()
bSizer334.Fit( self.m_OrderNavigationPanel )
bSizer230.Add( self.m_OrderNavigationPanel, 0, wx.EXPAND |wx.ALL, 5 )
self.m_TXDetailsPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer255 = wx.BoxSizer( wx.VERTICAL )
bSizer231 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap71 = wx.StaticBitmap( self.m_TXDetailsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/unknown_user.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer231.Add( self.m_bitmap71, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText111 = wx.StaticText( self.m_TXDetailsPanel, wx.ID_ANY, u"Origin", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText111.Wrap( -1 )
self.m_staticText111.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer231.Add( self.m_staticText111, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_mineText = wx.TextCtrl( self.m_TXDetailsPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
bSizer231.Add( self.m_mineText, 2, wx.ALL|wx.EXPAND, 5 )
bSizer255.Add( bSizer231, 0, wx.EXPAND, 5 )
bSizer2311 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmapStatus = wx.StaticBitmap( self.m_TXDetailsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer2311.Add( self.m_bitmapStatus, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText1111 = wx.StaticText( self.m_TXDetailsPanel, wx.ID_ANY, u"Status", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1111.Wrap( -1 )
self.m_staticText1111.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer2311.Add( self.m_staticText1111, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_StatusText = wx.TextCtrl( self.m_TXDetailsPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
bSizer2311.Add( self.m_StatusText, 2, wx.ALL|wx.EXPAND, 5 )
bSizer255.Add( bSizer2311, 0, wx.EXPAND, 5 )
bSizer2312 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap721 = wx.StaticBitmap( self.m_TXDetailsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer2312.Add( self.m_bitmap721, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText1112 = wx.StaticText( self.m_TXDetailsPanel, wx.ID_ANY, u"Type", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1112.Wrap( -1 )
self.m_staticText1112.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer2312.Add( self.m_staticText1112, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_TypeText = wx.TextCtrl( self.m_TXDetailsPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
bSizer2312.Add( self.m_TypeText, 2, wx.ALL|wx.EXPAND, 5 )
bSizer255.Add( bSizer2312, 0, wx.EXPAND, 5 )
bSizer2313 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap722 = wx.StaticBitmap( self.m_TXDetailsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer2313.Add( self.m_bitmap722, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText1113 = wx.StaticText( self.m_TXDetailsPanel, wx.ID_ANY, u"Asset", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1113.Wrap( -1 )
self.m_staticText1113.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer2313.Add( self.m_staticText1113, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AssetText = wx.TextCtrl( self.m_TXDetailsPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
bSizer2313.Add( self.m_AssetText, 2, wx.ALL|wx.EXPAND, 5 )
bSizer255.Add( bSizer2313, 0, wx.EXPAND, 5 )
bSizer2314 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap723 = wx.StaticBitmap( self.m_TXDetailsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/supply_2.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer2314.Add( self.m_bitmap723, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText1114 = wx.StaticText( self.m_TXDetailsPanel, wx.ID_ANY, u"Quantity", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1114.Wrap( -1 )
self.m_staticText1114.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer2314.Add( self.m_staticText1114, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_QuantityText = wx.TextCtrl( self.m_TXDetailsPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
bSizer2314.Add( self.m_QuantityText, 2, wx.ALL|wx.EXPAND, 5 )
bSizer255.Add( bSizer2314, 0, wx.EXPAND, 5 )
bSizer2315 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap724 = wx.StaticBitmap( self.m_TXDetailsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/ravencoin.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer2315.Add( self.m_bitmap724, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText1115 = wx.StaticText( self.m_TXDetailsPanel, wx.ID_ANY, u"Price", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1115.Wrap( -1 )
self.m_staticText1115.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer2315.Add( self.m_staticText1115, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_PriceText = wx.TextCtrl( self.m_TXDetailsPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
bSizer2315.Add( self.m_PriceText, 2, wx.ALL|wx.EXPAND, 5 )
bSizer255.Add( bSizer2315, 0, wx.EXPAND, 5 )
bSizer23151 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmapUTXO = wx.StaticBitmap( self.m_TXDetailsPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/raw_datas.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer23151.Add( self.m_bitmapUTXO, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText11151 = wx.StaticText( self.m_TXDetailsPanel, wx.ID_ANY, u"UTXO", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText11151.Wrap( -1 )
self.m_staticText11151.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer23151.Add( self.m_staticText11151, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_UTXOText = wx.TextCtrl( self.m_TXDetailsPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
bSizer23151.Add( self.m_UTXOText, 2, wx.ALL|wx.EXPAND, 5 )
bSizer255.Add( bSizer23151, 0, wx.EXPAND, 5 )
self.m_TXDetailsPanel.SetSizer( bSizer255 )
self.m_TXDetailsPanel.Layout()
bSizer255.Fit( self.m_TXDetailsPanel )
bSizer230.Add( self.m_TXDetailsPanel, 0, wx.EXPAND |wx.ALL, 5 )
self.m_ErrorMsgPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer257 = wx.BoxSizer( wx.VERTICAL )
bSizer258 = wx.BoxSizer( wx.VERTICAL )
bSizer259 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap89 = wx.StaticBitmap( self.m_ErrorMsgPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/error_tsk.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer259.Add( self.m_bitmap89, 0, wx.ALL, 5 )
self.m_staticText125 = wx.StaticText( self.m_ErrorMsgPanel, wx.ID_ANY, u"ERROR !", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText125.Wrap( -1 )
self.m_staticText125.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer259.Add( self.m_staticText125, 0, wx.ALL, 5 )
bSizer258.Add( bSizer259, 0, wx.ALIGN_CENTER, 5 )
self.m_ErrorDetails = wx.StaticText( self.m_ErrorMsgPanel, wx.ID_ANY, u"Error : Invalid Transaction", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_ErrorDetails.Wrap( -1 )
bSizer258.Add( self.m_ErrorDetails, 0, wx.ALL, 5 )
bSizer257.Add( bSizer258, 0, wx.ALIGN_CENTER_HORIZONTAL, 5 )
self.m_ErrorMsgPanel.SetSizer( bSizer257 )
self.m_ErrorMsgPanel.Layout()
bSizer257.Fit( self.m_ErrorMsgPanel )
bSizer230.Add( self.m_ErrorMsgPanel, 0, wx.EXPAND |wx.ALL, 5 )
self.m_TXInputPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.Size( -1,-1 ), wx.TAB_TRAVERSAL )
self.m_TXInputPanel.SetMaxSize( wx.Size( -1,100 ) )
bSizer256 = wx.BoxSizer( wx.VERTICAL )
bSizer23152 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmapPartial = wx.StaticBitmap( self.m_TXInputPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/raw_datas.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer23152.Add( self.m_bitmapPartial, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText11152 = wx.StaticText( self.m_TXInputPanel, wx.ID_ANY, u"Signed Partial", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText11152.Wrap( -1 )
self.m_staticText11152.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer23152.Add( self.m_staticText11152, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_SignedPartialText = wx.TextCtrl( self.m_TXInputPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( -1,-1 ), wx.TE_MULTILINE )
self.m_SignedPartialText.SetMinSize( wx.Size( 325,60 ) )
bSizer23152.Add( self.m_SignedPartialText, 2, wx.ALL|wx.EXPAND, 5 )
bSizer256.Add( bSizer23152, 1, wx.EXPAND, 5 )
self.m_TXInputPanel.SetSizer( bSizer256 )
self.m_TXInputPanel.Layout()
bSizer256.Fit( self.m_TXInputPanel )
bSizer230.Add( self.m_TXInputPanel, 0, wx.EXPAND |wx.ALL, 5 )
self.m_InteractionPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer312 = wx.BoxSizer( wx.HORIZONTAL )
bSizer313 = wx.BoxSizer( wx.HORIZONTAL )
self.m_CloseButtonOLD = wx.Button( self.m_InteractionPanel, wx.ID_ANY, u"Close", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_CloseButtonOLD.Hide()
bSizer313.Add( self.m_CloseButtonOLD, 1, wx.ALL, 5 )
self.m_completeButtonOLD = wx.Button( self.m_InteractionPanel, wx.ID_ANY, u"Complete Tx", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_completeButtonOLD.Enable( False )
self.m_completeButtonOLD.Hide()
bSizer313.Add( self.m_completeButtonOLD, 1, wx.ALL, 5 )
self.m_CloseButton = wx.BitmapButton( self.m_InteractionPanel, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_CloseButton.SetBitmap( wx.Bitmap( u"res/default_style/normal/close_button.png", wx.BITMAP_TYPE_ANY ) )
self.m_CloseButton.SetMinSize( wx.Size( -1,40 ) )
bSizer313.Add( self.m_CloseButton, 0, wx.ALL, 5 )
self.m_completeButton = wx.BitmapButton( self.m_InteractionPanel, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_completeButton.SetBitmap( wx.Bitmap( u"res/default_style/normal/complete_order_button.png", wx.BITMAP_TYPE_ANY ) )
self.m_completeButton.Enable( False )
self.m_completeButton.SetMinSize( wx.Size( -1,40 ) )
bSizer313.Add( self.m_completeButton, 1, wx.ALL, 5 )
bSizer312.Add( bSizer313, 0, wx.ALL, 5 )
self.m_InteractionPanel.SetSizer( bSizer312 )
self.m_InteractionPanel.Layout()
bSizer312.Fit( self.m_InteractionPanel )
bSizer230.Add( self.m_InteractionPanel, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
self.SetSizer( bSizer230 )
self.Layout()
# Connect Events
self.m_buttonPrevious.Bind( wx.EVT_BUTTON, self.OnPreviousOrder )
self.m_buttonNext.Bind( wx.EVT_BUTTON, self.OnNextOrder )
self.m_SignedPartialText.Bind( wx.EVT_TEXT, self.OnRawDataInputChanged )
self.m_CloseButtonOLD.Bind( wx.EVT_BUTTON, self.OnCloseParent )
self.m_completeButtonOLD.Bind( wx.EVT_BUTTON, self.OnCompleteTx )
self.m_CloseButton.Bind( wx.EVT_BUTTON, self.OnCloseParent )
self.m_completeButton.Bind( wx.EVT_BUTTON, self.OnCompleteTx )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnPreviousOrder( self, event ):
event.Skip()
def OnNextOrder( self, event ):
event.Skip()
def OnRawDataInputChanged( self, event ):
event.Skip()
def OnCloseParent( self, event ):
event.Skip()
def OnCompleteTx( self, event ):
event.Skip()
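# Illustrative sketch, not generated code: a derived class would typically
# override OnRawDataInputChanged above so the complete button only becomes
# active once the signed-partial field holds text, e.g.:
#
#     def OnRawDataInputChanged( self, event ):
#         has_data = bool( self.m_SignedPartialText.GetValue().strip() )
#         self.m_completeButton.Enable( has_data )
#         event.Skip()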
###########################################################################
## Class wxRavenP2PMarket_CreateUTXO
###########################################################################
class wxRavenP2PMarket_CreateUTXO ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 513,144 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer329 = wx.BoxSizer( wx.VERTICAL )
bSizer330 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap126 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer330.Add( self.m_bitmap126, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText178 = wx.StaticText( self, wx.ID_ANY, u"UTXO Asset :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText178.Wrap( -1 )
bSizer330.Add( self.m_staticText178, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AssetChoiceChoices = []
self.m_AssetChoice = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AssetChoiceChoices, 0 )
self.m_AssetChoice.SetSelection( 0 )
bSizer330.Add( self.m_AssetChoice, 1, wx.ALL, 5 )
bSizer329.Add( bSizer330, 0, wx.EXPAND, 5 )
bSizer333 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap129 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer333.Add( self.m_bitmap129, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText182 = wx.StaticText( self, wx.ID_ANY, u"Available : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText182.Wrap( -1 )
self.m_staticText182.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer333.Add( self.m_staticText182, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_availableText = wx.StaticText( self, wx.ID_ANY, u"0.0", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_availableText.Wrap( -1 )
bSizer333.Add( self.m_availableText, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer329.Add( bSizer333, 0, wx.ALIGN_RIGHT, 5 )
bSizer332 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap127 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/supply_2.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer332.Add( self.m_bitmap127, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText179 = wx.StaticText( self, wx.ID_ANY, u"Amount :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText179.Wrap( -1 )
bSizer332.Add( self.m_staticText179, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AssetAmount = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer332.Add( self.m_AssetAmount, 0, wx.ALL, 5 )
self.m_staticText184 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText184.Wrap( -1 )
bSizer332.Add( self.m_staticText184, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap128 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/formula.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer332.Add( self.m_bitmap128, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText180 = wx.StaticText( self, wx.ID_ANY, u"UTXO's :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText180.Wrap( -1 )
bSizer332.Add( self.m_staticText180, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_UTXOcount = wx.SpinCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.SP_ARROW_KEYS, 1, 1000, 1 )
bSizer332.Add( self.m_UTXOcount, 0, wx.ALL, 5 )
bSizer329.Add( bSizer332, 0, wx.EXPAND, 5 )
bSizer331 = wx.BoxSizer( wx.HORIZONTAL )
self.m_CreateUTXOButton = wx.Button( self, wx.ID_ANY, u"Create !", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer331.Add( self.m_CreateUTXOButton, 0, wx.ALL, 5 )
bSizer329.Add( bSizer331, 0, wx.ALIGN_RIGHT, 5 )
self.SetSizer( bSizer329 )
self.Layout()
# Connect Events
self.m_AssetChoice.Bind( wx.EVT_CHOICE, self.OnAssetChanged )
self.m_AssetAmount.Bind( wx.EVT_TEXT, self.OnAmountChanged )
self.m_UTXOcount.Bind( wx.EVT_SPINCTRL, self.OnUTXOChanged )
self.m_CreateUTXOButton.Bind( wx.EVT_BUTTON, self.OnClickCreateUTXO )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnAssetChanged( self, event ):
event.Skip()
def OnAmountChanged( self, event ):
event.Skip()
def OnUTXOChanged( self, event ):
event.Skip()
def OnClickCreateUTXO( self, event ):
event.Skip()
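# Illustrative usage sketch (an assumption, not produced by wxFormBuilder):
# derive from the panel above and override its virtual handlers. The rule
# shown here -- the amount must be a positive number no larger than the
# balance displayed in m_availableText -- is invented for demonstration.
class _ExampleCreateUTXOPanel( wxRavenP2PMarket_CreateUTXO ):
    def OnAmountChanged( self, event ):
        # Enable the Create button only while the typed amount is valid.
        try:
            amount = float( self.m_AssetAmount.GetValue() )
            available = float( self.m_availableText.GetLabel() )
            valid = 0 < amount <= available
        except ValueError:
            valid = False
        self.m_CreateUTXOButton.Enable( valid )
        event.Skip()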
###########################################################################
## Class wxRavenP2PMarket_Airdrop
###########################################################################
class wxRavenP2PMarket_Airdrop ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 678,311 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer329 = wx.BoxSizer( wx.VERTICAL )
bSizer330 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap126 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer330.Add( self.m_bitmap126, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText178 = wx.StaticText( self, wx.ID_ANY, u"Airdrop Asset :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText178.Wrap( -1 )
bSizer330.Add( self.m_staticText178, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AssetChoiceChoices = []
self.m_AssetChoice = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AssetChoiceChoices, 0 )
self.m_AssetChoice.SetSelection( 0 )
bSizer330.Add( self.m_AssetChoice, 1, wx.ALL, 5 )
bSizer329.Add( bSizer330, 0, wx.EXPAND, 5 )
bSizer333 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap129 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer333.Add( self.m_bitmap129, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText182 = wx.StaticText( self, wx.ID_ANY, u"Available : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText182.Wrap( -1 )
self.m_staticText182.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer333.Add( self.m_staticText182, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_availableText = wx.StaticText( self, wx.ID_ANY, u"0.0", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_availableText.Wrap( -1 )
bSizer333.Add( self.m_availableText, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer329.Add( bSizer333, 0, wx.ALIGN_RIGHT, 5 )
bSizer332 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap127 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/airdrop_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer332.Add( self.m_bitmap127, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText179 = wx.StaticText( self, wx.ID_ANY, u"Distribute :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText179.Wrap( -1 )
bSizer332.Add( self.m_staticText179, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AssetAmount = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer332.Add( self.m_AssetAmount, 0, wx.ALL, 5 )
self.m_staticText184 = wx.StaticText( self, wx.ID_ANY, u"Asset(s) to :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText184.Wrap( -1 )
bSizer332.Add( self.m_staticText184, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap128 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/formula.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer332.Add( self.m_bitmap128, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_UTXOcount = wx.SpinCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.SP_ARROW_KEYS, 1, 500, 1 )
bSizer332.Add( self.m_UTXOcount, 0, wx.ALL, 5 )
self.m_staticText180 = wx.StaticText( self, wx.ID_ANY, u"Max Winner(s)", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText180.Wrap( -1 )
bSizer332.Add( self.m_staticText180, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer329.Add( bSizer332, 0, wx.EXPAND, 5 )
bSizer348 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText202 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText202.Wrap( -1 )
bSizer348.Add( self.m_staticText202, 1, wx.ALL, 5 )
self.m_checkBox26 = wx.CheckBox( self, wx.ID_ANY, u"Pick Random Winners from the list", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer348.Add( self.m_checkBox26, 0, wx.ALL, 5 )
bSizer329.Add( bSizer348, 0, wx.EXPAND, 5 )
bSizer349 = wx.BoxSizer( wx.VERTICAL )
self.m_filePicker1 = wx.FilePickerCtrl( self, wx.ID_ANY, wx.EmptyString, u"Select a file", u"*.*", wx.DefaultPosition, wx.DefaultSize, wx.FLP_DEFAULT_STYLE )
bSizer349.Add( self.m_filePicker1, 0, wx.ALL|wx.EXPAND, 5 )
m_listBox7Choices = []
self.m_listBox7 = wx.ListBox( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_listBox7Choices, 0 )
bSizer349.Add( self.m_listBox7, 1, wx.ALL|wx.EXPAND, 5 )
bSizer329.Add( bSizer349, 1, wx.EXPAND, 5 )
bSizer331 = wx.BoxSizer( wx.HORIZONTAL )
self.m_CreateUTXOButton_OLD = wx.Button( self, wx.ID_ANY, u"DROP !", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_CreateUTXOButton_OLD.Hide()
bSizer331.Add( self.m_CreateUTXOButton_OLD, 0, wx.ALL, 5 )
self.m_CreateUTXOButton = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_CreateUTXOButton.SetBitmap( wx.Bitmap( u"res/default_style/normal/airdrop_icon_35.png", wx.BITMAP_TYPE_ANY ) )
bSizer331.Add( self.m_CreateUTXOButton, 0, wx.ALL, 5 )
self.m_RocketDrop = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_RocketDrop.SetBitmap( wx.Bitmap( u"res/default_style/normal/rocketdrop_35.png", wx.BITMAP_TYPE_ANY ) )
self.m_RocketDrop.Hide()
bSizer331.Add( self.m_RocketDrop, 0, wx.ALL, 5 )
bSizer329.Add( bSizer331, 0, wx.ALIGN_RIGHT, 5 )
self.SetSizer( bSizer329 )
self.Layout()
# Connect Events
self.m_AssetChoice.Bind( wx.EVT_CHOICE, self.OnAssetChanged )
self.m_AssetAmount.Bind( wx.EVT_TEXT, self.OnAmountChanged )
self.m_UTXOcount.Bind( wx.EVT_SPINCTRL, self.OnUTXOChanged )
self.m_filePicker1.Bind( wx.EVT_FILEPICKER_CHANGED, self.OnFileChanged )
self.m_CreateUTXOButton_OLD.Bind( wx.EVT_BUTTON, self.OnClickCreateUTXO )
self.m_CreateUTXOButton.Bind( wx.EVT_BUTTON, self.OnClickCreateUTXO )
self.m_RocketDrop.Bind( wx.EVT_BUTTON, self.OnRocketDropClicked )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnAssetChanged( self, event ):
event.Skip()
def OnAmountChanged( self, event ):
event.Skip()
def OnUTXOChanged( self, event ):
event.Skip()
def OnFileChanged( self, event ):
event.Skip()
def OnClickCreateUTXO( self, event ):
event.Skip()
def OnRocketDropClicked( self, event ):
event.Skip()
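# Illustrative usage sketch (an assumption, not produced by wxFormBuilder):
# a derived airdrop panel might fill the list box from the picked file,
# expecting one recipient address per line.
class _ExampleAirdropPanel( wxRavenP2PMarket_Airdrop ):
    def OnFileChanged( self, event ):
        # Read the address list and show it in m_listBox7.
        path = self.m_filePicker1.GetPath()
        try:
            with open( path, "r" ) as handle:
                addresses = [ line.strip() for line in handle if line.strip() ]
        except OSError:
            addresses = []
        self.m_listBox7.Set( addresses )
        event.Skip()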
###########################################################################
## Class wxRavenP2PMarket_Advertising
###########################################################################
class wxRavenP2PMarket_Advertising ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 678,311 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer329 = wx.BoxSizer( wx.VERTICAL )
bSizer330 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap126 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer330.Add( self.m_bitmap126, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText178 = wx.StaticText( self, wx.ID_ANY, u"Asset :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText178.Wrap( -1 )
bSizer330.Add( self.m_staticText178, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AssetChoiceChoices = []
self.m_AssetChoice = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AssetChoiceChoices, 0 )
self.m_AssetChoice.SetSelection( 0 )
bSizer330.Add( self.m_AssetChoice, 1, wx.ALL, 5 )
bSizer329.Add( bSizer330, 0, wx.EXPAND, 5 )
bSizer332 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap127 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/mailbox_1.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer332.Add( self.m_bitmap127, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText179 = wx.StaticText( self, wx.ID_ANY, u"Distribution Amount :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText179.Wrap( -1 )
bSizer332.Add( self.m_staticText179, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AssetAmount = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer332.Add( self.m_AssetAmount, 0, wx.ALL, 5 )
self.m_staticText184 = wx.StaticText( self, wx.ID_ANY, u"Unit(s)", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_LEFT )
self.m_staticText184.Wrap( -1 )
bSizer332.Add( self.m_staticText184, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap129 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer332.Add( self.m_bitmap129, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText182 = wx.StaticText( self, wx.ID_ANY, u"Available : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText182.Wrap( -1 )
self.m_staticText182.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer332.Add( self.m_staticText182, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_availableText = wx.StaticText( self, wx.ID_ANY, u"0.0", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_availableText.Wrap( -1 )
bSizer332.Add( self.m_availableText, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer329.Add( bSizer332, 0, wx.EXPAND, 5 )
bSizer349 = wx.BoxSizer( wx.VERTICAL )
self.m_filePicker1 = wx.FilePickerCtrl( self, wx.ID_ANY, wx.EmptyString, u"Select a file", u"*.*", wx.DefaultPosition, wx.DefaultSize, wx.FLP_DEFAULT_STYLE )
bSizer349.Add( self.m_filePicker1, 0, wx.ALL|wx.EXPAND, 5 )
m_listBox7Choices = []
self.m_listBox7 = wx.ListBox( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_listBox7Choices, 0 )
bSizer349.Add( self.m_listBox7, 1, wx.ALL|wx.EXPAND, 5 )
bSizer329.Add( bSizer349, 1, wx.EXPAND, 5 )
self.m_panel37 = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer364 = wx.BoxSizer( wx.VERTICAL )
bSizer367 = wx.BoxSizer( wx.VERTICAL )
self.m_ProgressText = wx.StaticText( self.m_panel37, wx.ID_ANY, u"Progress :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_ProgressText.Wrap( -1 )
bSizer367.Add( self.m_ProgressText, 0, wx.ALL, 5 )
bSizer364.Add( bSizer367, 0, wx.ALIGN_CENTER_HORIZONTAL, 5 )
bSizer366 = wx.BoxSizer( wx.HORIZONTAL )
self.m_gauge1 = wx.Gauge( self.m_panel37, wx.ID_ANY, 100, wx.DefaultPosition, wx.Size( -1,20 ), wx.GA_HORIZONTAL )
self.m_gauge1.SetValue( 0 )
bSizer366.Add( self.m_gauge1, 1, wx.ALL, 5 )
bSizer364.Add( bSizer366, 1, wx.EXPAND, 5 )
self.m_panel37.SetSizer( bSizer364 )
self.m_panel37.Layout()
bSizer364.Fit( self.m_panel37 )
bSizer329.Add( self.m_panel37, 0, wx.EXPAND |wx.ALL, 5 )
bSizer331 = wx.BoxSizer( wx.HORIZONTAL )
self.m_CreateUTXOButton_OLD = wx.Button( self, wx.ID_ANY, u"DROP !", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_CreateUTXOButton_OLD.Hide()
bSizer331.Add( self.m_CreateUTXOButton_OLD, 0, wx.ALL, 5 )
self.m_CreateUTXOButton = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_CreateUTXOButton.SetBitmap( wx.Bitmap( u"res/default_style/normal/airdrop_icon_35.png", wx.BITMAP_TYPE_ANY ) )
self.m_CreateUTXOButton.Hide()
bSizer331.Add( self.m_CreateUTXOButton, 0, wx.ALL, 5 )
self.m_RocketDrop = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_RocketDrop.SetBitmap( wx.Bitmap( u"res/default_style/normal/advertiser_icon_45.png", wx.BITMAP_TYPE_ANY ) )
bSizer331.Add( self.m_RocketDrop, 0, wx.ALL, 5 )
bSizer329.Add( bSizer331, 0, wx.ALIGN_RIGHT, 5 )
self.SetSizer( bSizer329 )
self.Layout()
# Connect Events
self.m_AssetChoice.Bind( wx.EVT_CHOICE, self.OnAssetChanged )
self.m_AssetAmount.Bind( wx.EVT_TEXT, self.OnAmountChanged )
self.m_filePicker1.Bind( wx.EVT_FILEPICKER_CHANGED, self.OnFileChanged )
self.m_CreateUTXOButton_OLD.Bind( wx.EVT_BUTTON, self.OnClickCreateUTXO )
self.m_CreateUTXOButton.Bind( wx.EVT_BUTTON, self.OnClickCreateUTXO )
self.m_RocketDrop.Bind( wx.EVT_BUTTON, self.OnRocketDropClicked )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnAssetChanged( self, event ):
event.Skip()
def OnAmountChanged( self, event ):
event.Skip()
def OnFileChanged( self, event ):
event.Skip()
def OnClickCreateUTXO( self, event ):
event.Skip()
def OnRocketDropClicked( self, event ):
event.Skip()
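# Illustrative usage sketch (an assumption, not produced by wxFormBuilder):
# the advertising panel exposes a gauge, so a derived class might report
# distribution progress like this.
class _ExampleAdvertisingPanel( wxRavenP2PMarket_Advertising ):
    def ShowProgress( self, done, total ):
        # Map done/total onto the 0-100 range configured for m_gauge1.
        percent = int( 100 * done / max( total, 1 ) )
        self.m_gauge1.SetValue( min( percent, 100 ) )
        self.m_ProgressText.SetLabel( u"Progress : %d / %d" % ( done, total ) )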
###########################################################################
## Class wxRavenP2PMarket_MarketPlace_ItemPanel
###########################################################################
class wxRavenP2PMarket_MarketPlace_ItemPanel ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 356,192 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
def __del__( self ):
pass
###########################################################################
## Class wxRavenP2PMarket_Settings
###########################################################################
class wxRavenP2PMarket_Settings ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 465,374 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer74 = wx.BoxSizer( wx.VERTICAL )
bSizer75 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap3 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer75.Add( self.m_bitmap3, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
self.m_staticText7 = wx.StaticText( self, wx.ID_ANY, u"P2P Market (BETA) :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText7.Wrap( -1 )
self.m_staticText7.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer75.Add( self.m_staticText7, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer74.Add( bSizer75, 0, wx.EXPAND, 5 )
bSizer76 = wx.BoxSizer( wx.HORIZONTAL )
self.searchopt_strictmode = wx.CheckBox( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer76.Add( self.searchopt_strictmode, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText10 = wx.StaticText( self, wx.ID_ANY, u"Enable P2P Market Index/Search", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText10.Wrap( -1 )
bSizer76.Add( self.m_staticText10, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer77 = wx.BoxSizer( wx.VERTICAL )
self.m_staticText8 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText8.Wrap( -1 )
bSizer77.Add( self.m_staticText8, 1, wx.ALL|wx.EXPAND, 5 )
bSizer76.Add( bSizer77, 0, 0, 5 )
bSizer74.Add( bSizer76, 0, wx.EXPAND, 5 )
bSizer78 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText9 = wx.StaticText( self, wx.ID_ANY, u"Ads Search Limit", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText9.Wrap( -1 )
bSizer78.Add( self.m_staticText9, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.searchopt_maxresults = wx.TextCtrl( self, wx.ID_ANY, u"500", wx.DefaultPosition, wx.DefaultSize, 0 )
self.searchopt_maxresults.SetMaxLength( 0 )
bSizer78.Add( self.searchopt_maxresults, 0, wx.ALL, 5 )
bSizer74.Add( bSizer78, 0, wx.EXPAND, 5 )
bSizer783 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText93 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText93.Wrap( -1 )
bSizer783.Add( self.m_staticText93, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap137 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/clean_cache.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer783.Add( self.m_bitmap137, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_buttonCleanCache = wx.Button( self, wx.ID_ANY, u"Clear Invalid Cache", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer783.Add( self.m_buttonCleanCache, 0, wx.ALL, 5 )
bSizer74.Add( bSizer783, 0, wx.EXPAND, 5 )
self.m_staticline21 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer74.Add( self.m_staticline21, 0, wx.EXPAND|wx.ALL, 5 )
bSizer781 = wx.BoxSizer( wx.HORIZONTAL )
self.m_forceNetwork = wx.CheckBox( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer781.Add( self.m_forceNetwork, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap25 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/network.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer781.Add( self.m_bitmap25, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText91 = wx.StaticText( self, wx.ID_ANY, u"Force Network (Listing Only) :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText91.Wrap( -1 )
bSizer781.Add( self.m_staticText91, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_NetworkChoiceChoices = []
self.m_NetworkChoice = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_NetworkChoiceChoices, 0 )
self.m_NetworkChoice.SetSelection( 0 )
self.m_NetworkChoice.Enable( False )
bSizer781.Add( self.m_NetworkChoice, 1, wx.ALL, 5 )
bSizer74.Add( bSizer781, 0, wx.EXPAND, 5 )
self.m_staticline211 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer74.Add( self.m_staticline211, 0, wx.EXPAND |wx.ALL, 5 )
bSizer782 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap30 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/filter_ps.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer782.Add( self.m_bitmap30, 0, wx.ALL, 5 )
self.m_staticText92 = wx.StaticText( self, wx.ID_ANY, u"Search Advanced Options", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText92.Wrap( -1 )
self.m_staticText92.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer782.Add( self.m_staticText92, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer74.Add( bSizer782, 0, wx.EXPAND, 5 )
bSizer761 = wx.BoxSizer( wx.HORIZONTAL )
self.searchopt_includeNoneData = wx.CheckBox( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer761.Add( self.searchopt_includeNoneData, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap100 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/empty_datas.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer761.Add( self.m_bitmap100, 0, wx.ALL, 5 )
self.m_staticText101 = wx.StaticText( self, wx.ID_ANY, u"Include None Tx in Listing", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText101.Wrap( -1 )
bSizer761.Add( self.m_staticText101, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer74.Add( bSizer761, 0, wx.EXPAND, 5 )
bSizer7611 = wx.BoxSizer( wx.HORIZONTAL )
self.searchopt_checkTx = wx.CheckBox( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer7611.Add( self.searchopt_checkTx, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap101 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/raw_datas_verified.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer7611.Add( self.m_bitmap101, 0, wx.ALL, 5 )
self.m_staticText1011 = wx.StaticText( self, wx.ID_ANY, u"Verify and display only valid Tx", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1011.Wrap( -1 )
bSizer7611.Add( self.m_staticText1011, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer74.Add( bSizer7611, 0, wx.EXPAND, 5 )
bSizer76111 = wx.BoxSizer( wx.HORIZONTAL )
self.searchopt_OnlyVerifiedSellers = wx.CheckBox( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer76111.Add( self.searchopt_OnlyVerifiedSellers, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap1011 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/trusted_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer76111.Add( self.m_bitmap1011, 0, wx.ALL, 5 )
self.m_staticText10111 = wx.StaticText( self, wx.ID_ANY, u"Only display Trusted Sellers", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText10111.Wrap( -1 )
bSizer76111.Add( self.m_staticText10111, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer74.Add( bSizer76111, 0, wx.EXPAND, 5 )
self.SetSizer( bSizer74 )
self.Layout()
def __del__( self ):
pass
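# Illustrative usage sketch (an assumption, not produced by wxFormBuilder):
# reading the settings widgets above back into a plain dict, falling back
# to the default limit of 500 when the text field is not numeric.
class _ExampleSettingsPanel( wxRavenP2PMarket_Settings ):
    def GetSearchOptions( self ):
        try:
            max_results = int( self.searchopt_maxresults.GetValue() )
        except ValueError:
            max_results = 500
        return {
            "index_enabled": self.searchopt_strictmode.GetValue(),
            "max_results": max_results,
            "include_none_tx": self.searchopt_includeNoneData.GetValue(),
            "verify_tx": self.searchopt_checkTx.GetValue(),
            "trusted_only": self.searchopt_OnlyVerifiedSellers.GetValue(),
        }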
###########################################################################
## Class wxRavenP2PMarket_MyMarketSettings
###########################################################################
class wxRavenP2PMarket_MyMarketSettings ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 527,374 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer74 = wx.BoxSizer( wx.VERTICAL )
bSizer75 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap3 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/my_marketplace.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer75.Add( self.m_bitmap3, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
self.m_staticText7 = wx.StaticText( self, wx.ID_ANY, u"My P2P Marketplace :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText7.Wrap( -1 )
self.m_staticText7.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer75.Add( self.m_staticText7, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer74.Add( bSizer75, 0, wx.EXPAND, 5 )
bSizer78 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap99 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/known_user.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer78.Add( self.m_bitmap99, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText9 = wx.StaticText( self, wx.ID_ANY, u"Announcer Address :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText9.Wrap( -1 )
bSizer78.Add( self.m_staticText9, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AddressChoiceChoices = []
self.m_AddressChoice = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AddressChoiceChoices, 0 )
self.m_AddressChoice.SetSelection( 0 )
bSizer78.Add( self.m_AddressChoice, 1, wx.ALL|wx.EXPAND, 5 )
bSizer74.Add( bSizer78, 0, wx.EXPAND, 5 )
bSizer76 = wx.BoxSizer( wx.HORIZONTAL )
self.m_sameAddressChangeOpt = wx.CheckBox( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer76.Add( self.m_sameAddressChangeOpt, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText10 = wx.StaticText( self, wx.ID_ANY, u"Use same address for changes", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText10.Wrap( -1 )
bSizer76.Add( self.m_staticText10, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_changeAddressChoiceOptChoices = []
self.m_changeAddressChoiceOpt = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_changeAddressChoiceOptChoices, 0 )
self.m_changeAddressChoiceOpt.SetSelection( 0 )
bSizer76.Add( self.m_changeAddressChoiceOpt, 1, wx.ALL|wx.EXPAND, 5 )
bSizer74.Add( bSizer76, 0, wx.EXPAND, 5 )
bSizer782 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap991 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/atomic_swap.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer782.Add( self.m_bitmap991, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText92 = wx.StaticText( self, wx.ID_ANY, u"Atomic Swap Address :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText92.Wrap( -1 )
bSizer782.Add( self.m_staticText92, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AddressSwapChoices = []
self.m_AddressSwap = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AddressSwapChoices, 0 )
self.m_AddressSwap.SetSelection( 0 )
bSizer782.Add( self.m_AddressSwap, 1, wx.ALL|wx.EXPAND, 5 )
bSizer74.Add( bSizer782, 0, wx.EXPAND, 5 )
self.m_staticline21 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer74.Add( self.m_staticline21, 0, wx.EXPAND|wx.ALL, 5 )
bSizer781 = wx.BoxSizer( wx.HORIZONTAL )
self.m_defaultListingChanel = wx.CheckBox( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer781.Add( self.m_defaultListingChanel, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText91 = wx.StaticText( self, wx.ID_ANY, u"Default Listing Channel", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText91.Wrap( -1 )
bSizer781.Add( self.m_staticText91, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap25 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon2.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer781.Add( self.m_bitmap25, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_NetworkChoiceChoices = []
self.m_NetworkChoice = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_NetworkChoiceChoices, 0 )
self.m_NetworkChoice.SetSelection( 0 )
self.m_NetworkChoice.Enable( False )
bSizer781.Add( self.m_NetworkChoice, 1, wx.ALL, 5 )
bSizer74.Add( bSizer781, 0, wx.EXPAND, 5 )
self.m_staticline211 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer74.Add( self.m_staticline211, 0, wx.EXPAND |wx.ALL, 5 )
bSizer7811 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap251 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/lock_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer7811.Add( self.m_bitmap251, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_keeplocks = wx.CheckBox( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer7811.Add( self.m_keeplocks, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText911 = wx.StaticText( self, wx.ID_ANY, u"Keep my trades locked", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText911.Wrap( -1 )
bSizer7811.Add( self.m_staticText911, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText165 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText165.Wrap( -1 )
bSizer7811.Add( self.m_staticText165, 1, wx.ALL, 5 )
self.m_bitmap25121 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/import_log.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer7811.Add( self.m_bitmap25121, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_importButton = wx.Button( self, wx.ID_ANY, u"Import", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer7811.Add( self.m_importButton, 0, wx.ALL, 5 )
self.m_bitmap2512 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/clear_co.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer7811.Add( self.m_bitmap2512, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_wipeButton = wx.Button( self, wx.ID_ANY, u"Wipe Session", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer7811.Add( self.m_wipeButton, 0, wx.ALL, 5 )
bSizer74.Add( bSizer7811, 0, wx.EXPAND, 5 )
bSizer78111 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText9111 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText9111.Wrap( -1 )
bSizer78111.Add( self.m_staticText9111, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText1651 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1651.Wrap( -1 )
bSizer78111.Add( self.m_staticText1651, 0, wx.ALL, 5 )
self.m_bitmap2511 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/unlock.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer78111.Add( self.m_bitmap2511, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_unlockAll = wx.Button( self, wx.ID_ANY, u"Unlock all", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer78111.Add( self.m_unlockAll, 0, wx.ALL, 5 )
bSizer74.Add( bSizer78111, 0, wx.EXPAND, 5 )
bSizer781111 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText91111 = wx.StaticText( self, wx.ID_ANY, u"Required if address or channel changed :", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText91111.Wrap( -1 )
self.m_staticText91111.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_ITALIC, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer781111.Add( self.m_staticText91111, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText16511 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText16511.Wrap( -1 )
bSizer781111.Add( self.m_staticText16511, 0, wx.ALL, 5 )
self.m_accountstatusBitmap = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer781111.Add( self.m_accountstatusBitmap, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_initMyMarketPlace = wx.Button( self, wx.ID_ANY, u"Initialize", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer781111.Add( self.m_initMyMarketPlace, 0, wx.ALL, 5 )
bSizer74.Add( bSizer781111, 0, wx.EXPAND, 5 )
self.SetSizer( bSizer74 )
self.Layout()
# Connect Events
self.m_importButton.Bind( wx.EVT_BUTTON, self.OnDoImportTradeSessions )
self.m_wipeButton.Bind( wx.EVT_BUTTON, self.OnDoWipeTradeSessions )
self.m_unlockAll.Bind( wx.EVT_BUTTON, self.OnDoUnlockAll )
self.m_initMyMarketPlace.Bind( wx.EVT_BUTTON, self.OnDoInitMyMarketPlace )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnDoImportTradeSessions( self, event ):
event.Skip()
def OnDoWipeTradeSessions( self, event ):
event.Skip()
def OnDoUnlockAll( self, event ):
event.Skip()
def OnDoInitMyMarketPlace( self, event ):
event.Skip()
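# Illustrative usage sketch (an assumption, not produced by wxFormBuilder):
# a derived class might guard the destructive Wipe Session action behind a
# confirmation dialog before letting the event propagate.
class _ExampleMyMarketSettingsPanel( wxRavenP2PMarket_MyMarketSettings ):
    def OnDoWipeTradeSessions( self, event ):
        dlg = wx.MessageDialog( self, u"Wipe all local trade sessions?",
            u"Wipe Session", wx.YES_NO | wx.ICON_WARNING )
        if dlg.ShowModal() == wx.ID_YES:
            event.Skip() # propagate so the application performs the wipe
        dlg.Destroy()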
###########################################################################
## Class wxRavenP2PMarket_MarketsBookmarks
###########################################################################
class wxRavenP2PMarket_MarketsBookmarks ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 505,374 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer59 = wx.BoxSizer( wx.VERTICAL )
bSizer306 = wx.BoxSizer( wx.VERTICAL )
self.m_staticText159 = wx.StaticText( self, wx.ID_ANY, u"Use the asset or sub-asset complete name : <asset>\nExample : WXRAVEN/P2P_MARKETPLACE", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_CENTER_HORIZONTAL|wx.BORDER_STATIC )
self.m_staticText159.Wrap( -1 )
bSizer306.Add( self.m_staticText159, 1, wx.ALL|wx.EXPAND, 5 )
bSizer59.Add( bSizer306, 0, wx.EXPAND, 5 )
bSizer60 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap4 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon2.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer60.Add( self.m_bitmap4, 0, wx.ALL, 5 )
self.m_staticText12 = wx.StaticText( self, wx.ID_ANY, u"P2P Markets Channels :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText12.Wrap( -1 )
bSizer60.Add( self.m_staticText12, 0, wx.ALL, 5 )
bSizer61 = wx.BoxSizer( wx.VERTICAL )
self.bookmark_text_area = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.bookmark_text_area.SetMaxLength( 0 )
bSizer61.Add( self.bookmark_text_area, 0, wx.ALL|wx.EXPAND, 5 )
bookmark_listChoices = []
self.bookmark_list = wx.ListBox( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, bookmark_listChoices, 0 )
bSizer61.Add( self.bookmark_list, 1, wx.ALL|wx.EXPAND, 5 )
bSizer60.Add( bSizer61, 1, wx.EXPAND, 5 )
bSizer62 = wx.BoxSizer( wx.VERTICAL )
self.bookmark_addbt = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.bookmark_addbt.SetBitmap( wx.Bitmap( u"res/default_style/normal/add_plus.png", wx.BITMAP_TYPE_ANY ) )
bSizer62.Add( self.bookmark_addbt, 0, wx.ALL, 5 )
self.bookmark_rembt = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.bookmark_rembt.SetBitmap( wx.Bitmap( u"res/default_style/normal/remove_minus.png", wx.BITMAP_TYPE_ANY ) )
bSizer62.Add( self.bookmark_rembt, 0, wx.ALL, 5 )
self.ipfs_provider_upbt = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.ipfs_provider_upbt.SetBitmap( wx.Bitmap( u"res/default_style/normal/prev_nav.png", wx.BITMAP_TYPE_ANY ) )
self.ipfs_provider_upbt.Enable( False )
bSizer62.Add( self.ipfs_provider_upbt, 0, wx.ALL, 5 )
bSizer60.Add( bSizer62, 0, wx.EXPAND, 5 )
bSizer63 = wx.BoxSizer( wx.VERTICAL )
bSizer60.Add( bSizer63, 0, wx.EXPAND, 5 )
bSizer59.Add( bSizer60, 1, wx.EXPAND, 5 )
self.SetSizer( bSizer59 )
self.Layout()
def __del__( self ):
pass
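# Illustrative usage sketch (an assumption, not produced by wxFormBuilder):
# the generated panel declares the add/remove buttons but binds no events,
# so a derived class wires them to the text field and list box itself.
class _ExampleBookmarksPanel( wxRavenP2PMarket_MarketsBookmarks ):
    def __init__( self, parent ):
        wxRavenP2PMarket_MarketsBookmarks.__init__( self, parent )
        self.bookmark_addbt.Bind( wx.EVT_BUTTON, self.OnAddBookmark )
        self.bookmark_rembt.Bind( wx.EVT_BUTTON, self.OnRemoveBookmark )
    def OnAddBookmark( self, event ):
        channel = self.bookmark_text_area.GetValue().strip()
        if channel:
            self.bookmark_list.Append( channel )
            self.bookmark_text_area.SetValue( wx.EmptyString )
    def OnRemoveBookmark( self, event ):
        selection = self.bookmark_list.GetSelection()
        if selection != wx.NOT_FOUND:
            self.bookmark_list.Delete( selection )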
###########################################################################
## Class wxRavenP2PMarket_AddressesBlackList
###########################################################################
class wxRavenP2PMarket_AddressesBlackList ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 505,374 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer59 = wx.BoxSizer( wx.VERTICAL )
bSizer306 = wx.BoxSizer( wx.VERTICAL )
self.m_staticText159 = wx.StaticText( self, wx.ID_ANY, u"No special format required : only address\nExample : RDyF4itWbfryV2nM4w2L99oJ4MvNptt82F", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_CENTER_HORIZONTAL|wx.BORDER_STATIC )
self.m_staticText159.Wrap( -1 )
bSizer306.Add( self.m_staticText159, 1, wx.ALL|wx.EXPAND, 5 )
bSizer59.Add( bSizer306, 0, wx.EXPAND, 5 )
bSizer60 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap4 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/blacklist.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer60.Add( self.m_bitmap4, 0, wx.ALL, 5 )
self.m_staticText12 = wx.StaticText( self, wx.ID_ANY, u"Add an address to Blacklist :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText12.Wrap( -1 )
bSizer60.Add( self.m_staticText12, 0, wx.ALL, 5 )
bSizer61 = wx.BoxSizer( wx.VERTICAL )
self.bookmark_text_area = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.bookmark_text_area.SetMaxLength( 0 )
bSizer61.Add( self.bookmark_text_area, 0, wx.ALL|wx.EXPAND, 5 )
bookmark_listChoices = []
self.bookmark_list = wx.ListBox( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, bookmark_listChoices, 0 )
bSizer61.Add( self.bookmark_list, 1, wx.ALL|wx.EXPAND, 5 )
bSizer60.Add( bSizer61, 1, wx.EXPAND, 5 )
bSizer62 = wx.BoxSizer( wx.VERTICAL )
self.bookmark_addbt = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.bookmark_addbt.SetBitmap( wx.Bitmap( u"res/default_style/normal/add_plus.png", wx.BITMAP_TYPE_ANY ) )
bSizer62.Add( self.bookmark_addbt, 0, wx.ALL, 5 )
self.bookmark_rembt = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.bookmark_rembt.SetBitmap( wx.Bitmap( u"res/default_style/normal/remove_minus.png", wx.BITMAP_TYPE_ANY ) )
bSizer62.Add( self.bookmark_rembt, 0, wx.ALL, 5 )
self.ipfs_provider_upbt = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.ipfs_provider_upbt.SetBitmap( wx.Bitmap( u"res/default_style/normal/prev_nav.png", wx.BITMAP_TYPE_ANY ) )
self.ipfs_provider_upbt.Enable( False )
bSizer62.Add( self.ipfs_provider_upbt, 0, wx.ALL, 5 )
bSizer60.Add( bSizer62, 0, wx.EXPAND, 5 )
bSizer63 = wx.BoxSizer( wx.VERTICAL )
bSizer60.Add( bSizer63, 0, wx.EXPAND, 5 )
bSizer59.Add( bSizer60, 1, wx.EXPAND, 5 )
self.SetSizer( bSizer59 )
self.Layout()
def __del__( self ):
pass
###########################################################################
## Class wxRavenP2PMarket_TrustedSellers
###########################################################################
class wxRavenP2PMarket_TrustedSellers ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 505,374 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer59 = wx.BoxSizer( wx.VERTICAL )
bSizer306 = wx.BoxSizer( wx.VERTICAL )
self.m_staticText159 = wx.StaticText( self, wx.ID_ANY, u"Use the format : address = alias\nExample : RDyF4itWbfryV2nM4w2L99oJ4MvNptt82F = RVN Guardian", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_CENTER_HORIZONTAL|wx.BORDER_STATIC )
self.m_staticText159.Wrap( -1 )
bSizer306.Add( self.m_staticText159, 1, wx.ALL|wx.EXPAND, 5 )
bSizer59.Add( bSizer306, 0, wx.EXPAND, 5 )
bSizer60 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap4 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/trusted_peer.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer60.Add( self.m_bitmap4, 0, wx.ALL, 5 )
self.m_staticText12 = wx.StaticText( self, wx.ID_ANY, u"Add a Trusted Peer :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText12.Wrap( -1 )
bSizer60.Add( self.m_staticText12, 0, wx.ALL, 5 )
bSizer61 = wx.BoxSizer( wx.VERTICAL )
self.bookmark_text_area = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.bookmark_text_area.SetMaxLength( 0 )
bSizer61.Add( self.bookmark_text_area, 0, wx.ALL|wx.EXPAND, 5 )
bookmark_listChoices = []
self.bookmark_list = wx.ListBox( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, bookmark_listChoices, 0 )
bSizer61.Add( self.bookmark_list, 1, wx.ALL|wx.EXPAND, 5 )
bSizer60.Add( bSizer61, 1, wx.EXPAND, 5 )
bSizer62 = wx.BoxSizer( wx.VERTICAL )
self.bookmark_addbt = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.bookmark_addbt.SetBitmap( wx.Bitmap( u"res/default_style/normal/add_plus.png", wx.BITMAP_TYPE_ANY ) )
bSizer62.Add( self.bookmark_addbt, 0, wx.ALL, 5 )
self.bookmark_rembt = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.bookmark_rembt.SetBitmap( wx.Bitmap( u"res/default_style/normal/remove_minus.png", wx.BITMAP_TYPE_ANY ) )
bSizer62.Add( self.bookmark_rembt, 0, wx.ALL, 5 )
self.ipfs_provider_upbt = wx.BitmapButton( self, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.ipfs_provider_upbt.SetBitmap( wx.Bitmap( u"res/default_style/normal/prev_nav.png", wx.BITMAP_TYPE_ANY ) )
self.ipfs_provider_upbt.Enable( False )
bSizer62.Add( self.ipfs_provider_upbt, 0, wx.ALL, 5 )
bSizer60.Add( bSizer62, 0, wx.EXPAND, 5 )
bSizer63 = wx.BoxSizer( wx.VERTICAL )
bSizer60.Add( bSizer63, 0, wx.EXPAND, 5 )
bSizer59.Add( bSizer60, 1, wx.EXPAND, 5 )
self.SetSizer( bSizer59 )
self.Layout()
def __del__( self ):
pass
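# Illustrative helper (an assumption, not produced by wxFormBuilder): parsing
# the "address = alias" entries described by the label above.
def _parse_trusted_peer( entry ):
    # Returns (address, alias); the alias is empty when the "=" is omitted.
    address, _, alias = entry.partition( "=" )
    return address.strip(), alias.strip()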
###########################################################################
## Class wxRavenP2PMarket_NewAdDialog_FIRSTDRAFT
###########################################################################
class wxRavenP2PMarket_NewAdDialog_FIRSTDRAFT ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 891,641 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer1 = wx.BoxSizer( wx.VERTICAL )
bSizer2 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap1 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer2.Add( self.m_bitmap1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText1 = wx.StaticText( self, wx.ID_ANY, u"Publish a new Ad on P2P Market :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1.Wrap( -1 )
self.m_staticText1.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer2.Add( self.m_staticText1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdFileIPFSHash = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_AdFileIPFSHash.Enable( False )
bSizer2.Add( self.m_AdFileIPFSHash, 1, wx.ALL|wx.EXPAND, 5 )
self.m_toggleAssistant = wx.ToggleButton( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_toggleAssistant.SetValue( True )
bSizer2.Add( self.m_toggleAssistant, 0, wx.ALL, 5 )
bSizer1.Add( bSizer2, 0, wx.EXPAND, 5 )
self.m_assistantPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer55 = wx.BoxSizer( wx.VERTICAL )
self.m_staticline1 = wx.StaticLine( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer55.Add( self.m_staticline1, 0, wx.EXPAND |wx.ALL, 5 )
bSizer3 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap33 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/ravencoin_marketplace_ultrasmall.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer3.Add( self.m_bitmap33, 0, wx.ALL, 5 )
m_radioBox1Choices = [ u"I'm selling - You are offering an asset for sale", u"I want to find - You want to buy an asset", u"I want to trade - You want to exchange an asset for another asset" ]
self.m_radioBox1 = wx.RadioBox( self.m_assistantPanel, wx.ID_ANY, u"Ad Type :", wx.DefaultPosition, wx.DefaultSize, m_radioBox1Choices, 1, wx.RA_SPECIFY_COLS )
self.m_radioBox1.SetSelection( 0 )
bSizer3.Add( self.m_radioBox1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer55.Add( bSizer3, 0, wx.ALIGN_CENTER_HORIZONTAL, 5 )
self.m_staticline2 = wx.StaticLine( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer55.Add( self.m_staticline2, 0, wx.EXPAND |wx.ALL, 5 )
bSizer4 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap2 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/reflog.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer4.Add( self.m_bitmap2, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText2 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"Title :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText2.Wrap( -1 )
self.m_staticText2.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer4.Add( self.m_staticText2, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdTitle = wx.TextCtrl( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer4.Add( self.m_AdTitle, 1, wx.ALL|wx.EXPAND, 5 )
bSizer55.Add( bSizer4, 0, wx.EXPAND, 5 )
bSizer411 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap211 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/browser.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer411.Add( self.m_bitmap211, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText211 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"Website / Gallery / IPFS Page : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText211.Wrap( -1 )
self.m_staticText211.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer411.Add( self.m_staticText211, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdLink = wx.TextCtrl( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer411.Add( self.m_AdLink, 1, wx.ALL|wx.EXPAND, 5 )
bSizer55.Add( bSizer411, 0, wx.EXPAND, 5 )
self.m_staticline3 = wx.StaticLine( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer55.Add( self.m_staticline3, 0, wx.EXPAND |wx.ALL, 5 )
bSizer13 = wx.BoxSizer( wx.HORIZONTAL )
bSizer14 = wx.BoxSizer( wx.VERTICAL )
bSizer16 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap7 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/changelog_obj.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer16.Add( self.m_bitmap7, 0, wx.ALL, 5 )
self.m_staticText8 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"Description :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText8.Wrap( -1 )
self.m_staticText8.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer16.Add( self.m_staticText8, 0, wx.ALL, 5 )
bSizer14.Add( bSizer16, 0, wx.EXPAND, 5 )
self.m_AdDescription = wx.TextCtrl( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_MULTILINE )
self.m_AdDescription.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
self.m_AdDescription.SetMinSize( wx.Size( -1,100 ) )
bSizer14.Add( self.m_AdDescription, 1, wx.ALL|wx.EXPAND, 5 )
bSizer13.Add( bSizer14, 1, wx.EXPAND, 5 )
bSizer141 = wx.BoxSizer( wx.VERTICAL )
bSizer161 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap71 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/changelog_obj.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer161.Add( self.m_bitmap71, 0, wx.ALL, 5 )
self.m_staticText81 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"Tags / Categories / Keywords :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText81.Wrap( -1 )
self.m_staticText81.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer161.Add( self.m_staticText81, 0, wx.ALL, 5 )
bSizer141.Add( bSizer161, 0, wx.EXPAND, 5 )
self.m_AdKeyword = wx.TextCtrl( self.m_assistantPanel, wx.ID_ANY, u"Asset", wx.DefaultPosition, wx.DefaultSize, wx.TE_MULTILINE )
self.m_AdKeyword.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
self.m_AdKeyword.SetMinSize( wx.Size( -1,100 ) )
bSizer141.Add( self.m_AdKeyword, 1, wx.ALL|wx.EXPAND, 5 )
bSizer13.Add( bSizer141, 1, wx.EXPAND, 5 )
bSizer55.Add( bSizer13, 1, wx.EXPAND, 5 )
self.m_staticline31 = wx.StaticLine( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer55.Add( self.m_staticline31, 0, wx.EXPAND |wx.ALL, 5 )
bSizer121 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap20 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer121.Add( self.m_bitmap20, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText71 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, u"P2P Sell Method :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText71.Wrap( -1 )
self.m_staticText71.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer121.Add( self.m_staticText71, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_txMethodChoices = [ u"Atomic Swap", u"P2SH", u"Raw Text" ]
self.m_txMethod = wx.Choice( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_txMethodChoices, 0 )
self.m_txMethod.SetSelection( 0 )
bSizer121.Add( self.m_txMethod, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer118 = wx.BoxSizer( wx.VERTICAL )
self.m_staticText56 = wx.StaticText( self.m_assistantPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText56.Wrap( -1 )
bSizer118.Add( self.m_staticText56, 0, wx.ALL, 5 )
bSizer121.Add( bSizer118, 1, wx.EXPAND, 5 )
bSizer117 = wx.BoxSizer( wx.VERTICAL )
self.m_bitmap38 = wx.StaticBitmap( self.m_assistantPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer117.Add( self.m_bitmap38, 0, wx.ALL, 5 )
bSizer121.Add( bSizer117, 0, 0, 5 )
bSizer55.Add( bSizer121, 0, wx.EXPAND, 5 )
self.m_atomicswapPanel = wx.Panel( self.m_assistantPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer56 = wx.BoxSizer( wx.VERTICAL )
bSizer41 = wx.BoxSizer( wx.HORIZONTAL )
bSizer11 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap21 = wx.StaticBitmap( self.m_atomicswapPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer11.Add( self.m_bitmap21, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText21 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Select an Asset : ", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText21.Wrap( -1 )
self.m_staticText21.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer11.Add( self.m_staticText21, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AdAssetChoiceChoices = []
self.m_AdAssetChoice = wx.Choice( self.m_atomicswapPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AdAssetChoiceChoices, 0 )
self.m_AdAssetChoice.SetSelection( 0 )
bSizer11.Add( self.m_AdAssetChoice, 1, wx.ALL|wx.EXPAND, 5 )
bSizer41.Add( bSizer11, 2, wx.EXPAND, 5 )
bSizer12 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText7 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Quantity :", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText7.Wrap( -1 )
self.m_staticText7.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer12.Add( self.m_staticText7, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdAssetQt = wx.TextCtrl( self.m_atomicswapPanel, wx.ID_ANY, u"1", wx.DefaultPosition, wx.DefaultSize, wx.TE_RIGHT )
bSizer12.Add( self.m_AdAssetQt, 1, wx.ALL, 5 )
bSizer41.Add( bSizer12, 1, wx.EXPAND, 5 )
bSizer56.Add( bSizer41, 0, wx.EXPAND, 5 )
bSizer412 = wx.BoxSizer( wx.HORIZONTAL )
bSizer111 = wx.BoxSizer( wx.HORIZONTAL )
self.m_atomicTransactionUserFeedback = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Click on preview to generate the atomic swap transaction", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_atomicTransactionUserFeedback.Wrap( -1 )
self.m_atomicTransactionUserFeedback.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_ITALIC, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer111.Add( self.m_atomicTransactionUserFeedback, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer412.Add( bSizer111, 2, wx.EXPAND, 5 )
bSizer1212 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText712 = wx.StaticText( self.m_atomicswapPanel, wx.ID_ANY, u"Price :", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_RIGHT )
self.m_staticText712.Wrap( -1 )
self.m_staticText712.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer1212.Add( self.m_staticText712, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_AdAssetPrice = wx.TextCtrl( self.m_atomicswapPanel, wx.ID_ANY, u"200", wx.DefaultPosition, wx.DefaultSize, wx.TE_RIGHT )
bSizer1212.Add( self.m_AdAssetPrice, 1, wx.ALL, 5 )
bSizer412.Add( bSizer1212, 1, wx.EXPAND, 5 )
bSizer56.Add( bSizer412, 0, wx.EXPAND, 5 )
self.m_atomicswapPanel.SetSizer( bSizer56 )
self.m_atomicswapPanel.Layout()
bSizer56.Fit( self.m_atomicswapPanel )
bSizer55.Add( self.m_atomicswapPanel, 0, wx.EXPAND |wx.ALL, 5 )
self.m_assistantPanel.SetSizer( bSizer55 )
self.m_assistantPanel.Layout()
bSizer55.Fit( self.m_assistantPanel )
bSizer1.Add( self.m_assistantPanel, 1, wx.EXPAND |wx.ALL, 5 )
self.m_staticline3111 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer1.Add( self.m_staticline3111, 0, wx.EXPAND |wx.ALL, 5 )
bSizer4121 = wx.BoxSizer( wx.HORIZONTAL )
bSizer1111 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText2121 = wx.StaticText( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText2121.Wrap( -1 )
self.m_staticText2121.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer1111.Add( self.m_staticText2121, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer4121.Add( bSizer1111, 3, wx.EXPAND, 5 )
bSizer1211 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap121 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon2.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer1211.Add( self.m_bitmap121, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText711 = wx.StaticText( self, wx.ID_ANY, u"P2P Channel Asset :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText711.Wrap( -1 )
self.m_staticText711.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer1211.Add( self.m_staticText711, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_AdP2PChannelChoiceChoices = []
self.m_AdP2PChannelChoice = wx.Choice( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_AdP2PChannelChoiceChoices, 0 )
self.m_AdP2PChannelChoice.SetSelection( 0 )
bSizer1211.Add( self.m_AdP2PChannelChoice, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap16 = wx.StaticBitmap( self, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/help_contents.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer1211.Add( self.m_bitmap16, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer4121.Add( bSizer1211, 2, wx.EXPAND, 5 )
bSizer1.Add( bSizer4121, 0, wx.EXPAND, 5 )
self.m_staticline311 = wx.StaticLine( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LI_HORIZONTAL )
bSizer1.Add( self.m_staticline311, 0, wx.EXPAND |wx.ALL, 5 )
bSizer22 = wx.BoxSizer( wx.HORIZONTAL )
self.m_PreviewAdBt = wx.Button( self, wx.ID_ANY, u"Preview Ad", wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer22.Add( self.m_PreviewAdBt, 0, wx.ALL, 5 )
self.m_GeneraeteAdBt = wx.Button( self, wx.ID_ANY, u"Generate Ad", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_GeneraeteAdBt.Enable( False )
bSizer22.Add( self.m_GeneraeteAdBt, 0, wx.ALL, 5 )
bSizer1.Add( bSizer22, 0, wx.ALIGN_RIGHT, 5 )
self.SetSizer( bSizer1 )
self.Layout()
# Connect Events
self.m_toggleAssistant.Bind( wx.EVT_TOGGLEBUTTON, self.OnWizardButtonToggle )
self.m_radioBox1.Bind( wx.EVT_RADIOBOX, self.OnAdTypeChanged )
self.m_AdTitle.Bind( wx.EVT_TEXT, self.OnTitleChanged )
self.m_AdLink.Bind( wx.EVT_TEXT, self.OnLinkChanged )
self.m_AdDescription.Bind( wx.EVT_TEXT, self.OnDescriptionChanged )
self.m_AdKeyword.Bind( wx.EVT_TEXT, self.OnKeywordChanged )
self.m_txMethod.Bind( wx.EVT_CHOICE, self.OnTxMethodChanged )
self.m_AdAssetChoice.Bind( wx.EVT_CHOICE, self.OnAssetChanged )
self.m_AdAssetQt.Bind( wx.EVT_TEXT, self.OnQuantityChanged )
self.m_AdAssetPrice.Bind( wx.EVT_TEXT, self.OnPriceChanged )
self.m_AdP2PChannelChoice.Bind( wx.EVT_CHOICE, self.OnP2PChannelChanged )
self.m_PreviewAdBt.Bind( wx.EVT_BUTTON, self.OnPreviewAdButtonClick )
self.m_GeneraeteAdBt.Bind( wx.EVT_BUTTON, self.OnGenerateButtonClick )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnWizardButtonToggle( self, event ):
event.Skip()
def OnAdTypeChanged( self, event ):
event.Skip()
def OnTitleChanged( self, event ):
event.Skip()
def OnLinkChanged( self, event ):
event.Skip()
def OnDescriptionChanged( self, event ):
event.Skip()
def OnKeywordChanged( self, event ):
event.Skip()
def OnTxMethodChanged( self, event ):
event.Skip()
def OnAssetChanged( self, event ):
event.Skip()
def OnQuantityChanged( self, event ):
event.Skip()
def OnPriceChanged( self, event ):
event.Skip()
def OnP2PChannelChanged( self, event ):
event.Skip()
def OnPreviewAdButtonClick( self, event ):
event.Skip()
def OnGenerateButtonClick( self, event ):
event.Skip()
###########################################################################
## Class wxRavenP2PMarket_AdDetails
###########################################################################
class wxRavenP2PMarket_AdDetails ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 831,606 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer368 = wx.BoxSizer( wx.VERTICAL )
bSizer369 = wx.BoxSizer( wx.HORIZONTAL )
self.m_topPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer372 = wx.BoxSizer( wx.VERTICAL )
bSizer373 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText219 = wx.StaticText( self.m_topPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText219.Wrap( -1 )
bSizer373.Add( self.m_staticText219, 1, wx.ALL, 5 )
self.m_bitmap154 = wx.StaticBitmap( self.m_topPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer373.Add( self.m_bitmap154, 0, wx.ALL, 5 )
self.m_TitleText = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"Ad Title", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_CENTER_HORIZONTAL )
self.m_TitleText.Wrap( -1 )
self.m_TitleText.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer373.Add( self.m_TitleText, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText220 = wx.StaticText( self.m_topPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText220.Wrap( -1 )
bSizer373.Add( self.m_staticText220, 1, wx.ALL, 5 )
bSizer372.Add( bSizer373, 0, wx.EXPAND, 5 )
bSizer376 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap155 = wx.StaticBitmap( self.m_topPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/browser.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer376.Add( self.m_bitmap155, 0, wx.ALL, 5 )
self.m_staticText224 = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"Website :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText224.Wrap( -1 )
self.m_staticText224.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer376.Add( self.m_staticText224, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_websiteText = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"{no url}", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_websiteText.Wrap( -1 )
bSizer376.Add( self.m_websiteText, 1, wx.ALL, 5 )
bSizer372.Add( bSizer376, 0, wx.EXPAND, 5 )
bSizer3761 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap1551 = wx.StaticBitmap( self.m_topPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer3761.Add( self.m_bitmap1551, 0, wx.ALL, 5 )
self.m_staticText2241 = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"Asset :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText2241.Wrap( -1 )
self.m_staticText2241.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer3761.Add( self.m_staticText2241, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_assetText = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"{assetname}", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_assetText.Wrap( -1 )
bSizer3761.Add( self.m_assetText, 1, wx.ALL, 5 )
bSizer372.Add( bSizer3761, 0, wx.EXPAND, 5 )
bSizer374 = wx.BoxSizer( wx.VERTICAL )
self.m_DescriptionText = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"MyLabel", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_LEFT )
self.m_DescriptionText.Wrap( -1 )
bSizer374.Add( self.m_DescriptionText, 1, wx.ALL|wx.EXPAND, 5 )
bSizer372.Add( bSizer374, 1, wx.EXPAND, 5 )
bSizer375 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bpButton33 = wx.BitmapButton( self.m_topPanel, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_bpButton33.SetBitmap( wx.Bitmap( u"res/default_style/normal/buy_now.png", wx.BITMAP_TYPE_ANY ) )
bSizer375.Add( self.m_bpButton33, 0, wx.ALL, 5 )
bSizer372.Add( bSizer375, 0, wx.ALIGN_CENTER_HORIZONTAL, 5 )
self.m_topPanel.SetSizer( bSizer372 )
self.m_topPanel.Layout()
bSizer372.Fit( self.m_topPanel )
bSizer369.Add( self.m_topPanel, 1, wx.EXPAND |wx.ALL, 5 )
bSizer368.Add( bSizer369, 5, wx.EXPAND, 5 )
bSizer370 = wx.BoxSizer( wx.VERTICAL )
self.m_detailTabPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer371 = wx.BoxSizer( wx.VERTICAL )
self.m_auinotebook1 = wx.aui.AuiNotebook( self.m_detailTabPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.aui.AUI_NB_DEFAULT_STYLE )
bSizer371.Add( self.m_auinotebook1, 1, wx.EXPAND |wx.ALL, 5 )
self.m_detailTabPanel.SetSizer( bSizer371 )
self.m_detailTabPanel.Layout()
bSizer371.Fit( self.m_detailTabPanel )
bSizer370.Add( self.m_detailTabPanel, 1, wx.EXPAND |wx.ALL, 5 )
bSizer368.Add( bSizer370, 10, wx.EXPAND, 5 )
self.SetSizer( bSizer368 )
self.Layout()
# Connect Events
self.m_bpButton33.Bind( wx.EVT_BUTTON, self.OnOpenTxClicked )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnOpenTxClicked( self, event ):
event.Skip()
###########################################################################
## Class wxRavenP2PMarket_AdDetails_Splitter
###########################################################################
class wxRavenP2PMarket_AdDetails_Splitter ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 831,773 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer368 = wx.BoxSizer( wx.VERTICAL )
self.m_splitter1 = wx.SplitterWindow( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.SP_NOBORDER )
self.m_splitter1.SetSashGravity( 0 )
self.m_splitter1.Bind( wx.EVT_IDLE, self.m_splitter1OnIdle )
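        # wxFormBuilder idiom: the initial sash position is applied once on the
        # first idle event (see m_splitter1OnIdle below), which then unbinds itself.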
self.m_splitter1.SetMinimumPaneSize( 20 )
self.m_panel46 = wx.Panel( self.m_splitter1, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
self.m_panel46.SetBackgroundColour( wx.Colour( 255, 255, 255 ) )
bSizer369 = wx.BoxSizer( wx.HORIZONTAL )
self.m_topPanel = wx.Panel( self.m_panel46, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer372 = wx.BoxSizer( wx.VERTICAL )
bSizer373 = wx.BoxSizer( wx.HORIZONTAL )
self.m_staticText219 = wx.StaticText( self.m_topPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText219.Wrap( -1 )
bSizer373.Add( self.m_staticText219, 1, wx.ALL, 5 )
self.m_bitmap154 = wx.StaticBitmap( self.m_topPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer373.Add( self.m_bitmap154, 0, wx.ALL, 5 )
self.m_TitleText = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"Ad Title", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_CENTER_HORIZONTAL )
self.m_TitleText.Wrap( -1 )
self.m_TitleText.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer373.Add( self.m_TitleText, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText220 = wx.StaticText( self.m_topPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText220.Wrap( -1 )
bSizer373.Add( self.m_staticText220, 1, wx.ALL, 5 )
bSizer372.Add( bSizer373, 0, wx.EXPAND, 5 )
bSizer376 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap155 = wx.StaticBitmap( self.m_topPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/browser.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer376.Add( self.m_bitmap155, 0, wx.ALL, 5 )
self.m_staticText224 = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"Website :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText224.Wrap( -1 )
self.m_staticText224.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer376.Add( self.m_staticText224, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_websiteText = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"{no url}", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_websiteText.Wrap( -1 )
bSizer376.Add( self.m_websiteText, 1, wx.ALL, 5 )
bSizer372.Add( bSizer376, 0, wx.EXPAND, 5 )
bSizer3761 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap1551 = wx.StaticBitmap( self.m_topPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/asset.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer3761.Add( self.m_bitmap1551, 0, wx.ALL, 5 )
self.m_staticText2241 = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"Asset :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText2241.Wrap( -1 )
self.m_staticText2241.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer3761.Add( self.m_staticText2241, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_assetText = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"{assetname}", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_assetText.Wrap( -1 )
bSizer3761.Add( self.m_assetText, 1, wx.ALL, 5 )
bSizer372.Add( bSizer3761, 0, wx.EXPAND, 5 )
bSizer374 = wx.BoxSizer( wx.VERTICAL )
self.m_DescriptionText = wx.StaticText( self.m_topPanel, wx.ID_ANY, u"MyLabel", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_LEFT )
self.m_DescriptionText.Wrap( -1 )
bSizer374.Add( self.m_DescriptionText, 1, wx.ALL|wx.EXPAND, 5 )
bSizer372.Add( bSizer374, 1, wx.EXPAND, 5 )
bSizer375 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bpButton33 = wx.BitmapButton( self.m_topPanel, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_bpButton33.SetBitmap( wx.Bitmap( u"res/default_style/normal/buy_now.png", wx.BITMAP_TYPE_ANY ) )
bSizer375.Add( self.m_bpButton33, 0, wx.ALL, 5 )
bSizer372.Add( bSizer375, 0, wx.ALIGN_CENTER_HORIZONTAL, 5 )
self.m_topPanel.SetSizer( bSizer372 )
self.m_topPanel.Layout()
bSizer372.Fit( self.m_topPanel )
bSizer369.Add( self.m_topPanel, 1, wx.EXPAND |wx.ALL, 5 )
self.m_panel46.SetSizer( bSizer369 )
self.m_panel46.Layout()
bSizer369.Fit( self.m_panel46 )
self.m_panel47 = wx.Panel( self.m_splitter1, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
self.m_panel47.SetBackgroundColour( wx.Colour( 255, 255, 255 ) )
bSizer370 = wx.BoxSizer( wx.VERTICAL )
self.m_detailTabPanel = wx.Panel( self.m_panel47, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
self.m_detailTabPanel.SetBackgroundColour( wx.Colour( 255, 255, 255 ) )
bSizer371 = wx.BoxSizer( wx.VERTICAL )
self.m_auinotebook1 = wx.aui.AuiNotebook( self.m_detailTabPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.aui.AUI_NB_DEFAULT_STYLE )
bSizer371.Add( self.m_auinotebook1, 1, wx.EXPAND |wx.ALL, 5 )
self.m_detailTabPanel.SetSizer( bSizer371 )
self.m_detailTabPanel.Layout()
bSizer371.Fit( self.m_detailTabPanel )
bSizer370.Add( self.m_detailTabPanel, 1, wx.EXPAND |wx.ALL, 5 )
self.m_panel47.SetSizer( bSizer370 )
self.m_panel47.Layout()
bSizer370.Fit( self.m_panel47 )
self.m_splitter1.SplitHorizontally( self.m_panel46, self.m_panel47, 0 )
bSizer368.Add( self.m_splitter1, 1, wx.EXPAND, 5 )
self.SetSizer( bSizer368 )
self.Layout()
# Connect Events
self.m_bpButton33.Bind( wx.EVT_BUTTON, self.OnOpenTxClicked )
def __del__( self ):
pass
# Virtual event handlers, override them in your derived class
def OnOpenTxClicked( self, event ):
event.Skip()
def m_splitter1OnIdle( self, event ):
self.m_splitter1.SetSashPosition( 0 )
self.m_splitter1.Unbind( wx.EVT_IDLE )
###########################################################################
## Class wxRavenP2PMarket__RavencoreUTXOManager_TradesHistory_View
###########################################################################
class wxRavenP2PMarket__RavencoreUTXOManager_TradesHistory_View ( wx.Panel ):
def __init__( self, parent, id = wx.ID_ANY, pos = wx.DefaultPosition, size = wx.Size( 880,498 ), style = wx.TAB_TRAVERSAL, name = wx.EmptyString ):
wx.Panel.__init__ ( self, parent, id = id, pos = pos, size = size, style = style, name = name )
bSizer184 = wx.BoxSizer( wx.VERTICAL )
self.m_FilterPanel = wx.Panel( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
bSizer185 = wx.BoxSizer( wx.VERTICAL )
bSizer186 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap34 = wx.StaticBitmap( self.m_FilterPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/trade_history.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer186.Add( self.m_bitmap34, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_filterAddressChoices = [ u"ALL", u"SWAP CACHE", u"ADS CACHE" ]
self.m_filterAddress = wx.Choice( self.m_FilterPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_filterAddressChoices, 0 )
self.m_filterAddress.SetSelection( 0 )
bSizer186.Add( self.m_filterAddress, 3, wx.ALL, 5 )
self.m_bitmap86 = wx.StaticBitmap( self.m_FilterPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/tasks_tsk.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer186.Add( self.m_bitmap86, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
m_choiceStatusChoices = [ u"ALL", u"WAITING", u"COMPLETE", u"NOT FOUND" ]
self.m_choiceStatus = wx.Choice( self.m_FilterPanel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, m_choiceStatusChoices, 0 )
self.m_choiceStatus.SetSelection( 0 )
bSizer186.Add( self.m_choiceStatus, 0, wx.ALL, 5 )
self.m_staticText49 = wx.StaticText( self.m_FilterPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText49.Wrap( -1 )
bSizer186.Add( self.m_staticText49, 1, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_bitmap35 = wx.StaticBitmap( self.m_FilterPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/calendar_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer186.Add( self.m_bitmap35, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_startDCheck = wx.CheckBox( self.m_FilterPanel, wx.ID_ANY, u"From :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_startDCheck.SetValue(True)
bSizer186.Add( self.m_startDCheck, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_datePicker1 = wxRavenDatePicker( self.m_FilterPanel, wx.ID_ANY, wx.DefaultDateTime, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer186.Add( self.m_datePicker1, 0, wx.ALL, 5 )
self.m_stopDCheck = wx.CheckBox( self.m_FilterPanel, wx.ID_ANY, u"To :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_stopDCheck.SetValue(True)
bSizer186.Add( self.m_stopDCheck, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_datePicker2 = wxRavenDatePicker( self.m_FilterPanel, wx.ID_ANY, wx.DefaultDateTime, wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer186.Add( self.m_datePicker2, 0, wx.ALL, 5 )
self.m_refreshButton = wx.BitmapButton( self.m_FilterPanel, wx.ID_ANY, wx.NullBitmap, wx.DefaultPosition, wx.DefaultSize, wx.BU_AUTODRAW|0 )
self.m_refreshButton.SetBitmap( wx.Bitmap( u"res/default_style/normal/refresh.png", wx.BITMAP_TYPE_ANY ) )
bSizer186.Add( self.m_refreshButton, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer185.Add( bSizer186, 1, wx.EXPAND, 5 )
bSizer187 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap40 = wx.StaticBitmap( self.m_FilterPanel, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/filter_ps.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer187.Add( self.m_bitmap40, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_addressFilterText = wx.TextCtrl( self.m_FilterPanel, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_addressFilterText.SetMaxLength( 0 )
bSizer187.Add( self.m_addressFilterText, 3, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
bSizer185.Add( bSizer187, 1, wx.EXPAND, 5 )
self.m_FilterPanel.SetSizer( bSizer185 )
self.m_FilterPanel.Layout()
bSizer185.Fit( self.m_FilterPanel )
bSizer184.Add( self.m_FilterPanel, 0, wx.EXPAND|wx.ALL, 5 )
self.m_scrolledWindow2 = wx.ScrolledWindow( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.HSCROLL|wx.VSCROLL )
self.m_scrolledWindow2.SetScrollRate( 5, 5 )
bSizer188 = wx.BoxSizer( wx.VERTICAL )
self.m_listCtrl1 = wxRavenListCtrl( self.m_scrolledWindow2, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.LC_AUTOARRANGE|wx.LC_REPORT )
bSizer188.Add( self.m_listCtrl1, 1, wx.ALL|wx.EXPAND, 5 )
bSizer189 = wx.BoxSizer( wx.HORIZONTAL )
self.m_bitmap112 = wx.StaticBitmap( self.m_scrolledWindow2, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/table_total_in.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer189.Add( self.m_bitmap112, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText119 = wx.StaticText( self.m_scrolledWindow2, wx.ID_ANY, u"Count SELL (Period) :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText119.Wrap( -1 )
bSizer189.Add( self.m_staticText119, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_textTotalIn = wx.TextCtrl( self.m_scrolledWindow2, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
self.m_textTotalIn.SetMaxLength( 0 )
self.m_textTotalIn.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer189.Add( self.m_textTotalIn, 1, wx.ALL, 5 )
self.m_bitmap1121 = wx.StaticBitmap( self.m_scrolledWindow2, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/table_total_out.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer189.Add( self.m_bitmap1121, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText1191 = wx.StaticText( self.m_scrolledWindow2, wx.ID_ANY, u"Count BUY (Period) :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText1191.Wrap( -1 )
bSizer189.Add( self.m_staticText1191, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_textTotalOut = wx.TextCtrl( self.m_scrolledWindow2, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
self.m_textTotalOut.SetMaxLength( 0 )
self.m_textTotalOut.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer189.Add( self.m_textTotalOut, 1, wx.ALL, 5 )
self.m_bitmap3412 = wx.StaticBitmap( self.m_scrolledWindow2, wx.ID_ANY, wx.Bitmap( u"res/default_style/normal/p2p_icon.png", wx.BITMAP_TYPE_ANY ), wx.DefaultPosition, wx.DefaultSize, 0 )
bSizer189.Add( self.m_bitmap3412, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_staticText6912 = wx.StaticText( self.m_scrolledWindow2, wx.ID_ANY, u"Trades :", wx.DefaultPosition, wx.DefaultSize, 0 )
self.m_staticText6912.Wrap( -1 )
self.m_staticText6912.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer189.Add( self.m_staticText6912, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
self.m_textFee = wx.TextCtrl( self.m_scrolledWindow2, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
self.m_textFee.SetMaxLength( 0 )
self.m_textFee.SetFont( wx.Font( wx.NORMAL_FONT.GetPointSize(), wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False, wx.EmptyString ) )
bSizer189.Add( self.m_textFee, 1, wx.ALL, 5 )
bSizer188.Add( bSizer189, 0, wx.EXPAND, 5 )
self.m_scrolledWindow2.SetSizer( bSizer188 )
self.m_scrolledWindow2.Layout()
bSizer188.Fit( self.m_scrolledWindow2 )
bSizer184.Add( self.m_scrolledWindow2, 1, wx.EXPAND|wx.ALL, 5 )
self.SetSizer( bSizer184 )
self.Layout()
def __del__( self ):
pass
| 42.973433 | 237 | 0.732469 | 20,478 | 143,961 | 4.97739 | 0.041606 | 0.06745 | 0.028363 | 0.114199 | 0.865109 | 0.85053 | 0.821755 | 0.810531 | 0.791115 | 0.770924 | 0 | 0.048533 | 0.122492 | 143,961 | 3,349 | 238 | 42.986265 | 0.758318 | 0.012858 | 0 | 0.628981 | 1 | 0 | 0.049837 | 0.03174 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058386 | false | 0.010085 | 0.006369 | 0 | 0.074841 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e1c13eb1bda1429a0cdaee561df5f74534fb9f95 | 151 | py | Python | katas/kyu_8/counting_sheep.py | the-zebulan/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 40 | 2016-03-09T12:26:20.000Z | 2022-03-23T08:44:51.000Z | katas/kyu_8/counting_sheep.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | null | null | null | katas/kyu_8/counting_sheep.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 36 | 2016-11-07T19:59:58.000Z | 2022-03-31T11:18:27.000Z | # def count_sheeps(sheeps):
# return sum(sheep for sheep in sheeps if sheep is not None)
def count_sheeps(sheeps):
return sheeps.count(True)
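
# Example (sketch, not part of the original kata file): present sheep are True,
# absent entries are None/False:
#     count_sheeps([True, None, False, True])  # -> 2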
| 21.571429 | 64 | 0.721854 | 24 | 151 | 4.458333 | 0.541667 | 0.149533 | 0.261682 | 0.373832 | 0.485981 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192053 | 151 | 6 | 65 | 25.166667 | 0.877049 | 0.582781 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
831b254e778b24c15f677f6578c79e29c564785a | 6,222 | py | Python | raiden_contracts/tests/test_token_network.py | pcppcp/raiden-contracts | 5141ff2352b0f1f02e181c2da28761a0a8addb13 | [
"MIT"
] | null | null | null | raiden_contracts/tests/test_token_network.py | pcppcp/raiden-contracts | 5141ff2352b0f1f02e181c2da28761a0a8addb13 | [
"MIT"
] | null | null | null | raiden_contracts/tests/test_token_network.py | pcppcp/raiden-contracts | 5141ff2352b0f1f02e181c2da28761a0a8addb13 | [
"MIT"
] | null | null | null | import pytest
from eth_tester.exceptions import TransactionFailed
from .fixtures.config import raiden_contracts_version, EMPTY_ADDRESS, FAKE_ADDRESS
from raiden_contracts.constants import (
TEST_SETTLE_TIMEOUT_MIN,
TEST_SETTLE_TIMEOUT_MAX,
)
def test_version(token_network):
assert token_network.functions.contract_version().call()[:2] == raiden_contracts_version[:2]
def test_constructor_call(
web3,
get_token_network,
custom_token,
secret_registry_contract,
get_accounts,
):
A = get_accounts(1)[0]
chain_id = int(web3.version.network)
settle_min = TEST_SETTLE_TIMEOUT_MIN
settle_max = TEST_SETTLE_TIMEOUT_MAX
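    # Constructor argument order used in all calls below:
    # (token_address, secret_registry_address, chain_id, settle_timeout_min, settle_timeout_max)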
with pytest.raises(TypeError):
get_token_network([])
with pytest.raises(TypeError):
get_token_network([3, secret_registry_contract.address, chain_id, settle_min, settle_max])
with pytest.raises(TypeError):
get_token_network([0, secret_registry_contract.address, chain_id, settle_min, settle_max])
with pytest.raises(TypeError):
get_token_network(['', secret_registry_contract.address, chain_id, settle_min, settle_max])
with pytest.raises(TypeError):
get_token_network([
FAKE_ADDRESS,
secret_registry_contract.address,
chain_id,
settle_min,
settle_max,
])
with pytest.raises(TypeError):
get_token_network([custom_token.address, 3, chain_id, settle_min, settle_max])
with pytest.raises(TypeError):
get_token_network([custom_token.address, 0, chain_id, settle_min, settle_max])
with pytest.raises(TypeError):
get_token_network([custom_token.address, '', chain_id, settle_min, settle_max])
with pytest.raises(TypeError):
get_token_network([custom_token.address, FAKE_ADDRESS, chain_id, settle_min, settle_max])
with pytest.raises(TypeError):
get_token_network([
custom_token.address,
secret_registry_contract.address,
'',
settle_min,
settle_max,
])
with pytest.raises(TypeError):
get_token_network([
custom_token.address,
secret_registry_contract.address,
-3,
settle_min,
settle_max,
])
with pytest.raises(TypeError):
get_token_network([
custom_token.address,
secret_registry_contract.address,
chain_id,
'',
settle_max,
])
with pytest.raises(TypeError):
get_token_network([
custom_token.address,
secret_registry_contract.address,
chain_id,
-3,
settle_max,
])
with pytest.raises(TypeError):
get_token_network([
custom_token.address,
secret_registry_contract.address,
chain_id,
settle_min,
'',
])
with pytest.raises(TypeError):
get_token_network([
custom_token.address,
secret_registry_contract.address,
chain_id,
settle_min,
-3,
])
with pytest.raises(TransactionFailed):
get_token_network([
EMPTY_ADDRESS,
secret_registry_contract.address,
chain_id,
TEST_SETTLE_TIMEOUT_MIN,
TEST_SETTLE_TIMEOUT_MAX,
])
with pytest.raises(TransactionFailed):
get_token_network([
A,
secret_registry_contract.address,
chain_id,
TEST_SETTLE_TIMEOUT_MIN,
TEST_SETTLE_TIMEOUT_MAX,
])
with pytest.raises(TransactionFailed):
get_token_network([
secret_registry_contract.address,
secret_registry_contract.address,
chain_id,
TEST_SETTLE_TIMEOUT_MIN,
TEST_SETTLE_TIMEOUT_MAX,
])
with pytest.raises(TransactionFailed):
get_token_network([
custom_token.address,
EMPTY_ADDRESS,
chain_id,
TEST_SETTLE_TIMEOUT_MIN,
TEST_SETTLE_TIMEOUT_MAX,
])
with pytest.raises(TransactionFailed):
get_token_network([
custom_token.address,
A,
chain_id,
TEST_SETTLE_TIMEOUT_MIN,
TEST_SETTLE_TIMEOUT_MAX,
])
with pytest.raises(TransactionFailed):
get_token_network([
custom_token.address,
secret_registry_contract.address,
0,
TEST_SETTLE_TIMEOUT_MIN,
TEST_SETTLE_TIMEOUT_MAX,
])
with pytest.raises(TransactionFailed):
get_token_network([
custom_token.address,
secret_registry_contract.address,
chain_id,
TEST_SETTLE_TIMEOUT_MAX,
TEST_SETTLE_TIMEOUT_MIN,
])
with pytest.raises(TransactionFailed):
get_token_network([
custom_token.address,
secret_registry_contract.address,
chain_id,
0,
TEST_SETTLE_TIMEOUT_MIN,
])
with pytest.raises(TransactionFailed):
get_token_network([
custom_token.address,
secret_registry_contract.address,
chain_id,
TEST_SETTLE_TIMEOUT_MIN,
0,
])
get_token_network([
custom_token.address,
secret_registry_contract.address,
chain_id,
TEST_SETTLE_TIMEOUT_MIN,
TEST_SETTLE_TIMEOUT_MAX,
])
def test_constructor_not_registered(
custom_token,
secret_registry_contract,
token_network_registry_contract,
token_network_external,
):
token_network = token_network_external
assert token_network.functions.token().call() == custom_token.address
assert token_network.functions.secret_registry().call() == secret_registry_contract.address
assert (token_network.functions.chain_id().call()
== token_network_registry_contract.functions.chain_id().call())
assert token_network_registry_contract.functions.token_to_token_networks(
custom_token.address,
).call() == EMPTY_ADDRESS
| 30.955224 | 99 | 0.626808 | 640 | 6,222 | 5.659375 | 0.079688 | 0.122584 | 0.107675 | 0.160133 | 0.814743 | 0.752623 | 0.752623 | 0.717559 | 0.705687 | 0.703755 | 0 | 0.003667 | 0.298779 | 6,222 | 200 | 100 | 31.11 | 0.826496 | 0 | 0 | 0.816216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 1 | 0.016216 | false | 0 | 0.021622 | 0 | 0.037838 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
834815765cd67ccd3449580013c836025144cc15 | 2,482 | py | Python | tests/test_skipif.py | nschloe/code_extract | 27c3cc9707c5bb5a2b58db703e8440e8dafaae2e | [
"MIT"
] | 7 | 2018-05-06T07:35:24.000Z | 2020-01-26T12:35:42.000Z | tests/test_skipif.py | nschloe/excode | 27c3cc9707c5bb5a2b58db703e8440e8dafaae2e | [
"MIT"
] | 1 | 2017-05-29T16:42:38.000Z | 2017-05-29T16:42:38.000Z | tests/test_skipif.py | nschloe/code_extract | 27c3cc9707c5bb5a2b58db703e8440e8dafaae2e | [
"MIT"
] | 3 | 2018-04-24T23:37:19.000Z | 2020-05-01T14:29:44.000Z | def test_skip(testdir):
string = """
Lorem ipsum
<!--pytest.mark.skip-->
```python
print(1 + 3)
```
"""
testdir.makefile(".md", string)
result = testdir.runpytest("--codeblocks")
result.assert_outcomes(skipped=1)
def test_skip_expected_output(testdir):
string = """
Lorem ipsum
<!--pytest.mark.skip-->
```python
print(1 + 3)
```
<!--pytest-codeblocks:expected-output-->
```
25abc
```
"""
testdir.makefile(".md", string)
result = testdir.runpytest("--codeblocks")
result.assert_outcomes(skipped=1)
def test_skipif(testdir):
string = """
Lorem ipsum
<!--pytest.mark.skipif(1 < 3, reason="")-->
```python
print(1 + 3)
```
"""
testdir.makefile(".md", string)
result = testdir.runpytest("--codeblocks")
result.assert_outcomes(skipped=1)
def test_skipif2(testdir):
string = """
Lorem ipsum
<!--pytest.mark.skipif(1 > 3, reason="")-->
```python
print(1 + 3)
```
"""
testdir.makefile(".md", string)
result = testdir.runpytest("--codeblocks")
result.assert_outcomes(passed=1)
def test_skipif_expected_output(testdir):
string = """
Lorem ipsum
<!--pytest.mark.skipif(1 < 3, reason="")-->
```python
print(1 + 3)
```
<!--pytest-codeblocks:expected-output-->
```
25abc
```
"""
testdir.makefile(".md", string)
result = testdir.runpytest("--codeblocks")
result.assert_outcomes(skipped=1)
def test_skipif_expected_output2(testdir):
string = """
Lorem ipsum
<!--pytest.mark.skipif(1 > 3, reason="")-->
```python
print(1 + 3)
```
<!--pytest-codeblocks:expected-output-->
```
4
```
"""
testdir.makefile(".md", string)
result = testdir.runpytest("--codeblocks")
result.assert_outcomes(passed=1)
def test_importorskip(testdir):
string = """
Lorem ipsum
<!--pytest-codeblocks:importorskip(some_nonexistent_module)-->
```python
print(1 + 3)
```
"""
testdir.makefile(".md", string)
result = testdir.runpytest("--codeblocks")
result.assert_outcomes(skipped=1)
def test_importorskip2(testdir):
string = """
Lorem ipsum
<!--pytest-codeblocks:importorskip(sys)-->
```python
print(1 + 3)
```
"""
testdir.makefile(".md", string)
result = testdir.runpytest("--codeblocks")
result.assert_outcomes(passed=1)
| 17.728571 | 66 | 0.578163 | 253 | 2,482 | 5.577075 | 0.142292 | 0.017009 | 0.102055 | 0.130404 | 0.944011 | 0.934089 | 0.934089 | 0.8618 | 0.841956 | 0.841956 | 0 | 0.021153 | 0.238114 | 2,482 | 139 | 67 | 17.856115 | 0.725013 | 0 | 0 | 0.89 | 0 | 0 | 0.481869 | 0.147462 | 0 | 0 | 0 | 0 | 0.08 | 1 | 0.08 | false | 0.03 | 0.04 | 0 | 0.12 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3612040fc8d841bf371d7de07ba3f62beae3b465 | 186 | py | Python | coord2vec/models/baselines/__init__.py | jonzarecki/coord2vec | 4f267fdd87af7b3d3558ca834b88e9ab7c309c18 | [
"Apache-2.0"
] | null | null | null | coord2vec/models/baselines/__init__.py | jonzarecki/coord2vec | 4f267fdd87af7b3d3558ca834b88e9ab7c309c18 | [
"Apache-2.0"
] | null | null | null | coord2vec/models/baselines/__init__.py | jonzarecki/coord2vec | 4f267fdd87af7b3d3558ca834b88e9ab7c309c18 | [
"Apache-2.0"
] | null | null | null | from coord2vec.models.baselines.coord2vec_model import Coord2Vec
from coord2vec.models.baselines.coord2features import Coord2Features
from coord2vec.models.baselines.random import Random | 62 | 68 | 0.892473 | 22 | 186 | 7.5 | 0.363636 | 0.236364 | 0.345455 | 0.509091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.05914 | 186 | 3 | 69 | 62 | 0.902857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
36321195f3045530d0996370adbfee04ca6c3363 | 10,456 | py | Python | cifar10-code/AdaX.py | switchablenorms/AdaX | e18f35f3d6ab99ad862f81d6ddf4d7dbc5f2f63d | [
"Apache-2.0"
] | 30 | 2020-04-22T02:16:26.000Z | 2021-08-12T07:04:48.000Z | cifar10-code/AdaX.py | switchablenorms/adax | e18f35f3d6ab99ad862f81d6ddf4d7dbc5f2f63d | [
"Apache-2.0"
] | 1 | 2020-06-21T13:33:08.000Z | 2020-06-21T13:33:08.000Z | cifar10-code/AdaX.py | switchablenorms/adax | e18f35f3d6ab99ad862f81d6ddf4d7dbc5f2f63d | [
"Apache-2.0"
] | 8 | 2020-05-08T07:25:24.000Z | 2021-11-10T14:09:36.000Z | import math
import torch
from torch.optim import Optimizer
import numpy as np
class AdaX(Optimizer):
r"""Implements AdaX algorithm.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
        lr (float, optional): learning rate (default: 1.5e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 1e-4))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-12)
weight_decay (float, optional): L2 penalty (default: 5e-4)
.. _Adam\: A Method for Stochastic Optimization:
https://arxiv.org/abs/1412.6980
.. _On the Convergence of Adam and Beyond:
https://openreview.net/forum?id=ryQu7f-RZ
"""
def __init__(self, params, lr=1.5e-3, betas=(0.9, 1e-4), eps=1e-12,
weight_decay=5e-4):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay)
super(AdaX, self).__init__(params, defaults)
def __setstate__(self, state):
super(AdaX, self).__setstate__(state)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('AdaX does not support sparse gradients, please consider SparseAdam instead')
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
if group['weight_decay'] != 0:
grad.add_(group['weight_decay'], p.data)
t = state['step']
                # First moment: standard exponential moving average of the gradient.
                # Second moment (AdaX): accumulates with factor (1 + beta2) instead of
                # decaying, giving the exponential long-term memory of AdaX.
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(1 + beta2).addcmul_(beta2, grad, grad)
denom = exp_avg_sq.sqrt().add_(group['eps'])
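                # Bias correction for the growing second moment:
                # (1 + beta2)^t - 1 plays the role of Adam's 1 - beta2^t.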
bias_correction2 = ((1 + beta2) ** state['step'] - 1)
step_size = group['lr'] * math.sqrt(bias_correction2)
# step_size = group['lr']
p.data.addcdiv_(-step_size, exp_avg, denom)
return loss
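
# Minimal usage sketch for AdaX (illustrative only; `model`, `criterion` and
# `loader` are assumed here and are not defined in this file):
#
#     optimizer = AdaX(model.parameters(), lr=1.5e-3, betas=(0.9, 1e-4),
#                      weight_decay=5e-4)
#     for inputs, targets in loader:
#         optimizer.zero_grad()
#         loss = criterion(model(inputs), targets)
#         loss.backward()
#         optimizer.step()
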
class AdaXW(Optimizer):
r"""Implements Adam algorithm.
It has been proposed in `AdaX: Adaptive Gradient Descent with Exponential Long Term Memory`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
        lr (float, optional): learning rate (default: 5e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 1e-4))
eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-12)
        weight_decay (float, optional): weight decay (default: 5e-2)
.. _Adam\: A Method for Stochastic Optimization:
https://arxiv.org/abs/1412.6980
.. _On the Convergence of Adam and Beyond:
https://openreview.net/forum?id=ryQu7f-RZ
"""
def __init__(self, params, lr=0.005, betas=(0.9, 1e-4), eps=1e-12,
weight_decay=5e-2):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay)
super(AdaXW, self).__init__(params, defaults)
def __setstate__(self, state):
super(AdaXW, self).__setstate__(state)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
beta1, beta2 = group['betas']
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('AdaX does not support sparse gradients, please consider SparseAdam instead')
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
state['step'] += 1
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(1 + beta2).addcmul_(beta2, grad, grad)
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction2 = ((1 + beta2) ** state['step'] - 1)
step_size = group['lr'] * math.sqrt(bias_correction2)
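                # Decoupled (AdamW-style) weight decay: shrink the weights by
                # lr * weight_decay directly, instead of adding the decay term
                # to the gradient as in the AdaX class above.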
p.data.add_(-torch.mul(p.data, group['lr'] * group['weight_decay'])).addcdiv_(-step_size, exp_avg,
denom)
return loss
class DCAdaXW(Optimizer):
r"""Implements Adam algorithm.
It has been proposed in `AdaX: Adaptive Gradient Descent with Exponential Long Term Memory`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
        lr (float, optional): learning rate (default: 5e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 1e-4))
eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-12)
        weight_decay (float, optional): weight decay (default: 5e-2)
.. _Adam\: A Method for Stochastic Optimization:
https://arxiv.org/abs/1412.6980
.. _On the Convergence of Adam and Beyond:
https://openreview.net/forum?id=ryQu7f-RZ
"""
def __init__(self, params, lr=0.005, betas=(0.9, 1e-4), eps=1e-12,
weight_decay=5e-2):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay)
super(DCAdaXW, self).__init__(params, defaults)
def __setstate__(self, state):
super(DCAdaXW, self).__setstate__(state)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
beta1, beta2 = group['betas']
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('AdaX does not support sparse gradients, please consider SparseAdam instead')
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
state['step'] += 1
exp_avg_sq.mul_(1 + beta2).addcmul_(beta2, grad, grad)
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction2 = ((1 + beta2) ** state['step'] - 1)
step_size = group['lr'] * math.sqrt(bias_correction2)
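                # DC variant: the first moment averages the already-normalized
                # gradient grad / denom, so the parameter step below is simply
                # -step_size * exp_avg (plus decoupled weight decay).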
exp_avg.mul_(beta1).add_(1 - beta1, torch.div(grad, denom))
p.data.add_(-torch.mul(p.data, group['lr'] * group['weight_decay'])).add_(-step_size, exp_avg)
return loss
| 38.021818 | 116 | 0.5614 | 1,262 | 10,456 | 4.522187 | 0.138669 | 0.03154 | 0.021027 | 0.014719 | 0.929035 | 0.929035 | 0.929035 | 0.925004 | 0.925004 | 0.88593 | 0 | 0.028616 | 0.33158 | 10,456 | 274 | 117 | 38.160584 | 0.787953 | 0.303271 | 0 | 0.810219 | 0 | 0 | 0.120029 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065693 | false | 0 | 0.029197 | 0 | 0.138686 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3659c5429bb186764d87982bed35116a2acd43d3 | 3,487 | py | Python | grid_LSTM.py | YuntianChen/TgDLF | cb61b9ca71b1b6db2ad20d5854ca2561e9bbfbb1 | [
"MIT"
] | 10 | 2020-12-26T12:30:43.000Z | 2022-01-08T01:56:23.000Z | grid_LSTM.py | YuntianChen/TgDLF | cb61b9ca71b1b6db2ad20d5854ca2561e9bbfbb1 | [
"MIT"
] | null | null | null | grid_LSTM.py | YuntianChen/TgDLF | cb61b9ca71b1b6db2ad20d5854ca2561e9bbfbb1 | [
"MIT"
] | 11 | 2020-11-26T08:31:56.000Z | 2022-01-08T15:12:00.000Z | # -*- coding: utf-8 -*-
"""
Created on Thu May 24 21:34:10 2018
@author: Yuntian Chen
"""
import torch as t
import torch.nn as nn
from torch.autograd import Variable
from grid_configuration import config
import torch.nn.functional as F
class netLSTM(nn.Module):  # used with the predict function: because of out = out[:, -config.predict_len:, :], it outputs the prediction for one segment (days) of data
def __init__(self):
super(netLSTM, self).__init__()
self.lstm = nn.LSTM(config.input_dim, config.hid_dim,
config.num_layer, batch_first=True, dropout=config.drop_out)
        # fully connected layers mapping to the predicted curve
self.fc2 = nn.Linear(config.hid_dim, int(config.hid_dim/2))
self.fc3 = nn.Linear(int(config.hid_dim/2), config.output_dim)
#self.fc4 = nn.Linear(int(config.hid_dim/2), int(config.hid_dim/2))
self.bn = nn.BatchNorm1d(int(config.hid_dim / 2))
def forward(self, x, hs=None, use_gpu=config.use_gpu, full_output=False):
batch_size = x.size(0)
        # batch_size = config.batch_size cannot be used here: from the second
        # epoch onward, the batch_size of the data loaded by the dataloader
        # becomes 2, so with config.batch_size the dimensions of hs would not
        # match the input x.
if hs is None:
h = Variable(t.zeros(config.num_layer, batch_size, config.hid_dim))
c = Variable(t.zeros(config.num_layer, batch_size, config.hid_dim))
hs = (h, c)
if use_gpu:
hs = (hs[0].cuda(), hs[1].cuda())
        out, hs_0 = self.lstm(x, hs)  # input: batch_size * train_len * input_dim; output: batch_size * train_len * hid_dim
if not full_output:
out = out[:, -config.predict_len:, :]
out = out.contiguous()
        out = out.view(-1, config.hid_dim)  # equivalent to reshaping into a (batch_size * train_len) x hid_dim 2-D matrix
# normal net
out = F.relu(self.bn(self.fc2(out)))
#out = F.relu(self.fc4(out))
out = self.fc3(out)
return out, hs_0
class netLSTM_full(nn.Module):  # used with the predict_full function: outputs the result for the full sequence directly
def __init__(self):
super(netLSTM_full, self).__init__()
self.lstm = nn.LSTM(config.input_dim, config.hid_dim,
config.num_layer, batch_first=True, dropout=config.drop_out)
        # fully connected layers mapping to the predicted curve
self.fc2 = nn.Linear(config.hid_dim, int(config.hid_dim/2))
self.fc3 = nn.Linear(int(config.hid_dim/2), config.output_dim)
# self.fc4 = nn.Linear(int(config.hid_dim/2), int(config.hid_dim/2))
self.bn = nn.BatchNorm1d(int(config.hid_dim / 2))
def forward(self, x, hs=None, use_gpu=config.use_gpu):
batch_size = x.size(0)
        # batch_size = config.batch_size cannot be used here: from the second
        # epoch onward, the batch_size of the data loaded by the dataloader
        # becomes 2, so with config.batch_size the dimensions of hs would not
        # match the input x.
if hs is None:
h = Variable(t.zeros(config.num_layer, batch_size, config.hid_dim))
c = Variable(t.zeros(config.num_layer, batch_size, config.hid_dim))
hs = (h, c)
if use_gpu:
hs = (hs[0].cuda(), hs[1].cuda())
        out, hs_0 = self.lstm(x, hs)  # input: batch_size * train_len * input_dim; output: batch_size * train_len * hid_dim
# out = out[:, -24:, :]
out = out.contiguous()
        out = out.view(-1, config.hid_dim)  # equivalent to reshaping into a (batch_size * train_len) x hid_dim 2-D matrix
# normal net
out = F.relu(self.bn(self.fc2(out)))
# out = F.relu(self.fc4(out))
out = self.fc3(out)
return out, hs_0
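
# Minimal usage sketch (illustrative only; the sequence length 24 is an
# assumption here, real dimensions come from grid_configuration.config):
#
#     net = netLSTM_full()
#     x_demo = t.randn(8, 24, config.input_dim)
#     out, hs = net(x_demo, use_gpu=False)  # out: (8 * 24, config.output_dim)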
| 41.511905 | 114 | 0.599656 | 491 | 3,487 | 4.05499 | 0.209776 | 0.066298 | 0.120542 | 0.075339 | 0.819689 | 0.796585 | 0.796585 | 0.796585 | 0.796585 | 0.796585 | 0 | 0.020858 | 0.271293 | 3,487 | 83 | 115 | 42.012048 | 0.762692 | 0.264697 | 0 | 0.745098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078431 | false | 0 | 0.098039 | 0 | 0.254902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
36a1f5108042cbad23e00a168f8161a645d03772 | 32,876 | py | Python | models/fconv_encoder.py | SIMEXP/deepmotion | 7b8f4e5a7fce007dfca6bf29c02ea223891d28fd | [
"MIT"
] | 5 | 2017-04-02T12:39:54.000Z | 2020-06-24T01:12:24.000Z | models/fconv_encoder.py | SIMEXP/deepmotion | 7b8f4e5a7fce007dfca6bf29c02ea223891d28fd | [
"MIT"
] | null | null | null | models/fconv_encoder.py | SIMEXP/deepmotion | 7b8f4e5a7fce007dfca6bf29c02ea223891d28fd | [
"MIT"
] | 6 | 2016-08-06T10:50:50.000Z | 2019-07-04T06:20:00.000Z | import tensorflow as tf
import numpy as np
from model_util import *
def inference_fconv_m2i(addmotion=False, alpha=1.,input_shape=[None, 22,22,10,1],
input_shape_m=[None, 22,22,10,3],
n_filters=[1, 32, 32, 32],
filter_sizes=[3, 2, 3, 2],
corruption=False):
"""Build the fMRI model.
Args:
images: Images.
"""
# input to the network
x = tf.placeholder(
tf.float32, input_shape, name='x')
m = tf.placeholder(
tf.float32, input_shape_m, name='m')
t = tf.placeholder(
tf.float32, input_shape, name='t')
keep_prob = tf.placeholder(tf.float32, name='keep_prob') #dropout (keep probability)
encoder_i = []
encoder_m = []
encoder_main = []
shapes_main = []
shapes_i = []
shapes_m = []
#keep_prob=1.
### BRANCH 3d images
'''
with tf.variable_scope('img_conv1_1') as scope:
shapes_i.append(x.get_shape().as_list())
nfeaturemap = 3
W = weight_variable([2, 2, 2, 1, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(x, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
'''
current_input = x
input_nfeaturemap = 1
#current_input = tf.multiply(current_input, m,)
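    # When addmotion is set, the 3-channel motion volume m is concatenated to
    # the current feature maps along the channel axis before every stage below.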
if addmotion:
current_input = tf.concat([current_input, m], axis=4)
input_nfeaturemap += 3
with tf.variable_scope('img_conv1_1') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([3, 3, 3, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.elu(conv3d(current_input, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
#current_input = max_pool_2x2(current_input)
#input_nfeaturemap = 1
if addmotion:
current_input = tf.concat([current_input, m], axis=4)
input_nfeaturemap += 3
with tf.variable_scope('img_conv1_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 64
W = weight_variable([2, 2, 2, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.elu(conv3d(current_input, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
if addmotion:
current_input = tf.concat([current_input, m], axis=4)
input_nfeaturemap += 3
with tf.variable_scope('img_conv1_3') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 16
W = weight_variable([2, 2, 2, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.elu(conv3d(current_input, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
# resize upsampling
#current_input = resize_volumes(current_input, 2, 2, 2)
if addmotion:
current_input = tf.concat([current_input, m], axis=4)
input_nfeaturemap += 3
with tf.variable_scope('deconv_m_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 3
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(current_input, W) + b
encoder_m.append(W)
#input_nfeaturemap = nfeaturemap
m_hat = output
    with tf.variable_scope('deconv_i_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 1
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(current_input, W) + b
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
y = output
#current_input = tf.concat([branch_image, branch_motion], axis=4)
#input_nfeaturemap = 128
#current_input = tf.multiply(branch_image,branch_motion)
#print tf.shape(current_input)[-1]
#tf.shape(current_input)[-1]
#
# Max pooling
#current_input = max_pool_2x2(current_input)
#
'''
with tf.variable_scope('conv3_2') as scope:
shapes_main.append(current_input.get_shape().as_list())
nfeaturemap = 16
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_main.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
with tf.variable_scope('deconv_i_1') as scope:
shapes_i.append(z.get_shape().as_list())
nfeaturemap = 64
W = weight_variable([3, 3, 3, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(z, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('deconv_i_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 1
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(current_input, W) + b
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
y = output
with tf.variable_scope('deconv_m_1') as scope:
shapes_i.append(z.get_shape().as_list())
nfeaturemap = 64
W = weight_variable([3, 3, 3, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(z, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
'''
loss_m = tf.reduce_mean(tf.square(m-m_hat))
loss_i = tf.reduce_mean(tf.square(t-y))
cost = alpha*loss_i #+ loss_m
# %%
return {'x': x, 't':t, 'm': m, 'm_hat':m_hat, 'y': y, 'cost': cost, 'loss_i':loss_i, 'loss_m':loss_m, 'keep_prob': keep_prob, 'encoder_main':encoder_main, 'encoder_i':encoder_i, 'encoder_m':encoder_m}
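# --- Illustrative only (not part of the original file): a minimal TF1.x training
# sketch for the dictionary returned by inference_fconv_m2i, using synthetic
# numpy batches shaped like the default placeholders.
if __name__ == '__main__':
    model = inference_fconv_m2i(addmotion=True)
    train_step = tf.train.AdamOptimizer(1e-4).minimize(model['cost'])
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        batch_x = np.random.rand(2, 22, 22, 10, 1).astype(np.float32)
        batch_m = np.random.rand(2, 22, 22, 10, 3).astype(np.float32)
        # one illustrative optimization step; `t` reuses `x` as a stand-in target
        _, cost = sess.run([train_step, model['cost']],
                           feed_dict={model['x']: batch_x, model['m']: batch_m,
                                      model['t']: batch_x, model['keep_prob']: 0.8})
        print('cost:', cost)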
def inference_fconv_small12(input_shape=[None, 22,22,10,1],
input_shape_m=[None, 22,22,10,3],
n_filters=[1, 32, 32, 32],
filter_sizes=[3, 2, 3, 2],
corruption=False):
"""Build the fMRI model.
Args:
images: Images.
"""
# input to the network
x = tf.placeholder(
tf.float32, input_shape, name='x')
m = tf.placeholder(
tf.float32, input_shape_m, name='m')
t = tf.placeholder(
tf.float32, input_shape, name='t')
keep_prob = tf.placeholder(tf.float32, name='keep_prob') #dropout (keep probability)
encoder_i = []
encoder_m = []
encoder_main = []
shapes_main = []
shapes_i = []
shapes_m = []
#keep_prob=1.
### BRANCH 3d images
with tf.variable_scope('img_conv1_1') as scope:
shapes_i.append(x.get_shape().as_list())
nfeaturemap = 256
W = weight_variable([3, 3, 3, 1, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(x, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('img_conv1_3') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
branch_image = current_input
'''
### BRANCH motion parameters
with tf.variable_scope('motion_conv1_1') as scope:
shapes_m.append(m.get_shape().as_list())
nfeaturemap = 64
W = weight_variable([3, 3, 3, 3, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(m, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('motion_conv1_3') as scope:
shapes_m.append(current_input.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
branch_motion = current_input
#current_input = tf.concat([branch_image, branch_motion], axis=4)
#input_nfeaturemap = 128
current_input = tf.multiply(branch_image,branch_motion)
#print tf.shape(current_input)[-1]
#tf.shape(current_input)[-1]
'''
with tf.variable_scope('conv3_1') as scope:
shapes_main.append(current_input.get_shape().as_list())
nfeaturemap = 16
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(branch_image, W) + b)
encoder_main.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
# Max pooling
#current_input = max_pool_2x2(current_input)
#'''
with tf.variable_scope('conv3_2') as scope:
shapes_main.append(current_input.get_shape().as_list())
nfeaturemap = 16
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_main.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
# store the latent representation
z = current_input
z_input_nfeaturemap = input_nfeaturemap
'''
encoder_main.reverse()
encoder_i.reverse()
encoder_m.reverse()
shapes_main.reverse()
shapes_i.reverse()
shapes_m.reverse()
'''
with tf.variable_scope('deconv_i_1') as scope:
shapes_i.append(z.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([3, 3, 3, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(z, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('deconv_i_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 1
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(current_input, W) + b
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
y = output
with tf.variable_scope('deconv_m_1') as scope:
shapes_i.append(z.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([3, 3, 3, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(z, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
with tf.variable_scope('deconv_m_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 3
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(current_input, W) + b
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
m_hat = output
loss_m = tf.reduce_mean(tf.square(m-m_hat))
loss_i = tf.reduce_mean(tf.square(t-y))
cost = loss_i + loss_m
# %%
return {'x': x, 't':t, 'm': m, 'm_hat':m_hat, 'y': y, 'cost': cost, 'loss_i':loss_i, 'loss_m':loss_m, 'keep_prob': keep_prob, 'encoder_main':encoder_main, 'encoder_i':encoder_i, 'encoder_m':encoder_m}
def inference_fconv_small(alpha=1.,input_shape=[None, 22,22,10,1],
input_shape_m=[None, 22,22,10,3],
n_filters=[1, 32, 32, 32],
filter_sizes=[3, 2, 3, 2],
corruption=False):
"""Build the fMRI model.
Args:
images: Images.
"""
# input to the network
x = tf.placeholder(
tf.float32, input_shape, name='x')
m = tf.placeholder(
tf.float32, input_shape_m, name='m')
t = tf.placeholder(
tf.float32, input_shape, name='t')
keep_prob = tf.placeholder(tf.float32, name='keep_prob') #dropout (keep probability)
encoder_i = []
encoder_m = []
encoder_main = []
shapes_main = []
shapes_i = []
shapes_m = []
#keep_prob=1.
### BRANCH 3d images
with tf.variable_scope('img_conv1_1') as scope:
shapes_i.append(x.get_shape().as_list())
nfeaturemap = 32
W = weight_variable([2, 2, 2, 1, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(x, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
#current_input = max_pool_2x2(current_input)
input_nfeaturemap = 32
with tf.variable_scope('img_conv1_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 32
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('img_conv1_3') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 1
W = weight_variable([2, 2, 2, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
# resize upsampling
#current_input = resize_volumes(current_input, 2, 2, 2)
branch_image = current_input
### BRANCH motion parameters
with tf.variable_scope('motion_conv1_1') as scope:
shapes_m.append(m.get_shape().as_list())
nfeaturemap = 64
W = weight_variable([3, 3, 3, 3, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(m, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('motion_conv1_3') as scope:
shapes_m.append(current_input.get_shape().as_list())
nfeaturemap = 1
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
branch_motion = current_input
#current_input = tf.concat([branch_image, branch_motion], axis=4)
#input_nfeaturemap = 128
current_input = tf.multiply(branch_image,branch_motion)
#print tf.shape(current_input)[-1]
#tf.shape(current_input)[-1]
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('conv3_1') as scope:
shapes_main.append(current_input.get_shape().as_list())
nfeaturemap = 16
W = weight_variable([3, 3, 3, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_main.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
# Max pooling
#current_input = max_pool_2x2(current_input)
#'''
with tf.variable_scope('conv3_2') as scope:
shapes_main.append(current_input.get_shape().as_list())
nfeaturemap = 16
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_main.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
# store the latent representation
z = current_input
z_input_nfeaturemap = input_nfeaturemap
'''
encoder_main.reverse()
encoder_i.reverse()
encoder_m.reverse()
shapes_main.reverse()
shapes_i.reverse()
shapes_m.reverse()
'''
with tf.variable_scope('deconv_i_1') as scope:
shapes_i.append(z.get_shape().as_list())
nfeaturemap = 64
W = weight_variable([3, 3, 3, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(z, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('deconv_i_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 1
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(current_input, W) + b
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
y = output
with tf.variable_scope('deconv_m_1') as scope:
shapes_i.append(z.get_shape().as_list())
nfeaturemap = 64
W = weight_variable([3, 3, 3, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(z, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
with tf.variable_scope('deconv_m_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 3
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(current_input, W) + b
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
m_hat = output
loss_m = tf.reduce_mean(tf.square(m-m_hat))
loss_i = tf.reduce_mean(tf.square(t-y))
cost = alpha*loss_i + loss_m
# %%
return {'x': x, 't':t, 'm': m, 'm_hat':m_hat, 'y': y, 'cost': cost, 'loss_i':loss_i, 'loss_m':loss_m, 'keep_prob': keep_prob, 'encoder_main':encoder_main, 'encoder_i':encoder_i, 'encoder_m':encoder_m}
def inference_fconv(input_shape=[None, 22,22,10,1],
input_shape_m=[None, 22,22,10,3],
n_filters=[1, 32, 32, 32],
filter_sizes=[3, 2, 3, 2],
corruption=False):
"""Build the fMRI model.
Args:
images: Images.
"""
# input to the network
x = tf.placeholder(
tf.float32, input_shape, name='x')
m = tf.placeholder(
tf.float32, input_shape_m, name='m')
t = tf.placeholder(
tf.float32, input_shape, name='t')
keep_prob = tf.placeholder(tf.float32, name='keep_prob') #dropout (keep probability)
encoder_i = []
encoder_m = []
encoder_main = []
shapes_main = []
shapes_i = []
shapes_m = []
#keep_prob=1.
### BRANCH 3d images
with tf.variable_scope('img_conv1_1') as scope:
shapes_i.append(x.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([3, 3, 3, input_shape[4], nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(x, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
img_1 = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
### BRANCH motion parameters
with tf.variable_scope('motion_conv1_1') as scope:
shapes_m.append(m.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([3, 3, 3, input_shape_m[4], nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(m, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
motion_1 = output
current_input = tf.multiply(img_1,motion_1)
with tf.variable_scope('img_conv1_3') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 256
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
img_2 = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
# Max pooling
motion_1 = max_pool_2x2(motion_1)
input_nfeaturemap = 128
with tf.variable_scope('motion_conv1_3') as scope:
shapes_m.append(motion_1.get_shape().as_list())
nfeaturemap = 256
W = weight_variable([2, 2, 2, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(motion_1, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
motion_2 = output
# resize upsampling
motion_2 = resize_volumes(motion_2, 2, 2, 2)
#current_input = tf.concat([branch_image, branch_motion], axis=4)
#input_nfeaturemap = 512
current_input = tf.multiply(img_2,motion_2)
input_nfeaturemap = 256
#print tf.shape(current_input)[-1]
#tf.shape(current_input)[-1]
'''
with tf.variable_scope('img_conv1_1') as scope:
shapes_i.append(x.get_shape().as_list())
nfeaturemap = 256
W = weight_variable([3, 3, 3, 1, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(x, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('img_conv1_3') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
branch_image = current_input
### BRANCH motion parameters
with tf.variable_scope('motion_conv1_1') as scope:
shapes_m.append(m.get_shape().as_list())
nfeaturemap = 64
W = weight_variable([3, 3, 3, 3, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(m, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('motion_conv1_3') as scope:
shapes_m.append(current_input.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
branch_motion = current_input
#current_input = tf.concat([branch_image, branch_motion], axis=4)
#input_nfeaturemap = 256
current_input = tf.multiply(branch_image,branch_motion)
#print tf.shape(current_input)[-1]
#tf.shape(current_input)[-1]
'''
with tf.variable_scope('conv3_1') as scope:
shapes_main.append(current_input.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_main.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
# Max pooling
#current_input = max_pool_2x2(current_input)
with tf.variable_scope('conv3_2') as scope:
shapes_main.append(current_input.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([2, 2, 2, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_main.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
# store the latent representation
z = current_input
z_input_nfeaturemap = input_nfeaturemap
'''
encoder_main.reverse()
encoder_i.reverse()
encoder_m.reverse()
shapes_main.reverse()
shapes_i.reverse()
shapes_m.reverse()
'''
with tf.variable_scope('deconv_i_1') as scope:
shapes_i.append(z.get_shape().as_list())
nfeaturemap = 16
W = weight_variable([3, 3, 3, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(z, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('deconv_i_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 1
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(current_input, W) + b
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
y = output
with tf.variable_scope('deconv_m_1') as scope:
shapes_i.append(z.get_shape().as_list())
nfeaturemap = 32
W = weight_variable([3, 3, 3, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(z, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
with tf.variable_scope('deconv_m_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 3
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(current_input, W) + b
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
m_hat = output
loss_m = tf.reduce_mean(tf.square(m-m_hat))
loss_i = tf.reduce_mean(tf.square(t-y))
cost = loss_i + loss_m
# %%
return {'x': x, 't':t, 'm': m, 'm_hat':m_hat, 'y': y, 'cost': cost, 'loss_i':loss_i, 'loss_m':loss_m, 'keep_prob': keep_prob, 'encoder_main':encoder_main, 'encoder_i':encoder_i, 'encoder_m':encoder_m}
def inference_fconv_supercompact(input_shape=[None, 22,22,10,1],
input_shape_m=[None, 22,22,10,3],
n_filters=[1, 32, 32, 32],
filter_sizes=[3, 2, 3, 2],
corruption=False):
"""Build the fMRI model.
Args:
images: Images.
"""
# input to the network
x = tf.placeholder(
tf.float32, input_shape, name='x')
m = tf.placeholder(
tf.float32, input_shape_m, name='m')
t = tf.placeholder(
tf.float32, input_shape, name='t')
keep_prob = tf.placeholder(tf.float32, name='keep_prob') #dropout (keep probability)
encoder_i = []
encoder_m = []
encoder_main = []
shapes_main = []
shapes_i = []
shapes_m = []
#keep_prob=1.
### BRANCH 3d images
with tf.variable_scope('img_conv1_1') as scope:
shapes_i.append(x.get_shape().as_list())
nfeaturemap = 256
W = weight_variable([3, 3, 3, 1, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(x, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('img_conv1_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
branch_image = current_input
### BRANCH motion parameters
with tf.variable_scope('motion_conv1_1') as scope:
shapes_m.append(m.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([3, 3, 3, 3, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(m, W) + b)
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
branch_motion = current_input
#current_input = tf.concat([branch_image, branch_motion], axis=4)
#input_nfeaturemap = 256
current_input = tf.multiply(branch_image,branch_motion)
#print tf.shape(current_input)[-1]
#tf.shape(current_input)[-1]
with tf.variable_scope('conv3_1') as scope:
shapes_main.append(current_input.get_shape().as_list())
nfeaturemap = 128
W = weight_variable([1, 1, 1, input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = tf.nn.relu(conv3d(current_input, W) + b)
encoder_main.append(W)
input_nfeaturemap = nfeaturemap
current_input = output
# store the latent representation
z = current_input
z_input_nfeaturemap = input_nfeaturemap
'''
encoder_main.reverse()
encoder_i.reverse()
encoder_m.reverse()
shapes_main.reverse()
shapes_i.reverse()
shapes_m.reverse()
'''
#current_input = tf.nn.dropout(current_input, keep_prob, [tf.shape(x)[0],1,1,1,input_nfeaturemap])
with tf.variable_scope('deconv_i_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 1
W = weight_variable([1, 1, 1, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(z, W) + b
encoder_i.append(W)
input_nfeaturemap = nfeaturemap
y = output
with tf.variable_scope('deconv_m_2') as scope:
shapes_i.append(current_input.get_shape().as_list())
nfeaturemap = 3
W = weight_variable([1, 1, 1, z_input_nfeaturemap, nfeaturemap])
b = bias_variable([nfeaturemap])
output = conv3d(z, W) + b
encoder_m.append(W)
input_nfeaturemap = nfeaturemap
m_hat = output
loss_m = tf.reduce_mean(tf.square(m-m_hat))
loss_i = tf.reduce_mean(tf.square(t-y))
cost = loss_i + loss_m
# %%
return {'x': x, 't':t, 'm': m, 'm_hat':m_hat, 'y': y, 'cost': cost, 'loss_i':loss_i, 'loss_m':loss_m, 'keep_prob': keep_prob, 'encoder_main':encoder_main, 'encoder_i':encoder_i, 'encoder_m':encoder_m} | 34.642782 | 204 | 0.61811 | 4,295 | 32,876 | 4.472177 | 0.026077 | 0.123074 | 0.127915 | 0.050448 | 0.982403 | 0.979384 | 0.979384 | 0.979332 | 0.979332 | 0.977249 | 0 | 0.031025 | 0.261741 | 32,876 | 949 | 205 | 34.642782 | 0.760372 | 0.089457 | 0 | 0.941634 | 0 | 0 | 0.033622 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009728 | false | 0 | 0.005837 | 0 | 0.025292 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
36b7fb259b1ee32ba347e7305e662f27d86c363b | 30,988 | py | Python | sdk/python/pulumi_google_native/iam/v1/provider.py | AaronFriel/pulumi-google-native | 75d1cda425e33d4610348972cd70bddf35f1770d | [
"Apache-2.0"
] | 44 | 2021-04-18T23:00:48.000Z | 2022-02-14T17:43:15.000Z | sdk/python/pulumi_google_native/iam/v1/provider.py | AaronFriel/pulumi-google-native | 75d1cda425e33d4610348972cd70bddf35f1770d | [
"Apache-2.0"
] | 354 | 2021-04-16T16:48:39.000Z | 2022-03-31T17:16:39.000Z | sdk/python/pulumi_google_native/iam/v1/provider.py | AaronFriel/pulumi-google-native | 75d1cda425e33d4610348972cd70bddf35f1770d | [
"Apache-2.0"
] | 8 | 2021-04-24T17:46:51.000Z | 2022-01-05T10:40:21.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from . import outputs
from ._inputs import *
__all__ = ['ProviderArgs', 'Provider']
@pulumi.input_type
class ProviderArgs:
def __init__(__self__, *,
workload_identity_pool_id: pulumi.Input[str],
workload_identity_pool_provider_id: pulumi.Input[str],
attribute_condition: Optional[pulumi.Input[str]] = None,
attribute_mapping: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
aws: Optional[pulumi.Input['AwsArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
display_name: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
oidc: Optional[pulumi.Input['OidcArgs']] = None,
project: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a Provider resource.
:param pulumi.Input[str] attribute_condition: [A Common Expression Language](https://opensource.google/projects/cel) expression, in plain text, to restrict what otherwise valid authentication credentials issued by the provider should not be accepted. The expression must output a boolean representing whether to allow the federation. The following keywords may be referenced in the expressions: * `assertion`: JSON representing the authentication credential issued by the provider. * `google`: The Google attributes mapped from the assertion in the `attribute_mappings`. * `attribute`: The custom attributes mapped from the assertion in the `attribute_mappings`. The maximum length of the attribute condition expression is 4096 characters. If unspecified, all valid authentication credential are accepted. The following example shows how to only allow credentials with a mapped `google.groups` value of `admins`: ``` "'admins' in google.groups" ```
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] attribute_mapping: Maps attributes from authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. Cannot exceed 127 characters. * `google.groups`: Groups the external identity belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where `{custom_attribute}` is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workload to Google Cloud resources. For example: * `google.subject`: `principal://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 8KB. For AWS providers, if no attribute mapping is defined, the following default mapping applies: ``` { "google.subject":"assertion.arn", "attribute.aws_role": "assertion.arn.contains('assumed-role')" " ? assertion.arn.extract('{account_arn}assumed-role/')" " + 'assumed-role/'" " + assertion.arn.extract('assumed-role/{role_name}/')" " : assertion.arn", } ``` If any custom attribute mappings are defined, they must include a mapping to the `google.subject` attribute. For OIDC providers, you must supply a custom mapping, which must include the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ```
:param pulumi.Input['AwsArgs'] aws: An Amazon Web Services identity provider.
:param pulumi.Input[str] description: A description for the provider. Cannot exceed 256 characters.
:param pulumi.Input[bool] disabled: Whether the provider is disabled. You cannot use a disabled provider to exchange tokens. However, existing tokens still grant access.
:param pulumi.Input[str] display_name: A display name for the provider. Cannot exceed 32 characters.
:param pulumi.Input['OidcArgs'] oidc: An OpenId Connect 1.0 identity provider.
"""
pulumi.set(__self__, "workload_identity_pool_id", workload_identity_pool_id)
pulumi.set(__self__, "workload_identity_pool_provider_id", workload_identity_pool_provider_id)
if attribute_condition is not None:
pulumi.set(__self__, "attribute_condition", attribute_condition)
if attribute_mapping is not None:
pulumi.set(__self__, "attribute_mapping", attribute_mapping)
if aws is not None:
pulumi.set(__self__, "aws", aws)
if description is not None:
pulumi.set(__self__, "description", description)
if disabled is not None:
pulumi.set(__self__, "disabled", disabled)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if location is not None:
pulumi.set(__self__, "location", location)
if oidc is not None:
pulumi.set(__self__, "oidc", oidc)
if project is not None:
pulumi.set(__self__, "project", project)
@property
@pulumi.getter(name="workloadIdentityPoolId")
def workload_identity_pool_id(self) -> pulumi.Input[str]:
return pulumi.get(self, "workload_identity_pool_id")
@workload_identity_pool_id.setter
def workload_identity_pool_id(self, value: pulumi.Input[str]):
pulumi.set(self, "workload_identity_pool_id", value)
@property
@pulumi.getter(name="workloadIdentityPoolProviderId")
def workload_identity_pool_provider_id(self) -> pulumi.Input[str]:
return pulumi.get(self, "workload_identity_pool_provider_id")
@workload_identity_pool_provider_id.setter
def workload_identity_pool_provider_id(self, value: pulumi.Input[str]):
pulumi.set(self, "workload_identity_pool_provider_id", value)
@property
@pulumi.getter(name="attributeCondition")
def attribute_condition(self) -> Optional[pulumi.Input[str]]:
"""
[A Common Expression Language](https://opensource.google/projects/cel) expression, in plain text, to restrict what otherwise valid authentication credentials issued by the provider should not be accepted. The expression must output a boolean representing whether to allow the federation. The following keywords may be referenced in the expressions: * `assertion`: JSON representing the authentication credential issued by the provider. * `google`: The Google attributes mapped from the assertion in the `attribute_mappings`. * `attribute`: The custom attributes mapped from the assertion in the `attribute_mappings`. The maximum length of the attribute condition expression is 4096 characters. If unspecified, all valid authentication credential are accepted. The following example shows how to only allow credentials with a mapped `google.groups` value of `admins`: ``` "'admins' in google.groups" ```
"""
return pulumi.get(self, "attribute_condition")
@attribute_condition.setter
def attribute_condition(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "attribute_condition", value)
@property
@pulumi.getter(name="attributeMapping")
def attribute_mapping(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Maps attributes from authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. Cannot exceed 127 characters. * `google.groups`: Groups the external identity belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where `{custom_attribute}` is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workload to Google Cloud resources. For example: * `google.subject`: `principal://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 8KB. For AWS providers, if no attribute mapping is defined, the following default mapping applies: ``` { "google.subject":"assertion.arn", "attribute.aws_role": "assertion.arn.contains('assumed-role')" " ? assertion.arn.extract('{account_arn}assumed-role/')" " + 'assumed-role/'" " + assertion.arn.extract('assumed-role/{role_name}/')" " : assertion.arn", } ``` If any custom attribute mappings are defined, they must include a mapping to the `google.subject` attribute. For OIDC providers, you must supply a custom mapping, which must include the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ```
"""
return pulumi.get(self, "attribute_mapping")
@attribute_mapping.setter
def attribute_mapping(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "attribute_mapping", value)
@property
@pulumi.getter
def aws(self) -> Optional[pulumi.Input['AwsArgs']]:
"""
An Amazon Web Services identity provider.
"""
return pulumi.get(self, "aws")
@aws.setter
def aws(self, value: Optional[pulumi.Input['AwsArgs']]):
pulumi.set(self, "aws", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A description for the provider. Cannot exceed 256 characters.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def disabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the provider is disabled. You cannot use a disabled provider to exchange tokens. However, existing tokens still grant access.
"""
return pulumi.get(self, "disabled")
@disabled.setter
def disabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disabled", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
A display name for the provider. Cannot exceed 32 characters.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def oidc(self) -> Optional[pulumi.Input['OidcArgs']]:
"""
An OpenId Connect 1.0 identity provider.
"""
return pulumi.get(self, "oidc")
@oidc.setter
def oidc(self, value: Optional[pulumi.Input['OidcArgs']]):
pulumi.set(self, "oidc", value)
@property
@pulumi.getter
def project(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "project")
@project.setter
def project(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project", value)
class Provider(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
attribute_condition: Optional[pulumi.Input[str]] = None,
attribute_mapping: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
aws: Optional[pulumi.Input[pulumi.InputType['AwsArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
display_name: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
oidc: Optional[pulumi.Input[pulumi.InputType['OidcArgs']]] = None,
project: Optional[pulumi.Input[str]] = None,
workload_identity_pool_id: Optional[pulumi.Input[str]] = None,
workload_identity_pool_provider_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Creates a new WorkloadIdentityPoolProvider in a WorkloadIdentityPool. You cannot reuse the name of a deleted provider until 30 days after deletion.
Auto-naming is currently not supported for this resource.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] attribute_condition: [A Common Expression Language](https://opensource.google/projects/cel) expression, in plain text, to restrict what otherwise valid authentication credentials issued by the provider should not be accepted. The expression must output a boolean representing whether to allow the federation. The following keywords may be referenced in the expressions: * `assertion`: JSON representing the authentication credential issued by the provider. * `google`: The Google attributes mapped from the assertion in the `attribute_mappings`. * `attribute`: The custom attributes mapped from the assertion in the `attribute_mappings`. The maximum length of the attribute condition expression is 4096 characters. If unspecified, all valid authentication credential are accepted. The following example shows how to only allow credentials with a mapped `google.groups` value of `admins`: ``` "'admins' in google.groups" ```
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] attribute_mapping: Maps attributes from authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. Cannot exceed 127 characters. * `google.groups`: Groups the external identity belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where `{custom_attribute}` is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workload to Google Cloud resources. For example: * `google.subject`: `principal://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 8KB. For AWS providers, if no attribute mapping is defined, the following default mapping applies: ``` { "google.subject":"assertion.arn", "attribute.aws_role": "assertion.arn.contains('assumed-role')" " ? assertion.arn.extract('{account_arn}assumed-role/')" " + 'assumed-role/'" " + assertion.arn.extract('assumed-role/{role_name}/')" " : assertion.arn", } ``` If any custom attribute mappings are defined, they must include a mapping to the `google.subject` attribute. For OIDC providers, you must supply a custom mapping, which must include the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ```
:param pulumi.Input[pulumi.InputType['AwsArgs']] aws: An Amazon Web Services identity provider.
:param pulumi.Input[str] description: A description for the provider. Cannot exceed 256 characters.
:param pulumi.Input[bool] disabled: Whether the provider is disabled. You cannot use a disabled provider to exchange tokens. However, existing tokens still grant access.
:param pulumi.Input[str] display_name: A display name for the provider. Cannot exceed 32 characters.
:param pulumi.Input[pulumi.InputType['OidcArgs']] oidc: An OpenId Connect 1.0 identity provider.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ProviderArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Creates a new WorkloadIdentityPoolProvider in a WorkloadIdentityPool. You cannot reuse the name of a deleted provider until 30 days after deletion.
Auto-naming is currently not supported for this resource.
:param str resource_name: The name of the resource.
:param ProviderArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ProviderArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
attribute_condition: Optional[pulumi.Input[str]] = None,
attribute_mapping: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
aws: Optional[pulumi.Input[pulumi.InputType['AwsArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
display_name: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
oidc: Optional[pulumi.Input[pulumi.InputType['OidcArgs']]] = None,
project: Optional[pulumi.Input[str]] = None,
workload_identity_pool_id: Optional[pulumi.Input[str]] = None,
workload_identity_pool_provider_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ProviderArgs.__new__(ProviderArgs)
__props__.__dict__["attribute_condition"] = attribute_condition
__props__.__dict__["attribute_mapping"] = attribute_mapping
__props__.__dict__["aws"] = aws
__props__.__dict__["description"] = description
__props__.__dict__["disabled"] = disabled
__props__.__dict__["display_name"] = display_name
__props__.__dict__["location"] = location
__props__.__dict__["oidc"] = oidc
__props__.__dict__["project"] = project
if workload_identity_pool_id is None and not opts.urn:
raise TypeError("Missing required property 'workload_identity_pool_id'")
__props__.__dict__["workload_identity_pool_id"] = workload_identity_pool_id
if workload_identity_pool_provider_id is None and not opts.urn:
raise TypeError("Missing required property 'workload_identity_pool_provider_id'")
__props__.__dict__["workload_identity_pool_provider_id"] = workload_identity_pool_provider_id
__props__.__dict__["name"] = None
__props__.__dict__["state"] = None
super(Provider, __self__).__init__(
'google-native:iam/v1:Provider',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None) -> 'Provider':
"""
Get an existing Provider resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = ProviderArgs.__new__(ProviderArgs)
__props__.__dict__["attribute_condition"] = None
__props__.__dict__["attribute_mapping"] = None
__props__.__dict__["aws"] = None
__props__.__dict__["description"] = None
__props__.__dict__["disabled"] = None
__props__.__dict__["display_name"] = None
__props__.__dict__["name"] = None
__props__.__dict__["oidc"] = None
__props__.__dict__["state"] = None
return Provider(resource_name, opts=opts, __props__=__props__)
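    # Illustrative only: looking up an existing provider by its full resource ID.
    # The ID format below is an assumption inferred from the resource URL
    # patterns documented above.
    #
    #   existing = Provider.get(
    #       "existing-provider",
    #       id="projects/{project}/locations/global/workloadIdentityPools/{pool}/providers/{provider}",
    #   )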
@property
@pulumi.getter(name="attributeCondition")
def attribute_condition(self) -> pulumi.Output[str]:
"""
[A Common Expression Language](https://opensource.google/projects/cel) expression, in plain text, to restrict what otherwise valid authentication credentials issued by the provider should not be accepted. The expression must output a boolean representing whether to allow the federation. The following keywords may be referenced in the expressions: * `assertion`: JSON representing the authentication credential issued by the provider. * `google`: The Google attributes mapped from the assertion in the `attribute_mappings`. * `attribute`: The custom attributes mapped from the assertion in the `attribute_mappings`. The maximum length of the attribute condition expression is 4096 characters. If unspecified, all valid authentication credential are accepted. The following example shows how to only allow credentials with a mapped `google.groups` value of `admins`: ``` "'admins' in google.groups" ```
"""
return pulumi.get(self, "attribute_condition")
@property
@pulumi.getter(name="attributeMapping")
def attribute_mapping(self) -> pulumi.Output[Mapping[str, str]]:
"""
Maps attributes from authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. Cannot exceed 127 characters. * `google.groups`: Groups the external identity belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where `{custom_attribute}` is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workload to Google Cloud resources. For example: * `google.subject`: `principal://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/projects/{project}/locations/{location}/workloadIdentityPools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 8KB. For AWS providers, if no attribute mapping is defined, the following default mapping applies: ``` { "google.subject":"assertion.arn", "attribute.aws_role": "assertion.arn.contains('assumed-role')" " ? assertion.arn.extract('{account_arn}assumed-role/')" " + 'assumed-role/'" " + assertion.arn.extract('assumed-role/{role_name}/')" " : assertion.arn", } ``` If any custom attribute mappings are defined, they must include a mapping to the `google.subject` attribute. For OIDC providers, you must supply a custom mapping, which must include the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ```
"""
return pulumi.get(self, "attribute_mapping")
@property
@pulumi.getter
def aws(self) -> pulumi.Output['outputs.AwsResponse']:
"""
An Amazon Web Services identity provider.
"""
return pulumi.get(self, "aws")
@property
@pulumi.getter
def description(self) -> pulumi.Output[str]:
"""
A description for the provider. Cannot exceed 256 characters.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def disabled(self) -> pulumi.Output[bool]:
"""
Whether the provider is disabled. You cannot use a disabled provider to exchange tokens. However, existing tokens still grant access.
"""
return pulumi.get(self, "disabled")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Output[str]:
"""
A display name for the provider. Cannot exceed 32 characters.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The resource name of the provider.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def oidc(self) -> pulumi.Output['outputs.OidcResponse']:
"""
An OpenId Connect 1.0 identity provider.
"""
return pulumi.get(self, "oidc")
@property
@pulumi.getter
def state(self) -> pulumi.Output[str]:
"""
The state of the provider.
"""
return pulumi.get(self, "state")
| 81.120419 | 2,876 | 0.715374 | 3,884 | 30,988 | 5.566941 | 0.07621 | 0.040191 | 0.032374 | 0.029507 | 0.882203 | 0.848071 | 0.817316 | 0.80173 | 0.780964 | 0.742207 | 0 | 0.00438 | 0.189622 | 30,988 | 381 | 2,877 | 81.333333 | 0.856642 | 0.585969 | 0 | 0.399194 | 1 | 0 | 0.111129 | 0.031235 | 0 | 0 | 0 | 0 | 0 | 1 | 0.149194 | false | 0.004032 | 0.028226 | 0.016129 | 0.270161 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
36cfbdf9e69fc6f596582e82a55a64ee66afedb0 | 2,139 | py | Python | run.py | Verylovenlp/MinTL-SKKU | 15b5cb870c7d6dcd0f5d895aac2806539cc5101f | [
"MIT"
] | 60 | 2020-09-24T06:17:49.000Z | 2022-02-24T08:44:52.000Z | run.py | Verylovenlp/MinTL-SKKU | 15b5cb870c7d6dcd0f5d895aac2806539cc5101f | [
"MIT"
] | 6 | 2020-11-11T02:04:23.000Z | 2022-03-02T23:58:01.000Z | run.py | salesforce/CASPI | 3e4cd23f4f3d1fa7132ba89805366472c9fe5983 | [
"BSD-3-Clause"
] | 13 | 2020-09-28T07:29:05.000Z | 2022-02-06T15:04:27.000Z |
"""
T5:
end2end: "python train.py --mode train --context_window 2 --pretrained_checkpoint t5-small --cfg seed=557 batch_size=32",
"python train.py --mode train --context_window 2 --gradient_accumulation_steps 8 --pretrained_checkpoint t5-base --cfg seed=557 batch_size=8",
DST: "python DST.py --mode train --context_window 3 --cfg seed=557 batch_size=32",
"python DST.py --mode train --context_window 3 --gradient_accumulation_steps 5 --pretrained_checkpoint t5-base --cfg seed=557 batch_size=12",
"python DST.py --mode train --context_window 5 --version 2.1 --cfg seed=557 batch_size=32",
"python DST.py --mode train --context_window 5 --version 2.1 --gradient_accumulation_steps 5 --pretrained_checkpoint t5-base --cfg seed=557 batch_size=12",
Lexicalize:
python train.py --mode relex --context_window 2 --pretrained_checkpoint t5-small --cfg seed=557 batch_size=32 --model_path experiments/all_sd557_lr0.0006_bs32_sp5_dc0.8_cw2_model_t5-small_noupdateFalse_1.0 --device cpu
python train.py --mode relex --context_window 2 --gradient_accumulation_steps 8 --pretrained_checkpoint t5-base --cfg seed=557 batch_size=8 --model_path experiments/all_sd557_lr0.0006_bs8_sp5_dc0.8_cw2_model_t5-base_noupdateFalse_1.0 --device cpu
BART:
end2end: "python train.py --mode train --context_window 2 --pretrained_checkpoint bart-large-cnn --gradient_accumulation_steps 8 --lr 3e-5 --back_bone bart --cfg seed=557 batch_size=8",
DST: "python DST.py --mode train --context_window 3 --gradient_accumulation_steps 10 --pretrained_checkpoint bart-large-cnn --back_bone bart --lr 1e-5 --cfg seed=557 batch_size=4",
"python DST.py --mode train --context_window 5 --version 2.1 --gradient_accumulation_steps 10 --pretrained_checkpoint bart-large-cnn --back_bone bart --lr 1e-5 --cfg seed=557 batch_size=4",
Lexicalize:
python train.py --mode relex --context_window 2 --pretrained_checkpoint bart-large-cnn --gradient_accumulation_steps 8 --lr 2e-5 --back_bone bart --cfg seed=557 batch_size=8 --model_path experiments/all_sd557_lr3e-05_bs8_sp5_dc0.8_cw2_model_bart-large-cnn_noupdateFalse_1.0 --device cpu
""" | 89.125 | 286 | 0.769518 | 346 | 2,139 | 4.50578 | 0.17341 | 0.046183 | 0.076972 | 0.115459 | 0.973701 | 0.927518 | 0.892239 | 0.865298 | 0.837075 | 0.837075 | 0 | 0.076236 | 0.110799 | 2,139 | 24 | 287 | 89.125 | 0.743428 | 0.995325 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
36db2d7074230b63651088e3278265caec5d776d | 13,166 | py | Python | calculaHashDadosAbertos/CalculaHashCamara.py | masuta16/calculaHash | a5213a67422f3eedc1a2e84edbeab1e88678314b | [
"MIT"
] | null | null | null | calculaHashDadosAbertos/CalculaHashCamara.py | masuta16/calculaHash | a5213a67422f3eedc1a2e84edbeab1e88678314b | [
"MIT"
] | null | null | null | calculaHashDadosAbertos/CalculaHashCamara.py | masuta16/calculaHash | a5213a67422f3eedc1a2e84edbeab1e88678314b | [
"MIT"
] | null | null | null | import requests
import pandas as pd
from datetime import date
import hashlib
import os
import shutil


class Con3Prop:
    # for 3 proposal types
    def __init__(self, parametro1, parametro2, parametro3, diasdiff):
        """parametro1/2/3 = proposal type ('PLP', 'PEC' or 'PL'); diasdiff = difference in days (integer)"""
        self.parametro1 = parametro1
        self.parametro2 = parametro2
        self.parametro3 = parametro3
        self.diasdiff = diasdiff

    @property
    def diferenca3Prop(self):
        """Returns an API query URL covering the given number of days back from today."""
        hj = date.today()
        return 'https://dadosabertos.camara.leg.br/api/v2/proposicoes?siglaTipo={}&siglaTipo={}&siglaTipo={}&dataApresentacaoInicio='.format(self.parametro1, self.parametro2, self.parametro3) + str(date.fromordinal(hj.toordinal() - self.diasdiff)) + "&dataApresentacaoFim=" + str(date.today()) + "&ordem=ASC&ordenarPor=id"

    @property
    def vectNorm3Prop(self):
        """Returns a pandas-normalized DataFrame for a JSON query against the Dados Abertos site."""
        requisicao = requests.get(self.diferenca3Prop)
        consulta = requisicao.json()
        df = pd.json_normalize(consulta['dados'])
        return df

    @property
    def HashMD5Camara(self):
        """Computes the MD5 hash of each PLP, PEC or PL fetched from the Camara dos Deputados API and saves the result to a csv."""
        dir = './temp'
        # Create a temporary directory
        os.makedirs(dir)
        # Fetch the IDs of matching proposals presented in the last `diasdiff` days
        requisicao = requests.get(self.diferenca3Prop)
        consulta = requisicao.json()
        df = pd.json_normalize(consulta['dados'])
        # Report how many records the query returned
        print("Query returned " + str(len(df.id)) + " records")
        # Download the full text of each proposal into the temporary folder
        for i in range(len(df.id)):
            dados = requests.get("https://dadosabertos.camara.leg.br/api/v2/proposicoes/" + str(df.id[i])).json()
            df2 = pd.json_normalize(dados['dados'])
            # Save the full-text PDF
            url = str(df2.urlInteiroTeor[0])
            response = requests.get(url)
            if response.ok:
                file = open(dir + '/' + str(df.id[i]), "wb+")  # write, binary, allow creation
                file.write(response.content)
                file.close()
            else:
                print("Failed to get the file")
        print("List of MD5 hashes for the requested window:")
        totalFiles = 0
        totalDir = 0
        # Count the files in the temporary folder
        for base, dirs, files in os.walk(dir):
            for directories in dirs:
                totalDir += 1
            for Files in files:
                totalFiles += 1
        df3 = pd.DataFrame(index=range(totalFiles), columns=range(2))
        df3 = df3.rename(columns={0: 'HashMD5', 1: 'id'})
        b = 0
        # Hash every downloaded file
        for root, dirs, files in os.walk(dir, topdown=True):
            for name in files:
                FileName = os.path.join(root, name)
                hasher = hashlib.md5()
                with open(str(FileName), 'rb') as afile:
                    buf = afile.read()
                    hasher.update(buf)
                df3.loc[b, 'id'] = str(name)
                df3.loc[b, 'HashMD5'] = str(hasher.hexdigest())
                b += 1
        df['id'] = df['id'].astype(str)
        df3['id'] = df3['id'].astype(str)
        df = df.merge(df3, on='id')
        # Inspect the merged frame and its structure
        print(df.head())
        print(df.info())
        # Save to csv
        df.to_csv('HashMD5Camara.csv')
        # Delete the temporary directory
        shutil.rmtree(dir, ignore_errors=True)
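
# Illustrative usage sketch for the class above: network access to
# dadosabertos.camara.leg.br is assumed, and the helper name below is
# hypothetical, not part of the original module.
def _example_con3prop():
    con = Con3Prop('PL', 'PLP', 'PEC', diasdiff=3)
    print(con.diferenca3Prop)   # the assembled query URL
    df = con.vectNorm3Prop      # normalized DataFrame of matching proposals
    con.HashMD5Camara           # downloads the PDFs and writes HashMD5Camara.csv
    return df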
class Con2Prop:
    # for 2 proposal types
    def __init__(self, parametro1, parametro2, diasdiff):
        """parametro1/2 = proposal type ('PLP', 'PEC' or 'PL'); diasdiff = difference in days (integer)"""
        self.parametro1 = parametro1
        self.parametro2 = parametro2
        self.diasdiff = diasdiff

    @property
    def diferenca3Prop(self):
        """Returns an API query URL covering the given number of days back from today."""
        hj = date.today()
        return 'https://dadosabertos.camara.leg.br/api/v2/proposicoes?siglaTipo={}&siglaTipo={}&dataApresentacaoInicio='.format(self.parametro1, self.parametro2) + str(date.fromordinal(hj.toordinal() - self.diasdiff)) + "&dataApresentacaoFim=" + str(date.today()) + "&ordem=ASC&ordenarPor=id"

    @property
    def vectNorm3Prop(self):
        """Returns a pandas-normalized DataFrame for a JSON query against the Dados Abertos site."""
        requisicao = requests.get(self.diferenca3Prop)
        consulta = requisicao.json()
        df = pd.json_normalize(consulta['dados'])
        return df

    @property
    def HashMD5Camara(self):
        """Computes the MD5 hash of each matching proposal fetched from the Camara dos Deputados API and saves the result to a csv."""
        dir = './temp'
        # Create a temporary directory
        os.makedirs(dir)
        # Fetch the IDs of matching proposals presented in the last `diasdiff` days
        requisicao = requests.get(self.diferenca3Prop)
        consulta = requisicao.json()
        df = pd.json_normalize(consulta['dados'])
        # Report how many records the query returned
        print("Query returned " + str(len(df.id)) + " records")
        # Download the full text of each proposal into the temporary folder
        for i in range(len(df.id)):
            dados = requests.get("https://dadosabertos.camara.leg.br/api/v2/proposicoes/" + str(df.id[i])).json()
            df2 = pd.json_normalize(dados['dados'])
            # Save the full-text PDF
            url = str(df2.urlInteiroTeor[0])
            response = requests.get(url)
            if response.ok:
                file = open(dir + '/' + str(df.id[i]), "wb+")  # write, binary, allow creation
                file.write(response.content)
                file.close()
            else:
                print("Failed to get the file")
        print("List of MD5 hashes for the requested window:")
        totalFiles = 0
        totalDir = 0
        # Count the files in the temporary folder
        for base, dirs, files in os.walk(dir):
            for directories in dirs:
                totalDir += 1
            for Files in files:
                totalFiles += 1
        df3 = pd.DataFrame(index=range(totalFiles), columns=range(2))
        df3 = df3.rename(columns={0: 'HashMD5', 1: 'id'})
        b = 0
        # Hash every downloaded file
        for root, dirs, files in os.walk(dir, topdown=True):
            for name in files:
                FileName = os.path.join(root, name)
                hasher = hashlib.md5()
                with open(str(FileName), 'rb') as afile:
                    buf = afile.read()
                    hasher.update(buf)
                df3.loc[b, 'id'] = str(name)
                df3.loc[b, 'HashMD5'] = str(hasher.hexdigest())
                b += 1
        df['id'] = df['id'].astype(str)
        df3['id'] = df3['id'].astype(str)
        df = df.merge(df3, on='id')
        # Inspect the merged frame and its structure
        print(df.head())
        print(df.info())
        # Save to csv
        df.to_csv('HashMD5Camara.csv')
        # Delete the temporary directory
        shutil.rmtree(dir, ignore_errors=True)
class Con1Prop:
    # for 1 proposal type
    def __init__(self, parametro1, diasdiff):
        """parametro1 = proposal type acronym (e.g. 'PL'); diasdiff = difference in days (integer)"""
        self.parametro1 = parametro1
        self.diasdiff = diasdiff

    @property
    def diferenca1Prop(self):
        """Returns an API query URL covering the given number of days back from today."""
        hj = date.today()
        return 'https://dadosabertos.camara.leg.br/api/v2/proposicoes?siglaTipo={}&dataApresentacaoInicio='.format(self.parametro1) + str(date.fromordinal(hj.toordinal() - self.diasdiff)) + "&dataApresentacaoFim=" + str(date.today()) + "&ordem=ASC&ordenarPor=id"

    @property
    def HashMD5Camara(self):
        """Computes the MD5 hash of each matching proposal fetched from the Camara dos Deputados API and saves the result to a csv."""
        dir = './temp'
        # Create a temporary directory
        os.makedirs(dir)
        # Fetch the IDs of matching proposals presented in the last `diasdiff` days
        requisicao = requests.get(self.diferenca1Prop)
        consulta = requisicao.json()
        df = pd.json_normalize(consulta['dados'])
        # Report how many records the query returned
        print("Query returned " + str(len(df.id)) + " records")
        # Download the full text of each proposal into the temporary folder
        for i in range(len(df.id)):
            dados = requests.get("https://dadosabertos.camara.leg.br/api/v2/proposicoes/" + str(df.id[i])).json()
            df2 = pd.json_normalize(dados['dados'])
            # Save the full-text PDF
            url = str(df2.urlInteiroTeor[0])
            response = requests.get(url)
            if response.ok:
                file = open(dir + '/' + str(df.id[i]), "wb+")  # write, binary, allow creation
                file.write(response.content)
                file.close()
            else:
                print("Failed to get the file")
        print("List of MD5 hashes for the requested window:")
        totalFiles = 0
        totalDir = 0
        # Count the files in the temporary folder
        for base, dirs, files in os.walk(dir):
            for directories in dirs:
                totalDir += 1
            for Files in files:
                totalFiles += 1
        df3 = pd.DataFrame(index=range(totalFiles), columns=range(2))
        df3 = df3.rename(columns={0: 'HashMD5', 1: 'id'})
        b = 0
        # Hash every downloaded file
        for root, dirs, files in os.walk(dir, topdown=True):
            for name in files:
                FileName = os.path.join(root, name)
                hasher = hashlib.md5()
                with open(str(FileName), 'rb') as afile:
                    buf = afile.read()
                    hasher.update(buf)
                df3.loc[b, 'id'] = str(name)
                df3.loc[b, 'HashMD5'] = str(hasher.hexdigest())
                b += 1
        df['id'] = df['id'].astype(str)
        df3['id'] = df3['id'].astype(str)
        df = df.merge(df3, on='id')
        # Inspect the merged frame and its structure
        print(df.head())
        print(df.info())
        # Save to csv
        df.to_csv('HashMD5Camara.csv')
        # Delete the temporary directory
        shutil.rmtree(dir, ignore_errors=True)
| 40.888199 | 319 | 0.641425 | 1,721 | 13,166 | 4.877397 | 0.123765 | 0.013105 | 0.030141 | 0.034072 | 0.978914 | 0.975697 | 0.962711 | 0.962711 | 0.962711 | 0.962711 | 0 | 0.015926 | 0.232189 | 13,166 | 321 | 320 | 41.015576 | 0.814423 | 0.179933 | 0 | 0.948276 | 0 | 0.025862 | 0.186571 | 0.035917 | 0 | 0 | 0 | 0.003115 | 0 | 1 | 0.047414 | false | 0 | 0.168103 | 0 | 0.25 | 0.064655 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
36ed6b06d6436677b22aa226c44e0ba7ecbd5922 | 149 | py | Python | campuscats/cat/admin.py | CaptainMorch/CampusCats | 82c35fcb3c498fb969726c3d4c30efa7aaf985cc | [
"MIT"
] | 1 | 2021-09-29T07:26:19.000Z | 2021-09-29T07:26:19.000Z | campuscats/cat/admin.py | CaptainMorch/CampusCats | 82c35fcb3c498fb969726c3d4c30efa7aaf985cc | [
"MIT"
] | null | null | null | campuscats/cat/admin.py | CaptainMorch/CampusCats | 82c35fcb3c498fb969726c3d4c30efa7aaf985cc | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Cat, CatDetail, Entry
# Register your models here.
admin.site.register((Cat, CatDetail, Entry)) | 29.8 | 44 | 0.785235 | 21 | 149 | 5.571429 | 0.619048 | 0.205128 | 0.290598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120805 | 149 | 5 | 44 | 29.8 | 0.89313 | 0.174497 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
36f5982cac014713c8493eb592e066f49ac87c21 | 102 | py | Python | epicpath/__init__.py | ValentinVignal/EpicPath | d1900c4d6af22bd4cd2dc2464a813beca83aa294 | [
"MIT"
] | 16 | 2020-02-04T02:56:08.000Z | 2020-10-18T16:07:57.000Z | epicpath/__init__.py | ValentinVignal/EpicPath | d1900c4d6af22bd4cd2dc2464a813beca83aa294 | [
"MIT"
] | null | null | null | epicpath/__init__.py | ValentinVignal/EpicPath | d1900c4d6af22bd4cd2dc2464a813beca83aa294 | [
"MIT"
] | null | null | null | from .EpicPath import *
from .EpicPath import EpicPath as EP
from .EpicPath import EpicPath as EPath
| 20.4 | 39 | 0.794118 | 15 | 102 | 5.4 | 0.4 | 0.444444 | 0.666667 | 0.641975 | 0.691358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 102 | 4 | 40 | 25.5 | 0.952941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
36f6bd56f0230c11cf8cb4f29eeee2c9758cd761 | 1,087 | py | Python | test_utils/pretrained_evaluator.py | yusufdalva/detectron2 | 7f15a71c4d44bfe0b61bf410684b38eeaf4689a1 | [
"Apache-2.0"
] | 1 | 2021-09-20T18:44:11.000Z | 2021-09-20T18:44:11.000Z | test_utils/pretrained_evaluator.py | yusufdalva/robust_inst_seg | 7f15a71c4d44bfe0b61bf410684b38eeaf4689a1 | [
"Apache-2.0"
] | null | null | null | test_utils/pretrained_evaluator.py | yusufdalva/robust_inst_seg | 7f15a71c4d44bfe0b61bf410684b38eeaf4689a1 | [
"Apache-2.0"
] | null | null | null |
# TODO: Method for constructing the model from the yaml file
# TODO: Method for evaluation
class InstSegEvaluator:
MODEL_PATH_MAPPING = {
"ResNet_50_C4_x1": "COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x.yaml",
"ResNet_50_DC5_x1": "COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_1x.yaml",
"ResNet_50_FPN_x1": "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml",
"ResNet_50_C4_x3": "COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x.yaml",
"ResNet_50_DC5_x3": "COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_3x.yaml",
"ResNet_50_FPN_x3": "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml",
"ResNet_101_C4_x3": "COCO-InstanceSegmentation/mask_rcnn_R_101_C4_3x.yaml",
"ResNet_101_DC5_x3": "COCO-InstanceSegmentation/mask_rcnn_R_101_DC5_3x.yaml",
"ResNet_101_FPN_x3": "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml",
"ResNext_101_FPN_x3": "COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml"
}
def __init__(self, backbone):
pass
def evaluate(self):
pass
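
# A minimal sketch of how the two TODO methods above might be filled in
# using detectron2's model zoo; the wiring below is an assumption, not the
# author's implementation, and the helper name is hypothetical.
def _example_evaluator_sketch(backbone="ResNet_50_FPN_x3"):
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor
    # Resolve the model-zoo yaml from the class's own mapping.
    config_path = InstSegEvaluator.MODEL_PATH_MAPPING[backbone]
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(config_path))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(config_path)
    return DefaultPredictor(cfg)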
| 40.259259 | 91 | 0.74977 | 159 | 1,087 | 4.578616 | 0.238994 | 0.32967 | 0.384615 | 0.43956 | 0.56456 | 0.56456 | 0.56456 | 0 | 0 | 0 | 0 | 0.090316 | 0.154554 | 1,087 | 26 | 92 | 41.807692 | 0.70185 | 0.079117 | 0 | 0.117647 | 0 | 0 | 0.691767 | 0.529116 | 0 | 0 | 0 | 0.038462 | 0 | 1 | 0.117647 | false | 0.117647 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
7fd290d082d557a7b62c8fce55783a7a0d09496d | 27,454 | py | Python | code/rim/rim_utils.py | modichirag/galference | 56f63cdb1d88c4a1b1a67e241d89bd6e7aa5d751 | [
"MIT"
] | null | null | null | code/rim/rim_utils.py | modichirag/galference | 56f63cdb1d88c4a1b1a67e241d89bd6e7aa5d751 | [
"MIT"
] | null | null | null | code/rim/rim_utils.py | modichirag/galference | 56f63cdb1d88c4a1b1a67e241d89bd6e7aa5d751 | [
"MIT"
] | null | null | null | import numpy as np
import tensorflow as tf
from convolutional_recurrent import ConvLSTM3DCell
from tensorflow.python.keras.layers import Conv3D, Conv3DTranspose, MaxPool3D, AveragePooling3D
import sys
sys.path.append('../../utils/')
import tools
class RIM3D(tf.keras.Model):
def __init__(self, cell, input_layer, output_layer, niter):
super(RIM3D, self).__init__()
self.cell = cell
self.output_layer = output_layer
self.input_layer = input_layer
self.niter = niter
self.beta_1, self.beta_2 = 0.9, 0.999
self.lr, self.eps = 0.1, 1e-7
def call(self, x_init, y, grad_fn, grad_args=[], initstates = None, return_steps=False):
outputs_ta = tf.TensorArray(size=self.niter+1, dtype=tf.float32)
states_ta = tf.TensorArray(size=self.niter+1, dtype=tf.float32)
if initstates is None:
stateshape = x_init.shape + tuple([self.cell.filters])
initstates = [tf.zeros(stateshape), tf.zeros(stateshape)]
i = tf.constant(0, dtype=tf.int32)
curr_state = initstates
curr_pos = x_init
m = tf.zeros_like(x_init)
v = tf.zeros_like(x_init)
def body(i, pos, states, m, v):
gradient = grad_fn(pos, y, *grad_args)
t = tf.cast(i+1, tf.float32)
m = self.beta_1*m + (1-self.beta_1)*gradient
v = self.beta_2*v + (1-self.beta_2)*gradient**2
mc = m/(1-self.beta_1**t)
vc = v/(1-self.beta_2**t)
delta = -1.*self.lr*mc/(tf.sqrt(vc) + self.eps)
concat_input = tf.stack([pos, delta], axis=-1)
cell_input = self.input_layer(concat_input)
delta_pos, new_state = self.cell(cell_input, states)
delta_pos = self.output_layer(delta_pos)[...,0]
new_pos = pos + delta_pos
return i +1 , new_pos, new_state, m, v
while tf.less(i, tf.constant(self.niter)):
outputs_ta = outputs_ta.write(i, curr_pos)
states_ta = states_ta.write(i, curr_state)
i, curr_pos, curr_state, m, v = body(i, curr_pos, curr_state, m, v)
outputs_ta = outputs_ta.write(i, curr_pos)
states_ta = states_ta.write(i, curr_state)
return outputs_ta.stack(), states_ta.stack()
def build_rim(params):
nc = params['nc']
input_layer = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'],
trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, 2), activation=params['input_activation'])
cell = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
cell.build(input_shape=[None, nc, nc, nc, params['input_size']])
output_layer = Conv3D(1, kernel_size=params['output_kernel_size'], trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, params['cell_size']), activation=params['output_activation'])
rim = RIM3D(cell, input_layer, output_layer, niter=params['rim_iter'])
return rim
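
# Illustrative sketch of a `params` dict carrying every key that build_rim()
# reads; the sizes below are assumptions for a small mesh, not the authors'
# settings, and the helper name is hypothetical.
def _example_build_rim():
    params = {
        'nc': 16,
        'input_size': 8, 'input_kernel_size': 5, 'input_activation': 'tanh',
        'cell_size': 8, 'cell_kernel_size': 5,
        'output_kernel_size': 5, 'output_activation': 'linear',
        'rim_iter': 10,
    }
    return build_rim(params)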
class RIM3D_parallel(tf.keras.Model):
def __init__(self, cell1, cell2, input_layer, input_layer_sub, output_layer_up, output_layer, strides, niter):
super(RIM3D_parallel, self).__init__()
self.cell1 = cell1
self.cell2 = cell2
self.output_layer = output_layer
self.output_layer_up = output_layer_up
self.input_layer = input_layer
self.input_layer_sub = input_layer_sub
self.strides = strides
self.niter = niter
self.beta_1, self.beta_2 = 0.9, 0.999
self.lr, self.eps = 0.1, 1e-7
def call(self, x_init, y, grad_fn, grad_args=[], initstates = None, return_steps=False):
outputs_ta = tf.TensorArray(size=self.niter+1, dtype=tf.float32)
if initstates is None:
#stateshape = tuple(i//self.strides for i in x_init.shape) + tuple([self.cell1.filters])
#stateshape = x_init.shape + tuple([self.cell.filters])
#initstates = [tf.zeros(stateshape), tf.zeros(stateshape)]
nc2 = int(x_init.shape[1]/self.strides)
stateshape = (x_init.shape[0], nc2, nc2, nc2, self.cell1.filters)
initstates1 = [tf.zeros(stateshape), tf.zeros(stateshape)]
stateshape = x_init.shape + tuple([self.cell2.filters])
initstates2 = [tf.zeros(stateshape), tf.zeros(stateshape)]
initstates = [initstates1, initstates2]
i = tf.constant(0, dtype=tf.int32)
curr_state = initstates
curr_pos = x_init
m = tf.zeros_like(x_init)
v = tf.zeros_like(x_init)
def body(i, pos, states, m, v):
gradient = grad_fn(pos, y, *grad_args)
t = tf.cast(i+1, tf.float32)
m = self.beta_1*m + (1-self.beta_1)*gradient
v = self.beta_2*v + (1-self.beta_2)*gradient**2
mc = m/(1-self.beta_1**t)
vc = v/(1-self.beta_2**t)
delta = -1.*self.lr*mc/(tf.sqrt(vc) + self.eps)
#
states1, states2 = states
concat_input = tf.stack([pos, delta], axis=-1)
#
cell_input_sub = self.input_layer_sub(concat_input)
delta_pos1, new_states1 = self.cell1(cell_input_sub, states1)
delta_pos1 = self.output_layer_up(delta_pos1)
#
cell_input = self.input_layer(concat_input)
delta_pos2, new_states2 = self.cell2(cell_input, states2)
#delta_pos2 = self.output_layer(delta_pos2)
#
#delta_pos = delta_pos1 + delta_pos2
delta_pos = tf.concat([delta_pos1, delta_pos2], axis=-1)
delta_pos = self.output_layer(delta_pos)
new_pos = pos + delta_pos[..., 0]
new_states = [new_states1, new_states2]
return i + 1 , new_pos, new_states, m, v
while tf.less(i, tf.constant(self.niter)):
outputs_ta = outputs_ta.write(i, curr_pos)
i, curr_pos, curr_state, m, v = body(i, curr_pos, curr_state, m, v)
outputs_ta = outputs_ta.write(i, curr_pos)
return outputs_ta.stack()
def build_rim_parallel(params):
nc = params['nc']
input_layer = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'],
trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, 2), activation=params['input_activation'])
input_layer_sub = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'],
trainable=True, padding='SAME', strides= [params['strides']]*3,
input_shape=(None, nc, nc, nc, 2), activation=params['input_activation'])
#input_layer_sub = MaxPool3D(padding='SAME')
#input_layer_sub = AveragePooling3D(padding='SAME')
cell1 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
#cell1.build(input_shape=[None, nc, nc, nc, params['input_size']])
output_layer_up = Conv3DTranspose(params['cell_size'], kernel_size=params['middle_kernel_size'],
trainable=True, padding='SAME', strides=[params['strides']]*3,
activation=params['output_activation'])
cell2 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
output_layer = Conv3D(1, kernel_size=params['output_kernel_size'], trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, params['cell_size']*2), activation=params['output_activation'])
rim = RIM3D_parallel(cell1, cell2, input_layer, input_layer_sub, output_layer_up, output_layer, strides=params['strides'],
niter=params['rim_iter'])
return rim
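
# The parallel builder reads the same keys as build_rim() plus 'strides'
# (the subsampling factor of the coarse branch) and 'middle_kernel_size'
# (for the upsampling transpose convolution); values here are assumptions.
def _example_build_rim_parallel():
    params = {
        'nc': 16, 'strides': 2,
        'input_size': 8, 'input_kernel_size': 5, 'input_activation': 'tanh',
        'cell_size': 8, 'cell_kernel_size': 5, 'middle_kernel_size': 5,
        'output_kernel_size': 5, 'output_activation': 'linear',
        'rim_iter': 10,
    }
    return build_rim_parallel(params)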
class RIM3D_parallel_single(tf.keras.Model):
def __init__(self, cell1, cell2, input_layer, input_layer_sub, output_layer_up, output_layer, strides, niter):
super(RIM3D_parallel_single, self).__init__()
self.cell1 = cell1
self.cell2 = cell2
self.output_layer = output_layer
self.output_layer_up = output_layer_up
self.input_layer = input_layer
self.input_layer_sub = input_layer_sub
self.strides = strides
self.niter = niter
self.beta_1, self.beta_2 = 0.9, 0.999
self.lr, self.eps = 0.1, 1e-7
def call(self, x_init, y, grad_fn, x_true, grad_args=[], initstates = None, return_steps=False):
if initstates is None:
#stateshape = tuple(i//self.strides for i in x_init.shape) + tuple([self.cell1.filters])
#stateshape = x_init.shape + tuple([self.cell.filters])
#initstates = [tf.zeros(stateshape), tf.zeros(stateshape)]
nc2 = int(x_init.shape[1]/self.strides)
stateshape = (x_init.shape[0], nc2, nc2, nc2, self.cell1.filters)
initstates1 = [tf.zeros(stateshape), tf.zeros(stateshape)]
stateshape = x_init.shape + tuple([self.cell2.filters])
initstates2 = [tf.zeros(stateshape), tf.zeros(stateshape)]
initstates = [initstates1, initstates2]
i = tf.constant(0, dtype=tf.int32)
curr_state = initstates
curr_pos = x_init
m = tf.zeros_like(x_init)
v = tf.zeros_like(x_init)
def body(i, pos, states, m, v):
gradient = grad_fn(pos, y, *grad_args)
t = tf.cast(i+1, tf.float32)
m = self.beta_1*m + (1-self.beta_1)*gradient
v = self.beta_2*v + (1-self.beta_2)*gradient**2
mc = m/(1-self.beta_1**t)
vc = v/(1-self.beta_2**t)
delta = -1.*self.lr*mc/(tf.sqrt(vc) + self.eps)
#
states1, states2 = states
concat_input = tf.stack([pos, delta], axis=-1)
#
cell_input_sub = self.input_layer_sub(concat_input)
delta_pos1, new_states1 = self.cell1(cell_input_sub, states1)
delta_pos1 = self.output_layer_up(delta_pos1)
#
cell_input = self.input_layer(concat_input)
delta_pos2, new_states2 = self.cell2(cell_input, states2)
#delta_pos2 = self.output_layer(delta_pos2)
#
#delta_pos = delta_pos1 + delta_pos2
delta_pos = tf.concat([delta_pos1, delta_pos2], axis=-1)
delta_pos = self.output_layer(delta_pos)
new_pos = pos + delta_pos[..., 0]
new_states = [new_states1, new_states2]
return i + 1 , new_pos, new_states, m, v
loss = 0.
while tf.less(i, tf.constant(self.niter)):
i, curr_pos, curr_state, m, v = body(i, curr_pos, curr_state, m, v)
loss = loss + tf.reduce_mean(tf.square(x_true - curr_pos))
return curr_pos, loss
def build_rim_parallel_single(params):
nc = params['nc']
input_layer = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'],
trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, 2), activation=params['input_activation'])
input_layer_sub = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'],
trainable=True, padding='SAME', strides= [params['strides']]*3,
input_shape=(None, nc, nc, nc, 2), activation=params['input_activation'])
cell1 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
output_layer_up = Conv3DTranspose(params['cell_size'], kernel_size=params['middle_kernel_size'],
trainable=True, padding='SAME', strides=[params['strides']]*3,
activation=params['output_activation'])
cell2 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
output_layer = Conv3D(1, kernel_size=params['output_kernel_size'], trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, params['cell_size']*2), activation=params['output_activation'])
rim = RIM3D_parallel_single(cell1, cell2, input_layer, input_layer_sub, output_layer_up, output_layer, strides=params['strides'],
niter=params['rim_iter'])
return rim
class myAdam(tf.keras.Model):
def __init__(self, niter, lr=0.1):
super(myAdam, self).__init__()
self.niter = niter
self.lr = lr
self.beta_1 = 0.9
self.beta_2 = 0.999
self.eps = 1e-7
def call(self, x_init, y, grad_fn, grad_args=[], ):
#outputs_ta = tf.TensorArray(size=self.niter+1, dtype=tf.float32)
i = tf.constant(0, dtype=tf.int32)
curr_pos = x_init
m = tf.zeros_like(x_init)
v = tf.zeros_like(x_init)
def body(i, pos, m, v):
gradient = grad_fn(pos, y, *grad_args)
#get_step = self.optimizer.apply_gradients(zip([gradient],[ pos]))
t = tf.cast(i+1, tf.float32)
m = self.beta_1*m + (1-self.beta_1)*gradient
v = self.beta_2*v + (1-self.beta_2)*gradient**2
mc = m/(1-self.beta_1**t)
vc = v/(1-self.beta_2**t)
            delta = -1.*self.lr*mc/(tf.sqrt(vc) + self.eps)
new_pos = pos + delta
return i +1 , new_pos, m, v
while tf.less(i, tf.constant(self.niter)):
#outputs_ta = outputs_ta.write(i, curr_pos)
i, curr_pos, m, v = body(i, curr_pos, m, v)
#outputs_ta = outputs_ta.write(i, curr_pos)
#return outputs_ta.stack()
return curr_pos
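
# Illustrative sketch of myAdam as a plain fixed-iteration optimiser on a
# toy quadratic; the grad_fn matches the (pos, y, *grad_args) signature
# that call() expects. The helper name and values are assumptions.
def _example_myadam():
    opt = myAdam(niter=200, lr=0.1)
    y = tf.constant([3.0, -1.0])
    grad_fn = lambda pos, y: 2.0 * (pos - y)  # gradient of ||pos - y||^2
    x0 = tf.zeros_like(y)
    return opt(x0, y, grad_fn)  # iterates towards y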
##def build_rim_series(params):
##
## nc = params['nc']
## input_layer = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'],
## trainable=True, padding='SAME', strides= [params['strides']]*3,
## input_shape=(None, nc, nc, nc, 2), activation=params['input_activation'])
##
## cell1 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
## #cell1.build(input_shape=[None, nc, nc, nc, params['input_size']])
##
## middle_layer = Conv3DTranspose(params['middle_size'], kernel_size=params['middle_kernel_size'],
## trainable=True, padding='SAME', strides=[params['strides']]*3,
## activation=params['input_activation'])
##
## cell2 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
##
## output_layer = Conv3D(1, kernel_size=params['output_kernel_size'], trainable=True, padding='SAME',
## input_shape=(None, nc, nc, nc, params['cell_size']), activation=params['output_activation'])
##
## rim = RIM3D_series(cell1, cell2, input_layer, middle_layer, output_layer, strides=params['strides'],
## niter=params['rim_iter'])
##
## return rim
##
def get_bspline_kernel(x, num_channels, transpose=False, dtype=tf.float32, order=4):
"""Creates a 5x5x5 b-spline kernel.
Args:
num_channels: The number of channels of the image to filter.
dtype: The type of an element in the kernel.
Returns:
A tensor of shape `[5, 5, 5, num_channels, num_channels]`.
"""
in_dim = x.shape[-1]
if order == 8:
kernel = np.array(( 1., 8., 28., 56., 70., 56., 28., 8., 1.), dtype=dtype.as_numpy_dtype())
elif order == 6:
kernel = np.array(( 1., 6., 15., 20., 15., 6., 1.), dtype=dtype.as_numpy_dtype())
elif order==2:
kernel = np.array(( 1., 2., 1.), dtype=dtype.as_numpy_dtype())
else:
kernel = np.array(( 1., 4., 6., 4., 1.), dtype=dtype.as_numpy_dtype())
size = len(kernel)
kernel = np.einsum('ij,k->ijk', np.outer(kernel, kernel), kernel)
kernel /= np.sum(kernel)
kernel = kernel[:, :, :, np.newaxis, np.newaxis]
kernel = tf.constant(kernel, dtype=dtype) * tf.eye(num_channels, dtype=dtype)
return kernel
#fd_dim = mtf.Dimension("fd", size)
#fh_dim = mtf.Dimension("fh", size)
#fw_dim = mtf.Dimension("fw", size)
#if transpose:
# return mtf.import_tf_tensor(mesh, kernel, shape=[fd_dim, fh_dim, fw_dim, channels, in_dim])
#else:
# return mtf.import_tf_tensor(mesh, kernel, shape=[fd_dim, fh_dim, fw_dim, in_dim, channels])
def downsample(field, downsampling_factor=2, antialias=True):
"""
Performs a multiresolution decomposition of the input field.
The input field will be decomposed into a low resolution approximation,
and a details component.
"""
low = field
for i in range(downsampling_factor):
kernel = get_bspline_kernel(low, low.shape[-1], order=6)
low = tf.nn.conv3d(low, kernel, strides=(1,2,2,2,1), padding='SAME')
if antialias:
kernel = get_bspline_kernel(low, low.shape[-1], order=2)
low = tf.nn.conv3d(low, kernel, strides=(1,1,1,1,1), padding='SAME')
return low
def upsample(low, output_shape, downsampling_factor=2):
"""
Performs a multiresolution reconstruction of the input field.
The input field will be decomposed into a low resolution approximation,
and a details component.
"""
for i in range(downsampling_factor):
kernel = get_bspline_kernel(low, low.shape[-1], transpose=True, order=6)
#kernel = mesh_kernels.get_bspline_kernel(low, mtf.Dimension('out_%d'%i,low.shape[-1].size), transpose=True, order=6)
high = tf.nn.conv3d_transpose(low, kernel * 2.0**3, strides=(1,2,2,2,1), padding='SAME', output_shape=output_shape)
return high
def split_scales(field, downsampling_factor=2, antialias=True):
"""
Performs a multiresolution decomposition of the input field.
The input field will be decomposed into a low resolution approximation,
and a details component.
"""
low = downsample(field, downsampling_factor, antialias)
high = upsample(low, field.shape, downsampling_factor)
high = field - high #mtf.reshape(high, field.shape)
return low, high
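
# Illustrative round trip through the multiresolution split; the shapes
# are assumptions for a tiny field, and the helper name is hypothetical.
def _example_split_scales():
    field = tf.random.normal([1, 16, 16, 16, 1])
    low, high = split_scales(field, downsampling_factor=1)
    # `low` is the smoothed 8^3 approximation; adding `high` back to its
    # upsampled version reconstructs the input up to numerics.
    recon = upsample(low, field.shape, 1) + high
    return low, high, recon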
class RIM3D_split(tf.keras.Model):
def __init__(self, cell1, cell2, input_layer, input_layer_sub, output_layer_sub, output_layer, strides, niter):
super(RIM3D_split, self).__init__()
self.cell1 = cell1
self.cell2 = cell2
self.output_layer = output_layer
self.output_layer_sub = output_layer_sub
self.input_layer = input_layer
self.input_layer_sub = input_layer_sub
self.strides = strides
self.niter = niter
self.beta_1, self.beta_2 = 0.9, 0.999
self.lr, self.eps = 0.1, 1e-7
def call(self, x_init, y, grad_fn, grad_args=[], initstates = None, return_steps=False):
outputs_ta = tf.TensorArray(size=self.niter+1, dtype=tf.float32)
if initstates is None:
nc2 = int(x_init.shape[1]/self.strides)
stateshape = (x_init.shape[0], nc2, nc2, nc2, self.cell1.filters)
initstates1 = [tf.zeros(stateshape), tf.zeros(stateshape)]
stateshape = x_init.shape + tuple([self.cell2.filters])
initstates2 = [tf.zeros(stateshape), tf.zeros(stateshape)]
initstates = [initstates1, initstates2]
i = tf.constant(0, dtype=tf.int32)
curr_state = initstates
curr_pos = x_init
m = tf.zeros_like(x_init)
v = tf.zeros_like(x_init)
def body(i, pos, states, m, v):
gradient = grad_fn(pos, y, *grad_args)
t = tf.cast(i+1, tf.float32)
m = self.beta_1*m + (1-self.beta_1)*gradient
v = self.beta_2*v + (1-self.beta_2)*gradient**2
mc = m/(1-self.beta_1**t)
vc = v/(1-self.beta_2**t)
delta = -1.*self.lr*mc/(tf.sqrt(vc) + self.eps)
#
states1, states2 = states
#low, high = split_scales(pos, 1)
#lowd, highd = split_scales(delta, 1)
#concat_input = tf.stack([high, highd], axis=-1)
#concat_input_sub = tf.stack([low, lowd], axis=-1)
concat_input = tf.stack([pos, delta], axis=-1)
low, high = split_scales(concat_input, 1)
concat_input = high
concat_input_sub = low
#
cell_input_sub = self.input_layer_sub(concat_input_sub)
delta_pos1, new_states1 = self.cell1(cell_input_sub, states1)
delta_pos1 = self.output_layer_sub(delta_pos1)
delta_pos1 = upsample(delta_pos1, pos.shape + [1], 1)
#
cell_input = self.input_layer(concat_input)
delta_pos2, new_states2 = self.cell2(cell_input, states2)
delta_pos2 = self.output_layer(delta_pos2)
#
delta_pos = delta_pos1 + delta_pos2
#delta_pos = tf.concat([delta_pos1, delta_pos2], axis=-1)
#delta_pos = self.output_layer(delta_pos)
new_pos = pos + delta_pos[..., 0]
new_states = [new_states1, new_states2]
return i +1 , new_pos, new_states, m, v
while tf.less(i, tf.constant(self.niter)):
outputs_ta = outputs_ta.write(i, curr_pos)
i, curr_pos, curr_state, m, v = body(i, curr_pos, curr_state, m, v)
outputs_ta = outputs_ta.write(i, curr_pos)
return outputs_ta.stack()
def build_rim_split(params):
nc = params['nc']
input_layer = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'], trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, 2), activation=params['input_activation'])
input_layer_sub = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'],
trainable=True, padding='SAME', activation=params['input_activation'])
cell1 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
output_layer_sub = Conv3D(1, kernel_size=params['output_kernel_size'],
trainable=True, padding='SAME', activation=params['output_activation'])
cell2 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
output_layer = Conv3D(1, kernel_size=params['output_kernel_size'], trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, params['cell_size']*2), activation=params['output_activation'])
rim = RIM3D_split(cell1, cell2, input_layer, input_layer_sub, output_layer_sub, output_layer, strides=params['strides'],
niter=params['rim_iter'])
return rim
class RIM3D_split_single(tf.keras.Model):
def __init__(self, cell1, cell2, input_layer, input_layer_sub, output_layer_sub, output_layer, strides, niter):
super(RIM3D_split_single, self).__init__()
self.cell1 = cell1
self.cell2 = cell2
self.output_layer = output_layer
self.output_layer_sub = output_layer_sub
self.input_layer = input_layer
self.input_layer_sub = input_layer_sub
self.strides = strides
self.niter = niter
self.beta_1, self.beta_2 = 0.9, 0.999
self.lr, self.eps = 0.1, 1e-7
def call(self, x_init, y, grad_fn, x_true, grad_args=[], initstates = None, return_steps=False):
if initstates is None:
#stateshape = tuple(i//self.strides for i in x_init.shape) + tuple([self.cell1.filters])
#stateshape = x_init.shape + tuple([self.cell.filters])
#initstates = [tf.zeros(stateshape), tf.zeros(stateshape)]
nc2 = int(x_init.shape[1]/self.strides)
stateshape = (x_init.shape[0], nc2, nc2, nc2, self.cell1.filters)
initstates1 = [tf.zeros(stateshape), tf.zeros(stateshape)]
stateshape = x_init.shape + tuple([self.cell2.filters])
initstates2 = [tf.zeros(stateshape), tf.zeros(stateshape)]
initstates = [initstates1, initstates2]
i = tf.constant(0, dtype=tf.int32)
curr_state = initstates
curr_pos = x_init
m = tf.zeros_like(x_init)
v = tf.zeros_like(x_init)
def body(i, pos, states, m, v):
gradient = grad_fn(pos, y, *grad_args)
t = tf.cast(i+1, tf.float32)
m = self.beta_1*m + (1-self.beta_1)*gradient
v = self.beta_2*v + (1-self.beta_2)*gradient**2
mc = m/(1-self.beta_1**t)
vc = v/(1-self.beta_2**t)
delta = -1.*self.lr*mc/(tf.sqrt(vc) + self.eps)
####
states1, states2 = states
concat_input = tf.stack([pos, delta], axis=-1)
low, high = split_scales(concat_input, 1)
concat_input = high
concat_input_sub = low
#
cell_input_sub = self.input_layer_sub(concat_input_sub)
delta_pos1, new_states1 = self.cell1(cell_input_sub, states1)
delta_pos1 = self.output_layer_sub(delta_pos1)
delta_pos1 = upsample(delta_pos1, pos.shape + [1], 1)
#
cell_input = self.input_layer(concat_input)
delta_pos2, new_states2 = self.cell2(cell_input, states2)
delta_pos2 = self.output_layer(delta_pos2)
#
delta_pos = delta_pos1 + delta_pos2
#delta_pos = tf.concat([delta_pos1, delta_pos2], axis=-1)
#delta_pos = self.output_layer(delta_pos)
new_pos = pos + delta_pos[..., 0]
new_states = [new_states1, new_states2]
return i +1 , new_pos, new_states, m, v
loss = 0.
while tf.less(i, tf.constant(self.niter)):
i, curr_pos, curr_state, m, v = body(i, curr_pos, curr_state, m, v)
loss = loss + tf.reduce_mean(tf.square(x_true - curr_pos))
return curr_pos, loss
def build_rim_split_single(params):
nc = params['nc']
input_layer = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'],
trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, 2), activation=params['input_activation'])
input_layer_sub = Conv3D(params['input_size'], kernel_size=params['input_kernel_size'],
trainable=True, padding='SAME', activation=params['input_activation'])
cell1 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
output_layer_sub = Conv3D(1, kernel_size=params['output_kernel_size'],
trainable=True, padding='SAME', activation=params['output_activation'])
cell2 = ConvLSTM3DCell(params['cell_size'], kernel_size=params['cell_kernel_size'], padding='SAME')
output_layer = Conv3D(1, kernel_size=params['output_kernel_size'], trainable=True, padding='SAME',
input_shape=(None, nc, nc, nc, params['cell_size']*2), activation=params['output_activation'])
rim = RIM3D_split_single(cell1, cell2, input_layer, input_layer_sub, output_layer_sub, output_layer, strides=params['strides'],
niter=params['rim_iter'])
return rim
| 41.78691 | 133 | 0.607161 | 3,652 | 27,454 | 4.329409 | 0.058872 | 0.040478 | 0.032383 | 0.030359 | 0.881981 | 0.877617 | 0.863197 | 0.861236 | 0.849472 | 0.839669 | 0 | 0.028043 | 0.264843 | 27,454 | 656 | 134 | 41.85061 | 0.755339 | 0.141655 | 0 | 0.780549 | 0 | 0 | 0.053302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067332 | false | 0 | 0.014963 | 0 | 0.149626 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7fd59db42373c4390d9d508b351343d7742bc795 | 15,845 | py | Python | sdk/python/pulumi_keycloak/openid/client_service_account_role.py | davide-talesco/pulumi-keycloak | 08d66be6f2bf578d4292e29eb6181794375bc4e5 | [
"ECL-2.0",
"Apache-2.0"
] | 13 | 2020-04-28T15:20:56.000Z | 2022-03-24T18:00:17.000Z | sdk/python/pulumi_keycloak/openid/client_service_account_role.py | davide-talesco/pulumi-keycloak | 08d66be6f2bf578d4292e29eb6181794375bc4e5 | [
"ECL-2.0",
"Apache-2.0"
] | 49 | 2020-02-06T17:53:35.000Z | 2022-03-25T19:36:08.000Z | sdk/python/pulumi_keycloak/openid/client_service_account_role.py | davide-talesco/pulumi-keycloak | 08d66be6f2bf578d4292e29eb6181794375bc4e5 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2020-06-09T01:08:56.000Z | 2021-12-07T15:30:37.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['ClientServiceAccountRoleArgs', 'ClientServiceAccountRole']
@pulumi.input_type
class ClientServiceAccountRoleArgs:
def __init__(__self__, *,
client_id: pulumi.Input[str],
realm_id: pulumi.Input[str],
role: pulumi.Input[str],
service_account_user_id: pulumi.Input[str]):
"""
The set of arguments for constructing a ClientServiceAccountRole resource.
:param pulumi.Input[str] client_id: The id of the client that provides the role.
:param pulumi.Input[str] realm_id: The realm the clients and roles belong to.
:param pulumi.Input[str] role: The name of the role that is assigned.
:param pulumi.Input[str] service_account_user_id: The id of the service account that is assigned the role (the service account of the client that "consumes" the role).
"""
pulumi.set(__self__, "client_id", client_id)
pulumi.set(__self__, "realm_id", realm_id)
pulumi.set(__self__, "role", role)
pulumi.set(__self__, "service_account_user_id", service_account_user_id)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> pulumi.Input[str]:
"""
The id of the client that provides the role.
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: pulumi.Input[str]):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter(name="realmId")
def realm_id(self) -> pulumi.Input[str]:
"""
The realm the clients and roles belong to.
"""
return pulumi.get(self, "realm_id")
@realm_id.setter
def realm_id(self, value: pulumi.Input[str]):
pulumi.set(self, "realm_id", value)
@property
@pulumi.getter
def role(self) -> pulumi.Input[str]:
"""
The name of the role that is assigned.
"""
return pulumi.get(self, "role")
@role.setter
def role(self, value: pulumi.Input[str]):
pulumi.set(self, "role", value)
@property
@pulumi.getter(name="serviceAccountUserId")
def service_account_user_id(self) -> pulumi.Input[str]:
"""
The id of the service account that is assigned the role (the service account of the client that "consumes" the role).
"""
return pulumi.get(self, "service_account_user_id")
@service_account_user_id.setter
def service_account_user_id(self, value: pulumi.Input[str]):
pulumi.set(self, "service_account_user_id", value)
@pulumi.input_type
class _ClientServiceAccountRoleState:
def __init__(__self__, *,
client_id: Optional[pulumi.Input[str]] = None,
realm_id: Optional[pulumi.Input[str]] = None,
role: Optional[pulumi.Input[str]] = None,
service_account_user_id: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering ClientServiceAccountRole resources.
:param pulumi.Input[str] client_id: The id of the client that provides the role.
:param pulumi.Input[str] realm_id: The realm the clients and roles belong to.
:param pulumi.Input[str] role: The name of the role that is assigned.
:param pulumi.Input[str] service_account_user_id: The id of the service account that is assigned the role (the service account of the client that "consumes" the role).
"""
if client_id is not None:
pulumi.set(__self__, "client_id", client_id)
if realm_id is not None:
pulumi.set(__self__, "realm_id", realm_id)
if role is not None:
pulumi.set(__self__, "role", role)
if service_account_user_id is not None:
pulumi.set(__self__, "service_account_user_id", service_account_user_id)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Optional[pulumi.Input[str]]:
"""
The id of the client that provides the role.
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter(name="realmId")
def realm_id(self) -> Optional[pulumi.Input[str]]:
"""
The realm the clients and roles belong to.
"""
return pulumi.get(self, "realm_id")
@realm_id.setter
def realm_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "realm_id", value)
@property
@pulumi.getter
def role(self) -> Optional[pulumi.Input[str]]:
"""
The name of the role that is assigned.
"""
return pulumi.get(self, "role")
@role.setter
def role(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "role", value)
@property
@pulumi.getter(name="serviceAccountUserId")
def service_account_user_id(self) -> Optional[pulumi.Input[str]]:
"""
The id of the service account that is assigned the role (the service account of the client that "consumes" the role).
"""
return pulumi.get(self, "service_account_user_id")
@service_account_user_id.setter
def service_account_user_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "service_account_user_id", value)
class ClientServiceAccountRole(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
client_id: Optional[pulumi.Input[str]] = None,
realm_id: Optional[pulumi.Input[str]] = None,
role: Optional[pulumi.Input[str]] = None,
service_account_user_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Allows for assigning client roles to the service account of an openid client.
You need to set `service_accounts_enabled` to `true` for the openid client that should be assigned the role.
If you'd like to attach realm roles to a service account, please use the `openid.ClientServiceAccountRealmRole`
resource.
## Example Usage
```python
import pulumi
import pulumi_keycloak as keycloak
realm = keycloak.Realm("realm",
realm="my-realm",
enabled=True)
# client1 provides a role to other clients
client1 = keycloak.openid.Client("client1", realm_id=realm.id)
client1_role = keycloak.Role("client1Role",
realm_id=realm.id,
client_id=client1.id,
description="A role that client1 provides")
# client2 is assigned the role of client1
client2 = keycloak.openid.Client("client2",
realm_id=realm.id,
service_accounts_enabled=True)
client2_service_account_role = keycloak.openid.ClientServiceAccountRole("client2ServiceAccountRole",
realm_id=realm.id,
service_account_user_id=client2.service_account_user_id,
client_id=client1.id,
role=client1_role.name)
```
## Import
        This resource can be imported using the format `{{realmId}}/{{serviceAccountUserId}}/{{clientId}}/{{roleId}}`. Example (bash):
```sh
$ pulumi import keycloak:openid/clientServiceAccountRole:ClientServiceAccountRole client2_service_account_role my-realm/489ba513-1ceb-49ba-ae0b-1ab1f5099ebf/baf01820-0f8b-4494-9be2-fb3bc8a397a4/c7230ab7-8e4e-4135-995d-e81b50696ad8
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] client_id: The id of the client that provides the role.
:param pulumi.Input[str] realm_id: The realm the clients and roles belong to.
:param pulumi.Input[str] role: The name of the role that is assigned.
:param pulumi.Input[str] service_account_user_id: The id of the service account that is assigned the role (the service account of the client that "consumes" the role).
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ClientServiceAccountRoleArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Allows for assigning client roles to the service account of an openid client.
You need to set `service_accounts_enabled` to `true` for the openid client that should be assigned the role.
If you'd like to attach realm roles to a service account, please use the `openid.ClientServiceAccountRealmRole`
resource.
## Example Usage
```python
import pulumi
import pulumi_keycloak as keycloak
realm = keycloak.Realm("realm",
realm="my-realm",
enabled=True)
# client1 provides a role to other clients
client1 = keycloak.openid.Client("client1", realm_id=realm.id)
client1_role = keycloak.Role("client1Role",
realm_id=realm.id,
client_id=client1.id,
description="A role that client1 provides")
# client2 is assigned the role of client1
client2 = keycloak.openid.Client("client2",
realm_id=realm.id,
service_accounts_enabled=True)
client2_service_account_role = keycloak.openid.ClientServiceAccountRole("client2ServiceAccountRole",
realm_id=realm.id,
service_account_user_id=client2.service_account_user_id,
client_id=client1.id,
role=client1_role.name)
```
## Import
        This resource can be imported using the format `{{realmId}}/{{serviceAccountUserId}}/{{clientId}}/{{roleId}}`. Example (bash):
```sh
$ pulumi import keycloak:openid/clientServiceAccountRole:ClientServiceAccountRole client2_service_account_role my-realm/489ba513-1ceb-49ba-ae0b-1ab1f5099ebf/baf01820-0f8b-4494-9be2-fb3bc8a397a4/c7230ab7-8e4e-4135-995d-e81b50696ad8
```
:param str resource_name: The name of the resource.
:param ClientServiceAccountRoleArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ClientServiceAccountRoleArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
client_id: Optional[pulumi.Input[str]] = None,
realm_id: Optional[pulumi.Input[str]] = None,
role: Optional[pulumi.Input[str]] = None,
service_account_user_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ClientServiceAccountRoleArgs.__new__(ClientServiceAccountRoleArgs)
if client_id is None and not opts.urn:
raise TypeError("Missing required property 'client_id'")
__props__.__dict__["client_id"] = client_id
if realm_id is None and not opts.urn:
raise TypeError("Missing required property 'realm_id'")
__props__.__dict__["realm_id"] = realm_id
if role is None and not opts.urn:
raise TypeError("Missing required property 'role'")
__props__.__dict__["role"] = role
if service_account_user_id is None and not opts.urn:
raise TypeError("Missing required property 'service_account_user_id'")
__props__.__dict__["service_account_user_id"] = service_account_user_id
super(ClientServiceAccountRole, __self__).__init__(
'keycloak:openid/clientServiceAccountRole:ClientServiceAccountRole',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
client_id: Optional[pulumi.Input[str]] = None,
realm_id: Optional[pulumi.Input[str]] = None,
role: Optional[pulumi.Input[str]] = None,
service_account_user_id: Optional[pulumi.Input[str]] = None) -> 'ClientServiceAccountRole':
"""
Get an existing ClientServiceAccountRole resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] client_id: The id of the client that provides the role.
:param pulumi.Input[str] realm_id: The realm the clients and roles belong to.
:param pulumi.Input[str] role: The name of the role that is assigned.
:param pulumi.Input[str] service_account_user_id: The id of the service account that is assigned the role (the service account of the client that "consumes" the role).
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ClientServiceAccountRoleState.__new__(_ClientServiceAccountRoleState)
__props__.__dict__["client_id"] = client_id
__props__.__dict__["realm_id"] = realm_id
__props__.__dict__["role"] = role
__props__.__dict__["service_account_user_id"] = service_account_user_id
return ClientServiceAccountRole(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> pulumi.Output[str]:
"""
The id of the client that provides the role.
"""
return pulumi.get(self, "client_id")
@property
@pulumi.getter(name="realmId")
def realm_id(self) -> pulumi.Output[str]:
"""
The realm the clients and roles belong to.
"""
return pulumi.get(self, "realm_id")
@property
@pulumi.getter
def role(self) -> pulumi.Output[str]:
"""
The name of the role that is assigned.
"""
return pulumi.get(self, "role")
@property
@pulumi.getter(name="serviceAccountUserId")
def service_account_user_id(self) -> pulumi.Output[str]:
"""
The id of the service account that is assigned the role (the service account of the client that "consumes" the role).
"""
return pulumi.get(self, "service_account_user_id")
| 42.940379 | 239 | 0.651499 | 1,912 | 15,845 | 5.16318 | 0.096757 | 0.082253 | 0.07658 | 0.072934 | 0.80622 | 0.784238 | 0.777553 | 0.742808 | 0.726499 | 0.724473 | 0 | 0.012623 | 0.255033 | 15,845 | 368 | 240 | 43.057065 | 0.823704 | 0.397223 | 0 | 0.564972 | 1 | 0 | 0.112884 | 0.04409 | 0 | 0 | 0 | 0 | 0 | 1 | 0.152542 | false | 0.00565 | 0.028249 | 0 | 0.271186 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7fe5735251b2b6ab90e195406675e42aa77012c8 | 172 | py | Python | irradpy/extractor/__init__.py | soodal/Python-IrradPy | 2ae1b0dfd6c6b965e19d58a67abf62e6090c7ee8 | [
"MIT"
] | 15 | 2020-02-27T18:59:22.000Z | 2021-07-08T10:36:49.000Z | irradpy/extractor/__init__.py | BXYMartin/Python-IrradPy | 92a86dbb04bceda6353f3bfc546c0d654463d1a9 | [
"MIT"
] | 3 | 2020-03-02T17:36:12.000Z | 2021-07-28T09:39:27.000Z | irradpy/extractor/__init__.py | soodal/Python-IrradPy | 2ae1b0dfd6c6b965e19d58a67abf62e6090c7ee8 | [
"MIT"
] | 9 | 2020-02-27T18:59:24.000Z | 2021-07-16T09:33:44.000Z | from .extract import extract_dataset
from .extract import extract_dataset_list
from .extract import extract_for_MERRA2
from .extract import extractor
__all__ = ['extract']
| 28.666667 | 41 | 0.837209 | 23 | 172 | 5.869565 | 0.391304 | 0.325926 | 0.503704 | 0.533333 | 0.459259 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006536 | 0.110465 | 172 | 5 | 42 | 34.4 | 0.875817 | 0 | 0 | 0 | 0 | 0 | 0.040698 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3d0c2b29fefba9bfebc374378bf180423e74b11f | 6,545 | py | Python | loldib/getratings/models/NA/na_gragas/na_gragas_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_gragas/na_gragas_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_gragas/na_gragas_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Gragas_Jng_Aatrox(Ratings):
pass
class NA_Gragas_Jng_Ahri(Ratings):
pass
class NA_Gragas_Jng_Akali(Ratings):
pass
class NA_Gragas_Jng_Alistar(Ratings):
pass
class NA_Gragas_Jng_Amumu(Ratings):
pass
class NA_Gragas_Jng_Anivia(Ratings):
pass
class NA_Gragas_Jng_Annie(Ratings):
pass
class NA_Gragas_Jng_Ashe(Ratings):
pass
class NA_Gragas_Jng_AurelionSol(Ratings):
pass
class NA_Gragas_Jng_Azir(Ratings):
pass
class NA_Gragas_Jng_Bard(Ratings):
pass
class NA_Gragas_Jng_Blitzcrank(Ratings):
pass
class NA_Gragas_Jng_Brand(Ratings):
pass
class NA_Gragas_Jng_Braum(Ratings):
pass
class NA_Gragas_Jng_Caitlyn(Ratings):
pass
class NA_Gragas_Jng_Camille(Ratings):
pass
class NA_Gragas_Jng_Cassiopeia(Ratings):
pass
class NA_Gragas_Jng_Chogath(Ratings):
pass
class NA_Gragas_Jng_Corki(Ratings):
pass
class NA_Gragas_Jng_Darius(Ratings):
pass
class NA_Gragas_Jng_Diana(Ratings):
pass
class NA_Gragas_Jng_Draven(Ratings):
pass
class NA_Gragas_Jng_DrMundo(Ratings):
pass
class NA_Gragas_Jng_Ekko(Ratings):
pass
class NA_Gragas_Jng_Elise(Ratings):
pass
class NA_Gragas_Jng_Evelynn(Ratings):
pass
class NA_Gragas_Jng_Ezreal(Ratings):
pass
class NA_Gragas_Jng_Fiddlesticks(Ratings):
pass
class NA_Gragas_Jng_Fiora(Ratings):
pass
class NA_Gragas_Jng_Fizz(Ratings):
pass
class NA_Gragas_Jng_Galio(Ratings):
pass
class NA_Gragas_Jng_Gangplank(Ratings):
pass
class NA_Gragas_Jng_Garen(Ratings):
pass
class NA_Gragas_Jng_Gnar(Ratings):
pass
class NA_Gragas_Jng_Gragas(Ratings):
pass
class NA_Gragas_Jng_Graves(Ratings):
pass
class NA_Gragas_Jng_Hecarim(Ratings):
pass
class NA_Gragas_Jng_Heimerdinger(Ratings):
pass
class NA_Gragas_Jng_Illaoi(Ratings):
pass
class NA_Gragas_Jng_Irelia(Ratings):
pass
class NA_Gragas_Jng_Ivern(Ratings):
pass
class NA_Gragas_Jng_Janna(Ratings):
pass
class NA_Gragas_Jng_JarvanIV(Ratings):
pass
class NA_Gragas_Jng_Jax(Ratings):
pass
class NA_Gragas_Jng_Jayce(Ratings):
pass
class NA_Gragas_Jng_Jhin(Ratings):
pass
class NA_Gragas_Jng_Jinx(Ratings):
pass
class NA_Gragas_Jng_Kalista(Ratings):
pass
class NA_Gragas_Jng_Karma(Ratings):
pass
class NA_Gragas_Jng_Karthus(Ratings):
pass
class NA_Gragas_Jng_Kassadin(Ratings):
pass
class NA_Gragas_Jng_Katarina(Ratings):
pass
class NA_Gragas_Jng_Kayle(Ratings):
pass
class NA_Gragas_Jng_Kayn(Ratings):
pass
class NA_Gragas_Jng_Kennen(Ratings):
pass
class NA_Gragas_Jng_Khazix(Ratings):
pass
class NA_Gragas_Jng_Kindred(Ratings):
pass
class NA_Gragas_Jng_Kled(Ratings):
pass
class NA_Gragas_Jng_KogMaw(Ratings):
pass
class NA_Gragas_Jng_Leblanc(Ratings):
pass
class NA_Gragas_Jng_LeeSin(Ratings):
pass
class NA_Gragas_Jng_Leona(Ratings):
pass
class NA_Gragas_Jng_Lissandra(Ratings):
pass
class NA_Gragas_Jng_Lucian(Ratings):
pass
class NA_Gragas_Jng_Lulu(Ratings):
pass
class NA_Gragas_Jng_Lux(Ratings):
pass
class NA_Gragas_Jng_Malphite(Ratings):
pass
class NA_Gragas_Jng_Malzahar(Ratings):
pass
class NA_Gragas_Jng_Maokai(Ratings):
pass
class NA_Gragas_Jng_MasterYi(Ratings):
pass
class NA_Gragas_Jng_MissFortune(Ratings):
pass
class NA_Gragas_Jng_MonkeyKing(Ratings):
pass
class NA_Gragas_Jng_Mordekaiser(Ratings):
pass
class NA_Gragas_Jng_Morgana(Ratings):
pass
class NA_Gragas_Jng_Nami(Ratings):
pass
class NA_Gragas_Jng_Nasus(Ratings):
pass
class NA_Gragas_Jng_Nautilus(Ratings):
pass
class NA_Gragas_Jng_Nidalee(Ratings):
pass
class NA_Gragas_Jng_Nocturne(Ratings):
pass
class NA_Gragas_Jng_Nunu(Ratings):
pass
class NA_Gragas_Jng_Olaf(Ratings):
pass
class NA_Gragas_Jng_Orianna(Ratings):
pass
class NA_Gragas_Jng_Ornn(Ratings):
pass
class NA_Gragas_Jng_Pantheon(Ratings):
pass
class NA_Gragas_Jng_Poppy(Ratings):
pass
class NA_Gragas_Jng_Quinn(Ratings):
pass
class NA_Gragas_Jng_Rakan(Ratings):
pass
class NA_Gragas_Jng_Rammus(Ratings):
pass
class NA_Gragas_Jng_RekSai(Ratings):
pass
class NA_Gragas_Jng_Renekton(Ratings):
pass
class NA_Gragas_Jng_Rengar(Ratings):
pass
class NA_Gragas_Jng_Riven(Ratings):
pass
class NA_Gragas_Jng_Rumble(Ratings):
pass
class NA_Gragas_Jng_Ryze(Ratings):
pass
class NA_Gragas_Jng_Sejuani(Ratings):
pass
class NA_Gragas_Jng_Shaco(Ratings):
pass
class NA_Gragas_Jng_Shen(Ratings):
pass
class NA_Gragas_Jng_Shyvana(Ratings):
pass
class NA_Gragas_Jng_Singed(Ratings):
pass
class NA_Gragas_Jng_Sion(Ratings):
pass
class NA_Gragas_Jng_Sivir(Ratings):
pass
class NA_Gragas_Jng_Skarner(Ratings):
pass
class NA_Gragas_Jng_Sona(Ratings):
pass
class NA_Gragas_Jng_Soraka(Ratings):
pass
class NA_Gragas_Jng_Swain(Ratings):
pass
class NA_Gragas_Jng_Syndra(Ratings):
pass
class NA_Gragas_Jng_TahmKench(Ratings):
pass
class NA_Gragas_Jng_Taliyah(Ratings):
pass
class NA_Gragas_Jng_Talon(Ratings):
pass
class NA_Gragas_Jng_Taric(Ratings):
pass
class NA_Gragas_Jng_Teemo(Ratings):
pass
class NA_Gragas_Jng_Thresh(Ratings):
pass
class NA_Gragas_Jng_Tristana(Ratings):
pass
class NA_Gragas_Jng_Trundle(Ratings):
pass
class NA_Gragas_Jng_Tryndamere(Ratings):
pass
class NA_Gragas_Jng_TwistedFate(Ratings):
pass
class NA_Gragas_Jng_Twitch(Ratings):
pass
class NA_Gragas_Jng_Udyr(Ratings):
pass
class NA_Gragas_Jng_Urgot(Ratings):
pass
class NA_Gragas_Jng_Varus(Ratings):
pass
class NA_Gragas_Jng_Vayne(Ratings):
pass
class NA_Gragas_Jng_Veigar(Ratings):
pass
class NA_Gragas_Jng_Velkoz(Ratings):
pass
class NA_Gragas_Jng_Vi(Ratings):
pass
class NA_Gragas_Jng_Viktor(Ratings):
pass
class NA_Gragas_Jng_Vladimir(Ratings):
pass
class NA_Gragas_Jng_Volibear(Ratings):
pass
class NA_Gragas_Jng_Warwick(Ratings):
pass
class NA_Gragas_Jng_Xayah(Ratings):
pass
class NA_Gragas_Jng_Xerath(Ratings):
pass
class NA_Gragas_Jng_XinZhao(Ratings):
pass
class NA_Gragas_Jng_Yasuo(Ratings):
pass
class NA_Gragas_Jng_Yorick(Ratings):
pass
class NA_Gragas_Jng_Zac(Ratings):
pass
class NA_Gragas_Jng_Zed(Ratings):
pass
class NA_Gragas_Jng_Ziggs(Ratings):
pass
class NA_Gragas_Jng_Zilean(Ratings):
pass
class NA_Gragas_Jng_Zyra(Ratings):
pass
| 15.695444 | 46 | 0.766692 | 972 | 6,545 | 4.736626 | 0.151235 | 0.209818 | 0.389661 | 0.479583 | 0.803432 | 0.803432 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169748 | 6,545 | 416 | 47 | 15.733173 | 0.847258 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
3d22a73e6154c4ebf6cf9391a0176d93c52f98e4 | 159 | py | Python | aiotoolz/tests/test_utils.py | eabrouwer3/aiotoolz | 10790c9c5a8413502d8f35ce157966290492dbab | [
"BSD-3-Clause"
] | null | null | null | aiotoolz/tests/test_utils.py | eabrouwer3/aiotoolz | 10790c9c5a8413502d8f35ce157966290492dbab | [
"BSD-3-Clause"
] | null | null | null | aiotoolz/tests/test_utils.py | eabrouwer3/aiotoolz | 10790c9c5a8413502d8f35ce157966290492dbab | [
"BSD-3-Clause"
] | null | null | null | from aiotoolz.utils import raises
def test_raises():
assert raises(ZeroDivisionError, lambda: 1 / 0)
assert not raises(ZeroDivisionError, lambda: 1)
| 22.714286 | 51 | 0.748428 | 20 | 159 | 5.9 | 0.65 | 0.389831 | 0.491525 | 0.508475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.169811 | 159 | 6 | 52 | 26.5 | 0.871212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3d385c642e5e16725e57cb92648dfc5c370927b1 | 239 | py | Python | com_blacktensor/__init__.py | Jelly6489/Stock-Proj | 3e7b1ad5cddc5b142f0069e024199fe969c7c7e8 | [
"MIT"
] | null | null | null | com_blacktensor/__init__.py | Jelly6489/Stock-Proj | 3e7b1ad5cddc5b142f0069e024199fe969c7c7e8 | [
"MIT"
] | null | null | null | com_blacktensor/__init__.py | Jelly6489/Stock-Proj | 3e7b1ad5cddc5b142f0069e024199fe969c7c7e8 | [
"MIT"
] | 2 | 2020-11-13T08:11:04.000Z | 2020-11-14T05:32:09.000Z | from datetime import datetime
print('=================================================================')
print(f'com_blackTensor_api init. time : {datetime.now()}')
print('=================================================================') | 59.75 | 74 | 0.334728 | 15 | 239 | 5.2 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041841 | 239 | 4 | 75 | 59.75 | 0.340611 | 0 | 0 | 0.5 | 0 | 0 | 0.745833 | 0.541667 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0.75 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
3d61d30eb6190267a94c9da625dab9a809a70547 | 106 | py | Python | util.py | jongwony/console-calendar | aa07c9887891f47d7a68abc877142ad7e98e27fb | [
"MIT"
] | null | null | null | util.py | jongwony/console-calendar | aa07c9887891f47d7a68abc877142ad7e98e27fb | [
"MIT"
] | null | null | null | util.py | jongwony/console-calendar | aa07c9887891f47d7a68abc877142ad7e98e27fb | [
"MIT"
] | null | null | null | from os import path
def script_path(*p):
return path.join(path.dirname(path.abspath(__file__)), *p)
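# Example (hypothetical usage, not part of the original file):
# settings = script_path("config", "settings.json")  # absolute path next to this module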
| 17.666667 | 62 | 0.716981 | 17 | 106 | 4.176471 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141509 | 106 | 5 | 63 | 21.2 | 0.78022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
e9fe6ff133b3c36b775a8afb6fd9791280665ee1 | 11,793 | py | Python | hfcs-fffit/analysis/final-figs/plotfig_gp_examples.py | helpscott/hfcs-fffit | 4f94145a9473fa4b7f16ca4a2d18966d34f901b2 | [
"MIT"
] | null | null | null | hfcs-fffit/analysis/final-figs/plotfig_gp_examples.py | helpscott/hfcs-fffit | 4f94145a9473fa4b7f16ca4a2d18966d34f901b2 | [
"MIT"
] | 1 | 2021-11-23T20:54:01.000Z | 2021-11-23T20:54:01.000Z | hfcs-fffit/analysis/final-figs/plotfig_gp_examples.py | helpscott/hfcs-fffit | 4f94145a9473fa4b7f16ca4a2d18966d34f901b2 | [
"MIT"
] | 3 | 2021-05-13T19:51:31.000Z | 2021-12-08T01:22:31.000Z | import sys
import gpflow
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from fffit.utils import (
shuffle_and_split,
values_real_to_scaled,
values_scaled_to_real,
variances_scaled_to_real,
)
from fffit.plot import (
plot_model_performance,
plot_slices_temperature,
plot_slices_params,
plot_model_vs_test,
)
from fffit.models import run_gpflow_scipy
sys.path.append("../")
from utils.r32 import R32Constants
from utils.id_new_samples import prepare_df_vle
R32 = R32Constants()
pdf = PdfPages('pdfs/fig_gp_examples.pdf')
############################# QUANTITIES TO EDIT #############################
##############################################################################
iternum = 2
gp_shuffle_seed = 8278573
##############################################################################
##############################################################################
csv_path = "../csv/"
in_csv_names = ["r32-vle-iter" + str(i) + "-results.csv" for i in range(1, iternum+1)]
out_csv_name = "r32-vle-iter" + str(iternum + 1) + "-params.csv"
# Read files
df_csvs = [pd.read_csv(csv_path + in_csv_name, index_col=0) for in_csv_name in in_csv_names]
df_csv = pd.concat(df_csvs)
df_all = prepare_df_vle(df_csv, R32)
### Fit GP Model to liquid density
param_names = list(R32.param_names) + ["temperature"]
property_name = "sim_liq_density"
x_train, y_train, x_test, y_test = shuffle_and_split(
df_all, param_names, property_name, shuffle_seed=gp_shuffle_seed, fraction_train=0.8
)
# Fit model
models = {}
models["RBF"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.RBF(lengthscales=np.ones(R32.n_params + 1)),
)
models["Matern32"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.Matern32(lengthscales=np.ones(R32.n_params + 1)),
)
models["Matern52"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.Matern52(lengthscales=np.ones(R32.n_params + 1)),
)
# Plot model performance on train and test points
pdf.savefig(plot_model_performance(models, x_train, y_train, R32.liq_density_bounds))
pdf.savefig(plot_model_performance(models, x_test, y_test, R32.liq_density_bounds))
# Plot temperature slices
figs = plot_slices_temperature(
models,
R32.n_params,
R32.temperature_bounds,
R32.liq_density_bounds,
property_name="Liquid Density [kg/m^3]",
)
for fig in figs:
pdf.savefig(fig)
del figs
# Plot parameter slices
for param_name in R32.param_names:
figs = plot_slices_params(
models,
param_name,
R32.param_names,
300,
R32.temperature_bounds,
R32.liq_density_bounds,
property_name="Liquid Density [kg/m^3]",
)
for fig in figs:
pdf.savefig(fig)
del figs
# Loop over test params
for test_params in x_test[:,:R32.n_params]:
train_points = []
test_points = []
# Locate rows where parameter set == test parameter set
matches = np.unique(np.where((df_all[list(R32.param_names)] == test_params).all(axis=1))[0])
# Loop over all matches -- these will be different temperatures
for match in matches:
# If the match (including T) is in the test set, then append to test points
if np.where((df_all.values[match,:R32.n_params+1] == x_test[:,:R32.n_params+1]).all(axis=1))[0].shape[0] == 1:
test_points.append([df_all["temperature"].iloc[match],df_all[property_name].iloc[match]])
# Else append to train points
else:
train_points.append([df_all["temperature"].iloc[match],df_all[property_name].iloc[match]])
pdf.savefig(
plot_model_vs_test(
models,
test_params,
np.asarray(train_points),
np.asarray(test_points),
R32.temperature_bounds,
R32.liq_density_bounds,
property_name="Liquid Density [kg/m^3]"
)
)
### Fit GP Model to vapor density
param_names = list(R32.param_names) + ["temperature"]
property_name = "sim_vap_density"
x_train, y_train, x_test, y_test = shuffle_and_split(
df_all, param_names, property_name, shuffle_seed=gp_shuffle_seed, fraction_train=0.8
)
# Fit model
models = {}
models["RBF"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.RBF(lengthscales=np.ones(R32.n_params + 1)),
)
models["Matern32"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.Matern32(lengthscales=np.ones(R32.n_params + 1)),
)
models["Matern52"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.Matern52(lengthscales=np.ones(R32.n_params + 1)),
)
# Plot model performance on train and test points
pdf.savefig(plot_model_performance(models, x_train, y_train, R32.vap_density_bounds))
pdf.savefig(plot_model_performance(models, x_test, y_test, R32.vap_density_bounds))
# Plot temperature slices
figs = plot_slices_temperature(
models,
R32.n_params,
R32.temperature_bounds,
R32.vap_density_bounds,
property_name="Vapor Density [kg/m^3]",
)
for fig in figs:
pdf.savefig(fig)
del figs
# Plot parameter slices
for param_name in R32.param_names:
figs = plot_slices_params(
models,
param_name,
R32.param_names,
300,
R32.temperature_bounds,
R32.vap_density_bounds,
property_name="Vapor Density [kg/m^3]",
)
for fig in figs:
pdf.savefig(fig)
del figs
# Loop over test params
for test_params in x_test[:,:R32.n_params]:
train_points = []
test_points = []
# Locate rows where parameter set == test parameter set
matches = np.unique(np.where((df_all[list(R32.param_names)] == test_params).all(axis=1))[0])
# Loop over all matches -- these will be different temperatures
for match in matches:
# If the match (including T) is in the test set, then append to test points
if np.where((df_all.values[match,:R32.n_params+1] == x_test[:,:R32.n_params+1]).all(axis=1))[0].shape[0] == 1:
test_points.append([df_all["temperature"].iloc[match],df_all[property_name].iloc[match]])
# Else append to train points
else:
train_points.append([df_all["temperature"].iloc[match],df_all[property_name].iloc[match]])
pdf.savefig(
plot_model_vs_test(
models,
test_params,
np.asarray(train_points),
np.asarray(test_points),
R32.temperature_bounds,
R32.vap_density_bounds,
property_name="Vapor Density [kg/m^3]"
)
)
### Fit GP Model to Pvap
param_names = list(R32.param_names) + ["temperature"]
property_name = "sim_Pvap"
x_train, y_train, x_test, y_test = shuffle_and_split(
df_all, param_names, property_name, shuffle_seed=gp_shuffle_seed, fraction_train=0.8
)
# Fit model
models = {}
models["RBF"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.RBF(lengthscales=np.ones(R32.n_params + 1)),
)
models["Matern32"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.Matern32(lengthscales=np.ones(R32.n_params + 1)),
)
models["Matern52"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.Matern52(lengthscales=np.ones(R32.n_params + 1)),
)
# Plot model performance on train and test points
pdf.savefig(plot_model_performance(models, x_train, y_train, R32.Pvap_bounds))
pdf.savefig(plot_model_performance(models, x_test, y_test, R32.Pvap_bounds))
# Plot temperature slices
figs = plot_slices_temperature(
models,
R32.n_params,
R32.temperature_bounds,
R32.Pvap_bounds,
property_name="Vapor Pressure [bar]",
)
for fig in figs:
pdf.savefig(fig)
del figs
# Plot parameter slices
for param_name in R32.param_names:
figs = plot_slices_params(
models,
param_name,
R32.param_names,
300,
R32.temperature_bounds,
R32.Pvap_bounds,
property_name="Vapor Pressure [bar]",
)
for fig in figs:
pdf.savefig(fig)
del figs
# Loop over test params
for test_params in x_test[:,:R32.n_params]:
train_points = []
test_points = []
# Locate rows where parameter set == test parameter set
matches = np.unique(np.where((df_all[list(R32.param_names)] == test_params).all(axis=1))[0])
# Loop over all matches -- these will be different temperatures
for match in matches:
# If the match (including T) is in the test set, then append to test points
if np.where((df_all.values[match,:R32.n_params+1] == x_test[:,:R32.n_params+1]).all(axis=1))[0].shape[0] == 1:
test_points.append([df_all["temperature"].iloc[match],df_all[property_name].iloc[match]])
# Else append to train points
else:
train_points.append([df_all["temperature"].iloc[match],df_all[property_name].iloc[match]])
pdf.savefig(
plot_model_vs_test(
models,
test_params,
np.asarray(train_points),
np.asarray(test_points),
R32.temperature_bounds,
R32.Pvap_bounds,
property_name="Vapor pressure [bar]"
)
)
### Fit GP Model to Hvap
param_names = list(R32.param_names) + ["temperature"]
property_name = "sim_Hvap"
x_train, y_train, x_test, y_test = shuffle_and_split(
df_all, param_names, property_name, shuffle_seed=gp_shuffle_seed, fraction_train=0.8
)
# Fit model
models = {}
models["RBF"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.RBF(lengthscales=np.ones(R32.n_params + 1)),
)
models["Matern32"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.Matern32(lengthscales=np.ones(R32.n_params + 1)),
)
models["Matern52"] = run_gpflow_scipy(
x_train,
y_train,
gpflow.kernels.Matern52(lengthscales=np.ones(R32.n_params + 1)),
)
# Plot model performance on train and test points
pdf.savefig(plot_model_performance(models, x_train, y_train, R32.Hvap_bounds))
pdf.savefig(plot_model_performance(models, x_test, y_test, R32.Hvap_bounds))
# Plot temperature slices
figs = plot_slices_temperature(
models,
R32.n_params,
R32.temperature_bounds,
R32.Hvap_bounds,
property_name="Enthalpy of Vaporization [kJ/kg]",
)
for fig in figs:
pdf.savefig(fig)
del figs
# Plot parameter slices
for param_name in R32.param_names:
figs = plot_slices_params(
models,
param_name,
R32.param_names,
300,
R32.temperature_bounds,
R32.Hvap_bounds,
property_name="Enthalpy of Vaporization [kJ/kg]",
)
for fig in figs:
pdf.savefig(fig)
del figs
# Loop over test params
for test_params in x_test[:,:R32.n_params]:
train_points = []
test_points = []
# Locate rows where parameter set == test parameter set
matches = np.unique(np.where((df_all[list(R32.param_names)] == test_params).all(axis=1))[0])
# Loop over all matches -- these will be different temperatures
for match in matches:
# If the match (including T) is in the test set, then append to test points
if np.where((df_all.values[match,:R32.n_params+1] == x_test[:,:R32.n_params+1]).all(axis=1))[0].shape[0] == 1:
test_points.append([df_all["temperature"].iloc[match],df_all[property_name].iloc[match]])
# Else append to train points
else:
train_points.append([df_all["temperature"].iloc[match],df_all[property_name].iloc[match]])
pdf.savefig(
plot_model_vs_test(
models,
test_params,
np.asarray(train_points),
np.asarray(test_points),
R32.temperature_bounds,
R32.Hvap_bounds,
property_name="Enthalpy of vaporization [kJ/kg]"
)
)
pdf.close()
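# Refactoring sketch (illustrative, not part of the original script): the four
# property blocks above differ only in the property name, the bounds and the
# axis label, so they could be driven by a single loop. fit_and_plot is a
# hypothetical helper wrapping the repeated fit/plot code; the tuples mirror
# the values used above.
# for prop, bounds, label in [
#     ("sim_liq_density", R32.liq_density_bounds, "Liquid Density [kg/m^3]"),
#     ("sim_vap_density", R32.vap_density_bounds, "Vapor Density [kg/m^3]"),
#     ("sim_Pvap", R32.Pvap_bounds, "Vapor Pressure [bar]"),
#     ("sim_Hvap", R32.Hvap_bounds, "Enthalpy of Vaporization [kJ/kg]"),
# ]:
#     fit_and_plot(prop, bounds, label)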
| 29.70529 | 118 | 0.660561 | 1,664 | 11,793 | 4.436298 | 0.087139 | 0.019642 | 0.03793 | 0.032512 | 0.884991 | 0.884991 | 0.884991 | 0.884991 | 0.881739 | 0.881739 | 0 | 0.029803 | 0.203341 | 11,793 | 396 | 119 | 29.780303 | 0.755934 | 0.127957 | 0 | 0.724832 | 0 | 0 | 0.062972 | 0.002414 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.040268 | 0 | 0.040268 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
180c4ef6bcd313367546695b236884f314b20139 | 4,633 | py | Python | right_choice/rango/test_views.py | ddrago/RightChoice | cbf7cd358750034fac7811eeaa2397614bf0e262 | [
"Unlicense"
] | null | null | null | right_choice/rango/test_views.py | ddrago/RightChoice | cbf7cd358750034fac7811eeaa2397614bf0e262 | [
"Unlicense"
] | null | null | null | right_choice/rango/test_views.py | ddrago/RightChoice | cbf7cd358750034fac7811eeaa2397614bf0e262 | [
"Unlicense"
] | 1 | 2021-03-31T08:27:19.000Z | 2021-03-31T08:27:19.000Z | from django.test import TestCase, Client
from rango.models import *
class ViewsTestCase(TestCase):
def setUp(self):
self.client = Client()
def test_index_loads_properly(self):
"""Homepage loads"""
response = self.client.get('http://127.0.0.1:8000')
self.assertEqual(response.status_code, 200)
def test_about_loads_properly(self):
"""About page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/about/')
self.assertEqual(response.status_code, 200)
def test_search_results(self):
"""Search page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/searchResults/')
self.assertEqual(response.status_code, 200)
def test_university_loads_properly(self):
"""University page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/universities/')
self.assertEqual(response.status_code, 200)
def test_colleges_loads_properly(self):
"""Colleges page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/colleges/')
self.assertEqual(response.status_code, 200)
def test_apprenticeships_loads_properly(self):
"""Apprenticeships page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/apprenticeships/')
self.assertEqual(response.status_code, 200)
def test_uni_slug_loads(self):
"""Uni slug page loads"""
uni = University.objects.get_or_create(name="Uni 1", location="loc1",details="details",universityImage=None,slug="1uni1",linkToUniWebsite="www.uni1.com")[0]
slug = uni.slug
response = self.client.get('http://127.0.0.1:8000/rightchoice/university/'+slug+'/')
self.assertEqual(response.status_code, 200)
def test_uni_search_results(self):
"""Search page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/universities/searchResultsUniversities')
self.assertEqual(response.status_code, 200)
def test_college_search_results(self):
"""Search page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/universities/searchResultsColleges')
self.assertEqual(response.status_code, 200)
def test_uni_search_slug_loads(self):
"""Uni search slug page loads"""
uni = University.objects.get_or_create(name="Uni 1", location="loc1",details="details",universityImage=None,slug="1uni1",linkToUniWebsite="www.uni1.com")[0]
slug = uni.slug
response = self.client.get('http://127.0.0.1:8000/rightchoice/university/searchResults/'+slug+'/')
self.assertEqual(response.status_code, 200)
def test_college_slug_loads(self):
"""College slug page loads"""
college = College.objects.get_or_create(name="College 1", location="loc1",details="details",collegeImage=None,slug="1college1",linkToCollegeWebsite="www.college1.com")[0]
slug = college.slug
response = self.client.get('http://127.0.0.1:8000/rightchoice/college/'+slug+'/')
self.assertEqual(response.status_code, 200)
# Renamed: the original file defined test_college_slug_loads twice, so this
# second definition shadowed the test above; this one covers the searchResults URL.
def test_college_search_slug_loads(self):
"""College search results slug page loads"""
college = College.objects.get_or_create(name="College 1", location="loc1",details="details",collegeImage=None,slug="1college1",linkToCollegeWebsite="www.college1.com")[0]
slug = college.slug
response = self.client.get('http://127.0.0.1:8000/rightchoice/college/searchResults/'+slug+'/')
self.assertEqual(response.status_code, 200)
def test_apprent_search_results(self):
"""Apprent Search page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/apprenticeships/searchResults/')
self.assertEqual(response.status_code, 200)
def test_course_uni_slug_loads(self):
"""Uni course slug page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/uniCourse/1glasgow-university/')
self.assertEqual(response.status_code, 200)
def test_course_college_slug_loads(self):
"""College course slug page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/collegeCourse/1city-of-glasgow-college/')
self.assertEqual(response.status_code, 200)
def test_course_app_slug_loads(self):
"""Apprent search slug page loads"""
response = self.client.get('http://127.0.0.1:8000/rightchoice/apprenticeshipCourse/1apprenticeship-scotland/')
self.assertEqual(response.status_code, 200)
| 46.79798 | 178 | 0.681848 | 580 | 4,633 | 5.310345 | 0.127586 | 0.055195 | 0.093506 | 0.109091 | 0.826299 | 0.806169 | 0.794481 | 0.794481 | 0.738636 | 0.636039 | 0 | 0.061438 | 0.174401 | 4,633 | 99 | 179 | 46.79798 | 0.743791 | 0.078567 | 0 | 0.42623 | 0 | 0.016393 | 0.24636 | 0 | 0 | 0 | 0 | 0 | 0.262295 | 1 | 0.278689 | false | 0 | 0.032787 | 0 | 0.327869 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
181fc825d8654ce29f2a95e18069499365a4a23e | 10,984 | py | Python | interpretation/deepseismic_interpretation/dutchf3/tests/test_dataloaders.py | fazamani/seismic-deeplearning | e1365339b712666b3ca7a0c706f33ce22a2d2bbf | [
"MIT"
] | 2 | 2020-10-19T08:00:01.000Z | 2021-05-16T10:04:04.000Z | interpretation/deepseismic_interpretation/dutchf3/tests/test_dataloaders.py | regginalee/seismic-deeplearning | a0318b4a9f02b9c1f988ccd37971df525f5aa41f | [
"MIT"
] | 3 | 2020-02-21T23:49:10.000Z | 2020-04-09T16:12:50.000Z | interpretation/deepseismic_interpretation/dutchf3/tests/test_dataloaders.py | regginalee/seismic-deeplearning | a0318b4a9f02b9c1f988ccd37971df525f5aa41f | [
"MIT"
] | 2 | 2020-09-26T09:27:43.000Z | 2020-11-16T10:33:34.000Z | # Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
"""
Tests for TrainLoader and TestLoader classes when overriding the file names of the seismic and label data.
"""
import tempfile
import numpy as np
from interpretation.deepseismic_interpretation.dutchf3.data import get_test_loader, TrainPatchLoaderWithDepth, TrainSectionLoaderWithDepth
import pytest
import yacs.config
import os
# npy files dimensions
IL = 5
XL = 10
D = 8
CONFIG_FILE = "./examples/interpretation/notebooks/configs/unet.yaml"
with open(CONFIG_FILE, "rt") as f_read:
config = yacs.config.load_cfg(f_read)
def generate_npy_files(path, data):
np.save(path, data)
def assert_dimensions(test_section_loader):
assert test_section_loader.labels.shape[0] == IL
assert test_section_loader.labels.shape[1] == XL
assert test_section_loader.labels.shape[2] == D
# Because add_section_depth_channels method add
# 2 extra channels to a 1 channel section
assert test_section_loader.seismic.shape[0] == IL
assert test_section_loader.seismic.shape[2] == XL
assert test_section_loader.seismic.shape[3] == D
def test_TestSectionLoader_should_load_data_from_test1_set():
with open(CONFIG_FILE, "rt") as f_read:
config = yacs.config.load_cfg(f_read)
with tempfile.TemporaryDirectory() as data_dir:
os.makedirs(os.path.join(data_dir, "test_once"))
os.makedirs(os.path.join(data_dir, "splits"))
seismic = np.zeros([IL, XL, D])
generate_npy_files(os.path.join(data_dir, "test_once", "test1_seismic.npy"), seismic)
labels = np.ones([IL, XL, D])
generate_npy_files(os.path.join(data_dir, "test_once", "test1_labels.npy"), labels)
txt_path = os.path.join(data_dir, "splits", "section_test1.txt")
open(txt_path, 'a').close()
TestSectionLoader = get_test_loader(config)
test_set = TestSectionLoader(data_dir = data_dir, split = 'test1')
assert_dimensions(test_set)
def test_TestSectionLoader_should_load_data_from_test2_set():
with tempfile.TemporaryDirectory() as data_dir:
os.makedirs(os.path.join(data_dir, "test_once"))
os.makedirs(os.path.join(data_dir, "splits"))
seismic = np.zeros([IL, XL, D])
generate_npy_files(os.path.join(data_dir, "test_once", "test2_seismic.npy"), seismic)
A = np.load(os.path.join(data_dir, "test_once", "test2_seismic.npy"))
labels = np.ones([IL, XL, D])
generate_npy_files(os.path.join(data_dir, "test_once", "test2_labels.npy"), labels)
txt_path = os.path.join(data_dir, "splits", "section_test2.txt")
open(txt_path, 'a').close()
TestSectionLoader = get_test_loader(config)
test_set = TestSectionLoader(data_dir = data_dir, split = 'test2')
assert_dimensions(test_set)
def test_TestSectionLoader_should_load_data_from_path_override_data():
with tempfile.TemporaryDirectory() as data_dir:
os.makedirs(os.path.join(data_dir, "volume_name"))
os.makedirs(os.path.join(data_dir, "splits"))
seismic = np.zeros([IL, XL, D])
generate_npy_files(os.path.join(data_dir, "volume_name", "seismic.npy"), seismic)
labels = np.ones([IL, XL, D])
generate_npy_files(os.path.join(data_dir, "volume_name", "labels.npy"), labels)
txt_path = os.path.join(data_dir, "splits", "section_volume_name.txt")
open(txt_path, 'a').close()
TestSectionLoader = get_test_loader(config)
test_set = TestSectionLoader(data_dir = data_dir,
split = "volume_name",
is_transform = True,
augmentations = None,
seismic_path = os.path.join(data_dir, "volume_name", "seismic.npy"),
label_path = os.path.join(data_dir, "volume_name", "labels.npy"))
assert_dimensions(test_set)
def test_TrainSectionLoaderWithDepth_should_fail_on_empty_file_names(tmpdir):
"""
Check for exception when files do not exist
"""
# Test
with pytest.raises(Exception) as excinfo:
_ = TrainSectionLoaderWithDepth(
data_dir = tmpdir,
split = "volume_name",
is_transform=True,
augmentations=None,
seismic_path = "",
label_path = ""
)
assert "does not exist" in str(excinfo.value)
def test_TrainSectionLoaderWithDepth_should_fail_on_missing_seismic_file(tmpdir):
"""
Check for exception when the seismic file is missing
"""
# Setup
os.makedirs(os.path.join(tmpdir, "volume_name"))
os.makedirs(os.path.join(tmpdir, "splits"))
labels = np.ones([IL, XL, D])
generate_npy_files(os.path.join(tmpdir, "volume_name", "labels.npy"), labels)
txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt")
open(txt_path, 'a').close()
# Test
with pytest.raises(Exception) as excinfo:
_ = TrainSectionLoaderWithDepth(
data_dir = tmpdir,
split = "volume_name",
is_transform=True,
augmentations=None,
seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"),
label_path=os.path.join(tmpdir, "volume_name", "labels.npy")
)
assert "does not exist" in str(excinfo.value)
def test_TrainSectionLoaderWithDepth_should_fail_on_missing_label_file(tmpdir):
"""
Check for exception when the label file is missing
"""
# Setup
os.makedirs(os.path.join(tmpdir, "volume_name"))
os.makedirs(os.path.join(tmpdir, "splits"))
# Only the seismic file is created so the label file is genuinely missing
# (the original setup mistakenly generated labels.npy, copied from the test above).
seismic = np.zeros([IL, XL, D])
generate_npy_files(os.path.join(tmpdir, "volume_name", "seismic.npy"), seismic)
txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt")
open(txt_path, 'a').close()
# Test
with pytest.raises(Exception) as excinfo:
_ = TrainSectionLoaderWithDepth(
data_dir = tmpdir,
split = "volume_name",
is_transform=True,
augmentations=None,
seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"),
label_path=os.path.join(tmpdir, "volume_name", "labels.npy")
)
assert "does not exist" in str(excinfo.value)
def test_TrainSectionLoaderWithDepth_should_load_with_one_train_and_label_file(tmpdir):
"""
Check for successful class instantiation w/ single npy file for train & label
"""
# Setup
os.makedirs(os.path.join(tmpdir, "volume_name"))
os.makedirs(os.path.join(tmpdir, "splits"))
seismic = np.zeros([IL, XL, D])
generate_npy_files(os.path.join(tmpdir, "volume_name", "seismic.npy"), seismic)
labels = np.ones([IL, XL, D])
generate_npy_files(os.path.join(tmpdir, "volume_name", "labels.npy"), labels)
txt_path = os.path.join(tmpdir, "splits", "section_volume_name.txt")
open(txt_path, 'a').close()
# Test
train_set = TrainSectionLoaderWithDepth(
data_dir = tmpdir,
split = "volume_name",
is_transform=True,
augmentations=None,
seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"),
label_path=os.path.join(tmpdir, "volume_name", "labels.npy")
)
assert train_set.labels.shape == (IL, XL, D)
assert train_set.seismic.shape == (IL, 3, XL, D)
def test_TrainPatchLoaderWithDepth_should_fail_on_empty_file_names(tmpdir):
"""
Check for exception when files do not exist
"""
# Test
with pytest.raises(Exception) as excinfo:
_ = TrainPatchLoaderWithDepth(
data_dir = tmpdir,
split = "volume_name",
is_transform=True,
stride=25,
patch_size=100,
augmentations=None,
seismic_path = "",
label_path = ""
)
assert "does not exist" in str(excinfo.value)
def test_TrainPatchLoaderWithDepth_should_fail_on_missing_seismic_file(tmpdir):
"""
Check for exception when the seismic file is missing
"""
# Setup
os.makedirs(os.path.join(tmpdir, "volume_name"))
os.makedirs(os.path.join(tmpdir, "splits"))
labels = np.ones([IL, XL, D])
generate_npy_files(os.path.join(tmpdir, "volume_name", "labels.npy"), labels)
txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt")
open(txt_path, 'a').close()
# Test
with pytest.raises(Exception) as excinfo:
_ = TrainPatchLoaderWithDepth(
data_dir = tmpdir,
split = "volume_name",
is_transform=True,
stride=25,
patch_size=100,
augmentations=None,
seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"),
label_path=os.path.join(tmpdir, "volume_name", "labels.npy")
)
assert "does not exist" in str(excinfo.value)
def test_TrainPatchLoaderWithDepth_should_fail_on_missing_label_file(tmpdir):
"""
Check for exception when training param is empty
"""
# Setup
os.makedirs(os.path.join(tmpdir, "volume_name"))
os.makedirs(os.path.join(tmpdir, "splits"))
seismic = np.zeros([IL, XL, D])
generate_npy_files(os.path.join(tmpdir, "volume_name", "seismic.npy"), seismic)
txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt")
open(txt_path, 'a').close()
# Test
with pytest.raises(Exception) as excinfo:
_ = TrainPatchLoaderWithDepth(
data_dir = tmpdir,
split = "volume_name",
is_transform=True,
stride=25,
patch_size=100,
augmentations=None,
seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"),
label_path=os.path.join(tmpdir, "volume_name", "labels.npy")
)
assert "does not exist" in str(excinfo.value)
def test_TrainPatchLoaderWithDepth_should_load_with_one_train_and_label_file(tmpdir):
"""
Check for successful class instantiation w/ single npy file for train & label
"""
# Setup
os.makedirs(os.path.join(tmpdir, "volume_name"))
os.makedirs(os.path.join(tmpdir, "splits"))
seismic = np.zeros([IL, XL, D])
generate_npy_files(os.path.join(tmpdir, "volume_name", "seismic.npy"), seismic)
labels = np.ones([IL, XL, D])
generate_npy_files(os.path.join(tmpdir, "volume_name", "labels.npy"), labels)
txt_path = os.path.join(tmpdir, "splits", "patch_volume_name.txt")
open(txt_path, 'a').close()
# Test
train_set = TrainPatchLoaderWithDepth(
data_dir = tmpdir,
split = "volume_name",
is_transform=True,
stride=25,
patch_size=100,
augmentations=None,
seismic_path=os.path.join(tmpdir, "volume_name", "seismic.npy"),
label_path=os.path.join(tmpdir, "volume_name", "labels.npy")
)
assert train_set.labels.shape == (IL, XL, D)
assert train_set.seismic.shape == (IL, XL, D)
| 33.693252 | 138 | 0.650583 | 1,404 | 10,984 | 4.861111 | 0.10114 | 0.049231 | 0.082051 | 0.089084 | 0.90315 | 0.902564 | 0.867399 | 0.850696 | 0.850696 | 0.840147 | 0 | 0.005303 | 0.227513 | 10,984 | 325 | 139 | 33.796923 | 0.799057 | 0.074654 | 0 | 0.710784 | 0 | 0 | 0.128768 | 0.020363 | 0 | 0 | 0 | 0 | 0.098039 | 1 | 0.063725 | false | 0 | 0.029412 | 0 | 0.093137 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1830044aa25285ab5f279272aca64be2296a07a8 | 44,917 | py | Python | merging_methods_v5.py | rlpink/external_cluster_editing | 80ca5850dd367a6ef2e6392ab10f6de52802f6a8 | [
"MIT"
] | null | null | null | merging_methods_v5.py | rlpink/external_cluster_editing | 80ca5850dd367a6ef2e6392ab10f6de52802f6a8 | [
"MIT"
] | null | null | null | merging_methods_v5.py | rlpink/external_cluster_editing | 80ca5850dd367a6ef2e6392ab10f6de52802f6a8 | [
"MIT"
] | null | null | null | """
This module implements several methods for calculating and outputting solutions of the unionfind_cluster_editing() algorithm.
It contains two methods for the (best) generated raw solutions,
and, more importantly, methods to merge solutions into one better solution.
"""
from union_find import *
from math import log
import sys
import numpy as np
from numba import njit, jit
from numpy import random as rand
from model_sqrt import *
from numba.typed import Dict
import pandas as pd
def best_solution(solution_costs, parents, filename, missing_weight, n, x):
"""
This function outputs the best generated solution to a file named "result.txt".
"""
costs = solution_costs.min()
best = parents[solution_costs.argmin()]
file = open("result.txt", mode="a")
with file:
file.write("filename: %s \nmissing_weight: %f \nn: %d \nx (solutions generated): %d\nbest solution found:\n" % (filename, missing_weight, n, x))
file.write(f"costs: {costs}\n")
for i in range(0,n):
file.write(f"{best[i]} ")
def print_solution_costs(solution_costs, filename):
"""
This function outputs all sorted solution costs to a file named "..._solution_costs_v5.txt".
"""
sorted_costs = np.sort(solution_costs)
print_to = filename[:-4] + "_solution_costs_v5.txt"
with open(print_to, mode="a") as file:
for cost in sorted_costs:
file.write(str(cost))
file.write("\n")
def all_solutions(solution_costs, parents, filename, missing_weight, n):
"""
This function outputs all solutions, sorted by their costs, to a file named "..._all_solutions_v5.txt".
"""
cost_sorted_i = np.argsort(solution_costs)
print_to = filename[:-4] + "_all_solutions_v5.txt"
count = 1
with open(print_to, mode="a") as file:
file.write("filename: %s \nmissing_weight: %f \nn: %d\n" % (filename, missing_weight, n))
for i in cost_sorted_i:
file.write("%d. best solution with cost %f\n" % (count, solution_costs[i]))
count += 1
for j in range(0,n):
file.write(f"{parents[i, j]} ")
file.write("\n")
@njit
def weighted_decision(x, y, cluster_masks, f_vertex_costs, f_sizes, f_parents):
"""
This function is a helper function for merging functions. It generates a weight for cluster center x and another node y by counting the costs over all solutions for two scenarios:
1: y is in the same cluster as x
0: y is in another cluster
The return value is between -1 and 1, -1 for certainly not connected, 1 for certainly connected. A value of 0 would indicate that connected or not connected would (in mean) yield the same costs (as in: the error is not big enough to make a difference).
"""
sol_len = len(f_parents)
sum_for_0 = 0
sum_for_1 = 0
count_0 = 0
count_1 = 0
for i in range(0,sol_len):
x_cost = f_vertex_costs[i, x]
y_cost = f_vertex_costs[i, y]
if cluster_masks[i, y] == 0:
sum_for_0 += x_cost + y_cost
count_0 += 1
else:
sum_for_1 += x_cost + y_cost
count_1 += 1
if count_0 > 0:
cost_0 = sum_for_0/count_0
if count_1 > 0:
cost_1 = sum_for_1/count_1
if cost_0 == 0 and cost_1 == 0:
print("Warning: Both together and single get cost 0 - something went wrong!")
else:
return (cost_0 - cost_1) / (cost_0 + cost_1)
else:
# If no solution has entry 1, the node quite certainly does not belong to the cluster
return -1.0
else:
# If no solution has entry 0, the node quite certainly belongs to the cluster
return 1.0
# If the return value is positive: decision for 1 (together); if negative: decision for 0 (separate).
# The closer the return value is to 0, the less certain the decision.
# If none of the cases above applies (frequency decides / a ratio is present):
return 0.0
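# Worked example (illustrative): if the mean cost with x and y separated is
# cost_0 = 3.0 and the mean cost with them joined is cost_1 = 1.0, the weight
# is (3.0 - 1.0) / (3.0 + 1.0) = 0.5, a fairly confident vote for joining.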
@njit
def merged_solution(solution_costs, vertex_costs, parents, sizes, missing_weight, n):
"""
First merge algorithm. It calculates cluster masks for each cluster center:
True if the node is in the same component as the cluster center,
False otherwise.
For these cluster masks, a weighted decision value is calculated for each cluster center x and each other node y. If this weight is better than the previous one, y gets assigned to the new cluster center x. x then receives the maximum weight over all y, unless that is lower than its previous weight. Tree-like structures can emerge in such cases. Those trees are not handled yet; however, they indicate a conflict in the solution, as a node that is both child and parent belongs to two distinct clusters.
"""
sol_len = len(solution_costs)
# Create the new solution as an array:
merged_sol = np.arange(n) #dtype = np.int64 not supported by numba
# Create arrays for comparing the clusters:
cluster_masks = np.zeros((sol_len,n), dtype=np.int8) #np.bool not supported
for j in range(n):
# Fill the cluster masks
for i in range(sol_len):
# Each cluster mask contains "True" wherever parents has the
# same value as at position j, and "False" otherwise
for k in range(n):
cluster_masks[i, k] = np.int8(parents[i, k] == parents[i, j])
# Determine membership in the cluster (or non-membership)
# All previous nodes were already visited as centers and have therefore already been connected with this node (or not) - symmetry of the costs!
for k in range(j+1,n):
# The cluster center is skipped (i.e. it may still point to a different cluster!)
if k == j:
continue
wd = weighted_decision(j, k, cluster_masks, vertex_costs, sizes, parents)
# If the weight is large enough:
if wd > 0.05:
rem_union(j, k, merged_sol)
return merged_sol
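# Note on the 0.05 threshold (illustrative): with cost_0 = 1.05 and
# cost_1 = 0.95 the weight is 0.1 / 2.0 = 0.05, exactly at the boundary, so
# only pairs whose joined cost is noticeably cheaper get unioned.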
@njit
def weighted_decision_scan(x, y, connectivity, f_vertex_costs, f_sizes, f_parents):
"""
This function is a helper function for merging functions. It generates a weight for cluster center x and another node y by counting the costs over all solutions for two scenarios:
1: y is in the same cluster as x
0: y is in another cluster
The return value is between -1 and 1, -1 for certainly not connected, 1 for certainly connected. A value of 0 would indicate that connected or not connected would (in mean) yield the same costs (as in: the error is not big enough to make a difference).
"""
sol_len = len(f_parents)
sum_for_0 = 0
sum_for_1 = 0
count_0 = 0
count_1 = 0
for i in range(0,sol_len):
x_cost = f_vertex_costs[i, x]
y_cost = f_vertex_costs[i, y]
if connectivity[i]:
sum_for_1 += x_cost + y_cost
count_1 += 1
else:
sum_for_0 += x_cost + y_cost
count_0 += 1
if count_0 > 0:
cost_0 = sum_for_0/count_0
if count_1 > 0:
cost_1 = sum_for_1/count_1
if cost_0 == 0 and cost_1 == 0:
print("Warning: Both together and single get cost 0 - something went wrong!")
else:
return (cost_0 - cost_1) / (cost_0 + cost_1)
else:
# If no solution has entry 1, the node quite certainly does not belong to the cluster
return -1.0
else:
# If no solution has entry 0, the node quite certainly belongs to the cluster
return 1.0
# If the return value is positive: decision for 1 (together); if negative: decision for 0 (separate).
# The closer the return value is to 0, the less certain the decision.
# If none of the cases above applies (frequency decides / a ratio is present):
return 0.0
def merged_solution_scan(solution_costs, vertex_costs, parents, sizes, missing_weight, n, filename):
"""
Scan-based variant of the first merge algorithm: instead of building full cluster masks, it scans the positive edges of the graph file and records, per edge (i, j), in which solutions both endpoints lie in the same cluster.
For each such edge a weighted decision value is calculated; if it is large enough, i and j are unioned in the merged solution.
"""
sol_len = len(solution_costs)
# Create the new solution as an array:
merged_sol = np.arange(n) #dtype = np.int64 not supported by numba
merged_sizes = np.ones(n, dtype=np.int64)
# Create arrays for comparing the clusters:
connectivity = np.zeros(sol_len, dtype=np.int8) #np.bool not supported
graph_file = open(filename, mode="r")
for line in graph_file:
# Skip comment lines
if line[0] == "#":
continue
splitted = line.split()
nodes = np.array(splitted[:-1], dtype=np.int64)
weight = np.float64(splitted[2])
i = nodes[0]
j = nodes[1]
if weight < 0:
continue
# Fill the connectivity array (cluster co-membership per solution)
for x in range(sol_len):
connectivity[x] = np.int8(parents[x, i] == parents[x, j])
# Determine membership in the cluster (or non-membership)
# All previous nodes were already visited as centers and have therefore already been connected with this node (or not) - symmetry of the costs!
wd = weighted_decision_scan(i, j, connectivity, vertex_costs, sizes, parents)
# If the weight is large enough:
if wd > 0.05:
rem_union(i, j, merged_sol)
return merged_sol
@njit
def repair_merged(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree):
sol_len = len(solution_costs)
# Create arrays for comparing the clusters:
cluster_masks = np.zeros((sol_len,n), dtype=np.int8) #np.bool not supported
for i in range(n):
# Detect and connect "mini clusters" (the root of the cluster is to be connected);
# a repair is attempted if the cluster size is less than half of what the node degree suggests, i.e. the local error rate in the problem instance would be above 50%.
if merged[i] == i and merged_sizes[i] < 0.5*node_dgree[i]:
max_wd = -1
best_fit = i
# Fill the cluster masks
for x in range(0,sol_len):
for j in range(n):
# Each cluster mask contains "True" wherever parents has the
# same value as at position i, and "False" otherwise
cluster_masks[x, j] = np.int8(parents[x, i] == parents[x, j])
for j in range(n):
# Skip nodes that are already connected, and the node itself
if merged[i] == merged[j]:
continue
# Compute the weight:
wd = weighted_decision(i, j, cluster_masks, vertex_costs, sizes, parents)
# Update the best-fitting node if necessary
if wd > max_wd:
max_wd = wd
best_fit = j
# Possible modification: only union if max_wd fits as well.
#if max_wd > 0.1:
union(i, best_fit, merged, merged_sizes)
result = np.zeros((2,n), dtype=np.int64)
result[0] = merged
result[1] = merged_sizes
return result
def get_cluster_centers_big(merged, merged_sizes, node_dgree, split):
big_ccs = {}
for i in range(len(merged)):
if merged_sizes[merged[i]] >= node_dgree[merged[i]] * split:
big_ccs[merged[i]] = merged_sizes[merged[i]]
return big_ccs
def get_cluster_centers_small(merged, merged_sizes, node_dgree, split):
small_ccs = {}
for i in range(len(merged)):
if merged_sizes[merged[i]] < node_dgree[merged[i]] * split:
small_ccs[merged[i]] = merged_sizes[merged[i]]
return small_ccs
def get_second_center(merged, big_ccs):
second_cc = {}
for center in big_ccs.keys():
# Iterate over the other nodes until one from the same cluster is found
for i in range(len(merged)):
# we are not looking for the node itself
if i == center:
continue
# but for the first node with a different index and the same entry:
if merged[i] == merged[center]:
second_cc[center] = i
break
return second_cc
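# Worked example (illustrative): for merged = [0, 0, 2, 0] and big_ccs = {0: 3},
# get_second_center returns {0: 1}, since node 1 is the first node other than
# the center 0 with merged[1] == merged[0].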
@njit
def weighted_decision_2(s_center, b_center, sb_center, connectivity, vertex_costs, sizes, parents):
costs_0 = 0.0
costs_1 = 0.0
count_0 = 0
count_1 = 0
for x in range(0, len(connectivity)):
if connectivity[x] == -1:
costs_1 += 0.5 * vertex_costs[x, s_center] + vertex_costs[x, b_center] + vertex_costs[x, b_center]
elif connectivity[x] == -2:
costs_1 += 0.5 * vertex_costs[x, s_center] + vertex_costs[x, sb_center] + vertex_costs[x, sb_center]
elif connectivity[x] == 1:
costs_1 += vertex_costs[x, s_center] + vertex_costs[x, b_center] + vertex_costs[x, sb_center]
count_1 += 1
else:
costs_0 += vertex_costs[x, s_center] + vertex_costs[x, b_center] + vertex_costs[x, sb_center]
count_0 += 1
if count_0 > 0:
cost_0 = costs_0/count_0
if count_1 > 0:
cost_1 = costs_1/count_1
if cost_0 == 0 and cost_1 == 0:
print("Warning: Both together and single get cost 0 - something went wrong!")
else:
return (cost_0 - cost_1) / (cost_0 + cost_1)
else:
# If no entry is 1, the node quite certainly does not belong to the cluster
return -1.0
else:
# If no entry is 0, the node quite certainly belongs to the cluster
return 1.0
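# Worked example (illustrative): connectivity = [1, 0], with all three vertex
# costs equal to 1.0 in solution 0 and equal to 3.0 in solution 1, gives
# cost_1 = 3.0, cost_0 = 9.0 and a weight of (9 - 3) / (9 + 3) = 0.5, i.e.
# joining s_center to the big cluster looks favorable.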
def repair_merged_v2(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree):
sol_len = len(solution_costs)
# Create arrays for comparing the clusters:
connectivity = np.zeros(sol_len, dtype=np.int8) #np.bool not supported
big_ccs = get_cluster_centers_big(merged, merged_sizes, node_dgree, 0.3)
small_ccs = get_cluster_centers_small(merged, merged_sizes, node_dgree, 0.3)
second_big_cc = get_second_center(merged, big_ccs)
for s_center in small_ccs.keys():
# Detect and connect "mini clusters" (the root of the cluster is to be connected);
# a repair is attempted if the cluster size is less than half of what the node degree suggests, i.e. the local error rate in the problem instance would be above 50%.
max_wd = -1
best_fit = s_center
# Fill the connectivity array (0: no connection to the cluster; 1: one connection, 2: two connections)
for b_center in big_ccs.keys():
# If the combined clusters would be clearly too large, skip this combination right away
if merged_sizes[s_center] + merged_sizes[b_center] > 1.5 * node_dgree[b_center]:
continue
for x in range(0,sol_len):
if parents[x, b_center] != parents[x, second_big_cc[b_center]]:
connectivity[x] = -1
continue
if parents[x, s_center] == parents[x, b_center]:
connectivity[x] = 1
else:
connectivity[x] = 0
# Compute the weight:
wd = weighted_decision_2(s_center, b_center, second_big_cc[b_center], connectivity, vertex_costs, sizes, parents)
# Update the best-fitting node if necessary
if wd > max_wd:
max_wd = wd
best_fit = b_center
# Possible modification: only union if max_wd fits as well.
if max_wd > 0.05:
union(s_center, best_fit, merged, merged_sizes)
result = np.zeros((2,n), dtype=np.int64)
result[0] = merged
result[1] = merged_sizes
return result
def repair_merged_v3(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree):
sol_len = len(solution_costs)
ccs = calculate_mean_nodedgr(merged, merged_sizes, node_dgree)
second_big_cc = get_second_center(merged, ccs)
connectivity = np.zeros(sol_len, dtype=np.int8)
for s_center in ccs.keys():
# s_center must be small enough
if merged_sizes[s_center] > ccs[s_center] * 0.35:
continue
# Detect and connect "mini clusters" (the root of the cluster is to be connected);
# a repair is attempted if the cluster size is less than half of what the node degree suggests, i.e. the local error rate in the problem instance would be above 50%.
best_fit = s_center
max_wd = -0.05
for b_center in ccs.keys():
# b_center must be large enough
if merged_sizes[b_center] <= ccs[b_center] * 0.35:
continue
# If the combined clusters would be clearly too large, skip this combination right away
if merged_sizes[s_center] + merged_sizes[b_center] > 1.5 * ccs[b_center]:
continue
for x in range(0,sol_len):
if parents[x, b_center] != parents[x, second_big_cc[b_center]]:
connectivity[x] = -1
continue
if parents[x, s_center] == parents[x, b_center]:
connectivity[x] = 1
else:
connectivity[x] = 0
# Compute the weight:
wd = weighted_decision_2(s_center, b_center, second_big_cc[b_center], connectivity, vertex_costs, sizes, parents)
# Update the best-fitting node if necessary
if wd > max_wd:
max_wd = wd
best_fit = b_center
# Connect the cluster to the cluster that, viewed locally, contributed the lowest vertex costs.
union(s_center, best_fit, merged, merged_sizes)
result = np.zeros((2,n), dtype=np.int64)
result[0] = merged
result[1] = merged_sizes
return result
@njit
def repair_merged_v3_nd(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree):
sol_len = len(solution_costs)
ccs_mndgr = calculate_mean_nodedgr_nd(merged, merged_sizes, node_dgree)
ccs = ccs_mndgr[0]
mean_ndgree = ccs_mndgr[1]
second_big_cc = get_second_center_nd(merged, ccs)
connectivity = np.zeros(sol_len, dtype=np.int8)
for s_center_i in range(len(ccs)):
# s_center must be small enough
s_center = ccs[s_center_i]
if merged_sizes[s_center] > mean_ndgree[s_center_i] * 0.35:
continue
# Detect and connect "mini clusters" (the root of the cluster is to be connected);
# a repair is attempted if the cluster size is less than half of what the node degree suggests, i.e. the local error rate in the problem instance would be above 50%.
best_fit = s_center
max_wd = 0
for b_center_i in range(len(ccs)):
# b_center must be large enough
b_center = ccs[b_center_i]
if merged_sizes[b_center] <= mean_ndgree[b_center_i] * 0.35:
continue
# If the combined clusters would be clearly too large, skip this combination right away
if merged_sizes[s_center] + merged_sizes[b_center] > 1.5 * mean_ndgree[b_center_i]:
continue
for x in range(0,sol_len):
# Distinguish four cases: -1/-2: s_center connected to only one of the two; 1: connected to both; 0: connected to neither
if parents[x, b_center] != parents[x, second_big_cc[b_center_i]]:
if parents[x, s_center] == parents[x, b_center]:
connectivity[x] = -1
elif parents[x, s_center] == parents[x, second_big_cc[b_center_i]]:
connectivity[x] = -2
continue
if parents[x, s_center] == parents[x, b_center]:
connectivity[x] = 1
else:
connectivity[x] = 0
# Compute the weight:
wd = weighted_decision_2(s_center, b_center, second_big_cc[b_center_i], connectivity, vertex_costs, sizes, parents)
# Update the best-fitting node if necessary
if wd > max_wd:
max_wd = wd
best_fit = b_center
# Connect the cluster to the cluster that, viewed locally, contributed the lowest vertex costs.
union(s_center, best_fit, merged, merged_sizes)
result = np.zeros((2,n), dtype=np.int64)
result[0] = merged
result[1] = merged_sizes
return result
@njit
def mean_weight_connected(s_center, connectivity, vertex_costs, sizes, parents):
sol_len = len(connectivity)
mwc = 0.0
count = 0
for i in range(sol_len):
if connectivity[i]:
mwc += vertex_costs[i, s_center]
count += 1
if count == 0:
return -1.0
return mwc/count
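# Worked example (illustrative): connectivity = [1, 0, 1] with
# vertex_costs[:, s_center] = [2.0, 9.0, 4.0] yields (2.0 + 4.0) / 2 = 3.0,
# the mean cost of s_center over the solutions in which the pair is connected.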
@njit
def mean_weight_connected2(s_center, b_center, connectivity, vertex_costs, sizes, parents):
sol_len = len(connectivity)
mwc = 0.0
mwd = 0.0
count = 0
countd = 0
for i in range(sol_len):
if connectivity[i]:
mwc += vertex_costs[i, s_center] + vertex_costs[i, b_center]
count += 1
else:
mwd += vertex_costs[i, s_center] + vertex_costs[i, b_center]
countd += 1
if count == 0:
return -1.0
elif countd == 0:
return 1.0  # float literal keeps the return type consistent under numba
cost_1 = mwc/count
cost_0 = mwd/countd
return (cost_0 - cost_1) / (cost_0 + cost_1)
@njit
def repair_merged_v4_nd_rem(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree, big_border):
sol_len = len(solution_costs)
ccs_mndgr = calculate_mean_nodedgr_nd(merged, merged_sizes, node_dgree)
ccs = ccs_mndgr[0]
mean_ndgree = ccs_mndgr[1]
connectivity = np.zeros(sol_len, dtype=np.int8)
for s_center_i in range(len(ccs)):
# s_center must be small enough
s_center = ccs[s_center_i]
if merged_sizes[s_center] > mean_ndgree[s_center_i] * big_border:
continue
# Detect and connect "mini clusters" (the root of the cluster is to be connected).
best_fit = s_center
min_mwc = 1.7976931348623157e+308
for b_center_i in range(len(ccs)):
# b_center must be large enough
b_center = ccs[b_center_i]
if merged_sizes[b_center] <= mean_ndgree[b_center_i] * big_border:
continue
# If the combined clusters would be clearly too large, skip this combination right away.
# Too large: more than 0.29 extra,
# since with a 2/9 error rate at most that fraction of the remaining 7/9 edges may each be missing.
if merged_sizes[s_center] + merged_sizes[b_center] > 1.29 * mean_ndgree[b_center_i]:
continue
for x in range(0,sol_len):
if parents[x, s_center] == parents[x, b_center]:
connectivity[x] = 1
else:
connectivity[x] = 0
# Compute the weight:
mwc = mean_weight_connected(s_center, connectivity, vertex_costs, sizes, parents)
# Update the best-fitting node and the minimal mwc if necessary
if mwc == -1:
continue
if mwc < min_mwc:
min_mwc = mwc
best_fit = b_center
# Connect the cluster to the cluster that is, on average, cheapest for s_center.
rem_union(s_center, best_fit, merged)
# Because of Rem union: update the size directly in the representative of best_fit, which may be considered again later
merged_sizes[best_fit] += merged_sizes[s_center]
return merged
@njit
def calculate_mean_nodedgr_array(merged, merged_sizes, node_dgree, cluster_centers):
cluster_mean_nodedgr = np.zeros(len(cluster_centers), dtype=np.int64)
for c in range(len(cluster_centers)):
for i in range(len(merged)):
if merged[i] == cluster_centers[c]:
cluster_mean_nodedgr[c] += node_dgree[i]
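        # Note: cluster_mean_nodedgr has dtype int64, so this division truncates
        # the mean node degree toward zero.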
cluster_mean_nodedgr[c] /= merged_sizes[cluster_centers[c]]
cmn_array = np.zeros(len(merged), dtype=np.int64)
for i in range(len(cluster_centers)):
c = cluster_centers[i]
cmn_array[c] = cluster_mean_nodedgr[i]
return cmn_array
def repair_merged_v4_rem_scan(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree, big_border, filename):
sol_len = len(solution_costs)
cluster_centers = pd.unique(merged)
mean_ndgree = calculate_mean_nodedgr_array(merged, merged_sizes, node_dgree, cluster_centers)
connectivity = np.zeros(sol_len, dtype=np.int8)
best_fits = np.zeros(n, dtype=np.int64)
min_mwcs = np.zeros(n, dtype = np.float64)
for i in range(n):
best_fits[i] = -1
min_mwcs[i] = 1.7976931348623157e+308
graph_file = open(filename, mode="r")
for line in graph_file:
        # Skip comment lines
if line[0] == "#":
continue
splitted = line.split()
nodes = np.array(splitted[:-1], dtype=np.int64)
weight = np.float64(splitted[2])
i = nodes[0]
j = nodes[1]
        # Only consider positive edges
if weight < 0:
continue
        # Determine the cluster centers
s_center = merged[i]
b_center = merged[j]
        # Swap the names if necessary (b: big, s: small)
if merged_sizes[s_center] > merged_sizes[b_center]:
tmp = s_center
s_center = b_center
b_center = tmp
        # Determine the cluster sizes
s_center_s = merged_sizes[s_center]
b_center_s = merged_sizes[b_center]
if b_center_s < big_border * mean_ndgree[b_center]:
continue
if s_center_s >= big_border * mean_ndgree[s_center]:
continue
if s_center_s + b_center_s > 1.29 * mean_ndgree[s_center]:
continue
if s_center_s + b_center_s > 1.29 * mean_ndgree[b_center]:
continue
for x in range(0,sol_len):
if parents[x, i] == parents[x, j]:
connectivity[x] = 1
else:
connectivity[x] = 0
        # Compute weight:
mwc = mean_weight_connected(s_center, connectivity, vertex_costs, sizes, parents)
if mwc == -1:
continue
if mwc < min_mwcs[s_center]:
            # Update the minimal costs
min_mwcs[s_center] = mwc
best_fits[s_center] = b_center
    # Loop over all big clusters (those that small clusters were assigned to) and connect them with the cheapest candidates
    # until the cluster would be (clearly) too full.
bf_unique = pd.unique(best_fits)
for b_center in bf_unique:
        # If best_fits[i] == -1, it was never filled (i.e. i is not a small cluster or was never connected).
if b_center == -1:
continue
sorted_candidates = priority_candidates(b_center, best_fits, min_mwcs)
for s_center in sorted_candidates:
            # Check whether the current sizes still fit (unlike above, where we only check whether the sizes would fit -before- the first union).
if merged_sizes[s_center] + merged_sizes[b_center] < 1.29 * mean_ndgree[b_center]:
rem_union(b_center, s_center, merged)
merged_sizes[b_center] += merged_sizes[s_center]
return merged
def priority_candidates(b_center, best_fits, min_mwcs):
candidates = np.argwhere(best_fits == b_center).flatten()
sorted_i = np.argsort(min_mwcs[candidates])
return candidates[sorted_i]
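# Example for priority_candidates (hypothetical values): with
# best_fits = np.array([5, 5, -1, 5]) and min_mwcs = np.array([0.4, 0.1, 0.0, 0.2]),
# priority_candidates(5, best_fits, min_mwcs) returns array([1, 3, 0]):
# the small clusters assigned to center 5, cheapest first.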
@njit
def repair_merged_wd(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree, big_border):
sol_len = len(solution_costs)
ccs_mndgr = calculate_mean_nodedgr_nd(merged, merged_sizes, node_dgree)
ccs = ccs_mndgr[0]
mean_ndgree = ccs_mndgr[1]
connectivity = np.zeros(sol_len, dtype=np.int8)
for s_center_i in range(len(ccs)):
        # s_center must be small enough
s_center = ccs[s_center_i]
if merged_sizes[s_center] > mean_ndgree[s_center_i] * big_border:
continue
        # Detect and connect "mini-clusters" (the root of the cluster is to be connected);
        # a repair is attempted if the cluster is less than half as large as the node degree indicates, i.e. the local error rate in the problem instance would be above 50%.
best_fit = s_center
max_wd = -1
for b_center_i in range(len(ccs)):
            # b_center must be big enough
b_center = ccs[b_center_i]
if merged_sizes[b_center] <= mean_ndgree[b_center_i] * big_border:
continue
            # If the clusters together would be much too big, skip this combination immediately
if merged_sizes[s_center] + merged_sizes[b_center] > 1.29 * mean_ndgree[b_center_i]:
continue
for x in range(0,sol_len):
if parents[x, s_center] == parents[x, b_center]:
connectivity[x] = 1
else:
connectivity[x] = 0
            # Compute weight:
wd = mean_weight_connected2(s_center, b_center, connectivity, vertex_costs, sizes, parents)
if wd > max_wd:
                # Update the best (maximal) score
max_wd = wd
best_fit = b_center
        # if best_fit == s_center:
        #     print("Node %d was not connected.\n" % (s_center))
        # Connect the cluster to the cluster that is, on average, cheapest for s_center.
rem_union(s_center, best_fit, merged)
        # Because of Rem union-find: update the size directly in the representative of best_fit, which may be visited again later
merged_sizes[best_fit] += merged_sizes[s_center]
return merged
@njit
def check_if_flat(solution):
for i in range(len(solution)):
        # Check whether node i is a root or a first-level child
if solution[i] != i and solution[solution[i]] != solution[i]:
            # if it is neither, the tree is not flat!
return False
return True
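# Example for check_if_flat: np.array([0, 0, 0, 3]) is flat (roots and
# first-level children only), while np.array([0, 0, 1]) is not, because node 2
# points to node 1, which is itself a child.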
@njit
def mean_node_weight(node, connectivity, vertex_costs, solutions, sizes, n):
mnw_con = 0.0
count = np.sum(connectivity)
if count == 0:
return -1
for x in range(len(connectivity)):
if not connectivity[x]:
continue
root = solutions[x, node]
cluster_costs = 0.0
for i in range(n):
if solutions[x, i] == root:
cluster_costs += vertex_costs[x, i]
        if sizes[x, root] != 0:
            mnw_con += cluster_costs / sizes[x, root]
        else:
            # skip instead of adding a stale mnw value
            print("Sizes = 0 in solution:", x)
            print("At position:", root)
mnw_con = mnw_con / count
return mnw_con
@njit
def repair_merged_v6_nd_rem(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree, big_border):
sol_len = len(solution_costs)
ccs_mndgr = calculate_mean_nodedgr_nd(merged, merged_sizes, node_dgree)
ccs = ccs_mndgr[0]
mean_ndgree = ccs_mndgr[1]
connectivity = np.zeros(sol_len, dtype=np.int8)
for s_center_i in range(len(ccs)):
        # s_center must be small enough
s_center = ccs[s_center_i]
if merged_sizes[s_center] > mean_ndgree[s_center_i] * big_border:
continue
        # Detect and connect "mini-clusters" (the root of the cluster is to be connected).
best_fit = s_center
min_mwc = 1.7976931348623157e+308
for b_center_i in range(len(ccs)):
            # b_center must be big enough
b_center = ccs[b_center_i]
if merged_sizes[b_center] <= mean_ndgree[b_center_i] * big_border:
continue
            # If the clusters together would be much too big, skip this combination immediately.
            # "too big": more than 0.29 extra,
            # because with a 2/9 error rate that is the most that may be missing from the remaining 7/9 of the edges (9/7 = 1.2857..., rounded to 1.29).
if merged_sizes[s_center] + merged_sizes[b_center] > 1.29 * mean_ndgree[b_center_i]:
continue
for x in range(0,sol_len):
if parents[x, s_center] == parents[x, b_center]:
connectivity[x] = 1
else:
connectivity[x] = 0
            # Compute weight:
mwc = mean_node_weight(s_center, connectivity, vertex_costs, parents, sizes, n)
            # Update the best-fitting node and the minimal mwc if necessary
if mwc == -1:
continue
if mwc < min_mwc:
min_mwc = mwc
best_fit = b_center
        # Connect the cluster to the cluster that is, on average, cheapest for s_center.
rem_union(s_center, best_fit, merged)
        # Because of Rem union-find: update the size directly in the representative of best_fit, which may be visited again later
merged_sizes[best_fit] += merged_sizes[s_center]
return merged
@njit
def calculate_frequencies(s_center, ccs, merged, merged_sizes, mean_ndgree, parents, sol_len, frequency, big_border):
    # Go through every solution
for x in range(sol_len):
        # and every big cluster b_center
for b_center_i in range(len(ccs)):
b_center = ccs[b_center_i]
            # if it is not a big cluster, continue
if merged_sizes[b_center] <= mean_ndgree[b_center_i] * big_border:
continue
            # if the cluster sizes would be incompatible, also continue
if merged_sizes[s_center] + merged_sizes[b_center] > 1.29 * mean_ndgree[b_center_i]:
continue
            # if s_center is directly connected to b_center in solution x: increase the frequency and move on to the next b_center.
if parents[x, s_center] == parents[x, b_center]:
frequency[b_center] += 1
continue
            # otherwise, test whether s_center is connected in merged to any other node from the b_center cluster:
else:
b_center_members = np.where(merged == b_center)[0]
for i in b_center_members:
if parents[x, s_center] == parents[x, i]:
frequency[b_center] += 1
                        # stop the scan as soon as the first connection to b_center is found
break
return frequency
@njit
def repair_merged_v5_rem(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree, big_border):
sol_len = len(solution_costs)
ccs_mndgr = calculate_mean_nodedgr_nd(merged, merged_sizes, node_dgree)
ccs = ccs_mndgr[0]
mean_ndgree = ccs_mndgr[1]
frequency = np.zeros(n, dtype=np.int8)
for s_center_i in range(len(ccs)):
        # s_center must be small enough
s_center = ccs[s_center_i]
if merged_sizes[s_center] > mean_ndgree[s_center_i] * big_border:
continue
        # Empty the array again (after each fill)
for i in range(n):
frequency[i] = 0
        # Detect and connect "mini-clusters" (the root of the cluster is to be connected).
frequency = calculate_frequencies(s_center, ccs, merged, merged_sizes, mean_ndgree, parents, sol_len, frequency, big_border)
best_fit = np.argmax(frequency)
        # Connect the cluster to the cluster that is most frequently connected with s_center.
rem_union(s_center, best_fit, merged)
        # Because of Rem union-find: update the size directly in the representative of best_fit, which may be visited again later
merged_sizes[best_fit] += merged_sizes[s_center]
return merged
@njit
def greedy_find_local_best(local_best, x, y, z, vertex_costs):
cand = np.zeros(3, dtype=np.int64)
cand[0] = local_best[x]
cand[1] = local_best[y]
cand[2] = local_best[z]
costs = np.zeros(3, dtype=np.float64)
costs[0] = vertex_costs[cand[0], x] + vertex_costs[cand[0], y] + vertex_costs[cand[0], z]
costs[1] = vertex_costs[cand[1], x] + vertex_costs[cand[1], y] + vertex_costs[cand[1], z]
costs[2] = vertex_costs[cand[2], x] + vertex_costs[cand[2], y] + vertex_costs[cand[2], z]
best = np.argmin(costs)
return cand[best]
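# greedy_find_local_best picks, among the per-node best solutions of x, y
# and z, the single solution with the smallest summed vertex costs over all
# three nodes (a greedy choice among three candidate solutions).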
def repair_merged_local(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree):
sol_len = len(solution_costs)
big_ccs = get_cluster_centers_big(merged, merged_sizes, node_dgree, 0.3)
small_ccs = get_cluster_centers_small(merged, merged_sizes, node_dgree, 0.3)
second_big_cc = get_second_center(merged, big_ccs)
    # O(n * x log x), because x entries are sorted for each node
    # Possible optimization: only sort the columns whose nodes are cluster roots.
cost_sorted = np.argsort(vertex_costs, axis=0)
local_best = cost_sorted[0]
local_worst = cost_sorted[sol_len-1]
worst_vertex_costs = np.zeros(n, dtype=np.float64)
for i in range(n):
worst_vertex_costs[i] = vertex_costs[local_worst[i], i]
for s_center in small_ccs.keys():
        # Detect and connect "mini-clusters" (the root of the cluster is to be connected);
        # a repair is attempted if the cluster is less than half as large as the node degree indicates, i.e. the local error rate in the problem instance would be above 50%.
best_fit = s_center
min_s_cost = worst_vertex_costs[s_center]
for b_center in big_ccs.keys():
            # If the clusters together would be much too big, skip this combination immediately
if merged_sizes[s_center] + merged_sizes[b_center] > 1.6 * node_dgree[b_center]:
continue
local_i = greedy_find_local_best(local_best, s_center, b_center, second_big_cc[b_center], vertex_costs)
local_solution = parents[local_i]
            # If the node is connected to one of the two cluster representatives in the (greedy) locally best solution, perform the union.
if local_solution[s_center] == local_solution[b_center] or local_solution[s_center] == local_solution[second_big_cc[b_center]]:
if vertex_costs[local_i, s_center] < min_s_cost:
best_fit = b_center
min_s_cost = vertex_costs[local_i, s_center]
        # Connect the cluster to the cluster that, viewed locally, contributed the lowest vertex costs.
union(s_center, best_fit, merged, merged_sizes)
result = np.zeros((2,n), dtype=np.int64)
result[0] = merged
result[1] = merged_sizes
return result
def calculate_mean_nodedgr(merged, merged_sizes, node_dgree):
cluster_center_mnd = {}
for i in range(len(merged)):
if merged[i] in cluster_center_mnd:
cluster_center_mnd[merged[i]] += node_dgree[i]
else:
cluster_center_mnd[merged[i]] = node_dgree[i]
for cc in cluster_center_mnd.keys():
cluster_center_mnd[cc] = cluster_center_mnd[cc] / merged_sizes[cc]
return cluster_center_mnd
@njit
def calculate_mean_nodedgr_nd(merged, merged_sizes, node_dgree):
cluster_centers = np.unique(merged)
cluster_mean_nodedgr = np.zeros(len(cluster_centers), dtype=np.int64)
for c in range(len(cluster_centers)):
for i in range(len(merged)):
if merged[i] == cluster_centers[c]:
cluster_mean_nodedgr[c] += node_dgree[i]
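        # Same integer truncation as in calculate_mean_nodedgr_array above:
        # the array dtype is int64, so the mean is rounded toward zero.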
cluster_mean_nodedgr[c] /= merged_sizes[cluster_centers[c]]
result = np.zeros((2,len(cluster_centers)), dtype=np.int64)
result[0] = cluster_centers
result[1] = cluster_mean_nodedgr
return result
@njit
def get_second_center_nd(merged, cluster_centers):
second_cc = np.zeros(len(cluster_centers), dtype=np.int64)
j = 0
for center in cluster_centers:
        # Walk over the other nodes until one from the same cluster is found
for i in range(len(merged)):
            # we are not looking for the same node
if i == center:
continue
            # but for the first one that has a different index but the same entry:
if merged[i] == merged[center]:
second_cc[j] = i
j += 1
break
return second_cc
def repair_merged_local_v2(merged, merged_sizes, solution_costs, vertex_costs, parents, sizes, n, node_dgree):
sol_len = len(solution_costs)
ccs = calculate_mean_nodedgr(merged, merged_sizes, node_dgree)
second_big_cc = get_second_center(merged, ccs)
    # O(n * x log x), because x entries are sorted for each node
    # Possible optimization: only sort the columns whose nodes are cluster roots.
cost_sorted = np.argsort(vertex_costs, axis=0)
local_best = cost_sorted[0]
local_worst = cost_sorted[sol_len-1]
worst_vertex_costs = np.zeros(n, dtype=np.float64)
for i in range(n):
worst_vertex_costs[i] = vertex_costs[local_worst[i], i]
for s_center in ccs.keys():
        # s_center must be small enough
if merged_sizes[s_center] > ccs[s_center] * 0.5:
continue
        # Detect and connect "mini-clusters" (the root of the cluster is to be connected);
        # a repair is attempted if the cluster is less than half as large as the node degree indicates, i.e. the local error rate in the problem instance would be above 50%.
best_fit = s_center
min_s_cost = worst_vertex_costs[s_center]
for b_center in ccs.keys():
            # b_center must be big enough
if merged_sizes[b_center] <= ccs[b_center] * 0.5:
continue
            # If the clusters together would be much too big, skip this combination immediately
if merged_sizes[s_center] + merged_sizes[b_center] > 1.5 * ccs[b_center]:
continue
local_i = greedy_find_local_best(local_best, s_center, b_center, second_big_cc[b_center], vertex_costs)
local_solution = parents[local_i]
            # If the node is connected to either of the two cluster representatives in the (greedy) locally best solution, it is a candidate for a union.
if local_solution[s_center] == local_solution[b_center] or local_solution[s_center] == local_solution[second_big_cc[b_center]]:
                # If the vertex costs of this local solution are lower than those of the solutions seen so far,
if vertex_costs[local_i, s_center] < min_s_cost:
                    # update the best "union partner" and the minimal observed vertex costs.
best_fit = b_center
min_s_cost = vertex_costs[local_i, s_center]
        # Connect the cluster to the cluster that, viewed locally, contributed the lowest vertex costs.
union(s_center, best_fit, merged, merged_sizes)
result = np.zeros((2,n), dtype=np.int64)
result[0] = merged
result[1] = merged_sizes
return result
def merged_to_file(solutions, costs, filename, missing_weight, n, x, n_merges):
"""
    A function to write the merged solution(s) to a file named like the input instance, ending with _merged_v5.txt.
"""
print_to = filename[:-4] + "_merged_v5.txt"
cost_sorted_j = np.argsort(costs)
with open(print_to, mode="a") as file:
file.write("filename: %s \nmissing_weight: %f \nn: %d \nx (solutions merged): %d\nmerged solutions:\n" % (filename, missing_weight, n, x))
for j in cost_sorted_j:
file.write(f"costs: {costs[j]}\n")
for i in range(0,n):
file.write(f"{solutions[j, i]} ")
def merged_to_file_mini(solutions, filename, missing_weight, n):
"""
    A function to write the merged solution(s) to a file named like the input instance, ending with _merged_mini.txt.
"""
print_to = filename[:-4] + "_merged_mini.txt"
with open(print_to, mode="a") as file:
file.write("filename: %s \nmissing_weight: %f \nn: %d \nmerged solution:\n" % (filename, missing_weight, n))
for i in range(0,n):
file.write(f"{solutions[0, i]} ")
def merged_short_print(solutions, costs, filename, missing_weight, n, x, n_merges):
for j in range(n_merges):
cluster_sizes = {}
for i in range(n):
curr = solutions[j, i]
if curr in cluster_sizes:
cluster_sizes[curr] += 1
else:
cluster_sizes[curr] = 1
print(cluster_sizes)
| 45.370707 | 514 | 0.640826 | 6,340 | 44,917 | 4.338013 | 0.087066 | 0.035123 | 0.02287 | 0.009599 | 0.800822 | 0.778679 | 0.76079 | 0.745301 | 0.729011 | 0.708686 | 0 | 0.017151 | 0.275664 | 44,917 | 989 | 515 | 45.416582 | 0.828185 | 0.286907 | 0 | 0.658228 | 1 | 0.002813 | 0.023635 | 0.001357 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050633 | false | 0 | 0.012658 | 0 | 0.123769 | 0.022504 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
183b1ee5dd320ea629833bb5715a86d3d03990a1 | 3,401 | py | Python | pygpu/tests/test_tools.py | mdda/libgpuarray | 5e9d33b3ad80684158938c2937a81161939992eb | [
"0BSD"
] | null | null | null | pygpu/tests/test_tools.py | mdda/libgpuarray | 5e9d33b3ad80684158938c2937a81161939992eb | [
"0BSD"
] | null | null | null | pygpu/tests/test_tools.py | mdda/libgpuarray | 5e9d33b3ad80684158938c2937a81161939992eb | [
"0BSD"
] | null | null | null | from pygpu.tools import (as_argument, Argument, ArrayArg, ScalarArg,
check_args, Counter, lfu_cache)
from .support import (guard_devsup, rand, check_flags, check_meta, check_all,
context, gen_gpuarray, dtypes_no_complex)
def test_check_args_simple():
ac, ag = gen_gpuarray((50,), 'float32', ctx=context)
bc, bg = gen_gpuarray((50,), 'float32', ctx=context)
n, nd, dims, strs, offsets, contig = check_args((ag, bg))
assert n == 50
assert nd == 1
assert dims == (50,)
assert strs == ((4,), (4,))
assert offsets == (0, 0)
assert contig
ac, ag = gen_gpuarray((50, 1, 20), 'float32', ctx=context)
bc, bg = gen_gpuarray((50, 1, 20), 'float32', ctx=context)
n, nd, dims, strs, offsets, contig = check_args((ag, bg))
assert n == 1000
assert nd == 3
assert dims == (50, 1, 20)
assert strs == ((80, 80, 4), (80, 80, 4))
assert offsets == (0, 0)
assert contig
def test_check_args_collapse_1():
ac, ag = gen_gpuarray((50, 1, 20), 'float32', ctx=context)
bc, bg = gen_gpuarray((50, 1, 20), 'float32', ctx=context)
n, nd, dims, strs, offsets, contig = check_args((ag, bg), collapse=None)
assert n == 1000
assert nd == 3
assert dims == (50, 1, 20)
assert strs == ((80, 80, 4), (80, 80, 4))
assert offsets == (0, 0)
assert contig
n, nd, dims, strs, offsets, contig = check_args((ag, bg), collapse=True)
assert n == 1000
assert nd == 1
assert dims == (1000,)
assert strs == ((4,), (4,))
assert offsets == (0, 0)
assert contig
def test_check_args_collapse_2():
ac, ag = gen_gpuarray((50, 1, 20), 'float32', ctx=context, sliced=2,
offseted_inner=True)
bc, bg = gen_gpuarray((50, 1, 20), 'float32', ctx=context)
n, nd, dims, strs, offsets, contig = check_args((ag, bg), collapse=True)
assert n == 1000
assert nd == 2
assert dims == (50, 20)
assert strs == ((168, 4), (80, 4))
assert offsets == (4, 0)
assert not contig
def test_check_args_collapse_3():
ac, ag = gen_gpuarray((50, 2, 10), 'float32', ctx=context, sliced=2,
offseted_outer=True)
bc, bg = gen_gpuarray((50, 2, 10), 'float32', ctx=context)
n, nd, dims, strs, offsets, contig = check_args((ag, bg), collapse=True)
assert n == 1000
assert nd == 2
assert dims == (50, 20)
assert strs == ((160, 4), (80, 4))
assert offsets == (80, 0)
assert not contig
def test_check_args_broadcast_1():
ac, ag = gen_gpuarray((1,), 'float32', ctx=context)
bc, bg = gen_gpuarray((50,), 'float32', ctx=context)
n, nd, dims, strs, offsets, contig = check_args((ag, bg), broadcast=True)
assert n == 50
assert nd == 1
assert dims == (50,)
assert strs == ((0,), (4,))
assert offsets == (0, 0)
assert not contig
def test_check_args_broadcast_2():
ac, ag = gen_gpuarray((50, 1, 20), 'float32', ctx=context, sliced=2,
offseted_inner=True)
bc, bg = gen_gpuarray((50, 1, 20), 'float32', ctx=context)
n, nd, dims, strs, offsets, contig = check_args((ag, bg), collapse=True,
broadcast=True)
assert n == 1000
assert nd == 2
assert dims == (50, 20)
assert strs == ((168, 4), (80, 4))
assert offsets == (4, 0)
assert not contig
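def test_check_args_broadcast_collapse_sketch():
    # A minimal extra sketch (not from the original suite): broadcasting a
    # leading length-1 dimension while collapsing. The expected values below
    # are assumptions derived from the broadcast/collapse behaviour asserted
    # in the tests above (0-strides on broadcast dims block collapsing).
    ac, ag = gen_gpuarray((1, 20), 'float32', ctx=context)
    bc, bg = gen_gpuarray((50, 20), 'float32', ctx=context)
    n, nd, dims, strs, offsets, contig = check_args((ag, bg), collapse=True,
                                                    broadcast=True)
    assert n == 1000
    assert nd == 2
    assert dims == (50, 20)
    assert strs == ((0, 4), (80, 4))
    assert offsets == (0, 0)
    assert not contig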
| 35.061856 | 77 | 0.575713 | 485 | 3,401 | 3.917526 | 0.131959 | 0.071053 | 0.125263 | 0.046316 | 0.872105 | 0.844211 | 0.808947 | 0.808947 | 0.774737 | 0.731579 | 0 | 0.08945 | 0.26698 | 3,401 | 96 | 78 | 35.427083 | 0.672684 | 0 | 0 | 0.72619 | 0 | 0 | 0.028815 | 0 | 0 | 0 | 0 | 0 | 0.571429 | 1 | 0.071429 | true | 0 | 0.02381 | 0 | 0.095238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
186694b5bdc85a2951d1aef083b7589131f49705 | 22,925 | py | Python | __temp_migrations/Online_Coalition_Game_Alternative_Offer/0001_initial.py | JoeriWissink/OnlineCoalitionGame | a61126319dd3d28b96279ae1b4af6a1cc0ba1d93 | [
"MIT"
] | 1 | 2021-03-29T17:35:58.000Z | 2021-03-29T17:35:58.000Z | __temp_migrations/Online_Coalition_Game_Alternative_Offer/0001_initial.py | JoeriWissink/OnlineCoalitionGame | a61126319dd3d28b96279ae1b4af6a1cc0ba1d93 | [
"MIT"
] | null | null | null | __temp_migrations/Online_Coalition_Game_Alternative_Offer/0001_initial.py | JoeriWissink/OnlineCoalitionGame | a61126319dd3d28b96279ae1b4af6a1cc0ba1d93 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.12 on 2020-10-30 09:59
from django.db import migrations, models
import django.db.models.deletion
import otree.db.idmap
import otree.db.models
class Migration(migrations.Migration):
initial = True
dependencies = [
('otree', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Group',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('id_in_subsession', otree.db.models.PositiveIntegerField(db_index=True, null=True)),
('round_number', otree.db.models.PositiveIntegerField(db_index=True, null=True)),
('proposed_coalition_player_A', otree.db.models.StringField(max_length=10000, null=True)),
('proposed_coalition_player_B', otree.db.models.StringField(max_length=10000, null=True)),
('proposed_coalition_player_C', otree.db.models.StringField(max_length=10000, null=True)),
('allocation_A_to_A', otree.db.models.IntegerField(null=True)),
('allocation_A_to_B', otree.db.models.IntegerField(null=True)),
('allocation_A_to_C', otree.db.models.IntegerField(null=True)),
('allocation_B_to_A', otree.db.models.IntegerField(null=True)),
('allocation_B_to_B', otree.db.models.IntegerField(null=True)),
('allocation_B_to_C', otree.db.models.IntegerField(null=True)),
('allocation_C_to_A', otree.db.models.IntegerField(null=True)),
('allocation_C_to_B', otree.db.models.IntegerField(null=True)),
('allocation_C_to_C', otree.db.models.IntegerField(null=True)),
('selected_coalition_name_player_A', otree.db.models.StringField(max_length=10000, null=True)),
('selected_coalition_name_player_B', otree.db.models.StringField(max_length=10000, null=True)),
('selected_coalition_name_player_C', otree.db.models.StringField(max_length=10000, null=True)),
('selected_coalition_allocation_A_player_A', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_B_player_A', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_C_player_A', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_A_player_B', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_B_player_B', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_C_player_B', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_A_player_C', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_B_player_C', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_C_player_C', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_name_player_A', otree.db.models.StringField(max_length=10000, null=True)),
('tentative_selected_coalition_name_player_B', otree.db.models.StringField(max_length=10000, null=True)),
('tentative_selected_coalition_name_player_C', otree.db.models.StringField(max_length=10000, null=True)),
('tentative_selected_coalition_allocation_A_player_A', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_B_player_A', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_C_player_A', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_A_player_B', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_B_player_B', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_C_player_B', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_A_player_C', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_B_player_C', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_C_player_C', otree.db.models.IntegerField(null=True)),
('tentative_coalition_formed', otree.db.models.BooleanField(choices=[(True, 'Yes'), (False, 'No')], null=True)),
('tentative_formed_coalition_name', otree.db.models.StringField(max_length=10000, null=True)),
('not_in_tentative', otree.db.models.StringField(max_length=10000, null=True)),
('tentative_payoff_A', otree.db.models.IntegerField(null=True)),
('tentative_payoff_B', otree.db.models.IntegerField(null=True)),
('tentative_payoff_C', otree.db.models.IntegerField(null=True)),
('counter_proposed_coalition_name', otree.db.models.StringField(max_length=10000, null=True)),
('counter_allocation_to_player_A', otree.db.models.IntegerField(null=True)),
('counter_allocation_to_player_B', otree.db.models.IntegerField(null=True)),
('counter_allocation_to_player_C', otree.db.models.IntegerField(null=True)),
('counter_proposed_coalition_player_A', otree.db.models.StringField(max_length=10000, null=True)),
('counter_proposed_coalition_player_B', otree.db.models.StringField(max_length=10000, null=True)),
('counter_proposed_coalition_player_C', otree.db.models.StringField(max_length=10000, null=True)),
('counter_allocation_A_to_A', otree.db.models.IntegerField(null=True)),
('counter_allocation_A_to_B', otree.db.models.IntegerField(null=True)),
('counter_allocation_A_to_C', otree.db.models.IntegerField(null=True)),
('counter_allocation_B_to_A', otree.db.models.IntegerField(null=True)),
('counter_allocation_B_to_B', otree.db.models.IntegerField(null=True)),
('counter_allocation_B_to_C', otree.db.models.IntegerField(null=True)),
('counter_allocation_C_to_A', otree.db.models.IntegerField(null=True)),
('counter_allocation_C_to_B', otree.db.models.IntegerField(null=True)),
('counter_allocation_C_to_C', otree.db.models.IntegerField(null=True)),
('coalition_ratified', otree.db.models.StringField(max_length=10000, null=True)),
('coalition_ratified_A', otree.db.models.StringField(max_length=10000, null=True)),
('coalition_ratified_B', otree.db.models.StringField(max_length=10000, null=True)),
('coalition_ratified_C', otree.db.models.StringField(max_length=10000, null=True)),
('coalition_formed', otree.db.models.BooleanField(choices=[(True, 'Yes'), (False, 'No')], null=True)),
('formed_coalition_name', otree.db.models.StringField(max_length=10000, null=True)),
('coalition_formed_name', otree.db.models.StringField(max_length=10000, null=True)),
('payoff_A', otree.db.models.IntegerField(null=True)),
('payoff_B', otree.db.models.IntegerField(null=True)),
('payoff_C', otree.db.models.IntegerField(null=True)),
('new_tentative_formed_coalition_name', otree.db.models.StringField(max_length=10000, null=True)),
('not_in_new_tentative', otree.db.models.StringField(max_length=10000, null=True)),
('new_tentative_payoff_A', otree.db.models.IntegerField(null=True)),
('new_tentative_payoff_B', otree.db.models.IntegerField(null=True)),
('new_tentative_payoff_C', otree.db.models.IntegerField(null=True)),
('new_tentative_coalition_formed', otree.db.models.BooleanField(choices=[(True, 'Yes'), (False, 'No')], null=True)),
('round_begin', otree.db.models.StringField(max_length=10000, null=True)),
('round_end', otree.db.models.StringField(max_length=10000, null=True)),
('session', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='online_coalition_game_alternative_offer_group', to='otree.Session')),
],
options={
'db_table': 'Online_Coalition_Game_Alternative_Offer_group',
},
bases=(models.Model, otree.db.idmap.GroupIDMapMixin),
),
migrations.CreateModel(
name='Subsession',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('round_number', otree.db.models.PositiveIntegerField(db_index=True, null=True)),
('resources_AB', otree.db.models.IntegerField(null=True)),
('resources_AC', otree.db.models.IntegerField(null=True)),
('resources_BC', otree.db.models.IntegerField(null=True)),
('resources_ABC', otree.db.models.IntegerField(null=True)),
('session', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='online_coalition_game_alternative_offer_subsession', to='otree.Session')),
],
options={
'db_table': 'Online_Coalition_Game_Alternative_Offer_subsession',
},
bases=(models.Model, otree.db.idmap.SubsessionIDMapMixin),
),
migrations.CreateModel(
name='Player',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('id_in_group', otree.db.models.PositiveIntegerField(db_index=True, null=True)),
('_payoff', otree.db.models.CurrencyField(default=0, null=True)),
('round_number', otree.db.models.PositiveIntegerField(db_index=True, null=True)),
('_role', otree.db.models.StringField(max_length=10000, null=True)),
('completion_code', otree.db.models.StringField(max_length=10000, null=True)),
('score', otree.db.models.IntegerField(null=True)),
('position', otree.db.models.StringField(max_length=10000, null=True)),
('resources', otree.db.models.IntegerField(null=True)),
('comprehension_money', otree.db.models.PositiveIntegerField(null=True)),
('comprehension_money_fail', otree.db.models.IntegerField(null=True)),
('comprehension_exclusion', otree.db.models.PositiveIntegerField(choices=[[0, 'This depends on which offer is accepted'], [1, 'This party does not receive any money']], null=True)),
('comprehension_exclusion_fail', otree.db.models.IntegerField(null=True)),
('comprehension_coalitions', otree.db.models.PositiveIntegerField(null=True)),
('comprehension_coalitions_fail', otree.db.models.IntegerField(null=True)),
('proposed_coalition', otree.db.models.StringField(max_length=3, null=True)),
('selected_coalition', otree.db.models.StringField(max_length=10000, null=True)),
('selected_coalition_name', otree.db.models.StringField(max_length=3, null=True)),
('selected_coalition_allocation_A', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_B', otree.db.models.IntegerField(null=True)),
('selected_coalition_allocation_C', otree.db.models.IntegerField(null=True)),
('allocate_to_player_A', otree.db.models.PositiveIntegerField(blank=True, null=True)),
('allocate_to_player_B', otree.db.models.PositiveIntegerField(blank=True, null=True)),
('allocate_to_player_C', otree.db.models.PositiveIntegerField(blank=True, null=True)),
('tentative_selected_coalition', otree.db.models.StringField(max_length=10000, null=True)),
('tentative_selected_coalition_name', otree.db.models.StringField(max_length=10000, null=True)),
('tentative_selected_coalition_allocation_A', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_B', otree.db.models.IntegerField(null=True)),
('tentative_selected_coalition_allocation_C', otree.db.models.IntegerField(null=True)),
('counter_proposed_coalition', otree.db.models.StringField(max_length=10000, null=True)),
('counter_allocate_to_player_A', otree.db.models.PositiveIntegerField(blank=True, null=True)),
('counter_allocate_to_player_B', otree.db.models.PositiveIntegerField(blank=True, null=True)),
('counter_allocate_to_player_C', otree.db.models.PositiveIntegerField(blank=True, null=True)),
('tentative_payoff', otree.db.models.IntegerField(null=True)),
('ratify_coalition', otree.db.models.StringField(max_length=10000, null=True)),
('money', otree.db.models.IntegerField(null=True)),
('tslider1', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider2', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider3', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider4', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider5', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider6', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider7', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider8', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider9', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider10', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider11', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider12', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider13', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider14', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider15', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider16', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider17', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider18', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider19', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider20', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('tslider21', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider1', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider2', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider3', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider4', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider5', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider6', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider7', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider8', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider9', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider10', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider11', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider12', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider13', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider14', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider15', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider16', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider17', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider18', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider19', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider20', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider21', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider22', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider23', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider24', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider25', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider26', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider27', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider28', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider29', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider30', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider31', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider32', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider33', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider34', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider35', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider36', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider37', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider38', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider39', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider40', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider41', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider42', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider43', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider44', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider45', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider46', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider47', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider48', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider49', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider50', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider51', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider52', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider53', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider54', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider55', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider56', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider57', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider58', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider59', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider60', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider61', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider62', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('slider63', otree.db.models.IntegerField(default=0, null=True, verbose_name=False)),
('group', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Online_Coalition_Game_Alternative_Offer.Group')),
('participant', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='online_coalition_game_alternative_offer_player', to='otree.Participant')),
('session', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='online_coalition_game_alternative_offer_player', to='otree.Session')),
('subsession', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Online_Coalition_Game_Alternative_Offer.Subsession')),
],
options={
'db_table': 'Online_Coalition_Game_Alternative_Offer_player',
},
bases=(models.Model, otree.db.idmap.PlayerIDMapMixin),
),
migrations.AddField(
model_name='group',
name='subsession',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Online_Coalition_Game_Alternative_Offer.Subsession'),
),
]
| 87.5 | 197 | 0.66024 | 2,643 | 22,925 | 5.519485 | 0.078698 | 0.115712 | 0.180902 | 0.255347 | 0.898821 | 0.898135 | 0.880381 | 0.858651 | 0.841376 | 0.804017 | 0 | 0.023054 | 0.197732 | 22,925 | 261 | 198 | 87.835249 | 0.770117 | 0.002007 | 0 | 0.086614 | 1 | 0 | 0.194825 | 0.122612 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.015748 | 0 | 0.031496 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
187fae7c30fb8301966225349b7b4a34d54a54ff | 50,022 | py | Python | 2015/07/ipython_log.py | pschulam/Notebook | 3404ce01a4ebdf23216ff01512a8f84b4f7758aa | [
"MIT"
] | null | null | null | 2015/07/ipython_log.py | pschulam/Notebook | 3404ce01a4ebdf23216ff01512a8f84b4f7758aa | [
"MIT"
] | null | null | null | 2015/07/ipython_log.py | pschulam/Notebook | 3404ce01a4ebdf23216ff01512a8f84b4f7758aa | [
"MIT"
] | null | null | null | # IPython log file
import sys
sys.path.append('/Users/pschulam/Git/mypy')
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import nips15
import online
import loglin
get_ipython().magic('matplotlib inline')
folds_dir = 'models/jmlr/folds'
def load_model(marker, fold, folds_dir=folds_dir):
param_dir = os.path.join(folds_dir, marker, '{:02d}'.format(fold), 'param')
return nips15.NipsModel.from_directory(param_dir)
demographic = ['female', 'afram']
molecular = ['aca', 'scl']
pfvc_spec = {'t':'years_seen_full', 'y':'pfvc', 'x1':demographic, 'x2':demographic + molecular}
pfvc = pd.read_csv('data/benchmark_pfvc.csv')
pfvc_pd = [nips15.PatientData.from_tbl(tbl, **pfvc_spec) for _, tbl in pfvc.groupby('ptid')]
tss_spec = {'t':'years_seen', 'y':'tss', 'x1':demographic, 'x2':demographic}
tss = pd.read_csv('data/benchmark_tss.csv')
tss_match = ['ptid'] + tss_spec['x1']
tss = pd.merge(pfvc[tss_match], tss, 'left', tss_match)
tss_pd = [nips15.PatientData.from_tbl(tbl, **tss_spec) for _, tbl in tss.groupby('ptid')]
pdlco_spec = {'t':'years_seen', 'y':'pdlco', 'x1':demographic, 'x2':demographic}
pdlco = pd.read_csv('data/benchmark_pdc.csv')
pdlco_match = ['ptid'] + pdlco_spec['x1']
pdlco = pd.merge(pfvc[pdlco_match], pdlco, 'left', pdlco_match)
pdlco_pd = [nips15.PatientData.from_tbl(tbl, **pdlco_spec) for _, tbl in pdlco.groupby('ptid')]
pv1_spec = {'t':'years_seen', 'y':'pfev1', 'x1':demographic, 'x2':demographic}
pv1 = pd.read_csv('data/benchmark_pv1.csv')
pv1_match = ['ptid'] + pv1_spec['x1']
pv1 = pd.merge(pfvc[pv1_match], pv1, 'left', pv1_match)
pv1_pd = [nips15.PatientData.from_tbl(tbl, **pv1_spec) for _, tbl in pv1.groupby('ptid')]
sp_spec = {'t':'years_seen', 'y':'rvsp', 'x1':demographic, 'x2':demographic}
sp = pd.read_csv('data/benchmark_sp.csv')
sp_match = ['ptid'] + sp_spec['x1']
sp = pd.merge(pfvc[sp_match], sp, 'left', sp_match)
sp_pd = [nips15.PatientData.from_tbl(tbl, **sp_spec) for _, tbl in sp.groupby('ptid')]
get_ptids = lambda pd: [p.ptid for p in pd]
pfvc_df = pd.DataFrame({'ptid': get_ptids(pfvc_pd), 'pfvc' : pfvc_pd}).set_index('ptid')
tss_df = pd.DataFrame({'ptid': get_ptids(tss_pd), 'tss' : tss_pd}).set_index('ptid')
pdlco_df = pd.DataFrame({'ptid': get_ptids(pdlco_pd), 'pdlco': pdlco_pd}).set_index('ptid')
pv1_df = pd.DataFrame({'ptid': get_ptids(pv1_pd), 'pv1' : pdlco_pd}).set_index('ptid')
sp_df = pd.DataFrame({'ptid': get_ptids(sp_pd), 'rvsp' : sp_pd}).set_index('ptid')
folds_df = pfvc.loc[:, ['ptid', 'fold']].drop_duplicates().set_index('ptid')
patient_data = pd.concat([folds_df, pfvc_df, tss_df, pdlco_df, pv1_df, sp_df], axis=1, join='inner')
model_names = ['pfvc', 'tss', 'pdc', 'pv1']
col_names = ['pfvc', 'tss', 'pdlco', 'pv1']
fold = 1
censor = 1.0
models = [load_model(m, fold) for m in model_names]
test = patient_data['fold'].values == fold
pfvc_ll = [history_likel(models[0], d.truncate(censor)) for d in patient_data['pfvc'][test]]
test = patient_data['fold'].values == fold
pfvc_ll = [loglin.history_likel(models[0], d.truncate(censor)) for d in patient_data['pfvc'][test]]
test = patient_data['fold'].values == fold
pfvc_ll = [loglin.history_likelihood(models[0], d.truncate(censor)) for d in patient_data['pfvc'][test]]
test = patient_data['fold'].values == fold
pfvc_ll = [loglin.history_likelihood(models[0], d.truncate(censor).unpack())
for d in patient_data['pfvc'][test]]
pfvc_ll[0]
pfvc_ll[1]
pfvc_ll[2]
pfvc_ll[3]
loglin.configuration_features((1, 4), [(2, 4), (0, 4)])
import imp
imp.reload(loglin)
loglin.configuration_features((1, 4), [(2, 4), (0, 4)])
loglin.configuration_score([(0, 4), (1, 4), (2, 4)])
features = loglin.configuration_features((0, 4), [(1, 4), (2, 4)])
weights = np.random.normal(size=features.shape)
loglin.configuration_score([(0, 4), (1, 4), (2, 4)], weights)
import imp
imp.reload(loglin)
features = loglin.configuration_features((0, 4), [(1, 4), (2, 4)])
weights = np.random.normal(size=features.shape)
loglin.configuration_score([(0, 4), (1, 4), (2, 4)], weights)
model_names = ['pfvc', 'tss', 'pdc', 'pv1']
col_names = ['pfvc', 'tss', 'pdlco', 'pv1']
fold = 1
censor = 1.0
models = [load_model(m, fold) for m in model_names]
def make_train_examples(patient_data, col_names, models, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, history in enumerate(marker_histories):
X = []
for h in history:
d = history.truncate(censor_time).unpack()
X.append(d)
p = models[0].posterior(*history[0].unpack())
y = np.argmax(p)
ex = (X, y)
examples.append(ex)
return examples
def make_test_examples(patient_data, col_names, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
    for i, history in enumerate(marker_histories):
X = []
for h in history:
d = history.truncate(censor_time).unpack()
X.append(d)
return examples
def make_train_examples(patient_data, col_names, models, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, history in enumerate(marker_histories):
X = []
for h in history:
d = history.truncate(censor_time).unpack()
X.append(d)
p = models[0].posterior(*history[0].unpack())
y = np.argmax(p)
ex = (X, y)
examples.append(ex)
return examples
def make_test_examples(patient_data, col_names, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, history in enumerate(marker_histories):
X = []
for h in history:
d = history.truncate(censor_time).unpack()
X.append(d)
return examples
test = patient_data['fold'].values == fold
train = ~test
train_data = make_train_examples(patient_data[train], col_names, models, censor)
test_data = make_test_examples(patient_data[test], col_names, censor)
def make_train_examples(patient_data, col_names, models, censor_time):
marker_histories = zip(*[list(patient_data[n]) for n in col_names])
examples = []
for i, history in enumerate(marker_histories):
X = []
for h in history:
d = history.truncate(censor_time).unpack()
X.append(d)
p = models[0].posterior(*history[0].unpack())
y = np.argmax(p)
ex = (X, y)
examples.append(ex)
return examples
def make_test_examples(patient_data, col_names, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, history in enumerate(marker_histories):
X = []
for h in history:
d = history.truncate(censor_time).unpack()
X.append(d)
return examples
test = patient_data['fold'].values == fold
train = ~test
train_data = make_train_examples(patient_data[train], col_names, models, censor)
test_data = make_test_examples(patient_data[test], col_names, censor)
mh = [patient_data[n] for n in col_names]
mh
zip(*mh)
list(zip(*mh))[0]
list(zip(*mh))[0][0]
list(zip(*mh))[0][0].unpack()
list(enumerate(zip(*mh)))[0]
def make_train_examples(patient_data, col_names, models, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, histories in enumerate(marker_histories):
X = []
for h in histories:
d = h.truncate(censor_time).unpack()
X.append(d)
p = models[0].posterior(*history[0].unpack())
y = np.argmax(p)
ex = (X, y)
examples.append(ex)
return examples
def make_test_examples(patient_data, col_names, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, history in enumerate(marker_histories):
X = []
for h in history:
d = h.truncate(censor_time).unpack()
X.append(d)
return examples
test = patient_data['fold'].values == fold
train = ~test
train_data = make_train_examples(patient_data[train], col_names, models, censor)
test_data = make_test_examples(patient_data[test], col_names, censor)
def make_train_examples(patient_data, col_names, models, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, histories in enumerate(marker_histories):
X = []
for h in histories:
d = h.truncate(censor_time).unpack()
X.append(d)
p = models[0].posterior(*histories[0].unpack())
y = np.argmax(p)
ex = (X, y)
examples.append(ex)
return examples
def make_test_examples(patient_data, col_names, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, histories in enumerate(marker_histories):
X = []
for h in histories:
d = h.truncate(censor_time).unpack()
X.append(d)
return examples
test = patient_data['fold'].values == fold
train = ~test
train_data = make_train_examples(patient_data[train], col_names, models, censor)
test_data = make_test_examples(patient_data[test], col_names, censor)
train_data[0]
X, y = train_data[0]
y
X
tss
tss.shape
tss.drop_duplicates()
tss.drop_duplicates().shape
demographic = ['female', 'afram']
molecular = ['aca', 'scl']
pfvc_spec = {'t':'years_seen_full', 'y':'pfvc', 'x1':demographic, 'x2':demographic + molecular}
pfvc = pd.read_csv('data/benchmark_pfvc.csv')
pfvc_pd = [nips15.PatientData.from_tbl(tbl, **pfvc_spec) for _, tbl in pfvc.groupby('ptid')]
tss_spec = {'t':'years_seen', 'y':'tss', 'x1':demographic, 'x2':demographic}
tss = pd.read_csv('data/benchmark_tss.csv')
tss_match = ['ptid'] + tss_spec['x1']
tss = pd.merge(pfvc[tss_match], tss, 'left', tss_match).drop_duplicates()
tss_pd = [nips15.PatientData.from_tbl(tbl, **tss_spec) for _, tbl in tss.groupby('ptid')]
pdlco_spec = {'t':'years_seen', 'y':'pdlco', 'x1':demographic, 'x2':demographic}
pdlco = pd.read_csv('data/benchmark_pdc.csv')
pdlco_match = ['ptid'] + pdlco_spec['x1']
pdlco = pd.merge(pfvc[pdlco_match], pdlco, 'left', pdlco_match).drop_duplicates()
pdlco_pd = [nips15.PatientData.from_tbl(tbl, **pdlco_spec) for _, tbl in pdlco.groupby('ptid')]
pv1_spec = {'t':'years_seen', 'y':'pfev1', 'x1':demographic, 'x2':demographic}
pv1 = pd.read_csv('data/benchmark_pv1.csv')
pv1_match = ['ptid'] + pv1_spec['x1']
pv1 = pd.merge(pfvc[pv1_match], pv1, 'left', pv1_match).drop_duplicates()
pv1_pd = [nips15.PatientData.from_tbl(tbl, **pv1_spec) for _, tbl in pv1.groupby('ptid')]
sp_spec = {'t':'years_seen', 'y':'rvsp', 'x1':demographic, 'x2':demographic}
sp = pd.read_csv('data/benchmark_sp.csv')
sp_match = ['ptid'] + sp_spec['x1']
sp = pd.merge(pfvc[sp_match], sp, 'left', sp_match).drop_duplicates()
sp_pd = [nips15.PatientData.from_tbl(tbl, **sp_spec) for _, tbl in sp.groupby('ptid')]
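# (note) Each auxiliary marker table is left-joined onto the PFVC cohort on
# ptid plus the shared demographics, so every PFVC patient is kept even when a
# marker was never measured; drop_duplicates() now guards against the fan-out
# that inflated tss.shape earlier in this session.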
get_ptids = lambda pd: [p.ptid for p in pd]
pfvc_df = pd.DataFrame({'ptid': get_ptids(pfvc_pd), 'pfvc' : pfvc_pd}).set_index('ptid')
tss_df = pd.DataFrame({'ptid': get_ptids(tss_pd), 'tss' : tss_pd}).set_index('ptid')
pdlco_df = pd.DataFrame({'ptid': get_ptids(pdlco_pd), 'pdlco': pdlco_pd}).set_index('ptid')
pv1_df = pd.DataFrame({'ptid': get_ptids(pv1_pd), 'pv1' : pdlco_pd}).set_index('ptid')
sp_df = pd.DataFrame({'ptid': get_ptids(sp_pd), 'rvsp' : sp_pd}).set_index('ptid')
tss.shape
folds_df = pfvc.loc[:, ['ptid', 'fold']].drop_duplicates().set_index('ptid')
patient_data = pd.concat([folds_df, pfvc_df, tss_df, pdlco_df, pv1_df, sp_df], axis=1, join='inner')
model_names = ['pfvc', 'tss', 'pdc', 'pv1']
col_names = ['pfvc', 'tss', 'pdlco', 'pv1']
fold = 1
censor = 1.0
models = [load_model(m, fold) for m in model_names]
def make_train_examples(patient_data, col_names, models, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, histories in enumerate(marker_histories):
X = []
for h in histories:
d = h.truncate(censor_time).unpack()
X.append(d)
p = models[0].posterior(*histories[0].unpack())
y = np.argmax(p)
ex = (X, y)
examples.append(ex)
return examples
def make_test_examples(patient_data, col_names, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, histories in enumerate(marker_histories):
X = []
for h in histories:
d = h.truncate(censor_time).unpack()
X.append(d)
return examples
test = patient_data['fold'].values == fold
train = ~test
train_data = make_train_examples(patient_data[train], col_names, models, censor)
test_data = make_test_examples(patient_data[test], col_names, censor)
X, y = train_data[0]
X
y
X = test_data[0]
def make_train_examples(patient_data, col_names, models, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, histories in enumerate(marker_histories):
X = []
for h in histories:
d = h.truncate(censor_time).unpack()
X.append(d)
p = models[0].posterior(*histories[0].unpack())
y = np.argmax(p)
ex = (X, y)
examples.append(ex)
return examples
def make_test_examples(patient_data, col_names, censor_time):
marker_histories = zip(*[patient_data[n] for n in col_names])
examples = []
for i, histories in enumerate(marker_histories):
X = []
for h in histories:
d = h.truncate(censor_time).unpack()
X.append(d)
examples.append(X)
return examples
test = patient_data['fold'].values == fold
train = ~test
train_data = make_train_examples(patient_data[train], col_names, models, censor)
test_data = make_test_examples(patient_data[test], col_names, censor)
test_data[0]
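# (note) Each training example is a pair (X, y): X holds the censored,
# unpacked histories of the four markers, and y is the argmax of the
# single-marker PFVC model's subtype posterior, used as a supervision signal.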
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
conditional_model.subtypes_features([0, 0, 0, 0])
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
conditional_model.subtypes_features([0, 0, 0, 0])
conditional_model.subtypes_features([0, 0, 0, 0]).shape
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
conditional_model.initial_weights()
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
conditional_model.initial_weights()
conditional_model.initial_weights()
conditional_model.initial_weights()
conditional_model.initial_weights()
conditional_model.initial_weights()
conditional_model.initial_weights()
conditional_model.initial_weights()
conditional_model.initial_weights()
conditional_model.initial_weights()
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
weights
conditional_model.history_score(train_data[0][0], [0, 0, 0, 0])
subtypes = [0, 0, 0, 0]
s1 = conditional_model.history_score(train_data[0][0], subtypes)
s2 = conditional_model.subtypes_score(subtypes, weights)
subtypes = [0, 0, 0, 0]
s1 = conditional_model.history_score(train_data[0][0], subtypes); print(s1)
s2 = conditional_model.subtypes_score(subtypes, weights); print(s2)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.partition_function(train_data[0][0], weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.partition_function(train_data[0][0], weights)
[conditional_model.marginal_score(train_data[0][0], z, weights) for z in range(8)]
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.partition_function(train_data[0][0], weights)
[conditional_model.marginal_score(train_data[0][0], z, weights) for z in range(8)]
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
[conditional_model.marginal_score(train_data[0][0], z, weights) for z in range(8)]
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
[conditional_model.marginal_score(train_data[0][0], z, weights) for z in range(8)]
sum(Out[107])
import imp
imp.reload(loglin)
conditional_model.partition_function(train_data[0][0], weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.partition_function(train_data[0][0], weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.partition_function(train_data[0][0], weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.partition_function(train_data[0][0], weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.partition_function(train_data[0][0], weights)
conditional_model.marginal_score(train_data[0][0], 0, weights)
np.exp(Out[123])
np.exp(Out[123] - np.log(Out[122]))
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.proba(train_data[0][0], weights)
np.round(conditional_model.proba(train_data[0][0], weights), 2)
p = conditional_model.proba(train_data[0][0], weights)
p.sum()
import imp
imp.reload(loglin)
p = conditional_model.proba(train_data[0][0], weights)
p.shape
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
p = conditional_model.proba(train_data[0][0], weights)
p.shape
p[0, 0, 0, 0]
p.sum()
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
p = conditional_model.proba(train_data[0][0], weights)
p = conditional_model.log_proba(train_data[0][0], weights)
p.shape
from scipy.misc import logsumexp
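# note: scipy.misc.logsumexp later moved to scipy.special (the scipy.misc
# version has been removed in modern SciPy). logsumexp over axes (1, 2, 3) of
# the joint log-probability table yields the unnormalized log-marginal of the
# first model's subtype variable.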
logsumexp(p, axis=(1, 2, 3))
from scipy.misc import logsumexp
pmarg = logsumexp(p, axis=(1, 2, 3))
from scipy.misc import logsumexp
pmarg = logsumexp(p, axis=(1, 2, 3))
np.exp(pmarg - logsumexp(pmarg))
from scipy.misc import logsumexp
pmarg = logsumexp(p, axis=(1, 2, 3))
np.round(np.exp(pmarg - logsumexp(pmarg)), 2)
from scipy.misc import logsumexp
pmarg = logsumexp(p, axis=(1, 2, 3))
np.round(np.exp(pmarg - logsumexp(pmarg)), 3)
from scipy.misc import logsumexp
pmarg = logsumexp(p, axis=(1, 2, 3))
np.round(np.exp(pmarg - logsumexp(pmarg)), 4)
from scipy.misc import logsumexp
pmarg = logsumexp(p, axis=(1, 2, 3))
np.round(np.exp(pmarg - logsumexp(pmarg)), 3)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
p = conditional_model.log_proba(train_data[0][0], weights)
m = conditional_model.marg_log_proba(train_data[0][0], weights)
get_ipython().magic('debug ')
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
m = conditional_model.marg_log_proba(train_data[0][0], weights)
np.exp(m)
np.round(np.exp(m), 3)
np.apply_along_axis(np.sum, 0, p)
np.apply_over_axes(np.sum, p, 0)
np.apply_over_axes(np.sum, p, 0).shape
np.apply_along_axis(np.sum, 0, p).shape
get_ipython().magic('pinfo np.swapaxes')
np.ndim
p.ndim
p.shape
tuple(range(p.ndim))
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
p = conditional_model.log_proba(train_data[0][0], weights)
loglin.marginalize(p, 0)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
p = conditional_model.log_proba(train_data[0][0], weights)
loglin.marginalize(p, 0)
loglin.marginalize(p, [0])
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
p = conditional_model.log_proba(train_data[0][0], weights)
loglin.marginalize(p, [0], logsumexp)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
p = conditional_model.log_proba(train_data[0][0], weights)
loglin.marginalize(p, [0], logsumexp)
np.round(np.exp(loglin.marginalize(p, [0], logsumexp)), 3)
loglin.marginalize(p, [0, 1], logsumexp)
np.exp(loglin.marginalize(p, [0, 1], logsumexp))
np.round(np.exp(loglin.marginalize(p, [0, 1], logsumexp)), 2)
np.exp(loglin.marginalize(p, [0, 1], logsumexp))
np.exp(loglin.marginalize(p, [0, 1], logsumexp)).sum()
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.feature_expectations(train_data[0][0], weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.feature_expectations(train_data[0][0], weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.feature_expectations(train_data[0][0], weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
conditional_model.feature_expectations(train_data[0][0], weights)
np.round(Out[209], 3)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
X, y = train_data[0]
e1 = conditional_model.conditional_expectations(X, y, weights)
e2 = conditional_model.feature_expectations(X, weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
X, y = train_data[0]
e1 = conditional_model.conditional_expectations(X, y, weights)
e2 = conditional_model.feature_expectations(X, weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
X, y = train_data[0]
e1 = conditional_model.conditional_expectations(X, y, weights)
e2 = conditional_model.feature_expectations(X, weights)
import imp
imp.reload(loglin)
conditional_model = loglin.ConditionalModel(models)
weights = conditional_model.initial_weights(0)
X, y = train_data[0]
e1 = conditional_model.conditional_expectations(X, y, weights)
e2 = conditional_model.feature_expectations(X, weights)
e1
np.round(e1, 2)
e1 - e2
def obj(w, train_data=train_data, penalty=1.0):
    ll = [conditional_model.marg_log_proba(X, w)[y] for X, y in train_data]
    return np.mean(ll) + penalty / 2.0 * np.dot(w, w)
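# obj above is the L2-penalized mean conditional log-likelihood of the
# pseudo-labels y. Note the penalty term is added rather than subtracted here;
# the sign is corrected in the later objective definitions below.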
obj(weights)
import imp
imp.reload(loglin)
model_names = ['pfvc', 'tss', 'pdc', 'pv1']
col_names = ['pfvc', 'tss', 'pdlco', 'pv1']
fold = 1
censor = 1.0
models = [load_model(m, fold) for m in model_names]
num_subtypes = [m.num_subtypes for m in models]
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
weights
weights.singleton_[0]
weights.singleton_[1]
weights.singleton_[2]
weights.singleton_[3]
weights.singleton_[4]
weights.pairwise_[0]
weights.pairwise_[1]
w = weights.collapsed()
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
w = weights.collapsed()
w
w.shape
weights2 = loglin.ModelWeights(num_subtypes, 1)
weights2.singleton(0)
weights = loglin.ModelWeights(num_subtypes)
weights.singleton(0)
weights = loglin.ModelWeights(num_subtypes)
weights.singleton(0)
weights2 = loglin.ModelWeights(num_subtypes, 1)
weights2.singleton(0)
weights2.set_weights(weights.collapsed())
weights2.set_weights(weights.collapsed())
weights2.singleton(0)
weights2 = loglin.ModelWeights(num_subtypes, 1)
weights2.singleton(3)
weights2.set_weights(weights.collapsed())
weights2.singleton(3)
weights = loglin.ModelWeights(num_subtypes)
weights.singleton(3)
weights = loglin.ModelWeights(num_subtypes)
weights.pairwise(1)
weights2 = loglin.ModelWeights(num_subtypes, 1)
weights2.pairwise(1)
weights2.set_weights(weights.collapsed())
weights2.pairwise(1)
weights2.set_weights(weights.collapsed())
weights2.pairwise(1).shape
weights2 = loglin.ModelWeights(num_subtypes, 1)
weights2.pairwise(1).shape
weights = loglin.ModelWeights(num_subtypes)
weights.pairwise(1).shape
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
weights.pairwise(1).shape
weights = loglin.ModelWeights(num_subtypes)
weights.pairwise(1)
weights2 = loglin.ModelWeights(num_subtypes, 1)
weights2.pairwise(1)
weights2.set_weights(weights.collapsed())
weights2.pairwise(1)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
scorer.data_scores(train_data[0][0])
scorer.data_scores(train_data[0][0])[0]
scorer.data_scores(train_data[0][0])[1]
scorer.data_scores(train_data[0][0])[2]
scorer.data_scores(train_data[0][0])[3]
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
engine.run(train_data[0][0])
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
engine.run(train_data[0][0])
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
engine.run(train_data[0][0])
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
engine.run(train_data[0][0])
s, p = engine.run(train_data[0][0])
s[0]
s[0].sum()
s[1].sum()
s[2].sum()
s[3].sum()
p[0].sum()
p[1].sum()
p[2].sum()
p[3].sum()
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
s, p = engine.run(train_data[0][0])
s[0].sum()
s[1].sum()
s[2].sum()
s[3].sum()
s[3]
loglin.normalize(s[3])
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
s, p = engine.run(train_data[0][0])
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
s, p = engine.run(train_data[0][0])
np.round(s[0])
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
s, p = engine.run(train_data[0][0])
np.round(s[0], 4)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
s, p = engine.run(train_data[0][0])
np.round(s[0], 3)
cm = loglin.ConditionalModel(models)
weights = cm.initial_weights(0)
np.round(weights[:10], 3)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
s, p = engine.run(train_data[0][0])
np.round(s[0], 3)
cm = loglin.ConditionalModel(models)
w = cm.initial_weights(0)
np.round(w[:10], 3)
np.round(weights.collapsed()[:10], 3)
cm = loglin.ConditionalModel(models)
w = cm.initial_weights(0)
np.round(np.exp(cm.marg_log_proba(train_data[0][0], w)), 3)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
s, p = engine.run(train_data[0][0])
np.round(s[0], 3)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
np.round(loglin.feature_expectations(junction_tree)[:20], 3)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
np.round(loglin.feature_expectations(junction_tree)[:20], 3)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
np.round(loglin.feature_expectations(junction_tree)[:20], 3)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
np.round(loglin.feature_expectations(junction_tree)[:20], 3)
cm = loglin.ConditionalModel(models)
w = cm.initial_weights(0)
np.round(cm.feature_expectations(train_data[0][0], w)[:20], 3)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
cm = loglin.ConditionalModel(models)
w = cm.initial_weights(0)
lp = cm.log_proba(train_data[0][0], w)
np.round(junction_tree[0][1], 3)
np.round(np.exp(loglin.marginalize(lp, [1], logsumexp)), 3)
np.round(np.exp(loglin.marginalize(lp, [0, 1], logsumexp)), 3)
np.round(junction_tree[1][1], 3)
np.round(junction_tree[0][1], 3)
np.round(junction_tree[0][2], 3)
np.round(np.exp(loglin.marginalize(lp, [0, 2], logsumexp)), 3)
np.round(np.exp(loglin.marginalize(lp, [2], logsumexp)), 3)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
junction_tree[0]
junction_tree[1][0]
junction_tree[1][0].sum()
logsumexp(junction_tree[1][0])
logsumexp(junction_tree[1][1])
logsumexp(junction_tree[1][2])
logsumexp(junction_tree[1][3])
logsumexp(junction_tree[2][1])
logsumexp(junction_tree[3][1])
logsumexp(junction_tree[3][1])
logsumexp(junction_tree[2][1])
logsumexp(junction_tree[1][1])
logsumexp(junction_tree[2][1])
logsumexp(junction_tree[3][1])
logsumexp(junction_tree[1][1])
logsumexp(junction_tree[2][1])
logsumexp(junction_tree[3][1])
logsumexp(junction_tree[2][1])
logsumexp(junction_tree[2][1])
logsumexp(junction_tree[2][2])
logsumexp(junction_tree[2][3])
logsumexp(junction_tree[2][4])
logsumexp(junction_tree[2][3])
import imp
imp.reload(loglin)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
junction_tree[1][0]
np.round(junction_tree[1][0], 3)
np.round(loglin.marginalize(junction_tree[2][1], [0]), 3)
np.round(loglin.marginalize(junction_tree[2][1], [1]), 3)
np.round(junction_tree[1][1], 3)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
s, p = junction_tree
s[0]
s[1]
s[2]
s[3]
p[1]
p_up = p.copy()
p_up[1] += s[1]
p_up[2] += s[2]
p_up[3] += s[3]
p_up[1]
p[1]
s[1]
s, p = junction_tree
p_up = p.copy()
p_up[1] += s[1]
p_up[2] += s[2]
p_up[3] += s[3]
p_up[1]
p[1]
s[1]
p_up[1, :]
s, p = junction_tree
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
s, p = junction_tree
p_up = [p_i.copy() for p_i in p]
p_up[1] += s[1]
p_up[2] += s[2]
p_up[3] += s[3]
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
s, p = junction_tree
p_up = [None] + [p_i.copy() for p_i in p[1:]]
p_up[1] += s[1]
p_up[2] += s[2]
p_up[3] += s[3]
p_up[1]
p[1]
s[1]
p_up[2]
p[2]
s[2]
loglin.marginalize(p_up[1], [0], logsumexp)
loglin.marginalize(p_up[2], [0], logsumexp)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
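# manual upward pass over what appears to be a star-shaped junction tree
# rooted at variable 0: fold each singleton score into its pairwise clique,
# marginalize each clique onto the root, and normalize the root in log space.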
s, p = junction_tree
s_up = [s_i.copy() for s_i in s]
p_up = [None] + [p_i.copy() for p_i in p[1:]]
p_up[1] += s[1]
p_up[2] += s[2]
p_up[3] += s[3]
s_up[0] += loglin.marginalize(p_up[1], [0], logsumexp)
s_up[0] += loglin.marginalize(p_up[2], [0], logsumexp)
s_up[0] += loglin.marginalize(p_up[3], [0], logsumexp)
s_up[0]
s_up[0] - logsumexp(s_up[0])
np.exp(s_up[0] - logsumexp(s_up[0]))
np.exp(s_up[0] - logsumexp(s_up[0])).sum()
s_up[0] -= logsumexp(s_up[0])
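# downward pass: broadcast the normalized root belief back into each pairwise
# clique (loglin.colvec presumably reshapes it to broadcast along the root axis).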
s_down = [s_i.copy() for s_i in s_up]
p_down = [None] + [p_i.copy() for p_i in p_up[1:]]
p_down[1] += loglin.colvec(s_down[0])
p_down[2] += loglin.colvec(s_down[0])
p_down[3] += loglin.colvec(s_down[0])
p_down[0]
p_down[1]
np.exp(p_down[1] - logsumexp(p_down[1]))
np.exp(p_down[1] - logsumexp(p_down[1])).sum()
np.exp(p_down[1] - logsumexp(p_down[1]))
np.exp(p_down[1] - logsumexp(p_down[1])).sum(axis=1)
np.round(np.exp(p_down[1] - logsumexp(p_down[1])).sum(axis=1), 3)
import imp
imp.reload(loglin)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
junction_tree = engine.run(train_data[0][0])
s, p = junction_tree
logl, s, p = junction_tree
s[0]
np.set_printoptions(precision=3)
s[0]
np.set_printoptions(precision=2)
s[0]
p[1].sum(axis=1)
p[2].sum(axis=1)
p[3].sum(axis=1)
s[1]
p[1].sum(axis=0)
s[2]
p[2].sum(axis=0)
s[3]
p[3].sum(axis=0)
cm = loglin.ConditionalModel(models)
wt = cm.initial_weights()
wt[-10:]
weights.collapsed()[-10:]
lp = cm.log_proba(train_data[0][0], wt)
np.exp(loglin.marginalize(lp, [0], logsumexp))
junction_tree[1][0]
np.exp(loglin.marginalize(lp, [1], logsumexp))
junction_tree[1][1]
np.exp(loglin.marginalize(lp, [2], logsumexp))
junction_tree[1][2]
np.exp(loglin.marginalize(lp, [3], logsumexp))
junction_tree[1][3]
np.exp(loglin.marginalize(lp, [0, 1], logsumexp))
junction_tree[2][1]
junction_tree[2][2]
np.exp(loglin.marginalize(lp, [0, 2], logsumexp))
np.exp(loglin.marginalize(lp, [0, 3], logsumexp))
junction_tree[2][3]
fe = loglin.feature_expectations(junction_tree)
len(junction_tree)
import imp
imp.reload(loglin)
fe = loglin.feature_expectations(junction_tree)
fe
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
def obj(train_data, engine):
    ll = [engine.run(d)[0] for d in train_data]
    return sum(ll)
def obj(train_data, engine):
    ll = [engine.run(d)[0] for d in train_data]
    return np.mean(ll)
obj(train_data, engine)
len(train_data)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
def obj(train_data, engine):
    probs = [engine.run(X)[0][0][y] for X, y in train_data]
    return np.log(probs).sum()
obj(train_data, engine)
def obj(train_data, engine):
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l)
obj(train_data, engine)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) + penalty / 2.0 * np.dot(w, w)
obj(train_data, engine)
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
obj(train_data, engine)
np.inf
-np.exp(np.inf)
np.exp(-np.inf)
x = np.array([0, 0, 1])
y = np.log(x)
y
x = np.array([0, 0, 1])
y = np.log(x)
logsumexp(y)
x = np.array([0, 0, -10, -5])
y = np.log(x)
y
x = np.array([0, 0, 0.4, 0.6])
y = np.log(x)
y
x = np.array([0, 0, 0.4, 0.6])
y = np.log(x)
logsumexp(x)
x = np.array([0, 0, 0.4, 0.6])
y = np.log(x)
np.exp(x - logsumexp(x))
x = np.array([0, 0, 0.4, 0.6])
y = np.log(x)
logsumexp(y)
x = np.array([0, 0, 0.2, 0.3])
y = np.log(x)
logsumexp(y)
np.exp(x - logsumexp(y))
np.exp(y - logsumexp(y))
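# the scratch checks above confirm the log-domain conventions used throughout:
# np.log(0) is -inf and np.exp(-inf) is 0, so zero-probability entries drop
# out cleanly, and np.exp(y - logsumexp(y)) renormalizes unnormalized
# log-probabilities (log([0, 0, 0.2, 0.3]) renormalizes to [0, 0, 0.4, 0.6]).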
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
jt = engine.run(train_data[0][0])
jt[0][0]
jt = engine.run(train_data[0][0])
jt_conditioned = engine.observe_target(jt, train_data[0][1])
jt_conditioned[0][0]
jt_conditioned[0][1]
jt_conditioned[0][2]
jt_conditioned[0][1]
jt_conditioned[1][1]
jt_conditioned[0][1].sum()
jt_conditioned[1][1].sum()
jt_conditioned[1][1]
jt_conditioned[0][1]
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
jt = engine.run(train_data[0][0])
jt_conditioned = engine.observe_target(jt, train_data[0][1])
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
jt = engine.run(train_data[0][0])
jt_conditioned = engine.observe_target(jt, train_data[0][1])
jt_conditioned[0][1]
jt_conditioned[1][1]
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
def jac(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    g = np.zeros_like(w)
    for X, y in train_data:
        jt = engine.run(X)
        jt_cond = engine.observe_target(jt, y)
        fe = loglin.feature_expectations(jt)
        fe_cond = loglin.feature_expectations(jt_cond)
        g += fe_cond - fe
    return g
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
def jac(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    g = np.zeros_like(w)
    for X, y in train_data:
        jt = engine.run(X)
        jt_cond = engine.observe_target(jt, y)
        fe = loglin.feature_expectations(jt)
        fe_cond = loglin.feature_expectations(jt_cond)
        g += fe_cond - fe
    g += penalty * w
    return g
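# jac above implements the standard log-linear model gradient: per example,
# the gradient of the conditional log-likelihood is the feature-expectation
# difference between the target-clamped and free junction trees. (The sign
# and averaging of the penalty term are still being debugged here; compare
# the later definitions, which use g /= n and g -= penalty * w.)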
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
def jac(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    g = np.zeros_like(w)
    for X, y in train_data:
        jt = engine.run(X)
        jt_cond = engine.observe_target(jt, y)
        fe = loglin.feature_expectations(jt)
        fe_cond = loglin.feature_expectations(jt_cond)
        g += fe_cond - fe
    g += penalty * w
    return g
obj(train_data, engine)
jac(train_data, engine)
import imp
imp.reload(loglin)
weights = loglin.ModelWeights(num_subtypes)
scorer = loglin.Scorer(models, weights)
engine = loglin.InferenceEngine(scorer)
obj(train_data, engine)
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
def jac(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    g = np.zeros_like(w)
    for X, y in train_data:
        jt = engine.run(X)
        jt_cond = engine.observe_target(jt, y)
        fe = loglin.feature_expectations(jt)
        fe_cond = loglin.feature_expectations(jt_cond)
        g += fe_cond - fe
    g += penalty * w
    return g
obj(train_data, engine)
jac(train_data, engine)
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
def jac(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    g = np.zeros_like(w)
    for X, y in train_data:
        jt = engine.run(X)
        jt_cond = engine.observe_target(jt, y)
        fe = loglin.feature_expectations(jt)
        fe_cond = loglin.feature_expectations(jt_cond)
        g += fe_cond - fe
    g += penalty * w
    return g
def f(w, train_data=train_data, engine=engine):
    engine.scorer.weights.set_weights(w)
    return obj(train_data, engine)
def g(w, train_data=train_data, engine=engine):
    engine.scorer.weights.set_weights(w)
    return jac(train_data, engine)
w0 = engine.scorer.parameters
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
def jac(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    g = np.zeros_like(w)
    for X, y in train_data:
        jt = engine.run(X)
        jt_cond = engine.observe_target(jt, y)
        fe = loglin.feature_expectations(jt)
        fe_cond = loglin.feature_expectations(jt_cond)
        g += fe_cond - fe
    g += penalty * w
    return g
def f(w, train_data=train_data, engine=engine):
    engine.scorer.weights.set_weights(w)
    return -obj(train_data, engine)
def g(w, train_data=train_data, engine=engine):
    engine.scorer.weights.set_weights(w)
    return -jac(train_data, engine)
from scipy.optimize import minimize
w0 = engine.scorer.parameters
solution = minimize(f, w0, jac=g, method='BFGS')
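# sanity check before trusting BFGS: compare the analytic gradient g against
# finite-difference estimates of f (the check_grad calls that follow).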
from mypy.util import check_grad
from mypy.util import check_grad
w0 = engine.scorer.parameters
check_grad(f, w0)
imp.reload(mypy)
mypy.util.check_grad(f, w0, range(5))
import mypy
imp.reload(mypy)
mypy.util.check_grad(f, w0, range(5))
get_ipython().magic('debug ')
import mypy
import mypy.util
imp.reload(mypy)
imp.reload(mypy.util)
from mypy.util import check_grad
check_grad(f, w0, range(5))
import mypy
import mypy.util
imp.reload(mypy)
imp.reload(mypy.util)
from mypy.util import check_grad
check_grad(f, w0, range(5))
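# (mypy here appears to be a personal utilities package, not the mypy type
# checker; its check_grad presumably reports analytic vs finite-difference
# gradient components at the requested indices.)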
g(w0)[:5]
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
def jac(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    g = np.zeros_like(w)
    n = 0
    for X, y in train_data:
        n += 1
        jt = engine.run(X)
        jt_cond = engine.observe_target(jt, y)
        fe = loglin.feature_expectations(jt)
        fe_cond = loglin.feature_expectations(jt_cond)
        g += fe_cond - fe
    g /= n
    g += penalty * w
    return g
def f(w, train_data=train_data, engine=engine):
    engine.scorer.weights.set_weights(w)
    return -obj(train_data, engine)
def g(w, train_data=train_data, engine=engine):
    engine.scorer.weights.set_weights(w)
    return -jac(train_data, engine)
g(w0)[:5]
import mypy
import mypy.util
imp.reload(mypy)
imp.reload(mypy.util)
from mypy.util import check_grad
check_grad(f, w0, range(2), 1e-8)
g(w0)[:2]
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
def jac(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    g = np.zeros_like(w)
    n = 0
    for X, y in train_data:
        n += 1
        jt = engine.run(X)
        jt_cond = engine.observe_target(jt, y)
        fe = loglin.feature_expectations(jt)
        fe_cond = loglin.feature_expectations(jt_cond)
        g += fe_cond - fe
    g /= n
    g -= penalty * w
    return g
def f(w, train_data=train_data, engine=engine):
    engine.scorer.weights.set_weights(w)
    return -obj(train_data, engine)
def g(w, train_data=train_data, engine=engine):
    engine.scorer.weights.set_weights(w)
    return -jac(train_data, engine)
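# with obj = mean log-likelihood - (penalty / 2) * w.w, subtracting
# penalty * w in jac makes the gradient consistent with the objective; f and
# g negate both so that scipy.optimize.minimize can minimize the negative.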
import mypy
import mypy.util
imp.reload(mypy)
imp.reload(mypy.util)
from mypy.util import check_grad
check_grad(f, w0, range(2), 1e-8)
g(w0)[:2]
import mypy
import mypy.util
imp.reload(mypy)
imp.reload(mypy.util)
from mypy.util import check_grad
check_grad(f, w0, range(10), 1e-8)
g(w0)[:10]
from scipy.optimize import minimize
w0 = engine.scorer.parameters
solution = minimize(f, w0, jac=g, method='BFGS', options={'disp': True})
def obj(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    p = [engine.run(X)[0][0][y] for X, y in train_data]
    l = np.log(p)
    return np.mean(l) - penalty / 2.0 * np.dot(w, w)
def jac(train_data, engine, penalty=1e-2):
    w = engine.scorer.parameters
    g = np.zeros_like(w)
    n = 0
    for X, y in train_data:
        n += 1
        jt = engine.run(X)
        jt_cond = engine.observe_target(jt, y)
        fe = loglin.feature_expectations(jt)
        fe_cond = loglin.feature_expectations(jt_cond)
        g += fe_cond - fe
    g /= n
    g -= penalty * w
    return g
def f(w, train_data=train_data[:10], engine=engine):
    engine.scorer.weights.set_weights(w)
    return -obj(train_data, engine)
def g(w, train_data=train_data[:10], engine=engine):
    engine.scorer.weights.set_weights(w)
    return -jac(train_data, engine)
from scipy.optimize import minimize
w0 = engine.scorer.parameters
solution = minimize(f, w0, jac=g, method='BFGS', options={'disp': True})
solution.x
engine.scorer.weights.set_weights(solution.x)
engine.run(train_data[0][0])
engine.run(train_data[0][0])[0][0]
np.round(engine.run(train_data[0][0])[0][0], 2)
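# quick qualitative check of the fitted weights: print each pseudo-label y
# next to the model's rounded posterior over the target for the first ten
# training examples.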
for X, y in train_data[:10]:
    p = engine.run(X)[0][0]
    print(y)
    print(np.round(p, 2))
import imp
imp.reload(loglin)
objective = loglin.ModelObjective(train_data, 1e-2, models)
import imp
imp.reload(loglin)
objective = loglin.ModelObjective(train_data, 1e-2, models)
import imp
imp.reload(loglin)
objective = loglin.ModelObjective(train_data, 1e-2, models)
w0 = objective.initial_weights()
from scipy.optimize import minimize
objective = loglin.ModelObjective(train_data, 1e-2, models)
w0 = objective.initial_weights()
objective.value(w0)
import imp
imp.reload(loglin)
from scipy.optimize import minimize
objective = loglin.ModelObjective(train_data, 1e-2, models)
w0 = objective.initial_weights()
objective.value(w0)
check_grad(objective.value, w0, range(2))
objective.gradient(w0)
import imp
imp.reload(loglin)
from scipy.optimize import minimize
objective = loglin.ModelObjective(train_data, 1e-2, models)
w0 = objective.initial_weights()
objective.value(w0)
check_grad(objective.value, w0, range(2))
objective.gradient(w0)
check_grad(objective.value, w0, range(5))
objective.gradient(w0)[:5]
import imp
imp.reload(loglin)
from scipy.optimize import minimize
objective = loglin.ModelObjective(train_data, 1e-2, models)
w0 = objective.initial_weights()
solution = minimize(objective.value, w0, jac=objective.gradient, method='BFGS')
import logging
from scipy.optimize import minimize
logging.basicConfig(level=logging.INFO)
objective = loglin.ModelObjective(train_data, 1e-2, models)
w0 = objective.initial_weights()
objective.value(w0)
#solution = minimize(objective.value, w0, jac=objective.gradient, method='BFGS')
import imp
imp.reload(loglin)
import logging
from scipy.optimize import minimize
logging.basicConfig(level=logging.INFO)
objective = loglin.ModelObjective(train_data, 1e-2, models)
w0 = objective.initial_weights()
objective.value(w0)
#solution = minimize(objective.value, w0, jac=objective.gradient, method='BFGS')
logging.info('Hello world')
import logging
from scipy.optimize import minimize
get_ipython().magic('logstart')
logging.basicConfig(level=logging.INFO)
objective = loglin.ModelObjective(train_data, 1e-2, models)
w0 = objective.initial_weights()
objective.value(w0)
#solution = minimize(objective.value, w0, jac=objective.gradient, method='BFGS')
get_ipython().magic('logstop ')
| 32.46074 | 104 | 0.710227 | 7,802 | 50,022 | 4.394514 | 0.033068 | 0.052237 | 0.025375 | 0.024704 | 0.954559 | 0.935892 | 0.916438 | 0.896226 | 0.871405 | 0.847314 | 0 | 0.026208 | 0.139579 | 50,022 | 1,540 | 105 | 32.481818 | 0.7704 | 0.005058 | 0 | 0.825071 | 0 | 0 | 0.0212 | 0.004903 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.083569 | null | null | 0.004249 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
43f0bb881210afea8fc3f68bda16e3020b8a2706 | 135 | py | Python | concierge_python/__init__.py | akaisuisei/concierge-python | 2de074ef82530137523edabcaa24bc10095ab329 | [
"MIT"
] | null | null | null | concierge_python/__init__.py | akaisuisei/concierge-python | 2de074ef82530137523edabcaa24bc10095ab329 | [
"MIT"
] | null | null | null | concierge_python/__init__.py | akaisuisei/concierge-python | 2de074ef82530137523edabcaa24bc10095ab329 | [
"MIT"
] | null | null | null | try:
    # flat imports: work when the modules are importable directly
    # (e.g. when run from inside the package directory)
    import utils
    import concierge
except ImportError:
    # fall back to package-qualified imports when installed as a package
    import concierge_python.utils
    import concierge_python.concierge
| 19.285714 | 37 | 0.77037 | 15 | 135 | 6.8 | 0.466667 | 0.441176 | 0.392157 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 135 | 6 | 38 | 22.5 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.833333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
43f6793dd3d10127191b4b174d82c677cb924f64 | 1,556 | py | Python | songs/migrations/0001_initial.py | Rocker4/Music_Playlist | fb2091f39ef30ffef0d541607b8623d6098420b7 | [
"MIT"
] | 7 | 2020-05-18T18:51:17.000Z | 2022-02-22T23:23:29.000Z | songs/migrations/0001_initial.py | Rocker4/Music_Playlist | fb2091f39ef30ffef0d541607b8623d6098420b7 | [
"MIT"
] | 8 | 2020-04-12T14:25:59.000Z | 2021-06-05T10:52:04.000Z | songs/migrations/0001_initial.py | Rocker4/Music_Playlist | fb2091f39ef30ffef0d541607b8623d6098420b7 | [
"MIT"
] | 15 | 2020-05-03T09:56:01.000Z | 2022-01-29T01:07:17.000Z | # Generated by Django 2.2.10 on 2020-04-10 17:12
from django.db import migrations, models
class Migration(migrations.Migration):
    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='New_Playlist_Apple_Music',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('playlist_name', models.CharField(max_length=100)),
                ('description', models.CharField(max_length=200)),
            ],
        ),
        migrations.CreateModel(
            name='New_Playlist_Spotify',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('playlist_name', models.CharField(max_length=100)),
                ('description', models.CharField(max_length=200)),
            ],
        ),
        migrations.CreateModel(
            name='Playlist_Apple_Music',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('playlist_name', models.CharField(max_length=100)),
            ],
        ),
        migrations.CreateModel(
            name='Playlist_Spotify',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('playlist_name', models.CharField(max_length=100)),
            ],
        ),
    ]
| 34.577778 | 114 | 0.568766 | 150 | 1,556 | 5.7 | 0.293333 | 0.105263 | 0.126316 | 0.168421 | 0.812866 | 0.776608 | 0.776608 | 0.776608 | 0.776608 | 0.776608 | 0 | 0.031365 | 0.303342 | 1,556 | 44 | 115 | 35.363636 | 0.75738 | 0.029563 | 0 | 0.702703 | 1 | 0 | 0.112732 | 0.015915 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027027 | 0 | 0.135135 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
a174b39765821a829db6e1b18ae1e3f89beae26a | 30,039 | py | Python | Queries and their results/history.py | Srini96/Market-Basket-Analysis-with-Customer-Profiling-and-Exploratory-Analysis-using-Python | 293291157a4d91c217c4008e058e01b1b930b923 | [
"MIT"
] | 1 | 2021-02-01T02:15:48.000Z | 2021-02-01T02:15:48.000Z | Queries and their results/history.py | Srini96/Market-Basket-Analysis-with-Customer-Profiling-and-Exploratory-Analysis-using-Python | 293291157a4d91c217c4008e058e01b1b930b923 | [
"MIT"
] | null | null | null | Queries and their results/history.py | Srini96/Market-Basket-Analysis-with-Customer-Profiling-and-Exploratory-Analysis-using-Python | 293291157a4d91c217c4008e058e01b1b930b923 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# *** Spyder Python Console History Log ***
## ---(Tue Apr 17 00:48:48 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
pip install seaborn
## ---(Tue Apr 17 01:08:52 2018)---
conda install -c anaconda seaborn=0.7.1
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed Apr 18 10:43:26 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
debugfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/NEW.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/NEW.py', wdir='C:/Users/KURAMS/.spyder-py3')
help(nan)
help(NaN)
runfile('C:/Users/KURAMS/.spyder-py3/NEW.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/NEW.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/NEW.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Sun Apr 22 12:09:37 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
order_products_train_df = pd.read_csv("../input/order_products__train.csv")
order_products_prior_df = pd.read_csv("../input/order_products__prior.csv")
orders_df = pd.read_csv("../input/orders.csv")
products_df = pd.read_csv("../input/products.csv")
aisles_df = pd.read_csv("../input/aisles.csv")
departments_df = pd.read_csv("../input/departments.csv")
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
help(ggplot2)
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
%matplotlib inline
import matplotlib.pyplot as plt # Matlab-style plotting
import seaborn as sns
color = sns.color_palette()
import warnings
warnings.filterwarnings('ignore') #Supress unnecessary warnings for readability and cleaner presentation
pd.set_option('display.float_format', lambda x: '%.3f' % x) #Limiting floats output to 3 decimal points
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8")) #check the files available in the directory
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Sun Apr 22 17:03:29 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
help(np.random)
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Mon Apr 23 10:21:53 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Tue Apr 24 10:20:13 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Tue Apr 24 12:49:03 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/temp.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Tue Apr 24 17:48:04 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Tue Apr 24 18:08:24 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Tue Apr 24 20:26:22 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed Apr 25 11:56:34 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed Apr 25 12:17:45 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Number of orders people usually order.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Reordered frequency.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Hours of orders in a day.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Days of orders in a week.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Period of reorders.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders in the whole dataset.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Hours of orders in a day.py', wdir='C:/Users/KURAMS/.spyder-py3')
help(sns.histogram)
help(sns.histoplot)
help(plot)
help(sns.plot)
help(sns.histogramplot)
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Thu Apr 26 09:44:42 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled1.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer witrh scatter plot.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer witrh scatter plot.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Departments (by number of products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles over all Departments (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Thu Apr 26 15:03:27 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Days of orders in a week.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer witrh scatter plot.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Hours of orders in a day.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles over all Departments (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Number of orders people usually order.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders in the whole dataset.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Period of reorders.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Reordered frequency.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Thu Apr 26 19:36:52 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer witrh scatter plot.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders in the whole dataset.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Number of orders people usually order.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling departments.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles over all Departments (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling departments.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Thu Apr 26 21:54:50 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Thu Apr 26 23:30:22 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Days of orders in a week.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Fri Apr 27 07:48:39 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling departments.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Fri Apr 27 08:56:19 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Days of orders in a week.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/Days of orders in a week.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Hours of orders in a day.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Days of orders in a week.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Hours of orders in a day.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Period of reorders.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling departments.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Tue May 1 13:40:50 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Reordered frequency.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Period of reorders.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer witrh scatter plot.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders in the whole dataset.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Number of orders people usually order.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Number of orders people usually order.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most ordered Products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Departments (by number of products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles over all Departments (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Hours of orders in a day.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Days of orders in a week.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling departments.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Tue May 1 19:22:13 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
%clear
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed May 2 10:41:55 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed May 2 17:00:08 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Mon May 7 18:55:12 2018)---
runfile('C:/Users/KURAMS/Desktop/DU/Mp3s/apriori.py', wdir='C:/Users/KURAMS/Desktop/DU/Mp3s')
## ---(Tue May 8 14:41:08 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/apriori.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed May 9 13:29:05 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/apriori.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed May 9 14:28:15 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/apriori.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Best selling aisles in a department.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Mon May 14 10:03:24 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/apriori.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles in each Department (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Reordered frequency.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Orders in the whole dataset.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Sun Jun 10 22:49:08 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Orders made by each customer witrh scatter plot.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Mon Jun 11 09:45:10 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Orders in the whole dataset.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/Most important Aisles over all Departments (by number of Products).py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed Jun 27 09:42:14 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Tue Jul 17 17:05:17 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Most reordered products.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Hours of orders in a day.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Sat Jul 28 18:31:35 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled3.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled3.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Mon Jul 30 09:45:25 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled3.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled3.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
debugfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed Aug 1 10:07:33 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled3.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled3.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Fri Aug 3 09:09:41 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled3.py', wdir='C:/Users/KURAMS/.spyder-py3')
239/4
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
help(list.count)
help(len)
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(10), columns=['Close'])  # DataFrame (not Dataframe) and np.arange (not arrange); a named column so the lookups below resolve
df
len(df)
# (tried pd.df / remove('Close') here; the working idiom is del df['Close'] below)
%timeit len(df)
clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
import time
t0 = time.perf_counter(); len(df); print(time.perf_counter() - t0)
df.loc[:, 'Close']
del df['Close']
df
help(list.append)
help(len)
help(pd)
clear
df
df.drop(df.index[0])
df
help(pd.DataFrame.update)
df.loc[0]
df
df.insert(0, 'Close', '007')
df.loc[0] = ['007']
df
import numpy as np
np.delete(df.values, 0, axis=0)
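# (np.delete operates on plain arrays, hence df.values above; to drop a labelled
# row or column from the DataFrame itself, df.drop is the tool)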
## ---(Wed Aug 8 20:44:02 2018)---
help(fibonacci)
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
def fibonacci(x):
    if x < 2:
        return x  # base case (replaces the original "KEYISKO MINDRI" fallback)
    return fibonacci(x - 1) + fibonacci(x - 2)
x=9
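fibonacci(x)  # with the corrected recursion above, fibonacci(9) == 34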
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
import pandas as pd
help(pd)
help('del')
300000/12
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
# create new file: py.py
clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/SRI-1.py', wdir='C:/Users/KURAMS/.spyder-py3')
import nltk
runfile('C:/Users/KURAMS/.spyder-py3/SRI-1.py', wdir='C:/Users/KURAMS/.spyder-py3')
!pip uninstall nltk
runfile('C:/Users/KURAMS/.spyder-py3/SRI-1.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Fri Aug 10 12:47:30 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
import matplotlib.pyplot as plt
plt.style.use(['dark_background', 'presentation'])
import numpy as np
import matplotlib.pyplot as plt
with plt.style.context('dark_background'):
    plt.plot(np.sin(np.linspace(0, 2 * np.pi)), 'r-o')
## ---(Sat Aug 11 15:57:16 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
!pip uninstall nltk
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/kehuhu.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Sat Aug 11 17:34:44 2018)---
import nltk
runfile('C:/Users/KURAMS/.spyder-py3/kehuhu.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Sun Aug 12 08:42:24 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
help(list)
runfile('C:/Users/KURAMS/.spyder-py3/untitled0.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Stemming.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Speech tagging.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
tokenizer = custom_tokenizer.tokenize()
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/kehuhu.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
help(process_content)
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Speech tagging.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
help(pos_tag)
## ---(Mon Aug 13 10:19:53 2018)---
runfile('C:/Users/KURAMS/Chunking.py', wdir='C:/Users/KURAMS')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Named entity.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Stemming.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Lemma semma.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/udri.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Lemma semma.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Stemming.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Lemma semma.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/udri.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
help(nltk.corpus.reader.wordnet.Lemma)
clear
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Wordnet.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
import pandas
help(pandas)
help(pandas.Series.name)
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Wordnet.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
import nltk
help(nltk.corpus.reader.wordnet.Synset.wup_similarity)
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Speech tagging.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Wordnet.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Text classification.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
## ---(Wed Aug 22 12:07:53 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Practice.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/untitled2.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
## ---(Tue Sep 4 22:53:31 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Practice.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
## ---(Wed Sep 5 16:50:04 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Practice.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
import numpy
help(numpy.linspace)
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Practice.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
## ---(Thu Sep 6 10:48:02 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Thu Sep 6 17:56:19 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/NLTK/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3/NLTK')
runfile('C:/Users/KURAMS/.spyder-py3/input/regg.py', wdir='C:/Users/KURAMS/.spyder-py3/input')
## ---(Thu Sep 6 23:18:43 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
debugfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
clear
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Wed Sep 12 10:38:47 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
import numpy as np
a = np.array([6,9,1])
b = np.array([20,3,5])
np.max(a)
np.max(b)
np.amax(b)
np.mean(b)
np.mean(b+10)
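# (b + 10 broadcasts the scalar across the array, so np.mean(b + 10) equals np.mean(b) + 10 = 19.33...)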
## ---(Wed Sep 12 22:16:05 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Fri Sep 14 18:39:07 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
## ---(Fri Sep 14 22:43:46 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/Regg.py', wdir='C:/Users/KURAMS/.spyder-py3')
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION')
## ---(Sat Sep 15 09:21:30 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION')
runfile('C:/Users/KURAMS/.spyder-py3/input/potential-enigma-master/multiple_linear_regression_from_scratch.py', wdir='C:/Users/KURAMS/.spyder-py3/input/potential-enigma-master')
## ---(Sat Sep 15 11:19:53 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/input/potential-enigma-master/multiple_linear_regression_from_scratch.py', wdir='C:/Users/KURAMS/.spyder-py3/input/potential-enigma-master')
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION')
runfile('C:/Users/KURAMS/.spyder-py3/input/potential-enigma-master/multiple_linear_regression_from_scratch.py', wdir='C:/Users/KURAMS/.spyder-py3/input/potential-enigma-master')
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION')
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable using scikit.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION')
## ---(Mon Sep 17 17:44:01 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable using scikit.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION')
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION')
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable using scikit.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION')
clear
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable using scikit.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION')
## ---(Thu Sep 20 19:37:06 2018)---
runfile('C:/Users/KURAMS/.spyder-py3/REGRESSION/Basic regression for single variable using scikit.py', wdir='C:/Users/KURAMS/.spyder-py3/REGRESSION') | 55.117431 | 345 | 0.711076 | 4,926 | 30,039 | 4.326431 | 0.063337 | 0.131757 | 0.263514 | 0.391892 | 0.892502 | 0.885604 | 0.881616 | 0.873639 | 0.861674 | 0.851351 | 0 | 0.046869 | 0.06172 | 30,039 | 545 | 346 | 55.117431 | 0.709278 | 0.072672 | 0 | 0.703349 | 0 | 0.04067 | 0.702675 | 0.55477 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.100478 | null | null | 0.004785 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
a19d0b0bf82e4b11c6d77b15e53b5c4a3e1fd332 | 41 | py | Python | Financing_Error.py | mudgalsaurabh/IEEE-Python_Workshop-2018 | dd4f9e0bfe35448161724122116afd7d5214fc05 | ["MIT"] | 3 | 2018-03-19T09:07:10.000Z | 2018-08-27T13:35:51.000Z | Financing_Error.py | mudgalsaurabh/IEEE-Python_Workshop-2018 | dd4f9e0bfe35448161724122116afd7d5214fc05 | ["MIT"] | null | null | null | Financing_Error.py | mudgalsaurabh/IEEE-Python_Workshop-2018 | dd4f9e0bfe35448161724122116afd7d5214fc05 | ["MIT"] | null | null | null | x = 0.1 + 0.1 + 0.1 - 0.3
print(str(x))
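# Why this prints a tiny non-zero value: 0.1 has no exact binary representation,
# so the three terms accumulate rounding error and x comes out as
# 5.551115123125783e-17 rather than 0.0. A minimal standard-library sketch of two
# remedies (added for illustration; not part of the original workshop file):
import math
from decimal import Decimal
print(math.isclose(0.1 + 0.1 + 0.1, 0.3))   # True -- compare floats with a tolerance
print(Decimal('0.1') * 3 - Decimal('0.3'))  # 0.0 -- exact decimal arithmetic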
| 10.25 | 25 | 0.439024 | 12 | 41 | 1.5 | 0.5 | 0.333333 | 0.5 | 0.444444 | 0.388889 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 0.268293 | 41 | 3 | 26 | 13.666667 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 10 |
a1b0982a386d3df5b6205d5e10a154b618d9b3ab | 9,128 | py | Python | test/cpython/test_int_literal.py | aisk/pyston | ac69cfef0621dbc8901175e84fa2b5cb5781a646 | ["BSD-2-Clause", "Apache-2.0"] | 1 | 2020-02-06T14:28:45.000Z | 2020-02-06T14:28:45.000Z | test/cpython/test_int_literal.py | aisk/pyston | ac69cfef0621dbc8901175e84fa2b5cb5781a646 | ["BSD-2-Clause", "Apache-2.0"] | null | null | null | test/cpython/test_int_literal.py | aisk/pyston | ac69cfef0621dbc8901175e84fa2b5cb5781a646 | ["BSD-2-Clause", "Apache-2.0"] | 1 | 2020-02-06T14:29:00.000Z | 2020-02-06T14:29:00.000Z | """Test correct treatment of hex/oct constants.
This is complex because of changes due to PEP 237.
"""
import unittest
from test import test_support
class TestHexOctBin(unittest.TestCase):

    def test_hex_baseline(self):
        # A few upper/lowercase tests
        self.assertEqual(0x0, 0X0)
        self.assertEqual(0x1, 0X1)
        self.assertEqual(0x123456789abcdef, 0X123456789abcdef)
        # Baseline tests
        self.assertEqual(0x0, 0)
        self.assertEqual(0x10, 16)
        self.assertEqual(0x7fffffff, 2147483647)
        self.assertEqual(0x7fffffffffffffff, 9223372036854775807)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(0x0), 0)
        self.assertEqual(-(0x10), -16)
        self.assertEqual(-(0x7fffffff), -2147483647)
        self.assertEqual(-(0x7fffffffffffffff), -9223372036854775807)
        # Ditto with a minus sign and NO parentheses
        self.assertEqual(-0x0, 0)
        self.assertEqual(-0x10, -16)
        self.assertEqual(-0x7fffffff, -2147483647)
        self.assertEqual(-0x7fffffffffffffff, -9223372036854775807)

    def test_hex_unsigned(self):
        # Positive constants
        self.assertEqual(0x80000000, 2147483648L)
        self.assertEqual(0xffffffff, 4294967295L)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(0x80000000), -2147483648L)
        self.assertEqual(-(0xffffffff), -4294967295L)
        # Ditto with a minus sign and NO parentheses
        # This failed in Python 2.2 through 2.2.2 and in 2.3a1
        self.assertEqual(-0x80000000, -2147483648L)
        self.assertEqual(-0xffffffff, -4294967295L)
        # Positive constants
        self.assertEqual(0x8000000000000000, 9223372036854775808L)
        self.assertEqual(0xffffffffffffffff, 18446744073709551615L)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(0x8000000000000000), -9223372036854775808L)
        self.assertEqual(-(0xffffffffffffffff), -18446744073709551615L)
        # Ditto with a minus sign and NO parentheses
        # This failed in Python 2.2 through 2.2.2 and in 2.3a1
        self.assertEqual(-0x8000000000000000, -9223372036854775808L)
        self.assertEqual(-0xffffffffffffffff, -18446744073709551615L)

    def test_oct_baseline(self):
        # Baseline tests
        self.assertEqual(00, 0)
        self.assertEqual(020, 16)
        self.assertEqual(017777777777, 2147483647)
        self.assertEqual(0777777777777777777777, 9223372036854775807)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(00), 0)
        self.assertEqual(-(020), -16)
        self.assertEqual(-(017777777777), -2147483647)
        self.assertEqual(-(0777777777777777777777), -9223372036854775807)
        # Ditto with a minus sign and NO parentheses
        self.assertEqual(-00, 0)
        self.assertEqual(-020, -16)
        self.assertEqual(-017777777777, -2147483647)
        self.assertEqual(-0777777777777777777777, -9223372036854775807)

    def test_oct_baseline_new(self):
        # A few upper/lowercase tests
        self.assertEqual(0o0, 0O0)
        self.assertEqual(0o1, 0O1)
        self.assertEqual(0o1234567, 0O1234567)
        # Baseline tests
        self.assertEqual(0o0, 0)
        self.assertEqual(0o20, 16)
        self.assertEqual(0o17777777777, 2147483647)
        self.assertEqual(0o777777777777777777777, 9223372036854775807)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(0o0), 0)
        self.assertEqual(-(0o20), -16)
        self.assertEqual(-(0o17777777777), -2147483647)
        self.assertEqual(-(0o777777777777777777777), -9223372036854775807)
        # Ditto with a minus sign and NO parentheses
        self.assertEqual(-0o0, 0)
        self.assertEqual(-0o20, -16)
        self.assertEqual(-0o17777777777, -2147483647)
        self.assertEqual(-0o777777777777777777777, -9223372036854775807)

    def test_oct_unsigned(self):
        # Positive constants
        self.assertEqual(020000000000, 2147483648L)
        self.assertEqual(037777777777, 4294967295L)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(020000000000), -2147483648L)
        self.assertEqual(-(037777777777), -4294967295L)
        # Ditto with a minus sign and NO parentheses
        # This failed in Python 2.2 through 2.2.2 and in 2.3a1
        self.assertEqual(-020000000000, -2147483648L)
        self.assertEqual(-037777777777, -4294967295L)
        # Positive constants
        self.assertEqual(01000000000000000000000, 9223372036854775808L)
        self.assertEqual(01777777777777777777777, 18446744073709551615L)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(01000000000000000000000), -9223372036854775808L)
        self.assertEqual(-(01777777777777777777777), -18446744073709551615L)
        # Ditto with a minus sign and NO parentheses
        # This failed in Python 2.2 through 2.2.2 and in 2.3a1
        self.assertEqual(-01000000000000000000000, -9223372036854775808L)
        self.assertEqual(-01777777777777777777777, -18446744073709551615L)

    def test_oct_unsigned_new(self):
        # Positive constants
        self.assertEqual(0o20000000000, 2147483648L)
        self.assertEqual(0o37777777777, 4294967295L)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(0o20000000000), -2147483648L)
        self.assertEqual(-(0o37777777777), -4294967295L)
        # Ditto with a minus sign and NO parentheses
        # This failed in Python 2.2 through 2.2.2 and in 2.3a1
        self.assertEqual(-0o20000000000, -2147483648L)
        self.assertEqual(-0o37777777777, -4294967295L)
        # Positive constants
        self.assertEqual(0o1000000000000000000000, 9223372036854775808L)
        self.assertEqual(0o1777777777777777777777, 18446744073709551615L)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(0o1000000000000000000000), -9223372036854775808L)
        self.assertEqual(-(0o1777777777777777777777), -18446744073709551615L)
        # Ditto with a minus sign and NO parentheses
        # This failed in Python 2.2 through 2.2.2 and in 2.3a1
        self.assertEqual(-0o1000000000000000000000, -9223372036854775808L)
        self.assertEqual(-0o1777777777777777777777, -18446744073709551615L)

    def test_bin_baseline(self):
        # A few upper/lowercase tests
        self.assertEqual(0b0, 0B0)
        self.assertEqual(0b1, 0B1)
        self.assertEqual(0b10101010101, 0B10101010101)
        # Baseline tests
        self.assertEqual(0b0, 0)
        self.assertEqual(0b10000, 16)
        self.assertEqual(0b1111111111111111111111111111111, 2147483647)
        self.assertEqual(0b111111111111111111111111111111111111111111111111111111111111111, 9223372036854775807)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(0b0), 0)
        self.assertEqual(-(0b10000), -16)
        self.assertEqual(-(0b1111111111111111111111111111111), -2147483647)
        self.assertEqual(-(0b111111111111111111111111111111111111111111111111111111111111111), -9223372036854775807)
        # Ditto with a minus sign and NO parentheses
        self.assertEqual(-0b0, 0)
        self.assertEqual(-0b10000, -16)
        self.assertEqual(-0b1111111111111111111111111111111, -2147483647)
        self.assertEqual(-0b111111111111111111111111111111111111111111111111111111111111111, -9223372036854775807)

    def test_bin_unsigned(self):
        # Positive constants
        self.assertEqual(0b10000000000000000000000000000000, 2147483648L)
        self.assertEqual(0b11111111111111111111111111111111, 4294967295L)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(0b10000000000000000000000000000000), -2147483648L)
        self.assertEqual(-(0b11111111111111111111111111111111), -4294967295L)
        # Ditto with a minus sign and NO parentheses
        # This failed in Python 2.2 through 2.2.2 and in 2.3a1
        self.assertEqual(-0b10000000000000000000000000000000, -2147483648L)
        self.assertEqual(-0b11111111111111111111111111111111, -4294967295L)
        # Positive constants
        self.assertEqual(0b1000000000000000000000000000000000000000000000000000000000000000, 9223372036854775808L)
        self.assertEqual(0b1111111111111111111111111111111111111111111111111111111111111111, 18446744073709551615L)
        # Ditto with a minus sign and parentheses
        self.assertEqual(-(0b1000000000000000000000000000000000000000000000000000000000000000), -9223372036854775808L)
        self.assertEqual(-(0b1111111111111111111111111111111111111111111111111111111111111111), -18446744073709551615L)
        # Ditto with a minus sign and NO parentheses
        # This failed in Python 2.2 through 2.2.2 and in 2.3a1
        self.assertEqual(-0b1000000000000000000000000000000000000000000000000000000000000000, -9223372036854775808L)
        self.assertEqual(-0b1111111111111111111111111111111111111111111111111111111111111111, -18446744073709551615L)


def test_main():
    test_support.run_unittest(TestHexOctBin)

if __name__ == "__main__":
    test_main()
| 48.296296 | 119 | 0.713848 | 834 | 9,128 | 7.775779 | 0.118705 | 0.242868 | 0.037008 | 0.055513 | 0.909946 | 0.896222 | 0.882806 | 0.882806 | 0.757286 | 0.741866 | 0 | 0.400993 | 0.205521 | 9,128 | 188 | 120 | 48.553191 | 0.493243 | 0.187883 | 0 | 0 | 0 | 0 | 0.001102 | 0 | 0 | 0 | 0.043927 | 0 | 0.875 | 0 | null | null | 0 | 0.016667 | null | null | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
a1b133030770735b4198a383c95dc2e1f77bd961 | 58,100 | py | Python | lattes_qualis/_Classes/Indicators.py | ellenjkr/LattesQualis | 4fa149ea9e1c58e12b03bd1b88474a0cc2c6d534 | ["MIT"] | null | null | null | lattes_qualis/_Classes/Indicators.py | ellenjkr/LattesQualis | 4fa149ea9e1c58e12b03bd1b88474a0cc2c6d534 | ["MIT"] | null | null | null | lattes_qualis/_Classes/Indicators.py | ellenjkr/LattesQualis | 4fa149ea9e1c58e12b03bd1b88474a0cc2c6d534 | ["MIT"] | null | null | null | from _Funções_e_Valores.verify_authors import treat_exceptions
from _Funções_e_Valores.values import ND
import pandas as pd
class Indicators():
    def __init__(self, egress_list, students_list, info, qualis_year, general=False):
        super(Indicators, self).__init__()
        self.egress_list = egress_list
        self.students_list = students_list
        self.info = info
        self.qualis_year = qualis_year
        self.general = general

    def get_SE(self, data_frame):  # Get the amount of publications that have students or egresses as authors
        # Get student and egress names
        egress_names = []
        for egress in self.egress_list:
            egress_names.append(treat_exceptions(egress.name.strip()))
        students_names = []
        for student in self.students_list:
            students_names.append(treat_exceptions(student.name.strip()))

        # Calculate the amount of students and egresses who appear as authors
        amount_SE = 0
        for index, row in data_frame.iterrows():
            SE = False
            for column in row.index:
                if "Autor" in str(column):
                    if data_frame[column][index] != "":  # If the value isn't null
                        # Verify if the author's name is on the egress list and if it's a valid publication year
                        for pos_egress, egress in enumerate(egress_names):
                            if data_frame[column][index] == egress:
                                if self.egress_list[pos_egress].period[str(int(data_frame["Ano"][index]))[2:4]] is True:
                                    SE = True
                        # Verify if the author's name is on the students list and if it's a valid publication year
                        for pos_student, student in enumerate(students_names):
                            if data_frame[column][index] == student:
                                if self.students_list[pos_student].period[str(data_frame["Ano"][index])[2:4]] is True:
                                    SE = True
            # If there's an egress or a student as an author of that publication, it increases the amount of SE
            if SE == True:
                amount_SE += 1

        return amount_SE
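
    # (Each person's `period` is assumed -- inferred from the lookups above -- to map
    # two-digit year strings such as '18' to booleans flagging whether that person
    # was an active student/egress in that year.)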
    def calculate_amount(self, data_frame, perc_aux):
        amount_SE = self.get_SE(data_frame)  # Get the amount of publications that have students or egresses as authors
        amount = len(data_frame.index)  # Amount of publications
        perc = f"{perc_aux * amount:.2f}%"  # Percentage of this type of publication
        try:
            perc_SE = f"{100/amount * amount_SE:.2f}%"  # Percentage with students or egresses
        except ZeroDivisionError:
            perc_SE = "0%"
        return (amount, amount_SE, perc, perc_SE)
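
    # A quick illustration of calculate_amount (hypothetical numbers, not from the
    # dataset): for a 4-row frame with 1 student/egress co-authored row and
    # perc_aux = 10, it returns (4, 1, '40.00%', '25.00%').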
    def build_table_2016_general(self, journals, proceedings, a1_b1, a1, a2, b1,
                                 b2_b5, b2, b3, b4, b5, others, Irestrito, Irestrito_journals, Irestrito_proceedings,
                                 Igeral, Igeral_journals, Igeral_proceedings, SE_journals, SE_proceedings, SE_a1_b1,
                                 SE_a1, SE_a2, SE_b1, SE_b2_b5, SE_b2, SE_b3, SE_b4, SE_b5, SE_others, percentages_SE,
                                 percentages, Irestrito_medio, Irestrito_medio_journals, Irestrito_medio_proceedings,
                                 Igeral_medio, Igeral_medio_journals, Igeral_medio_proceedings):
        type_qualis = ["Periódicos", "Anais", "A1-B1", "A1", "A2", "B1", "B2-B5", "B2", "B3", "B4", "B5", "Outros"]
        table = {f"Tipo/Qualis {self.qualis_year}": type_qualis, "Quantidade": [], "Porcentagem": [],
                 'Quantidade com alunos/egressos': [], "% Alunos/Egressos": []}

        table[f"Tipo/Qualis {self.qualis_year}"].extend([None, "Índice", "Irestrito", "Igeral",
                                                         "Irestrito Periódicos", "Igeral Periódicos",
                                                         "Irestrito Anais", "Igeral Anais"])
        table["Quantidade"].extend([journals, proceedings, a1_b1, a1, a2, b1, b2_b5, b2, b3, b4, b5, others,
                                    None, "Acumulado", Irestrito, Igeral, Irestrito_journals, Igeral_journals,
                                    Irestrito_proceedings, Igeral_proceedings])
        table['Quantidade com alunos/egressos'].extend([SE_journals, SE_proceedings, SE_a1_b1, SE_a1, SE_a2, SE_b1,
                                                        SE_b2_b5, SE_b2, SE_b3, SE_b4, SE_b5, SE_others] + [None] * 8)

        table["% Alunos/Egressos"] = percentages_SE
        table["% Alunos/Egressos"].extend([None] * 8)

        table["Porcentagem"] = percentages
        table["Porcentagem"].append(None)
        if self.general:
            table["Porcentagem"].extend(["Média por docente", Irestrito_medio, Igeral_medio,
                                         Irestrito_medio_journals, Igeral_medio_journals,
                                         Irestrito_medio_proceedings, Igeral_medio_proceedings])
        else:
            table["Porcentagem"].extend([None] * 7)

        return table

    # Proceedings and Journals separated
    def build_table_2016_separated(self, a1_b1, a1, a2, b1, b2_b5, b2, b3, b4, b5, others,
                                   Irestrito, Igeral, SE_a1_b1, SE_a1, SE_a2, SE_b1, SE_b2_b5, SE_b2, SE_b3, SE_b4,
                                   SE_b5, SE_others, percentages_SE, percentages, Irestrito_medio, Igeral_medio):
        type_qualis = ["A1-B1", "A1", "A2", "B1", "B2-B5", "B2", "B3", "B4", "B5", "Outros"]
        table = {f"Tipo/Qualis {self.qualis_year}": type_qualis, "Quantidade": [], "Porcentagem": [],
                 'Quantidade com alunos/egressos': [], "% Alunos/Egressos": []}

        table[f"Tipo/Qualis {self.qualis_year}"].extend([None, "Índice", "Irestrito", "Igeral"])
        table["Quantidade"].extend([a1_b1, a1, a2, b1, b2_b5, b2, b3, b4, b5, others,
                                    None, "Acumulado", Irestrito, Igeral])
        table['Quantidade com alunos/egressos'].extend([SE_a1_b1, SE_a1, SE_a2, SE_b1, SE_b2_b5, SE_b2, SE_b3,
                                                        SE_b4, SE_b5, SE_others] + [None] * 4)

        table["% Alunos/Egressos"] = percentages_SE
        table["% Alunos/Egressos"].extend([None] * 4)

        table["Porcentagem"] = percentages
        table["Porcentagem"].append(None)
        if self.general:
            table["Porcentagem"].extend(["Média por docente", Irestrito_medio, Igeral_medio])
        else:
            table["Porcentagem"].extend([None] * 3)

        return table

    def build_table_2019_general(self, journals, proceedings, a1_a4, a1, a2, a3, a4,
                                 b1_b4, b1, b2, b3, b4, others, Irestrito, Igeral, Irestrito_journals, Igeral_journals,
                                 Irestrito_proceedings, Igeral_proceedings, SE_journals, SE_proceedings, SE_a1_a4, SE_a1,
                                 SE_a2, SE_a3, SE_a4, SE_b1_b4, SE_b1, SE_b2, SE_b3, SE_b4, SE_others, percentages_SE,
                                 percentages, Irestrito_medio, Igeral_medio, Irestrito_medio_journals, Igeral_medio_journals,
                                 Irestrito_medio_proceedings, Igeral_medio_proceedings):
        # Build table
        type_qualis = ["Periódicos", "Anais", "A1-A4", "A1", "A2", "A3", "A4", "B1-B4", "B1", "B2", "B3", "B4", "Outros"]
        table = {f"Tipo/Qualis {self.qualis_year}": type_qualis, "Quantidade": [], "Porcentagem": [],
                 'Quantidade com alunos/egressos': [], "% Alunos/Egressos": []}

        table[f"Tipo/Qualis {self.qualis_year}"].extend([None, "Índice", "Irestrito", "Igeral",
                                                         "Irestrito Periódicos", "Igeral Periódicos",
                                                         "Irestrito Anais", "Igeral Anais"])
        table["Quantidade"].extend([journals, proceedings, a1_a4, a1, a2, a3, a4, b1_b4, b1, b2, b3, b4, others,
                                    None, "Acumulado", Irestrito, Igeral, Irestrito_journals, Igeral_journals,
                                    Irestrito_proceedings, Igeral_proceedings])
        table['Quantidade com alunos/egressos'].extend([SE_journals, SE_proceedings, SE_a1_a4, SE_a1, SE_a2, SE_a3,
                                                        SE_a4, SE_b1_b4, SE_b1, SE_b2, SE_b3, SE_b4, SE_others] + [None] * 8)

        table["% Alunos/Egressos"] = percentages_SE
        table["% Alunos/Egressos"].extend([None] * 8)

        table["Porcentagem"] = percentages
        table["Porcentagem"].append(None)
        if self.general:
            table["Porcentagem"].extend(["Média por docente", Irestrito_medio, Igeral_medio,
                                         Irestrito_medio_journals, Igeral_medio_journals,
                                         Irestrito_medio_proceedings, Igeral_medio_proceedings])
        else:
            table["Porcentagem"].extend([None] * 7)

        return table

    def build_table_2019_separated(self, a1_a4, a1, a2, a3, a4, b1_b4, b1, b2, b3, b4, others,
                                   Irestrito, Igeral, SE_a1_a4, SE_a1, SE_a2, SE_a3, SE_a4, SE_b1_b4, SE_b1, SE_b2, SE_b3, SE_b4,
                                   SE_others, percentages_SE, percentages, Irestrito_medio, Igeral_medio):
        # Build table
        type_qualis = ["A1-A4", "A1", "A2", "A3", "A4", "B1-B4", "B1", "B2", "B3", "B4", "Outros"]
        table = {f"Tipo/Qualis {self.qualis_year}": type_qualis, "Quantidade": [], "Porcentagem": [],
                 'Quantidade com alunos/egressos': [], "% Alunos/Egressos": []}

        table[f"Tipo/Qualis {self.qualis_year}"].extend([None, "Índice", "Irestrito", "Igeral"])
        table["Quantidade"].extend([a1_a4, a1, a2, a3, a4, b1_b4, b1, b2, b3, b4, others,
                                    None, "Acumulado", Irestrito, Igeral])
        table['Quantidade com alunos/egressos'].extend([SE_a1_a4, SE_a1, SE_a2, SE_a3, SE_a4, SE_b1_b4, SE_b1,
                                                        SE_b2, SE_b3, SE_b4, SE_others] + [None] * 4)

        table["% Alunos/Egressos"] = percentages_SE
        table["% Alunos/Egressos"].extend([None] * 4)

        table["Porcentagem"] = percentages
        table["Porcentagem"].append(None)
        if self.general:
            table["Porcentagem"].extend(["Média por docente", Irestrito_medio, Igeral_medio])
        else:
            table["Porcentagem"].extend([None] * 3)

        return table

    def get_irestrito_igeral_2016(self, a1, a2, b1, b2, b3, b4, b5):
        Irestrito = (a1 + a2*0.85 + b1*0.7)
        if Irestrito != 0:
            Irestrito = round(Irestrito, 2)
        Igeral = (a1 + a2*0.85 + b1*0.7 + b2*0.5 + b3*0.2 + b4*0.1 + b5*0.05)
        if Igeral != 0:
            Igeral = round(Igeral, 2)
        return (Irestrito, Igeral)

    def get_irestrito_igeral_2019(self, a1, a2, a3, a4, b1, b2, b3, b4):
        Irestrito = a1 + (a2 * 0.875) + (a3 * 0.75) + (a4 * 0.625)
        if Irestrito != 0:
            Irestrito = round(Irestrito, 2)
        Igeral = Irestrito + (b1 * 0.5) + (b2 * 0.2) + (b3 * 0.1) + (b4 * 0.05)
        if Igeral != 0:
            Igeral = round(Igeral, 2)
        return (Irestrito, Igeral)
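
    # A worked illustration of the two weightings (hypothetical counts, not from
    # the dataset): with a1=2, a2=1, b1=1, b3=3 and everything else 0,
    #   2016: Irestrito = 2 + 1*0.85 + 1*0.7 = 3.55; Igeral = 3.55 + 3*0.2 = 4.15
    #   2019: Irestrito = 2 + 1*0.875 = 2.88 (rounded); Igeral = 2.88 + 1*0.5 + 3*0.1 = 3.68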
    def apply_3x1_2016(self, a1_journals, a2_journals, b1_journals, b2_journals, b3_journals, b4_journals, b5_journals,
                       a1_proceedings, a2_proceedings, b1_proceedings, b2_proceedings, b3_proceedings, b4_proceedings, b5_proceedings):
        slots = {'EA1': a1_journals*3, 'EA2': a2_journals*3, 'EB1': b1_journals*3, 'EB2': b2_journals*3,
                 'EB3': b3_journals*3, 'EB4': b4_journals*3, 'EB5': b5_journals*3}
        events_qualis = {'EA1': a1_proceedings, 'EA2': a2_proceedings, 'EB1': b1_proceedings, 'EB2': b2_proceedings,
                         'EB3': b3_proceedings, 'EB4': b4_proceedings, 'EB5': b5_proceedings}

        remainder = 0
        for key in slots.keys():
            slots[key] += remainder
            remainder = 0
            if events_qualis[key] >= slots[key]:
                events_qualis[key] = slots[key]
            else:
                remainder += slots[key] - events_qualis[key]

        a1_total = a1_journals + events_qualis['EA1']
        a2_total = a2_journals + events_qualis['EA2']
        b1_total = b1_journals + events_qualis['EB1']
        b2_total = b2_journals + events_qualis['EB2']
        b3_total = b3_journals + events_qualis['EB3']
        b4_total = b4_journals + events_qualis['EB4']
        b5_total = b5_journals + events_qualis['EB5']

        Irestrito_3x1_proceedings, Igeral_3x1_proceedings = self.get_irestrito_igeral_2016(events_qualis['EA1'], events_qualis['EA2'], events_qualis['EB1'], events_qualis['EB2'], events_qualis['EB3'], events_qualis['EB4'], events_qualis['EB5'])
        Irestrito_3x1_total, Igeral_3x1_total = self.get_irestrito_igeral_2016(a1_total, a2_total, b1_total, b2_total, b3_total, b4_total, b5_total)
        return (Irestrito_3x1_proceedings, Igeral_3x1_proceedings, Irestrito_3x1_total, Igeral_3x1_total)
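
    # How the 3x1 cap works (inferred from the loop above): each journal paper in
    # a stratum opens three slots for proceedings papers in the same stratum;
    # proceedings beyond the available slots are discarded, while unused slots
    # carry over to the next (lower) stratum. E.g. with 2 A1 journal papers and
    # 4 A1 proceedings papers, all 4 proceedings fit in the 6 EA1 slots and the
    # 2 spare slots are added to EA2.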
def apply_3x1_2019(self, a1_journals, a2_journals, a3_journals, a4_journals, b1_journals, b2_journals, b3_journals, b4_journals,
a1_proceedings, a2_proceedings, a3_proceedings, a4_proceedings, b1_proceedings, b2_proceedings, b3_proceedings, b4_proceedings):
slots = {'EA1':a1_journals*3, 'EA2':a2_journals*3, 'EA3':a3_journals*3, 'EA4':a4_journals*3,
'EB1':b1_journals*3, 'EB2':b2_journals*3, 'EB3':b3_journals*3, 'EB4':b4_journals*3}
events_qualis = {'EA1':a1_proceedings, 'EA2':a2_proceedings, 'EA3':a3_proceedings, 'EA4':a4_proceedings,
'EB1':b1_proceedings, 'EB2':b2_proceedings, 'EB3':b3_proceedings, 'EB4':b4_proceedings}
remainder = 0
for key in slots.keys():
slots[key] += remainder
remainder = 0
if events_qualis[key] >= slots[key]:
events_qualis[key] = slots[key]
else:
remainder += slots[key] - events_qualis[key]
a1_total = a1_journals + events_qualis['EA1']
a2_total = a2_journals + events_qualis['EA2']
a3_total = a3_journals + events_qualis['EA3']
a4_total = a4_journals + events_qualis['EA4']
b1_total = b1_journals + events_qualis['EB1']
b2_total = b2_journals + events_qualis['EB2']
b3_total = b3_journals + events_qualis['EB3']
b4_total = b4_journals + events_qualis['EB4']
Irestrito_3x1_proceedings, Igeral_3x1_proceedings = self.get_irestrito_igeral_2019(events_qualis['EA1'], events_qualis['EA2'], events_qualis['EA3'], events_qualis['EA4'], events_qualis['EB1'], events_qualis['EB2'], events_qualis['EB3'], events_qualis['EB4'])
Irestrito_3x1_total, Igeral_3x1_total = self.get_irestrito_igeral_2019(a1_total, a2_total, a3_total, a4_total, b1_total, b2_total, b3_total, b4_total)
return (Irestrito_3x1_proceedings, Igeral_3x1_proceedings, Irestrito_3x1_total, Igeral_3x1_total)
def get_irestritos(self, Irestrito, Irestrito_journals, Irestrito_proceedings, Irestrito_3x1_proceedings, Irestrito_3x1_total):
self.irestritos = {'Total com trava':None, 'Total sem trava':None, 'Anais com trava':None, 'Anais sem trava':None, 'Periódicos':None}
self.irestritos['Total com trava'] = Irestrito_3x1_total
self.irestritos['Total sem trava'] = Irestrito
self.irestritos['Anais com trava'] = Irestrito_3x1_proceedings
self.irestritos['Anais sem trava'] = Irestrito_proceedings
self.irestritos['Periódicos'] = Irestrito_journals
def get_igerais(self, Igeral, Igeral_journals, Igeral_proceedings, Igeral_3x1_proceedings, Igeral_3x1_total):
self.igerais = {'Total com trava':None, 'Total sem trava':None, 'Anais com trava':None, 'Anais sem trava':None, 'Periódicos':None}
self.igerais['Total com trava'] = Igeral_3x1_total
self.igerais['Total sem trava'] = Igeral
self.igerais['Anais com trava'] = Igeral_3x1_proceedings
self.igerais['Anais sem trava'] = Igeral_proceedings
self.igerais['Periódicos'] = Igeral_journals
def get_indicators_2016(self):
data_frame = pd.DataFrame(self.info)
# Get total of publications that are not books or chapters
total_articles = 0
for i in data_frame["Tipo"]:
if i != "Livros" and i != "Capítulos":
total_articles += 1
if total_articles != 0:
perc_aux = 100/total_articles
else:
perc_aux = 0
journals_df = data_frame.loc[data_frame["Tipo"] == "Periódico"] # Get all publications on journals
journals, SE_journals, perc_journals, perc_SE_journals = self.calculate_amount(journals_df, perc_aux) # Perform calculations
# (amount of journals, amount of journals with students or egress as authors, percentage of publications on journals, percentage of publications on journals with students or egress as authors)
if journals != 0:
perc_aux_journals = 100/journals
else:
perc_aux_journals = 0
proceedings_df = data_frame.loc[data_frame["Tipo"] == "Anais"] # Get all publications on events
proceedings, SE_proceedings, perc_proceedings, perc_SE_proceedings = self.calculate_amount(proceedings_df, perc_aux) # Perform calculations
if proceedings != 0:
perc_aux_proceedings = 100/proceedings
else:
perc_aux_proceedings = 0
# ==========================================================================================================
a1 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "A1"] # Get all publications with "A1" Qualis
a1, SE_a1, perc_a1, perc_SE_a1 = self.calculate_amount(a1, perc_aux) # Perform calculations
a1_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "A1"] # Get all journals with "A1" Qualis
a1_journals, SE_a1_journals, perc_a1_journals, perc_SE_a1_journals = self.calculate_amount(a1_journals, perc_aux_journals) # Perform calculations
a1_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "A1"] # Get all proceedings with "A1" Qualis
a1_proceedings, SE_a1_proceedings, perc_a1_proceedings, perc_SE_a1_proceedings = self.calculate_amount(a1_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
a2 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "A2"] # Get all publications with "A2" Qualis
a2, SE_a2, perc_a2, perc_SE_a2 = self.calculate_amount(a2, perc_aux) # Perform calculations
a2_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "A2"] # Get all journals with "A2" Qualis
a2_journals, SE_a2_journals, perc_a2_journals, perc_SE_a2_journals = self.calculate_amount(a2_journals, perc_aux_journals) # Perform calculations
a2_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "A2"] # Get all proceedings with "A2" Qualis
a2_proceedings, SE_a2_proceedings, perc_a2_proceedings, perc_SE_a2_proceedings = self.calculate_amount(a2_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
b1 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "B1"] # Get all publications with "B1" Qualis
b1, SE_b1, perc_b1, perc_SE_b1 = self.calculate_amount(b1, perc_aux) # Perform calculations
b1_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "B1"] # Get all journals with "B1" Qualis
b1_journals, SE_b1_journals, perc_b1_journals, perc_SE_b1_journals = self.calculate_amount(b1_journals, perc_aux_journals) # Perform calculations
b1_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "B1"] # Get all proceedings with "B1" Qualis
b1_proceedings, SE_b1_proceedings, perc_b1_proceedings, perc_SE_b1_proceedings = self.calculate_amount(b1_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
b2 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "B2"] # Get all publications with "B2" Qualis
b2, SE_b2, perc_b2, perc_SE_b2 = self.calculate_amount(b2, perc_aux) # Perform calculations
b2_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "B2"] # Get all journals with "B2" Qualis
b2_journals, SE_b2_journals, perc_b2_journals, perc_SE_b2_journals = self.calculate_amount(b2_journals, perc_aux_journals) # Perform calculations
b2_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "B2"] # Get all proceedings with "B2" Qualis
b2_proceedings, SE_b2_proceedings, perc_b2_proceedings, perc_SE_b2_proceedings = self.calculate_amount(b2_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
b3 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "B3"] # Get all publications with "B3" Qualis
b3, SE_b3, perc_b3, perc_SE_b3 = self.calculate_amount(b3, perc_aux) # Perform calculations
b3_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "B3"] # Get all journals with "B3" Qualis
b3_journals, SE_b3_journals, perc_b3_journals, perc_SE_b3_journals = self.calculate_amount(b3_journals, perc_aux_journals) # Perform calculations
b3_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "B3"] # Get all proceedings with "B3" Qualis
b3_proceedings, SE_b3_proceedings, perc_b3_proceedings, perc_SE_b3_proceedings = self.calculate_amount(b3_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
b4 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "B4"] # Get all publications with "B4" Qualis
b4, SE_b4, perc_b4, perc_SE_b4 = self.calculate_amount(b4, perc_aux) # Perform calculations
b4_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "B4"] # Get all journals with "B4" Qualis
b4_journals, SE_b4_journals, perc_b4_journals, perc_SE_b4_journals = self.calculate_amount(b4_journals, perc_aux_journals) # Perform calculations
b4_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "B4"] # Get all proceedings with "B4" Qualis
b4_proceedings, SE_b4_proceedings, perc_b4_proceedings, perc_SE_b4_proceedings = self.calculate_amount(b4_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
b5 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "B5"] # Get all publications with "B4" Qualis
b5, SE_b5, perc_b5, perc_SE_b5 = self.calculate_amount(b5, perc_aux) # Perform calculations
b5_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "B5"] # Get all journals with "B5" Qualis
b5_journals, SE_b5_journals, perc_b5_journals, perc_SE_b5_journals = self.calculate_amount(b5_journals, perc_aux_journals) # Perform calculations
b5_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "B5"] # Get all proceedings with "B5" Qualis
b5_proceedings, SE_b5_proceedings, perc_b5_proceedings, perc_SE_b5_proceedings = self.calculate_amount(b5_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
# A1-B1 (all merged)
a1_b1 = a1 + a2 + b1
SE_a1_b1 = SE_a1 + SE_a2 + SE_b1
perc_a1_b1 = f"{perc_aux * a1_b1:.2f}%"
try:
perc_SE_a1_b1 = f"{100/a1_b1 * SE_a1_b1:.2f}%"
except ZeroDivisionError:
perc_SE_a1_b1 = "0%"
# A1-B1 (all merged) - Journals
a1_b1_journals = a1_journals + a2_journals + b1_journals
SE_a1_b1_journals = SE_a1_journals + SE_a2_journals + SE_b1_journals
perc_a1_b1_journals = f"{perc_aux_journals * a1_b1_journals:.2f}%"
try:
perc_SE_a1_b1_journals = f"{100/a1_b1_journals * SE_a1_b1_journals:.2f}%"
except ZeroDivisionError:
perc_SE_a1_b1_journals = "0%"
# A1-B1 (all merged) - Proceedings
a1_b1_proceedings = a1_proceedings + a2_proceedings + b1_proceedings
SE_a1_b1_proceedings = SE_a1_proceedings + SE_a2_proceedings + SE_b1_proceedings
perc_a1_b1_proceedings = f"{perc_aux_proceedings * a1_b1_proceedings:.2f}%"
try:
perc_SE_a1_b1_proceedings = f"{100/a1_b1_proceedings * SE_a1_b1_proceedings:.2f}%"
except ZeroDivisionError:
perc_SE_a1_b1_proceedings = "0%"
# ==========================================================================================================
# B2-B5 (all merged)
b2_b5 = b2 + b3 + b4 + b5
SE_b2_b5 = SE_b2 + SE_b3 + SE_b4 + SE_b5
perc_b2_b5 = f"{perc_aux * b2_b5:.2f}%"
try:
perc_SE_b2_b5 = f"{100/b2_b5 * SE_b2_b5:.2f}%"
except ZeroDivisionError:
perc_SE_b2_b5 = "0%"
# B2-B5 (all merged) - Journals
b2_b5_journals = b2_journals + b3_journals + b4_journals + b5_journals
SE_b2_b5_journals = SE_b2_journals + SE_b3_journals + SE_b4_journals + SE_b5_journals
perc_b2_b5_journals = f"{perc_aux_journals * b2_b5_journals:.2f}%"
try:
perc_SE_b2_b5_journals = f"{100/b2_b5_journals * SE_b2_b5_journals:.2f}%"
except ZeroDivisionError:
perc_SE_b2_b5_journals = "0%"
# B2-B5 (all merged) - Proceedings
b2_b5_proceedings = b2_proceedings + b3_proceedings + b4_proceedings + b5_proceedings
SE_b2_b5_proceedings = SE_b2_proceedings + SE_b3_proceedings + SE_b4_proceedings + SE_b5_proceedings
perc_b2_b5_proceedings = f"{perc_aux_proceedings * b2_b5_proceedings:.2f}%"
try:
perc_SE_b2_b5_proceedings = f"{100/b2_b5_proceedings * SE_b2_b5_proceedings:.2f}%"
except ZeroDivisionError:
perc_SE_b2_b5_proceedings = "0%"
# ==========================================================================================================
# Other - Not in A1-B1 or B2-B5
others = data_frame.loc[((data_frame[f"Qualis {self.qualis_year}"] != "A1") & (data_frame[f"Qualis {self.qualis_year}"] != "A2") & (data_frame[f"Qualis {self.qualis_year}"] != "A3") & (data_frame[f"Qualis {self.qualis_year}"] != "A4") & (data_frame["Tipo"] != "Livros") & (data_frame["Tipo"] != "Capítulos"))]
others = others.loc[((others[f"Qualis {self.qualis_year}"] != "B1") & (others[f"Qualis {self.qualis_year}"] != "B2") & (others[f"Qualis {self.qualis_year}"] != "B3") & (others[f"Qualis {self.qualis_year}"] != "B4") & (others[f"Qualis {self.qualis_year}"] != "B5"))]
others, SE_others, perc_others, perc_SE_others = self.calculate_amount(others, perc_aux) # Perform calculations
# Other - Not in A1-B1 or B2-B5 - Journals
others_journals = journals_df.loc[((journals_df[f"Qualis {self.qualis_year}"] != "A1") & (journals_df[f"Qualis {self.qualis_year}"] != "A2") & (journals_df[f"Qualis {self.qualis_year}"] != "A3") & (journals_df[f"Qualis {self.qualis_year}"] != "A4") & (journals_df["Tipo"] != "Livros") & (journals_df["Tipo"] != "Capítulos"))]
others_journals = others_journals.loc[((others_journals[f"Qualis {self.qualis_year}"] != "B1") & (others_journals[f"Qualis {self.qualis_year}"] != "B2") & (others_journals[f"Qualis {self.qualis_year}"] != "B3") & (others_journals[f"Qualis {self.qualis_year}"] != "B4") & (others_journals[f"Qualis {self.qualis_year}"] != "B5"))]
others_journals, SE_others_journals, perc_others_journals, perc_SE_others_journals = self.calculate_amount(others_journals, perc_aux_journals) # Perform calculations
# Other - Not in A1-B1 or B2-B5 - Proceedings
others_proceedings = proceedings_df.loc[((proceedings_df[f"Qualis {self.qualis_year}"] != "A1") & (proceedings_df[f"Qualis {self.qualis_year}"] != "A2") & (proceedings_df[f"Qualis {self.qualis_year}"] != "A3") & (proceedings_df[f"Qualis {self.qualis_year}"] != "A4") & (proceedings_df["Tipo"] != "Livros") & (proceedings_df["Tipo"] != "Capítulos"))]
others_proceedings = others_proceedings.loc[((others_proceedings[f"Qualis {self.qualis_year}"] != "B1") & (others_proceedings[f"Qualis {self.qualis_year}"] != "B2") & (others_proceedings[f"Qualis {self.qualis_year}"] != "B3") & (others_proceedings[f"Qualis {self.qualis_year}"] != "B4") & (others_proceedings[f"Qualis {self.qualis_year}"] != "B5"))]
others_proceedings, SE_others_proceedings, perc_others_proceedings, perc_SE_others_proceedings = self.calculate_amount(others_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
percentages = [perc_journals, perc_proceedings, perc_a1_b1, perc_a1, perc_a2, perc_b1, perc_b2_b5, perc_b2, perc_b3, perc_b4, perc_b5, perc_others]
percentages_SE = [perc_SE_journals, perc_SE_proceedings, perc_SE_a1_b1, perc_SE_a1, perc_SE_a2, perc_SE_b1, perc_SE_b2_b5, perc_SE_b2, perc_SE_b3, perc_SE_b4, perc_SE_b5, perc_SE_others]
percentages_journals = [perc_a1_b1_journals, perc_a1_journals, perc_a2_journals, perc_b1_journals, perc_b2_b5_journals, perc_b2_journals, perc_b3_journals, perc_b4_journals, perc_b5_journals, perc_others_journals]
percentages_SE_journals = [perc_SE_a1_b1_journals, perc_SE_a1_journals, perc_SE_a2_journals, perc_SE_b1_journals, perc_SE_b2_b5_journals, perc_SE_b2_journals, perc_SE_b3_journals, perc_SE_b4_journals, perc_SE_b5_journals, perc_SE_others_journals]
percentages_proceedings = [perc_a1_b1_proceedings, perc_a1_proceedings, perc_a2_proceedings, perc_b1_proceedings, perc_b2_b5_proceedings, perc_b2_proceedings, perc_b3_proceedings, perc_b4_proceedings, perc_b5_proceedings, perc_others_proceedings]
percentages_SE_proceedings = [perc_SE_a1_b1_proceedings, perc_SE_a1_proceedings, perc_SE_a2_proceedings, perc_SE_b1_proceedings, perc_SE_b2_b5_proceedings, perc_SE_b2_proceedings, perc_SE_b3_proceedings, perc_SE_b4_proceedings, perc_SE_b5_proceedings, perc_SE_others_proceedings]
# ==========================================================================================================
Irestrito, Igeral = self.get_irestrito_igeral_2016(a1, a2, b1, b2, b3, b4, b5)
if Irestrito != 0:
Irestrito_medio = round((Irestrito/ND), 2)
else:
Irestrito_medio = 0
if Igeral != 0:
Igeral_medio = round((Igeral/ND), 2)
else:
Igeral_medio = 0
Irestrito_journals, Igeral_journals = self.get_irestrito_igeral_2016(a1_journals, a2_journals, b1_journals, b2_journals, b3_journals, b4_journals, b5_journals)
if Irestrito_journals != 0:
Irestrito_medio_journals = round((Irestrito_journals/ND), 2)
else:
Irestrito_medio_journals = 0
if Igeral_journals != 0:
Igeral_medio_journals = round((Igeral_journals/ND), 2)
else:
Igeral_medio_journals = 0
Irestrito_proceedings, Igeral_proceedings = self.get_irestrito_igeral_2016(a1_proceedings, a2_proceedings, b1_proceedings, b2_proceedings, b3_proceedings, b4_proceedings, b5_proceedings)
if Irestrito_proceedings != 0:
Irestrito_medio_proceedings = round((Irestrito_proceedings/ND), 2)
else:
Irestrito_medio_proceedings = 0
if Igeral_proceedings != 0:
Igeral_medio_proceedings = round((Igeral_proceedings/ND), 2)
else:
Igeral_medio_proceedings = 0
# ==========================================================================================================
table_general = self.build_table_2016_general(journals, proceedings, a1_b1, a1, a2, b1,
b2_b5, b2, b3, b4, b5, others, Irestrito, Irestrito_journals, Irestrito_proceedings,
Igeral, Igeral_journals, Igeral_proceedings, SE_journals, SE_proceedings, SE_a1_b1,
SE_a1, SE_a2, SE_b1, SE_b2_b5, SE_b2, SE_b3, SE_b4, SE_b5, SE_others, percentages_SE,
percentages, Irestrito_medio, Irestrito_medio_journals, Irestrito_medio_proceedings,
Igeral_medio, Igeral_medio_journals, Igeral_medio_proceedings)
table_journals = self.build_table_2016_separated(a1_b1_journals, a1_journals, a2_journals, b1_journals,
b2_b5_journals, b2_journals, b3_journals, b4_journals, b5_journals, others_journals, Irestrito_journals,
Igeral_journals, SE_a1_b1_journals, SE_a1_journals, SE_a2_journals, SE_b1_journals, SE_b2_b5_journals,
SE_b2_journals, SE_b3_journals, SE_b4_journals, SE_b5_journals, SE_others_journals, percentages_SE_journals,
percentages_journals, Irestrito_medio_journals, Igeral_medio_journals)
table_proceedings = self.build_table_2016_separated(a1_b1_proceedings, a1_proceedings, a2_proceedings, b1_proceedings,
b2_b5_proceedings, b2_proceedings, b3_proceedings, b4_proceedings, b5_proceedings, others_proceedings, Irestrito_proceedings,
Igeral_proceedings, SE_a1_b1_proceedings, SE_a1_proceedings, SE_a2_proceedings, SE_b1_proceedings, SE_b2_b5_proceedings,
SE_b2_proceedings, SE_b3_proceedings, SE_b4_proceedings, SE_b5_proceedings, SE_others_proceedings, percentages_SE_proceedings,
percentages_proceedings, Irestrito_medio_proceedings, Igeral_medio_proceedings)
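        # For the general report, also compute the 3x1-weighted variant of the indexes (the exact weighting is defined in apply_3x1_2016) and accumulate every variant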
        if self.general:
Irestrito_3x1_proceedings, Igeral_3x1_proceedings, Irestrito_3x1_total, Igeral_3x1_total = self.apply_3x1_2016(a1_journals, a2_journals,
b1_journals, b2_journals, b3_journals, b4_journals, b5_journals, a1_proceedings, a2_proceedings,
b1_proceedings, b2_proceedings, b3_proceedings, b4_proceedings, b5_proceedings)
self.get_irestritos(Irestrito, Irestrito_journals, Irestrito_proceedings, Irestrito_3x1_proceedings, Irestrito_3x1_total)
self.get_igerais(Igeral, Igeral_journals, Igeral_proceedings, Igeral_3x1_proceedings, Igeral_3x1_total)
return (pd.DataFrame(table_general), pd.DataFrame(table_journals), pd.DataFrame(table_proceedings))
def get_indicators_2019(self):
data_frame = pd.DataFrame(self.info)
        # Get total of publications that are not books or chapters
        total_articles = int((~data_frame["Tipo"].isin(["Livros", "Capítulos"])).sum())
if total_articles != 0:
perc_aux = 100/total_articles
else:
perc_aux = 0
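        # perc_aux turns a raw publication count into a percentage of all articles (count * perc_aux)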
journals_df = data_frame.loc[data_frame["Tipo"] == "Periódico"] # Get all publications on journals
journals, SE_journals, perc_journals, perc_SE_journals = self.calculate_amount(journals_df, perc_aux) # Perform calculations
# (amount of journals, amount of journals with students or egress as authors, percentage of publications on journals, percentage of publications on journals with students or egress as authors)
if journals != 0:
perc_aux_journals = 100/journals
else:
perc_aux_journals = 0
proceedings_df = data_frame.loc[data_frame["Tipo"] == "Anais"] # Get all publications on events
proceedings, SE_proceedings, perc_proceedings, perc_SE_proceedings = self.calculate_amount(proceedings_df, perc_aux) # Perform calculations
if proceedings != 0:
perc_aux_proceedings = 100/proceedings
else:
perc_aux_proceedings = 0
# ==========================================================================================================
a1 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "A1"] # Get all publications with "A1" Qualis
a1, SE_a1, perc_a1, perc_SE_a1 = self.calculate_amount(a1, perc_aux) # Perform calculations
a1_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "A1"] # Get all journals with "A1" Qualis
a1_journals, SE_a1_journals, perc_a1_journals, perc_SE_a1_journals = self.calculate_amount(a1_journals, perc_aux_journals) # Perform calculations
a1_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "A1"] # Get all proceedings with "A1" Qualis
a1_proceedings, SE_a1_proceedings, perc_a1_proceedings, perc_SE_a1_proceedings = self.calculate_amount(a1_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
a2 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "A2"] # Get all publications with "A2" Qualis
a2, SE_a2, perc_a2, perc_SE_a2 = self.calculate_amount(a2, perc_aux) # Perform calculations
a2_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "A2"] # Get all journals with "A2" Qualis
a2_journals, SE_a2_journals, perc_a2_journals, perc_SE_a2_journals = self.calculate_amount(a2_journals, perc_aux_journals) # Perform calculations
a2_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "A2"] # Get all proceedings with "A2" Qualis
a2_proceedings, SE_a2_proceedings, perc_a2_proceedings, perc_SE_a2_proceedings = self.calculate_amount(a2_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
a3 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "A3"] # Get all publications with "A3" Qualis
a3, SE_a3, perc_a3, perc_SE_a3 = self.calculate_amount(a3, perc_aux) # Perform calculations
a3_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "A3"] # Get all journals with "A3" Qualis
a3_journals, SE_a3_journals, perc_a3_journals, perc_SE_a3_journals = self.calculate_amount(a3_journals, perc_aux_journals) # Perform calculations
a3_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "A3"] # Get all proceedings with "A3" Qualis
a3_proceedings, SE_a3_proceedings, perc_a3_proceedings, perc_SE_a3_proceedings = self.calculate_amount(a3_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
a4 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "A4"] # Get all publications with "A4" Qualis
a4, SE_a4, perc_a4, perc_SE_a4 = self.calculate_amount(a4, perc_aux) # Perform calculations
a4_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "A4"] # Get all journals with "A4" Qualis
a4_journals, SE_a4_journals, perc_a4_journals, perc_SE_a4_journals = self.calculate_amount(a4_journals, perc_aux_journals) # Perform calculations
a4_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "A4"] # Get all proceedings with "A4" Qualis
a4_proceedings, SE_a4_proceedings, perc_a4_proceedings, perc_SE_a4_proceedings = self.calculate_amount(a4_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
b1 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "B1"] # Get all publications with "B1" Qualis
b1, SE_b1, perc_b1, perc_SE_b1 = self.calculate_amount(b1, perc_aux) # Perform calculations
b1_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "B1"] # Get all journals with "B1" Qualis
b1_journals, SE_b1_journals, perc_b1_journals, perc_SE_b1_journals = self.calculate_amount(b1_journals, perc_aux_journals) # Perform calculations
b1_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "B1"] # Get all proceedings with "B1" Qualis
b1_proceedings, SE_b1_proceedings, perc_b1_proceedings, perc_SE_b1_proceedings = self.calculate_amount(b1_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
b2 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "B2"] # Get all publications with "B2" Qualis
b2, SE_b2, perc_b2, perc_SE_b2 = self.calculate_amount(b2, perc_aux) # Perform calculations
b2_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "B2"] # Get all journals with "B2" Qualis
b2_journals, SE_b2_journals, perc_b2_journals, perc_SE_b2_journals = self.calculate_amount(b2_journals, perc_aux_journals) # Perform calculations
b2_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "B2"] # Get all proceedings with "B2" Qualis
b2_proceedings, SE_b2_proceedings, perc_b2_proceedings, perc_SE_b2_proceedings = self.calculate_amount(b2_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
b3 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "B3"] # Get all publications with "B3" Qualis
b3, SE_b3, perc_b3, perc_SE_b3 = self.calculate_amount(b3, perc_aux) # Perform calculations
b3_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "B3"] # Get all journals with "B3" Qualis
b3_journals, SE_b3_journals, perc_b3_journals, perc_SE_b3_journals = self.calculate_amount(b3_journals, perc_aux_journals) # Perform calculations
b3_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "B3"] # Get all proceedings with "B3" Qualis
b3_proceedings, SE_b3_proceedings, perc_b3_proceedings, perc_SE_b3_proceedings = self.calculate_amount(b3_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
b4 = data_frame.loc[data_frame[f"Qualis {self.qualis_year}"] == "B4"] # Get all publications with "B4" Qualis
b4, SE_b4, perc_b4, perc_SE_b4 = self.calculate_amount(b4, perc_aux) # Perform calculations
b4_journals = journals_df.loc[journals_df[f"Qualis {self.qualis_year}"] == "B4"] # Get all journals with "B4" Qualis
b4_journals, SE_b4_journals, perc_b4_journals, perc_SE_b4_journals = self.calculate_amount(b4_journals, perc_aux_journals) # Perform calculations
b4_proceedings = proceedings_df.loc[proceedings_df[f"Qualis {self.qualis_year}"] == "B4"] # Get all proceedings with "B4" Qualis
b4_proceedings, SE_b4_proceedings, perc_b4_proceedings, perc_SE_b4_proceedings = self.calculate_amount(b4_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
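        # The merged bands below are plain sums of the per-stratum counts; each SE percentage is taken over the band total, with ZeroDivisionError guarding empty bands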
# A1-A4 (all merged)
a1_a4 = a1 + a2 + a3 + a4
SE_a1_a4 = SE_a1 + SE_a2 + SE_a3 + SE_a4
perc_a1_a4 = f"{perc_aux * a1_a4:.2f}%"
try:
perc_SE_a1_a4 = f"{100/a1_a4 * SE_a1_a4:.2f}%"
except ZeroDivisionError:
perc_SE_a1_a4 = "0%"
# A1-A4 (all merged) - Journals
a1_a4_journals = a1_journals + a2_journals + a3_journals + a4_journals
SE_a1_a4_journals = SE_a1_journals + SE_a2_journals + SE_a3_journals + SE_a4_journals
perc_a1_a4_journals = f"{perc_aux_journals * a1_a4_journals:.2f}%"
try:
perc_SE_a1_a4_journals = f"{100/a1_a4_journals * SE_a1_a4_journals:.2f}%"
except ZeroDivisionError:
perc_SE_a1_a4_journals = "0%"
# A1-A4 (all merged) - Proceedings
a1_a4_proceedings = a1_proceedings + a2_proceedings + a3_proceedings + a4_proceedings
SE_a1_a4_proceedings = SE_a1_proceedings + SE_a2_proceedings + SE_a3_proceedings + SE_a4_proceedings
perc_a1_a4_proceedings = f"{perc_aux_proceedings * a1_a4_proceedings:.2f}%"
try:
perc_SE_a1_a4_proceedings = f"{100/a1_a4_proceedings * SE_a1_a4_proceedings:.2f}%"
except ZeroDivisionError:
perc_SE_a1_a4_proceedings = "0%"
# ==========================================================================================================
# B1-B4 (all merged)
b1_b4 = b1 + b2 + b3 + b4
SE_b1_b4 = SE_b1 + SE_b2 + SE_b3 + SE_b4
perc_b1_b4 = f"{perc_aux * b1_b4:.2f}%"
try:
perc_SE_b1_b4 = f"{100/b1_b4 * SE_b1_b4:.2f}%"
except ZeroDivisionError:
perc_SE_b1_b4 = "0%"
# B1-B4 (all merged) - Journals
b1_b4_journals = b1_journals + b2_journals + b3_journals + b4_journals
SE_b1_b4_journals = SE_b1_journals + SE_b2_journals + SE_b3_journals + SE_b4_journals
perc_b1_b4_journals = f"{perc_aux_journals * b1_b4_journals:.2f}%"
try:
perc_SE_b1_b4_journals = f"{100/b1_b4_journals * SE_b1_b4_journals:.2f}%"
except ZeroDivisionError:
perc_SE_b1_b4_journals = "0%"
# B1-B4 (all merged) - Proceedings
b1_b4_proceedings = b1_proceedings + b2_proceedings + b3_proceedings + b4_proceedings
SE_b1_b4_proceedings = SE_b1_proceedings + SE_b2_proceedings + SE_b3_proceedings + SE_b4_proceedings
perc_b1_b4_proceedings = f"{perc_aux_proceedings * b1_b4_proceedings:.2f}%"
try:
perc_SE_b1_b4_proceedings = f"{100/b1_b4_proceedings * SE_b1_b4_proceedings:.2f}%"
except ZeroDivisionError:
perc_SE_b1_b4_proceedings = "0%"
# ==========================================================================================================
        # Other - Not in A1-A4 or B1-B4 (B5 is also excluded)
        others = data_frame.loc[~data_frame[f"Qualis {self.qualis_year}"].isin(["A1", "A2", "A3", "A4", "B1", "B2", "B3", "B4", "B5"]) & ~data_frame["Tipo"].isin(["Livros", "Capítulos"])]
others, SE_others, perc_others, perc_SE_others = self.calculate_amount(others, perc_aux) # Perform calculations
        # Other - Not in A1-A4 or B1-B4 (B5 is also excluded) - Journals
        others_journals = journals_df.loc[~journals_df[f"Qualis {self.qualis_year}"].isin(["A1", "A2", "A3", "A4", "B1", "B2", "B3", "B4", "B5"]) & ~journals_df["Tipo"].isin(["Livros", "Capítulos"])]
others_journals, SE_others_journals, perc_others_journals, perc_SE_others_journals = self.calculate_amount(others_journals, perc_aux_journals) # Perform calculations
        # Other - Not in A1-A4 or B1-B4 (B5 is also excluded) - Proceedings
        others_proceedings = proceedings_df.loc[~proceedings_df[f"Qualis {self.qualis_year}"].isin(["A1", "A2", "A3", "A4", "B1", "B2", "B3", "B4", "B5"]) & ~proceedings_df["Tipo"].isin(["Livros", "Capítulos"])]
others_proceedings, SE_others_proceedings, perc_others_proceedings, perc_SE_others_proceedings = self.calculate_amount(others_proceedings, perc_aux_proceedings) # Perform calculations
# ==========================================================================================================
percentages = [perc_journals, perc_proceedings, perc_a1_a4, perc_a1, perc_a2, perc_a3, perc_a4, perc_b1_b4, perc_b1, perc_b2, perc_b3, perc_b4, perc_others]
percentages_SE = [perc_SE_journals, perc_SE_proceedings, perc_SE_a1_a4, perc_SE_a1, perc_SE_a2, perc_SE_a3, perc_SE_a4, perc_SE_b1_b4, perc_SE_b1, perc_SE_b2, perc_SE_b3, perc_SE_b4, perc_SE_others]
percentages_journals = [perc_a1_a4_journals, perc_a1_journals, perc_a2_journals, perc_a3_journals, perc_a4_journals, perc_b1_b4_journals, perc_b1_journals, perc_b2_journals, perc_b3_journals, perc_b4_journals, perc_others_journals]
percentages_SE_journals = [perc_SE_a1_a4_journals, perc_SE_a1_journals, perc_SE_a2_journals, perc_SE_a3_journals, perc_SE_a4_journals, perc_SE_b1_b4_journals, perc_SE_b1_journals, perc_SE_b2_journals, perc_SE_b3_journals, perc_SE_b4_journals, perc_SE_others_journals]
percentages_proceedings = [perc_a1_a4_proceedings, perc_a1_proceedings, perc_a2_proceedings, perc_a3_proceedings, perc_a4_proceedings, perc_b1_b4_proceedings, perc_b1_proceedings, perc_b2_proceedings, perc_b3_proceedings, perc_b4_proceedings, perc_others_proceedings]
percentages_SE_proceedings = [perc_SE_a1_a4_proceedings, perc_SE_a1_proceedings, perc_SE_a2_proceedings, perc_SE_a3_proceedings, perc_SE_a4_proceedings, perc_SE_b1_b4_proceedings, perc_SE_b1_proceedings, perc_SE_b2_proceedings, perc_SE_b3_proceedings, perc_SE_b4_proceedings, perc_SE_others_proceedings]
# ==========================================================================================================
# Calculate Irestrito and Igeral
Irestrito, Igeral = self.get_irestrito_igeral_2019(a1, a2, a3, a4, b1, b2, b3, b4)
if Irestrito != 0:
Irestrito_medio = round((Irestrito/ND), 2)
else:
Irestrito_medio = 0
if Igeral != 0:
Igeral_medio = round((Igeral/ND), 2)
else:
Igeral_medio = 0
Irestrito_journals, Igeral_journals = self.get_irestrito_igeral_2019(a1_journals, a2_journals, a3_journals, a4_journals, b1_journals, b2_journals, b3_journals, b4_journals)
if Irestrito_journals != 0:
Irestrito_medio_journals = round((Irestrito_journals/ND), 2)
else:
Irestrito_medio_journals = 0
if Igeral_journals != 0:
Igeral_medio_journals = round((Igeral_journals/ND), 2)
else:
Igeral_medio_journals = 0
Irestrito_proceedings, Igeral_proceedings = self.get_irestrito_igeral_2019(a1_proceedings, a2_proceedings, a3_proceedings, a4_proceedings, b1_proceedings, b2_proceedings, b3_proceedings, b4_proceedings)
if Irestrito_proceedings != 0:
Irestrito_medio_proceedings = round((Irestrito_proceedings/ND), 2)
else:
Irestrito_medio_proceedings = 0
if Igeral_proceedings != 0:
Igeral_medio_proceedings = round((Igeral_proceedings/ND), 2)
else:
Igeral_medio_proceedings = 0
# ==========================================================================================================
table_general = self.build_table_2019_general(journals, proceedings, a1_a4, a1, a2, a3, a4,
b1_b4, b1, b2, b3, b4, others, Irestrito, Igeral, Irestrito_journals, Igeral_journals,
Irestrito_proceedings, Igeral_proceedings, SE_journals, SE_proceedings, SE_a1_a4, SE_a1,
SE_a2, SE_a3, SE_a4, SE_b1_b4, SE_b1, SE_b2, SE_b3, SE_b4, SE_others, percentages_SE,
percentages, Irestrito_medio, Igeral_medio, Irestrito_medio_journals, Igeral_medio_journals,
Irestrito_medio_proceedings, Igeral_medio_proceedings)
table_journals = self.build_table_2019_separated(a1_a4_journals, a1_journals, a2_journals, a3_journals, a4_journals,
b1_b4_journals, b1_journals, b2_journals, b3_journals, b4_journals, others_journals, Irestrito_journals,
Igeral_journals, SE_a1_a4_journals, SE_a1_journals, SE_a2_journals, SE_a3_journals, SE_a4_journals,
SE_b1_b4_journals, SE_b1_journals, SE_b2_journals, SE_b3_journals, SE_b4_journals, SE_others_journals,
percentages_SE_journals, percentages_journals, Irestrito_medio_journals, Igeral_medio_journals)
table_proceedings = self.build_table_2019_separated(a1_a4_proceedings, a1_proceedings, a2_proceedings, a3_proceedings, a4_proceedings,
b1_b4_proceedings, b1_proceedings, b2_proceedings, b3_proceedings, b4_proceedings, others_proceedings, Irestrito_proceedings,
Igeral_proceedings, SE_a1_a4_proceedings, SE_a1_proceedings, SE_a2_proceedings, SE_a3_proceedings, SE_a4_proceedings,
SE_b1_b4_proceedings, SE_b1_proceedings, SE_b2_proceedings, SE_b3_proceedings, SE_b4_proceedings, SE_others_proceedings,
percentages_SE_proceedings, percentages_proceedings, Irestrito_medio_proceedings, Igeral_medio_proceedings)
        if self.general:
Irestrito_3x1_proceedings, Igeral_3x1_proceedings, Irestrito_3x1_total, Igeral_3x1_total = self.apply_3x1_2019(a1_journals, a2_journals, a3_journals, a4_journals,
b1_journals, b2_journals, b3_journals, b4_journals, a1_proceedings, a2_proceedings, a3_proceedings, a4_proceedings,
b1_proceedings, b2_proceedings, b3_proceedings, b4_proceedings)
self.get_irestritos(Irestrito, Irestrito_journals, Irestrito_proceedings, Irestrito_3x1_proceedings, Irestrito_3x1_total)
self.get_igerais(Igeral, Igeral_journals, Igeral_proceedings, Igeral_3x1_proceedings, Igeral_3x1_total)
return (pd.DataFrame(table_general), pd.DataFrame(table_journals), pd.DataFrame(table_proceedings))
| 58.041958 | 352 | 0.718709 | 7,886 | 58,100 | 4.969947 | 0.022825 | 0.02281 | 0.045722 | 0.064807 | 0.924451 | 0.89243 | 0.864032 | 0.845661 | 0.830301 | 0.814788 | 0 | 0.037717 | 0.113339 | 58,100 | 1,000 | 353 | 58.1 | 0.72309 | 0.13167 | 0 | 0.70494 | 0 | 0 | 0.194212 | 0.009149 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020027 | false | 0 | 0.004005 | 0 | 0.041389 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a1bca957ff3c86ff25f0ee725cac4c0e23a7e135 | 144 | py | Python | discord/ext/commands/help.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | [
"MIT"
] | null | null | null | discord/ext/commands/help.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | [
"MIT"
] | null | null | null | discord/ext/commands/help.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | [
"MIT"
] | null | null | null | from disnake.ext.commands.help import *
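# Compatibility shim: this module lives at discord/ext/commands/help.py and simply
# re-exports disnake's help module, including private names, under the legacy namespace.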
from disnake.ext.commands.help import __dict__ as __original_dict__
locals().update(__original_dict__)
| 28.8 | 67 | 0.833333 | 20 | 144 | 5.3 | 0.55 | 0.207547 | 0.264151 | 0.415094 | 0.603774 | 0.603774 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 144 | 4 | 68 | 36 | 0.80303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
a1df6d70eeba8a939fe81d7ba56897b0e975f936 | 9,618 | py | Python | exact/exact/tagger_messages/views.py | maubreville/Exact | 2f4ce50054bfe5350a106ef3fa1a2f03c90bbbef | [
"MIT"
] | 43 | 2020-01-29T17:19:21.000Z | 2022-03-29T11:11:32.000Z | exact/exact/tagger_messages/views.py | maubreville/Exact | 2f4ce50054bfe5350a106ef3fa1a2f03c90bbbef | [
"MIT"
] | 41 | 2020-01-31T09:31:31.000Z | 2022-02-24T15:55:21.000Z | exact/exact/tagger_messages/views.py | maubreville/Exact | 2f4ce50054bfe5350a106ef3fa1a2f03c90bbbef | [
"MIT"
] | 16 | 2020-02-11T18:26:32.000Z | 2021-07-30T09:05:15.000Z | from datetime import date, timedelta
from django.core.paginator import Paginator
from django.contrib.auth.decorators import login_required
from django.views.decorators.http import require_POST
from django.contrib.admin.views.decorators import staff_member_required
from django.contrib import messages
from django.db.models import Q
from django.template.response import TemplateResponse
from django.http import HttpResponseRedirect
from django.db import transaction
from exact.tagger_messages.models import Message, TeamMessage, GlobalMessage
from exact.tagger_messages.forms import TeamMessageCreationForm, GlobalMessageCreationForm
from exact.users.models import TeamMembership
from exact.users.models import User, Team
from django.conf import settings
@require_POST
@login_required
def send_team_message(request):
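    """Post a message to a team; only an admin of that team may send it, and the author is immediately marked as a reader."""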
form = TeamMessageCreationForm(request.POST)
if (form.is_valid() and TeamMembership.objects.filter(
user=request.user, team=form.instance.team, is_admin=True
).exists()):
with transaction.atomic():
team_message = form.save(commit=False)
team_message.creator = request.user
team_message.save()
team_message.read_by.add(request.user)
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
messages.error(request, 'Invalid message form')
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
@require_POST
@staff_member_required
def send_global_message(request):
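    """Post a site-wide announcement; restricted to staff members by the decorator above."""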
form = GlobalMessageCreationForm(request.POST)
if form.is_valid():
with transaction.atomic():
team_message = form.save(commit=False)
team_message.creator = request.user
team_message.save()
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
messages.error(request, 'Invalid message form')
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
@require_POST
@login_required
def read_message(request, message_id):
message = Message.objects.get(id=message_id)
message.read_by.add(request.user)
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
@require_POST
@login_required
def read_all_messages(request):
    # Collect every unread team message for this user and mark them all as read
    # (a dedicated name avoids shadowing the imported django.contrib.messages module)
    unread_messages = Message.in_range(TeamMessage.get_messages_for_user(request.user)).filter(~Q(read_by=request.user))
    current_user = User.objects.get(username=request.user.username)
    current_user.read_messages.add(*unread_messages)
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
@require_POST
@login_required
def read_all_annoucements(request):
global_annoucements_all = GlobalMessage.get(request.user).filter(~Q(read_by=request.user))
global_annoucements = Message.in_range(global_annoucements_all)
current_user = User.objects.get(username=request.user.username)
current_user.read_messages.add(*global_annoucements)
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
@require_POST
@login_required
def delete_message(request, message_id):
if request.user.is_staff:
Message.objects.get(id=message_id).delete()
else:
Message.objects.filter(id=message_id, creator=request.user).delete()
return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
@login_required
def overview_unread(request):
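    """Show the user's unread team messages, paginated, together with the creation form for teams the user administers."""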
usermessages = Message.in_range(TeamMessage.get_messages_for_user(request.user)).filter(~Q(read_by=request.user))
page = request.GET.get('page')
paginator = Paginator(usermessages, settings.MESSAGES_PER_PAGE)
usermessages = paginator.get_page(page)
user_admin_teams = Team.objects.filter(memberships__user=request.user, memberships__is_admin=True)
team_message_creation_form = TeamMessageCreationForm(
initial={
'start_time': str(date.today()),
'expire_time': str(date.today() + timedelta(days=settings.DEFAULT_EXPIRE_TIME)),
})
team_message_creation_form.fields['team'].queryset = user_admin_teams
return TemplateResponse(request, 'tagger_messages/overview.html', {
'mode': 'unread',
'usermessages': usermessages,
'team_message_creation_form': team_message_creation_form,
'user_has_admin_teams': user_admin_teams.exists(),
})
@login_required
def overview_all(request):
# Gets all team messages for the user, even from the past and future
usermessages_all = TeamMessage.get_messages_for_user(request.user)
usermessages = Message.in_range(usermessages_all)
page = request.GET.get('page')
paginator = Paginator(usermessages, settings.MESSAGES_PER_PAGE)
usermessages = paginator.get_page(page)
user_admin_teams = Team.objects.filter(memberships__user=request.user, memberships__is_admin=True)
team_message_creation_form = TeamMessageCreationForm(
initial={
'start_time': str(date.today()),
'expire_time': str(date.today() + timedelta(days=settings.DEFAULT_EXPIRE_TIME)),
})
team_message_creation_form.fields['team'].queryset = user_admin_teams
return TemplateResponse(request, 'tagger_messages/overview.html', {
'mode': 'all',
'usermessages': usermessages,
'team_message_creation_form': team_message_creation_form,
'user_has_admin_teams': user_admin_teams.exists(),
})
@login_required
def overview_sent_active(request):
usermessages_all = TeamMessage.get_messages_for_user(request.user).filter(creator=request.user)
usermessages = Message.in_range(usermessages_all)
page = request.GET.get('page')
paginator = Paginator(usermessages, settings.MESSAGES_PER_PAGE)
usermessages = paginator.get_page(page)
# get all teams where the user is an admin
user_admin_teams = Team.objects.filter(memberships__user=request.user, memberships__is_admin=True)
team_message_creation_form = TeamMessageCreationForm(
initial={
'start_time': str(date.today()),
'expire_time': str(date.today() + timedelta(days=settings.DEFAULT_EXPIRE_TIME)),
})
team_message_creation_form.fields['team'].queryset = user_admin_teams
return TemplateResponse(request, 'tagger_messages/overview.html', {
'mode': 'sent_active',
'usermessages': usermessages,
'team_message_creation_form': team_message_creation_form,
'user_has_admin_teams': user_admin_teams.exists(),
})
@login_required
def overview_sent_hidden(request):
usermessages_all = TeamMessage.get_messages_for_user(request.user).filter(creator=request.user)
usermessages = Message.not_in_range(usermessages_all)
page = request.GET.get('page')
paginator = Paginator(usermessages, settings.MESSAGES_PER_PAGE)
usermessages = paginator.get_page(page)
# get all teams where the user is an admin
user_admin_teams = Team.objects.filter(memberships__user=request.user, memberships__is_admin=True)
team_message_creation_form = TeamMessageCreationForm(
initial={
'start_time': str(date.today()),
'expire_time': str(date.today() + timedelta(days=settings.DEFAULT_EXPIRE_TIME)),
})
team_message_creation_form.fields['team'].queryset = user_admin_teams
return TemplateResponse(request, 'tagger_messages/overview.html', {
'mode': 'sent_hidden',
'usermessages': usermessages,
'team_message_creation_form': team_message_creation_form,
'user_has_admin_teams': user_admin_teams.exists(),
})
@login_required
def overview_global_active(request):
user_admin_teams = Team.objects.filter(memberships__user=request.user, memberships__is_admin=True).exists()
# Gets all global announcements for the user, even from the past and future
global_annoucements_all = GlobalMessage.get(request.user)
global_annoucements = Message.in_range(global_annoucements_all)
page = request.GET.get('page')
paginator = Paginator(global_annoucements, settings.MESSAGES_PER_PAGE)
global_annoucements = paginator.get_page(page)
global_message_creation_form = GlobalMessageCreationForm(
initial={
'start_time': str(date.today()),
'expire_time': str(date.today() + timedelta(days=settings.DEFAULT_EXPIRE_TIME)),
})
return TemplateResponse(request, 'tagger_messages/overview.html', {
'mode': 'global_active',
'global_annoucements': global_annoucements,
'user': request.user,
'global_message_creation_form': global_message_creation_form,
'user_has_admin_teams': user_admin_teams,
})
@login_required
def overview_global_hidden(request):
user_admin_teams = Team.objects.filter(memberships__user=request.user, memberships__is_admin=True).exists()
# Gets all global announcements for the user, even from the past and future
global_annoucements_all = GlobalMessage.get(request.user)
global_annoucements = Message.not_in_range(global_annoucements_all)
page = request.GET.get('page')
paginator = Paginator(global_annoucements, settings.MESSAGES_PER_PAGE)
global_annoucements = paginator.get_page(page)
global_message_creation_form = GlobalMessageCreationForm(
initial={
'start_time': str(date.today()),
'expire_time': str(date.today() + timedelta(days=settings.DEFAULT_EXPIRE_TIME)),
})
return TemplateResponse(request, 'tagger_messages/overview.html', {
'mode': 'global_hidden',
'global_annoucements': global_annoucements,
'user': request.user,
'global_message_creation_form': global_message_creation_form,
'user_has_admin_teams': user_admin_teams,
})
| 39.418033 | 117 | 0.743086 | 1,138 | 9,618 | 5.987698 | 0.101054 | 0.051658 | 0.061344 | 0.054006 | 0.829616 | 0.812738 | 0.797476 | 0.784561 | 0.781039 | 0.769592 | 0 | 0 | 0.157829 | 9,618 | 243 | 118 | 39.580247 | 0.841235 | 0.030776 | 0 | 0.710526 | 0 | 0 | 0.099936 | 0.035852 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063158 | false | 0 | 0.078947 | 0 | 0.215789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a1ef8be07f5769fb7f512f7868990f9c1fd50dbb | 39,138 | py | Python | apps/reports/views.py | kwarodom/bemoss_web_ui-1 | 6c65c49b8f52bc7d189c9f2391f9098ec0f2dd92 | [
"Unlicense"
] | null | null | null | apps/reports/views.py | kwarodom/bemoss_web_ui-1 | 6c65c49b8f52bc7d189c9f2391f9098ec0f2dd92 | [
"Unlicense"
] | null | null | null | apps/reports/views.py | kwarodom/bemoss_web_ui-1 | 6c65c49b8f52bc7d189c9f2391f9098ec0f2dd92 | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
'''
Copyright (c) 2016, Virginia Tech
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following
disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation are those of the authors and should not be
interpreted as representing official policies, either expressed or implied, of the FreeBSD Project.
This material was prepared as an account of work sponsored by an agency of the United States Government. Neither the
United States Government nor the United States Department of Energy, nor Virginia Tech, nor any of their employees,
nor any jurisdiction or organization that has cooperated in the development of these materials, makes any warranty,
express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness or
any information, apparatus, product, software, or process disclosed, or represents that its use would not infringe
privately owned rights.
Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or
otherwise does not necessarily constitute or imply its endorsement, recommendation, favoring by the United States
Government or any agency thereof, or Virginia Tech - Advanced Research Institute. The views and opinions of authors
expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
VIRGINIA TECH – ADVANCED RESEARCH INSTITUTE
under Contract DE-EE0006352
#__author__ = "BEMOSS Team"
#__credits__ = ""
#__version__ = "2.0"
#__maintainer__ = "BEMOSS Team"
#__email__ = "aribemoss@gmail.com"
#__website__ = "www.bemoss.org"
#__created__ = "2014-09-12 12:04:50"
#__lastUpdated__ = "2016-03-14 11:23:33"
'''
import os
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse
from apps.RTU.models import RTU
from apps.VAV.models import VAV
from apps.dashboard.models import DeviceMetadata
from apps.lighting.models import Lighting
from apps.smartplug.models import Plugload
from apps.thermostat.models import Thermostat
import settings_tornado
import sys
sys.path.insert(0, os.path.expanduser('~/workspace/bemoss_os/'))
from bemoss_lib.databases.cassandraAPI.cassandraDB import retrieve_for_export
# Modified by Xiangyu Zhang to rearrange the column sequence of the exported CSV files.
import json
import datetime
import tablib
from collections import OrderedDict
def parse_resultset(variables, data_point, result_set):
    # Map each result row onto a dict keyed by data point name
    return [{"time": lst[variables.index('time')],
             "temperature": lst[variables.index('temperature')],
             "heat_setpoint": lst[variables.index('heat_setpoint')]} for lst in result_set]
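# Example (hypothetical values), assuming rows follow the order given in `variables`:
#   parse_resultset(['time', 'temperature', 'heat_setpoint'], None,
#                   [['2016-01-01 00:00:00', 72.0, 68.0]])
#   -> [{'time': '2016-01-01 00:00:00', 'temperature': 72.0, 'heat_setpoint': 68.0}]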
def get_device_id_from_mac(mac):
device_metadata = [ob.device_control_page_info() for ob in DeviceMetadata.objects.filter(mac_address=mac)]
print device_metadata
device_id = device_metadata[0]['device_id']
return device_id
def append_data_smap(_data, data):
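    # _data rows are (timestamp, value) pairs from sMAP with millisecond epoch
    # timestamps; convert each to a local 'YYYY-MM-DD HH:MM:SS' string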
for smap_data in _data:
s = smap_data[0] / 1000.0
s = datetime.datetime.fromtimestamp(s).strftime('%Y-%m-%d %H:%M:%S')
data.append((s, smap_data[1]))
return data
def export_thermostat_time_series_data_spreadsheet(request):
if request.method == 'POST':
print 'inside export to spreadsheet for thermostat based on given from and to datetime'
_data = request.body
print _data
_data = json.loads(_data)
mac = _data['mac']
from_date = _data['from_dt']
to_date = _data['to_dt']
print from_date
device_id = get_device_id_from_mac(mac)
if not from_date and not to_date:
data_points, rs = retrieve_for_export(device_id, ['time', 'temperature',
'heat_setpoint', 'cool_setpoint'])
elif not to_date and from_date:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'temperature',
'heat_setpoint', 'cool_setpoint'], from_date)
else:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
to_date = datetime.datetime.strptime(to_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'temperature',
'heat_setpoint', 'cool_setpoint'], from_date, to_date)
_data = list()
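        # Keys are prefixed '1.', '2.', ... so that OrderedDict(sorted(...)) preserves the intended column order in the export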
for lst in rs:
single_entry = {"1.time": lst[data_points.index('time')],
"2.temperature": lst[data_points.index('temperature')],
"3.heat_setpoint": lst[data_points.index('heat_setpoint')],
"4.cool_setpoint": lst[data_points.index('cool_setpoint')]}
new_single = OrderedDict(sorted(single_entry.items(), key=lambda t:t[0]))
_data.append(new_single)
if request.is_ajax():
return HttpResponse(json.dumps(_data), mimetype='application/json')
def export_plugload_time_series_data_spreadsheet(request):
if request.method == 'POST':
print 'inside export to spreadsheet for plugload/lighting with just status based on given from and to datetime'
_data = request.body
print _data
_data = json.loads(_data)
mac = _data['mac']
from_date = _data['from_dt']
to_date = _data['to_dt']
print from_date
device_id = get_device_id_from_mac(mac)
if not from_date and not to_date:
data_points, rs = retrieve_for_export(device_id, ['time', 'status'])
elif not to_date and from_date:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'status'], from_date)
else:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
to_date = datetime.datetime.strptime(to_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'status'], from_date, to_date)
_data = list()
for lst in rs:
single_entry = {"1.time": lst[data_points.index('time')],
"2.status": lst[data_points.index('status')]}
new_single = OrderedDict(sorted(single_entry.items(), key=lambda t:t[0]))
_data.append(new_single)
if request.is_ajax():
return HttpResponse(json.dumps(_data), mimetype='application/json')
def export_lighting_time_series_data_spreadsheet(request):
if request.method == 'POST':
print 'inside export to spreadsheet for lighting with just status based on given from and to datetime'
_data = request.body
print _data
_data = json.loads(_data)
mac = _data['mac']
from_date = _data['from_dt']
to_date = _data['to_dt']
print from_date
device_id = get_device_id_from_mac(mac)
if not from_date and not to_date:
data_points, rs = retrieve_for_export(device_id, ['time', 'status', 'brightness'])
elif not to_date and from_date:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'status', 'brightness'], from_date)
else:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
to_date = datetime.datetime.strptime(to_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'status', 'brightness'], from_date, to_date)
_data = list()
for lst in rs:
single_entry = {"1.time": lst[data_points.index('time')],
"2.status": lst[data_points.index('status')],
"3.brightness": lst[data_points.index('brightness')]}
new_single = OrderedDict(sorted(single_entry.items(), key=lambda t:t[0]))
_data.append(new_single)
if request.is_ajax():
return HttpResponse(json.dumps(_data), mimetype='application/json')
def export_wattplug_time_series_data_spreadsheet(request):
if request.method == 'POST':
print 'inside export to spreadsheet for lighting with just status based on given from and to datetime'
_data = request.body
print _data
_data = json.loads(_data)
mac = _data['mac']
from_date = _data['from_dt']
to_date = _data['to_dt']
print from_date
device_id = get_device_id_from_mac(mac)
if not from_date and not to_date:
data_points, rs = retrieve_for_export(device_id, ['time', 'status', 'power'])
elif not to_date and from_date:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'status', 'power'], from_date)
else:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
to_date = datetime.datetime.strptime(to_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'status', 'power'], from_date, to_date)
_data = list()
for lst in rs:
single_entry = {"1.time": lst[data_points.index('time')],
"2.status": lst[data_points.index('status')],
"3.power": lst[data_points.index('power')]}
new_single = OrderedDict(sorted(single_entry.items(), key=lambda t:t[0]))
_data.append(new_single)
if request.is_ajax():
return HttpResponse(json.dumps(_data), mimetype='application/json')
def export_vav_time_series_data_spreadsheet(request):
if request.method == 'POST':
print 'inside export to spreadsheet for vav based on given from and to datetime'
_data = request.body
print _data
_data = json.loads(_data)
mac = _data['mac']
from_date = _data['from_dt']
to_date = _data['to_dt']
print from_date
device_id = get_device_id_from_mac(mac)
if not from_date and not to_date:
data_points, rs = retrieve_for_export(device_id, ['time', 'temperature', 'supply_temperature',
'heat_setpoint', 'cool_setpoint', 'flap_position'])
elif not to_date and from_date:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'temperature', 'supply_temperature',
'heat_setpoint', 'cool_setpoint', 'flap_position'], from_date)
else:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
to_date = datetime.datetime.strptime(to_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'temperature', 'supply_temperature',
'heat_setpoint', 'cool_setpoint', 'flap_position'], from_date, to_date)
_data = list()
for lst in rs:
single_entry = {"1.time": lst[data_points.index('time')],
"2.temperature": lst[data_points.index('temperature')],
"3.supply_temperature": lst[data_points.index('supply_temperature')],
"4.heat_setpoint": lst[data_points.index('heat_setpoint')],
"5.cool_setpoint": lst[data_points.index('cool_setpoint')],
"6.flap_position": lst[data_points.index('flap_position')]}
new_single = OrderedDict(sorted(single_entry.items(), key=lambda t:t[0]))
_data.append(new_single)
if request.is_ajax():
return HttpResponse(json.dumps(_data), mimetype='application/json')
def export_rtu_time_series_data_spreadsheet(request):
if request.method == 'POST':
print 'inside export to spreadsheet for rtu based on given from and to datetime'
_data = request.body
print _data
_data = json.loads(_data)
mac = _data['mac']
from_date = _data['from_dt']
to_date = _data['to_dt']
print from_date
device_id = get_device_id_from_mac(mac)
if not from_date and not to_date:
data_points, rs = retrieve_for_export(device_id, ['time', 'outside_temperature', 'supply_temperature',
'return_temperature', 'heating', 'heat_setpoint',
'cool_setpoint', 'outside_damper_position',
'bypass_damper_position'])
elif not to_date and from_date:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'outside_temperature', 'supply_temperature',
'return_temperature', 'heating', 'heat_setpoint',
'cool_setpoint', 'outside_damper_position',
'bypass_damper_position'], from_date)
else:
from_date = datetime.datetime.strptime(from_date, '%Y/%m/%d %H:%M')
to_date = datetime.datetime.strptime(to_date, '%Y/%m/%d %H:%M')
data_points, rs = retrieve_for_export(device_id, ['time', 'outside_temperature', 'supply_temperature',
'return_temperature', 'heating', 'heat_setpoint',
'cool_setpoint', 'outside_damper_position',
'bypass_damper_position'], from_date, to_date)
_data = list()
for lst in rs:
single_entry = {"1.time": lst[data_points.index('time')],
"2.outside_temperature": lst[data_points.index('outside_temperature')],
"3.supply_temperature": lst[data_points.index('supply_temperature')],
"4.return_temperature": lst[data_points.index('return_temperature')],
"5.heating": lst[data_points.index('heating')],
"6.heat_setpoint": lst[data_points.index('heat_setpoint')],
"7.cool_setpoint": lst[data_points.index('cool_setpoint')],
"8.outside_damper_position": lst[data_points.index('outside_damper_position')],
"9.bypass_damper_position": lst[data_points.index('bypass_damper_position')]}
new_single = OrderedDict(sorted(single_entry.items(), key=lambda t:t[0]))
_data.append(new_single)
if request.is_ajax():
return HttpResponse(json.dumps(_data), mimetype='application/json')
@login_required(login_url='/login/')
def export_thermostat_to_spreadsheet(request):
_data_th = [ob.device_status() for ob in Thermostat.objects.filter(thermostat_id__approval_status='APR')]
_data_vav = [ob.device_status() for ob in VAV.objects.filter(vav_id__approval_status='APR')]
_data_rtu = [ob.device_status() for ob in RTU.objects.filter(rtu_id__approval_status='APR')]
response = get_data([_data_th, _data_vav, _data_rtu], "thermostat")
return response
@login_required(login_url='/login/')
def export_lighting_to_spreadsheet(request):
_data = [ob.device_status() for ob in Lighting.objects.filter(lighting_id__approval_status='APR')]
response = get_data([_data], "lighting")
return response
@login_required(login_url='/login/')
def export_plugload_to_spreadsheet(request):
_data = [ob.device_status() for ob in Plugload.objects.filter(plugload_id__approval_status='APR')]
response = get_data([_data], "plugload")
return response
def get_data(__data, device_type):
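    # Flatten the given lists of device-status dicts into one tablib sheet and return it as an .xls attachment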
headers = ('Device Nickname', 'Zone', 'Device Model', 'Device Added On', 'Network Status', 'Last Scanned Time',
'Last Offline Time')
data = []
data = tablib.Dataset(*data, headers=headers, title=device_type)
for _data in __data:
for device in _data:
data.append((device['nickname'], device['zone_nickname'], device['device_model'],
str(device['date_added']),
device['network_status'],
str(device['last_scanned']),
str(device['last_offline'])))
response = HttpResponse(data.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=bemoss_" + device_type + ".xls"
return response
@login_required(login_url='/login/')
def export_all_device_information(request):
_data_th = [ob.device_status() for ob in Thermostat.objects.filter(thermostat_id__approval_status='APR')]
_data_vav = [ob.device_status() for ob in VAV.objects.filter(vav_id__approval_status='APR')]
_data_rtu = [ob.device_status() for ob in RTU.objects.filter(rtu_id__approval_status='APR')]
_data_hvac = data_this([_data_th, _data_vav, _data_rtu], "Thermostats")
_data_lt = [ob.device_status() for ob in Lighting.objects.filter(lighting_id__approval_status='APR')]
_data_lt = data_this([_data_lt], "Lighting Loads")
_data_pl = [ob.device_status() for ob in Plugload.objects.filter(plugload_id__approval_status='APR')]
_data_pl = data_this([_data_pl], "Plugloads")
devices = tablib.Databook((_data_hvac, _data_lt, _data_pl))
with open('bemoss_devices.xls', 'wb') as f:
f.write(devices.xls)
response = HttpResponse(devices.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=bemoss_devices.xls"
return response
def data_this(__data, sheetname):
headers = ('Device Nickname', 'Zone', 'Device Model', 'Device Added On', 'Network Status', 'Last Scanned Time',
'Last Offline Time')
data = []
data = tablib.Dataset(*data, headers=headers, title=sheetname)
for _data in __data:
for device in _data:
data.append((device['nickname'], device['zone_nickname'], device['device_model'],
str(device['date_added']),
device['network_status'],
str(device['last_scanned']),
str(device['last_offline'])))
return data
@login_required(login_url='/login/')
def export_schedule_thermostats_holiday(request, mac):
mac = mac.encode('ascii', 'ignore')
device = DeviceMetadata.objects.get(mac_address=mac)
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/thermostat/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['thermostat']:
print 'device id present'
_data = _json_data['thermostat'][device.device_id]['schedulers']['holiday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Heat Setpoint (F)', 'Cool Setpoint (F)')
data = []
data = tablib.Dataset(*data, headers=headers, title='Holiday')
for record in _data:
            rec_time = '%02d:%02d' % (int(record['at']) / 60, int(record['at']) % 60)  # 'at' is minutes past midnight; render as zero-padded HH:MM
data.append((record['nickname'], rec_time, record['heat_setpoint'], record['cool_setpoint']))
response = HttpResponse(data.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" + device.device_model + "_holiday_sch.xls"
return response
@login_required(login_url='/login/')
def export_schedule_thermostats_daily(request, mac):
mac = mac.encode('ascii', 'ignore')
device = DeviceMetadata.objects.get(mac_address=mac)
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/thermostat/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['thermostat']:
print 'device id present'
_data = _json_data['thermostat'][device.device_id]['schedulers']['everyday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Heat Setpoint (F)', 'Cool Setpoint (F)')
_data_mon = _data_tue = _data_wed = _data_thu = _data_fri = _data_sat = _data_sun = []
for day in _data:
data = []
data = tablib.Dataset(*data, headers=headers, title=day)
day_data = _data[day]
for record in day_data:
                rec_time = '%02d:%02d' % (int(record['at']) / 60, int(record['at']) % 60)
data.append((record['nickname'], rec_time, record['heat_setpoint'], record['cool_setpoint']))
if day == 'monday':
_data_mon = data
elif day == 'tuesday':
_data_tue = data
elif day == 'wednesday':
_data_wed = data
elif day == 'thursday':
_data_thu = data
elif day == 'friday':
_data_fri = data
elif day == 'saturday':
_data_sat = data
elif day == 'sunday':
_data_sun = data
schedule = tablib.Databook((_data_mon, _data_tue, _data_wed, _data_thu, _data_fri, _data_sat, _data_sun))
with open(device.device_model + "_daily_sch.xls", 'wb') as f:
f.write(schedule.xls)
response = HttpResponse(schedule.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" +device.device_model + "_daily_sch.xls"
return response
@login_required(login_url='/login/')
def export_schedule_lighting_daily(request, mac):
mac = mac.encode('ascii', 'ignore')
device = DeviceMetadata.objects.get(mac_address=mac)
if device.device_model_id.device_model_id == '2WL':
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/lighting/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['lighting']:
print 'device id present'
_data = _json_data['lighting'][device.device_id]['schedulers']['everyday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Status')
_data_mon = _data_tue = _data_wed = _data_thu = _data_fri = _data_sat = _data_sun = []
for day in _data:
data = []
data = tablib.Dataset(*data, headers=headers, title=day)
day_data = _data[day]
for record in day_data:
                    rec_time = '%02d:%02d' % (int(record['at']) / 60, int(record['at']) % 60)
data.append((record['nickname'], rec_time, record['status']))
if day == 'monday':
_data_mon = data
elif day == 'tuesday':
_data_tue = data
elif day == 'wednesday':
_data_wed = data
elif day == 'thursday':
_data_thu = data
elif day == 'friday':
_data_fri = data
elif day == 'saturday':
_data_sat = data
elif day == 'sunday':
_data_sun = data
schedule = tablib.Databook((_data_mon, _data_tue, _data_wed, _data_thu, _data_fri, _data_sat, _data_sun))
with open(device.device_model + "_daily_sch.xls", 'wb') as f:
f.write(schedule.xls)
response = HttpResponse(schedule.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" +device.device_model + "_daily_sch.xls"
return response
elif device.device_model_id.device_model_id == '2DB' or \
device.device_model_id.device_model_id == '2SDB' or \
device.device_model_id.device_model_id == '2WSL':
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/lighting/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['lighting']:
print 'device id present'
_data = _json_data['lighting'][device.device_id]['schedulers']['everyday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Status (ON/OFF)', 'Brightness (%)')
_data_mon = _data_tue = _data_wed = _data_thu = _data_fri = _data_sat = _data_sun = []
for day in _data:
data = []
data = tablib.Dataset(*data, headers=headers, title=day)
day_data = _data[day]
for record in day_data:
                    rec_time = '%02d:%02d' % (int(record['at']) / 60, int(record['at']) % 60)
data.append((record['nickname'], rec_time, record['status'], record['brightness']))
if day == 'monday':
_data_mon = data
elif day == 'tuesday':
_data_tue = data
elif day == 'wednesday':
_data_wed = data
elif day == 'thursday':
_data_thu = data
elif day == 'friday':
_data_fri = data
elif day == 'saturday':
_data_sat = data
elif day == 'sunday':
_data_sun = data
schedule = tablib.Databook((_data_mon, _data_tue, _data_wed, _data_thu, _data_fri, _data_sat, _data_sun))
with open(device.device_model + "_daily_sch.xls", 'wb') as f:
f.write(schedule.xls)
response = HttpResponse(schedule.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" +device.device_model + "_daily_sch.xls"
return response
elif device.device_model_id.device_model_id == '2HUE':
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/lighting/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['lighting']:
print 'device id present'
_data = _json_data['lighting'][device.device_id]['schedulers']['everyday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Status (ON/OFF)', 'Brightness (%)', 'Color')
_data_mon = _data_tue = _data_wed = _data_thu = _data_fri = _data_sat = _data_sun = []
for day in _data:
data = []
data = tablib.Dataset(*data, headers=headers, title=day)
day_data = _data[day]
for record in day_data:
                    rec_time = '%02d:%02d' % (int(record['at']) / 60, int(record['at']) % 60)
data.append((record['nickname'], rec_time, record['status'], record['brightness'], record['color']))
if day == 'monday':
_data_mon = data
elif day == 'tuesday':
_data_tue = data
elif day == 'wednesday':
_data_wed = data
elif day == 'thursday':
_data_thu = data
elif day == 'friday':
_data_fri = data
elif day == 'saturday':
_data_sat = data
elif day == 'sunday':
_data_sun = data
schedule = tablib.Databook((_data_mon, _data_tue, _data_wed, _data_thu, _data_fri, _data_sat, _data_sun))
with open(device.device_model + "_daily_sch.xls", 'wb') as f:
f.write(schedule.xls)
response = HttpResponse(schedule.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" + device.device_model + "_daily_sch.xls"
return response
@login_required(login_url='/login/')
def export_schedule_lighting_holiday(request, mac):
mac = mac.encode('ascii', 'ignore')
device = DeviceMetadata.objects.get(mac_address=mac)
if device.device_model_id.device_model_id == '2WL':
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/lighting/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['lighting']:
print 'device id present'
_data = _json_data['lighting'][device.device_id]['schedulers']['holiday']['holiday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Status')
data = tablib.Dataset(headers=headers, title='Holiday')
for record in _data:
rec_time = '%d:%02d' % (int(record['at']) // 60, int(record['at']) % 60)
data.append((record['nickname'], rec_time, record['status']))
response = HttpResponse(data.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" + device.device_model + "_holiday_sch.xls"
return response
elif device.device_model_id.device_model_id in ('2DB', '2SDB', '2WSL'):
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/lighting/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['lighting']:
print 'device id present'
_data = _json_data['lighting'][device.device_id]['schedulers']['holiday']['holiday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Status (ON/OFF)', 'Brightness (%)')
data = tablib.Dataset(headers=headers, title='Holiday')
for record in _data:
rec_time = '%d:%02d' % (int(record['at']) // 60, int(record['at']) % 60)
data.append((record['nickname'], rec_time, record['status'], record['brightness']))
response = HttpResponse(data.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" + device.device_model + "_holiday_sch.xls"
return response
elif device.device_model_id.device_model_id == '2HUE':
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/lighting/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['lighting']:
print 'device id present'
_data = _json_data['lighting'][device.device_id]['schedulers']['holiday']['holiday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Status (ON/OFF)', 'Brightness (%)', 'Color')
data = tablib.Dataset(headers=headers, title='Holiday')
for record in _data:
rec_time = '%d:%02d' % (int(record['at']) // 60, int(record['at']) % 60)
data.append((record['nickname'], rec_time, record['status'], record['brightness'], record['color']))
response = HttpResponse(data.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" + device.device_model + "_holiday_sch.xls"
return response
@login_required(login_url='/login/')
def export_schedule_plugload_daily(request, mac):
mac = mac.encode('ascii', 'ignore')
device = DeviceMetadata.objects.get(mac_address=mac)
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/plugload/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['plugload']:
print 'device id present'
_data = _json_data['plugload'][device.device_id]['schedulers']['everyday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Status')
_data_mon = _data_tue = _data_wed = _data_thu = _data_fri = _data_sat = _data_sun = []
for day in _data:
data = tablib.Dataset(headers=headers, title=day)
day_data = _data[day]
for record in day_data:
rec_time = '%d:%02d' % (int(record['at']) // 60, int(record['at']) % 60)
data.append((record['nickname'], rec_time, record['status']))
if day == 'monday':
_data_mon = data
elif day == 'tuesday':
_data_tue = data
elif day == 'wednesday':
_data_wed = data
elif day == 'thursday':
_data_thu = data
elif day == 'friday':
_data_fri = data
elif day == 'saturday':
_data_sat = data
elif day == 'sunday':
_data_sun = data
schedule = tablib.Databook((_data_mon, _data_tue, _data_wed, _data_thu, _data_fri, _data_sat, _data_sun))
with open(device.device_model + "_daily_sch.xls", 'wb') as f:
f.write(schedule.xls)
response = HttpResponse(schedule.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" + device.device_model + "_daily_sch.xls"
return response
@login_required(login_url='/login/')
def export_schedule_plugload_holiday(request, mac):
mac = mac.encode('ascii', 'ignore')
device = DeviceMetadata.objects.get(mac_address=mac)
_file_name = os.path.join(settings_tornado.PROJECT_DIR, 'resources/scheduler_data/plugload/' + device.device_id
+ '_schedule.json')
if os.path.isfile(_file_name):
json_file = open(_file_name, 'r+')
_json_data = json.load(json_file)
if device.device_id in _json_data['plugload']:
print 'device id present'
_data = _json_data['plugload'][device.device_id]['schedulers']['holiday']['holiday']
_data = json.dumps(_data)
_data = json.loads(_data, object_hook=_decode_dict)
json_file.close()
headers = ('Period Name', 'From', 'Status')
data = tablib.Dataset(headers=headers, title='Holiday')
for record in _data:
rec_time = '%d:%02d' % (int(record['at']) // 60, int(record['at']) % 60)
data.append((record['nickname'], rec_time, record['status']))
response = HttpResponse(data.xls, content_type='application/vnd.ms-excel;charset=utf-8')
response['Content-Disposition'] = "attachment; filename=" + device.device_model + "_holiday_sch.xls"
return response
def _decode_list(data):
rv = []
for item in data:
if isinstance(item, unicode):
item = item.encode('utf-8')
elif isinstance(item, list):
item = _decode_list(item)
elif isinstance(item, dict):
item = _decode_dict(item)
rv.append(item)
return rv
def _decode_dict(data):
rv = {}
for key, value in data.iteritems():
if isinstance(key, unicode):
key = key.encode('utf-8')
if isinstance(value, unicode):
value = value.encode('utf-8')
elif isinstance(value, list):
value = _decode_list(value)
elif isinstance(value, dict):
value = _decode_dict(value)
rv[key] = value
return rv
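# Illustrative sketch (not part of the original file): _decode_list/_decode_dict
# exist only because this module targets Python 2, where json.load yields
# unicode strings. On Python 3 the object_hook could be dropped entirely, and
# the minute arithmetic repeated in every export view could live in one helper:
def _format_minutes(total_minutes):
    # 470 -> '7:50'; zero-pads minutes so 545 renders as '9:05', not '9:5'
    total_minutes = int(total_minutes)
    return '%d:%02d' % (total_minutes // 60, total_minutes % 60)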
| 47.497573 | 175 | 0.614799 | 4,742 | 39,138 | 4.782581 | 0.094475 | 0.026103 | 0.018519 | 0.02143 | 0.821465 | 0.810882 | 0.80493 | 0.803386 | 0.785969 | 0.778782 | 0 | 0.005255 | 0.265803 | 39,138 | 823 | 176 | 47.555286 | 0.783957 | 0.002172 | 0 | 0.802774 | 0 | 0 | 0.177777 | 0.030399 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.006163 | 0.024653 | null | null | 0.044684 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b8110148a22e47235a9ccdb774afb234389f9aec | 12,068 | py | Python | zerver/webhooks/gogs/tests.py | kinster007/clone | 21cc608b5c960e035ef6b3387efeefbb65cb638b | [
"Apache-2.0"
] | null | null | null | zerver/webhooks/gogs/tests.py | kinster007/clone | 21cc608b5c960e035ef6b3387efeefbb65cb638b | [
"Apache-2.0"
] | null | null | null | zerver/webhooks/gogs/tests.py | kinster007/clone | 21cc608b5c960e035ef6b3387efeefbb65cb638b | [
"Apache-2.0"
] | null | null | null | from unittest.mock import MagicMock, patch
from zerver.lib.test_classes import WebhookTestCase
from zerver.lib.webhooks.git import COMMITS_LIMIT
class GogsHookTests(WebhookTestCase):
STREAM_NAME = 'commits'
URL_TEMPLATE = "/api/v1/external/gogs?&api_key={api_key}&stream={stream}"
FIXTURE_DIR_NAME = 'gogs'
def test_push(self) -> None:
expected_topic = "try-git / master"
expected_message = """john [pushed](http://localhost:3000/john/try-git/compare/479e6b772b7fba19412457483f50b201286d0103...d8fce16c72a2ff56a5afc8a08645a6ce45491794) 1 commit to branch master. Commits by John (1).
* Webhook Test ([d8fce16](http://localhost:3000/john/try-git/commit/d8fce16c72a2ff56a5afc8a08645a6ce45491794))"""
self.send_and_test_stream_message('push', expected_topic, expected_message)
def test_push_multiple_committers(self) -> None:
commit_info = '* Webhook Test ([d8fce16](http://localhost:3000/john/try-git/commit/d8fce16c72a2ff56a5afc8a08645a6ce45491794))\n'
expected_topic = "try-git / master"
expected_message = """john [pushed](http://localhost:3000/john/try-git/compare/479e6b772b7fba19412457483f50b201286d0103...d8fce16c72a2ff56a5afc8a08645a6ce45491794) 2 commits to branch master. Commits by Benjamin (1) and John (1).\n\n{}* Webhook Test ([d8fce16](http://localhost:3000/john/try-git/commit/d8fce16c72a2ff56a5afc8a08645a6ce45491794))""".format(commit_info)
self.send_and_test_stream_message('push__commits_multiple_committers', expected_topic, expected_message)
def test_push_multiple_committers_filtered_by_branches(self) -> None:
self.url = self.build_webhook_url(branches='master,development')
commit_info = '* Webhook Test ([d8fce16](http://localhost:3000/john/try-git/commit/d8fce16c72a2ff56a5afc8a08645a6ce45491794))\n'
expected_topic = "try-git / master"
expected_message = """john [pushed](http://localhost:3000/john/try-git/compare/479e6b772b7fba19412457483f50b201286d0103...d8fce16c72a2ff56a5afc8a08645a6ce45491794) 2 commits to branch master. Commits by Benjamin (1) and John (1).\n\n{}* Webhook Test ([d8fce16](http://localhost:3000/john/try-git/commit/d8fce16c72a2ff56a5afc8a08645a6ce45491794))""".format(commit_info)
self.send_and_test_stream_message('push__commits_multiple_committers', expected_topic, expected_message)
def test_push_filtered_by_branches(self) -> None:
self.url = self.build_webhook_url(branches='master,development')
expected_topic = "try-git / master"
expected_message = """john [pushed](http://localhost:3000/john/try-git/compare/479e6b772b7fba19412457483f50b201286d0103...d8fce16c72a2ff56a5afc8a08645a6ce45491794) 1 commit to branch master. Commits by John (1).
* Webhook Test ([d8fce16](http://localhost:3000/john/try-git/commit/d8fce16c72a2ff56a5afc8a08645a6ce45491794))"""
self.send_and_test_stream_message('push', expected_topic, expected_message)
def test_push_commits_more_than_limits(self) -> None:
expected_topic = "try-git / master"
commits_info = "* Webhook Test ([d8fce16](http://localhost:3000/john/try-git/commit/d8fce16c72a2ff56a5afc8a08645a6ce45491794))\n"
expected_message = "john [pushed](http://localhost:3000/john/try-git/compare/479e6b772b7fba19412457483f50b201286d0103...d8fce16c72a2ff56a5afc8a08645a6ce45491794) 30 commits to branch master. Commits by John (30).\n\n{}[and {} more commit(s)]".format(
commits_info * COMMITS_LIMIT,
30 - COMMITS_LIMIT
)
self.send_and_test_stream_message('push__commits_more_than_limits', expected_topic, expected_message)
def test_push_commits_more_than_limits_filtered_by_branches(self) -> None:
self.url = self.build_webhook_url(branches='master,development')
expected_topic = "try-git / master"
commits_info = "* Webhook Test ([d8fce16](http://localhost:3000/john/try-git/commit/d8fce16c72a2ff56a5afc8a08645a6ce45491794))\n"
expected_message = "john [pushed](http://localhost:3000/john/try-git/compare/479e6b772b7fba19412457483f50b201286d0103...d8fce16c72a2ff56a5afc8a08645a6ce45491794) 30 commits to branch master. Commits by John (30).\n\n{}[and {} more commit(s)]".format(
commits_info * COMMITS_LIMIT,
30 - COMMITS_LIMIT
)
self.send_and_test_stream_message('push__commits_more_than_limits', expected_topic, expected_message)
def test_new_branch(self) -> None:
expected_topic = "try-git / my_feature"
expected_message = "john created [my_feature](http://localhost:3000/john/try-git/src/my_feature) branch."
self.send_and_test_stream_message('create__branch', expected_topic, expected_message)
def test_pull_request_opened(self) -> None:
expected_topic = "try-git / PR #1 Title Text for Pull Request"
expected_message = """john opened [PR #1](http://localhost:3000/john/try-git/pulls/1) from `feature` to `master`."""
self.send_and_test_stream_message('pull_request__opened', expected_topic, expected_message)
def test_pull_request_opened_with_custom_topic_in_url(self) -> None:
self.url = self.build_webhook_url(topic='notifications')
expected_topic = "notifications"
expected_message = """john opened [PR #1 Title Text for Pull Request](http://localhost:3000/john/try-git/pulls/1) from `feature` to `master`."""
self.send_and_test_stream_message('pull_request__opened', expected_topic, expected_message)
def test_pull_request_closed(self) -> None:
expected_topic = "try-git / PR #1 Title Text for Pull Request"
expected_message = """john closed [PR #1](http://localhost:3000/john/try-git/pulls/1) from `feature` to `master`."""
self.send_and_test_stream_message('pull_request__closed', expected_topic, expected_message)
def test_pull_request_merged(self) -> None:
expected_topic = "try-git / PR #2 Title Text for Pull Request"
expected_message = """john merged [PR #2](http://localhost:3000/john/try-git/pulls/2) from `feature` to `master`."""
self.send_and_test_stream_message('pull_request__merged', expected_topic, expected_message)
def test_pull_request_reopened(self) -> None:
expected_topic = "test / PR #1349 reopened"
expected_message = """kostekIV reopened [PR #2](https://try.gogs.io/kostekIV/test/pulls/2) from `c` to `master`."""
self.send_and_test_stream_message('pull_request__reopened', expected_topic, expected_message)
def test_pull_request_edited(self) -> None:
expected_topic = "test / PR #1349 Test"
expected_message = """kostekIV edited [PR #2](https://try.gogs.io/kostekIV/test/pulls/2) from `c` to `master`."""
self.send_and_test_stream_message('pull_request__edited', expected_topic, expected_message)
def test_pull_request_assigned(self) -> None:
expected_topic = "test / PR #1349 Test"
expected_message = """kostekIV assigned [PR #2](https://try.gogs.io/kostekIV/test/pulls/2) from `c` to `master`."""
self.send_and_test_stream_message('pull_request__assigned', expected_topic, expected_message)
def test_pull_request_synchronized(self) -> None:
expected_topic = "test / PR #1349 Test"
expected_message = """kostekIV synchronized [PR #2](https://try.gogs.io/kostekIV/test/pulls/2) from `c` to `master`."""
self.send_and_test_stream_message('pull_request__synchronized', expected_topic, expected_message)
def test_issues_opened(self) -> None:
expected_topic = "test / Issue #3 New test issue"
expected_message = """kostekIV opened [Issue #3](https://try.gogs.io/kostekIV/test/issues/3):\n\n~~~ quote\nTest\n~~~"""
self.send_and_test_stream_message('issues__opened', expected_topic, expected_message)
def test_issues_reopened(self) -> None:
expected_topic = "test / Issue #3 New test issue"
expected_message = """kostekIV reopened [Issue #3](https://try.gogs.io/kostekIV/test/issues/3):\n\n~~~ quote\nTest\n~~~"""
self.send_and_test_stream_message('issues__reopened', expected_topic, expected_message)
def test_issues_edited(self) -> None:
expected_topic = "test / Issue #3 New test issue"
expected_message = """kostekIV edited [Issue #3](https://try.gogs.io/kostekIV/test/issues/3):\n\n~~~ quote\nTest edit\n~~~"""
self.send_and_test_stream_message('issues__edited', expected_topic, expected_message)
def test_issues_assignee(self) -> None:
expected_topic = "test / Issue #3 New test issue"
expected_message = """kostekIV assigned [Issue #3](https://try.gogs.io/kostekIV/test/issues/3) (assigned to kostekIV):\n\n~~~ quote\nTest\n~~~"""
self.send_and_test_stream_message('issues__assigned', expected_topic, expected_message)
def test_issues_closed(self) -> None:
expected_topic = "test / Issue #3 New test issue"
expected_message = """kostekIV closed [Issue #3](https://try.gogs.io/kostekIV/test/issues/3):\n\n~~~ quote\nClosed #3\n~~~"""
self.send_and_test_stream_message('issues__closed', expected_topic, expected_message)
def test_issue_comment_new(self) -> None:
expected_topic = "test / Issue #3 New test issue"
expected_message = """kostekIV [commented](https://try.gogs.io/kostekIV/test/issues/3#issuecomment-3635) on [Issue #3](https://try.gogs.io/kostekIV/test/issues/3):\n\n~~~ quote\nTest comment\n~~~"""
self.send_and_test_stream_message('issue_comment__new', expected_topic, expected_message)
def test_issue_comment_edited(self) -> None:
expected_topic = "test / Issue #3 New test issue"
expected_message = """kostekIV edited a [comment](https://try.gogs.io/kostekIV/test/issues/3#issuecomment-3634) on [Issue #3](https://try.gogs.io/kostekIV/test/issues/3):\n\n~~~ quote\nedit comment\n~~~"""
self.send_and_test_stream_message('issue_comment__edited', expected_topic, expected_message)
def test_release_published(self) -> None:
expected_topic = "zulip_test / v1.4 Title"
expected_message = """cestrell published release [Title](https://try.gogs.io/cestrell/zulip_test) for tag v1.4."""
self.send_and_test_stream_message('release__published', expected_topic, expected_message)
@patch('zerver.webhooks.gogs.view.check_send_webhook_message')
def test_push_filtered_by_branches_ignore(self, check_send_webhook_message_mock: MagicMock) -> None:
self.url = self.build_webhook_url(branches='changes,development')
payload = self.get_body('push')
result = self.client_post(self.url, payload, HTTP_X_GOGS_EVENT='push',
content_type="application/json")
self.assertFalse(check_send_webhook_message_mock.called)
self.assert_json_success(result)
@patch('zerver.webhooks.gogs.view.check_send_webhook_message')
def test_push_commits_more_than_limits_filtered_by_branches_ignore(
self, check_send_webhook_message_mock: MagicMock) -> None:
self.url = self.build_webhook_url(branches='changes,development')
payload = self.get_body('push__commits_more_than_limits')
result = self.client_post(self.url, payload, HTTP_X_GOGS_EVENT='push',
content_type="application/json")
self.assertFalse(check_send_webhook_message_mock.called)
self.assert_json_success(result)
@patch('zerver.webhooks.gogs.view.check_send_webhook_message')
def test_push_multiple_committers_filtered_by_branches_ignore(
self, check_send_webhook_message_mock: MagicMock) -> None:
self.url = self.build_webhook_url(branches='changes,development')
payload = self.get_body('push__commits_multiple_committers')
result = self.client_post(self.url, payload, HTTP_X_GOGS_EVENT='push',
content_type="application/json")
self.assertFalse(check_send_webhook_message_mock.called)
self.assert_json_success(result)
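# Illustrative sketch (not part of the original suite): the three *_ignore
# tests above repeat one patch/post/assert shape; a shared helper like this
# assumed method could express it once.
from unittest.mock import patch

def _check_push_ignored(self, fixture, branches):
    # `self` is expected to be a GogsHookTests instance (an assumption here)
    with patch('zerver.webhooks.gogs.view.check_send_webhook_message') as m:
        self.url = self.build_webhook_url(branches=branches)
        payload = self.get_body(fixture)
        result = self.client_post(self.url, payload, HTTP_X_GOGS_EVENT='push',
                                  content_type='application/json')
        self.assertFalse(m.called)
        self.assert_json_success(result)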
| 69.757225 | 376 | 0.727793 | 1,535 | 12,068 | 5.429967 | 0.091857 | 0.071746 | 0.041992 | 0.041392 | 0.89958 | 0.895741 | 0.876185 | 0.823155 | 0.766767 | 0.742531 | 0 | 0.069847 | 0.150563 | 12,068 | 172 | 377 | 70.162791 | 0.743245 | 0 | 0 | 0.460993 | 0 | 0.205674 | 0.449785 | 0.040769 | 0 | 0 | 0 | 0 | 0.042553 | 1 | 0.184397 | false | 0 | 0.021277 | 0 | 0.234043 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
62abd24927893903e110f673e46e9c3818d58558 | 2,415 | py | Python | tests/cli/provider_plugins/ahv/test_ahv_create_spec.py | opywan/calm-dsl | 1d89436d039a39265a0ae806022be5b52e757ac0 | [
"Apache-2.0"
] | null | null | null | tests/cli/provider_plugins/ahv/test_ahv_create_spec.py | opywan/calm-dsl | 1d89436d039a39265a0ae806022be5b52e757ac0 | [
"Apache-2.0"
] | 20 | 2020-06-30T01:00:36.000Z | 2021-03-23T01:03:39.000Z | tests/cli/provider_plugins/ahv/test_ahv_create_spec.py | LevyForchh/calm-dsl | ff6e021628c0ef8c04aaa5e37c80fe1fbff729e6 | [
"Apache-2.0"
] | 1 | 2020-04-07T12:21:13.000Z | 2020-04-07T12:21:13.000Z | import pytest
from .. import plugin_test
@pytest.mark.slow
@pytest.mark.presetup_required
@plugin_test("AHV_VM")
class TestAHVSpec:
def test_normal_spec(self):
"""
Category: Yes
Multiple Categories of same Family Check: No
Disk Images: Yes
DISK: 1
CD-ROM: 1
Virtual Disks: No
Network Adapters: No
Customization Script: No
"""
pass
def test_vm_spec_dup_category(self):
"""
Category: Yes
Multiple Categories of same Family Check: Yes
(For every family, you can use single category)
Disk Images: Yes
DISK: 0
CD-ROM: 1
Virtual Disks: No
Network Adapters: No
Customization Script: No
"""
pass
def test_vm_spec_having_virtual_disks(self):
"""
Category: Yes
Multiple Categories of same Family Check: No
Disk Images: Yes
DISK: 0
CD-ROM: 1
Virtual Disks: Yes
DISK: 1
CD-ROM: 1
Network Adapters: No
Customization Script: No
"""
pass
def test_vm_spec_with_nic(self):
"""
Category: Yes
Multiple Categories of same Family Check: No
Disk Images: Yes
DISK: 0
CD-ROM: 1
Virtual Disks: No
Network Adapters: Yes
Customization Script: No
"""
pass
def test_vm_spec_with_cloud_init_gc(self):
"""
Category: Yes
Multiple Categories of same Family Check: No
Disk Images: Yes
DISK: 0
CD-ROM: 1
Virtual Disks: No
Network Adapters: No
Customization Script: Yes
Customization Type = Cloud_Init
"""
pass
def test_vm_spec_with_sys_prep_gc(self, os_type="Windows"):
"""
Category: Yes
Multiple Categories of same Family Check: No
Disk Images: Yes
DISK: 0
CD-ROM: 1
Virtual Disks: No
Network Adapters: No
Customization Script: Yes
Customization Type = sysprep
"""
pass
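# Illustrative guess (not the real calm-dsl implementation): a decorator like
# plugin_test("AHV_VM") can be as small as tagging the class so shared fixtures
# know which provider's spec files to load; the attribute name is an assumption.
def plugin_test(provider_type):
    def decorator(cls):
        cls.PROVIDER_TYPE = provider_type  # assumed attribute name
        return cls
    return decorator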
| 26.538462 | 63 | 0.492754 | 249 | 2,415 | 4.646586 | 0.232932 | 0.042351 | 0.036301 | 0.150389 | 0.782195 | 0.782195 | 0.75108 | 0.75108 | 0.75108 | 0.668107 | 0 | 0.010574 | 0.45176 | 2,415 | 90 | 64 | 26.833333 | 0.863293 | 0.474948 | 0 | 0.333333 | 0 | 0 | 0.02403 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.111111 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
62e69093d40e91231b27a73c59a99e16368b1476 | 2,491 | py | Python | chapter_11/gd_2d.py | rkneusel9/MathForDeepLearning | 8db1a85ce3cef4b48aab01ebe156e3fab2dfa271 | [
"MIT"
] | 23 | 2021-10-12T19:53:35.000Z | 2022-03-29T12:41:23.000Z | chapter_11/gd_2d.py | mohit-n-rajput/MathForDeepLearning | 8db1a85ce3cef4b48aab01ebe156e3fab2dfa271 | [
"MIT"
] | null | null | null | chapter_11/gd_2d.py | mohit-n-rajput/MathForDeepLearning | 8db1a85ce3cef4b48aab01ebe156e3fab2dfa271 | [
"MIT"
] | 7 | 2021-06-16T17:21:41.000Z | 2022-03-16T09:22:50.000Z | #
# file: gd_2d.py
#
# 2D example of gradient descent
#
# RTK, 14-Feb-2021
# Last update: 14-Feb-2021
#
################################################################
import numpy as np
import matplotlib.pylab as plt
# Function and partial derivatives
def f(x,y):
return 6*x**2 + 9*y**2 - 12*x - 14*y + 3
def dx(x):
return 12*x - 12
def dy(y):
return 18*y - 14
# Gradient descent steps
N = 100
x,y = np.meshgrid(np.linspace(-1,3,N), np.linspace(-1,3,N))
z = f(x,y)
plt.contourf(x,y,z,10, cmap="Greys")
plt.contour(x,y,z,10, colors='k', linewidths=1)
plt.plot([0,0],[-1,3],color='k',linewidth=1)
plt.plot([-1,3],[0,0],color='k',linewidth=1)
plt.plot(1,0.7777778,color='k',marker='+')
x = xold = -0.5
y = yold = 2.9
for i in range(12):
plt.plot([xold,x],[yold,y], marker='o', linestyle='dotted', color='k')
xold = x
yold = y
x = x - 0.02 * dx(x)
y = y - 0.02 * dy(y)
x = xold = 1.5
y = yold = -0.8
for i in range(12):
plt.plot([xold,x],[yold,y], marker='s', linestyle='dotted', color='k')
xold = x
yold = y
x = x - 0.02 * dx(x)
y = y - 0.02 * dy(y)
x = xold = 2.7
y = yold = 2.3
for i in range(12):
plt.plot([xold,x],[yold,y], marker='<', linestyle='dotted', color='k')
xold = x
yold = y
x = x - 0.02 * dx(x)
y = y - 0.02 * dy(y)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.tight_layout(pad=0, w_pad=0, h_pad=0)
plt.savefig("gd_2d_steps.png", dpi=300)
plt.show()
plt.close()
# New function and partial derivatives
def f(x,y):
return 6*x**2 + 40*y**2 - 12*x - 30*y + 3
def dx(x):
return 12*x - 12
def dy(y):
return 80*y - 30
# Large stepsize
N = 100
x,y = np.meshgrid(np.linspace(-1,3,N), np.linspace(-1,3,N))
z = f(x,y)
plt.contourf(x,y,z,10, cmap="Greys")
plt.contour(x,y,z,10, colors='k', linewidths=1)
plt.plot([0,0],[-1,3],color='k',linewidth=1)
plt.plot([-1,3],[0,0],color='k',linewidth=1)
plt.plot(1,0.375,color='k',marker='+')
x = xold = -0.5
y = yold = 2.3
for i in range(14):
plt.plot([xold,x],[yold,y], marker='o', linestyle='dotted', color='k')
xold = x
yold = y
x = x - 0.02 * dx(x)
y = y - 0.02 * dy(y)
x = xold = 2.3
y = yold = 2.3
for i in range(14):
plt.plot([xold,x],[yold,y], marker='s', linestyle='dotted', color='k')
xold = x
yold = y
x = x - 0.01 * dx(x)
y = y - 0.01 * dy(y)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.tight_layout(pad=0, w_pad=0, h_pad=0)
plt.savefig("gd_2d_oscillating.png", dpi=300)
plt.show()
plt.close()
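# Illustrative refactor sketch (not in the original script): the hand-rolled
# loops above all implement the same update x <- x - lr * df/dx, so a small
# helper would remove the copy/paste while leaving the plots unchanged.
def descend(x, y, dx, dy, lr=0.02, steps=12):
    path = [(x, y)]
    for _ in range(steps):
        x, y = x - lr * dx(x), y - lr * dy(y)
        path.append((x, y))
    return path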
| 22.044248 | 74 | 0.55279 | 495 | 2,491 | 2.759596 | 0.191919 | 0.021962 | 0.065886 | 0.073206 | 0.84407 | 0.839678 | 0.839678 | 0.804539 | 0.799414 | 0.799414 | 0 | 0.090455 | 0.196708 | 2,491 | 112 | 75 | 22.241071 | 0.592204 | 0.081895 | 0 | 0.780488 | 0 | 0 | 0.048913 | 0.009511 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.02439 | 0.073171 | 0.170732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1a00bea3531b10db8f502ee3e4788aa1b00e61d0 | 173 | py | Python | ibsng/handler/invoice/get_invoice_profiles.py | ParspooyeshFanavar/pyibsng | d48bcf4f25e3f23461528bf0ff8870cc3d537444 | [
"MIT"
] | 6 | 2018-03-06T10:16:36.000Z | 2021-12-05T12:43:10.000Z | ibsng/handler/invoice/get_invoice_profiles.py | ParspooyeshFanavar/pyibsng | d48bcf4f25e3f23461528bf0ff8870cc3d537444 | [
"MIT"
] | 3 | 2018-03-06T10:27:08.000Z | 2022-01-02T15:21:27.000Z | ibsng/handler/invoice/get_invoice_profiles.py | ParspooyeshFanavar/pyibsng | d48bcf4f25e3f23461528bf0ff8870cc3d537444 | [
"MIT"
] | 3 | 2018-01-06T16:28:31.000Z | 2018-09-17T19:47:19.000Z | """Get invoice profiles API method."""
from ibsng.handler.handler import Handler
class getInvoiceProfiles(Handler):
"""Get invoice profiles method class."""
pass
| 19.222222 | 44 | 0.728324 | 20 | 173 | 6.3 | 0.6 | 0.15873 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16185 | 173 | 8 | 45 | 21.625 | 0.868966 | 0.387283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
1a14c45492b72357809c76ddbf85e95594351136 | 2,399 | py | Python | exams/61a-su20-practice-mt-solution/q2/q2.py | jjllzhang/CS61A | 57b68c7c06999210d96499f6d84e4ec99085d396 | [
"MIT"
] | 1 | 2022-01-22T11:45:01.000Z | 2022-01-22T11:45:01.000Z | exams/61a-su20-practice-mt-solution/q2/q2.py | jjllzhang/CS61A | 57b68c7c06999210d96499f6d84e4ec99085d396 | [
"MIT"
] | null | null | null | exams/61a-su20-practice-mt-solution/q2/q2.py | jjllzhang/CS61A | 57b68c7c06999210d96499f6d84e4ec99085d396 | [
"MIT"
] | null | null | null |
def make_guess(n):
"""
Let's play a guessing game! In order to do this, we'll use higher order functions.
Write a function, make_guess, which takes in a number that we want someone to try to guess, and returns a guessing
function.
A guessing function is a one-argument function which takes in a number. If the number passed in is the number we
wanted to guess, it will return the number of incorrect guesses made prior to the correct guess. Otherwise, it returns
another guessing function, which keeps track of the fact that we've made an incorrect guess.
Solutions which use lists, object mutation, nonlocal, or global will receive no credit.
>>> guesser = make_guess(10)
>>> guess1 = guesser(6)
>>> guess2 = guess1(7)
>>> guess3 = guess2(8)
>>> guess3(10)
3
>>> guess2(10)
2
>>> a = make_guess(5)(1)(2)(3)(4)(5)
>>> a
4
"""
def update_guess(num_incorrect):
def new_guess(x):
if x == n:
return num_incorrect
else:
return update_guess(num_incorrect + 1)
return new_guess
return update_guess(0)
# ORIGINAL SKELETON FOLLOWS
# def make_guess(n):
# """
# Let's play a guessing game! In order to do this, we'll use higher order functions.
# Write a function, make_guess, which takes in a number that we want someone to try to guess, and returns a guessing
# function.
# A guessing function is a one-argument function which takes in a number. If the number passed in is the number we
# wanted to guess, it will return the number of incorrect guesses made prior to the correct guess. Otherwise, it returns
# another guessing function, which keeps track of the fact that we've made an incorrect guess.
# Solutions which use lists, object mutation, nonlocal, or global will receive no credit.
# >>> guesser = make_guess(10)
# >>> guess1 = guesser(6)
# >>> guess2 = guess1(7)
# >>> guess3 = guess2(8)
# >>> guess3(10)
# 3
# >>> guess2(10)
# 2
# >>> a = make_guess(5)(1)(2)(3)(4)(5)
# >>> a
# 4
# """
# def update_guess(num_incorrect):
# def new_guess(x):
# if x == n:
# return num_incorrect
# else:
# return update_guess(num_incorrect + 1)
# return new_guess
# return update_guess(0)
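# Illustrative trace (not part of the exam handout): each wrong guess returns a
# fresh closure whose num_incorrect is one larger, so state threads through
# return values instead of mutation.
g0 = make_guess(5)    # update_guess(0) wrapped around n=5
g1 = g0(3)            # wrong -> update_guess(1)
g2 = g1(4)            # wrong -> update_guess(2)
assert g2(5) == 2     # right -> number of prior wrong guesses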
| 36.907692 | 124 | 0.624844 | 351 | 2,399 | 4.202279 | 0.245014 | 0.048814 | 0.032542 | 0.035254 | 0.984407 | 0.984407 | 0.984407 | 0.984407 | 0.984407 | 0.984407 | 0 | 0.031304 | 0.28095 | 2,399 | 64 | 125 | 37.484375 | 0.823768 | 0.825344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c5384d618b74e7736e62ba9e39ecfc66105dd8f3 | 7,196 | py | Python | tests/v2/test_0107-assign-fields-to-records.py | jpivarski/awkward-1.0 | 49a3ff13ef90b8778a80573211d58c544729eaa5 | [
"BSD-3-Clause"
] | 2 | 2019-09-12T03:07:23.000Z | 2019-09-27T05:32:07.000Z | tests/v2/test_0107-assign-fields-to-records.py | jpivarski/awkward-1.0 | 49a3ff13ef90b8778a80573211d58c544729eaa5 | [
"BSD-3-Clause"
] | 1 | 2019-09-26T17:57:45.000Z | 2019-09-26T17:57:45.000Z | tests/v2/test_0107-assign-fields-to-records.py | jpivarski/awkward-1.0 | 49a3ff13ef90b8778a80573211d58c544729eaa5 | [
"BSD-3-Clause"
] | null | null | null | # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
import pytest # noqa: F401
import numpy as np # noqa: F401
import awkward as ak # noqa: F401
to_list = ak._v2.operations.to_list
def test_record():
array1 = ak._v2.operations.from_iter(
[{"x": 1, "y": 1.1}, {"x": 2, "y": 2.2}, {"x": 3, "y": 3.3}], highlevel=False
)
assert to_list(array1) == [
{"x": 1, "y": 1.1},
{"x": 2, "y": 2.2},
{"x": 3, "y": 3.3},
]
array2 = ak._v2.operations.with_field(
array1,
ak._v2.operations.from_iter([[], [1], [2, 2]], highlevel=False),
"z",
)
assert to_list(array2) == [
{"x": 1, "y": 1.1, "z": []},
{"x": 2, "y": 2.2, "z": [1]},
{"x": 3, "y": 3.3, "z": [2, 2]},
]
array3 = ak._v2.operations.with_field(
array1, ak._v2.operations.from_iter([[], [1], [2, 2]], highlevel=False)
)
assert to_list(array3) == [
{"x": 1, "y": 1.1, "2": []},
{"x": 2, "y": 2.2, "2": [1]},
{"x": 3, "y": 3.3, "2": [2, 2]},
]
array3 = ak._v2.operations.with_field(
array1,
ak._v2.operations.from_iter([[], [1], [2, 2]], highlevel=False),
"0",
)
assert to_list(array3) == [
{"x": 1, "y": 1.1, "0": []},
{"x": 2, "y": 2.2, "0": [1]},
{"x": 3, "y": 3.3, "0": [2, 2]},
]
array1 = ak._v2.operations.from_iter(
[(1, 1.1), (2, 2.2), (3, 3.3)], highlevel=False
)
assert to_list(array1) == [(1, 1.1), (2, 2.2), (3, 3.3)]
array2 = ak._v2.operations.with_field(
array1,
ak._v2.operations.from_iter([[], [1], [2, 2]], highlevel=False),
"z",
)
assert to_list(array2) == [
{"0": 1, "1": 1.1, "z": []},
{"0": 2, "1": 2.2, "z": [1]},
{"0": 3, "1": 3.3, "z": [2, 2]},
]
array3 = ak._v2.operations.with_field(
array1, ak._v2.operations.from_iter([[], [1], [2, 2]], highlevel=False)
)
assert to_list(array3) == [(1, 1.1, []), (2, 2.2, [1]), (3, 3.3, [2, 2])]
array3 = ak._v2.operations.with_field(
array1,
ak._v2.operations.from_iter([[], [1], [2, 2]], highlevel=False),
"0",
)
assert to_list(array3) == [
{"0": [], "1": 1.1},
{"0": [1], "1": 2.2},
{"0": [2, 2], "1": 3.3},
]
array3 = ak._v2.operations.with_field(
array1,
ak._v2.operations.from_iter([[], [1], [2, 2]], highlevel=False),
"1",
)
assert to_list(array3) == [
{"0": 1, "1": []},
{"0": 2, "1": [1]},
{"0": 3, "1": [2, 2]},
]
array3 = ak._v2.operations.with_field(
array1,
ak._v2.operations.from_iter([[], [1], [2, 2]], highlevel=False),
"100",
)
assert to_list(array3) == [
{"0": 1, "1": 1.1, "100": []},
{"0": 2, "1": 2.2, "100": [1]},
{"0": 3, "1": 3.3, "100": [2, 2]},
]
def test_withfield():
base = ak._v2.Array([{"x": 1}, {"x": 2}, {"x": 3}], check_valid=True)
what = ak._v2.Array([1.1, 2.2, 3.3], check_valid=True)
assert to_list(ak._v2.operations.with_field(base, what)) == [
{"x": 1, "1": 1.1},
{"x": 2, "1": 2.2},
{"x": 3, "1": 3.3},
]
assert to_list(ak._v2.operations.with_field(base, what, where="y")) == [
{"x": 1, "y": 1.1},
{"x": 2, "y": 2.2},
{"x": 3, "y": 3.3},
]
base["z"] = what
assert to_list(base) == [
{"x": 1, "z": 1.1},
{"x": 2, "z": 2.2},
{"x": 3, "z": 3.3},
]
base["q"] = 123
assert to_list(base) == [
{"x": 1, "z": 1.1, "q": 123},
{"x": 2, "z": 2.2, "q": 123},
{"x": 3, "z": 3.3, "q": 123},
]
base = ak._v2.Array([{"x": 1}, {"x": 2}, {"x": 3}], check_valid=True)[2]
assert to_list(ak._v2.operations.with_field(base, 100, "y")) == {
"x": 3,
"y": 100,
}
def test_regulararray():
content = ak._v2.contents.NumpyArray(
np.array([0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9])
)
recordarray = ak._v2.contents.RecordArray([content], fields=["x"])
regulararray = ak._v2.Array(
ak._v2.contents.RegularArray(recordarray, 3, zeros_length=0), check_valid=True
)
content2 = ak._v2.contents.NumpyArray(np.array([100, 200, 300]))
regulararray2 = ak._v2.Array(
ak._v2.contents.RegularArray(content2, 1, zeros_length=0), check_valid=True
)
assert to_list(ak._v2.operations.with_field(regulararray, regulararray2, "y")) == [
[{"x": 0.0, "y": 100}, {"x": 1.1, "y": 100}, {"x": 2.2, "y": 100}],
[{"x": 3.3, "y": 200}, {"x": 4.4, "y": 200}, {"x": 5.5, "y": 200}],
[{"x": 6.6, "y": 300}, {"x": 7.7, "y": 300}, {"x": 8.8, "y": 300}],
]
content2 = ak._v2.contents.NumpyArray(
np.array([100, 200, 300, 400, 500, 600, 700, 800, 900])
)
regulararray2 = ak._v2.Array(
ak._v2.contents.RegularArray(content2, 3, zeros_length=0), check_valid=True
)
assert to_list(ak._v2.operations.with_field(regulararray, regulararray2, "y")) == [
[{"x": 0.0, "y": 100}, {"x": 1.1, "y": 200}, {"x": 2.2, "y": 300}],
[{"x": 3.3, "y": 400}, {"x": 4.4, "y": 500}, {"x": 5.5, "y": 600}],
[{"x": 6.6, "y": 700}, {"x": 7.7, "y": 800}, {"x": 8.8, "y": 900}],
]
content2 = ak._v2.Array(
ak._v2.contents.NumpyArray(np.array([[100], [200], [300]])), check_valid=True
)
assert to_list(ak._v2.operations.with_field(regulararray, content2, "y")) == [
[{"x": 0.0, "y": 100}, {"x": 1.1, "y": 100}, {"x": 2.2, "y": 100}],
[{"x": 3.3, "y": 200}, {"x": 4.4, "y": 200}, {"x": 5.5, "y": 200}],
[{"x": 6.6, "y": 300}, {"x": 7.7, "y": 300}, {"x": 8.8, "y": 300}],
]
content2 = ak._v2.Array(
ak._v2.contents.NumpyArray(
np.array([[100, 200, 300], [400, 500, 600], [700, 800, 900]])
),
check_valid=True,
)
assert to_list(ak._v2.operations.with_field(regulararray, content2, "y")) == [
[{"x": 0.0, "y": 100}, {"x": 1.1, "y": 200}, {"x": 2.2, "y": 300}],
[{"x": 3.3, "y": 400}, {"x": 4.4, "y": 500}, {"x": 5.5, "y": 600}],
[{"x": 6.6, "y": 700}, {"x": 7.7, "y": 800}, {"x": 8.8, "y": 900}],
]
def test_listarray():
one = ak._v2.Array(
[[{"x": 1}, {"x": 2}, {"x": 3}], [], [{"x": 4}, {"x": 5}]], check_valid=True
)
two = ak._v2.Array([[1.1, 2.2, 3.3], [], [4.4, 5.5]], check_valid=True)
assert to_list(ak._v2.operations.with_field(one, two, "y")) == [
[{"x": 1, "y": 1.1}, {"x": 2, "y": 2.2}, {"x": 3, "y": 3.3}],
[],
[{"x": 4, "y": 4.4}, {"x": 5, "y": 5.5}],
]
three = ak._v2.Array([100, 200, 300], check_valid=True)
assert to_list(ak._v2.operations.with_field(one, three, "y")) == [
[{"x": 1, "y": 100}, {"x": 2, "y": 100}, {"x": 3, "y": 100}],
[],
[{"x": 4, "y": 300}, {"x": 5, "y": 300}],
]
assert to_list(ak._v2.operations.with_field(one, [100, 200, 300], "y")) == [
[{"x": 1, "y": 100}, {"x": 2, "y": 100}, {"x": 3, "y": 100}],
[],
[{"x": 4, "y": 300}, {"x": 5, "y": 300}],
]
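# Illustrative sketch (mirrors the broadcasting the tests above exercise): a
# flat length-n array broadcasts one value per sublist of an n-sublist array,
# while a matching nested array pairs values element by element.
records = ak._v2.Array([[{"x": 1}, {"x": 2}], [], [{"x": 3}]])
flat = ak._v2.Array([10, 20, 30])
assert to_list(ak._v2.operations.with_field(records, flat, "y")) == [
    [{"x": 1, "y": 10}, {"x": 2, "y": 10}],
    [],
    [{"x": 3, "y": 30}],
]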
| 33.16129 | 87 | 0.444275 | 1,113 | 7,196 | 2.765499 | 0.074573 | 0.063678 | 0.131904 | 0.105263 | 0.852827 | 0.823587 | 0.794672 | 0.758934 | 0.729695 | 0.634828 | 0 | 0.139286 | 0.271679 | 7,196 | 216 | 88 | 33.314815 | 0.448006 | 0.016398 | 0 | 0.374332 | 0 | 0 | 0.033508 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.02139 | false | 0 | 0.016043 | 0 | 0.037433 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c53b67493b32363e2f81ac175778990ad5b11b9d | 7,222 | py | Python | sdk/python/pulumi_aws/kms/_inputs.py | alexbowers/pulumi-aws | 7dbdb03b1e4f7c0d51d5b5d17233ff4465c3eff5 | [
"ECL-2.0",
"Apache-2.0"
] | 260 | 2018-06-18T14:57:00.000Z | 2022-03-29T11:41:03.000Z | sdk/python/pulumi_aws/kms/_inputs.py | alexbowers/pulumi-aws | 7dbdb03b1e4f7c0d51d5b5d17233ff4465c3eff5 | [
"ECL-2.0",
"Apache-2.0"
] | 1,154 | 2018-06-19T20:38:20.000Z | 2022-03-31T19:48:16.000Z | sdk/python/pulumi_aws/kms/_inputs.py | alexbowers/pulumi-aws | 7dbdb03b1e4f7c0d51d5b5d17233ff4465c3eff5 | [
"ECL-2.0",
"Apache-2.0"
] | 115 | 2018-06-28T03:20:27.000Z | 2022-03-29T11:41:06.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = [
'GrantConstraintArgs',
'GetSecretSecretArgs',
'GetSecretsSecretArgs',
]
@pulumi.input_type
class GrantConstraintArgs:
def __init__(__self__, *,
encryption_context_equals: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
encryption_context_subset: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] encryption_context_equals: A list of key-value pairs that must match the encryption context in subsequent cryptographic operation requests. The grant allows the operation only when the encryption context in the request is the same as the encryption context specified in this constraint. Conflicts with `encryption_context_subset`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] encryption_context_subset: A list of key-value pairs that must be included in the encryption context of subsequent cryptographic operation requests. The grant allows the cryptographic operation only when the encryption context in the request includes the key-value pairs specified in this constraint, although it can include additional key-value pairs. Conflicts with `encryption_context_equals`.
"""
if encryption_context_equals is not None:
pulumi.set(__self__, "encryption_context_equals", encryption_context_equals)
if encryption_context_subset is not None:
pulumi.set(__self__, "encryption_context_subset", encryption_context_subset)
@property
@pulumi.getter(name="encryptionContextEquals")
def encryption_context_equals(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A list of key-value pairs that must match the encryption context in subsequent cryptographic operation requests. The grant allows the operation only when the encryption context in the request is the same as the encryption context specified in this constraint. Conflicts with `encryption_context_subset`.
"""
return pulumi.get(self, "encryption_context_equals")
@encryption_context_equals.setter
def encryption_context_equals(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "encryption_context_equals", value)
@property
@pulumi.getter(name="encryptionContextSubset")
def encryption_context_subset(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A list of key-value pairs that must be included in the encryption context of subsequent cryptographic operation requests. The grant allows the cryptographic operation only when the encryption context in the request includes the key-value pairs specified in this constraint, although it can include additional key-value pairs. Conflicts with `encryption_context_equals`.
"""
return pulumi.get(self, "encryption_context_subset")
@encryption_context_subset.setter
def encryption_context_subset(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "encryption_context_subset", value)
@pulumi.input_type
class GetSecretSecretArgs:
def __init__(__self__, *,
name: str,
payload: str,
context: Optional[Mapping[str, str]] = None,
grant_tokens: Optional[Sequence[str]] = None):
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "payload", payload)
if context is not None:
pulumi.set(__self__, "context", context)
if grant_tokens is not None:
pulumi.set(__self__, "grant_tokens", grant_tokens)
@property
@pulumi.getter
def name(self) -> str:
return pulumi.get(self, "name")
@name.setter
def name(self, value: str):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def payload(self) -> str:
return pulumi.get(self, "payload")
@payload.setter
def payload(self, value: str):
pulumi.set(self, "payload", value)
@property
@pulumi.getter
def context(self) -> Optional[Mapping[str, str]]:
return pulumi.get(self, "context")
@context.setter
def context(self, value: Optional[Mapping[str, str]]):
pulumi.set(self, "context", value)
@property
@pulumi.getter(name="grantTokens")
def grant_tokens(self) -> Optional[Sequence[str]]:
return pulumi.get(self, "grant_tokens")
@grant_tokens.setter
def grant_tokens(self, value: Optional[Sequence[str]]):
pulumi.set(self, "grant_tokens", value)
@pulumi.input_type
class GetSecretsSecretArgs:
def __init__(__self__, *,
name: str,
payload: str,
context: Optional[Mapping[str, str]] = None,
grant_tokens: Optional[Sequence[str]] = None):
"""
:param str name: The name to export this secret under in the attributes.
:param str payload: Base64 encoded payload, as returned from a KMS encrypt operation.
:param Mapping[str, str] context: An optional mapping that makes up the Encryption Context for the secret.
:param Sequence[str] grant_tokens: An optional list of Grant Tokens for the secret.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "payload", payload)
if context is not None:
pulumi.set(__self__, "context", context)
if grant_tokens is not None:
pulumi.set(__self__, "grant_tokens", grant_tokens)
@property
@pulumi.getter
def name(self) -> str:
"""
The name to export this secret under in the attributes.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: str):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def payload(self) -> str:
"""
Base64 encoded payload, as returned from a KMS encrypt operation.
"""
return pulumi.get(self, "payload")
@payload.setter
def payload(self, value: str):
pulumi.set(self, "payload", value)
@property
@pulumi.getter
def context(self) -> Optional[Mapping[str, str]]:
"""
An optional mapping that makes up the Encryption Context for the secret.
"""
return pulumi.get(self, "context")
@context.setter
def context(self, value: Optional[Mapping[str, str]]):
pulumi.set(self, "context", value)
@property
@pulumi.getter(name="grantTokens")
def grant_tokens(self) -> Optional[Sequence[str]]:
"""
An optional list of Grant Tokens for the secret.
"""
return pulumi.get(self, "grant_tokens")
@grant_tokens.setter
def grant_tokens(self, value: Optional[Sequence[str]]):
pulumi.set(self, "grant_tokens", value)
| 41.034091 | 457 | 0.677652 | 879 | 7,222 | 5.409556 | 0.133106 | 0.128707 | 0.054679 | 0.039958 | 0.868139 | 0.830705 | 0.81388 | 0.780652 | 0.764248 | 0.728076 | 0 | 0.000892 | 0.223761 | 7,222 | 175 | 458 | 41.268571 | 0.847306 | 0.315702 | 0 | 0.730435 | 1 | 0 | 0.09659 | 0.041517 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.043478 | 0.034783 | 0.356522 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c546469edd6fedb431d25427538d5a587a3603d3 | 113 | py | Python | lib/mmdet/version.py | jcjs/deep-high-resolution-net.pytorch | f19964688cb30c0e88a4a3076c7955d088f3e521 | [
"MIT"
] | null | null | null | lib/mmdet/version.py | jcjs/deep-high-resolution-net.pytorch | f19964688cb30c0e88a4a3076c7955d088f3e521 | [
"MIT"
] | null | null | null | lib/mmdet/version.py | jcjs/deep-high-resolution-net.pytorch | f19964688cb30c0e88a4a3076c7955d088f3e521 | [
"MIT"
] | null | null | null | # GENERATED VERSION FILE
# TIME: Wed Apr 17 10:00:06 2019
__version__ = '0.6.0+a9e21cf'
short_version = '0.6.0'
| 18.833333 | 32 | 0.699115 | 21 | 113 | 3.52381 | 0.714286 | 0.216216 | 0.243243 | 0.27027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.221053 | 0.159292 | 113 | 5 | 33 | 22.6 | 0.557895 | 0.469027 | 0 | 0 | 1 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c550f265ecbaeb618f9688edcb7e5d5876fb6fea | 3,821 | py | Python | pysilcam/tests/test_standards.py | Sondreab/PySilCam | a855f769fee8f86a364f9dc2c448c74a7a71c2a6 | [
"BSD-3-Clause"
] | null | null | null | pysilcam/tests/test_standards.py | Sondreab/PySilCam | a855f769fee8f86a364f9dc2c448c74a7a71c2a6 | [
"BSD-3-Clause"
] | null | null | null | pysilcam/tests/test_standards.py | Sondreab/PySilCam | a855f769fee8f86a364f9dc2c448c74a7a71c2a6 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import os
from pysilcam.__main__ import silcam_process
import pysilcam.postprocess as scpp
import unittest
import pandas as pd
from pysilcam.config import PySilcamSettings
@unittest.skipIf(not os.path.isdir(
'E:/test data/hello_silcam/unittest_entries/STANDARDS/StandardsBig'),
"test path not accessible")
@unittest.skipIf(not os.path.isdir(
'E:/test data/hello_silcam/unittest_entries/STANDARDS/StandardsSmall'),
"test path not accessible")
def test_big_standards():
'''Testing that the large standards are sized correctly'''
path = os.path.dirname(__file__)
conf_file = os.path.join(path, 'config_glass_standards.ini')
data_file = 'E:/test data/hello_silcam/unittest_entries/STANDARDS/StandardsBig'
stats_file = 'E:/test data/hello_silcam/unittest_entries/STANDARDS/proc/StandardsBig-STATS.csv'
# if csv file already exists, it has to be deleted
if (os.path.isfile(stats_file)):
os.remove(stats_file)
# call process function
silcam_process(conf_file, data_file, multiProcess=False, nbImages=10)
# check that csv file has been created
assert os.path.isfile(stats_file), 'stats_file not created'
# check that csv file has been properly built
csvfile = open(stats_file)
lines = csvfile.readlines()
numline = len(lines)
assert numline > 1, 'csv file empty'
# check the columns
assert lines[0] == 'particle index,major_axis_length,minor_axis_length,equivalent_diameter,solidity,minr,minc,maxr,maxc,'\
'probability_oil,probability_other,probability_bubble,probability_faecal_pellets,probability_copepod,'\
'probability_diatom_chain,probability_oily_gas,export name,timestamp,saturation\n', 'columns not properly built'
settings = PySilcamSettings(conf_file)
stats = pd.read_csv(stats_file)
d50 = scpp.d50_from_stats(stats, settings.PostProcess)
assert (d50 > 310 and d50 < 330), 'incorrect d50'
@unittest.skipIf(not os.path.isdir(
'E:/test data/hello_silcam/unittest_entries/STANDARDS/StandardsBig'),
"test path not accessible")
@unittest.skipIf(not os.path.isdir(
'E:/test data/hello_silcam/unittest_entries/STANDARDS/StandardsSmall'),
"test path not accessible")
def test_small_standards():
'''Testing that the small standards are sized correctly'''
path = os.path.dirname(__file__)
conf_file = os.path.join(path, 'config_glass_standards.ini')
data_file = 'E:/test data/hello_silcam/unittest_entries/STANDARDS/StandardsSmall'
stats_file = 'E:/test data/hello_silcam/unittest_entries/STANDARDS/proc/StandardsSmall-STATS.csv'
# if csv file already exists, it has to be deleted
if (os.path.isfile(stats_file)):
os.remove(stats_file)
# call process function
silcam_process(conf_file, data_file, multiProcess=False, nbImages=10)
# check that csv file has been created
assert os.path.isfile(stats_file), 'stats_file not created'
# check that csv file has been properly built
csvfile = open(stats_file)
lines = csvfile.readlines()
numline = len(lines)
assert numline > 1, 'csv file empty'
# check the columns
assert lines[0] == 'particle index,major_axis_length,minor_axis_length,equivalent_diameter,solidity,minr,minc,maxr,maxc,'\
'probability_oil,probability_other,probability_bubble,probability_faecal_pellets,probability_copepod,'\
'probability_diatom_chain,probability_oily_gas,export name,timestamp,saturation\n', 'columns not properly built'
settings = PySilcamSettings(conf_file)
stats = pd.read_csv(stats_file)
d50 = scpp.d50_from_stats(stats, settings.PostProcess)
assert (d50 > 70 and d50 < 90), 'incorrect d50'
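# Illustrative sketch (not part of the suite): the two tests above differ only
# in data path and expected d50 window, so the shared body could live in one
# helper.
def _run_standard_check(conf_file, data_file, stats_file, d50_lo, d50_hi):
    if os.path.isfile(stats_file):
        os.remove(stats_file)
    silcam_process(conf_file, data_file, multiProcess=False, nbImages=10)
    assert os.path.isfile(stats_file), 'stats_file not created'
    stats = pd.read_csv(stats_file)
    settings = PySilcamSettings(conf_file)
    d50 = scpp.d50_from_stats(stats, settings.PostProcess)
    assert d50_lo < d50 < d50_hi, 'incorrect d50'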
| 40.221053 | 126 | 0.744308 | 515 | 3,821 | 5.328155 | 0.24466 | 0.045918 | 0.026239 | 0.040816 | 0.866618 | 0.866618 | 0.866618 | 0.866618 | 0.866618 | 0.857143 | 0 | 0.012123 | 0.158074 | 3,821 | 94 | 127 | 40.648936 | 0.840846 | 0.122481 | 0 | 0.709677 | 0 | 0 | 0.42497 | 0.32593 | 0 | 0 | 0 | 0 | 0.129032 | 1 | 0.032258 | false | 0 | 0.16129 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3dc0fcb5de0537cb5a43af9232e4d7e706dc0e6c | 2,176 | py | Python | lint/dead_code_test.py | hyroai/lint | ea5f18e4bd88c2c88f36a9856fa7f9d36838a7e6 | [
"MIT"
] | 1 | 2021-03-21T03:45:00.000Z | 2021-03-21T03:45:00.000Z | lint/dead_code_test.py | hyroai/lint | ea5f18e4bd88c2c88f36a9856fa7f9d36838a7e6 | [
"MIT"
] | 3 | 2020-07-15T16:16:37.000Z | 2022-01-27T01:06:20.000Z | lint/dead_code_test.py | hyroai/lint | ea5f18e4bd88c2c88f36a9856fa7f9d36838a7e6 | [
"MIT"
] | null | null | null | import ast
import gamla
from lint import dead_code
def test_allow_unused_public():
gamla.pipe(
'I_AM_A_CONSTANT = "asd"',
ast.parse,
dead_code.detect,
gamla.check(gamla.complement(gamla.count), AssertionError),
)
def test_allow_double_underscore():
gamla.pipe(
'd.__getitem__("bla")',
ast.parse,
dead_code.detect,
gamla.check(gamla.complement(gamla.count), AssertionError),
)
def test_disallow_unused_private():
gamla.pipe(
'_I_AM_A_CONSTANT = "asd"',
ast.parse,
dead_code.detect,
gamla.check(gamla.count, AssertionError),
)
def test_allow_unused_public_function():
gamla.pipe(
"def hi():\n return 1",
ast.parse,
dead_code.detect,
gamla.check(gamla.complement(gamla.count), AssertionError),
)
def test_disallow_unused_private_function():
gamla.pipe(
"def _hi():\n return 1",
ast.parse,
dead_code.detect,
gamla.check(gamla.count, AssertionError),
)
def test_disallow_unused_async_private_function():
gamla.pipe(
"async def _hi():\n return 1",
ast.parse,
dead_code.detect,
gamla.check(gamla.count, AssertionError),
)
def test_class_methods_allowed():
gamla.pipe(
"""@dataclasses.dataclass(frozen=True)
class SomeClass:
# Some comment.
text: Text
_private_thing: Text = "bla"
def is_something(self) -> bool:
return self._private_thing in []
""",
ast.parse,
dead_code.detect,
gamla.check(gamla.complement(gamla.count), AssertionError),
)
def test_class_methods_disallowed():
gamla.pipe(
"""@dataclasses.dataclass(frozen=True)
class SomeClass:
# Some comment.
text: Text
_private_thing: Text = "bla"
""",
ast.parse,
dead_code.detect,
gamla.check(gamla.count, AssertionError),
)
def test_private_class():
gamla.pipe(
"class _Something: pass; A = _Something()",
ast.parse,
dead_code.detect,
gamla.check(gamla.complement(gamla.count), AssertionError),
)
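# Illustrative sketch (assumed helpers, not part of the suite): every test
# above pipes source -> ast.parse -> dead_code.detect and asserts on the
# finding count, so two small wrappers would shrink each case to one line.
def _assert_dead(source):
    gamla.pipe(source, ast.parse, dead_code.detect,
               gamla.check(gamla.count, AssertionError))

def _assert_clean(source):
    gamla.pipe(source, ast.parse, dead_code.detect,
               gamla.check(gamla.complement(gamla.count), AssertionError))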
| 21.76 | 67 | 0.620404 | 247 | 2,176 | 5.214575 | 0.226721 | 0.062112 | 0.083851 | 0.111801 | 0.828416 | 0.800466 | 0.792702 | 0.763199 | 0.763199 | 0.76087 | 0 | 0.001868 | 0.261949 | 2,176 | 99 | 68 | 21.979798 | 0.800125 | 0 | 0 | 0.545455 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 1 | 0.136364 | true | 0.015152 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9aa3920672fbc2f75d6de0d46bbcc209cb8afe2f | 4,453 | py | Python | tasks/inventory/hourly.py | meteostat/routines | 8867b96a3fcb254ebcc9623933a76dac44157b70 | [
"MIT"
] | 7 | 2020-07-02T09:49:06.000Z | 2021-05-24T11:46:00.000Z | tasks/inventory/hourly.py | meteostat/routines | 8867b96a3fcb254ebcc9623933a76dac44157b70 | [
"MIT"
] | 16 | 2021-03-29T19:45:01.000Z | 2021-11-14T11:39:12.000Z | tasks/inventory/hourly.py | meteostat/routines | 8867b96a3fcb254ebcc9623933a76dac44157b70 | [
"MIT"
] | 1 | 2021-04-06T20:58:42.000Z | 2021-04-06T20:58:42.000Z | """
Update hourly inventory
The code is licensed under the MIT license.
"""
from routines import Routine
task = Routine('task.inventory.hourly')
task.query('''
INSERT INTO
`inventory`(`station`, `mode`, `start`)
SELECT
`station`,
'H' AS `mode`,
MIN(`mindate`) AS `start` FROM (
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_synop`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_metar`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_national`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_isd`
GROUP BY `station`)
) AS `hourly_inventory`
GROUP BY `station`
ON DUPLICATE KEY UPDATE
`start` = VALUES(`start`)
''')
task.query('''
INSERT INTO
`inventory`(`station`, `mode`, `start`)
SELECT
`station`,
'P' AS `mode`,
MIN(`mindate`) AS `start` FROM (
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_model`
GROUP BY `station`)
) AS `model_inventory`
GROUP BY `station`
ON DUPLICATE KEY UPDATE
`start` = VALUES(`start`)
''')
task.query('''
INSERT INTO
`inventory`(`station`, `mode`, `end`)
SELECT
`station`,
'H' AS `mode`,
MAX(`maxdate`) AS `end` FROM (
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_synop`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_metar`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_national`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_isd`
GROUP BY `station`)
) AS `hourly_inventory`
GROUP BY `station`
ON DUPLICATE KEY UPDATE
`end` = VALUES(`end`)
''')
task.query('''
INSERT INTO
`inventory`(`station`, `mode`, `end`)
SELECT
`station`,
'P' AS `mode`,
MAX(`maxdate`) AS `end` FROM (
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_model`
GROUP BY `station`)
) AS `model_inventory`
GROUP BY `station`
ON DUPLICATE KEY UPDATE
`end` = VALUES(`end`)
''')
# Legacy
task.query('''
INSERT INTO
`stations_inventory`(`station`, `hourly_start`)
SELECT
`station`,
MIN(`mindate`) AS `hourly_start` FROM (
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_model`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_metar`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_synop`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_national`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MIN(`time`)) as `mindate`
FROM `hourly_isd`
GROUP BY `station`)
) AS `hourly_inventory`
GROUP BY `station`
ON DUPLICATE KEY UPDATE
`hourly_start` = VALUES(`hourly_start`)
''')
task.query('''
INSERT INTO
`stations_inventory`(`station`, `hourly_end`)
SELECT
`station`,
MAX(`maxdate`) AS `hourly_end` FROM (
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_model`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_metar`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_synop`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_national`
GROUP BY `station`)
UNION ALL
(SELECT
`station`,
DATE(MAX(`time`)) as `maxdate`
FROM `hourly_isd`
GROUP BY `station`)
) AS `hourly_inventory`
GROUP BY `station`
ON DUPLICATE KEY UPDATE
`hourly_end` = VALUES(`hourly_end`)
''')
| 36.203252 | 722 | 0.547272 | 497 | 4,453 | 4.830986 | 0.094567 | 0.140775 | 0.151604 | 0.110787 | 0.907539 | 0.90379 | 0.90379 | 0.90379 | 0.862974 | 0.862974 | 0 | 0 | 0.313721 | 4,453 | 122 | 723 | 36.5 | 0.785668 | 0.017067 | 0 | 0.962963 | 0 | 0.018519 | 0.962463 | 0.105974 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009259 | 0 | 0.009259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
9ab36d18b474b77890c6b67070b57c69b93bb336 | 97 | py | Python | testenv/contrib/redis.py | mialinx/testenv | 1db6920c22f6b5469b35a78b93619445709705ac | [
"MIT"
] | 2 | 2019-01-30T15:43:30.000Z | 2020-07-13T16:13:06.000Z | testenv/contrib/redis.py | ko91h/testenv | 75b9b461974a75d8819d38fa010be74a49f06d27 | [
"MIT"
] | null | null | null | testenv/contrib/redis.py | ko91h/testenv | 75b9b461974a75d8819d38fa010be74a49f06d27 | [
"MIT"
] | 3 | 2015-12-01T15:38:35.000Z | 2020-05-29T10:46:57.000Z | # -*- coding: utf-8 -*-
from .. import server
class Redis(server.Server):
    # TODO: implement Redis-specific setup (start command, config, readiness check)
    pass
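# Editorial sketch (hypothetical, not part of the original stub): one way the
# class might eventually look. `executable`, `host` and `port` are assumed
# names -- testenv's actual server.Server interface is not shown here.
#
#     class Redis(server.Server):
#         executable = 'redis-server'          # assumed hook: binary to launch
#
#         def ping(self):                      # assumed hook: readiness probe
#             import socket
#             with socket.create_connection((self.host, self.port), timeout=1) as s:
#                 s.sendall(b'PING\r\n')
#                 return s.recv(16).startswith(b'+PONG')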
| 10.777778 | 27 | 0.57732 | 12 | 97 | 4.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013889 | 0.257732 | 97 | 8 | 28 | 12.125 | 0.763889 | 0.268041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
b1084c7f762feb44611a0cf7c475f78ee3c2b707 | 48 | py | Python | fictrac_phidget_aout_demo/__init__.py | jennyl617/fictrac_phidget_aout_demo | e01ed97bf2c0037cbd03fef64c32dd56da8e7fd6 | [
"MIT"
] | null | null | null | fictrac_phidget_aout_demo/__init__.py | jennyl617/fictrac_phidget_aout_demo | e01ed97bf2c0037cbd03fef64c32dd56da8e7fd6 | [
"MIT"
] | null | null | null | fictrac_phidget_aout_demo/__init__.py | jennyl617/fictrac_phidget_aout_demo | e01ed97bf2c0037cbd03fef64c32dd56da8e7fd6 | [
"MIT"
] | null | null | null | from fictrac_phidget_aout_demo import aout_demo
| 24 | 47 | 0.916667 | 8 | 48 | 5 | 0.75 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b136cc6f0e2c0b71436b8f55d63e6199e984c89b | 7,325 | py | Python | grafana/common/dashboards/aggregated/server_ip_address.py | MikeAT/visualizer | 946b98d82eaf7ec508861115585afd683fc49e5c | [
"MIT"
] | 6 | 2021-03-03T17:52:24.000Z | 2022-02-10T11:45:22.000Z | grafana/common/dashboards/aggregated/server_ip_address.py | Acidburn0zzz/visualizer | 20fba91f0d26b98531f97f643c8329640d1c0d11 | [
"MIT"
] | 1 | 2021-04-29T12:34:04.000Z | 2021-04-29T14:50:17.000Z | grafana/common/dashboards/aggregated/server_ip_address.py | Acidburn0zzz/visualizer | 20fba91f0d26b98531f97f643c8329640d1c0d11 | [
"MIT"
] | 2 | 2021-04-27T14:02:03.000Z | 2021-11-12T10:34:32.000Z | # Copyright 2021 Internet Corporation for Assigned Names and Numbers.
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, you can obtain one at https://mozilla.org/MPL/2.0/.
#
# Developed by Sinodun IT (sinodun.com)
#
# Aggregation server IP address plots
import textwrap
import grafanalib.core as GCore
import grafanacommon as GCommon
def dash(myuid, agginfo, nodesel, **kwargs):
return GCommon.Dashboard(
title = "Server IP address",
tags = [
agginfo['graph_tag']
],
uid = myuid,
rows = [
GCore.Row(
panels = [
GCommon.QPSGraph(
title = 'Server IP address',
targets = [
GCommon.ClickHouseTarget(
database = agginfo['database'],
table = 'ServerAddressTransport' + agginfo['table_suffix'],
round = agginfo['round'],
query = textwrap.dedent("""\
SELECT t, groupArray((replaceRegexpOne(Addr, '^::ffff:', ''), qc)) AS AddrCount
FROM
(
SELECT
t,Addr,cnt/{interval_divisor} AS qc
FROM
(
SELECT
$timeSeries AS t,
IPv6NumToString(ServerAddress) AS Addr,
sum(toUInt64(Count)) AS cnt
FROM $table
WHERE $timeFilter
AND NodeID IN {nodesel}
AND ServerAddress IN (
SELECT IPv6StringToNum(address)
FROM {nodeinfo_database}.server_address )
GROUP BY t, Addr
ORDER BY t, Addr
)
)
GROUP BY t
ORDER BY t""".format(
interval_divisor=agginfo['interval_divisor'],
nodesel=nodesel,
nodeinfo_database=agginfo['nodeinfo_database'])),
refId = 'A'
),
],
),
],
),
GCore.Row(
panels = [
GCommon.QPSGraph(
title = 'Server IP address, UDP',
targets = [
GCommon.ClickHouseTarget(
database = agginfo['database'],
table = 'ServerAddressTransport' + agginfo['table_suffix'],
round = agginfo['round'],
query = textwrap.dedent("""\
SELECT t, groupArray((replaceRegexpOne(Addr, '^::ffff:', ''), qc)) AS AddrCount
FROM
(
SELECT
t,Addr,cnt/{interval_divisor} AS qc
FROM
(
SELECT
$timeSeries AS t,
IPv6NumToString(ServerAddress) AS Addr,
sum(toUInt64(Count)) AS cnt
FROM $table
WHERE $timeFilter
AND TransportTCP = 0
AND NodeID IN {nodesel}
AND ServerAddress IN (
SELECT IPv6StringToNum(address)
FROM {nodeinfo_database}.server_address )
GROUP BY t, Addr
ORDER BY t, Addr
)
)
GROUP BY t
ORDER BY t""".format(
interval_divisor=agginfo['interval_divisor'],
nodesel=nodesel,
nodeinfo_database=agginfo['nodeinfo_database'])),
refId = 'A'
),
],
),
GCommon.QPSGraph(
title = 'Server IP address, TCP',
targets = [
GCommon.ClickHouseTarget(
database = agginfo['database'],
table = 'ServerAddressTransport' + agginfo['table_suffix'],
round = agginfo['round'],
query = textwrap.dedent("""\
SELECT t, groupArray((replaceRegexpOne(Addr, '^::ffff:', ''), qc)) AS AddrCount
FROM
(
SELECT
t,Addr,cnt/{interval_divisor} AS qc
FROM
(
SELECT
$timeSeries AS t,
IPv6NumToString(ServerAddress) AS Addr,
sum(toUInt64(Count)) AS cnt
FROM $table
WHERE $timeFilter
AND TransportTCP = 1
AND NodeID IN {nodesel}
AND ServerAddress IN (
SELECT IPv6StringToNum(address)
FROM {nodeinfo_database}.server_address )
GROUP BY t, Addr
ORDER BY t, Addr
)
)
GROUP BY t
ORDER BY t""".format(
interval_divisor=agginfo['interval_divisor'],
nodesel=nodesel,
nodeinfo_database=agginfo['nodeinfo_database'])),
refId = 'A'
),
],
),
],
),
]
)
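# Editorial sketch (hypothetical helper, not part of the original module): the
# three panels above issue the same ClickHouse query and differ only in an
# optional `TransportTCP` filter, so a small builder could remove the
# duplication. It reuses the module's own textwrap/format conventions.
# (`replaceRegexpOne` strips the '::ffff:' prefix so IPv4-mapped IPv6
# addresses render as plain IPv4.)
def _addr_count_query(agginfo, nodesel, transport_tcp=None):
    # None -> no transport filter; 0 -> UDP only; 1 -> TCP only.
    transport_filter = ('' if transport_tcp is None
                        else 'AND TransportTCP = {}'.format(int(transport_tcp)))
    return textwrap.dedent("""\
        SELECT t, groupArray((replaceRegexpOne(Addr, '^::ffff:', ''), qc)) AS AddrCount
        FROM
        (
            SELECT
                t,Addr,cnt/{interval_divisor} AS qc
            FROM
            (
                SELECT
                    $timeSeries AS t,
                    IPv6NumToString(ServerAddress) AS Addr,
                    sum(toUInt64(Count)) AS cnt
                FROM $table
                WHERE $timeFilter
                {transport_filter}
                AND NodeID IN {nodesel}
                AND ServerAddress IN (
                    SELECT IPv6StringToNum(address)
                    FROM {nodeinfo_database}.server_address )
                GROUP BY t, Addr
                ORDER BY t, Addr
            )
        )
        GROUP BY t
        ORDER BY t""").format(
            interval_divisor=agginfo['interval_divisor'],
            transport_filter=transport_filter,
            nodesel=nodesel,
            nodeinfo_database=agginfo['nodeinfo_database'])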
| 48.509934 | 113 | 0.318498 | 414 | 7,325 | 5.574879 | 0.282609 | 0.015598 | 0.020797 | 0.034662 | 0.806326 | 0.806326 | 0.791161 | 0.791161 | 0.791161 | 0.7487 | 0 | 0.007971 | 0.623208 | 7,325 | 150 | 114 | 48.833333 | 0.828261 | 0.045734 | 0 | 0.788321 | 0 | 0 | 0.624355 | 0.074355 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007299 | false | 0 | 0.021898 | 0.007299 | 0.036496 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b13fcee95c620a1075270509758116b1e330d56b | 404,795 | py | Python | elements_sdk/api/storage_api.py | elements-storage/elements-sdk-python | 39c365fe079dcd5928c5fe1bbaa67389bd5a3d81 | [
"MIT"
] | 6 | 2020-11-16T23:15:18.000Z | 2022-03-14T03:56:12.000Z | elements_sdk/api/storage_api.py | elements-storage/elements-sdk-python | 39c365fe079dcd5928c5fe1bbaa67389bd5a3d81 | [
"MIT"
] | 1 | 2021-07-28T13:03:49.000Z | 2021-08-25T12:24:01.000Z | elements_sdk/api/storage_api.py | elements-storage/elements-sdk-python | 39c365fe079dcd5928c5fe1bbaa67389bd5a3d81 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
ELEMENTS API
The version of the OpenAPI document: 2
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from elements_sdk.api_client import ApiClient
from elements_sdk.exceptions import (
ApiTypeError,
ApiValueError
)
class StorageApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
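    # Editorial usage sketch (hypothetical host and token; `Configuration` is
    # the standard openapi-generator class, assumed to be exported by
    # elements_sdk). Each endpoint also has a `*_with_http_info` twin that
    # returns (data, HTTP status, headers) instead of the bare data.
    #
    #     from elements_sdk import Configuration
    #     from elements_sdk.api_client import ApiClient
    #     config = Configuration(host='https://elements.example')
    #     config.access_token = 'YOUR-API-TOKEN'
    #     api = StorageApi(ApiClient(config))
    #     api.bookmark_workspace(42)                       # synchronous
    #     thread = api.bookmark_workspace(42, async_req=True)
    #     thread.get()                                     # asynchronous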
def apply_workspace_affinity(self, id, **kwargs): # noqa: E501
"""apply_workspace_affinity # noqa: E501
### Required permissions * User account permission: `projects:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.apply_workspace_affinity(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.apply_workspace_affinity_with_http_info(id, **kwargs) # noqa: E501
def apply_workspace_affinity_with_http_info(self, id, **kwargs): # noqa: E501
"""apply_workspace_affinity # noqa: E501
### Required permissions * User account permission: `projects:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.apply_workspace_affinity_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method apply_workspace_affinity" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `apply_workspace_affinity`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}/apply-affinity', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def bookmark_workspace(self, id, **kwargs): # noqa: E501
"""bookmark_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.bookmark_workspace(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.bookmark_workspace_with_http_info(id, **kwargs) # noqa: E501
def bookmark_workspace_with_http_info(self, id, **kwargs): # noqa: E501
"""bookmark_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.bookmark_workspace_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method bookmark_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `bookmark_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}/bookmark', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def calculate_directory_size(self, path_input, **kwargs): # noqa: E501
"""calculate_directory_size # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.calculate_directory_size(path_input, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param PathInput path_input: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: FileSizeEndpointResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.calculate_directory_size_with_http_info(path_input, **kwargs) # noqa: E501
def calculate_directory_size_with_http_info(self, path_input, **kwargs): # noqa: E501
"""calculate_directory_size # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.calculate_directory_size_with_http_info(path_input, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param PathInput path_input: (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(FileSizeEndpointResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['path_input'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method calculate_directory_size" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'path_input' is set
if self.api_client.client_side_validation and ('path_input' not in local_var_params or # noqa: E501
local_var_params['path_input'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `path_input` when calling `calculate_directory_size`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'path_input' in local_var_params:
body_params = local_var_params['path_input']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/filesystem/calculate-directory-size', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FileSizeEndpointResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def check_in_into_workspace(self, id, workspace_check_in, **kwargs): # noqa: E501
"""check_in_into_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.check_in_into_workspace(id, workspace_check_in, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param WorkspaceCheckIn workspace_check_in: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.check_in_into_workspace_with_http_info(id, workspace_check_in, **kwargs) # noqa: E501
def check_in_into_workspace_with_http_info(self, id, workspace_check_in, **kwargs): # noqa: E501
"""check_in_into_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.check_in_into_workspace_with_http_info(id, workspace_check_in, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param WorkspaceCheckIn workspace_check_in: (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'workspace_check_in'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method check_in_into_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `check_in_into_workspace`") # noqa: E501
# verify the required parameter 'workspace_check_in' is set
if self.api_client.client_side_validation and ('workspace_check_in' not in local_var_params or # noqa: E501
local_var_params['workspace_check_in'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `workspace_check_in` when calling `check_in_into_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'workspace_check_in' in local_var_params:
body_params = local_var_params['workspace_check_in']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}/check-in', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def check_out_of_workspace(self, id, **kwargs): # noqa: E501
"""check_out_of_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.check_out_of_workspace(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.check_out_of_workspace_with_http_info(id, **kwargs) # noqa: E501
def check_out_of_workspace_with_http_info(self, id, **kwargs): # noqa: E501
"""check_out_of_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.check_out_of_workspace_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method check_out_of_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `check_out_of_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}/check-out', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def copy_files(self, file_copy_endpoint_request, **kwargs): # noqa: E501
"""copy_files # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.copy_files(file_copy_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FileCopyEndpointRequest file_copy_endpoint_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: TaskInfo
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.copy_files_with_http_info(file_copy_endpoint_request, **kwargs) # noqa: E501
def copy_files_with_http_info(self, file_copy_endpoint_request, **kwargs): # noqa: E501
"""copy_files # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.copy_files_with_http_info(file_copy_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FileCopyEndpointRequest file_copy_endpoint_request: (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(TaskInfo, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['file_copy_endpoint_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method copy_files" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'file_copy_endpoint_request' is set
if self.api_client.client_side_validation and ('file_copy_endpoint_request' not in local_var_params or # noqa: E501
local_var_params['file_copy_endpoint_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `file_copy_endpoint_request` when calling `copy_files`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'file_copy_endpoint_request' in local_var_params:
body_params = local_var_params['file_copy_endpoint_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/filesystem/copy', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TaskInfo', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_file(self, filesystem_file, **kwargs): # noqa: E501
"""create_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_file(filesystem_file, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FilesystemFile filesystem_file: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: FilesystemFile
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_file_with_http_info(filesystem_file, **kwargs) # noqa: E501
def create_file_with_http_info(self, filesystem_file, **kwargs): # noqa: E501
"""create_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_file_with_http_info(filesystem_file, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FilesystemFile filesystem_file: (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(FilesystemFile, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['filesystem_file'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_file" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'filesystem_file' is set
if self.api_client.client_side_validation and ('filesystem_file' not in local_var_params or # noqa: E501
local_var_params['filesystem_file'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `filesystem_file` when calling `create_file`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'filesystem_file' in local_var_params:
body_params = local_var_params['filesystem_file']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/files', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FilesystemFile', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_path_quota(self, id, relative_path, create_path_quota_request, **kwargs): # noqa: E501
"""create_path_quota # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_path_quota(id, relative_path, create_path_quota_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str relative_path: (required)
:param CreatePathQuotaRequest create_path_quota_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_path_quota_with_http_info(id, relative_path, create_path_quota_request, **kwargs) # noqa: E501
def create_path_quota_with_http_info(self, id, relative_path, create_path_quota_request, **kwargs): # noqa: E501
"""create_path_quota # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_path_quota_with_http_info(id, relative_path, create_path_quota_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str relative_path: (required)
:param CreatePathQuotaRequest create_path_quota_request: (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'relative_path', 'create_path_quota_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_path_quota" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `create_path_quota`") # noqa: E501
# verify the required parameter 'relative_path' is set
if self.api_client.client_side_validation and ('relative_path' not in local_var_params or # noqa: E501
local_var_params['relative_path'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `relative_path` when calling `create_path_quota`") # noqa: E501
# verify the required parameter 'create_path_quota_request' is set
if self.api_client.client_side_validation and ('create_path_quota_request' not in local_var_params or # noqa: E501
local_var_params['create_path_quota_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `create_path_quota_request` when calling `create_path_quota`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'relative_path' in local_var_params:
path_params['relative_path'] = local_var_params['relative_path'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'create_path_quota_request' in local_var_params:
body_params = local_var_params['create_path_quota_request']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/quotas/path/{relative_path}', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_production(self, production, **kwargs): # noqa: E501
"""create_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_production(production, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Production production: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Production
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_production_with_http_info(production, **kwargs) # noqa: E501
def create_production_with_http_info(self, production, **kwargs): # noqa: E501
"""create_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_production_with_http_info(production, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Production production: (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Production, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['production'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_production" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'production' is set
if self.api_client.client_side_validation and ('production' not in local_var_params or # noqa: E501
local_var_params['production'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `production` when calling `create_production`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'production' in local_var_params:
body_params = local_var_params['production']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/productions', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Production', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_share(self, share, **kwargs): # noqa: E501
"""create_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_share(share, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Share share: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Share
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_share_with_http_info(share, **kwargs) # noqa: E501
def create_share_with_http_info(self, share, **kwargs): # noqa: E501
"""create_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_share_with_http_info(share, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Share share: (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Share, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['share'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_share" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'share' is set
if self.api_client.client_side_validation and ('share' not in local_var_params or # noqa: E501
local_var_params['share'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `share` when calling `create_share`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'share' in local_var_params:
body_params = local_var_params['share']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/shares', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Share', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_snapshot(self, snapshot, **kwargs): # noqa: E501
"""create_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_snapshot(snapshot, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Snapshot snapshot: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Snapshot
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_snapshot_with_http_info(snapshot, **kwargs) # noqa: E501
def create_snapshot_with_http_info(self, snapshot, **kwargs): # noqa: E501
"""create_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_snapshot_with_http_info(snapshot, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Snapshot snapshot: (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Snapshot, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['snapshot'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_snapshot" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'snapshot' is set
if self.api_client.client_side_validation and ('snapshot' not in local_var_params or # noqa: E501
local_var_params['snapshot'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `snapshot` when calling `create_snapshot`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'snapshot' in local_var_params:
body_params = local_var_params['snapshot']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/snapshots', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Snapshot', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_template_folder(self, create_template_folder_endpoint_request, **kwargs): # noqa: E501
"""create_template_folder # noqa: E501
### Required permissions * User account permission: `folder_templates:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_template_folder(create_template_folder_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param CreateTemplateFolderEndpointRequest create_template_folder_endpoint_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_template_folder_with_http_info(create_template_folder_endpoint_request, **kwargs) # noqa: E501
def create_template_folder_with_http_info(self, create_template_folder_endpoint_request, **kwargs): # noqa: E501
"""create_template_folder # noqa: E501
### Required permissions * User account permission: `folder_templates:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_template_folder_with_http_info(create_template_folder_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param CreateTemplateFolderEndpointRequest create_template_folder_endpoint_request: (required)
:param _return_http_data_only: response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['create_template_folder_endpoint_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_template_folder" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'create_template_folder_endpoint_request' is set
if self.api_client.client_side_validation and ('create_template_folder_endpoint_request' not in local_var_params or # noqa: E501
local_var_params['create_template_folder_endpoint_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `create_template_folder_endpoint_request` when calling `create_template_folder`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'create_template_folder_endpoint_request' in local_var_params:
body_params = local_var_params['create_template_folder_endpoint_request']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/private/create-template-folder', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_workspace(self, workspace_detail, **kwargs): # noqa: E501
"""create_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_workspace(workspace_detail, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param WorkspaceDetail workspace_detail: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: WorkspaceDetail
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_workspace_with_http_info(workspace_detail, **kwargs) # noqa: E501
def create_workspace_with_http_info(self, workspace_detail, **kwargs): # noqa: E501
"""create_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_workspace_with_http_info(workspace_detail, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param WorkspaceDetail workspace_detail: (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: tuple(WorkspaceDetail, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['workspace_detail'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'workspace_detail' is set
if self.api_client.client_side_validation and ('workspace_detail' not in local_var_params or # noqa: E501
local_var_params['workspace_detail'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `workspace_detail` when calling `create_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'workspace_detail' in local_var_params:
body_params = local_var_params['workspace_detail']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkspaceDetail', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
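    # Usage sketch for the *_with_http_info variant (assumption: `api` as above).
    # Per its docstring it returns a (data, status_code, headers) tuple rather
    # than the deserialized body alone:
    # >>> detail = WorkspaceDetail()  # hypothetical; set the required fields
    # >>> data, status, headers = api.create_workspace_with_http_info(detail)
    # >>> isinstance(data, WorkspaceDetail)
    # True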
def create_workspace_permission(self, workspace_permission, **kwargs): # noqa: E501
"""create_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_workspace_permission(workspace_permission, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param WorkspacePermission workspace_permission: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: WorkspacePermission
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_workspace_permission_with_http_info(workspace_permission, **kwargs) # noqa: E501
def create_workspace_permission_with_http_info(self, workspace_permission, **kwargs): # noqa: E501
"""create_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_workspace_permission_with_http_info(workspace_permission, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param WorkspacePermission workspace_permission: (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: tuple(WorkspacePermission, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['workspace_permission'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_workspace_permission" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'workspace_permission' is set
if self.api_client.client_side_validation and ('workspace_permission' not in local_var_params or # noqa: E501
local_var_params['workspace_permission'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `workspace_permission` when calling `create_workspace_permission`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'workspace_permission' in local_var_params:
body_params = local_var_params['workspace_permission']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspace-permissions', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkspacePermission', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
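    # Usage sketch for _preload_content=False (assumption: `api` as above). The
    # docstring says the urllib3.HTTPResponse object is returned without being
    # read/decoded; using `raw.data` for the body bytes is an assumption about
    # that response object:
    # >>> perm = WorkspacePermission()  # hypothetical payload
    # >>> raw = api.create_workspace_permission(perm, _preload_content=False)
    # >>> raw.data  # undecoded response bytes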
def delete_file(self, path, **kwargs): # noqa: E501
"""delete_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_file(path, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str path: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_file_with_http_info(path, **kwargs) # noqa: E501
def delete_file_with_http_info(self, path, **kwargs): # noqa: E501
"""delete_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_file_with_http_info(path, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str path: (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['path'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_file" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'path' is set
if self.api_client.client_side_validation and ('path' not in local_var_params or # noqa: E501
local_var_params['path'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `path` when calling `delete_file`") # noqa: E501
if self.api_client.client_side_validation and 'path' in local_var_params and not re.search(r'.*', local_var_params['path']): # noqa: E501
raise ApiValueError("Invalid value for parameter `path` when calling `delete_file`, must conform to the pattern `/.*/`") # noqa: E501
collection_formats = {}
path_params = {}
if 'path' in local_var_params:
path_params['path'] = local_var_params['path'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/files/{path}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
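    # Timeout sketch (assumption: `api` as above; the path is made up). Per the
    # docstring, one number is the total request timeout and a tuple is
    # (connection, read):
    # >>> api.delete_file('scratch/old_render.mov', _request_timeout=30)
    # >>> api.delete_file('scratch/old_render.mov', _request_timeout=(3.05, 27))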
def delete_files(self, file_delete_endpoint_request, **kwargs): # noqa: E501
"""delete_files # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_files(file_delete_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FileDeleteEndpointRequest file_delete_endpoint_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: TaskInfo
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_files_with_http_info(file_delete_endpoint_request, **kwargs) # noqa: E501
def delete_files_with_http_info(self, file_delete_endpoint_request, **kwargs): # noqa: E501
"""delete_files # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_files_with_http_info(file_delete_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FileDeleteEndpointRequest file_delete_endpoint_request: (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: tuple(TaskInfo, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['file_delete_endpoint_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_files" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'file_delete_endpoint_request' is set
if self.api_client.client_side_validation and ('file_delete_endpoint_request' not in local_var_params or # noqa: E501
local_var_params['file_delete_endpoint_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `file_delete_endpoint_request` when calling `delete_files`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'file_delete_endpoint_request' in local_var_params:
body_params = local_var_params['file_delete_endpoint_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/filesystem/delete', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TaskInfo', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
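    # Usage sketch (assumption: `api` as above). The bulk endpoint responds with a
    # TaskInfo describing the server-side deletion job; with async_req=True the
    # same value is obtained from thread.get():
    # >>> req = FileDeleteEndpointRequest()  # hypothetical; list the paths to drop
    # >>> task = api.delete_files(req)       # TaskInfo for the background job
    # >>> task = api.delete_files(req, async_req=True).get()  # async equivalent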
def delete_path_quota(self, id, relative_path, **kwargs): # noqa: E501
"""delete_path_quota # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_path_quota(id, relative_path, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str relative_path: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_path_quota_with_http_info(id, relative_path, **kwargs) # noqa: E501
def delete_path_quota_with_http_info(self, id, relative_path, **kwargs): # noqa: E501
"""delete_path_quota # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_path_quota_with_http_info(id, relative_path, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str relative_path: (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'relative_path'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_path_quota" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_path_quota`") # noqa: E501
# verify the required parameter 'relative_path' is set
if self.api_client.client_side_validation and ('relative_path' not in local_var_params or # noqa: E501
local_var_params['relative_path'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `relative_path` when calling `delete_path_quota`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'relative_path' in local_var_params:
path_params['relative_path'] = local_var_params['relative_path'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/quotas/path/{relative_path}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
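    # Usage sketch (assumption: `api` as above; the values are made up). Both
    # arguments are substituted into
    # /api/2/volumes/{id}/quotas/path/{relative_path}:
    # >>> api.delete_path_quota(3, 'renders/tmp')  # volume 3, hypothetical path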
def delete_production(self, id, **kwargs): # noqa: E501
"""delete_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_production(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this production. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_production_with_http_info(id, **kwargs) # noqa: E501
def delete_production_with_http_info(self, id, **kwargs): # noqa: E501
"""delete_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_production_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this production. (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_production" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_production`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/productions/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_share(self, id, **kwargs): # noqa: E501
"""delete_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_share(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this share. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_share_with_http_info(id, **kwargs) # noqa: E501
def delete_share_with_http_info(self, id, **kwargs): # noqa: E501
"""delete_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_share_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this share. (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_share" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_share`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/shares/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_snapshot(self, id, **kwargs): # noqa: E501
"""delete_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_snapshot(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this snapshot. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_snapshot_with_http_info(id, **kwargs) # noqa: E501
def delete_snapshot_with_http_info(self, id, **kwargs): # noqa: E501
"""delete_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_snapshot_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this snapshot. (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_snapshot" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_snapshot`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/snapshots/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
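    # Error-handling sketch (assumption: `api` as above). ApiValueError and
    # ApiTypeError are exactly what the validation code in these methods raises:
    # >>> try:
    # ...     api.delete_snapshot(None)  # required id missing
    # ... except ApiValueError as e:
    # ...     print(e)  # "Missing the required parameter `id` ..."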
def delete_workspace(self, id, **kwargs): # noqa: E501
"""delete_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_workspace(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_workspace_with_http_info(id, **kwargs) # noqa: E501
def delete_workspace_with_http_info(self, id, **kwargs): # noqa: E501
"""delete_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_workspace_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_workspace_permission(self, id, **kwargs): # noqa: E501
"""delete_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_workspace_permission(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace permission. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_workspace_permission_with_http_info(id, **kwargs) # noqa: E501
def delete_workspace_permission_with_http_info(self, id, **kwargs): # noqa: E501
"""delete_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_workspace_permission_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace permission. (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_workspace_permission" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_workspace_permission`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspace-permissions/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_all_deleted_workspaces(self, **kwargs): # noqa: E501
"""get_all_deleted_workspaces # noqa: E501
### Required permissions * User account permission: `projects:view` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_deleted_workspaces(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str is_template: Filter the returned list by `is_template`.
:param str production: Filter the returned list by `production`.
:param str volume: Filter the returned list by `volume`.
:param str home_for: Filter the returned list by `home_for`.
:param str volume__type: Filter the returned list by `volume__type`.
:param str production__name: Filter the returned list by `production__name`.
:param str production__active: Filter the returned list by `production__active`.
:param str name: Filter the returned list by `name`.
:param str is_external: Filter the returned list by `is_external`.
:param str active: Filter the returned list by `active`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: list[DeletedWorkspace]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_all_deleted_workspaces_with_http_info(**kwargs) # noqa: E501
def get_all_deleted_workspaces_with_http_info(self, **kwargs): # noqa: E501
"""get_all_deleted_workspaces # noqa: E501
### Required permissions * User account permission: `projects:view` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_deleted_workspaces_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str is_template: Filter the returned list by `is_template`.
:param str production: Filter the returned list by `production`.
:param str volume: Filter the returned list by `volume`.
:param str home_for: Filter the returned list by `home_for`.
:param str volume__type: Filter the returned list by `volume__type`.
:param str production__name: Filter the returned list by `production__name`.
:param str production__active: Filter the returned list by `production__active`.
:param str name: Filter the returned list by `name`.
:param str is_external: Filter the returned list by `is_external`.
:param str active: Filter the returned list by `active`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: tuple(list[DeletedWorkspace], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['is_template', 'production', 'volume', 'home_for', 'volume__type', 'production__name', 'production__active', 'name', 'is_external', 'active', 'ordering', 'limit', 'offset'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_all_deleted_workspaces" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'is_template' in local_var_params and local_var_params['is_template'] is not None: # noqa: E501
query_params.append(('is_template', local_var_params['is_template'])) # noqa: E501
if 'production' in local_var_params and local_var_params['production'] is not None: # noqa: E501
query_params.append(('production', local_var_params['production'])) # noqa: E501
if 'volume' in local_var_params and local_var_params['volume'] is not None: # noqa: E501
query_params.append(('volume', local_var_params['volume'])) # noqa: E501
if 'home_for' in local_var_params and local_var_params['home_for'] is not None: # noqa: E501
query_params.append(('home_for', local_var_params['home_for'])) # noqa: E501
if 'volume__type' in local_var_params and local_var_params['volume__type'] is not None: # noqa: E501
query_params.append(('volume__type', local_var_params['volume__type'])) # noqa: E501
if 'production__name' in local_var_params and local_var_params['production__name'] is not None: # noqa: E501
query_params.append(('production__name', local_var_params['production__name'])) # noqa: E501
if 'production__active' in local_var_params and local_var_params['production__active'] is not None: # noqa: E501
query_params.append(('production__active', local_var_params['production__active'])) # noqa: E501
if 'name' in local_var_params and local_var_params['name'] is not None: # noqa: E501
query_params.append(('name', local_var_params['name'])) # noqa: E501
if 'is_external' in local_var_params and local_var_params['is_external'] is not None: # noqa: E501
query_params.append(('is_external', local_var_params['is_external'])) # noqa: E501
if 'active' in local_var_params and local_var_params['active'] is not None: # noqa: E501
query_params.append(('active', local_var_params['active'])) # noqa: E501
if 'ordering' in local_var_params and local_var_params['ordering'] is not None: # noqa: E501
query_params.append(('ordering', local_var_params['ordering'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/deleted', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[DeletedWorkspace]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
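    # Pagination sketch (assumption: `api` as above). limit/offset page through
    # the collection; a short page signals the end:
    # >>> offset, page_size = 0, 100
    # >>> while True:
    # ...     page = api.get_all_deleted_workspaces(limit=page_size, offset=offset)
    # ...     # ... process page ...
    # ...     if len(page) < page_size:
    # ...         break
    # ...     offset += page_size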
def get_all_productions(self, **kwargs): # noqa: E501
"""get_all_productions # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_productions(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str active: Filter the returned list by `active`.
:param str name: Filter the returned list by `name`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param bool copy_template_content:
:param bool include_total_size:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: list[Production]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_all_productions_with_http_info(**kwargs) # noqa: E501
def get_all_productions_with_http_info(self, **kwargs): # noqa: E501
"""get_all_productions # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_productions_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str active: Filter the returned list by `active`.
:param str name: Filter the returned list by `name`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param bool copy_template_content:
:param bool include_total_size:
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: tuple(list[Production], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['active', 'name', 'ordering', 'limit', 'offset', 'copy_template_content', 'include_total_size'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_all_productions" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'active' in local_var_params and local_var_params['active'] is not None: # noqa: E501
query_params.append(('active', local_var_params['active'])) # noqa: E501
if 'name' in local_var_params and local_var_params['name'] is not None: # noqa: E501
query_params.append(('name', local_var_params['name'])) # noqa: E501
if 'ordering' in local_var_params and local_var_params['ordering'] is not None: # noqa: E501
query_params.append(('ordering', local_var_params['ordering'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
if 'copy_template_content' in local_var_params and local_var_params['copy_template_content'] is not None: # noqa: E501
query_params.append(('copy_template_content', local_var_params['copy_template_content'])) # noqa: E501
if 'include_total_size' in local_var_params and local_var_params['include_total_size'] is not None: # noqa: E501
query_params.append(('include_total_size', local_var_params['include_total_size'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/productions', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Production]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
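    # Filtering sketch (assumption: `api` as above; filter values are made up).
    # The filter keywords map directly onto query-string parameters of
    # /api/2/productions:
    # >>> active = api.get_all_productions(active='true', ordering='name', limit=50)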
def get_all_shares(self, **kwargs): # noqa: E501
"""get_all_shares # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_shares(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: list[Share]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_all_shares_with_http_info(**kwargs) # noqa: E501
def get_all_shares_with_http_info(self, **kwargs): # noqa: E501
"""get_all_shares # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_shares_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: tuple(list[Share], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['ordering', 'limit', 'offset'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_all_shares" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'ordering' in local_var_params and local_var_params['ordering'] is not None: # noqa: E501
query_params.append(('ordering', local_var_params['ordering'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/shares', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Share]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_all_snapshots(self, **kwargs): # noqa: E501
"""get_all_snapshots # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_snapshots(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str workspace: Filter the returned list by `workspace`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: list[Snapshot]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_all_snapshots_with_http_info(**kwargs) # noqa: E501
def get_all_snapshots_with_http_info(self, **kwargs): # noqa: E501
"""get_all_snapshots # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_snapshots_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str workspace: Filter the returned list by `workspace`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[Snapshot], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['workspace', 'ordering', 'limit', 'offset'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_all_snapshots" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'workspace' in local_var_params and local_var_params['workspace'] is not None: # noqa: E501
query_params.append(('workspace', local_var_params['workspace'])) # noqa: E501
if 'ordering' in local_var_params and local_var_params['ordering'] is not None: # noqa: E501
query_params.append(('ordering', local_var_params['ordering'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/snapshots', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Snapshot]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
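# Sketch (reusing the hypothetical `api` instance from the sketch above):
# list snapshots restricted to one workspace. A '-' prefix on `ordering`
# for descending sort is assumed (Django-REST-style ordering).
#
#   snaps = api.get_all_snapshots(workspace='42', ordering='-id', limit=20)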
def get_all_volumes(self, **kwargs): # noqa: E501
"""get_all_volumes # noqa: E501
### Required permissions * User account permission: `None` (read) / `system:admin-access` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_volumes(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str is_default: Filter the returned list by `is_default`.
:param str type: Filter the returned list by `type`.
:param str use_for_homes: Filter the returned list by `use_for_homes`.
:param str use_for_workspaces: Filter the returned list by `use_for_workspaces`.
:param str name: Filter the returned list by `name`.
:param str display_name: Filter the returned list by `display_name`.
:param str visual_tag: Filter the returned list by `visual_tag`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param bool include_status:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[Volume]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_all_volumes_with_http_info(**kwargs) # noqa: E501
def get_all_volumes_with_http_info(self, **kwargs): # noqa: E501
"""get_all_volumes # noqa: E501
### Required permissions * User account permission: `None` (read) / `system:admin-access` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_volumes_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str is_default: Filter the returned list by `is_default`.
:param str type: Filter the returned list by `type`.
:param str use_for_homes: Filter the returned list by `use_for_homes`.
:param str use_for_workspaces: Filter the returned list by `use_for_workspaces`.
:param str name: Filter the returned list by `name`.
:param str display_name: Filter the returned list by `display_name`.
:param str visual_tag: Filter the returned list by `visual_tag`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param bool include_status:
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[Volume], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['is_default', 'type', 'use_for_homes', 'use_for_workspaces', 'name', 'display_name', 'visual_tag', 'ordering', 'limit', 'offset', 'include_status'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_all_volumes" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'is_default' in local_var_params and local_var_params['is_default'] is not None: # noqa: E501
query_params.append(('is_default', local_var_params['is_default'])) # noqa: E501
if 'type' in local_var_params and local_var_params['type'] is not None: # noqa: E501
query_params.append(('type', local_var_params['type'])) # noqa: E501
if 'use_for_homes' in local_var_params and local_var_params['use_for_homes'] is not None: # noqa: E501
query_params.append(('use_for_homes', local_var_params['use_for_homes'])) # noqa: E501
if 'use_for_workspaces' in local_var_params and local_var_params['use_for_workspaces'] is not None: # noqa: E501
query_params.append(('use_for_workspaces', local_var_params['use_for_workspaces'])) # noqa: E501
if 'name' in local_var_params and local_var_params['name'] is not None: # noqa: E501
query_params.append(('name', local_var_params['name'])) # noqa: E501
if 'display_name' in local_var_params and local_var_params['display_name'] is not None: # noqa: E501
query_params.append(('display_name', local_var_params['display_name'])) # noqa: E501
if 'visual_tag' in local_var_params and local_var_params['visual_tag'] is not None: # noqa: E501
query_params.append(('visual_tag', local_var_params['visual_tag'])) # noqa: E501
if 'ordering' in local_var_params and local_var_params['ordering'] is not None: # noqa: E501
query_params.append(('ordering', local_var_params['ordering'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
if 'include_status' in local_var_params and local_var_params['include_status'] is not None: # noqa: E501
query_params.append(('include_status', local_var_params['include_status'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Volume]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
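# Sketch: volume listing with the optional status payload, plus the
# *_with_http_info variant for when the status code and headers are
# needed alongside the data.
#
#   vols = api.get_all_volumes(include_status=True)
#   data, status, headers = api.get_all_volumes_with_http_info(limit=10)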
def get_all_workspace_permissions(self, **kwargs): # noqa: E501
"""get_all_workspace_permissions # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_workspace_permissions(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str workspace: Filter the returned list by `workspace`.
:param str user: Filter the returned list by `user`.
:param str group: Filter the returned list by `group`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[WorkspacePermission]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_all_workspace_permissions_with_http_info(**kwargs) # noqa: E501
def get_all_workspace_permissions_with_http_info(self, **kwargs): # noqa: E501
"""get_all_workspace_permissions # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_workspace_permissions_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str workspace: Filter the returned list by `workspace`.
:param str user: Filter the returned list by `user`.
:param str group: Filter the returned list by `group`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[WorkspacePermission], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['workspace', 'user', 'group', 'ordering', 'limit', 'offset'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_all_workspace_permissions" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'workspace' in local_var_params and local_var_params['workspace'] is not None: # noqa: E501
query_params.append(('workspace', local_var_params['workspace'])) # noqa: E501
if 'user' in local_var_params and local_var_params['user'] is not None: # noqa: E501
query_params.append(('user', local_var_params['user'])) # noqa: E501
if 'group' in local_var_params and local_var_params['group'] is not None: # noqa: E501
query_params.append(('group', local_var_params['group'])) # noqa: E501
if 'ordering' in local_var_params and local_var_params['ordering'] is not None: # noqa: E501
query_params.append(('ordering', local_var_params['ordering'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspace-permissions', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[WorkspacePermission]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
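# Sketch: the workspace/user/group filters are independent query
# parameters and may be combined in one call.
#
#   perms = api.get_all_workspace_permissions(workspace='7', user='3')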
def get_all_workspaces(self, **kwargs): # noqa: E501
"""get_all_workspaces # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_workspaces(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str is_template: Filter the returned list by `is_template`.
:param str production: Filter the returned list by `production`.
:param str volume: Filter the returned list by `volume`.
:param str home_for: Filter the returned list by `home_for`.
:param str volume__type: Filter the returned list by `volume__type`.
:param str production__name: Filter the returned list by `production__name`.
:param str production__active: Filter the returned list by `production__active`.
:param str name: Filter the returned list by `name`.
:param str is_external: Filter the returned list by `is_external`.
:param str active: Filter the returned list by `active`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param int resolve_access_for:
:param bool include_endpoints:
:param bool include_quotas:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[Workspace]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_all_workspaces_with_http_info(**kwargs) # noqa: E501
def get_all_workspaces_with_http_info(self, **kwargs): # noqa: E501
"""get_all_workspaces # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_workspaces_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str is_template: Filter the returned list by `is_template`.
:param str production: Filter the returned list by `production`.
:param str volume: Filter the returned list by `volume`.
:param str home_for: Filter the returned list by `home_for`.
:param str volume__type: Filter the returned list by `volume__type`.
:param str production__name: Filter the returned list by `production__name`.
:param str production__active: Filter the returned list by `production__active`.
:param str name: Filter the returned list by `name`.
:param str is_external: Filter the returned list by `is_external`.
:param str active: Filter the returned list by `active`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param int resolve_access_for:
:param bool include_endpoints:
:param bool include_quotas:
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[Workspace], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['is_template', 'production', 'volume', 'home_for', 'volume__type', 'production__name', 'production__active', 'name', 'is_external', 'active', 'ordering', 'limit', 'offset', 'resolve_access_for', 'include_endpoints', 'include_quotas'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_all_workspaces" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'is_template' in local_var_params and local_var_params['is_template'] is not None: # noqa: E501
query_params.append(('is_template', local_var_params['is_template'])) # noqa: E501
if 'production' in local_var_params and local_var_params['production'] is not None: # noqa: E501
query_params.append(('production', local_var_params['production'])) # noqa: E501
if 'volume' in local_var_params and local_var_params['volume'] is not None: # noqa: E501
query_params.append(('volume', local_var_params['volume'])) # noqa: E501
if 'home_for' in local_var_params and local_var_params['home_for'] is not None: # noqa: E501
query_params.append(('home_for', local_var_params['home_for'])) # noqa: E501
if 'volume__type' in local_var_params and local_var_params['volume__type'] is not None: # noqa: E501
query_params.append(('volume__type', local_var_params['volume__type'])) # noqa: E501
if 'production__name' in local_var_params and local_var_params['production__name'] is not None: # noqa: E501
query_params.append(('production__name', local_var_params['production__name'])) # noqa: E501
if 'production__active' in local_var_params and local_var_params['production__active'] is not None: # noqa: E501
query_params.append(('production__active', local_var_params['production__active'])) # noqa: E501
if 'name' in local_var_params and local_var_params['name'] is not None: # noqa: E501
query_params.append(('name', local_var_params['name'])) # noqa: E501
if 'is_external' in local_var_params and local_var_params['is_external'] is not None: # noqa: E501
query_params.append(('is_external', local_var_params['is_external'])) # noqa: E501
if 'active' in local_var_params and local_var_params['active'] is not None: # noqa: E501
query_params.append(('active', local_var_params['active'])) # noqa: E501
if 'ordering' in local_var_params and local_var_params['ordering'] is not None: # noqa: E501
query_params.append(('ordering', local_var_params['ordering'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
if 'resolve_access_for' in local_var_params and local_var_params['resolve_access_for'] is not None: # noqa: E501
query_params.append(('resolve_access_for', local_var_params['resolve_access_for'])) # noqa: E501
if 'include_endpoints' in local_var_params and local_var_params['include_endpoints'] is not None: # noqa: E501
query_params.append(('include_endpoints', local_var_params['include_endpoints'])) # noqa: E501
if 'include_quotas' in local_var_params and local_var_params['include_quotas'] is not None: # noqa: E501
query_params.append(('include_quotas', local_var_params['include_quotas'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Workspace]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
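# Sketch: with async_req=True the call returns the request thread
# immediately; thread.get() blocks until the response is available.
#
#   thread = api.get_all_workspaces(active='true', async_req=True)
#   workspaces = thread.get()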
def get_file(self, path, **kwargs): # noqa: E501
"""get_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_file(path, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str path: (required)
:param int max_depth:
:param bool bundle:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: FilesystemFile
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_file_with_http_info(path, **kwargs) # noqa: E501
def get_file_with_http_info(self, path, **kwargs): # noqa: E501
"""get_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_file_with_http_info(path, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str path: (required)
:param int max_depth:
:param bool bundle:
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(FilesystemFile, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['path', 'max_depth', 'bundle'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_file" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'path' is set
if self.api_client.client_side_validation and ('path' not in local_var_params or # noqa: E501
local_var_params['path'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `path` when calling `get_file`") # noqa: E501
if self.api_client.client_side_validation and 'path' in local_var_params and not re.search(r'.*', local_var_params['path']): # noqa: E501
raise ApiValueError("Invalid value for parameter `path` when calling `get_file`, must conform to the pattern `/.*/`") # noqa: E501
collection_formats = {}
path_params = {}
if 'path' in local_var_params:
path_params['path'] = local_var_params['path'] # noqa: E501
query_params = []
if 'max_depth' in local_var_params and local_var_params['max_depth'] is not None: # noqa: E501
query_params.append(('max_depth', local_var_params['max_depth'])) # noqa: E501
if 'bundle' in local_var_params and local_var_params['bundle'] is not None: # noqa: E501
query_params.append(('bundle', local_var_params['bundle'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/files/{path}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FilesystemFile', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
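# Sketch: `path` is interpolated into /api/2/files/{path}; the max_depth
# and bundle semantics are assumed from their names (directory recursion
# depth, bundled directory handling).
#
#   entry = api.get_file('projects/demo/clip.mov')
#   tree = api.get_file('projects/demo', max_depth=2)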
def get_group_quota(self, group_id, id, **kwargs): # noqa: E501
"""get_group_quota # noqa: E501
### Required permissions * User account permission: `users:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_group_quota(group_id, id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str group_id: (required)
:param int id: A unique integer value identifying this volume. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Quota
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_group_quota_with_http_info(group_id, id, **kwargs) # noqa: E501
def get_group_quota_with_http_info(self, group_id, id, **kwargs): # noqa: E501
"""get_group_quota # noqa: E501
### Required permissions * User account permission: `users:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_group_quota_with_http_info(group_id, id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str group_id: (required)
:param int id: A unique integer value identifying this volume. (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Quota, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['group_id', 'id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_group_quota" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'group_id' is set
if self.api_client.client_side_validation and ('group_id' not in local_var_params or # noqa: E501
local_var_params['group_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `group_id` when calling `get_group_quota`") # noqa: E501
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_group_quota`") # noqa: E501
collection_formats = {}
path_params = {}
if 'group_id' in local_var_params:
path_params['group_id'] = local_var_params['group_id'] # noqa: E501
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/quotas/group/{group_id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Quota', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
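# Sketch: group quotas are addressed per volume, so both path parameters
# are required (volume `id` is an int, `group_id` a string per the
# docstring).
#
#   quota = api.get_group_quota(group_id='12', id=1)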
def get_my_workspaces(self, **kwargs): # noqa: E501
"""get_my_workspaces # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_my_workspaces(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str is_template: Filter the returned list by `is_template`.
:param str production: Filter the returned list by `production`.
:param str volume: Filter the returned list by `volume`.
:param str home_for: Filter the returned list by `home_for`.
:param str volume__type: Filter the returned list by `volume__type`.
:param str production__name: Filter the returned list by `production__name`.
:param str production__active: Filter the returned list by `production__active`.
:param str name: Filter the returned list by `name`.
:param str is_external: Filter the returned list by `is_external`.
:param str active: Filter the returned list by `active`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[Workspace]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_my_workspaces_with_http_info(**kwargs) # noqa: E501
def get_my_workspaces_with_http_info(self, **kwargs): # noqa: E501
"""get_my_workspaces # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_my_workspaces_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str is_template: Filter the returned list by `is_template`.
:param str production: Filter the returned list by `production`.
:param str volume: Filter the returned list by `volume`.
:param str home_for: Filter the returned list by `home_for`.
:param str volume__type: Filter the returned list by `volume__type`.
:param str production__name: Filter the returned list by `production__name`.
:param str production__active: Filter the returned list by `production__active`.
:param str name: Filter the returned list by `name`.
:param str is_external: Filter the returned list by `is_external`.
:param str active: Filter the returned list by `active`.
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[Workspace], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['is_template', 'production', 'volume', 'home_for', 'volume__type', 'production__name', 'production__active', 'name', 'is_external', 'active', 'ordering', 'limit', 'offset'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_my_workspaces" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'is_template' in local_var_params and local_var_params['is_template'] is not None: # noqa: E501
query_params.append(('is_template', local_var_params['is_template'])) # noqa: E501
if 'production' in local_var_params and local_var_params['production'] is not None: # noqa: E501
query_params.append(('production', local_var_params['production'])) # noqa: E501
if 'volume' in local_var_params and local_var_params['volume'] is not None: # noqa: E501
query_params.append(('volume', local_var_params['volume'])) # noqa: E501
if 'home_for' in local_var_params and local_var_params['home_for'] is not None: # noqa: E501
query_params.append(('home_for', local_var_params['home_for'])) # noqa: E501
if 'volume__type' in local_var_params and local_var_params['volume__type'] is not None: # noqa: E501
query_params.append(('volume__type', local_var_params['volume__type'])) # noqa: E501
if 'production__name' in local_var_params and local_var_params['production__name'] is not None: # noqa: E501
query_params.append(('production__name', local_var_params['production__name'])) # noqa: E501
if 'production__active' in local_var_params and local_var_params['production__active'] is not None: # noqa: E501
query_params.append(('production__active', local_var_params['production__active'])) # noqa: E501
if 'name' in local_var_params and local_var_params['name'] is not None: # noqa: E501
query_params.append(('name', local_var_params['name'])) # noqa: E501
if 'is_external' in local_var_params and local_var_params['is_external'] is not None: # noqa: E501
query_params.append(('is_external', local_var_params['is_external'])) # noqa: E501
if 'active' in local_var_params and local_var_params['active'] is not None: # noqa: E501
query_params.append(('active', local_var_params['active'])) # noqa: E501
if 'ordering' in local_var_params and local_var_params['ordering'] is not None: # noqa: E501
query_params.append(('ordering', local_var_params['ordering'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/mine', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Workspace]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
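# Sketch: same filter set as get_all_workspaces, but the server scopes
# the result to workspaces the authenticated user can access.
#
#   mine = api.get_my_workspaces(active='true', ordering='name')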
def get_path_quota(self, id, relative_path, **kwargs): # noqa: E501
"""get_path_quota # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_path_quota(id, relative_path, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str relative_path: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Quota
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_path_quota_with_http_info(id, relative_path, **kwargs) # noqa: E501
def get_path_quota_with_http_info(self, id, relative_path, **kwargs): # noqa: E501
"""get_path_quota # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_path_quota_with_http_info(id, relative_path, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str relative_path: (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Quota, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'relative_path'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_path_quota" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_path_quota`") # noqa: E501
# verify the required parameter 'relative_path' is set
if self.api_client.client_side_validation and ('relative_path' not in local_var_params or # noqa: E501
local_var_params['relative_path'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `relative_path` when calling `get_path_quota`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'relative_path' in local_var_params:
path_params['relative_path'] = local_var_params['relative_path'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/quotas/path/{relative_path}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Quota', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
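# Sketch: quota lookup for a path relative to the volume root; note that
# relative_path is a path parameter, not a query parameter.
#
#   q = api.get_path_quota(1, 'projects/demo')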
def get_production(self, id, **kwargs): # noqa: E501
"""get_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_production(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this production. (required)
:param bool copy_template_content:
:param bool include_total_size:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Production
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_production_with_http_info(id, **kwargs) # noqa: E501
def get_production_with_http_info(self, id, **kwargs): # noqa: E501
"""get_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_production_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this production. (required)
:param bool copy_template_content:
:param bool include_total_size:
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Production, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'copy_template_content', 'include_total_size'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_production" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_production`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
if 'copy_template_content' in local_var_params and local_var_params['copy_template_content'] is not None: # noqa: E501
query_params.append(('copy_template_content', local_var_params['copy_template_content'])) # noqa: E501
if 'include_total_size' in local_var_params and local_var_params['include_total_size'] is not None: # noqa: E501
query_params.append(('include_total_size', local_var_params['include_total_size'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/productions/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Production', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
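# Sketch: both query flags are optional booleans; include_total_size
# presumably adds a size aggregation over the production (meaning assumed
# from the name).
#
#   prod = api.get_production(5, include_total_size=True)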
def get_root_directory(self, **kwargs): # noqa: E501
"""get_root_directory # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_root_directory(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_root_directory_with_http_info(**kwargs) # noqa: E501
def get_root_directory_with_http_info(self, **kwargs): # noqa: E501
"""get_root_directory # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_root_directory_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str ordering: Which field to use when ordering the results.
:param int limit: Number of results to return per page.
:param int offset: The initial index from which to return the results.
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['ordering', 'limit', 'offset'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_root_directory" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'ordering' in local_var_params and local_var_params['ordering'] is not None: # noqa: E501
query_params.append(('ordering', local_var_params['ordering'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/files', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
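# Sketch: response_type is None here, so a plain call returns no
# deserialized body; pass _preload_content=False to inspect the raw
# response object instead.
#
#   raw = api.get_root_directory(_preload_content=False)
#   print(raw.status, raw.data)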
def get_samba_dfree_string(self, **kwargs): # noqa: E501
"""get_samba_dfree_string # noqa: E501
### Required permissions * localhost only # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_samba_dfree_string(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_samba_dfree_string_with_http_info(**kwargs) # noqa: E501
def get_samba_dfree_string_with_http_info(self, **kwargs): # noqa: E501
"""get_samba_dfree_string # noqa: E501
### Required permissions * localhost only # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_samba_dfree_string_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_samba_dfree_string" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/private/dfree', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
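# Note: this private endpoint backs Samba's dfree hook and is restricted
# to localhost, so it is normally invoked by the server itself rather than
# by SDK users; calling it remotely is expected to fail.
#
#   api.get_samba_dfree_string()  # only meaningful on the server host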
def get_share(self, id, **kwargs): # noqa: E501
"""get_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_share(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this share. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Share
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_share_with_http_info(id, **kwargs) # noqa: E501
def get_share_with_http_info(self, id, **kwargs): # noqa: E501
"""get_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_share_with_http_info(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this share. (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(Share, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_share" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_share`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/shares/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Share', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
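# Sketch of the `_with_http_info` variant, which returns a
# (data, status_code, headers) tuple rather than just the deserialized
# body. The share id is a placeholder; with `_preload_content=False` the
# first tuple element is expected to be the raw response object rather
# than a `Share` instance.
#
#     share, status, headers = api.get_share_with_http_info(42)
#     raw, status, headers = api.get_share_with_http_info(
#         42, _preload_content=False)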
def get_snapshot(self, id, **kwargs): # noqa: E501
"""get_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_snapshot(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this snapshot. (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: Snapshot
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_snapshot_with_http_info(id, **kwargs) # noqa: E501
def get_snapshot_with_http_info(self, id, **kwargs): # noqa: E501
"""get_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_snapshot_with_http_info(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this snapshot. (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(Snapshot, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_snapshot" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_snapshot`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/snapshots/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Snapshot', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_user_quota(self, id, user_id, **kwargs): # noqa: E501
"""get_user_quota # noqa: E501
### Required permissions * User account permission: `users:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_user_quota(id, user_id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str user_id: (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: Quota
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_user_quota_with_http_info(id, user_id, **kwargs) # noqa: E501
def get_user_quota_with_http_info(self, id, user_id, **kwargs): # noqa: E501
"""get_user_quota # noqa: E501
### Required permissions * User account permission: `users:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_user_quota_with_http_info(id, user_id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str user_id: (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(Quota, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'user_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_user_quota" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_user_quota`") # noqa: E501
# verify the required parameter 'user_id' is set
if self.api_client.client_side_validation and ('user_id' not in local_var_params or # noqa: E501
local_var_params['user_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `user_id` when calling `get_user_quota`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'user_id' in local_var_params:
path_params['user_id'] = local_var_params['user_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/quotas/user/{user_id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Quota', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
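# Sketch: both required arguments are path parameters, interpolated into
# /api/2/volumes/{id}/quotas/user/{user_id}. The values are placeholders.
#
#     quota = api.get_user_quota(7, 'jdoe')  # -> Quota for user 'jdoe'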
def get_volume(self, id, **kwargs): # noqa: E501
"""get_volume # noqa: E501
### Required permissions * User account permission: `None` (read) / `system:admin-access` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_volume(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param bool include_status:
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: Volume
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_volume_with_http_info(id, **kwargs) # noqa: E501
def get_volume_with_http_info(self, id, **kwargs): # noqa: E501
"""get_volume # noqa: E501
### Required permissions * User account permission: `None` (read) / `system:admin-access` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_volume_with_http_info(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param bool include_status:
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(Volume, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'include_status'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_volume" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_volume`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
if 'include_status' in local_var_params and local_var_params['include_status'] is not None: # noqa: E501
query_params.append(('include_status', local_var_params['include_status'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Volume', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
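# Sketch: `include_status` is the one optional query parameter here, and it
# is only appended to the query string when not None (see above), so these
# calls request /api/2/volumes/3 and /api/2/volumes/3?include_status=True
# respectively. The volume id is a placeholder.
#
#     volume = api.get_volume(3)
#     volume = api.get_volume(3, include_status=True)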
def get_volume_active_connections(self, id, **kwargs): # noqa: E501
"""get_volume_active_connections # noqa: E501
### Required permissions * User account permission: `system:status:view` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_volume_active_connections(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: StorNextConnections
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_volume_active_connections_with_http_info(id, **kwargs) # noqa: E501
def get_volume_active_connections_with_http_info(self, id, **kwargs): # noqa: E501
"""get_volume_active_connections # noqa: E501
### Required permissions * User account permission: `system:status:view` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_volume_active_connections_with_http_info(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(StorNextConnections, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_volume_active_connections" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_volume_active_connections`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/connections', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='StorNextConnections', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_volume_file_size_distribution(self, id, **kwargs): # noqa: E501
"""get_volume_file_size_distribution # noqa: E501
### Required permissions * User account permission: `system:status:view` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_volume_file_size_distribution(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: FileSizeDistribution
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_volume_file_size_distribution_with_http_info(id, **kwargs) # noqa: E501
def get_volume_file_size_distribution_with_http_info(self, id, **kwargs): # noqa: E501
"""get_volume_file_size_distribution # noqa: E501
### Required permissions * User account permission: `system:status:view` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_volume_file_size_distribution_with_http_info(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(FileSizeDistribution, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_volume_file_size_distribution" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_volume_file_size_distribution`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/file-size-distribution', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FileSizeDistribution', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_volume_stats(self, id, **kwargs): # noqa: E501
"""get_volume_stats # noqa: E501
### Required permissions * User account permission: `system:status:view` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_volume_stats(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: VolumeStats
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_volume_stats_with_http_info(id, **kwargs) # noqa: E501
def get_volume_stats_with_http_info(self, id, **kwargs): # noqa: E501
"""get_volume_stats # noqa: E501
### Required permissions * User account permission: `system:status:view` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_volume_stats_with_http_info(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(VolumeStats, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_volume_stats" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_volume_stats`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/stats', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VolumeStats', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_workspace(self, id, **kwargs): # noqa: E501
"""get_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_workspace(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: WorkspaceDetail
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_workspace_with_http_info(id, **kwargs) # noqa: E501
def get_workspace_with_http_info(self, id, **kwargs): # noqa: E501
"""get_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_workspace_with_http_info(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(WorkspaceDetail, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkspaceDetail', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_workspace_permission(self, id, **kwargs): # noqa: E501
"""get_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_workspace_permission(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this workspace permission. (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: WorkspacePermission
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_workspace_permission_with_http_info(id, **kwargs) # noqa: E501
def get_workspace_permission_with_http_info(self, id, **kwargs): # noqa: E501
"""get_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_workspace_permission_with_http_info(id, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this workspace permission. (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(WorkspacePermission, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_workspace_permission" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_workspace_permission`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspace-permissions/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkspacePermission', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def move_files(self, file_move_endpoint_request, **kwargs): # noqa: E501
"""move_files # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.move_files(file_move_endpoint_request, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param FileMoveEndpointRequest file_move_endpoint_request: (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: TaskInfo
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.move_files_with_http_info(file_move_endpoint_request, **kwargs) # noqa: E501
def move_files_with_http_info(self, file_move_endpoint_request, **kwargs): # noqa: E501
"""move_files # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.move_files_with_http_info(file_move_endpoint_request, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param FileMoveEndpointRequest file_move_endpoint_request: (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(TaskInfo, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['file_move_endpoint_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method move_files" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'file_move_endpoint_request' is set
if self.api_client.client_side_validation and ('file_move_endpoint_request' not in local_var_params or # noqa: E501
local_var_params['file_move_endpoint_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `file_move_endpoint_request` when calling `move_files`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'file_move_endpoint_request' in local_var_params:
body_params = local_var_params['file_move_endpoint_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/filesystem/move', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TaskInfo', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
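# Sketch of a request-body endpoint: the model instance is serialized as
# the JSON body of POST /api/2/filesystem/move. Construction of
# FileMoveEndpointRequest is elided because its fields are defined in the
# model module, not here.
#
#     req = FileMoveEndpointRequest(...)  # fill in the model's fields
#     task = api.move_files(req)          # -> TaskInfo describing the move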
def move_workspace(self, id, move_workspace_request, **kwargs): # noqa: E501
"""move_workspace # noqa: E501
### Required permissions * User account permission: `projects:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.move_workspace(id, move_workspace_request, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param MoveWorkspaceRequest move_workspace_request: (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: TaskInfo
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.move_workspace_with_http_info(id, move_workspace_request, **kwargs) # noqa: E501
def move_workspace_with_http_info(self, id, move_workspace_request, **kwargs): # noqa: E501
"""move_workspace # noqa: E501
### Required permissions * User account permission: `projects:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.move_workspace_with_http_info(id, move_workspace_request, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param MoveWorkspaceRequest move_workspace_request: (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(TaskInfo, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'move_workspace_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method move_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `move_workspace`") # noqa: E501
# verify the required parameter 'move_workspace_request' is set
if self.api_client.client_side_validation and ('move_workspace_request' not in local_var_params or # noqa: E501
local_var_params['move_workspace_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `move_workspace_request` when calling `move_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'move_workspace_request' in local_var_params:
body_params = local_var_params['move_workspace_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}/move', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TaskInfo', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def move_workspace_to_production(self, id, workspace_move_to_request, **kwargs): # noqa: E501
"""move_workspace_to_production # noqa: E501
### Required permissions * User account permission: `projects:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.move_workspace_to_production(id, workspace_move_to_request, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param WorkspaceMoveToRequest workspace_move_to_request: (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.move_workspace_to_production_with_http_info(id, workspace_move_to_request, **kwargs) # noqa: E501
def move_workspace_to_production_with_http_info(self, id, workspace_move_to_request, **kwargs): # noqa: E501
"""move_workspace_to_production # noqa: E501
### Required permissions * User account permission: `projects:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.move_workspace_to_production_with_http_info(id, workspace_move_to_request, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param WorkspaceMoveToRequest workspace_move_to_request: (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'workspace_move_to_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method move_workspace_to_production" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `move_workspace_to_production`") # noqa: E501
# verify the required parameter 'workspace_move_to_request' is set
if self.api_client.client_side_validation and ('workspace_move_to_request' not in local_var_params or # noqa: E501
local_var_params['workspace_move_to_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `workspace_move_to_request` when calling `move_workspace_to_production`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'workspace_move_to_request' in local_var_params:
body_params = local_var_params['workspace_move_to_request']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}/move-to', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
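# Sketch of the client-side validation implemented above: with
# api_client.client_side_validation enabled, a missing required argument
# raises ApiValueError before any HTTP request is made, and an unknown
# keyword argument always raises ApiTypeError. The id is a placeholder.
#
#     try:
#         api.move_workspace_to_production(5, None)
#     except ApiValueError:
#         pass  # required request body was None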
def patch_file(self, path, file_partial_update, **kwargs): # noqa: E501
"""patch_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_file(path, file_partial_update, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param str path: (required)
:param FilePartialUpdate file_partial_update: (required)
:param int max_depth:
:param bool bundle:
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: FilesystemFile
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.patch_file_with_http_info(path, file_partial_update, **kwargs) # noqa: E501
def patch_file_with_http_info(self, path, file_partial_update, **kwargs): # noqa: E501
"""patch_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_file_with_http_info(path, file_partial_update, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param str path: (required)
:param FilePartialUpdate file_partial_update: (required)
:param int max_depth:
:param bool bundle:
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(FilesystemFile, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['path', 'file_partial_update', 'max_depth', 'bundle'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_file" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'path' is set
if self.api_client.client_side_validation and ('path' not in local_var_params or # noqa: E501
local_var_params['path'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `path` when calling `patch_file`") # noqa: E501
# verify the required parameter 'file_partial_update' is set
if self.api_client.client_side_validation and ('file_partial_update' not in local_var_params or # noqa: E501
local_var_params['file_partial_update'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `file_partial_update` when calling `patch_file`") # noqa: E501
if self.api_client.client_side_validation and 'path' in local_var_params and not re.search(r'.*', local_var_params['path']): # noqa: E501
raise ApiValueError("Invalid value for parameter `path` when calling `patch_file`, must conform to the pattern `/.*/`") # noqa: E501
collection_formats = {}
path_params = {}
if 'path' in local_var_params:
path_params['path'] = local_var_params['path'] # noqa: E501
query_params = []
if 'max_depth' in local_var_params and local_var_params['max_depth'] is not None: # noqa: E501
query_params.append(('max_depth', local_var_params['max_depth'])) # noqa: E501
if 'bundle' in local_var_params and local_var_params['bundle'] is not None: # noqa: E501
query_params.append(('bundle', local_var_params['bundle'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'file_partial_update' in local_var_params:
body_params = local_var_params['file_partial_update']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/files/{path}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FilesystemFile', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
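# Sketch: `path` is a path parameter (checked against the permissive
# pattern r'.*' above), while `max_depth` and `bundle` are optional query
# parameters. The file path and update object are placeholders.
#
#     update = FilePartialUpdate(...)  # fill in the model's fields
#     f = api.patch_file('media/clip.mov', update, max_depth=1)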
def patch_production(self, id, production_partial_update, **kwargs): # noqa: E501
"""patch_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_production(id, production_partial_update, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this production. (required)
:param ProductionPartialUpdate production_partial_update: (required)
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: Production
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.patch_production_with_http_info(id, production_partial_update, **kwargs) # noqa: E501
def patch_production_with_http_info(self, id, production_partial_update, **kwargs): # noqa: E501
"""patch_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_production_with_http_info(id, production_partial_update, async_req=True)
>>> result = thread.get()
:param bool async_req: execute the request asynchronously
:param int id: A unique integer value identifying this production. (required)
:param ProductionPartialUpdate production_partial_update: (required)
:param bool _return_http_data_only: if True, return only the response data,
    without the HTTP status code and headers
:param bool _preload_content: if False, the urllib3.HTTPResponse object will
    be returned without reading/decoding the response data. Default is True.
:param _request_timeout: timeout setting for this request. If a single number
    is provided, it is used as the total request timeout. It can also be a
    pair (tuple) of (connection, read) timeouts.
:return: tuple(Production, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'production_partial_update'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_production" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `patch_production`") # noqa: E501
# verify the required parameter 'production_partial_update' is set
if self.api_client.client_side_validation and ('production_partial_update' not in local_var_params or # noqa: E501
local_var_params['production_partial_update'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `production_partial_update` when calling `patch_production`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'production_partial_update' in local_var_params:
body_params = local_var_params['production_partial_update']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/productions/{id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Production', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
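# Usage sketch for the pair of methods above (hypothetical setup: `api` is an
# already-constructed instance of this API class; the `name` field on
# ProductionPartialUpdate is illustrative, not taken from the spec):
#
#   update = ProductionPartialUpdate(name="Renamed production")
#   production = api.patch_production(42, update)               # synchronous call
#   thread = api.patch_production(42, update, async_req=True)   # asynchronous call
#   production = thread.get()  # blocks until the PATCH completes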
def patch_share(self, id, share_partial_update, **kwargs): # noqa: E501
"""patch_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_share(id, share_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this share. (required)
:param SharePartialUpdate share_partial_update: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Share
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.patch_share_with_http_info(id, share_partial_update, **kwargs) # noqa: E501
def patch_share_with_http_info(self, id, share_partial_update, **kwargs): # noqa: E501
"""patch_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_share_with_http_info(id, share_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this share. (required)
:param SharePartialUpdate share_partial_update: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Share, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'share_partial_update'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_share" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `patch_share`") # noqa: E501
# verify the required parameter 'share_partial_update' is set
if self.api_client.client_side_validation and ('share_partial_update' not in local_var_params or # noqa: E501
local_var_params['share_partial_update'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `share_partial_update` when calling `patch_share`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'share_partial_update' in local_var_params:
body_params = local_var_params['share_partial_update']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/shares/{id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Share', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
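# Sketch of the `_request_timeout` options documented above (assumes an
# existing `api` instance and a prepared SharePartialUpdate `update`):
#
#   api.patch_share(7, update, _request_timeout=30)       # 30 s total timeout
#   api.patch_share(7, update, _request_timeout=(3, 27))  # (connection, read) timeouts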
def patch_snapshot(self, id, snapshot_partial_update, **kwargs): # noqa: E501
"""patch_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_snapshot(id, snapshot_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this snapshot. (required)
:param SnapshotPartialUpdate snapshot_partial_update: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Snapshot
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.patch_snapshot_with_http_info(id, snapshot_partial_update, **kwargs) # noqa: E501
def patch_snapshot_with_http_info(self, id, snapshot_partial_update, **kwargs): # noqa: E501
"""patch_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_snapshot_with_http_info(id, snapshot_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this snapshot. (required)
:param SnapshotPartialUpdate snapshot_partial_update: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Snapshot, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'snapshot_partial_update'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_snapshot" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `patch_snapshot`") # noqa: E501
# verify the required parameter 'snapshot_partial_update' is set
if self.api_client.client_side_validation and ('snapshot_partial_update' not in local_var_params or # noqa: E501
local_var_params['snapshot_partial_update'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `snapshot_partial_update` when calling `patch_snapshot`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'snapshot_partial_update' in local_var_params:
body_params = local_var_params['snapshot_partial_update']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/snapshots/{id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Snapshot', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
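# Sketch of `_preload_content=False` (assumes an `api` instance and a
# prepared SnapshotPartialUpdate `update`): the raw response object is
# returned instead of a deserialized Snapshot, so decoding is the caller's job.
#
#   raw = api.patch_snapshot(3, update, _preload_content=False)
#   body = raw.data  # undecoded bytes of the JSON response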
def patch_volume(self, id, volume_partial_update, **kwargs): # noqa: E501
"""patch_volume # noqa: E501
### Required permissions * User account permission: `None` (read) / `system:admin-access` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_volume(id, volume_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param VolumePartialUpdate volume_partial_update: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Volume
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.patch_volume_with_http_info(id, volume_partial_update, **kwargs) # noqa: E501
def patch_volume_with_http_info(self, id, volume_partial_update, **kwargs): # noqa: E501
"""patch_volume # noqa: E501
### Required permissions * User account permission: `None` (read) / `system:admin-access` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_volume_with_http_info(id, volume_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param VolumePartialUpdate volume_partial_update: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Volume, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'volume_partial_update'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_volume" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `patch_volume`") # noqa: E501
# verify the required parameter 'volume_partial_update' is set
if self.api_client.client_side_validation and ('volume_partial_update' not in local_var_params or # noqa: E501
local_var_params['volume_partial_update'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `volume_partial_update` when calling `patch_volume`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'volume_partial_update' in local_var_params:
body_params = local_var_params['volume_partial_update']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Volume', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def patch_workspace(self, id, workspace_detail_partial_update, **kwargs): # noqa: E501
"""patch_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_workspace(id, workspace_detail_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param WorkspaceDetailPartialUpdate workspace_detail_partial_update: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: WorkspaceDetail
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.patch_workspace_with_http_info(id, workspace_detail_partial_update, **kwargs) # noqa: E501
def patch_workspace_with_http_info(self, id, workspace_detail_partial_update, **kwargs): # noqa: E501
"""patch_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_workspace_with_http_info(id, workspace_detail_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param WorkspaceDetailPartialUpdate workspace_detail_partial_update: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(WorkspaceDetail, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'workspace_detail_partial_update'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `patch_workspace`") # noqa: E501
# verify the required parameter 'workspace_detail_partial_update' is set
if self.api_client.client_side_validation and ('workspace_detail_partial_update' not in local_var_params or # noqa: E501
local_var_params['workspace_detail_partial_update'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `workspace_detail_partial_update` when calling `patch_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'workspace_detail_partial_update' in local_var_params:
body_params = local_var_params['workspace_detail_partial_update']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkspaceDetail', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def patch_workspace_permission(self, id, workspace_permission_partial_update, **kwargs): # noqa: E501
"""patch_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_workspace_permission(id, workspace_permission_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace permission. (required)
:param WorkspacePermissionPartialUpdate workspace_permission_partial_update: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: WorkspacePermission
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.patch_workspace_permission_with_http_info(id, workspace_permission_partial_update, **kwargs) # noqa: E501
def patch_workspace_permission_with_http_info(self, id, workspace_permission_partial_update, **kwargs): # noqa: E501
"""patch_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_workspace_permission_with_http_info(id, workspace_permission_partial_update, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace permission. (required)
:param WorkspacePermissionPartialUpdate workspace_permission_partial_update: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(WorkspacePermission, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'workspace_permission_partial_update'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_workspace_permission" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `patch_workspace_permission`") # noqa: E501
# verify the required parameter 'workspace_permission_partial_update' is set
if self.api_client.client_side_validation and ('workspace_permission_partial_update' not in local_var_params or # noqa: E501
local_var_params['workspace_permission_partial_update'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `workspace_permission_partial_update` when calling `patch_workspace_permission`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'workspace_permission_partial_update' in local_var_params:
body_params = local_var_params['workspace_permission_partial_update']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspace-permissions/{id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkspacePermission', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def record_storage_trace(self, filesystem_trace_endpoint_request, **kwargs): # noqa: E501
"""record_storage_trace # noqa: E501
### Required permissions * User account permission: `system:admin-access` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.record_storage_trace(filesystem_trace_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FilesystemTraceEndpointRequest filesystem_trace_endpoint_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: FilesystemTraceEndpointResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.record_storage_trace_with_http_info(filesystem_trace_endpoint_request, **kwargs) # noqa: E501
def record_storage_trace_with_http_info(self, filesystem_trace_endpoint_request, **kwargs): # noqa: E501
"""record_storage_trace # noqa: E501
### Required permissions * User account permission: `system:admin-access` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.record_storage_trace_with_http_info(filesystem_trace_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FilesystemTraceEndpointRequest filesystem_trace_endpoint_request: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(FilesystemTraceEndpointResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['filesystem_trace_endpoint_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method record_storage_trace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'filesystem_trace_endpoint_request' is set
if self.api_client.client_side_validation and ('filesystem_trace_endpoint_request' not in local_var_params or # noqa: E501
local_var_params['filesystem_trace_endpoint_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `filesystem_trace_endpoint_request` when calling `record_storage_trace`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'filesystem_trace_endpoint_request' in local_var_params:
body_params = local_var_params['filesystem_trace_endpoint_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/filesystem/trace', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FilesystemTraceEndpointResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
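# Sketch of the *_with_http_info variant (assumes an `api` instance and a
# prepared FilesystemTraceEndpointRequest `req`): it returns the deserialized
# body plus the status code and headers as a 3-tuple.
#
#   data, status, headers = api.record_storage_trace_with_http_info(req)
#   ok = status == 200  # hypothetical success check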
def repair_workspace_permissions(self, id, **kwargs): # noqa: E501
"""repair_workspace_permissions # noqa: E501
### Required permissions * User account permission: `projects:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.repair_workspace_permissions(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: TaskInfo
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.repair_workspace_permissions_with_http_info(id, **kwargs) # noqa: E501
def repair_workspace_permissions_with_http_info(self, id, **kwargs): # noqa: E501
"""repair_workspace_permissions # noqa: E501
### Required permissions * User account permission: `projects:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.repair_workspace_permissions_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(TaskInfo, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method repair_workspace_permissions" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `repair_workspace_permissions`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}/repair-permissions', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TaskInfo', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
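# Sketch for a task-spawning endpoint (assumes an `api` instance): the call
# returns a TaskInfo, presumably describing the asynchronous repair job; how
# to poll that task depends on the tasks API and is not shown here.
#
#   task = api.repair_workspace_permissions(12)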
def share_to_home_workspace(self, share_to_home_workspace_endpoint_request, **kwargs): # noqa: E501
"""share_to_home_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.share_to_home_workspace(share_to_home_workspace_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param ShareToHomeWorkspaceEndpointRequest share_to_home_workspace_endpoint_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.share_to_home_workspace_with_http_info(share_to_home_workspace_endpoint_request, **kwargs) # noqa: E501
def share_to_home_workspace_with_http_info(self, share_to_home_workspace_endpoint_request, **kwargs): # noqa: E501
"""share_to_home_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.share_to_home_workspace_with_http_info(share_to_home_workspace_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param ShareToHomeWorkspaceEndpointRequest share_to_home_workspace_endpoint_request: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['share_to_home_workspace_endpoint_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method share_to_home_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'share_to_home_workspace_endpoint_request' is set
if self.api_client.client_side_validation and ('share_to_home_workspace_endpoint_request' not in local_var_params or # noqa: E501
local_var_params['share_to_home_workspace_endpoint_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `share_to_home_workspace_endpoint_request` when calling `share_to_home_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'share_to_home_workspace_endpoint_request' in local_var_params:
body_params = local_var_params['share_to_home_workspace_endpoint_request']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/share-to-home-workspace', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def unbookmark_workspace(self, id, **kwargs): # noqa: E501
"""unbookmark_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.unbookmark_workspace(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.unbookmark_workspace_with_http_info(id, **kwargs) # noqa: E501
def unbookmark_workspace_with_http_info(self, id, **kwargs): # noqa: E501
"""unbookmark_workspace # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.unbookmark_workspace_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method unbookmark_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `unbookmark_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}/bookmark', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
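# Sketch for a None-returning endpoint (assumes an `api` instance): the
# synchronous call simply returns None on success, so only client-side
# validation errors (for example, a missing `id`) surface as exceptions here.
#
#   api.unbookmark_workspace(12)  # DELETE /api/2/workspaces/12/bookmark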
def unzip_file(self, file_unzip_endpoint_request, **kwargs): # noqa: E501
"""unzip_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.unzip_file(file_unzip_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FileUnzipEndpointRequest file_unzip_endpoint_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: TaskInfo
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.unzip_file_with_http_info(file_unzip_endpoint_request, **kwargs) # noqa: E501
def unzip_file_with_http_info(self, file_unzip_endpoint_request, **kwargs): # noqa: E501
"""unzip_file # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.unzip_file_with_http_info(file_unzip_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FileUnzipEndpointRequest file_unzip_endpoint_request: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(TaskInfo, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['file_unzip_endpoint_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method unzip_file" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'file_unzip_endpoint_request' is set
if self.api_client.client_side_validation and ('file_unzip_endpoint_request' not in local_var_params or # noqa: E501
local_var_params['file_unzip_endpoint_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `file_unzip_endpoint_request` when calling `unzip_file`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'file_unzip_endpoint_request' in local_var_params:
body_params = local_var_params['file_unzip_endpoint_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/filesystem/unzip', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TaskInfo', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def update_group_quota(self, group_id, id, update_quota_request, **kwargs): # noqa: E501
"""update_group_quota # noqa: E501
### Required permissions * User account permission: `users:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_group_quota(group_id, id, update_quota_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str group_id: (required)
:param int id: A unique integer value identifying this volume. (required)
:param UpdateQuotaRequest update_quota_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_group_quota_with_http_info(group_id, id, update_quota_request, **kwargs) # noqa: E501
def update_group_quota_with_http_info(self, group_id, id, update_quota_request, **kwargs): # noqa: E501
"""update_group_quota # noqa: E501
### Required permissions * User account permission: `users:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_group_quota_with_http_info(group_id, id, update_quota_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str group_id: (required)
:param int id: A unique integer value identifying this volume. (required)
:param UpdateQuotaRequest update_quota_request: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['group_id', 'id', 'update_quota_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_group_quota" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'group_id' is set
if self.api_client.client_side_validation and ('group_id' not in local_var_params or # noqa: E501
local_var_params['group_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `group_id` when calling `update_group_quota`") # noqa: E501
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_group_quota`") # noqa: E501
# verify the required parameter 'update_quota_request' is set
if self.api_client.client_side_validation and ('update_quota_request' not in local_var_params or # noqa: E501
local_var_params['update_quota_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `update_quota_request` when calling `update_group_quota`") # noqa: E501
collection_formats = {}
path_params = {}
if 'group_id' in local_var_params:
path_params['group_id'] = local_var_params['group_id'] # noqa: E501
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'update_quota_request' in local_var_params:
body_params = local_var_params['update_quota_request']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/quotas/group/{group_id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
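# Sketch for an endpoint with two path parameters (assumes an `api` instance;
# the `limit` field on UpdateQuotaRequest is illustrative, not from the spec):
#
#   req = UpdateQuotaRequest(limit=10 * 2**30)  # hypothetical 10 GiB quota
#   api.update_group_quota("editors", 4, req)   # PUT .../volumes/4/quotas/group/editors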
def update_path_quota(self, id, relative_path, update_quota_request, **kwargs): # noqa: E501
"""update_path_quota # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_path_quota(id, relative_path, update_quota_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str relative_path: (required)
:param UpdateQuotaRequest update_quota_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_path_quota_with_http_info(id, relative_path, update_quota_request, **kwargs) # noqa: E501
def update_path_quota_with_http_info(self, id, relative_path, update_quota_request, **kwargs): # noqa: E501
"""update_path_quota # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_path_quota_with_http_info(id, relative_path, update_quota_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str relative_path: (required)
:param UpdateQuotaRequest update_quota_request: (required)
:param _return_http_data_only: return the response data without the
HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'relative_path', 'update_quota_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_path_quota" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_path_quota`") # noqa: E501
# verify the required parameter 'relative_path' is set
if self.api_client.client_side_validation and ('relative_path' not in local_var_params or # noqa: E501
local_var_params['relative_path'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `relative_path` when calling `update_path_quota`") # noqa: E501
# verify the required parameter 'update_quota_request' is set
if self.api_client.client_side_validation and ('update_quota_request' not in local_var_params or # noqa: E501
local_var_params['update_quota_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `update_quota_request` when calling `update_path_quota`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'relative_path' in local_var_params:
path_params['relative_path'] = local_var_params['relative_path'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'update_quota_request' in local_var_params:
body_params = local_var_params['update_quota_request']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/quotas/path/{relative_path}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
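# Sketch for the path-quota variant (assumes an `api` instance and a prepared
# UpdateQuotaRequest `req`): `relative_path` is substituted into the URL, and
# the api_client is expected to apply any required URL-encoding.
#
#   api.update_path_quota(4, "projects/demo", req)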
def update_production(self, id, production, **kwargs): # noqa: E501
"""update_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_production(id, production, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this production. (required)
:param Production production: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Production
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_production_with_http_info(id, production, **kwargs) # noqa: E501
def update_production_with_http_info(self, id, production, **kwargs): # noqa: E501
"""update_production # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_production_with_http_info(id, production, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this production. (required)
:param Production production: (required)
:param _return_http_data_only: return the response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Production, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'production'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_production" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_production`") # noqa: E501
# verify the required parameter 'production' is set
if self.api_client.client_side_validation and ('production' not in local_var_params or # noqa: E501
local_var_params['production'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `production` when calling `update_production`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'production' in local_var_params:
body_params = local_var_params['production']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/productions/{id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Production', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
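# --- Usage sketch (not part of the generated client) -----------------------
# Asynchronous invocation, as described in the docstring above: with
# async_req=True the client returns a worker thread immediately and
# thread.get() blocks until the PUT completes. `production` is assumed to be
# a populated Production model instance.
#
#   thread = api.update_production(42, production, async_req=True)
#   ...  # do other work while the request is in flight
#   updated = thread.get()                  # deserialized Production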
def update_share(self, id, share, **kwargs): # noqa: E501
"""update_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_share(id, share, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this share. (required)
:param Share share: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Share
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_share_with_http_info(id, share, **kwargs) # noqa: E501
def update_share_with_http_info(self, id, share, **kwargs): # noqa: E501
"""update_share # noqa: E501
### Required permissions * User account permission: `shares:view` (read) / `shares:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_share_with_http_info(id, share, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this share. (required)
:param Share share: (required)
:param _return_http_data_only: return the response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Share, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'share'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_share" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_share`") # noqa: E501
# verify the required parameter 'share' is set
if self.api_client.client_side_validation and ('share' not in local_var_params or # noqa: E501
local_var_params['share'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `share` when calling `update_share`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'share' in local_var_params:
body_params = local_var_params['share']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/shares/{id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Share', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
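# --- Usage sketch (not part of the generated client) -----------------------
# Calling the *_with_http_info variant directly returns the full
# (data, status_code, headers) tuple instead of just the deserialized body;
# `share` is assumed to be a populated Share model.
#
#   data, status, headers = api.update_share_with_http_info(3, share)
#   assert status == 200                    # hypothetical success check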
def update_snapshot(self, id, snapshot, **kwargs): # noqa: E501
"""update_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_snapshot(id, snapshot, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this snapshot. (required)
:param Snapshot snapshot: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Snapshot
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_snapshot_with_http_info(id, snapshot, **kwargs) # noqa: E501
def update_snapshot_with_http_info(self, id, snapshot, **kwargs): # noqa: E501
"""update_snapshot # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_snapshot_with_http_info(id, snapshot, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this snapshot. (required)
:param Snapshot snapshot: (required)
:param _return_http_data_only: return the response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Snapshot, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'snapshot'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_snapshot" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_snapshot`") # noqa: E501
# verify the required parameter 'snapshot' is set
if self.api_client.client_side_validation and ('snapshot' not in local_var_params or # noqa: E501
local_var_params['snapshot'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `snapshot` when calling `update_snapshot`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'snapshot' in local_var_params:
body_params = local_var_params['snapshot']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/snapshots/{id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Snapshot', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
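# --- Usage sketch (not part of the generated client) -----------------------
# Demonstrates the _request_timeout parameter documented above: a single
# number is a total timeout, while a 2-tuple is (connection, read) timeouts
# in seconds. `snapshot` is assumed to be a populated Snapshot model.
#
#   api.update_snapshot(11, snapshot, _request_timeout=30)
#   api.update_snapshot(11, snapshot, _request_timeout=(3.05, 27))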
def update_user_quota(self, id, user_id, update_quota_request, **kwargs): # noqa: E501
"""update_user_quota # noqa: E501
### Required permissions * User account permission: `users:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_user_quota(id, user_id, update_quota_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str user_id: (required)
:param UpdateQuotaRequest update_quota_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_user_quota_with_http_info(id, user_id, update_quota_request, **kwargs) # noqa: E501
def update_user_quota_with_http_info(self, id, user_id, update_quota_request, **kwargs): # noqa: E501
"""update_user_quota # noqa: E501
### Required permissions * User account permission: `users:manage` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_user_quota_with_http_info(id, user_id, update_quota_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param str user_id: (required)
:param UpdateQuotaRequest update_quota_request: (required)
:param _return_http_data_only: return the response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'user_id', 'update_quota_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_user_quota" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_user_quota`") # noqa: E501
# verify the required parameter 'user_id' is set
if self.api_client.client_side_validation and ('user_id' not in local_var_params or # noqa: E501
local_var_params['user_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `user_id` when calling `update_user_quota`") # noqa: E501
# verify the required parameter 'update_quota_request' is set
if self.api_client.client_side_validation and ('update_quota_request' not in local_var_params or # noqa: E501
local_var_params['update_quota_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `update_quota_request` when calling `update_user_quota`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'user_id' in local_var_params:
path_params['user_id'] = local_var_params['user_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'update_quota_request' in local_var_params:
body_params = local_var_params['update_quota_request']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}/quotas/user/{user_id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
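# --- Usage sketch (not part of the generated client) -----------------------
# With client_side_validation enabled, passing None for a required parameter
# raises ApiValueError before any HTTP traffic occurs:
#
#   try:
#       api.update_user_quota(7, None, update_quota_request)
#   except ApiValueError as e:
#       print(e)   # Missing the required parameter `user_id` ...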
def update_volume(self, id, volume, **kwargs): # noqa: E501
"""update_volume # noqa: E501
### Required permissions * User account permission: `None` (read) / `system:admin-access` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_volume(id, volume, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param Volume volume: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Volume
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_volume_with_http_info(id, volume, **kwargs) # noqa: E501
def update_volume_with_http_info(self, id, volume, **kwargs): # noqa: E501
"""update_volume # noqa: E501
### Required permissions * User account permission: `None` (read) / `system:admin-access` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_volume_with_http_info(id, volume, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this volume. (required)
:param Volume volume: (required)
:param _return_http_data_only: return the response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(Volume, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'volume'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_volume" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_volume`") # noqa: E501
# verify the required parameter 'volume' is set
if self.api_client.client_side_validation and ('volume' not in local_var_params or # noqa: E501
local_var_params['volume'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `volume` when calling `update_volume`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'volume' in local_var_params:
body_params = local_var_params['volume']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/volumes/{id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Volume', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
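# --- Usage sketch (not part of the generated client) -----------------------
# With _preload_content=False the raw urllib3.HTTPResponse-style object is
# returned without being read or deserialized, which is useful for streaming
# or custom decoding. `volume` is assumed to be a populated Volume model, and
# the `.data` attribute is the usual urllib3 accessor for the raw bytes.
#
#   resp = api.update_volume(5, volume, _preload_content=False)
#   raw_body = resp.data                    # bytes; decode/parse manually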
def update_workspace(self, id, workspace_detail, **kwargs): # noqa: E501
"""update_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_workspace(id, workspace_detail, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param WorkspaceDetail workspace_detail: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: WorkspaceDetail
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_workspace_with_http_info(id, workspace_detail, **kwargs) # noqa: E501
def update_workspace_with_http_info(self, id, workspace_detail, **kwargs): # noqa: E501
"""update_workspace # noqa: E501
### Required permissions * User account permission: `None` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_workspace_with_http_info(id, workspace_detail, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace. (required)
:param WorkspaceDetail workspace_detail: (required)
:param _return_http_data_only: return the response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(WorkspaceDetail, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'workspace_detail'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_workspace" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_workspace`") # noqa: E501
# verify the required parameter 'workspace_detail' is set
if self.api_client.client_side_validation and ('workspace_detail' not in local_var_params or # noqa: E501
local_var_params['workspace_detail'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `workspace_detail` when calling `update_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'workspace_detail' in local_var_params:
body_params = local_var_params['workspace_detail']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspaces/{id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkspaceDetail', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
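# --- Usage sketch (not part of the generated client) -----------------------
# Plain synchronous PUT against /api/2/workspaces/{id}. The WorkspaceDetail
# payload below is illustrative; consult the generated model for its actual
# fields.
#
#   detail = WorkspaceDetail()              # set the fields to change
#   updated = api.update_workspace(9, detail)
#   print(updated)                          # deserialized WorkspaceDetail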
def update_workspace_permission(self, id, workspace_permission, **kwargs): # noqa: E501
"""update_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_workspace_permission(id, workspace_permission, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace permission. (required)
:param WorkspacePermission workspace_permission: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: WorkspacePermission
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_workspace_permission_with_http_info(id, workspace_permission, **kwargs) # noqa: E501
def update_workspace_permission_with_http_info(self, id, workspace_permission, **kwargs): # noqa: E501
"""update_workspace_permission # noqa: E501
### Required permissions * User account permission: `projects:view` (read) / `projects:manage` (write) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_workspace_permission_with_http_info(id, workspace_permission, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int id: A unique integer value identifying this workspace permission. (required)
:param WorkspacePermission workspace_permission: (required)
:param _return_http_data_only: return the response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(WorkspacePermission, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['id', 'workspace_permission'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_workspace_permission" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_workspace_permission`") # noqa: E501
# verify the required parameter 'workspace_permission' is set
if self.api_client.client_side_validation and ('workspace_permission' not in local_var_params or # noqa: E501
local_var_params['workspace_permission'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `workspace_permission` when calling `update_workspace_permission`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'workspace_permission' in local_var_params:
body_params = local_var_params['workspace_permission']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/workspace-permissions/{id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkspacePermission', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
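# --- Usage sketch (not part of the generated client) -----------------------
# Unknown keyword arguments are rejected up front with ApiTypeError, as
# implemented by the parameter-checking loop above; `perm` is assumed to be
# a WorkspacePermission model.
#
#   try:
#       api.update_workspace_permission(2, perm, bogus_flag=True)
#   except ApiTypeError as e:
#       print(e)   # Got an unexpected keyword argument 'bogus_flag' ...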
def zip_files(self, file_zip_endpoint_request, **kwargs): # noqa: E501
"""zip_files # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.zip_files(file_zip_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FileZipEndpointRequest file_zip_endpoint_request: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: TaskInfo
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.zip_files_with_http_info(file_zip_endpoint_request, **kwargs) # noqa: E501
def zip_files_with_http_info(self, file_zip_endpoint_request, **kwargs): # noqa: E501
"""zip_files # noqa: E501
### Required permissions * Authenticated user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.zip_files_with_http_info(file_zip_endpoint_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param FileZipEndpointRequest file_zip_endpoint_request: (required)
:param _return_http_data_only: return the response data only, without status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(TaskInfo, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['file_zip_endpoint_request'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method zip_files" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'file_zip_endpoint_request' is set
if self.api_client.client_side_validation and ('file_zip_endpoint_request' not in local_var_params or # noqa: E501
local_var_params['file_zip_endpoint_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `file_zip_endpoint_request` when calling `zip_files`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'file_zip_endpoint_request' in local_var_params:
body_params = local_var_params['file_zip_endpoint_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/2/filesystem/zip', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TaskInfo', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
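# --- Usage sketch (not part of the generated client) -----------------------
# zip_files POSTs to /api/2/filesystem/zip and returns a TaskInfo model
# describing the server-side zip task. The request model's fields are
# illustrative here; see the generated FileZipEndpointRequest class.
#
#   req = FileZipEndpointRequest()          # set source paths, target, etc.
#   task = api.zip_files(req)               # TaskInfo for the queued task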
# ===========================================================================
# File: bombbomb/api/contacts_api.py
# Repo: bombbomb/bombbomb-python-openapi @ d1623cb06e58fdc83b04603a589e9d30e7eb3fdf
# License: Apache-2.0
# ===========================================================================
# coding: utf-8
"""
BombBomb
We make it easy to build relationships using simple videos. # noqa: E501
OpenAPI spec version: 2.0.831
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from bombbomb.api_client import ApiClient
class ContactsApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
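# --- Usage sketch (not part of the generated client) -----------------------
# Construction mirrors other swagger-codegen clients: with no argument a
# default ApiClient is created; configuration (host, OAuth token) lives on
# that client and is not shown in this file.
#
#   api = ContactsApi()                     # default ApiClient
#   api = ContactsApi(ApiClient())          # or inject a configured client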
def add_contacts_csv(self, mapping_data, list_data, **kwargs): # noqa: E501
"""Add contacts from a CSV file. # noqa: E501
Add multiple contacts from a CSV file through the upload importer. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_contacts_csv(mapping_data, list_data, async=True)
>>> result = thread.get()
:param async bool
:param str mapping_data: The info sent for the contacts (required)
:param str list_data: The info sent with the import for the list (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.add_contacts_csv_with_http_info(mapping_data, list_data, **kwargs) # noqa: E501
else:
(data) = self.add_contacts_csv_with_http_info(mapping_data, list_data, **kwargs) # noqa: E501
return data
def add_contacts_csv_with_http_info(self, mapping_data, list_data, **kwargs): # noqa: E501
"""Add contacts from a CSV file. # noqa: E501
Add multiple contacts from a CSV file through the upload importer. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_contacts_csv_with_http_info(mapping_data, list_data, async=True)
>>> result = thread.get()
:param async bool
:param str mapping_data: The info sent for the contacts (required)
:param str list_data: The info sent with the import for the list (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['mapping_data', 'list_data'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method add_contacts_csv" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'mapping_data' is set
if ('mapping_data' not in params or
params['mapping_data'] is None):
raise ValueError("Missing the required parameter `mapping_data` when calling `add_contacts_csv`") # noqa: E501
# verify the required parameter 'list_data' is set
if ('list_data' not in params or
params['list_data'] is None):
raise ValueError("Missing the required parameter `list_data` when calling `add_contacts_csv`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
if 'mapping_data' in params:
form_params.append(('mappingData', params['mapping_data'])) # noqa: E501
if 'list_data' in params:
form_params.append(('listData', params['list_data'])) # noqa: E501
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['BBOAuth2'] # noqa: E501
return self.api_client.call_api(
'/contacts/import_csv', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
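# --- Usage sketch (not part of the generated client) -----------------------
# Both parameters are posted as form fields (mappingData / listData). Their
# exact JSON layout is defined by the BombBomb importer and is not documented
# in this file, so the strings below are placeholders. Note also that this
# generator version uses `async=True`, which is only valid syntax on
# Python < 3.7 (`async` became a reserved word in 3.7).
#
#   api.add_contacts_csv(mapping_data='{"email": 0}',      # placeholder
#                        list_data='{"listId": "..."}')    # placeholder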
def add_new_contact(self, contact_email, **kwargs): # noqa: E501
"""Add a contact. # noqa: E501
Add a contact to the user's list. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_new_contact(contact_email, async=True)
>>> result = thread.get()
:param async bool
:param str contact_email: Email of the new contact we are adding (required)
:param str contact_info: The info sent for this contact
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.add_new_contact_with_http_info(contact_email, **kwargs) # noqa: E501
else:
(data) = self.add_new_contact_with_http_info(contact_email, **kwargs) # noqa: E501
return data
def add_new_contact_with_http_info(self, contact_email, **kwargs): # noqa: E501
"""Add a contact. # noqa: E501
Add a contact to the user's list. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_new_contact_with_http_info(contact_email, async=True)
>>> result = thread.get()
:param async bool
:param str contact_email: Email of the new contact we are adding (required)
:param str contact_info: The info sent for this contact
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['contact_email', 'contact_info'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method add_new_contact" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'contact_email' is set
if ('contact_email' not in params or
params['contact_email'] is None):
raise ValueError("Missing the required parameter `contact_email` when calling `add_new_contact`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
if 'contact_email' in params:
form_params.append(('contactEmail', params['contact_email'])) # noqa: E501
if 'contact_info' in params:
form_params.append(('contactInfo', params['contact_info'])) # noqa: E501
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['BBOAuth2'] # noqa: E501
return self.api_client.call_api(
'/contacts/', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
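# --- Usage sketch (not part of the generated client) -----------------------
# Only the email is required; contact_info is an optional string whose format
# is defined by the BombBomb API, not by this file, so the JSON below is a
# placeholder.
#
#   api.add_new_contact('jane@example.com')
#   api.add_new_contact('jane@example.com',
#                       contact_info='{"firstName": "Jane"}')  # placeholder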
def add_new_custom_field(self, field_name, **kwargs): # noqa: E501
"""Add custom fields. # noqa: E501
Add a new custom field. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_new_custom_field(field_name, async=True)
>>> result = thread.get()
:param async bool
:param str field_name: Custom field name to be added (required)
:param str field_type: Custom field type for the field to be added
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.add_new_custom_field_with_http_info(field_name, **kwargs) # noqa: E501
else:
(data) = self.add_new_custom_field_with_http_info(field_name, **kwargs) # noqa: E501
return data
def add_new_custom_field_with_http_info(self, field_name, **kwargs): # noqa: E501
"""Add custom fields. # noqa: E501
Add a new custom field. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_new_custom_field_with_http_info(field_name, async=True)
>>> result = thread.get()
:param async bool
:param str field_name: Custom field name to be added (required)
:param str field_type: Custom field type for the field to be added
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['field_name', 'field_type'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method add_new_custom_field" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'field_name' is set
if ('field_name' not in params or
params['field_name'] is None):
raise ValueError("Missing the required parameter `field_name` when calling `add_new_custom_field`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
if 'field_name' in params:
form_params.append(('fieldName', params['field_name'])) # noqa: E501
if 'field_type' in params:
form_params.append(('fieldType', params['field_type'])) # noqa: E501
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['BBOAuth2'] # noqa: E501
return self.api_client.call_api(
'/contacts/custom_fields/', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
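# --- Usage sketch (not part of the generated client) -----------------------
# Creates a custom field; the set of accepted field_type values is not listed
# in this file, so 'text' below is an assumption.
#
#   api.add_new_custom_field('Birthday')
#   api.add_new_custom_field('Birthday', field_type='text')  # assumed type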
def add_pasted_contacts(self, contact_emails, **kwargs): # noqa: E501
"""Add pasted contacts. # noqa: E501
Add the pasted contacts to the user's list. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_pasted_contacts(contact_emails, async=True)
>>> result = thread.get()
:param async bool
:param str contact_emails: Emails array of the new contacts we are adding (required)
:param str list_info: Information about the list's id, recalculation of totals, consent, etc.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.add_pasted_contacts_with_http_info(contact_emails, **kwargs) # noqa: E501
else:
(data) = self.add_pasted_contacts_with_http_info(contact_emails, **kwargs) # noqa: E501
return data
def add_pasted_contacts_with_http_info(self, contact_emails, **kwargs): # noqa: E501
"""Add pasted contacts. # noqa: E501
Add the pasted contacts to the user's list. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_pasted_contacts_with_http_info(contact_emails, async=True)
>>> result = thread.get()
:param async bool
:param str contact_emails: Emails array of the new contacts we are adding (required)
:param str list_info: Information about the list's id, recalculation of totals, consent, etc.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['contact_emails', 'list_info'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method add_pasted_contacts" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'contact_emails' is set
if ('contact_emails' not in params or
params['contact_emails'] is None):
raise ValueError("Missing the required parameter `contact_emails` when calling `add_pasted_contacts`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
if 'contact_emails' in params:
form_params.append(('contactEmails', params['contact_emails'])) # noqa: E501
if 'list_info' in params:
form_params.append(('listInfo', params['list_info'])) # noqa: E501
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['BBOAuth2'] # noqa: E501
return self.api_client.call_api(
'/contacts/paste', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
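# --- Usage sketch (not part of the generated client) -----------------------
# Despite the "Emails array" wording, the parameter is typed as a string and
# sent as a single form field, so a serialized list is assumed here:
#
#   api.add_pasted_contacts('["a@example.com", "b@example.com"]')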
def c_sv_to_object(self, file, **kwargs): # noqa: E501
"""Format CSV. # noqa: E501
Format a CSV file to an object. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.c_sv_to_object(file, async=True)
>>> result = thread.get()
:param async bool
:param str file: The CSV file being uploaded (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.c_sv_to_object_with_http_info(file, **kwargs) # noqa: E501
else:
(data) = self.c_sv_to_object_with_http_info(file, **kwargs) # noqa: E501
return data
def c_sv_to_object_with_http_info(self, file, **kwargs): # noqa: E501
"""Format CSV. # noqa: E501
Format a CSV file to an object. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.c_sv_to_object_with_http_info(file, async=True)
>>> result = thread.get()
:param async bool
:param str file: The CSV file being uploaded (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['file'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method c_sv_to_object" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'file' is set
if ('file' not in params or
params['file'] is None):
raise ValueError("Missing the required parameter `file` when calling `c_sv_to_object`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
if 'file' in params:
form_params.append(('file', params['file'])) # noqa: E501
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['BBOAuth2'] # noqa: E501
return self.api_client.call_api(
'/csv-to-object', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
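# --- Usage sketch (not part of the generated client) -----------------------
# The file is sent as the form field `file`; whether the client expects a
# path, bytes, or raw CSV text is not specified in this file, so raw CSV
# text is assumed:
#
#   api.c_sv_to_object('email,first\njane@example.com,Jane\n')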
def delete_contacts(self, **kwargs): # noqa: E501
"""Delete Contacts # noqa: E501
Delete all contacts within a list, or provide a comma-separated list of contactIds to delete. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_contacts(async=True)
>>> result = thread.get()
:param async bool
:param str list_id: The list of contacts to be deleted.
:param str contact_ids: comma-separated list of contact IDs to delete
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.delete_contacts_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.delete_contacts_with_http_info(**kwargs) # noqa: E501
return data
def delete_contacts_with_http_info(self, **kwargs): # noqa: E501
"""Delete Contacts # noqa: E501
Delete all contacts within a list, or provide a comma-separated list of contactIds to delete. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_contacts_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str list_id: The list of contacts to be deleted.
:param str contact_ids: comma-separated list of contact IDs to delete
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['list_id', 'contact_ids'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_contacts" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
if 'list_id' in params:
form_params.append(('listId', params['list_id'])) # noqa: E501
if 'contact_ids' in params:
form_params.append(('contactIds', params['contact_ids'])) # noqa: E501
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['BBOAuth2'] # noqa: E501
return self.api_client.call_api(
'/contacts/delete', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
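# --- Usage sketch (not part of the generated client) -----------------------
# Both parameters are optional: pass list_id to delete every contact in a
# list, or contact_ids as a comma-separated string for specific contacts.
#
#   api.delete_contacts(list_id='abc-123')
#   api.delete_contacts(contact_ids='id1,id2,id3')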
def get_contact_by_id(self, id, **kwargs): # noqa: E501
"""Get Contact Details # noqa: E501
Get the contact details. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_contact_by_id(id, async=True)
>>> result = thread.get()
:param async bool
:param str id: GUID for the contact. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_contact_by_id_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.get_contact_by_id_with_http_info(id, **kwargs) # noqa: E501
return data
def get_contact_by_id_with_http_info(self, id, **kwargs): # noqa: E501
"""Get Contact Details # noqa: E501
Get the contact details # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_contact_by_id_with_http_info(id, async=True)
>>> result = thread.get()
:param bool async: whether to execute the request asynchronously
:param str id: GUID for the contact. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_contact_by_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `get_contact_by_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['BBOAuth2'] # noqa: E501
return self.api_client.call_api(
'/contact/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_custom_fields(self, **kwargs): # noqa: E501
"""Get custom fields. # noqa: E501
Get the current user's custom fields. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_custom_fields(async=True)
>>> result = thread.get()
:param bool async: whether to execute the request asynchronously
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_custom_fields_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_custom_fields_with_http_info(**kwargs) # noqa: E501
return data
def get_custom_fields_with_http_info(self, **kwargs): # noqa: E501
"""Get custom fields. # noqa: E501
Get the current user's custom fields. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_custom_fields_with_http_info(async=True)
>>> result = thread.get()
:param bool async: whether to execute the request asynchronously
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = [] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_custom_fields" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['BBOAuth2'] # noqa: E501
return self.api_client.call_api(
'/contacts/custom_fields/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
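# --- Usage sketch (added for illustration; not part of the generated client) ---
# The constructor call below is an assumption: only the method names and the
# sync/async convention are taken from the code above. Note that ``async``
# became a reserved word in Python 3.7, so this calling style only runs on
# older interpreters; later swagger-codegen versions renamed the flag
# ``async_req``.
#
#   api = ContactsApi(api_client)                    # hypothetical class name
#   api.delete_contacts(list_id='42')                # synchronous call
#   thread = api.get_contact_by_id(guid, async=True) # asynchronous call
#   contact = thread.get()                           # block until the request finishes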
| 38.79261 | 128 | 0.605002 | 3,820 | 32,547 | 4.917801 | 0.055759 | 0.056212 | 0.023848 | 0.030661 | 0.927286 | 0.906366 | 0.887948 | 0.87496 | 0.8575 | 0.838763 | 0 | 0.018318 | 0.30393 | 32,547 | 838 | 129 | 38.838902 | 0.810903 | 0.06357 | 0 | 0.735426 | 1 | 0 | 0.181864 | 0.04402 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.011211 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b1584923f1d5ab28768107e06cc73a5e4ecc69e8 | 164 | py | Python | conjur_api/errors/__init__.py | cyberark/conjur-api-python | 7dd1819bf68042620a06f38e395c3eb2989202a9 | [
"Apache-2.0"
] | 1 | 2022-03-09T18:25:29.000Z | 2022-03-09T18:25:29.000Z | conjur_api/errors/__init__.py | cyberark/conjur-api-python | 7dd1819bf68042620a06f38e395c3eb2989202a9 | [
"Apache-2.0"
] | null | null | null | conjur_api/errors/__init__.py | cyberark/conjur-api-python | 7dd1819bf68042620a06f38e395c3eb2989202a9 | [
"Apache-2.0"
] | null | null | null | """
Errors module
This module holds Conjur SDK-specific errors for this project
"""
from conjur_api.errors import errors
from conjur_api.errors import ssl_errors
| 18.222222 | 61 | 0.804878 | 25 | 164 | 5.16 | 0.52 | 0.155039 | 0.20155 | 0.294574 | 0.387597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140244 | 164 | 8 | 62 | 20.5 | 0.914894 | 0.463415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
494047d30f55786a9bd2ec72719a6202977f17ef | 3,985 | py | Python | null_bot_api/categories/experts.py | lordralinc/null_bot_api | 334b46d48669c7b6c95f27f23b3e7a81a897df2e | [
"MIT"
] | 1 | 2021-08-03T15:28:57.000Z | 2021-08-03T15:28:57.000Z | null_bot_api/categories/experts.py | lordralinc/null_bot_api | 334b46d48669c7b6c95f27f23b3e7a81a897df2e | [
"MIT"
] | null | null | null | null_bot_api/categories/experts.py | lordralinc/null_bot_api | 334b46d48669c7b6c95f27f23b3e7a81a897df2e | [
"MIT"
] | null | null | null | import typing as ty
from null_bot_api import models
from null_bot_api.categories.base import BaseAPICategories
class ExpertsAPICategories(BaseAPICategories):
def get_info(
self,
user_id: ty.Optional[ty.Union[str, int]] = None,
user_ids: ty.Optional[ty.List[ty.Union[str, int]]] = None
) -> models.ExpertsGetInfo:
"""Метод позволяет получить информацию о пользователях, состоящих в Экспертах ВКонтакте.
:param user_id: обязательный;
id пользователя или его короткое имя (screen_name), информацию о котором нужно получить.
Например: 123 или ryzhov.andrey.
Предпочтительнее передавать id пользователя (так работает быстрее).
:param user_ids: обязательный, если не указан user_id;
id пользователей или их короткие имена, разделённые запятыми,
о которых нужно получить информацию.
Максимальное количество: 100.
Например: 123,andrew,456.
"""
return self.api.make_request(
method='experts.getInfo',
data=dict(user_id=user_id, user_ids=user_ids),
dataclass=models.ExpertsGetInfo
)
async def get_info_async(
self,
user_id: ty.Optional[ty.Union[str, int]] = None,
user_ids: ty.Optional[ty.List[ty.Union[str, int]]] = None
) -> models.ExpertsGetInfo:
"""Метод позволяет получить информацию о пользователях, состоящих в Экспертах ВКонтакте.
:param user_id: обязательный;
id пользователя или его короткое имя (screen_name), информацию о котором нужно получить.
Например: 123 или ryzhov.andrey.
Предпочтительнее передавать id пользователя (так работает быстрее).
:param user_ids: обязательный, если не указан user_id;
id пользователей или их короткие имена, разделённые запятыми,
о которых нужно получить информацию.
Максимальное количество: 100.
Например: 123,andrew,456.
"""
return await self.api.make_request_async(
method='experts.getInfo',
data=dict(user_id=user_id, user_ids=user_ids),
dataclass=models.ExpertsGetInfo
)
def get_card(
self,
access_token: str
) -> models.ExpertsGetCard:
"""Метод позволяет получить карточку Эксперта ВКонтакте текущего пользователя.
:param access_token: токен пользователя, карточку которого нужно получить.
Например: 8f8efw9fj89h7h8fwrg9hug8fywe9h80rj4f3rneu9.
Подходят токены только от VK Me и VK для Android, никаких прав не нужно.
Получить токен можно тут: https://vkhost.github.io.
"""
return self.api.make_request(
method='experts.getCard',
data=dict(access_token=access_token),
dataclass=models.ExpertsGetCard
)
async def get_card_async(
self,
access_token: str
) -> models.ExpertsGetCard:
"""Метод позволяет получить карточку Эксперта ВКонтакте текущего пользователя.
:param access_token: токен пользователя, карточку которого нужно получить.
Например: 8f8efw9fj89h7h8fwrg9hug8fywe9h80rj4f3rneu9.
Подходят токены только от VK Me и VK для Android, никаких прав не нужно.
Получить токен можно тут: https://vkhost.github.io.
"""
return await self.api.make_request_async(
method='experts.getCard',
data=dict(access_token=access_token),
dataclass=models.ExpertsGetCard
)
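# --- Usage sketch (added for illustration; the entry-point name and the
# ``experts`` attribute are assumptions -- only the category methods come
# from the class above) ---
#
#   api = NullBotAPI(token)                              # hypothetical constructor
#   info = api.experts.get_info(user_id=123)             # synchronous
#   card = await api.experts.get_card_async(access_token)  # asynchronous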
| 45.284091 | 116 | 0.588959 | 393 | 3,985 | 5.860051 | 0.284987 | 0.026053 | 0.020842 | 0.022579 | 0.915328 | 0.915328 | 0.915328 | 0.899696 | 0.899696 | 0.87538 | 0 | 0.020721 | 0.346048 | 3,985 | 87 | 117 | 45.804598 | 0.863008 | 0.268256 | 0 | 0.714286 | 0 | 0 | 0.037951 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.071429 | 0 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
49567466e2e66e8966f56d0ed50ce791b1c40bc0 | 6,359 | py | Python | ivy/functional/ivy/set.py | hieultp/ivy | 26f1b11ee3ae7d6ba5b052a22020481f014eaa33 | [
"Apache-2.0"
] | null | null | null | ivy/functional/ivy/set.py | hieultp/ivy | 26f1b11ee3ae7d6ba5b052a22020481f014eaa33 | [
"Apache-2.0"
] | null | null | null | ivy/functional/ivy/set.py | hieultp/ivy | 26f1b11ee3ae7d6ba5b052a22020481f014eaa33 | [
"Apache-2.0"
] | null | null | null | # global
from typing import Union, Tuple, Optional
# local
import ivy
from ivy.framework_handler import current_framework as _cur_framework
# Array API Standard #
# -------------------#
def unique_all(x: Union[ivy.Array, ivy.NativeArray]) \
-> Tuple[ivy.Array, ivy.Array, ivy.Array, ivy.Array]:
"""
Returns the unique elements of an input array ``x``, the first occurring indices for each unique element in ``x``, the indices from the set of unique elements that reconstruct ``x``, and the corresponding counts for each unique element in ``x``.
.. admonition:: Data-dependent output shape
:class: important
The shapes of two of the output arrays for this function depend on the data values in the input array; hence, array libraries which build computation graphs (e.g., JAX, Dask, etc.) may find this function difficult to implement without knowing array values. Accordingly, such libraries may choose to omit this function. See :ref:`data-dependent-output-shapes` section for more details.
.. note::
Uniqueness should be determined based on value equality (i.e., ``x_i == x_j``). For input arrays having floating-point data types, value-based equality implies the following behavior.
- As ``nan`` values compare as ``False``, ``nan`` values should be considered distinct.
- As ``-0`` and ``+0`` compare as ``True``, signed zeros should not be considered distinct, and the corresponding unique element will be implementation-dependent (e.g., an implementation could choose to return ``-0`` if ``-0`` occurs before ``+0``).
As signed zeros are not distinct, using ``inverse_indices`` to reconstruct the input array is not guaranteed to return an array having the exact same values.
Each ``nan`` value should have a count of one, while the counts for signed zeros should be aggregated as a single count.
Parameters
----------
x: array
input array. If ``x`` has more than one dimension, the function must flatten ``x`` and return the unique elements of the flattened array.
Returns
-------
out: Tuple[array, array, array, array]
a namedtuple ``(values, indices, inverse_indices, counts)`` whose
- first element must have the field name ``values`` and must be an array containing the unique elements of ``x``. The array must have the same data type as ``x``.
- second element must have the field name ``indices`` and must be an array containing the indices (first occurrences) of ``x`` that result in ``values``. The array must have the same shape as ``values`` and must have the default array index data type.
- third element must have the field name ``inverse_indices`` and must be an array containing the indices of ``values`` that reconstruct ``x``. The array must have the same shape as ``x`` and must have the default array index data type.
- fourth element must have the field name ``counts`` and must be an array containing the number of times each unique element occurs in ``x``. The returned array must have the same shape as ``values`` and must have the default array index data type.
.. note::
The order of unique elements is not specified and may vary between implementations.
"""
return _cur_framework(x).unique_all(x)
def unique_inverse(x: Union[ivy.Array, ivy.NativeArray]) \
-> Tuple[ivy.Array, ivy.Array]:
"""
Returns a tuple of two arrays: one containing the unique elements of an input array x, and the other the indices from
the set of unique elements that reconstruct x.
:param x: input array.
:return: tuple of two arrays (values, inverse_indices)
"""
return _cur_framework(x).unique_inverse(x)
def unique_values(x: Union[ivy.Array, ivy.NativeArray], out: Optional[Union[ivy.Array, ivy.NativeArray]] = None) \
-> ivy.Array:
"""
Returns the unique elements of an input array ``x``.
.. admonition:: Data-dependent output shape
:class: important
The shapes of two of the output arrays for this function depend on the data values in the input array; hence, array libraries which build computation graphs (e.g., JAX, Dask, etc.) may find this function difficult to implement without knowing array values. Accordingly, such libraries may choose to omit this function. See :ref:`data-dependent-output-shapes` section for more details.
.. note::
Uniqueness should be determined based on value equality (i.e., ``x_i == x_j``). For input arrays having floating-point data types, value-based equality implies the following behavior.
- As ``nan`` values compare as ``False``, ``nan`` values should be considered distinct.
- As ``-0`` and ``+0`` compare as ``True``, signed zeros should not be considered distinct, and the corresponding unique element will be implementation-dependent (e.g., an implementation could choose to return ``-0`` if ``-0`` occurs before ``+0``).
Parameters
----------
x: array
input array. If ``x`` has more than one dimension, the function must flatten ``x`` and return the unique elements of the flattened array.
Returns
-------
out: array
an array containing the set of unique elements in ``x``. The returned array must have the same data type as ``x``.
.. note::
The order of unique elements is not specified and may vary between implementations.
"""
return _cur_framework(x).unique_values(x, out)
def unique_counts(x: Union[ivy.Array, ivy.NativeArray])\
-> Tuple[ivy.Array, ivy.Array]:
"""
Returns the unique elements of an input array x and the corresponding counts for each unique element in x.
:param x: input array. If x has more than one dimension, the function must flatten x and return the unique elements of the flattened array.
:return: a namedtuple (values, counts) whose
- first element must have the field name values and must be an array containing the unique elements of x. The array must have the same data type as x.
- second element must have the field name counts and must be an array containing the number of times each unique element occurs in x. The returned array must have the same shape as values and must have the default array index data type.
"""
return _cur_framework(x).unique_counts(x)
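# --- Usage sketch (added for illustration, assuming a NumPy-like backend is
# active; exact array types depend on the selected framework) ---
#
#   x = ivy.array([1, 1, 2, 3, 3, 3])
#   ivy.unique_values(x)                    # -> array([1, 2, 3])
#   values, counts = ivy.unique_counts(x)   # values -> [1, 2, 3], counts -> [2, 1, 3]
#   values, inverse = ivy.unique_inverse(x)
#   # indexing values with inverse reconstructs the flattened input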
| 69.119565 | 392 | 0.701997 | 946 | 6,359 | 4.689218 | 0.174419 | 0.030658 | 0.037196 | 0.038548 | 0.82394 | 0.793282 | 0.76578 | 0.762399 | 0.751353 | 0.716186 | 0 | 0.001981 | 0.206322 | 6,359 | 91 | 393 | 69.879121 | 0.876957 | 0.835037 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.2 | 0 | 0.733333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
b8d42ab9b0a00319220cdc49291d7239ac2ad44c | 27,288 | py | Python | NetPacket/Tool/GenerateResponse.py | BoilTask/HttpFramework | a0f956cd6375723667156f55196e98547355fb4e | [
"MIT"
] | null | null | null | NetPacket/Tool/GenerateResponse.py | BoilTask/HttpFramework | a0f956cd6375723667156f55196e98547355fb4e | [
"MIT"
] | null | null | null | NetPacket/Tool/GenerateResponse.py | BoilTask/HttpFramework | a0f956cd6375723667156f55196e98547355fb4e | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
import os
import sys
import yaml
from netpacket import common
def write_net_packet_name():
write_content = "#pragma once\n"
write_content += "\n"
write_content += "//Exported by Tool, please don't edit this file directly.\n"
write_content += "\n"
for net_packet_name in net_packet_name_list:
write_content += "#include \"NetResponse"+net_packet_name+".h\"\n"
common.overwrite_file_content(
config_yaml_data["NetResponseFile"], write_content)
print("Write NetResponse Name Success!")
def get_data_type_empty(data_type, data_name):
content_prefix = " "
content = ""
if data_type == "bool":
content += content_prefix + data_name+"_ = false;\n"
elif data_type == "int32":
content += content_prefix + data_name+"_ = 0;\n"
elif data_type == "int64":
content += content_prefix + data_name+"_ = 0;\n"
elif data_type == "float":
content += content_prefix + data_name+"_ = 0.f;\n"
elif data_type == "string":
content += content_prefix + data_name+"_.Empty();\n"
elif data_type == "listint32":
content += content_prefix + data_name+"_.Empty();\n"
elif data_type[-2:] == "[]":
content += content_prefix + data_name+"_.Empty();\n"
else:
content += content_prefix + data_name+"_.Reset();\n"
return content
def get_data_type_parse(data_type, data_name):
content_prefix = " "
content = "\n"
if data_type == "bool":
content += content_prefix + \
"(*Data)->TryGetBoolField(\""+data_name+"\", "+data_name+"_);\n"
elif data_type == "int32":
content += content_prefix + \
"(*Data)->TryGetNumberField(\""+data_name+"\", "+data_name+"_);\n"
elif data_type == "int64":
content += content_prefix + \
"(*Data)->TryGetNumberField(\""+data_name+"\", "+data_name+"_);\n"
elif data_type == "float":
content += content_prefix + \
"(*Data)->TryGetNumberField(\""+data_name+"\", "+data_name+"_);\n"
elif data_type == "string":
content += content_prefix + \
"(*Data)->TryGetStringField(\""+data_name+"\", "+data_name+"_);\n"
elif data_type == "listint32":
content += content_prefix + "FString " + data_name + "_str_;\n"
content += content_prefix + \
"(*Data)->TryGetStringField(\"" + \
data_name + "\", " + data_name + "_str_);\n"
content += content_prefix + \
"GameParser::ParseListInt32(" + data_name + \
"_str_, " + data_name + "_);\n"
elif data_type[-2:] == "[]":
data_type = data_type.rstrip("[]")
content += content_prefix + \
"const TArray<TSharedPtr<FJsonValue>>* "+data_name+"_Data;\n"
content += content_prefix + \
"if ((*Data)->TryGetArrayField(\""+data_name + \
"\", "+data_name+"_Data) && "+data_name+"_Data)\n"
content += content_prefix + "{\n"
content += content_prefix + \
" for (auto const & "+data_name+"_Item: *"+data_name+"_Data)\n"
content += content_prefix + " {\n"
if data_type == "bool":
content += content_prefix + " bool _Temp;\n"
content += content_prefix + \
" (*"+data_name+"_Item).TryGetBool(_Temp);\n"
content += content_prefix + " "+data_name+"_.Emplace(_Temp);\n"
elif data_type == "int32":
content += content_prefix + " int32 _Temp;\n"
content += content_prefix + \
" (*"+data_name+"_Item).TryGetNumber(_Temp);\n"
content += content_prefix + " "+data_name+"_.Emplace(_Temp);\n"
elif data_type == "int64":
content += content_prefix + " int64 _Temp;\n"
content += content_prefix + \
" (*"+data_name+"_Item).TryGetNumber(_Temp);\n"
content += content_prefix + " "+data_name+"_.Emplace(_Temp);\n"
elif data_type == "float":
content += content_prefix + " double _Temp;\n"
content += content_prefix + \
" (*"+data_name+"_Item).TryGetNumber(_Temp);\n"
content += content_prefix + " "+data_name+"_.Emplace(_Temp);\n"
elif data_type == "string":
content += content_prefix + " FString _Temp;\n"
content += content_prefix + \
" (*"+data_name+"_Item).TryGetString(_Temp);\n"
content += content_prefix + " "+data_name+"_.Emplace(_Temp);\n"
else:
content += content_prefix + " const TSharedPtr<FJsonObject>* _Temp;\n"
content += content_prefix + \
" (*"+data_name+"_Item).TryGetObject(_Temp);\n"
content += content_prefix + " TSharedPtr<NetResponse"+data_type+"> " + \
data_name + \
"_Ptr = MakeShareable(new NetResponse"+data_type+"());\n"
content += content_prefix + " " + \
data_name+"_Ptr->ParseData(_Temp);\n"
content += content_prefix + " "+data_name + \
"_.Emplace("+data_name+"_Ptr);\n"
content += content_prefix + " }\n"
content += content_prefix + "}\n"
else:
content += content_prefix + "const TSharedPtr<FJsonObject>* "+data_name+"_Data;\n"
content += content_prefix + \
"if ((*Data)->TryGetObjectField(\"" + \
data_name+"\", "+data_name+"_Data))\n"
content += content_prefix + "{\n"
content += content_prefix + " if (!"+data_name+"_)\n"
content += content_prefix + " {\n"
content += content_prefix + " "+data_name + \
"_ = MakeShareable(new NetResponse"+data_type+"());\n"
content += content_prefix + " }\n"
content += content_prefix + " " + data_name + \
"_->ParseData("+data_name+"_Data);\n"
content += content_prefix + "}\n"
return content
def get_data_type_cpp_function(class_name, data_type, data_name):
content_prefix = ""
content = "\n"
if data_type == "bool":
content += content_prefix + "bool "+class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
elif data_type == "int32":
content += content_prefix + "int32 "+class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
elif data_type == "int64":
content += content_prefix + "int64 "+class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
elif data_type == "float":
content += content_prefix + "double "+class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
elif data_type == "string":
content += content_prefix + "FString const& " + \
class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
elif data_type == "listint32":
content += content_prefix + "TArray<int32> const& " + \
class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int32 " + \
class_name+"::"+data_name+"(int32 Index) const\n"
content += content_prefix + "{\n"
content += content_prefix + \
" if (!(Index >= 0 && Index < "+data_name+"_size()))\n"
content += content_prefix + " {\n"
content += content_prefix + \
" UE_LOG(LogNetResponse, Error, TEXT(\"" + \
class_name+"::"+data_name+" Out of Range!\"));\n"
content += content_prefix + " }\n"
content += content_prefix + " return "+data_name+"_[Index];\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int32 "+class_name+"::"+data_name+"_size() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_.Num();\n"
content += content_prefix + "}\n"
elif data_type == "bool[]":
content += content_prefix + "TArray<bool> const& " + \
class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "bool " + \
class_name+"::"+data_name+"(int32 Index) const\n"
content += content_prefix + "{\n"
content += content_prefix + \
" if (!(Index >= 0 && Index < "+data_name+"_size()))\n"
content += content_prefix + " {\n"
content += content_prefix + \
" UE_LOG(LogNetResponse, Error, TEXT(\"" + \
class_name+"::"+data_name+" Out of Range!\"));\n"
content += content_prefix + " }\n"
content += content_prefix + " return "+data_name+"_[Index];\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int32 "+class_name+"::"+data_name+"_size() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_.Num();\n"
content += content_prefix + "}\n"
elif data_type == "int32[]":
content += content_prefix + "TArray<int32> const& " + \
class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int32 " + \
class_name+"::"+data_name+"(int32 Index) const\n"
content += content_prefix + "{\n"
content += content_prefix + \
" if (!(Index >= 0 && Index < "+data_name+"_size()))\n"
content += content_prefix + " {\n"
content += content_prefix + \
" UE_LOG(LogNetResponse, Error, TEXT(\"" + \
class_name+"::"+data_name+" Out of Range!\"));\n"
content += content_prefix + " }\n"
content += content_prefix + " return "+data_name+"_[Index];\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int32 "+class_name+"::"+data_name+"_size() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_.Num();\n"
content += content_prefix + "}\n"
elif data_type == "int64[]":
content += content_prefix + "TArray<int64> const& " + \
class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int64 " + \
class_name+"::"+data_name+"(int32 Index) const\n"
content += content_prefix + "{\n"
content += content_prefix + \
" if (!(Index >= 0 && Index < "+data_name+"_size()))\n"
content += content_prefix + " {\n"
content += content_prefix + \
" UE_LOG(LogNetResponse, Error, TEXT(\"" + \
class_name+"::"+data_name+" Out of Range!\"));\n"
content += content_prefix + " }\n"
content += content_prefix + " return "+data_name+"_[Index];\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int32 "+class_name+"::"+data_name+"_size() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_.Num();\n"
content += content_prefix + "}\n"
elif data_type == "float[]":
content += content_prefix + "TArray<double> const& " + \
class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "double " + \
class_name+"::"+data_name+"(int32 Index) const\n"
content += content_prefix + "{\n"
content += content_prefix + \
" if (!(Index >= 0 && Index < "+data_name+"_size()))\n"
content += content_prefix + " {\n"
content += content_prefix + \
" UE_LOG(LogNetResponse, Error, TEXT(\"" + \
class_name+"::"+data_name+" Out of Range!\"));\n"
content += content_prefix + " }\n"
content += content_prefix + " return "+data_name+"_[Index];\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int32 "+class_name+"::"+data_name+"_size() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_.Num();\n"
content += content_prefix + "}\n"
elif data_type == "string[]":
content += content_prefix + "TArray<FString> const& " + \
class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "FString const& " + \
class_name+"::"+data_name+"(int32 Index) const\n"
content += content_prefix + "{\n"
content += content_prefix + \
" if (!(Index >= 0 && Index < "+data_name+"_size()))\n"
content += content_prefix + " {\n"
content += content_prefix + \
" UE_LOG(LogNetResponse, Error, TEXT(\"" + \
class_name+"::"+data_name+" Out of Range!\"));\n"
content += content_prefix + " }\n"
content += content_prefix + " return "+data_name+"_[Index];\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int32 "+class_name+"::"+data_name+"_size() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_.Num();\n"
content += content_prefix + "}\n"
elif data_type[-2:] == "[]":
data_type = data_type.rstrip("[]")
content += content_prefix + "TArray<TSharedPtr<NetResponse"+data_type + \
">> const& "+class_name+"::"+data_name+"() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_;\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "NetResponse" + data_type + \
" const& " + class_name+"::"+data_name+"(int32 Index) const\n"
content += content_prefix + "{\n"
content += content_prefix + \
" if (!(Index >= 0 && Index < "+data_name+"_size()))\n"
content += content_prefix + " {\n"
content += content_prefix + \
" UE_LOG(LogNetResponse, Error, TEXT(\"" + \
class_name+"::"+data_name+" Out of Range!\"));\n"
content += content_prefix + " }\n"
content += content_prefix + " return *"+data_name+"_[Index];\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "int32 "+class_name+"::"+data_name+"_size() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return "+data_name+"_.Num();\n"
content += content_prefix + "}\n"
else:
content += content_prefix + "bool " + \
class_name + "::has_" + data_name + "() const\n"
content += content_prefix + "{\n"
content += content_prefix + " return " + data_name+"_.IsValid();\n"
content += content_prefix + "}\n"
content += content_prefix + "\n"
content += content_prefix + "NetResponse" + \
data_type + "& " + class_name + "::" + data_name + "()\n"
content += content_prefix + "{\n"
content += content_prefix + "\tif (!" + data_name+"_.IsValid())\n"
content += content_prefix + "\t{\n"
content += content_prefix + "\t\t" + data_name + \
"_ = MakeShareable(new NetResponse" + data_type+"());\n"
content += content_prefix + "\t}\n"
content += content_prefix + "\treturn *" + data_name+"_;\n"
content += content_prefix + "}\n"
return content
def get_data_public_h_function(data_type, data_name):
content_prefix = " "
content = ""
if data_type == "bool":
content += content_prefix + "bool "+data_name+"() const;\n"
elif data_type == "int32":
content += content_prefix + "int32 "+data_name+"() const;\n"
elif data_type == "int64":
content += content_prefix + "int64 "+data_name+"() const;\n"
elif data_type == "float":
content += content_prefix + "double "+data_name+"() const;\n"
elif data_type == "string":
content += content_prefix + "FString const& "+data_name+"() const;\n"
elif data_type == "listint32":
content += content_prefix + "TArray<int32> const& "+data_name+"() const;\n"
content += content_prefix + "int32 "+data_name+"(int32 Index) const;\n"
content += content_prefix + "int32 "+data_name+"_size() const;\n"
elif data_type == "bool[]":
content += content_prefix + "TArray<bool> const& "+data_name+"() const;\n"
content += content_prefix + "bool "+data_name+"(int32 Index) const;\n"
content += content_prefix + "int32 "+data_name+"_size() const;\n"
elif data_type == "int32[]":
content += content_prefix + "TArray<int32> const& "+data_name+"() const;\n"
content += content_prefix + "int32 "+data_name+"(int32 Index) const;\n"
content += content_prefix + "int32 "+data_name+"_size() const;\n"
elif data_type == "int64[]":
content += content_prefix + "TArray<int64> const& "+data_name+"() const;\n"
content += content_prefix + "int64 "+data_name+"(int32 Index) const;\n"
content += content_prefix + "int32 "+data_name+"_size() const;\n"
elif data_type == "float[]":
content += content_prefix + "TArray<double> const& "+data_name+"() const;\n"
content += content_prefix + "double " + \
data_name+"(int32 Index) const;\n"
content += content_prefix + "int32 "+data_name+"_size() const;\n"
elif data_type == "string[]":
content += content_prefix + "TArray<FString> const& "+data_name+"() const;\n"
content += content_prefix + "FString const& " + \
data_name+"(int32 Index) const;\n"
content += content_prefix + "int32 "+data_name+"_size() const;\n"
elif data_type[-2:] == "[]":
data_type = data_type.rstrip("[]")
content += content_prefix + "TArray<TSharedPtr<NetResponse" + \
data_type+">> const& "+data_name+"() const;\n"
content += content_prefix + "NetResponse" + \
data_type+" const& "+data_name+"(int32 Index) const;\n"
content += content_prefix + "int32 "+data_name+"_size() const;\n"
else:
content += content_prefix + "bool has_"+data_name+"() const;\n"
content += content_prefix + "NetResponse" + \
data_type+"& "+data_name+"();\n"
return content
def get_data_private_h_definition(data_type, data_name):
content = " "
if data_type == "bool":
content += "bool "+data_name+"_;\n"
elif data_type == "int32":
content += "int32 "+data_name+"_;\n"
elif data_type == "int64":
content += "int64 "+data_name+"_;\n"
elif data_type == "float":
content += "double "+data_name+"_;\n"
elif data_type == "string":
content += "FString "+data_name+"_;\n"
elif data_type == "listint32":
content += "TArray<int32> "+data_name+"_;\n"
elif data_type == "bool[]":
content += "TArray<bool> "+data_name+"_;\n"
elif data_type == "int32[]":
content += "TArray<int32> "+data_name+"_;\n"
elif data_type == "int64[]":
content += "TArray<int64> "+data_name+"_;\n"
elif data_type == "float[]":
content += "TArray<double> "+data_name+"_;\n"
elif data_type == "string[]":
content += "TArray<FString> "+data_name+"_;\n"
elif data_type[-2:] == "[]":
data_type = data_type.rstrip("[]")
content += "TArray<TSharedPtr<NetResponse"+data_type+">> "+data_name+"_;\n"
else:
content += "TSharedPtr<NetResponse"+data_type+"> "+data_name+"_;\n"
return content
def get_class_h_content(file_name, packet_name):
packet_config_data = response_config_data[packet_name]
write_content = "\n"
response_class_name = "NetResponse" + file_name + packet_name
write_content += "class "+response_class_name+"\n"
write_content += "{\n"
write_content += "public:\n"
write_content += " "+response_class_name+"() = default;\n"
write_content += " ~"+response_class_name+"() = default;\n"
write_content += " void Clear();\n"
write_content += " bool ParseData(TSharedPtr<FJsonObject> const& Data);\n"
write_content += " bool ParseData(TSharedPtr<FJsonObject> const* Data);\n"
if packet_config_data:
write_content += "\n"
write_content += "public:\n"
for net_package_data_key in packet_config_data:
write_content += get_data_public_h_function(
packet_config_data[net_package_data_key], net_package_data_key)
if packet_config_data:
write_content += "\n"
write_content += "private:\n"
for net_package_data_key in packet_config_data:
write_content += get_data_private_h_definition(
packet_config_data[net_package_data_key], net_package_data_key)
write_content += "};\n"
return write_content
def get_class_cpp_content(file_name, packet_name):
net_response_data = response_config_data[packet_name]
write_content = "\n"
response_class_name = "NetResponse" + file_name + packet_name
write_content += "void "+response_class_name+"::Clear()\n"
write_content += "{\n"
if net_response_data:
for net_package_data_key in net_response_data:
write_content += get_data_type_empty(
net_response_data[net_package_data_key], net_package_data_key)
write_content += "}\n"
write_content += "\n"
write_content += "bool "+response_class_name + \
"::ParseData(TSharedPtr<FJsonObject> const& Data)\n"
write_content += "{\n"
write_content += " return ParseData(&Data);\n"
write_content += "}\n"
write_content += "\n"
write_content += "bool "+response_class_name + \
"::ParseData(TSharedPtr<FJsonObject> const* Data)\n"
write_content += "{\n"
write_content += " Clear();\n"
write_content += "\n"
write_content += " if (nullptr == Data || !(*Data).IsValid())\n"
write_content += " {\n"
write_content += " return false;\n"
write_content += " }\n"
if net_response_data:
for net_package_data_key in net_response_data:
write_content += get_data_type_parse(
net_response_data[net_package_data_key], net_package_data_key)
write_content += "\n"
write_content += " return true;\n"
write_content += "}\n"
if net_response_data:
for net_package_data_key in net_response_data:
write_content += get_data_type_cpp_function(
response_class_name, net_response_data[net_package_data_key], net_package_data_key)
return write_content
def write_net_response(net_response_name):
write_h_content = "//Exported by Tool, please don't edit this file directly.\n"
write_h_content += "\n"
write_h_content += "#pragma once\n"
write_h_content += "\n"
write_h_content += "#include \"NetDef.h\"\n"
if "_" in response_config_data:
if "Import" in response_config_data["_"]:
write_h_content += "\n"
import_file_list = response_config_data["_"]["Import"]
for import_file in import_file_list:
write_h_content += "#include \"NetResponse"+import_file+".h\"\n"
write_h_content += "\n"
for net_response_key in response_config_data:
if net_response_key == "_":
continue
write_h_content += "class NetResponse"+net_response_name+net_response_key+";\n"
for net_response_key in response_config_data:
if net_response_key == "_":
continue
write_h_content += get_class_h_content(
net_response_name, net_response_key)
write_cpp_content = "//Exported by Tool, please don't edit this file directly.\n"
write_cpp_content += "\n"
write_cpp_content += "#include \"NetResponse"+net_response_name+".h\"\n"
write_cpp_content += "#include \"GameParser.h\"\n"
for net_response_key in response_config_data:
if net_response_key == "_":
continue
write_cpp_content += get_class_cpp_content(
net_response_name, net_response_key)
common.overwrite_file_content(
config_yaml_data["ResponseExportPath"]+"/NetResponse"+net_response_name+".h", write_h_content)
common.overwrite_file_content(
config_yaml_data["ResponseExportPath"]+"/NetResponse"+net_response_name+".cpp", write_cpp_content)
print("Write NetResponse "+net_response_name + " Success!")
if __name__ == "__main__":
# Set up the working environment (switch to the script's directory)
file_path = os.path.dirname(os.path.abspath(sys.argv[0]))
os.chdir(file_path)
cwd = os.getcwd()
# Load the data configuration
with open('../Config.yaml', 'r', encoding='utf-8') as config_yaml_file:
config_yaml_data = yaml.load(config_yaml_file, Loader=yaml.FullLoader)
config_yaml_file.close()
# print(config_yaml_data)
net_packet_name_list = []
common.clean_file_path(config_yaml_data["ResponseExportPath"])
config_files = os.listdir(config_yaml_data["ResponseConfigPath"])
for file_name in config_files:
with open(config_yaml_data["ResponseConfigPath"]+"/"+file_name, 'r', encoding='utf-8') as response_config_file:
response_config_data = yaml.load(
response_config_file, Loader=yaml.FullLoader)
response_config_file.close()
response_file_name = os.path.splitext(file_name)[0]
net_packet_name_list.append(response_file_name)
write_net_response(response_file_name)
write_net_packet_name()
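# --- Illustrative mapping (added for documentation; the YAML below is a
# hypothetical example, not taken from the project's real config files) ---
# A response config file Login.yaml containing:
#
#   Auth:
#     code: int32
#     token: string
#     items: Item[]
#
# would be exported as NetResponseLogin.h/.cpp with getters such as:
#
#   int32 NetResponseLoginAuth::code() const;
#   FString const& NetResponseLoginAuth::token() const;
#   NetResponseItem const& NetResponseLoginAuth::items(int32 Index) const;
#   int32 NetResponseLoginAuth::items_size() const;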
| 42.306977 | 119 | 0.579009 | 3,019 | 27,288 | 4.887049 | 0.04836 | 0.214111 | 0.32398 | 0.271858 | 0.884641 | 0.852379 | 0.83679 | 0.807103 | 0.76725 | 0.679951 | 0 | 0.0084 | 0.258392 | 27,288 | 644 | 120 | 42.372671 | 0.72066 | 0.002125 | 0 | 0.698706 | 0 | 0 | 0.221047 | 0.035556 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016636 | false | 0 | 0.014787 | 0 | 0.044362 | 0.003697 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b8d4ed6587b09981233cc668391922a010a6aa43 | 5,973 | py | Python | schedules/migrations/0001_initial.py | AntenehDev/HiLCoE | 3fad7e30e50f3aacb8bceecc12487a824cd66c6d | [
"MIT"
] | null | null | null | schedules/migrations/0001_initial.py | AntenehDev/HiLCoE | 3fad7e30e50f3aacb8bceecc12487a824cd66c6d | [
"MIT"
] | 2 | 2020-06-06T00:44:24.000Z | 2021-06-10T22:17:51.000Z | schedules/migrations/0001_initial.py | AntenehDev/HiLCoE | 3fad7e30e50f3aacb8bceecc12487a824cd66c6d | [
"MIT"
] | null | null | null | # Generated by Django 2.1 on 2019-11-09 06:17
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Announcement',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
],
),
migrations.CreateModel(
name='BatchNumber',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('batch_number', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='Course',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('course_code', models.CharField(max_length=5)),
('course_title', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='CourseFees',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('course_fee', models.CharField(max_length=255)),
('batch_number_fk', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.BatchNumber')),
('course_fk', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.Course')),
],
),
migrations.CreateModel(
name='CourseType',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('course_type', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='CreditHour',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('credit_hour', models.IntegerField()),
],
),
migrations.CreateModel(
name='LectureRoom',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('lecture_room', models.CharField(max_length=5)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('message', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='Schedule',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('batch_number_fk', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.BatchNumber')),
('course_fk', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.Course')),
('lecture_room_fk', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.LectureRoom')),
],
),
migrations.CreateModel(
name='ScheduleDay',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('schedule_day', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='ScheduleTime',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('schedule_time', models.CharField(max_length=255)),
],
),
migrations.AddField(
model_name='schedule',
name='schedule_day_fk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.ScheduleDay'),
),
migrations.AddField(
model_name='schedule',
name='schedule_time_fk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.ScheduleTime'),
),
migrations.AddField(
model_name='course',
name='course_type_fk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.CourseType'),
),
migrations.AddField(
model_name='course',
name='credit_hour_fk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.CreditHour'),
),
migrations.AddField(
model_name='announcement',
name='batch_number_fk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.BatchNumber'),
),
migrations.AddField(
model_name='announcement',
name='course_fk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.Course'),
),
migrations.AddField(
model_name='announcement',
name='message_fk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.Message'),
),
migrations.AddField(
model_name='announcement',
name='schedule_day_fk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.ScheduleDay'),
),
migrations.AddField(
model_name='announcement',
name='schedule_time_fk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='schedules.ScheduleTime'),
),
]
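# The migration above is applied with the standard Django command
# (shown for reference; "schedules" is this app's label):
#
#   python manage.py migrate schedules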
| 42.06338 | 128 | 0.581952 | 574 | 5,973 | 5.885017 | 0.127178 | 0.037892 | 0.062167 | 0.097691 | 0.829189 | 0.806394 | 0.741267 | 0.71344 | 0.636471 | 0.636471 | 0 | 0.008677 | 0.286121 | 5,973 | 141 | 129 | 42.361702 | 0.783537 | 0.007199 | 0 | 0.701493 | 1 | 0 | 0.137146 | 0.028677 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.014925 | 0 | 0.044776 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b8fe70ed03eceb0a2e8c8b1d02e9ff7df179e073 | 38 | py | Python | src/lib/pickle.py | DTenore/skulpt | 098d20acfb088d6db85535132c324b7ac2f2d212 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | src/lib/pickle.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | src/lib/pickle.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | import _sk_fail; _sk_fail._("pickle")
| 19 | 37 | 0.763158 | 6 | 38 | 4 | 0.666667 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.685714 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |