hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
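Each row stores a raw source file in its `content` column; the `qsc_*` columns are quality signals derived from that text. As a hedged illustration of what the simpler signals measure, the sketch below recomputes a few of them in plain Python; the exact formulas behind this table are not given here, so the counting conventions (whitespace word-splitting, raw character counts) are assumptions.

```python
# Hedged sketch: recompute a few of the simpler `qsc_*` signals from `content`.
# The definitions below are assumptions, not the pipeline's actual formulas.
def basic_signals(content: str) -> dict:
    lines = content.splitlines()
    words = content.split()           # assume whitespace word-splitting
    n_chars = len(content)
    return {
        "qsc_code_num_lines": len(lines),
        "qsc_code_num_chars": n_chars,
        "qsc_code_num_words": len(words),
        "qsc_code_mean_word_length": sum(map(len, words)) / max(len(words), 1),
        "qsc_code_frac_words_unique": len(set(words)) / max(len(words), 1),
        "qsc_code_frac_chars_whitespace": sum(c.isspace() for c in content) / max(n_chars, 1),
        "qsc_code_num_chars_line_max": max(map(len, lines), default=0),
    }

print(basic_signals("import datetime\ndef current_time():\n    pass\n"))
```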
f2756dc0ba9074c59cff3bbdbc29fab4c8c80ee3 | 159 | py | Python | func.py | OPI-py/pytorch_chatbot | 5abe87c77589f9a97b209ccccfd7e6236f698b58 | [
"BSD-2-Clause"
] | 1 | 2021-03-04T21:06:17.000Z | 2021-03-04T21:06:17.000Z | func.py | OPI-py/pytorch_chatbot | 5abe87c77589f9a97b209ccccfd7e6236f698b58 | [
"BSD-2-Clause"
] | null | null | null | func.py | OPI-py/pytorch_chatbot | 5abe87c77589f9a97b209ccccfd7e6236f698b58 | [
"BSD-2-Clause"
] | null | null | null | import datetime
def current_time():
return datetime.datetime.now().strftime('%H:%M')
def day_today():
return datetime.date.today().strftime('%A') | 22.714286 | 52 | 0.679245 | 21 | 159 | 5.047619 | 0.666667 | 0.264151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144654 | 159 | 7 | 53 | 22.714286 | 0.779412 | 0 | 0 | 0 | 0 | 0 | 0.04375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | true | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
f29c66f8e7d189d5053c2e4225e1681d2d088293 | 138 | py | Python | src/tests/__init__.py | lili668668/pretalx | 5ba2185ffd7c5f95254aafe25ad3de340a86eadb | [
"Apache-2.0"
] | 418 | 2017-10-05T05:52:49.000Z | 2022-03-24T09:50:06.000Z | src/tests/__init__.py | lili668668/pretalx | 5ba2185ffd7c5f95254aafe25ad3de340a86eadb | [
"Apache-2.0"
] | 1,049 | 2017-09-16T09:34:55.000Z | 2022-03-23T16:13:04.000Z | src/tests/__init__.py | lili668668/pretalx | 5ba2185ffd7c5f95254aafe25ad3de340a86eadb | [
"Apache-2.0"
] | 155 | 2017-10-16T18:32:01.000Z | 2022-03-15T12:48:33.000Z | from django.test import utils
from django_scopes import scopes_disabled
utils.setup_databases = scopes_disabled()(utils.setup_databases)
| 27.6 | 64 | 0.855072 | 19 | 138 | 5.947368 | 0.473684 | 0.176991 | 0.336283 | 0.424779 | 0.584071 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 138 | 4 | 65 | 34.5 | 0.896825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f2c67d3d36f1bea775683abbb0e454b2f17577f3 | 171 | py | Python | py/main_screen.py | richierh/SalesKivyMD | f445adc701946ff38865b4a1a00a03529142613e | [
"MIT"
] | null | null | null | py/main_screen.py | richierh/SalesKivyMD | f445adc701946ff38865b4a1a00a03529142613e | [
"MIT"
] | null | null | null | py/main_screen.py | richierh/SalesKivyMD | f445adc701946ff38865b4a1a00a03529142613e | [
"MIT"
] | null | null | null | from kivymd.uix.screen import MDScreen
from kivymd.uix.boxlayout import BoxLayout
class MainScreen(MDScreen):
pass
class ContentNavigationDrawer(BoxLayout):
pass | 21.375 | 42 | 0.807018 | 20 | 171 | 6.9 | 0.55 | 0.144928 | 0.188406 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134503 | 171 | 8 | 43 | 21.375 | 0.932432 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
4b673289efb8f2c0c3b351b7f86b65dd8f80e1da | 167 | py | Python | ambuild2/frontend/v2_0/cpp/__init__.py | Accelerator74/ambuild | b322668b7ee4063d2443623bc2b777cfad1695c1 | [
"BSD-3-Clause"
] | 34 | 2015-02-11T19:43:01.000Z | 2022-01-24T10:18:54.000Z | ambuild2/frontend/v2_0/cpp/__init__.py | Accelerator74/ambuild | b322668b7ee4063d2443623bc2b777cfad1695c1 | [
"BSD-3-Clause"
] | 64 | 2015-02-06T19:54:22.000Z | 2021-11-07T11:42:47.000Z | ambuild2/frontend/v2_0/cpp/__init__.py | Accelerator74/ambuild | b322668b7ee4063d2443623bc2b777cfad1695c1 | [
"BSD-3-Clause"
] | 32 | 2015-02-06T19:36:51.000Z | 2021-12-01T22:05:28.000Z | from ambuild2.frontend.v2_0.cpp.compilers import Compiler
from ambuild2.frontend.v2_0.cpp.builders import CppNodes
from ambuild2.frontend.v2_0.cpp.builders import Dep
| 41.75 | 57 | 0.856287 | 27 | 167 | 5.185185 | 0.444444 | 0.257143 | 0.428571 | 0.471429 | 0.757143 | 0.757143 | 0.571429 | 0.571429 | 0 | 0 | 0 | 0.058065 | 0.071856 | 167 | 3 | 58 | 55.666667 | 0.845161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
29a3dd552327127430df6a481dbcc2262ee281ac | 209 | py | Python | temboo/core/Library/FedEx/Locations/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/FedEx/Locations/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/FedEx/Locations/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | from temboo.Library.FedEx.Locations.SearchLocationsByAddress import SearchLocationsByAddress, SearchLocationsByAddressInputSet, SearchLocationsByAddressResultSet, SearchLocationsByAddressChoreographyExecution
| 104.5 | 208 | 0.933014 | 11 | 209 | 17.727273 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033493 | 209 | 1 | 209 | 209 | 0.965347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
29cec7f303b12a21a7edecf1fb6d15714b0787d8 | 1,749 | py | Python | asciiArt.py | owein-thuillier/generation_instances_TSP | f41c6923f6576f132df91fa37cb6b265502e0b77 | [
"MIT"
] | null | null | null | asciiArt.py | owein-thuillier/generation_instances_TSP | f41c6923f6576f132df91fa37cb6b265502e0b77 | [
"MIT"
] | null | null | null | asciiArt.py | owein-thuillier/generation_instances_TSP | f41c6923f6576f132df91fa37cb6b265502e0b77 | [
"MIT"
] | null | null | null | def asciiArtDebut():
print("\n\n\n ___________________________________________________________")
print("/ _____ _ \ ")
print("| / ____| | | |")
print("| | | __ ___ _ __ ___ _ __ __ _| |_ ___ _ _ _ __ |")
print("| | | |_ |/ _ \ '_ \ / _ \ '__/ _` | __/ _ \ | | | '__| |")
print("| | |__| | __/ | | | __/ | | (_| | || __/ |_| | | |")
print("| \_____|\___|_|_|_|\___|_| \__,_|\__\___|\__,_|_| |")
print("| | ( )_ _| | | |")
print("| __| |/ | | _ __ ___| |_ __ _ _ __ ___ ___ ___ |")
print("| / _` | | | | '_ \/ __| __/ _` | '_ \ / __/ _ \/ __| |")
print("| | (_| | _| |_| | | \__ \ || (_| | | | | (_| __/\__ \ |")
print("| \__,_| |_____|_| |_|___/\__\__,_|_| |_|\___\___||___/ |")
print("\___________________________________________________________/\n\n\n")
def asciiArtFin():
print(" ___________________________________")
print("/ ________ .-./`) ,---. .--. \ ")
print("| | |\ .-.')| \ | | |")
print("| | .----'/ `-' \| , \ | | |")
print("| | _|____ `-'`\"`| |\_ \| | |")
print("| |_( )_ | .---. | _( )_\ | |")
print("| (_ o._)__| | | | (_ o _) | |")
print("| |(_,_) | | | (_,_)\ | |")
print("| | | | | | | | | |")
print("| '---' '---' '--' '--' |")
print("\___________________________________/")
print("\n")
| 51.441176 | 137 | 0.311035 | 38 | 1,749 | 4.315789 | 0.157895 | 1.280488 | 1.646341 | 1.829268 | 0.743902 | 0.743902 | 0.579268 | 0.365854 | 0.365854 | 0.365854 | 0 | 0 | 0.445397 | 1,749 | 33 | 138 | 53 | 0.169072 | 0 | 0 | 0 | 0 | 0.259259 | 0.685175 | 0.14024 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | true | 0 | 0 | 0 | 0.074074 | 0.925926 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 9 |
4b0c1c7def3d8846e1d428782303e8d5764a0868 | 136 | py | Python | 867-transpose-matrix/867-transpose-matrix.py | Atri10/Leet-code---Atri_Patel | 49fc59b9147a44ab04a66128fbb2ef259b5f7b7c | [
"MIT"
] | 1 | 2021-10-10T20:21:18.000Z | 2021-10-10T20:21:18.000Z | 867-transpose-matrix/867-transpose-matrix.py | Atri10/Leet-code---Atri_Patel | 49fc59b9147a44ab04a66128fbb2ef259b5f7b7c | [
"MIT"
] | null | null | null | 867-transpose-matrix/867-transpose-matrix.py | Atri10/Leet-code---Atri_Patel | 49fc59b9147a44ab04a66128fbb2ef259b5f7b7c | [
"MIT"
] | null | null | null | from typing import List
class Solution:
def transpose(self, matrix: List[List[int]]) -> List[List[int]]:
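        # zip(*matrix) regroups the i-th entry of every row, yielding the columns (as tuples)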
return list(zip(*matrix)) | 27.2 | 68 | 0.558824 | 16 | 136 | 4.75 | 0.625 | 0.210526 | 0.289474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.286765 | 136 | 5 | 69 | 27.2 | 0.783505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
4b18a9547d40da4894846ddb3f872adbb9f5d6c4 | 117 | py | Python | rlcard/utils/__init__.py | rodrigodelazcano/rlcard | 0059d2026ef813de1cabee5a825d7dacc1f89f17 | [
"MIT"
] | 1 | 2021-06-20T16:25:42.000Z | 2021-06-20T16:25:42.000Z | rlcard/utils/__init__.py | hsywhu/rlcard | 963cf6886dfaf5f089e9c8d0039a1dbff87aca6d | [
"MIT"
] | null | null | null | rlcard/utils/__init__.py | hsywhu/rlcard | 963cf6886dfaf5f089e9c8d0039a1dbff87aca6d | [
"MIT"
] | null | null | null | from rlcard.utils.logger import Logger, plot_curve
from rlcard.utils import seeding
from rlcard.utils.utils import *
| 29.25 | 50 | 0.82906 | 18 | 117 | 5.333333 | 0.444444 | 0.3125 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 117 | 3 | 51 | 39 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4b1fc774e48301155931c6b6c765a093875cccba | 50,843 | py | Python | BNNfed/source/natural_raparametrization_layer.py | jparras/fed-baselines | fd410521ceabd01da9a216eed7fcf1aae7ed3589 | [
"MIT"
] | 1 | 2021-03-26T08:20:36.000Z | 2021-03-26T08:20:36.000Z | BNNfed/source/natural_raparametrization_layer.py | jparras/fed-baselines | fd410521ceabd01da9a216eed7fcf1aae7ed3589 | [
"MIT"
] | null | null | null | BNNfed/source/natural_raparametrization_layer.py | jparras/fed-baselines | fd410521ceabd01da9a216eed7fcf1aae7ed3589 | [
"MIT"
] | null | null | null | import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability.python import distributions as tfd
from tensorflow_probability.python.layers import util as tfp_layers_util
from tensorflow.python.layers import utils as tf_layers_util
from source.centered_layers import LayerCentered
from source.tfp_utils import precision_from_untransformed_scale, sparse_delta_function
from source.normal_natural import NormalNatural, eps
from tensorflow.python.keras.constraints import Constraint
from tensorflow.python.keras import backend as K
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import nn_ops
from tensorflow.python.eager import context
from tensorflow.python.framework import ops
from tensorflow.python.keras.utils import tf_utils
from tensorflow.python.keras.layers.recurrent import _caching_device
class NonNegPrec(Constraint):
def __call__(self, w):
prec = w[..., -1]
prec = prec * math_ops.cast(
math_ops.greater_equal(prec, eps), K.floatx())
return tf.stack([w[..., 0], prec], axis=-1)
class NaturalRegularizer(tf.keras.regularizers.Regularizer):
def __init__(self, regularizer=None):
self.regularizer = regularizer
def __call__(self, w):
return self.regularizer.call(w[..., 0])
class NaturalConstraint(tf.keras.constraints.Constraint):
def __init__(self, constraint):
self.constraint = constraint
def __call__(self, w):
gamma = self.constraint(w[..., 0])
return tf.stack([gamma, w[..., 1]], axis=-1)
def tensor_natural_par_fn(is_singular=False,
natural_initializer=tf.constant_initializer(0.),
natural_regularizer=None, natural_constraint=None,
**kwargs):
def _fn(dtype, shape, name, trainable, add_variable_fn):
"""Creates 'natural' parameters."""
natural = add_variable_fn(
name=name + '_natural',
shape=list(shape) + [2],
initializer=natural_initializer,
regularizer=natural_regularizer,
constraint=natural_constraint,
dtype=dtype,
trainable=trainable,
**kwargs)
return natural
return _fn
class VariationalReparametrizedNatural(LayerCentered):
def build_posterior_fn_natural(self, shape, dtype, name, posterior_fn,
prior_fn):
natural_par_shape = list(shape) + [2]
server_par = self.add_variable(name=name+'_server_par',
shape=natural_par_shape,
dtype=dtype, trainable=False,
initializer=tf.keras.initializers.zeros)
client_par = self.add_variable(name=name+'_client_par',
shape=natural_par_shape,
dtype=dtype, trainable=False,
initializer=tf.keras.initializers.zeros)
ratio_par = tfp.util.DeferredTensor(
server_par, lambda x: x - self.client_weight * client_par)
posterior_fn = posterior_fn(ratio_par)
prior_fn = prior_fn(ratio_par)
self.server_variable_dict[name] = server_par
self.client_center_variable_dict[name] = client_par
return posterior_fn, prior_fn
def initialize_kernel_posterior(self):
for key in self.client_variable_dict.keys():
self.client_variable_dict[key].assign(
self.server_variable_dict[key])
def apply_damping(self, damping_factor):
for key in self.server_variable_dict.keys():
damped = self.apply_delta_function(
self.client_variable_dict[key] * damping_factor,
self.client_center_variable_dict[key] * (1 - damping_factor))
self.client_variable_dict[key].assign(damped)
def renormalize_natural_mean_field_normal_fn(self, ratio_par):
def _fn(dtype, shape, name, trainable, add_variable_fn,
natural_initializer=None,
natural_regularizer=None, natural_constraint=NonNegPrec(),
**kwargs):
natural_par_fn = tensor_natural_par_fn(
natural_initializer=natural_initializer,
natural_regularizer=natural_regularizer,
natural_constraint=natural_constraint,
**kwargs)
natural = natural_par_fn(
dtype, shape, name, trainable, add_variable_fn)
self.client_variable_dict['_'.join(name.split('_')[0:-1])] = natural
natural_reparametrized = tfp.util.DeferredTensor(
natural, lambda x: x * self.client_weight + ratio_par)
gamma = tfp.util.DeferredTensor(
natural_reparametrized, lambda x: x[..., 0], shape=shape)
prec = tfp.util.DeferredTensor(
natural_reparametrized, lambda x: x[..., 1], shape=shape)
dist = NormalNatural(gamma=gamma, prec=prec)
batch_ndims = tf.size(dist.batch_shape_tensor())
return tfd.Independent(dist, reinterpreted_batch_ndims=batch_ndims)
return _fn
def natural_tensor_multivariate_normal_fn(self, ratio_par):
def _fn(dtype, shape, name, trainable, add_variable_fn,
initializer=natural_prior_initializer_fn(),
regularizer=None, constraint=None, **kwargs):
del trainable
natural_par_fn = tensor_natural_par_fn(
natural_initializer=initializer,
natural_regularizer=regularizer,
natural_constraint=constraint,
**kwargs)
natural = natural_par_fn(dtype, shape, name, False, add_variable_fn)
natural_reparametrized = tfp.util.DeferredTensor(
natural, lambda x: x * self.client_weight + ratio_par)
gamma = tfp.util.DeferredTensor(
natural_reparametrized, lambda x: x[..., 0], shape=shape)
prec = tfp.util.DeferredTensor(
natural_reparametrized, lambda x: x[..., 1], shape=shape)
dist = NormalNatural(gamma=gamma, prec=prec)
batch_ndims = tf.size(input=dist.batch_shape_tensor())
return tfd.Independent(dist, reinterpreted_batch_ndims=batch_ndims)
return _fn
class DenseSharedNatural(VariationalReparametrizedNatural):
def __init__(
self, units,
activation=None,
activity_regularizer=None,
client_weight=1.,
trainable=True,
kernel_posterior_fn=None,
kernel_posterior_tensor_fn=(lambda d: d.sample()),
kernel_prior_fn=None,
kernel_divergence_fn=(
lambda q, p, ignore: tfd.kl_divergence(q, p)),
bias_posterior_fn=tfp_layers_util.default_mean_field_normal_fn(
is_singular=True),
bias_posterior_tensor_fn=(lambda d: d.sample()),
bias_prior_fn=None,
bias_divergence_fn=(lambda q, p, ignore: tfd.kl_divergence(q, p)),
**kwargs):
self.untransformed_scale_initializer = None
if 'untransformed_scale_initializer' in kwargs:
self.untransformed_scale_initializer = \
kwargs.pop('untransformed_scale_initializer')
self.loc_initializer = None
if 'loc_initializer' in kwargs:
self.loc_initializer = \
kwargs.pop('loc_initializer')
self.delta_percentile = kwargs.pop('delta_percentile', None)
if kernel_posterior_fn is None:
kernel_posterior_fn = self.renormalize_natural_mean_field_normal_fn
if kernel_prior_fn is None:
kernel_prior_fn = self.natural_tensor_multivariate_normal_fn
super(DenseSharedNatural, self).\
__init__(units,
activation=activation,
activity_regularizer=activity_regularizer,
trainable=trainable,
kernel_posterior_fn=kernel_posterior_fn,
kernel_posterior_tensor_fn=kernel_posterior_tensor_fn,
kernel_prior_fn=kernel_prior_fn,
kernel_divergence_fn=kernel_divergence_fn,
bias_posterior_fn=bias_posterior_fn,
bias_posterior_tensor_fn=bias_posterior_tensor_fn,
bias_prior_fn=bias_prior_fn,
bias_divergence_fn=bias_divergence_fn,
**kwargs)
self.client_weight = client_weight
self.delta_function = tf.subtract
        if self.delta_percentile and activation != 'softmax':
            self.delta_function = sparse_delta_function(self.delta_percentile)
            print(self, activation, 'using delta sparsification')
self.apply_delta_function = tf.add
self.client_variable_dict = {}
self.client_center_variable_dict = {}
self.server_variable_dict = {}
def build(self, input_shape):
input_shape = tf.TensorShape(input_shape)
in_size = tf.compat.dimension_value(
input_shape.with_rank_at_least(2)[-1])
if in_size is None:
raise ValueError('The last dimension of the inputs to `Dense` '
'should be defined. Found `None`.')
self._input_spec = tf.keras.layers.InputSpec(
min_ndim=2, axes={-1: in_size})
# If self.dtype is None, build weights using the default dtype.
dtype = tf.as_dtype(self.dtype or tf.keras.backend.floatx())
shape = [in_size, self.units]
name = 'kernel'
self.kernel_posterior_fn, self.kernel_prior_fn = \
self.build_posterior_fn_natural(shape, dtype, name,
self.kernel_posterior_fn,
self.kernel_prior_fn)
natural_initializer = natural_initializer_fn(
loc_stdev=0.1, u_scale_init_avg=-5,
u_scale_init_stdev=0.1,
untransformed_scale_initializer=self.untransformed_scale_initializer,
loc_initializer=self.loc_initializer)
self.kernel_posterior = self.kernel_posterior_fn(
dtype, [in_size, self.units], 'kernel_posterior',
self.trainable, self.add_variable,
natural_initializer=natural_initializer)
if self.kernel_prior_fn is None:
self.kernel_prior = None
else:
self.kernel_prior = self.kernel_prior_fn(
dtype, [in_size, self.units], 'kernel_prior',
self.trainable, self.add_variable)
if self.bias_posterior_fn is None:
self.bias_posterior = None
else:
self.bias_posterior = self.bias_posterior_fn(
dtype, [self.units], 'bias_posterior',
self.trainable, self.add_variable)
if self.bias_prior_fn is None:
self.bias_prior = None
else:
self.bias_prior = self.bias_prior_fn(
dtype, [self.units], 'bias_prior',
self.trainable, self.add_variable)
if self.bias_posterior:
self.bias_center = self.add_weight(
'bias_center',
shape=[self.units, ],
initializer=tf.keras.initializers.constant(0.),
dtype=self.dtype,
trainable=False)
self.client_variable_dict['bias'] = self.bias_posterior.distribution.loc
self.server_variable_dict['bias'] = self.bias_posterior.distribution.loc
self.client_center_variable_dict['bias'] = self.bias_center
self.built = True
class DenseReparametrizationNaturalShared(
DenseSharedNatural, tfp.layers.DenseReparameterization):
pass
class DenseLocalReparametrizationNaturalShared(
DenseSharedNatural, tfp.layers.DenseLocalReparameterization):
def _apply_variational_kernel(self, inputs):
self.kernel_posterior_affine = tfd.Normal(
loc=tf.matmul(inputs, self.kernel_posterior.distribution.loc),
scale=tf.sqrt(tf.matmul(tf.math.square(inputs), tf.math.square(
self.kernel_posterior.distribution.scale))))
self.kernel_posterior_affine_tensor = (
self.kernel_posterior_tensor_fn(self.kernel_posterior_affine))
self.kernel_posterior_tensor = None
return self.kernel_posterior_affine_tensor
def natural_mean_field_normal_fn(natural_initializer=None):
def _fn(dtype, shape, name, trainable, add_variable_fn,
natural_initializer=natural_initializer,
natural_regularizer=None, natural_constraint=NonNegPrec(),
**kwargs):
natural_par_fn = tensor_natural_par_fn(
natural_initializer=natural_initializer,
natural_regularizer=natural_regularizer,
natural_constraint=natural_constraint,
**kwargs)
natural = natural_par_fn(dtype, shape, name, trainable, add_variable_fn)
gamma = tfp.util.DeferredTensor(
natural, lambda x: x[..., 0], shape=shape)
prec = tfp.util.DeferredTensor(
natural, lambda x: x[..., 1], shape=shape)
dist = NormalNatural(gamma=gamma, prec=prec)
batch_ndims = tf.size(dist.batch_shape_tensor())
return tfd.Independent(dist, reinterpreted_batch_ndims=batch_ndims)
return _fn
def natural_tensor_multivariate_normal_fn():
def _fn(dtype, shape, name, trainable, add_variable_fn,
initializer=natural_prior_initializer_fn(),
regularizer=None, constraint=None, **kwargs):
del trainable
natural_par_fn = tensor_natural_par_fn(natural_initializer=initializer,
natural_regularizer=regularizer,
natural_constraint=constraint,
**kwargs)
natural = natural_par_fn(dtype, shape, name, False, add_variable_fn)
gamma = tfp.util.DeferredTensor(
natural, lambda x: x[..., 0], shape=shape)
prec = tfp.util.DeferredTensor(
natural, lambda x: x[..., 1], shape=shape)
dist = NormalNatural(gamma=gamma, prec=prec)
batch_ndims = tf.size(input=dist.batch_shape_tensor())
return tfd.Independent(dist, reinterpreted_batch_ndims=batch_ndims)
return _fn
def natural_initializer_fn(loc_stdev=0.1, u_scale_init_avg=-5,
u_scale_init_stdev=0.1,
untransformed_scale_initializer=None,
loc_initializer=None):
if loc_initializer:
loc_init = loc_initializer
else:
loc_init = tf.random_normal_initializer(stddev=loc_stdev)
if untransformed_scale_initializer is None:
untransformed_scale_initializer = tf.random_normal_initializer(
mean=u_scale_init_avg, stddev=u_scale_init_stdev)
def natural_initializer(shape, dtype=tf.float32):
prec = precision_from_untransformed_scale(
untransformed_scale_initializer(shape[:-1], dtype))
gamma = loc_init(shape[:-1], dtype) * prec
natural = tf.stack([gamma, prec], axis=-1)
tf.debugging.check_numerics(natural, 'initializer')
return natural
return natural_initializer
def natural_prior_initializer_fn():
gamma_init = tf.constant_initializer(0.)
precision_init = tf.constant_initializer(1.)
def natural_initializer(shape, dtype):
prec = precision_init(shape[:-1], dtype)
gamma = gamma_init(shape[:-1], dtype)
natural = tf.stack([gamma, prec], axis=-1)
return natural
return natural_initializer
class Conv2DVirtualNatural(VariationalReparametrizedNatural,
tfp.layers.Convolution2DReparameterization):
def __init__(
self,
filters,
kernel_size,
strides=1,
padding='valid',
data_format='channels_last',
dilation_rate=1,
activation=None,
client_weight=1.,
activity_regularizer=None,
kernel_posterior_fn=None,
kernel_posterior_tensor_fn=(lambda d: d.sample()),
kernel_prior_fn=None,
kernel_divergence_fn=lambda q, p, ignore: tfd.kl_divergence(q, p),
bias_posterior_fn=
tfp_layers_util.default_mean_field_normal_fn(is_singular=True),
bias_posterior_tensor_fn=lambda d: d.sample(),
bias_prior_fn=None,
bias_divergence_fn=lambda q, p, ignore: tfd.kl_divergence(q, p),
**kwargs):
self.untransformed_scale_initializer = None
if 'untransformed_scale_initializer' in kwargs:
self.untransformed_scale_initializer = \
kwargs.pop('untransformed_scale_initializer')
self.loc_initializer = None
if 'loc_initializer' in kwargs:
self.loc_initializer = \
kwargs.pop('loc_initializer')
if kernel_posterior_fn is None:
kernel_posterior_fn = self.renormalize_natural_mean_field_normal_fn
if kernel_prior_fn is None:
kernel_prior_fn = self.natural_tensor_multivariate_normal_fn
super(Conv2DVirtualNatural, self).__init__(
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=tf.keras.activations.get(activation),
activity_regularizer=activity_regularizer,
kernel_posterior_fn=kernel_posterior_fn,
kernel_posterior_tensor_fn=kernel_posterior_tensor_fn,
kernel_prior_fn=kernel_prior_fn,
kernel_divergence_fn=kernel_divergence_fn,
bias_posterior_fn=bias_posterior_fn,
bias_posterior_tensor_fn=bias_posterior_tensor_fn,
bias_prior_fn=bias_prior_fn,
bias_divergence_fn=bias_divergence_fn,
**kwargs)
        self.client_weight = client_weight
self.delta_function = tf.subtract
self.apply_delta_function = tf.add
self.client_variable_dict = {}
self.client_center_variable_dict = {}
self.server_variable_dict = {}
def build(self, input_shape):
input_shape = tf.TensorShape(input_shape)
if self.data_format == 'channels_first':
channel_axis = 1
else:
channel_axis = -1
input_dim = tf.compat.dimension_value(input_shape[channel_axis])
if input_dim is None:
raise ValueError('The channel dimension of the inputs '
'should be defined. Found `None`.')
kernel_shape = self.kernel_size + (input_dim, self.filters)
# If self.dtype is None, build weights using the default dtype.
dtype = tf.as_dtype(self.dtype or tf.keras.backend.floatx())
name = 'kernel'
self.kernel_posterior_fn, self.kernel_prior_fn = \
self.build_posterior_fn_natural(kernel_shape, dtype, name,
self.kernel_posterior_fn,
self.kernel_prior_fn)
natural_initializer = natural_initializer_fn(
loc_stdev=0.1,
u_scale_init_avg=-5,
u_scale_init_stdev=0.1,
untransformed_scale_initializer=self.untransformed_scale_initializer)
self.kernel_posterior = self.kernel_posterior_fn(
dtype, kernel_shape, 'kernel_posterior',
self.trainable, self.add_variable,
natural_initializer=natural_initializer)
if self.kernel_prior_fn is None:
self.kernel_prior = None
else:
self.kernel_prior = self.kernel_prior_fn(
dtype, kernel_shape, 'kernel_prior',
self.trainable, self.add_variable)
self._built_kernel_divergence = False
if self.bias_posterior_fn is None:
self.bias_posterior = None
else:
self.bias_posterior = self.bias_posterior_fn(
dtype, (self.filters,), 'bias_posterior',
self.trainable, self.add_variable)
if self.bias_prior_fn is None:
self.bias_prior = None
else:
self.bias_prior = self.bias_prior_fn(
dtype, (self.filters,), 'bias_prior',
self.trainable, self.add_variable)
self._built_bias_divergence = False
self.input_spec = tf.keras.layers.InputSpec(
ndim=self.rank + 2, axes={channel_axis: input_dim})
self._convolution_op = nn_ops.Convolution(
input_shape,
filter_shape=tf.TensorShape(kernel_shape),
dilation_rate=self.dilation_rate,
strides=self.strides,
padding=self.padding.upper(),
data_format=tf_layers_util.convert_data_format(
self.data_format, self.rank + 2))
if self.bias_posterior:
self.bias_center = self.add_weight(
'bias_center',
                shape=[self.filters, ],
initializer=tf.keras.initializers.constant(0.),
dtype=self.dtype,
trainable=False)
self.client_variable_dict['bias'] = self.bias_posterior.distribution.loc
self.server_variable_dict['bias'] = self.bias_posterior.distribution.loc
self.client_center_variable_dict['bias'] = self.bias_center
self.built = True
class Conv1DVirtualNatural(tfp.layers.Convolution1DReparameterization,
VariationalReparametrizedNatural):
def __init__(
self,
filters,
kernel_size,
strides=1,
padding='valid',
client_weight=1.,
data_format='channels_last',
dilation_rate=1,
activation=None,
activity_regularizer=None,
kernel_posterior_fn=None,
kernel_posterior_tensor_fn=(lambda d: d.sample()),
kernel_prior_fn=None,
kernel_divergence_fn=lambda q, p, ignore: tfd.kl_divergence(q, p),
bias_posterior_fn=
tfp_layers_util.default_mean_field_normal_fn(is_singular=True),
bias_posterior_tensor_fn=lambda d: d.sample(),
bias_prior_fn=None,
bias_divergence_fn=lambda q, p, ignore: tfd.kl_divergence(q, p),
**kwargs):
self.untransformed_scale_initializer = None
if 'untransformed_scale_initializer' in kwargs:
self.untransformed_scale_initializer = \
kwargs.pop('untransformed_scale_initializer')
if kernel_posterior_fn is None:
kernel_posterior_fn = self.renormalize_natural_mean_field_normal_fn
if kernel_prior_fn is None:
kernel_prior_fn = self.natural_tensor_multivariate_normal_fn
super(Conv1DVirtualNatural, self).__init__(
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=tf.keras.activations.get(activation),
activity_regularizer=activity_regularizer,
kernel_posterior_fn=kernel_posterior_fn,
kernel_posterior_tensor_fn=kernel_posterior_tensor_fn,
kernel_prior_fn=kernel_prior_fn,
kernel_divergence_fn=kernel_divergence_fn,
bias_posterior_fn=bias_posterior_fn,
bias_posterior_tensor_fn=bias_posterior_tensor_fn,
bias_prior_fn=bias_prior_fn,
bias_divergence_fn=bias_divergence_fn,
**kwargs)
self.client_weight = client_weight
self.delta_function = tf.subtract
self.apply_delta_function = tf.add
self.client_variable_dict = {}
self.client_center_variable_dict = {}
self.server_variable_dict = {}
def build(self, input_shape):
input_shape = tf.TensorShape(input_shape)
if self.data_format == 'channels_first':
channel_axis = 1
else:
channel_axis = -1
input_dim = tf.compat.dimension_value(input_shape[channel_axis])
if input_dim is None:
raise ValueError('The channel dimension of the inputs '
'should be defined. Found `None`.')
kernel_shape = self.kernel_size + (input_dim, self.filters)
# If self.dtype is None, build weights using the default dtype.
dtype = tf.as_dtype(self.dtype or tf.keras.backend.floatx())
name = 'kernel'
self.kernel_posterior_fn, self.kernel_prior_fn = \
self.build_posterior_fn_natural(kernel_shape, dtype, name,
self.kernel_posterior_fn,
self.kernel_prior_fn)
natural_initializer = natural_initializer_fn(
loc_stdev=0.1,
u_scale_init_avg=-5,
u_scale_init_stdev=0.1,
untransformed_scale_initializer=self.untransformed_scale_initializer)
# Must have a posterior kernel.
self.kernel_posterior = self.kernel_posterior_fn(
dtype, kernel_shape, 'kernel_posterior',
self.trainable, self.add_variable,
natural_initializer=natural_initializer)
if self.kernel_prior_fn is None:
self.kernel_prior = None
else:
self.kernel_prior = self.kernel_prior_fn(
dtype, kernel_shape, 'kernel_prior',
self.trainable, self.add_variable)
self._built_kernel_divergence = False
if self.bias_posterior_fn is None:
self.bias_posterior = None
else:
self.bias_posterior = self.bias_posterior_fn(
dtype, (self.filters,), 'bias_posterior',
self.trainable, self.add_variable)
if self.bias_prior_fn is None:
self.bias_prior = None
else:
self.bias_prior = self.bias_prior_fn(
dtype, (self.filters,), 'bias_prior',
self.trainable, self.add_variable)
self._built_bias_divergence = False
self.input_spec = tf.keras.layers.InputSpec(
ndim=self.rank + 2, axes={channel_axis: input_dim})
self._convolution_op = nn_ops.Convolution(
input_shape,
filter_shape=tf.TensorShape(kernel_shape),
dilation_rate=self.dilation_rate,
strides=self.strides,
padding=self.padding.upper(),
data_format=tf_layers_util.convert_data_format(
self.data_format, self.rank + 2))
if self.bias_posterior:
self.bias_center = self.add_weight(
'bias_center',
                shape=[self.filters, ],
initializer=tf.keras.initializers.constant(0.),
dtype=self.dtype,
trainable=False)
self.client_variable_dict['bias'] = self.bias_posterior.distribution.loc
self.server_variable_dict['bias'] = self.bias_posterior.distribution.loc
self.client_center_variable_dict['bias'] = self.bias_center
self.built = True
class NaturalGaussianEmbedding(
tf.keras.layers.Embedding, VariationalReparametrizedNatural):
def __init__(self,
input_dim,
output_dim,
mask_zero=False,
input_length=None,
client_weight=1.,
trainable=True,
embeddings_initializer=tf.keras.initializers.RandomUniform(
-0.01, 0.01),
embedding_posterior_fn=None,
embedding_posterior_tensor_fn=(lambda d: d.sample()),
embedding_prior_fn=None,
embedding_divergence_fn=(
lambda q, p, ignore: tfd.kl_divergence(q, p)),
**kwargs
):
self.untransformed_scale_initializer = None
if 'untransformed_scale_initializer' in kwargs:
self.untransformed_scale_initializer = \
kwargs.pop('untransformed_scale_initializer')
if embedding_posterior_fn is None:
embedding_posterior_fn = self.renormalize_natural_mean_field_normal_fn
if embedding_prior_fn is None:
embedding_prior_fn = self.natural_tensor_multivariate_normal_fn
super(NaturalGaussianEmbedding, self).__init__(input_dim,
output_dim,
mask_zero=mask_zero,
input_length=input_length,
trainable=trainable,
embeddings_initializer=embeddings_initializer,
**kwargs)
self.client_weight = client_weight
self.delta_function = tf.subtract
self.apply_delta_function = tf.add
self.embedding_posterior_fn = embedding_posterior_fn
self.embedding_prior_fn = embedding_prior_fn
self.embedding_posterior_tensor_fn = embedding_posterior_tensor_fn
self.embedding_divergence_fn = embedding_divergence_fn
self.client_variable_dict = {}
self.client_center_variable_dict = {}
self.server_variable_dict = {}
def build(self, input_shape):
dtype = tf.as_dtype(self.dtype or tf.keras.backend.floatx())
shape = (self.input_dim, self.output_dim)
if context.executing_eagerly() and context.context().num_gpus():
with ops.device('cpu:0'):
self.embedding_posterior_fn, self.embedding_prior_fn = \
self.build_posterior_fn_natural(shape, dtype, 'embedding',
self.embedding_posterior_fn,
self.embedding_prior_fn)
else:
self.embedding_posterior_fn, self.embedding_prior_fn = \
self.build_posterior_fn_natural(shape, dtype, 'embedding',
self.embedding_posterior_fn,
self.embedding_prior_fn)
natural_initializer = natural_initializer_fn(
untransformed_scale_initializer=self.untransformed_scale_initializer,
loc_initializer=self.embeddings_initializer)
self.embedding_posterior = self.embedding_posterior_fn(
dtype, shape, 'embedding_posterior',
self.trainable, self.add_variable,
natural_initializer=natural_initializer)
self.embedding_prior = self.embedding_prior_fn(
dtype, shape, 'embedding_prior',
self.trainable, self.add_variable)
self.built = True
def _apply_divergence(self, divergence_fn, posterior, prior,
posterior_tensor, name):
        if (divergence_fn is None or
                posterior is None or
                prior is None):
            return
divergence = tf.identity(
divergence_fn(
posterior, prior, posterior_tensor),
name=name)
self.add_loss(divergence)
def call(self, inputs):
self.embeddings = self.embedding_posterior_tensor_fn(self.embedding_posterior)
self._apply_divergence(self.embedding_divergence_fn,
self.embedding_posterior,
self.embedding_prior,
self.embeddings,
name='divergence_embeddings')
return super(NaturalGaussianEmbedding, self).call(inputs)
class LSTMCellVariationalNatural(tf.keras.layers.LSTMCell, VariationalReparametrizedNatural):
def __init__(self,
units,
activation='tanh',
recurrent_activation='hard_sigmoid',
use_bias=True,
kernel_initializer=tf.keras.initializers.VarianceScaling(scale=30.0,
mode='fan_avg',
distribution='uniform',),
recurrent_initializer=tf.keras.initializers.Orthogonal(gain=7.0),
bias_initializer='zeros',
unit_forget_bias=True,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
dropout=0.,
recurrent_dropout=0.,
implementation=1,
kernel_posterior_fn=None,
kernel_posterior_tensor_fn=(lambda d: d.sample()),
recurrent_kernel_posterior_fn=None,
recurrent_kernel_posterior_tensor_fn=(lambda d: d.sample()),
kernel_prior_fn=None,
recurrent_kernel_prior_fn=None,
kernel_divergence_fn=(lambda q, p, ignore: tfd.kl_divergence(q, p)),
recurrent_kernel_divergence_fn=(lambda q, p, ignore: tfd.kl_divergence(q, p)),
bias_posterior_fn=tfp_layers_util.default_mean_field_normal_fn(
is_singular=True),
bias_posterior_tensor_fn=(lambda d: d.sample()),
bias_prior_fn=None,
bias_divergence_fn=(lambda q, p, ignore: tfd.kl_divergence(q, p)),
client_weight=1.,
**kwargs):
self.untransformed_scale_initializer = kwargs.pop('untransformed_scale_initializer', None)
if kernel_posterior_fn is None:
kernel_posterior_fn = self.renormalize_natural_mean_field_normal_fn
if kernel_prior_fn is None:
kernel_prior_fn = self.natural_tensor_multivariate_normal_fn
if recurrent_kernel_posterior_fn is None:
recurrent_kernel_posterior_fn = self.renormalize_natural_mean_field_normal_fn
if recurrent_kernel_prior_fn is None:
recurrent_kernel_prior_fn = self.natural_tensor_multivariate_normal_fn
super(LSTMCellVariationalNatural, self).__init__(
units,
activation=activation,
recurrent_activation=recurrent_activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
recurrent_initializer=recurrent_initializer,
bias_initializer=bias_initializer,
unit_forget_bias=unit_forget_bias,
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
kernel_constraint=kernel_constraint,
recurrent_constraint=recurrent_constraint,
bias_constraint=bias_constraint,
dropout=dropout,
recurrent_dropout=recurrent_dropout,
implementation=implementation,
**kwargs)
self.kernel_posterior_fn = kernel_posterior_fn
self.kernel_posterior_tensor_fn = kernel_posterior_tensor_fn
self.recurrent_kernel_posterior_fn = recurrent_kernel_posterior_fn
self.recurrent_kernel_posterior_tensor_fn = recurrent_kernel_posterior_tensor_fn
self.kernel_prior_fn = kernel_prior_fn
self.recurrent_kernel_prior_fn = recurrent_kernel_prior_fn
self.kernel_divergence_fn = kernel_divergence_fn
self.recurrent_kernel_divergence_fn = recurrent_kernel_divergence_fn
self.bias_posterior_fn = bias_posterior_fn
self.bias_posterior_tensor_fn = bias_posterior_tensor_fn
self.bias_prior_fn = bias_prior_fn
self.bias_divergence_fn = bias_divergence_fn
self.client_weight = client_weight
self.delta_function = tf.subtract
self.apply_delta_function = tf.add
self.client_variable_dict = {}
self.client_center_variable_dict = {}
self.server_variable_dict = {}
@tf_utils.shape_type_conversion
def build(self, input_shape):
default_caching_device = _caching_device(self)
input_dim = input_shape[-1]
shape_kernel = (input_dim, self.units * 4)
shape_recurrent = (self.units, self.units * 4)
dtype = tf.as_dtype(self.dtype or tf.keras.backend.floatx())
self.kernel_posterior_fn, self.kernel_prior_fn = \
self.build_posterior_fn_natural(shape_kernel, dtype, 'kernel',
self.kernel_posterior_fn,
self.kernel_prior_fn)
self.recurrent_kernel_posterior_fn, self.recurrent_kernel_prior_fn = \
self.build_posterior_fn_natural(shape_recurrent, dtype,
'recurrent_kernel',
self.recurrent_kernel_posterior_fn,
self.recurrent_kernel_prior_fn)
kernel_initializer = natural_initializer_fn(
loc_stdev=0.1,
u_scale_init_avg=-5,
u_scale_init_stdev=0.1,
untransformed_scale_initializer=self.untransformed_scale_initializer,
loc_initializer=self.kernel_initializer)
        if self.kernel_regularizer:
            self.kernel_regularizer = NaturalRegularizer(
                self.kernel_regularizer)
        if self.kernel_constraint:
            self.kernel_constraint = NaturalConstraint(
                self.kernel_constraint)
self.kernel_posterior = self.kernel_posterior_fn(
dtype, shape_kernel, 'kernel_posterior', self.trainable,
self.add_variable,
natural_initializer=kernel_initializer,
natural_regularizer=self.kernel_regularizer,
natural_constraint=self.kernel_constraint,
caching_device=default_caching_device)
if self.kernel_prior_fn is None:
self.kernel_prior = None
else:
self.kernel_prior = self.kernel_prior_fn(
dtype, shape_kernel, 'kernel_prior',
self.trainable, self.add_variable)
recurrent_initializer = natural_initializer_fn(
loc_stdev=0.1,
u_scale_init_avg=-5,
u_scale_init_stdev=0.1,
untransformed_scale_initializer=
self.untransformed_scale_initializer,
loc_initializer=self.recurrent_initializer)
if self.recurrent_regularizer:
self.recurrent_regularizer = NaturalRegularizer(
self.recurrent_regularizer)
if self.recurrent_constraint:
self.recurrent_constraint = NaturalConstraint(
self.recurrent_constraint)
self.recurrent_kernel_posterior = self.recurrent_kernel_posterior_fn(
dtype, shape_recurrent, 'recurrent_kernel_posterior',
self.trainable,
self.add_variable,
natural_initializer=recurrent_initializer,
natural_regularizer=self.recurrent_regularizer,
natural_constraint=self.recurrent_constraint,
caching_device=default_caching_device)
if self.recurrent_kernel_prior_fn is None:
self.recurrent_kernel_prior = None
else:
self.recurrent_kernel_prior = self.recurrent_kernel_prior_fn(
dtype, shape_recurrent, 'recurrent_kernel_prior',
self.trainable, self.add_variable)
if self.use_bias:
if self.unit_forget_bias:
def bias_initializer(_, *args, **kwargs):
return K.concatenate([
self.bias_initializer((self.units,), *args, **kwargs),
tf.keras.initializers.Ones()((self.units,), *args, **kwargs),
self.bias_initializer((self.units * 2,), *args, **kwargs),
])
else:
bias_initializer = self.bias_initializer
self.bias = self.add_weight(
shape=(self.units * 4,),
name='bias',
initializer=bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
caching_device=default_caching_device)
else:
self.bias = None
self._apply_divergence(
self.kernel_divergence_fn,
self.kernel_posterior,
self.kernel_prior,
name='divergence_kernel')
self._apply_divergence(
self.recurrent_kernel_divergence_fn,
self.recurrent_kernel_posterior,
self.recurrent_kernel_prior,
name='divergence_recurrent_kernel')
self.built = True
def _apply_divergence(self, divergence_fn, posterior, prior, name,
posterior_tensor=None):
divergence = tf.identity(
divergence_fn(
posterior, prior, posterior_tensor),
name=name)
self.add_loss(divergence)
def sample_weights(self):
self.kernel = self.kernel_posterior_tensor_fn(self.kernel_posterior)
self.recurrent_kernel = self.recurrent_kernel_posterior_tensor_fn(
self.recurrent_kernel_posterior)
class LSTMCellReparametrizationNatural(tf.keras.layers.LSTMCell):
def __init__(self,
units,
activation='tanh',
recurrent_activation='hard_sigmoid',
use_bias=True,
kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros',
unit_forget_bias=True,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
dropout=0.,
recurrent_dropout=0.,
implementation=1,
kernel_posterior_fn=None,
kernel_posterior_tensor_fn=(lambda d: d.sample()),
recurrent_kernel_posterior_fn=None,
recurrent_kernel_posterior_tensor_fn=(lambda d: d.sample()),
kernel_prior_fn=None,
recurrent_kernel_prior_fn=None,
kernel_divergence_fn=(lambda q, p, ignore: tfd.kl_divergence(q, p)),
recurrent_kernel_divergence_fn=(lambda q, p, ignore: tfd.kl_divergence(q, p)),
bias_posterior_fn=tfp_layers_util.default_mean_field_normal_fn(
is_singular=True),
bias_posterior_tensor_fn=(lambda d: d.sample()),
bias_prior_fn=None,
bias_divergence_fn=(lambda q, p, ignore: tfd.kl_divergence(q, p)),
client_weight=1.,
**kwargs):
self.untransformed_scale_initializer = None
if 'untransformed_scale_initializer' in kwargs:
self.untransformed_scale_initializer = \
kwargs.pop('untransformed_scale_initializer')
if kernel_posterior_fn is None:
kernel_posterior_fn = self.renormalize_natural_mean_field_normal_fn
if kernel_prior_fn is None:
kernel_prior_fn = self.natural_tensor_multivariate_normal_fn
if recurrent_kernel_posterior_fn is None:
recurrent_kernel_posterior_fn = self.renormalize_natural_mean_field_normal_fn
if recurrent_kernel_prior_fn is None:
recurrent_kernel_prior_fn = self.natural_tensor_multivariate_normal_fn
super(LSTMCellReparametrizationNatural, self).__init__(
units,
activation=activation,
recurrent_activation=recurrent_activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
recurrent_initializer=recurrent_initializer,
bias_initializer=bias_initializer,
unit_forget_bias=unit_forget_bias,
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
kernel_constraint=kernel_constraint,
recurrent_constraint=recurrent_constraint,
bias_constraint=bias_constraint,
dropout=dropout,
recurrent_dropout=recurrent_dropout,
implementation=implementation,
**kwargs)
self.kernel_posterior_fn = kernel_posterior_fn
self.kernel_posterior_tensor_fn = kernel_posterior_tensor_fn
self.recurrent_kernel_posterior_fn = recurrent_kernel_posterior_fn
self.recurrent_kernel_posterior_tensor_fn = recurrent_kernel_posterior_tensor_fn
self.kernel_prior_fn = kernel_prior_fn
self.recurrent_kernel_prior_fn = recurrent_kernel_prior_fn
self.kernel_divergence_fn = kernel_divergence_fn
self.recurrent_kernel_divergence_fn = recurrent_kernel_divergence_fn
self.bias_posterior_fn = bias_posterior_fn
self.bias_posterior_tensor_fn = bias_posterior_tensor_fn
self.bias_prior_fn = bias_prior_fn
self.bias_divergence_fn = bias_divergence_fn
self.client_weight = client_weight
@tf_utils.shape_type_conversion
def build(self, input_shape):
default_caching_device = _caching_device(self)
input_dim = input_shape[-1]
shape_kernel = (input_dim, self.units * 4)
shape_recurrent = (self.units, self.units * 4)
dtype = tf.as_dtype(self.dtype or tf.keras.backend.floatx())
kernel_initializer = natural_initializer_fn(
loc_stdev=0.1,
u_scale_init_avg=-5,
u_scale_init_stdev=0.1,
untransformed_scale_initializer=self.untransformed_scale_initializer,
loc_initializer=self.kernel_initializer)
self.kernel_posterior = self.kernel_posterior_fn(dtype, shape_kernel,
'kernel_posterior',
self.trainable,
self.add_variable,
natural_initializer=kernel_initializer)
if self.kernel_prior_fn is None:
self.kernel_prior = None
else:
self.kernel_prior = self.kernel_prior_fn(
dtype, shape_kernel, 'kernel_prior',
self.trainable, self.add_variable)
recurrent_initializer = natural_initializer_fn(
loc_stdev=0.1,
u_scale_init_avg=-5,
u_scale_init_stdev=0.1,
untransformed_scale_initializer=self.untransformed_scale_initializer,
loc_initializer=self.recurrent_initializer)
self.recurrent_kernel_posterior = \
self.recurrent_kernel_posterior_fn(dtype, shape_recurrent,
'recurrent_kernel_posterior',
self.trainable,
self.add_variable,
natural_initializer=recurrent_initializer)
if self.recurrent_kernel_prior_fn is None:
self.recurrent_kernel_prior = None
else:
self.recurrent_kernel_prior = self.recurrent_kernel_prior_fn(
dtype, shape_recurrent, 'recurrent_kernel_prior',
self.trainable, self.add_variable)
if self.use_bias:
if self.unit_forget_bias:
def bias_initializer(_, *args, **kwargs):
return K.concatenate([
self.bias_initializer((self.units,), *args, **kwargs),
tf.keras.initializers.Ones()((self.units,), *args, **kwargs),
self.bias_initializer((self.units * 2,), *args, **kwargs),
])
else:
bias_initializer = self.bias_initializer
self.bias = self.add_weight(
shape=(self.units * 4,),
name='bias',
initializer=bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
caching_device=default_caching_device)
else:
self.bias = None
self._apply_divergence(
self.kernel_divergence_fn,
self.kernel_posterior,
self.kernel_prior,
name='divergence_kernel')
self._apply_divergence(
self.recurrent_kernel_divergence_fn,
self.recurrent_kernel_posterior,
self.recurrent_kernel_prior,
name='divergence_recurrent_kernel')
self.built = True
def _apply_divergence(self, divergence_fn, posterior, prior, name,
posterior_tensor=None):
divergence = tf.identity(
divergence_fn(
posterior, prior, posterior_tensor),
name=name)
self.add_loss(divergence)
def sample_weights(self):
self.kernel = self.kernel_posterior_tensor_fn(self.kernel_posterior)
self.recurrent_kernel = self.recurrent_kernel_posterior_tensor_fn(
self.recurrent_kernel_posterior)
class RNNVarReparametrized(tf.keras.layers.RNN):
def compute_delta(self):
return self.cell.compute_delta()
def renew_center(self, center_to_update=True):
self.cell.renew_center(center_to_update)
def apply_delta(self, delta):
self.cell.apply_delta(delta)
def receive_and_save_weights(self, layer_server):
self.cell.receive_and_save_weights(layer_server.cell)
def initialize_kernel_posterior(self):
self.cell.initialize_kernel_posterior()
def apply_damping(self, damping_factor):
self.cell.apply_damping(damping_factor)
def call(self,
inputs,
mask=None,
training=None,
initial_state=None,
constants=None):
self.cell.sample_weights()
return super(RNNVarReparametrized, self).call(
inputs,
mask=mask,
training=training,
initial_state=initial_state,
constants=constants)
| 42.546444 | 105 | 0.61804 | 5,310 | 50,843 | 5.549529 | 0.052731 | 0.058029 | 0.026028 | 0.023415 | 0.810099 | 0.778234 | 0.75828 | 0.749084 | 0.743484 | 0.733915 | 0 | 0.003702 | 0.309344 | 50,843 | 1,194 | 106 | 42.582077 | 0.835483 | 0.004838 | 0 | 0.75468 | 0 | 0 | 0.028465 | 0.010121 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049261 | false | 0.000985 | 0.015764 | 0.003941 | 0.102463 | 0.000985 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
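The file in the row above stores every Gaussian weight by its natural parameters `(gamma, prec)` instead of `(loc, scale)`: `natural_initializer_fn` sets `gamma = loc * prec`, with `prec` obtained from an untransformed scale through `precision_from_untransformed_scale`. The following standalone sketch shows that mapping under the usual convention `prec = 1 / scale**2` (an assumption; the repo's helper is not reproduced in this row):

```python
import math

def to_natural(loc, scale):
    # Natural parameters of N(loc, scale**2): precision, and the
    # precision-weighted mean (cf. `gamma = loc_init(...) * prec` above).
    prec = 1.0 / scale ** 2
    return loc * prec, prec

def from_natural(gamma, prec):
    # Inverse mapping back to (loc, scale).
    return gamma / prec, 1.0 / math.sqrt(prec)

gamma, prec = to_natural(0.5, 0.1)
loc, scale = from_natural(gamma, prec)
assert math.isclose(loc, 0.5) and math.isclose(scale, 0.1)
```

In this parameterization, multiplying Gaussian densities amounts to adding their `gamma` and `prec` values, which is presumably why the server/client combination steps in the layers above reduce to plain addition, subtraction, and scaling of these tensors.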
d9aed0d58f5674e927bed724c9f9e664a273ba60 | 17,156 | py | Python | create_dataset/tests/create_nexus_dataset/tests_create_nexus_dataset.py | danmcelroy/VoSeq | e22bd5d971154170bf3f4f24b684b95a12418637 | [
"BSD-3-Clause"
] | null | null | null | create_dataset/tests/create_nexus_dataset/tests_create_nexus_dataset.py | danmcelroy/VoSeq | e22bd5d971154170bf3f4f24b684b95a12418637 | [
"BSD-3-Clause"
] | null | null | null | create_dataset/tests/create_nexus_dataset/tests_create_nexus_dataset.py | danmcelroy/VoSeq | e22bd5d971154170bf3f4f24b684b95a12418637 | [
"BSD-3-Clause"
] | null | null | null | import os
from django.conf import settings
from django.test import TestCase
from django.test.client import Client
from django.core.management import call_command
from create_dataset.utils import CreateDataset
from public_interface.models import Genes
from public_interface.models import GeneSets
from public_interface.models import TaxonSets
class CreateNexusDatasetTest(TestCase):
def setUp(self):
args = []
opts = {'dumpfile': settings.MEDIA_ROOT + 'test_data.xml', 'verbosity': 0}
cmd = 'migrate_db'
call_command(cmd, *args, **opts)
self.base_dir = os.path.dirname(os.path.dirname(__file__))
gene_set = GeneSets.objects.get(geneset_name='all_genes')
taxon_set = TaxonSets.objects.get(taxonset_name='all_taxa')
self.cleaned_data = {
'gene_codes': '',
'taxonset': taxon_set,
'voucher_codes': '',
'geneset': gene_set,
'taxon_names': ['CODE', 'GENUS', 'SPECIES'],
'translations': False,
'degen_translations': 'normal',
'number_genes': None,
'positions': ['ALL'],
'partition_by_positions': 'by gene',
'file_format': 'NEXUS',
'aminoacids': False,
'outgroup': None,
}
self.c = Client()
self.dataset_creator = CreateDataset(self.cleaned_data)
self.maxDiff = None
def test_nexus_all_codons_as_one(self):
dataset_file = os.path.join(self.base_dir, 'create_nexus_dataset', 'dataset.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['ALL']
cleaned_data['partition_by_positions'] = 'by gene'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_all_codons_partitioned_as_each(self):
dataset_file = os.path.join(
self.base_dir, 'create_nexus_dataset', 'dataset_partitioned_as_each.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['ALL']
cleaned_data['partition_by_positions'] = 'by codon position'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_all_codons_partitioned_as_1st2nd_3rd(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_partitioned_as_1st2nd_3rd.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['ALL']
cleaned_data['partition_by_positions'] = '1st-2nd, 3rd'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_1st_codon_as_one(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_1st_codon.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['1st']
cleaned_data['partition_by_positions'] = 'by gene'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_1st_codon_as_each(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_1st_codon.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['1st']
cleaned_data['partition_by_positions'] = 'by codon position'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_1st_codon_as_1st2nd_3rd(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_1st_codon.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['1st']
cleaned_data['partition_by_positions'] = '1st-2nd, 3rd'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_2nd_codon_as_one(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_2nd_codon.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['2nd']
cleaned_data['partition_by_positions'] = 'by gene'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_2nd_codon_as_each(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_2nd_codon.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['2nd']
cleaned_data['partition_by_positions'] = 'by codon position'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_2nd_codon_as_1st2nd_3rd(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_2nd_codon.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['2nd']
cleaned_data['partition_by_positions'] = '1st-2nd, 3rd'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_3rd_codon_as_one(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_3rd_codon.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['3rd']
cleaned_data['partition_by_positions'] = 'by gene'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_3rd_codon_as_each(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_3rd_codon.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['3rd']
cleaned_data['partition_by_positions'] = 'by codon position'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_3rd_codon_as_1st2nd_3rd(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_3rd_codon.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['3rd']
cleaned_data['partition_by_positions'] = '1st-2nd, 3rd'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_1st_2nd_codon_as_one(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_1st2nd_codons.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['1st', '2nd']
cleaned_data['partition_by_positions'] = 'by gene'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_1st_2nd_codon_as_each(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_1st2nd_codons_partitioned_as_each.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['1st', '2nd']
cleaned_data['partition_by_positions'] = 'by codon position'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_1st_2nd_codon_as_1st2nd_3rd(self):
dataset_file = os.path.join(self.base_dir,
'create_nexus_dataset', 'dataset_1st2nd_codons_partitioned_as_1st2nd_3rd.nex')
with open(dataset_file, 'r') as handle:
expected = handle.read()
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['1st', '2nd']
cleaned_data['partition_by_positions'] = '1st-2nd, 3rd'
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual(expected.strip(), result)
def test_nexus_1st_3rd_codon_as_one(self):
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['1st', '3rd']
cleaned_data['partition_by_positions'] = 'by gene'
dataset_creator = CreateDataset(cleaned_data)
expected = 'Cannot create dataset for only codon positions 1st and 3rd.'
result = dataset_creator.errors
self.assertTrue(expected in ''.join([str(i) for i in result]))
def test_nexus_1st_3rd_codon_as_each(self):
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['1st', '3rd']
cleaned_data['partition_by_positions'] = 'by codon position'
dataset_creator = CreateDataset(cleaned_data)
expected = 'Cannot create dataset for only codon positions 1st and 3rd.'
result = dataset_creator.errors
self.assertTrue(expected in ''.join([str(i) for i in result]))
def test_nexus_1st_3rd_codon_as_1st2nd_3rd(self):
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['1st', '3rd']
cleaned_data['partition_by_positions'] = '1st-2nd, 3rd'
dataset_creator = CreateDataset(cleaned_data)
expected = 'Cannot create dataset for only codon positions 1st and 3rd.'
result = dataset_creator.errors
self.assertTrue(expected in ''.join([str(i) for i in result]))
def test_nexus_2nd_3rd_codon_as_one(self):
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['2nd', '3rd']
cleaned_data['partition_by_positions'] = 'by gene'
dataset_creator = CreateDataset(cleaned_data)
expected = 'Cannot create dataset for only codon positions 2nd and 3rd.'
result = dataset_creator.errors
self.assertTrue(expected in ''.join([str(i) for i in result]))
def test_nexus_2nd_3rd_codon_as_each(self):
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['2nd', '3rd']
cleaned_data['partition_by_positions'] = 'by codon position'
dataset_creator = CreateDataset(cleaned_data)
expected = 'Cannot create dataset for only codon positions 2nd and 3rd.'
result = dataset_creator.errors
self.assertTrue(expected in ''.join([str(i) for i in result]))
def test_nexus_2nd_3rd_codon_as_1st2nd_3rd(self):
cleaned_data = self.cleaned_data.copy()
cleaned_data['positions'] = ['2nd', '3rd']
cleaned_data['partition_by_positions'] = '1st-2nd, 3rd'
dataset_creator = CreateDataset(cleaned_data)
expected = 'Cannot create dataset for only codon positions 2nd and 3rd.'
result = dataset_creator.errors
self.assertTrue(expected in ''.join([str(i) for i in result]))
def test_nexus_with_outgroup(self):
cleaned_data = self.cleaned_data
cleaned_data['outgroup'] = 'CP100-11'
cleaned_data['geneset'] = GeneSets.objects.get(geneset_name='all_genes')
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
expected = "outgroup CP100_11_Aus_bus;"
self.assertTrue(expected in result)
def test_nexus_gene_no_reading_frame(self):
# For this test we will set the reading frame of ArgKin to None
argkin = Genes.objects.get(gene_code='ArgKin')
argkin.reading_frame = None
argkin.save()
cleaned_data = self.cleaned_data
cleaned_data['positions'] = ['1st', '2nd']
cleaned_data['outgroup'] = ''
cleaned_data['geneset'] = GeneSets.objects.get(geneset_name='all_genes')
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
self.assertEqual('', result)
self.assertEqual('reading_frame attribute for gene ArgKin should be either '
'1, 2 or 3.', str(dataset_creator.errors[0]))
"""
def test_nexus_gene_excluding_taxa(self):
# Voucher CP100-11 should be dropped
cleaned_data = self.cleaned_data
cleaned_data['positions'] = ['1st', '2nd']
cleaned_data['outgroup'] = ''
cleaned_data['geneset'] = GeneSets.objects.get(geneset_name='all_genes')
cleaned_data['number_genes'] = 4
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
expected = ""
self.assertEqual(expected.strip(), result)
"""
def test_with_total_char_lengths_aminoacids(self):
cleaned_data = self.cleaned_data
cleaned_data['aminoacids'] = True
cleaned_data['outgroup'] = ''
cleaned_data['geneset'] = GeneSets.objects.get(geneset_name='all_genes')
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
expected = """
#NEXUS
BEGIN DATA;
DIMENSIONS NTAX=10 NCHAR=1575;
"""
self.assertTrue(expected.strip() in result)
def test_char_lengths_for_partitions_aminoacids(self):
cleaned_data = self.cleaned_data
cleaned_data['aminoacids'] = True
cleaned_data['outgroup'] = ''
cleaned_data['geneset'] = GeneSets.objects.get(geneset_name='all_genes')
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
expected = "charset ef1a = 689-1101"
self.assertTrue(expected in result)
def test_order_of_vouchers_is_kept_along_partitions(self):
cleaned_data = self.cleaned_data
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
expected = """
CP100_19_Aus_jus ??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
[ef1a]
"""
self.assertTrue(expected.strip() in result)
def test_try_dataset_degenerated_in_partitions(self):
cleaned_data = self.cleaned_data
cleaned_data['voucher_codes'] = 'CP100-10'
cleaned_data['degen_translations'] = 'normal'
cleaned_data['partition_by_positions'] = 'by gene'
cleaned_data['translations'] = True
dataset_creator = CreateDataset(cleaned_data)
result = dataset_creator.dataset_str
expected = "DIMENSIONS NTAX=10 NCHAR=4732;"
self.assertTrue(expected in result)
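# A minimal sketch, not part of the original suite: every fixture-file test
# above repeats the same read/compare boilerplate, and a hypothetical helper
# (the name below is an assumption, not VoSeq API) would reduce each case to
# a single call, e.g.
#     self.assert_dataset_matches_fixture('dataset_1st_codon.nex', ['1st'], 'by gene')
def assert_dataset_matches_fixture(self, fixture_name, positions, partition):
    # read the expected NEXUS output shipped next to the tests
    dataset_file = os.path.join(self.base_dir, 'create_nexus_dataset', fixture_name)
    with open(dataset_file, 'r') as handle:
        expected = handle.read()
    # build the dataset with the requested codon positions and partitioning
    cleaned_data = self.cleaned_data.copy()
    cleaned_data['positions'] = positions
    cleaned_data['partition_by_positions'] = partition
    dataset_creator = CreateDataset(cleaned_data)
    self.assertEqual(expected.strip(), dataset_creator.dataset_str)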
| 43.214106 | 861 | 0.620075 | 1,907 | 17,156 | 5.267436 | 0.084426 | 0.160976 | 0.061224 | 0.061324 | 0.866302 | 0.84888 | 0.84888 | 0.825087 | 0.808561 | 0.808561 | 0 | 0.014076 | 0.233912 | 17,156 | 396 | 862 | 43.323232 | 0.750209 | 0.003556 | 0 | 0.716129 | 0 | 0 | 0.217058 | 0.10201 | 0 | 0 | 0 | 0 | 0.090323 | 1 | 0.090323 | false | 0 | 0.029032 | 0 | 0.122581 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d9ea0f8d12ea862a4c712f8bbfdf9054d594a8a0 | 2,487 | py | Python | regexlib/2021-5-15/python_re2_test_file/regexlib_306.py | yetingli/ReDoS-Benchmarks | f5b5094d835649e957bf3fec6b8bd4f6efdb35fc | [
"MIT"
] | 1 | 2022-01-24T14:43:23.000Z | 2022-01-24T14:43:23.000Z | regexlib/python_re2_test_file/regexlib_306.py | yetingli/ReDoS-Benchmarks | f5b5094d835649e957bf3fec6b8bd4f6efdb35fc | [
"MIT"
] | null | null | null | regexlib/python_re2_test_file/regexlib_306.py | yetingli/ReDoS-Benchmarks | f5b5094d835649e957bf3fec6b8bd4f6efdb35fc | [
"MIT"
] | null | null | null | # 306
# (?<=(?:\n|:|&|\()\s*?)(Application\.Unlock|Application\.Lock|Application\.Contents\.RemoveAll|Application\.Contents\.Remove|Request\.BinaryRead|Request\.ClientCertificate|Request\.Cookies|Request\.Form|Request\.QueryString|Request\.ServerVariables|Request\.TotalBytes|Response\.AddHeader|Response\.AppendToLog|Response\.BinaryWrite|Response\.Clear|Response\.End|Response\.Flush|Response\.Redirect|Response\.Write|Response\.Buffer|Response\.CacheControl|Response\.Charset|Response\.CodePage|Response\.ContentType|Response\.Cookies|Response\.Expires|Response\.ExpiresAbsolute|Response\.IsClientConnected|Response\.LCID|Response\.PICS|Response\.Status|Server\.ScriptTimeout|Server\.CreateObject|Server\.Execute|Server\.GetLastError|Server\.HTMLEncode|Server\.MapPath|Server\.Transfer|Server\.URLEncode|Session\.Abandon|Session\.Contents\.Remove|Session\.Contents\.RemoveAll|Session\.CodePage|Session\.Contents|Session\.LCID|Session\.SessionID|Session\.StaticObjects|Session\.Timeout|Application|Session|Request)(?=\s|\.|\()
# POLYNOMIAL
# nums:5
# POLYNOMIAL AttackString:"<="+"\n"*10000+"!_1SLQ_2"
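# Note on the mechanism: the slowdown comes from the variable-length lookbehind
# `(?<=(?:\n|:|&|\()\s*?)`. A backtracking engine retries that lookbehind at
# every position inside the pumped run of "\n" characters, rescanning the
# whitespace prefix each time, so total work grows polynomially with the
# length of the run, which is what the POLYNOMIAL label above records.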
import re2 as re  # RE2 itself has no lookaround support; depending on the binding, this pattern may fail to compile or fall back to Python's built-in re engine
from time import perf_counter
regex = """(?<=(?:\n|:|&|\()\s*?)(Application\.Unlock|Application\.Lock|Application\.Contents\.RemoveAll|Application\.Contents\.Remove|Request\.BinaryRead|Request\.ClientCertificate|Request\.Cookies|Request\.Form|Request\.QueryString|Request\.ServerVariables|Request\.TotalBytes|Response\.AddHeader|Response\.AppendToLog|Response\.BinaryWrite|Response\.Clear|Response\.End|Response\.Flush|Response\.Redirect|Response\.Write|Response\.Buffer|Response\.CacheControl|Response\.Charset|Response\.CodePage|Response\.ContentType|Response\.Cookies|Response\.Expires|Response\.ExpiresAbsolute|Response\.IsClientConnected|Response\.LCID|Response\.PICS|Response\.Status|Server\.ScriptTimeout|Server\.CreateObject|Server\.Execute|Server\.GetLastError|Server\.HTMLEncode|Server\.MapPath|Server\.Transfer|Server\.URLEncode|Session\.Abandon|Session\.Contents\.Remove|Session\.Contents\.RemoveAll|Session\.CodePage|Session\.Contents|Session\.LCID|Session\.SessionID|Session\.StaticObjects|Session\.Timeout|Application|Session|Request)(?=\s|\.|\()"""
REGEX = re.compile(regex)
for i in range(0, 150000):  # the pumped "\n" run grows by 10000 characters per step
ATTACK = "<=" + "\n" * i * 10000 + "!_1SLQ_2"
LEN = len(ATTACK)
BEGIN = perf_counter()
m = REGEX.search(ATTACK)
# m = REGEX.match(ATTACK)
DURATION = perf_counter() - BEGIN
print(f"{i *10000}: took {DURATION} seconds!") | 130.894737 | 1,034 | 0.780056 | 273 | 2,487 | 7.080586 | 0.311355 | 0.04656 | 0.013451 | 0.019659 | 0.861873 | 0.861873 | 0.861873 | 0.861873 | 0.861873 | 0.861873 | 0 | 0.012949 | 0.037394 | 2,487 | 19 | 1,035 | 130.894737 | 0.794486 | 0.449136 | 0 | 0 | 0 | 0.090909 | 0.782991 | 0.747801 | 0.090909 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
8a39321f063d0708441e00ce9e33a3b65055c152 | 18,768 | py | Python | test/test_container.py | gagaronwarrior/Algorithms_with_python | 9dae6068fce0c844c3cc240b194900a574b93497 | [
"MIT"
] | null | null | null | test/test_container.py | gagaronwarrior/Algorithms_with_python | 9dae6068fce0c844c3cc240b194900a574b93497 | [
"MIT"
] | null | null | null | test/test_container.py | gagaronwarrior/Algorithms_with_python | 9dae6068fce0c844c3cc240b194900a574b93497 | [
"MIT"
] | null | null | null | import sys
import os
# add the parent of the current working directory to sys.path so that
# `lib.container` resolves (assumes the tests are run from inside test/)
sys.path.append('/'.join(os.getcwd().split('/')[:-1]))
from lib.container import *
class TestBag(object):
def test_bag_init(self):
bag = Bag()
assert bag.size() == 0
assert bag.is_empty() == True
assert len(bag) == 0
def test_bag_add(self):
bag = Bag()
bag.add(1)
assert bag.size() == 1
assert bag.is_empty() == False
assert len(bag) == 1
bag.add(2)
assert bag.size() == 2
assert bag.is_empty() == False
assert len(bag) == 2
bag.add(3)
assert bag.size() == 3
assert bag.is_empty() == False
assert len(bag) == 3
bag.add(4)
assert bag.size() == 4
assert bag.is_empty() == False
assert len(bag) == 4
bag.add(5)
assert bag.size() == 5
assert bag.is_empty() == False
assert len(bag) == 5
def test_bag_str(self):
bag = Bag()
bag.add(1)
bag.add(2)
bag.add(3)
bag.add(4)
bag.add(5)
assert str(bag) == "(5, 4, 3, 2, 1)"
def test_bag_iter(self):
bag = Bag()
bag.add(1)
bag.add(2)
bag.add(3)
bag.add(4)
bag.add(5)
count = 5
for item in bag:
assert item == count
count -= 1
def test_bag_repr(self):
bag = Bag()
bag.add(1)
bag.add(2)
bag.add(3)
bag.add(4)
bag.add(5)
assert repr(bag) == "BAG: (5, 4, 3, 2, 1)"
class TestQueue(object):
def test_queue_init(self):
q = Queue()
assert q.size() == 0
assert len(q) == 0
assert q.is_empty() == True
def test_queue_enqueue(self):
q = Queue()
q.enqueue(1)
assert q.size() == 1
assert q.is_empty() == False
q.enqueue(2)
assert q.size() == 2
assert q.is_empty() == False
q.enqueue(3)
assert q.size() == 3
assert q.is_empty() == False
q.enqueue(4)
assert q.size() == 4
assert q.is_empty() == False
q.enqueue(5)
assert q.size() == 5
assert q.is_empty() == False
def test_queue_dequeue(self):
q = Queue()
q.enqueue(1)
q.enqueue(3)
q.enqueue(5)
q.enqueue(7)
q.enqueue(9)
item = q.dequeue()
assert q.size() == 4
assert q.is_empty() == False
assert item == 1
item = q.dequeue()
assert q.size() == 3
assert q.is_empty() == False
assert item == 3
item = q.dequeue()
assert q.size() == 2
assert q.is_empty() == False
assert item == 5
item = q.dequeue()
assert q.size() == 1
assert q.is_empty() == False
assert item == 7
item = q.dequeue()
assert q.size() == 0
assert q.is_empty() == True
assert item == 9
try:
    item = q.dequeue()
except Exception:
    assert q.size() == 0
    assert q.is_empty() == True
else:
    # fail loudly if dequeue() on an empty queue does not raise
    assert False, "dequeue() on an empty queue should raise"
def test_queue_iter(self):
q = Queue()
q.enqueue(1)
q.enqueue(3)
q.enqueue(5)
q.enqueue(7)
q.enqueue(9)
q.dequeue()
count = 3
for item in q:
assert item == count
count += 2
def test_queue_str(self):
q = Queue()
q.enqueue(1)
q.enqueue(3)
q.enqueue(5)
q.enqueue(7)
q.enqueue(9)
q.dequeue()
assert len(q) == 4
assert str(q) == "(3, 5, 7, 9)"
def test_queue_repr(self):
q = Queue()
q.enqueue(1)
q.enqueue(3)
q.enqueue(5)
q.enqueue(7)
q.enqueue(9)
q.dequeue()
assert repr(q) == "QUEUE: (3, 5, 7, 9)"
class TestStack(object):
def test_stack_init(self):
q = Stack()
assert q.size() == 0
assert len(q) == 0
assert q.is_empty() == True
def test_stack_push(self):
q = Stack()
q.push(1)
assert q.size() == 1
assert q.is_empty() == False
q.push(2)
assert q.size() == 2
assert q.is_empty() == False
q.push(3)
assert q.size() == 3
assert q.is_empty() == False
q.push(4)
assert q.size() == 4
assert q.is_empty() == False
q.push(5)
assert q.size() == 5
assert q.is_empty() == False
def test_stack_pop(self):
q = Stack()
q.push(1)
q.push(3)
q.push(5)
q.push(7)
q.push(9)
item = q.pop()
assert q.size() == 4
assert q.is_empty() == False
assert item == 9
item = q.pop()
assert q.size() == 3
assert q.is_empty() == False
assert item == 7
item = q.pop()
assert q.size() == 2
assert q.is_empty() == False
assert item == 5
item = q.pop()
assert q.size() == 1
assert q.is_empty() == False
assert item == 3
item = q.pop()
assert q.size() == 0
assert q.is_empty() == True
assert item == 1
try:
    item = q.pop()
except Exception:
    assert q.size() == 0
    assert q.is_empty() == True
else:
    assert False, "pop() on an empty stack should raise"
def test_stack_iter(self):
q = Stack()
q.push(1)
q.push(3)
q.push(5)
q.push(7)
q.push(9)
q.pop()
count = 7
for item in q:
assert item == count
count -= 2
def test_stack_str(self):
q = Stack()
q.push(1)
q.push(3)
q.push(5)
q.push(7)
q.push(9)
q.pop()
assert len(q) == 4
assert str(q) == "(7, 5, 3, 1)"
def test_stack_repr(self):
q = Stack()
q.push(1)
q.push(3)
q.push(5)
q.push(7)
q.push(9)
q.pop()
assert repr(q) == "STACK: (7, 5, 3, 1)"
class TestMaxPQ(object):
def test_init(self):
pq = MaxPQ(20)
assert pq.max() == None
assert pq.is_empty() == True
assert pq.size() == 0
assert len(pq) == 0
try:
    pq.del_max()
except Exception:
    assert pq.size() == 0
    assert pq.is_empty() == True
    assert len(pq) == 0
else:
    assert False, "del_max() on an empty MaxPQ should raise"
def test_insert_delMax(self):
pq = MaxPQ(20)
pq.insert('P')
assert pq.max() == 'P'
assert pq.is_empty() == False
assert pq.size() == 1
assert len(pq) == 1
assert str(pq) == "['P']"
assert repr(pq) == "MaxPQ: ['P']"
pq.insert('Q')
assert pq.max() == 'Q'
assert pq.is_empty() == False
assert pq.size() == 2
assert len(pq) == 2
assert str(pq) == "['Q', 'P']"
assert repr(pq) == "MaxPQ: ['Q', 'P']"
pq.insert('E')
assert pq.max() == 'Q'
assert pq.is_empty() == False
assert pq.size() == 3
assert len(pq) == 3
assert str(pq) == "['Q', 'P', 'E']"
assert repr(pq) == "MaxPQ: ['Q', 'P', 'E']"
del_item = pq.del_max()
assert del_item == 'Q'
assert pq.max() == 'P'
assert pq.is_empty() == False
assert pq.size() == 2
assert len(pq) == 2
assert str(pq) == "['P', 'E']"
assert repr(pq) == "MaxPQ: ['P', 'E']"
pq.insert('X')
assert pq.max() == 'X'
assert pq.is_empty() == False
assert pq.size() == 3
assert len(pq) == 3
assert str(pq) == "['X', 'E', 'P']"
assert repr(pq) == "MaxPQ: ['X', 'E', 'P']"
pq.insert('A')
assert pq.max() == 'X'
assert pq.is_empty() == False
assert pq.size() == 4
assert len(pq) == 4
assert str(pq) == "['X', 'E', 'P', 'A']"
assert repr(pq) == "MaxPQ: ['X', 'E', 'P', 'A']"
pq.insert('M')
assert pq.max() == 'X'
assert pq.is_empty() == False
assert pq.size() == 5
assert len(pq) == 5
assert str(pq) == "['X', 'M', 'P', 'A', 'E']"
assert repr(pq) == "MaxPQ: ['X', 'M', 'P', 'A', 'E']"
del_item = pq.del_max()
assert del_item == 'X'
assert pq.max() == 'P'
assert pq.is_empty() == False
assert pq.size() == 4
assert len(pq) == 4
assert str(pq) == "['P', 'M', 'E', 'A']"
assert repr(pq) == "MaxPQ: ['P', 'M', 'E', 'A']"
pq.insert('P')
assert pq.max() == 'P'
assert pq.is_empty() == False
assert pq.size() == 5
assert len(pq) == 5
assert str(pq) == "['P', 'P', 'E', 'A', 'M']"
assert repr(pq) == "MaxPQ: ['P', 'P', 'E', 'A', 'M']"
pq.insert('L')
assert pq.max() == 'P'
assert pq.is_empty() == False
assert pq.size() == 6
assert len(pq) == 6
assert str(pq) == "['P', 'P', 'L', 'A', 'M', 'E']"
assert repr(pq) == "MaxPQ: ['P', 'P', 'L', 'A', 'M', 'E']"
pq.insert('E')
assert pq.max() == 'P'
assert pq.is_empty() == False
assert pq.size() == 7
assert len(pq) == 7
assert str(pq) == "['P', 'P', 'L', 'A', 'M', 'E', 'E']"
assert repr(pq) == "MaxPQ: ['P', 'P', 'L', 'A', 'M', 'E', 'E']"
del_item = pq.del_max()
assert del_item == 'P'
assert pq.max() == 'P'
assert pq.is_empty() == False
assert pq.size() == 6
assert len(pq) == 6
assert str(pq) == "['P', 'M', 'L', 'A', 'E', 'E']"
assert repr(pq) == "MaxPQ: ['P', 'M', 'L', 'A', 'E', 'E']"
class TestMinPQ(object):
def test_init(self):
pq = MinPQ(20)
assert pq.min() == None
assert pq.is_empty() == True
assert pq.size() == 0
assert len(pq) == 0
try:
    pq.del_min()
except Exception:
    assert pq.size() == 0
    assert pq.is_empty() == True
    assert len(pq) == 0
else:
    assert False, "del_min() on an empty MinPQ should raise"
def test_insert_delMin(self):
pq = MinPQ(20)
pq.insert('P')
assert pq.min() == 'P'
assert pq.is_empty() == False
assert pq.size() == 1
assert len(pq) == 1
assert str(pq) == "['P']"
assert repr(pq) == "MinPQ: ['P']"
pq.insert('Q')
assert pq.min() == 'P'
assert pq.is_empty() == False
assert pq.size() == 2
assert len(pq) == 2
assert str(pq) == "['P', 'Q']"
assert repr(pq) == "MinPQ: ['P', 'Q']"
pq.insert('E')
assert pq.min() == 'E'
assert pq.is_empty() == False
assert pq.size() == 3
assert len(pq) == 3
assert str(pq) == "['E', 'Q', 'P']"
assert repr(pq) == "MinPQ: ['E', 'Q', 'P']"
del_item = pq.del_min()
assert del_item == 'E'
assert pq.min() == 'P'
assert pq.is_empty() == False
assert pq.size() == 2
assert len(pq) == 2
assert str(pq) == "['P', 'Q']"
assert repr(pq) == "MinPQ: ['P', 'Q']"
pq.insert('X')
assert pq.min() == 'P'
assert pq.is_empty() == False
assert pq.size() == 3
assert len(pq) == 3
assert str(pq) == "['P', 'Q', 'X']"
assert repr(pq) == "MinPQ: ['P', 'Q', 'X']"
pq.insert('A')
assert pq.min() == 'A'
assert pq.is_empty() == False
assert pq.size() == 4
assert len(pq) == 4
assert str(pq) == "['A', 'P', 'X', 'Q']"
assert repr(pq) == "MinPQ: ['A', 'P', 'X', 'Q']"
pq.insert('M')
assert pq.min() == 'A'
assert pq.is_empty() == False
assert pq.size() == 5
assert len(pq) == 5
assert str(pq) == "['A', 'M', 'X', 'Q', 'P']"
assert repr(pq) == "MinPQ: ['A', 'M', 'X', 'Q', 'P']"
del_item = pq.del_min()
assert del_item == 'A'
assert pq.min() == 'M'
assert pq.is_empty() == False
assert pq.size() == 4
assert len(pq) == 4
assert str(pq) == "['M', 'P', 'X', 'Q']"
assert repr(pq) == "MinPQ: ['M', 'P', 'X', 'Q']"
pq.insert('P')
assert pq.min() == 'M'
assert pq.is_empty() == False
assert pq.size() == 5
assert len(pq) == 5
assert str(pq) == "['M', 'P', 'X', 'Q', 'P']"
assert repr(pq) == "MinPQ: ['M', 'P', 'X', 'Q', 'P']"
pq.insert('L')
assert pq.min() == 'L'
assert pq.is_empty() == False
assert pq.size() == 6
assert len(pq) == 6
assert str(pq) == "['L', 'P', 'M', 'Q', 'P', 'X']"
assert repr(pq) == "MinPQ: ['L', 'P', 'M', 'Q', 'P', 'X']"
pq.insert('E')
assert pq.min() == 'E'
assert pq.is_empty() == False
assert pq.size() == 7
assert len(pq) == 7
assert str(pq) == "['E', 'P', 'L', 'Q', 'P', 'X', 'M']"
assert repr(pq) == "MinPQ: ['E', 'P', 'L', 'Q', 'P', 'X', 'M']"
del_item = pq.del_min()
assert del_item == 'E'
assert pq.min() == 'L'
assert pq.is_empty() == False
assert pq.size() == 6
assert len(pq) == 6
assert str(pq) == "['L', 'P', 'M', 'Q', 'P', 'X']"
assert repr(pq) == "MinPQ: ['L', 'P', 'M', 'Q', 'P', 'X']"
class TestSteque(object):
def test_init(self):
stq = Steque()
assert stq.size() == 0
assert len(stq) == 0
assert stq.is_empty() == True
assert str(stq) == "()"
assert repr(stq) == "STEQUE: ()"
def test_push_pop_enqueue(self):
stq = Steque()
stq.enqueue(1)
assert stq.size() == 1
assert len(stq) == 1
assert stq.is_empty() == False
assert str(stq) == "(1)"
assert repr(stq) == "STEQUE: (1)"
item = stq.pop()
assert item == 1
assert stq.size() == 0
assert len(stq) == 0
assert stq.is_empty() == True
assert str(stq) == "()"
assert repr(stq) == "STEQUE: ()"
stq.push(1)
assert stq.size() == 1
assert len(stq) == 1
assert stq.is_empty() == False
assert str(stq) == "(1)"
assert repr(stq) == "STEQUE: (1)"
item = stq.pop()
assert item == 1
assert stq.size() == 0
assert len(stq) == 0
assert stq.is_empty() == True
assert str(stq) == "()"
assert repr(stq) == "STEQUE: ()"
try:
    item = stq.pop()
except Exception:
    assert stq.size() == 0
    assert stq.is_empty() == True
else:
    assert False, "pop() on an empty steque should raise"
stq.push(1)
stq.push(2)
stq.push(3)
stq.push(4)
stq.push(5)
assert stq.size() == 5
assert len(stq) == 5
assert stq.is_empty() == False
assert str(stq) == "(5, 4, 3, 2, 1)"
assert repr(stq) == "STEQUE: (5, 4, 3, 2, 1)"
item = stq.pop()
assert item == 5
item = stq.pop()
assert item == 4
item = stq.pop()
assert item == 3
item = stq.pop()
assert item == 2
item = stq.pop()
assert item == 1
assert stq.size() == 0
assert len(stq) == 0
assert stq.is_empty() == True
assert str(stq) == "()"
assert repr(stq) == "STEQUE: ()"
stq.push(1)
stq.push(2)
stq.push(3)
stq.push(4)
stq.push(5)
assert stq.size() == 5
assert len(stq) == 5
assert stq.is_empty() == False
assert str(stq) == "(5, 4, 3, 2, 1)"
assert repr(stq) == "STEQUE: (5, 4, 3, 2, 1)"
stq.enqueue(6)
stq.enqueue(7)
stq.enqueue(8)
stq.enqueue(9)
stq.enqueue(10)
assert stq.size() == 10
assert len(stq) == 10
assert stq.is_empty() == False
assert str(stq) == "(5, 4, 3, 2, 1, 6, 7, 8, 9, 10)"
assert repr(stq) == "STEQUE: (5, 4, 3, 2, 1, 6, 7, 8, 9, 10)"
class TestDeque(object):
def test_deque_init(self):
dq = Deque()
assert dq.size() == 0
assert len(dq) == 0
assert dq.is_empty() == True
assert str(dq) == "()"
assert repr(dq) == "DEQUE: ()"
def test_deque_push_pop(self):
dq = Deque()
dq.push_left(1)
assert dq.size() == 1
assert dq.is_empty() == False
assert len(dq) == 1
assert str(dq) == "(1)"
assert repr(dq) == "DEQUE: (1)"
item = dq.pop_left()
assert item == 1
assert dq.size() == 0
assert len(dq) == 0
assert dq.is_empty() == True
assert str(dq) == "()"
assert repr(dq) == "DEQUE: ()"
dq.push_right(1)
assert dq.size() == 1
assert dq.is_empty() == False
assert len(dq) == 1
assert str(dq) == "(1)"
assert repr(dq) == "DEQUE: (1)"
item = dq.pop_right()
assert item == 1
assert dq.size() == 0
assert len(dq) == 0
assert dq.is_empty() == True
assert str(dq) == "()"
assert repr(dq) == "DEQUE: ()"
dq.push_right(1)
assert dq.size() == 1
assert dq.is_empty() == False
assert len(dq) == 1
assert str(dq) == "(1)"
assert repr(dq) == "DEQUE: (1)"
item = dq.pop_left()
assert item == 1
assert dq.size() == 0
assert len(dq) == 0
assert dq.is_empty() == True
assert str(dq) == "()"
assert repr(dq) == "DEQUE: ()"
dq.push_left(1)
assert dq.size() == 1
assert dq.is_empty() == False
assert len(dq) == 1
assert str(dq) == "(1)"
assert repr(dq) == "DEQUE: (1)"
item = dq.pop_right()
assert item == 1
assert dq.size() == 0
assert len(dq) == 0
assert dq.is_empty() == True
assert str(dq) == "()"
assert repr(dq) == "DEQUE: ()"
dq.push_right(1)
dq.push_left(2)
dq.push_right(3)
dq.push_left(4)
dq.push_right(5)
dq.push_left(6)
dq.pop_left()
dq.pop_right()
dq.pop_left()
dq.pop_right()
dq.pop_left()
dq.pop_right()
assert dq.size() == 0
assert len(dq) == 0
assert dq.is_empty() == True
assert str(dq) == "()"
assert repr(dq) == "DEQUE: ()"
dq.push_right(1)
dq.push_left(2)
dq.push_right(3)
dq.push_left(4)
dq.push_right(5)
dq.push_left(6)
assert dq.size() == 6
assert len(dq) == 6
assert dq.is_empty() == False
assert str(dq) == "(6, 4, 2, 1, 3, 5)"
assert repr(dq) == "DEQUE: (6, 4, 2, 1, 3, 5)"
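# A minimal sketch, not part of the original suite: the "operation on an
# empty container must raise" checks above repeat for Queue, Stack, Steque,
# MaxPQ and MinPQ. Assuming pytest is the runner (these Test* classes use
# bare asserts and do not subclass unittest.TestCase), the pattern could be
# parametrized like this:
import pytest

@pytest.mark.parametrize("factory, popper", [
    (Queue, "dequeue"),
    (Stack, "pop"),
    (Steque, "pop"),
    (lambda: MaxPQ(20), "del_max"),
    (lambda: MinPQ(20), "del_min"),
])
def test_empty_container_raises(factory, popper):
    container = factory()
    # the removal operation must raise on an empty container...
    with pytest.raises(Exception):
        getattr(container, popper)()
    # ...and must leave the container in its empty state
    assert container.is_empty() == True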
| 25.569482 | 71 | 0.439525 | 2,529 | 18,768 | 3.192962 | 0.033215 | 0.081238 | 0.084706 | 0.104768 | 0.876037 | 0.841734 | 0.785511 | 0.759009 | 0.733375 | 0.716533 | 0 | 0.031734 | 0.377078 | 18,768 | 733 | 72 | 25.604366 | 0.658968 | 0 | 0 | 0.795417 | 0 | 0.00982 | 0.088408 | 0 | 0 | 0 | 0 | 0 | 0.599018 | 1 | 0.040917 | false | 0 | 0.00491 | 0 | 0.057283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8a5b0fea5a86bd05ae9e7b5fa3f6be4d47ef8d4f | 52,850 | py | Python | tests.py | jktjkt/replxx | 12f2adee7f9123880db1f870c360a88c1f7ba182 | [
"Apache-2.0"
] | 1 | 2019-06-11T06:49:15.000Z | 2019-06-11T06:49:15.000Z | tests.py | jktjkt/replxx | 12f2adee7f9123880db1f870c360a88c1f7ba182 | [
"Apache-2.0"
] | null | null | null | tests.py | jktjkt/replxx | 12f2adee7f9123880db1f870c360a88c1f7ba182 | [
"Apache-2.0"
] | 1 | 2019-06-10T16:48:55.000Z | 2019-06-10T16:48:55.000Z | #! /usr/bin/python3
import pexpect
import unittest
import re
import os
import subprocess
import signal
import time
keytab = {
"<home>": "\033[1~",
"<end>": "\033[4~",
"<ins>": "\033[2~",
"<del>": "\033[3~",
"<pgup>": "\033[5~",
"<pgdown>": "\033[6~",
"<backspace>": "",
"<tab>": "\t",
"<cr>": "\r",
"<lf>": "\n",
"<left>": "\033[D",
"<aleft>": "\033OD",
"<right>": "\033[C",
"<aright>": "\033OC",
"<up>": "\033[A",
"<aup>": "\033OA",
"<down>": "\033[B",
"<adown>": "\033OB",
"<c-left>": "\033[1;5D",
"<c-right>": "\033[1;5C",
"<c-up>": "\033[1;5A",
"<c-down>": "\033[1;5B",
"<m-left>": "\033[1;3D",
"<m-right>": "\033[1;3C",
"<m-up>": "\033[1;3A",
"<m-down>": "\033[1;3B",
"<c-a>": "",
"<c-b>": "",
"<c-c>": "",
"<c-d>": "",
"<c-e>": "",
"<c-f>": "",
"<c-k>": "",
"<c-l>": "",
"<c-n>": "",
"<c-p>": "",
"<c-r>": "",
"<c-s>": "",
"<c-t>": "",
"<c-u>": "",
"<c-v>": "",
"<c-w>": "",
"<c-y>": "",
"<c-z>": "",
"<m-b>": "\033b",
"<m-c>": "\033c",
"<m-d>": "\033d",
"<m-f>": "\033f",
"<m-l>": "\033l",
"<m-n>": "\033n",
"<m-p>": "\033p",
"<m-u>": "\033u",
"<m-y>": "\033y",
"<m-backspace>": "\033\177",
"<f1>": "\033OP",
"<f2>": "\033OQ"
}
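# keytab above maps the symbolic <key> names used in test scenarios to the
# raw bytes a terminal sends for that key; termseq below is the inverse for
# output, naming the ANSI escape sequences the tested binary emits so that
# expected transcripts stay human-readable.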
termseq = {
"\x1bc": "<RIS>",
"\x1b[0m": "<rst>",
"\x1b[H": "<mvhm>",
"\x1b[2J": "<clr>",
"\x1b[J": "<ceos>",
"\x1b[0;22;30m": "<black>",
"\x1b[0;22;31m": "<red>",
"\x1b[0;22;32m": "<green>",
"\x1b[0;22;33m": "<brown>",
"\x1b[0;22;34m": "<blue>",
"\x1b[0;22;35m": "<magenta>",
"\x1b[0;22;36m": "<cyan>",
"\x1b[0;22;37m": "<lightgray>",
"\x1b[0;1;30m": "<gray>",
"\x1b[0;1;31m": "<brightred>",
"\x1b[0;1;32m": "<brightgreen>",
"\x1b[0;1;33m": "<yellow>",
"\x1b[0;1;34m": "<brightblue>",
"\x1b[0;1;35m": "<brightmagenta>",
"\x1b[0;1;36m": "<brightcyan>",
"\x1b[0;1;37m": "<white>",
"\x1b[1;32m": "<brightgreen>",
"\x1b[101;1;33m": "<err>",
"\x07": "<bell>"
}
colRe = re.compile( "\\x1b\\[(\\d+)G" )
upRe = re.compile( "\\x1b\\[(\\d+)A" )
def sym_to_raw( str_ ):
for sym, seq in keytab.items():
str_ = str_.replace( sym, seq )
return str_
def seq_to_sym( str_ ):
for seq, sym in termseq.items():
str_ = str_.replace( seq, sym )
str_ = colRe.sub( "<c\\1>", str_ )
str_ = upRe.sub( "<u\\1>", str_ )
return str_
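# For example (given the tables above):
#   sym_to_raw( "<up><cr>" )        -> "\033[A\r"
#   seq_to_sym( "\x1b[0m\x1b[9G" )  -> "<rst><c9>"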
_words_ = [
"ada", "algol"
"bash", "basic",
"clojure", "cobol", "csharp",
"eiffel", "erlang",
"forth", "fortran", "fsharp",
"go", "groovy",
"haskell", "huginn",
"java", "javascript", "julia",
"kotlin",
"lisp", "lua",
"modula",
"nemerle",
"ocaml",
"perl", "php", "prolog", "python",
"rebol", "ruby", "rust",
"scala", "scheme", "sql", "swift",
"typescript"
]
def skip( test_ ):
return "SKIP" in os.environ and os.environ["SKIP"].find( test_ ) >= 0
verbosity = None
class ReplxxTests( unittest.TestCase ):
_prompt_ = "\033\\[1;32mreplxx\033\\[0m> "
_cxxSample_ = "./build/example-cxx-api"
_cSample_ = "./build/example-c-api"
_end_ = "\r\nExiting Replxx\r\n"
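# check_scenario drives one full REPL session: it seeds replxx_history.txt,
# spawns the sample binary in a pty with the requested TERM and window size,
# replays seq_ (with <...> symbols expanded to raw bytes), and asserts that
# the captured output, translated back to symbolic form, equals expected_.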
def check_scenario(
self_, seq_, expected_,
history = "one\ntwo\nthree\n",
term = "xterm",
command = _cxxSample_,
dimensions = ( 25, 80 ),
prompt = _prompt_,
end = _prompt_ + _end_,
encoding = "utf-8"
):
with open( "replxx_history.txt", "wb" ) as f:
f.write( history.encode( encoding ) )
f.close()
os.environ["TERM"] = term
command = command.replace( "\n", "~" )
if verbosity >= 2:
print( "\nTERM: {}, SIZE: {}, CMD: {}".format( term, dimensions, command ) )
prompt = prompt.replace( "\n", "\r\n" ).replace( "\r\r", "\r" )
end = end.replace( "\n", "\r\n" ).replace( "\r\r", "\r" )
self_._replxx = pexpect.spawn( command, maxread = 1, encoding = encoding, dimensions = dimensions )
self_._replxx.expect( prompt )
self_.maxDiff = None
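# each <c-z> in seq_ suspends the child process; after a short pause it is
# resumed with SIGCONT so the stop/continue redraw path gets exercised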
seqs = seq_.split( "<c-z>" )
for seq in seqs:
last = seq is seqs[-1]
if not last:
seq += "<c-z>"
self_._replxx.send( sym_to_raw( seq ) )
if not last:
time.sleep( 0.25 )
self_._replxx.kill( signal.SIGCONT )
self_._replxx.expect( end )
self_.assertSequenceEqual( seq_to_sym( self_._replxx.before ), expected_ )
def test_unicode( self_ ):
self_.check_scenario(
"<up><cr><c-d>",
"<c9><ceos>aóą Ϩ <rst><gray><rst><c21>"
"<c9><ceos>aóą Ϩ <rst><c21>\r\n"
"aóą Ϩ \r\n",
"aóą Ϩ \n"
)
self_.check_scenario(
"aóą Ϩ <cr><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>aó<rst><gray><rst><c11><c9><ceos>aóą<rst><gray><rst><c12><c9><ceos>aóą "
"<rst><gray><rst><c13><c9><ceos>aóą Ϩ<rst><gray><rst><c14><c9><ceos>aóą Ϩ "
"<rst><gray><rst><c15><c9><ceos>aóą Ϩ <rst><gray><rst><c16><c9><ceos>aóą Ϩ "
"<rst><gray><rst><c17><c9><ceos>aóą Ϩ "
"<rst><gray><rst><c18><c9><ceos>aóą Ϩ <rst><gray><rst><c19><c9><ceos>aóą Ϩ "
"<rst><gray><rst><c20><c9><ceos>aóą Ϩ "
"<rst><gray><rst><c21><c9><ceos>aóą Ϩ <rst><c21>\r\n"
"aóą Ϩ \r\n"
)
@unittest.skipIf( skip( "8bit_encoding" ), "broken platform" )
def test_8bit_encoding( self_ ):
LC_CTYPE = "LC_CTYPE"
exists = LC_CTYPE in os.environ
lcCtype = None
if exists:
lcCtype = os.environ[LC_CTYPE]
os.environ[LC_CTYPE] = "pl_PL.ISO-8859-2"
self_.check_scenario(
"<aup><cr><c-d>",
"<c9><ceos>text ~ó~<rst><gray><rst><c17><c9><ceos>text ~ó~<rst><c17>\r\ntext ~ó~\r\n",
"text ~ó~\n",
encoding = "iso-8859-2"
)
if exists:
os.environ[LC_CTYPE] = lcCtype
else:
del os.environ[LC_CTYPE]
def test_bad_term( self_ ):
self_.check_scenario(
"a line of text<cr><c-d>",
"a line of text\r\na line of text\r\n",
term = "dumb"
)
def test_ctrl_c( self_ ):
self_.check_scenario(
"abc<c-c><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>ab<rst><gray><rst><c11><c9><ceos>abc<rst><gray><rst><c12><c9><ceos>abc<rst><c12>^C\r"
"\r\n"
)
def test_ctrl_z( self_ ):
self_.check_scenario(
"<up><c-z><cr><c-d>",
"<c9><ceos>three<rst><gray><rst><c14><brightgreen>replxx<rst>> "
"<c9><ceos>three<rst><gray><rst><c14><c9><ceos>three<rst><c14>\r\n"
"three\r\n"
)
self_.check_scenario(
"<c-r>w<c-z><cr><c-d>",
"<c9><ceos><rst><gray><rst><c9><c1><ceos>(reverse-i-search)`': "
"<c23><c1><ceos>(reverse-i-search)`w': "
"two<c25><c1><ceos>(reverse-i-search)`w': "
"two<c25><c1><ceos><brightgreen>replxx<rst>> "
"two<c10><c9><ceos>two<rst><c12>\r\n"
"two\r\n"
)
def test_ctrl_l( self_ ):
self_.check_scenario(
"<cr><cr><cr><c-l><c-d>",
"<c9><ceos><rst><c9>\r\n"
"<brightgreen>replxx<rst>> <c9><ceos><rst><c9>\r\n"
"<brightgreen>replxx<rst>> <c9><ceos><rst><c9>\r\n"
"<brightgreen>replxx<rst>> <RIS><mvhm><clr><rst><brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9>",
end = "\r\nExiting Replxx\r\n"
)
self_.check_scenario(
"<cr><up><c-left><c-l><cr><c-d>",
"<c9><ceos><rst><c9>\r\n"
"<brightgreen>replxx<rst>> <c9><ceos>first "
"second<rst><gray><rst><c21><c9><ceos>first "
"second<rst><c15><RIS><mvhm><clr><rst><brightgreen>replxx<rst>> "
"<c9><ceos>first second<rst><c15><c9><ceos>first second<rst><c21>\r\n"
"first second\r\n",
"first second\n"
)
def test_backspace( self_ ):
self_.check_scenario(
"<up><c-a><m-f><c-right><backspace><backspace><backspace><backspace><cr><c-d>",
"<c9><ceos>one two three<rst><gray><rst><c22><c9><ceos>one two "
"three<rst><c9><c9><ceos>one two three<rst><c12><c9><ceos>one two "
"three<rst><c16><c9><ceos>one tw three<rst><c15><c9><ceos>one t "
"three<rst><c14><c9><ceos>one three<rst><c13><c9><ceos>one "
"three<rst><c12><c9><ceos>one three<rst><c18>\r\n"
"one three\r\n",
"one two three\n"
)
def test_delete( self_ ):
self_.check_scenario(
"<up><m-b><c-left><del><c-d><del><c-d><cr><c-d>",
"<c9><ceos>one two three<rst><gray><rst><c22><c9><ceos>one two "
"three<rst><c17><c9><ceos>one two three<rst><c13><c9><ceos>one wo "
"three<rst><c13><c9><ceos>one o three<rst><c13><c9><ceos>one "
"three<rst><c13><c9><ceos>one three<rst><c13><c9><ceos>one three<rst><c18>\r\n"
"one three\r\n",
"one two three\n"
)
def test_home_key( self_ ):
self_.check_scenario(
"abc<home>z<cr><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>ab<rst><gray><rst><c11><c9><ceos>abc<rst><gray><rst><c12><c9><ceos>abc<rst><c9><c9><ceos>zabc<rst><c10><c9><ceos>zabc<rst><c13>\r\n"
"zabc\r\n"
)
def test_end_key( self_ ):
self_.check_scenario(
"abc<home>z<end>q<cr><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>ab<rst><gray><rst><c11><c9><ceos>abc<rst><gray><rst><c12><c9><ceos>abc<rst><c9><c9><ceos>zabc<rst><c10><c9><ceos>zabc<rst><gray><rst><c13><c9><ceos>zabcq<rst><gray><rst><c14><c9><ceos>zabcq<rst><c14>\r\n"
"zabcq\r\n"
)
def test_left_key( self_ ):
self_.check_scenario(
"abc<left>x<aleft><left>y<cr><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>ab<rst><gray><rst><c11><c9><ceos>abc<rst><gray><rst><c12><c9><ceos>abc<rst><c11><c9><ceos>abxc<rst><c12><c9><ceos>abxc<rst><c11><c9><ceos>abxc<rst><c10><c9><ceos>aybxc<rst><c11><c9><ceos>aybxc<rst><c14>\r\n"
"aybxc\r\n"
)
def test_right_key( self_ ):
self_.check_scenario(
"abc<home><right>x<aright>y<cr><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>ab<rst><gray><rst><c11><c9><ceos>abc<rst><gray><rst><c12><c9><ceos>abc<rst><c9><c9><ceos>abc<rst><c10><c9><ceos>axbc<rst><c11><c9><ceos>axbc<rst><c12><c9><ceos>axbyc<rst><c13><c9><ceos>axbyc<rst><c14>\r\n"
"axbyc\r\n"
)
def test_prev_word_key( self_ ):
self_.check_scenario(
"abc def ghi<c-left><m-left>x<cr><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>ab<rst><gray><rst><c11><c9><ceos>abc<rst><gray><rst><c12><c9><ceos>abc "
"<rst><gray><rst><c13><c9><ceos>abc d<rst><gray><rst><c14><c9><ceos>abc "
"de<rst><gray><rst><c15><c9><ceos>abc def<rst><gray><rst><c16><c9><ceos>abc "
"def <rst><gray><rst><c17><c9><ceos>abc def "
"g<rst><gray><rst><c18><c9><ceos>abc def gh<rst><gray><rst><c19><c9><ceos>abc "
"def ghi<rst><gray><rst><c20><c9><ceos>abc def ghi<rst><c17><c9><ceos>abc def "
"ghi<rst><c13><c9><ceos>abc xdef ghi<rst><c14><c9><ceos>abc xdef "
"ghi<rst><c21>\r\n"
"abc xdef ghi\r\n"
)
def test_next_word_key( self_ ):
self_.check_scenario(
"abc def ghi<home><c-right><m-right>x<cr><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>ab<rst><gray><rst><c11><c9><ceos>abc<rst><gray><rst><c12><c9><ceos>abc "
"<rst><gray><rst><c13><c9><ceos>abc d<rst><gray><rst><c14><c9><ceos>abc "
"de<rst><gray><rst><c15><c9><ceos>abc def<rst><gray><rst><c16><c9><ceos>abc "
"def <rst><gray><rst><c17><c9><ceos>abc def "
"g<rst><gray><rst><c18><c9><ceos>abc def gh<rst><gray><rst><c19><c9><ceos>abc "
"def ghi<rst><gray><rst><c20><c9><ceos>abc def ghi<rst><c9><c9><ceos>abc def "
"ghi<rst><c12><c9><ceos>abc def ghi<rst><c16><c9><ceos>abc defx "
"ghi<rst><c17><c9><ceos>abc defx ghi<rst><c21>\r\n"
"abc defx ghi\r\n"
)
def test_hint_show( self_ ):
self_.check_scenario(
"co\r<c-d>",
"<c9><ceos>c<rst><gray><rst><c10><c9><ceos>co<rst><gray><rst>\r\n"
" <gray>color_black<rst>\r\n"
" <gray>color_red<rst>\r\n"
" <gray>color_green<rst><u3><c11><c9><ceos>co<rst><c11>\r\n"
"co\r\n"
)
self_.check_scenario(
"<up><cr><c-d>",
"<c9><ceos>zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz "
"<brightgreen>color_brightgreen<rst><green><rst><c15><u3><c9><ceos>zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz "
"<brightgreen>color_brightgreen<rst><c15>\r\n"
"zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz color_brightgreen\r\n",
"zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz color_brightgreen\n",
dimensions = ( 64, 16 )
)
def test_hint_scroll_down( self_ ):
self_.check_scenario(
"co<c-down><c-down><tab><cr><c-d>",
"<c9><ceos>c<rst><gray><rst><c10><c9><ceos>co<rst><gray><rst>\r\n"
" <gray>color_black<rst>\r\n"
" <gray>color_red<rst>\r\n"
" "
"<gray>color_green<rst><u3><c11><c9><ceos>co<rst><gray>lor_black<rst>\r\n"
" <gray>color_red<rst>\r\n"
" <gray>color_green<rst>\r\n"
" "
"<gray>color_brown<rst><u3><c11><c9><ceos>co<rst><gray>lor_red<rst>\r\n"
" <gray>color_green<rst>\r\n"
" <gray>color_brown<rst>\r\n"
" "
"<gray>color_blue<rst><u3><c11><c9><ceos><red>color_red<rst><green><rst><c18><c9><ceos><red>color_red<rst><c18>\r\n"
"color_red\r\n"
)
def test_hint_scroll_up( self_ ):
self_.check_scenario(
"co<c-up><c-up><tab><cr><c-d>",
"<c9><ceos>c<rst><gray><rst><c10><c9><ceos>co<rst><gray><rst>\r\n"
" <gray>color_black<rst>\r\n"
" <gray>color_red<rst>\r\n"
" "
"<gray>color_green<rst><u3><c11><c9><ceos>co<rst><gray>lor_normal<rst>\r\n"
" <gray>co\r\n"
" <gray>color_black<rst>\r\n"
" "
"<gray>color_red<rst><u3><c11><c9><ceos>co<rst><gray>lor_white<rst>\r\n"
" <gray>color_normal<rst>\r\n"
" <gray>co\r\n"
" "
"<gray>color_black<rst><u3><c11><c9><ceos><white>color_white<rst><green><rst><c20><c9><ceos><white>color_white<rst><c20>\r\n"
"color_white\r\n"
)
def test_history( self_ ):
self_.check_scenario(
"<up><up><up><up><down><down><down><down>four<cr><c-d>",
"<c9><ceos>three<rst><gray><rst><c14><c9><ceos>two<rst><gray><rst><c12><c9><ceos>one<rst><gray><rst><c12><c9><ceos>two<rst><gray><rst><c12><c9><ceos>three<rst><gray><rst><c14><c9><ceos><rst><gray><rst><c9><c9><ceos>f<rst><gray><rst><c10><c9><ceos>fo<rst><gray><rst><c11><c9><ceos>fou<rst><gray><rst><c12><c9><ceos>four<rst><gray><rst><c13><c9><ceos>four<rst><c13>\r\n"
"four\r\n"
)
with open( "replxx_history.txt", "rb" ) as f:
self_.assertSequenceEqual( f.read().decode(), "one\ntwo\nthree\nfour\n" )
def test_paren_matching( self_ ):
self_.check_scenario(
"ab(cd)ef<left><left><left><left><left><left><left><cr><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>ab<rst><gray><rst><c11><c9><ceos>ab<brightmagenta>(<rst><gray><rst><c12><c9><ceos>ab<brightmagenta>(<rst>c<rst><gray><rst><c13><c9><ceos>ab<brightmagenta>(<rst>cd<rst><gray><rst><c14><c9><ceos>ab<brightmagenta>(<rst>cd<brightmagenta>)<rst><gray><rst><c15><c9><ceos>ab<brightmagenta>(<rst>cd<brightmagenta>)<rst>e<rst><gray><rst><c16><c9><ceos>ab<brightmagenta>(<rst>cd<brightmagenta>)<rst>ef<rst><gray><rst><c17><c9><ceos>ab<brightmagenta>(<rst>cd<brightmagenta>)<rst>ef<rst><c16><c9><ceos>ab<brightmagenta>(<rst>cd<brightmagenta>)<rst>ef<rst><c15><c9><ceos>ab<brightred>(<rst>cd<brightmagenta>)<rst>ef<rst><c14><c9><ceos>ab<brightmagenta>(<rst>cd<brightmagenta>)<rst>ef<rst><c13><c9><ceos>ab<brightmagenta>(<rst>cd<brightmagenta>)<rst>ef<rst><c12><c9><ceos>ab<brightmagenta>(<rst>cd<brightred>)<rst>ef<rst><c11><c9><ceos>ab<brightmagenta>(<rst>cd<brightmagenta>)<rst>ef<rst><c10><c9><ceos>ab<brightmagenta>(<rst>cd<brightmagenta>)<rst>ef<rst><c17>\r\n"
"ab(cd)ef\r\n"
)
def test_paren_not_matched( self_ ):
self_.check_scenario(
"a(b[c)d<left><left><left><left><left><left><left><cr><c-d>",
"<c9><ceos>a<rst><gray><rst><c10><c9><ceos>a<brightmagenta>(<rst><gray><rst><c11><c9><ceos>a<brightmagenta>(<rst>b<rst><gray><rst><c12><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst><gray><rst><c13><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<rst><gray><rst><c14><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<brightmagenta>)<rst><gray><rst><c15><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<brightmagenta>)<rst>d<rst><gray><rst><c16><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<brightmagenta>)<rst>d<rst><c15><c9><ceos>a<err>(<rst>b<brightmagenta>[<rst>c<brightmagenta>)<rst>d<rst><c14><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<brightmagenta>)<rst>d<rst><c13><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<brightmagenta>)<rst>d<rst><c12><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<brightmagenta>)<rst>d<rst><c11><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<err>)<rst>d<rst><c10><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<brightmagenta>)<rst>d<rst><c9><c9><ceos>a<brightmagenta>(<rst>b<brightmagenta>[<rst>c<brightmagenta>)<rst>d<rst><c16>\r\n"
"a(b[c)d\r\n"
)
def test_tab_completion( self_ ):
self_.check_scenario(
"co<tab><tab>bri<tab>b<tab><cr><c-d>",
"<c9><ceos>c<rst><gray><rst><c10><c9><ceos>co<rst><gray><rst>\r\n"
" <gray>color_black<rst>\r\n"
" <gray>color_red<rst>\r\n"
" <gray>color_green<rst><u3><c11><c9><ceos>color_<rst><gray><rst>\r\n"
" <gray>color_black<rst>\r\n"
" <gray>color_red<rst>\r\n"
" <gray>color_green<rst><u3><c15><c9><ceos>color_<rst><c15>\r\n"
"<brightmagenta>color_<rst>black "
"<brightmagenta>color_<rst>cyan "
"<brightmagenta>color_<rst>brightblue\r\n"
"<brightmagenta>color_<rst>red "
"<brightmagenta>color_<rst>lightgray "
"<brightmagenta>color_<rst>brightmagenta\r\n"
"<brightmagenta>color_<rst>green "
"<brightmagenta>color_<rst>gray "
"<brightmagenta>color_<rst>brightcyan\r\n"
"<brightmagenta>color_<rst>brown "
"<brightmagenta>color_<rst>brightred <brightmagenta>color_<rst>white\r\n"
"<brightmagenta>color_<rst>blue "
"<brightmagenta>color_<rst>brightgreen <brightmagenta>color_<rst>normal\r\n"
"<brightmagenta>color_<rst>magenta <brightmagenta>color_<rst>yellow\r\n"
"<brightgreen>replxx<rst>> <c9><ceos>color_<rst><gray><rst>\r\n"
" <gray>color_black<rst>\r\n"
" <gray>color_red<rst>\r\n"
" <gray>color_green<rst><u3><c15><c9><ceos>color_b<rst><gray><rst>\r\n"
" <gray>color_black<rst>\r\n"
" <gray>color_brown<rst>\r\n"
" <gray>color_blue<rst><u3><c16><c9><ceos>color_br<rst><gray><rst>\r\n"
" <gray>color_brown<rst>\r\n"
" <gray>color_brightred<rst>\r\n"
" "
"<gray>color_brightgreen<rst><u3><c17><c9><ceos>color_bri<rst><gray><rst>\r\n"
" <gray>color_brightred<rst>\r\n"
" <gray>color_brightgreen<rst>\r\n"
" "
"<gray>color_brightblue<rst><u3><c18><c9><ceos>color_bright<rst><gray><rst>\r\n"
" <gray>color_brightred<rst>\r\n"
" <gray>color_brightgreen<rst>\r\n"
" "
"<gray>color_brightblue<rst><u3><c21><c9><ceos>color_brightb<rst><green>lue<rst><c22><c9><ceos><brightblue>color_brightblue<rst><green><rst><c25><c9><ceos><brightblue>color_brightblue<rst><c25>\r\n"
"color_brightblue\r\n"
)
self_.check_scenario(
"<tab><tab>n<cr><c-d>",
"<bell><bell><c9><ceos>n<rst><gray><rst><c10><c9><ceos>n<rst><c10>\r\nn\r\n",
dimensions = ( 4, 32 ),
command = ReplxxTests._cSample_ + " q1 e0"
)
self_.check_scenario(
"<tab><tab>n<cr><c-d>",
"<c9><ceos><rst><c9>\r\n"
"<brightmagenta><rst>db\r\n"
"<brightmagenta><rst>hello\r\n"
"<brightmagenta><rst>hallo\r\n"
"--More--<bell>\r"
"\t\t\t\t\r"
"<brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><c9>\r\n",
dimensions = ( 4, 24 ),
command = ReplxxTests._cSample_ + " q1 e1"
)
self_.check_scenario(
"<up><home>co<tab><cr><c-d>",
"<c9><ceos>abcd<brightmagenta>()<rst><gray><rst><c15>"
"<c9><ceos>abcd<brightmagenta>()<rst><c9>"
"<c9><ceos>cabcd<brightmagenta>()<rst><c10>"
"<c9><ceos>coabcd<brightmagenta>()<rst><c11>"
"<c9><ceos>color_abcd<brightmagenta>()<rst><c15>"
"<c9><ceos>color_abcd<brightmagenta>()<rst><c21>\r\n"
"color_abcd()\r\n",
"abcd()\n"
)
def test_completion_pager( self_ ):
cmd = ReplxxTests._cSample_ + " q1 x" + ",".join( _words_ )
self_.check_scenario(
"<tab>py<cr><c-d>",
"<c9><ceos><rst><c9>\r\n"
"<brightmagenta><rst>ada <brightmagenta><rst>groovy <brightmagenta><rst>perl\r\n"
"<brightmagenta><rst>algolbash <brightmagenta><rst>haskell <brightmagenta><rst>php\r\n"
"<brightmagenta><rst>basic <brightmagenta><rst>huginn <brightmagenta><rst>prolog\r\n"
"<brightmagenta><rst>clojure <brightmagenta><rst>java <brightmagenta><rst>python\r\n"
"<brightmagenta><rst>cobol <brightmagenta><rst>javascript <brightmagenta><rst>rebol\r\n"
"<brightmagenta><rst>csharp <brightmagenta><rst>julia <brightmagenta><rst>ruby\r\n"
"<brightmagenta><rst>eiffel <brightmagenta><rst>kotlin <brightmagenta><rst>rust\r\n"
"<brightmagenta><rst>erlang <brightmagenta><rst>lisp <brightmagenta><rst>scala\r\n"
"<brightmagenta><rst>forth <brightmagenta><rst>lua <brightmagenta><rst>scheme\r\n"
"--More--<bell>\r"
"\t\t\t\t\r"
"<brightmagenta><rst>fortran <brightmagenta><rst>modula <brightmagenta><rst>sql\r\n"
"<brightmagenta><rst>fsharp <brightmagenta><rst>nemerle <brightmagenta><rst>swift\r\n"
"<brightmagenta><rst>go <brightmagenta><rst>ocaml <brightmagenta><rst>typescript\r\n"
"<brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><c9>\r\n",
dimensions = ( 10, 40 ),
command = cmd
)
self_.check_scenario(
"<tab><cr><cr><cr><cr><c-d>",
"<c9><ceos><rst><c9>\r\n"
"<brightmagenta><rst>ada <brightmagenta><rst>groovy <brightmagenta><rst>perl\r\n"
"<brightmagenta><rst>algolbash <brightmagenta><rst>haskell <brightmagenta><rst>php\r\n"
"<brightmagenta><rst>basic <brightmagenta><rst>huginn <brightmagenta><rst>prolog\r\n"
"<brightmagenta><rst>clojure <brightmagenta><rst>java <brightmagenta><rst>python\r\n"
"<brightmagenta><rst>cobol <brightmagenta><rst>javascript <brightmagenta><rst>rebol\r\n"
"<brightmagenta><rst>csharp <brightmagenta><rst>julia <brightmagenta><rst>ruby\r\n"
"<brightmagenta><rst>eiffel <brightmagenta><rst>kotlin <brightmagenta><rst>rust\r\n"
"<brightmagenta><rst>erlang <brightmagenta><rst>lisp <brightmagenta><rst>scala\r\n"
"<brightmagenta><rst>forth <brightmagenta><rst>lua <brightmagenta><rst>scheme\r\n"
"--More--\r"
"\t\t\t\t\r"
"<brightmagenta><rst>fortran <brightmagenta><rst>modula <brightmagenta><rst>sql\r\n"
"--More--\r"
"\t\t\t\t\r"
"<brightmagenta><rst>fsharp <brightmagenta><rst>nemerle <brightmagenta><rst>swift\r\n"
"--More--\r"
"\t\t\t\t\r"
"<brightmagenta><rst>go <brightmagenta><rst>ocaml <brightmagenta><rst>typescript\r\n"
"<brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><c9>\r\n",
dimensions = ( 10, 40 ),
command = cmd
)
self_.check_scenario(
"<tab><c-c><cr><c-d>",
"<c9><ceos><rst><c9>\r\n"
"<brightmagenta><rst>ada <brightmagenta><rst>kotlin\r\n"
"<brightmagenta><rst>algolbash <brightmagenta><rst>lisp\r\n"
"<brightmagenta><rst>basic <brightmagenta><rst>lua\r\n"
"<brightmagenta><rst>clojure <brightmagenta><rst>modula\r\n"
"<brightmagenta><rst>cobol <brightmagenta><rst>nemerle\r\n"
"<brightmagenta><rst>csharp <brightmagenta><rst>ocaml\r\n"
"<brightmagenta><rst>eiffel <brightmagenta><rst>perl\r\n"
"--More--^C\r\n"
"<brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><c9>\r\n",
dimensions = ( 8, 32 ),
command = cmd
)
self_.check_scenario(
"<tab>q<cr><c-d>",
"<c9><ceos><rst><c9>\r\n"
"<brightmagenta><rst>ada <brightmagenta><rst>kotlin\r\n"
"<brightmagenta><rst>algolbash <brightmagenta><rst>lisp\r\n"
"<brightmagenta><rst>basic <brightmagenta><rst>lua\r\n"
"<brightmagenta><rst>clojure <brightmagenta><rst>modula\r\n"
"<brightmagenta><rst>cobol <brightmagenta><rst>nemerle\r\n"
"<brightmagenta><rst>csharp <brightmagenta><rst>ocaml\r\n"
"<brightmagenta><rst>eiffel <brightmagenta><rst>perl\r\n"
"--More--\r"
"\t\t\t\t\r"
"<brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><c9>\r\n",
dimensions = ( 8, 32 ),
command = cmd
)
def test_double_tab_completion( self_ ):
cmd = ReplxxTests._cSample_ + " d1 q1 x" + ",".join( _words_ )
self_.check_scenario(
"fo<tab><tab>r<tab><cr><c-d>",
"<c9><ceos>f<rst><gray><rst>\r\n"
" <gray>forth<rst>\r\n"
" <gray>fortran<rst>\r\n"
" <gray>fsharp<rst><u3><c10><c9><ceos>fo<rst><gray><rst>\r\n"
" <gray>forth<rst>\r\n"
" <gray>fortran<rst><u2><c11><c9><ceos>fort<rst><gray><rst>\r\n"
" <gray>forth<rst>\r\n"
" "
"<gray>fortran<rst><u2><c13><c9><ceos>fortr<rst><gray>an<rst><c14><c9><ceos>fortran<rst><gray><rst><c16><c9><ceos>fortran<rst><c16>\r\n"
"fortran\r\n",
command = cmd
)
def test_beep_on_ambiguous_completion( self_ ):
cmd = ReplxxTests._cSample_ + " b1 d1 q1 x" + ",".join( _words_ )
self_.check_scenario(
"fo<tab><tab>r<tab><cr><c-d>",
"<c9><ceos>f<rst><gray><rst>\r\n"
" <gray>forth<rst>\r\n"
" <gray>fortran<rst>\r\n"
" <gray>fsharp<rst><u3><c10><c9><ceos>fo<rst><gray><rst>\r\n"
" <gray>forth<rst>\r\n"
" <gray>fortran<rst><u2><c11><bell><c9><ceos>fort<rst><gray><rst>\r\n"
" <gray>forth<rst>\r\n"
" "
"<gray>fortran<rst><u2><c13><bell><c9><ceos>fortr<rst><gray>an<rst><c14><c9><ceos>fortran<rst><gray><rst><c16><c9><ceos>fortran<rst><c16>\r\n"
"fortran\r\n",
command = cmd
)
def test_history_search_backward( self_ ):
self_.check_scenario(
"<c-r>repl<c-r><cr><c-d>",
"<c9><ceos><rst><gray><rst><c9><c1><ceos>(reverse-i-search)`': "
"<c23><c1><ceos>(reverse-i-search)`r': echo repl "
"golf<c29><c1><ceos>(reverse-i-search)`re': echo repl "
"golf<c30><c1><ceos>(reverse-i-search)`rep': echo repl "
"golf<c31><c1><ceos>(reverse-i-search)`repl': echo repl "
"golf<c32><c1><ceos>(reverse-i-search)`repl': charlie repl "
"delta<c35><c1><ceos><brightgreen>replxx<rst>> charlie repl "
"delta<c17><c9><ceos>charlie repl delta<rst><c27>\r\n"
"charlie repl delta\r\n",
"some command\n"
"alfa repl bravo\n"
"other request\n"
"charlie repl delta\n"
"misc input\n"
"echo repl golf\n"
"final thoughts\n"
)
self_.check_scenario(
"<c-r>for<backspace><backspace>s<cr><c-d>",
"<c9><ceos><rst><gray><rst><c9><c1><ceos>(reverse-i-search)`': "
"<c23><c1><ceos>(reverse-i-search)`f': "
"swift<c27><c1><ceos>(reverse-i-search)`fo': "
"fortran<c25><c1><ceos>(reverse-i-search)`for': "
"fortran<c26><c1><ceos>(reverse-i-search)`fo': "
"fortran<c25><c1><ceos>(reverse-i-search)`f': "
"swift<c27><c1><ceos>(reverse-i-search)`fs': "
"fsharp<c25><c1><ceos><brightgreen>replxx<rst>> "
"fsharp<c9><c9><ceos>fsharp<rst><c15>\r\n"
"fsharp\r\n",
"\n".join( _words_ ) + "\n"
)
self_.check_scenario(
"<c-r>mod<c-l><cr><c-d>",
"<c9><ceos><rst><gray><rst><c9><c1><ceos>(reverse-i-search)`': "
"<c23><c1><ceos>(reverse-i-search)`m': "
"scheme<c28><c1><ceos>(reverse-i-search)`mo': "
"modula<c25><c1><ceos>(reverse-i-search)`mod': "
"modula<c26><c1><ceos><brightgreen>replxx<rst>> "
"<c9><RIS><mvhm><clr><rst><brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><c9>\r\n",
"\n".join( _words_ ) + "\n"
)
def test_history_prefix_search_backward( self_ ):
self_.check_scenario(
"repl<m-p><m-p><cr><c-d>",
"<c9><ceos>r<rst><gray><rst><c10><c9><ceos>re<rst><gray><rst><c11><c9><ceos>rep<rst><gray><rst><c12><c9><ceos>repl<rst><gray><rst><c13><c9><ceos>repl_echo "
"golf<rst><gray><rst><c23><c9><ceos>repl_charlie "
"delta<rst><gray><rst><c27><c9><ceos>repl_charlie delta<rst><c27>\r\n"
"repl_charlie delta\r\n",
"some command\n"
"repl_alfa bravo\n"
"other request\n"
"repl_charlie delta\n"
"misc input\n"
"repl_echo golf\n"
"final thoughts\n"
)
def test_history_browse( self_ ):
self_.check_scenario(
"<up><aup><pgup><down><up><up><adown><pgdown><up><down><down><up><cr><c-d>",
"<c9><ceos>twelve<rst><gray><rst><c15>"
"<c9><ceos>eleven<rst><gray><rst><c15>"
"<c9><ceos>one<rst><gray><rst><c12>"
"<c9><ceos>two<rst><gray><rst><c12>"
"<c9><ceos>one<rst><gray><rst><c12>"
"<c9><ceos>two<rst><gray><rst><c12>"
"<c9><ceos><rst><gray><rst><c9>"
"<c9><ceos>twelve<rst><gray><rst><c15>"
"<c9><ceos><rst><gray><rst><c9>"
"<c9><ceos>twelve<rst><gray><rst><c15>"
"<c9><ceos>twelve<rst><c15>\r\n"
"twelve\r\n",
"one\n"
"two\n"
"three\n"
"four\n"
"five\n"
"six\n"
"seven\n"
"eight\n"
"nine\n"
"ten\n"
"eleven\n"
"twelve\n"
)
def test_history_max_size( self_ ):
self_.check_scenario(
"<pgup><pgdown>a<cr><pgup><cr><c-d>",
"<c9><ceos>three<rst><gray><rst><c14><c9><ceos><rst><gray><rst><c9><c9><ceos>a<rst><gray><rst><c10><c9><ceos>a<rst><c10>\r\n"
"a\r\n"
"<brightgreen>replxx<rst>> "
"<c9><ceos>four<rst><gray><rst><c13><c9><ceos>four<rst><c13>\r\n"
"four\r\n",
"one\n"
"two\n"
"three\n"
"four\n"
"five\n",
command = ReplxxTests._cSample_ + " q1 s3"
)
def test_capitalize( self_ ):
self_.check_scenario(
"<up><home><right><m-c><m-c><right><right><m-c><m-c><m-c><cr><c-d>",
"<c9><ceos>abc defg ijklmn zzxq<rst><gray><rst><c29><c9><ceos>abc defg ijklmn "
"zzxq<rst><c9><c9><ceos>abc defg ijklmn zzxq<rst><c10><c9><ceos>aBc defg "
"ijklmn zzxq<rst><c12><c9><ceos>aBc Defg ijklmn zzxq<rst><c17><c9><ceos>aBc "
"Defg ijklmn zzxq<rst><c18><c9><ceos>aBc Defg ijklmn "
"zzxq<rst><c19><c9><ceos>aBc Defg iJklmn zzxq<rst><c24><c9><ceos>aBc Defg "
"iJklmn Zzxq<rst><gray><rst><c29><c9><ceos>aBc Defg iJklmn Zzxq<rst><c29>\r\n"
"aBc Defg iJklmn Zzxq\r\n",
"abc defg ijklmn zzxq\n"
)
def test_make_upper_case( self_ ):
self_.check_scenario(
"<up><home><right><right><right><m-u><m-u><right><m-u><cr><c-d>",
"<c9><ceos>abcdefg hijklmno pqrstuvw<rst><gray><rst><c34><c9><ceos>abcdefg "
"hijklmno pqrstuvw<rst><c9><c9><ceos>abcdefg hijklmno "
"pqrstuvw<rst><c10><c9><ceos>abcdefg hijklmno "
"pqrstuvw<rst><c11><c9><ceos>abcdefg hijklmno "
"pqrstuvw<rst><c12><c9><ceos>abcDEFG hijklmno "
"pqrstuvw<rst><c16><c9><ceos>abcDEFG HIJKLMNO "
"pqrstuvw<rst><c25><c9><ceos>abcDEFG HIJKLMNO "
"pqrstuvw<rst><c26><c9><ceos>abcDEFG HIJKLMNO "
"PQRSTUVW<rst><gray><rst><c34><c9><ceos>abcDEFG HIJKLMNO "
"PQRSTUVW<rst><c34>\r\n"
"abcDEFG HIJKLMNO PQRSTUVW\r\n",
"abcdefg hijklmno pqrstuvw\n"
)
def test_make_lower_case( self_ ):
self_.check_scenario(
"<up><home><right><right><right><m-l><m-l><right><m-l><cr><c-d>",
"<c9><ceos>ABCDEFG HIJKLMNO PQRSTUVW<rst><gray><rst><c34><c9><ceos>ABCDEFG "
"HIJKLMNO PQRSTUVW<rst><c9><c9><ceos>ABCDEFG HIJKLMNO "
"PQRSTUVW<rst><c10><c9><ceos>ABCDEFG HIJKLMNO "
"PQRSTUVW<rst><c11><c9><ceos>ABCDEFG HIJKLMNO "
"PQRSTUVW<rst><c12><c9><ceos>ABCdefg HIJKLMNO "
"PQRSTUVW<rst><c16><c9><ceos>ABCdefg hijklmno "
"PQRSTUVW<rst><c25><c9><ceos>ABCdefg hijklmno "
"PQRSTUVW<rst><c26><c9><ceos>ABCdefg hijklmno "
"pqrstuvw<rst><gray><rst><c34><c9><ceos>ABCdefg hijklmno "
"pqrstuvw<rst><c34>\r\n"
"ABCdefg hijklmno pqrstuvw\r\n",
"ABCDEFG HIJKLMNO PQRSTUVW\n"
)
def test_transpose( self_ ):
self_.check_scenario(
"<up><home><c-t><right><c-t><c-t><c-t><c-t><c-t><cr><c-d>",
"<c9><ceos>abcd<rst><gray><rst><c13>"
"<c9><ceos>abcd<rst><c9>"
"<c9><ceos>abcd<rst><c10>"
"<c9><ceos>bacd<rst><c11>"
"<c9><ceos>bcad<rst><c12>"
"<c9><ceos>bcda<rst><gray><rst><c13>"
"<c9><ceos>bcad<rst><gray><rst><c13>"
"<c9><ceos>bcda<rst><gray><rst><c13>"
"<c9><ceos>bcda<rst><c13>\r\n"
"bcda\r\n",
"abcd\n"
)
def test_kill_to_beginning_of_line( self_ ):
self_.check_scenario(
"<up><home><c-right><c-right><right><c-u><end><c-y><cr><c-d>",
"<c9><ceos><brightblue>+<rst>abc defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><gray><rst><c32><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><c9><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><c13><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><c18><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><c19><c9><ceos><brightblue>-<rst>ijklmn "
"zzxq<brightblue>+<rst><c9><c9><ceos><brightblue>-<rst>ijklmn "
"zzxq<brightblue>+<rst><gray><rst><c22><c9><ceos><brightblue>-<rst>ijklmn "
"zzxq<brightblue>++<rst>abc "
"defg<brightblue>-<rst><gray><rst><c32><c9><ceos><brightblue>-<rst>ijklmn "
"zzxq<brightblue>++<rst>abc defg<brightblue>-<rst><c32>\r\n"
"-ijklmn zzxq++abc defg-\r\n",
"+abc defg--ijklmn zzxq+\n"
)
def test_kill_to_end_of_line( self_ ):
self_.check_scenario(
"<up><home><c-right><c-right><right><c-k><home><c-y><cr><c-d>",
"<c9><ceos><brightblue>+<rst>abc defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><gray><rst><c32><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><c9><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><c13><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><c18><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>--<rst>ijklmn "
"zzxq<brightblue>+<rst><c19><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>-<rst><gray><rst><c19><c9><ceos><brightblue>+<rst>abc "
"defg<brightblue>-<rst><c9><c9><ceos><brightblue>-<rst>ijklmn "
"zzxq<brightblue>++<rst>abc "
"defg<brightblue>-<rst><c22><c9><ceos><brightblue>-<rst>ijklmn "
"zzxq<brightblue>++<rst>abc defg<brightblue>-<rst><c32>\r\n"
"-ijklmn zzxq++abc defg-\r\n",
"+abc defg--ijklmn zzxq+\n"
)
def test_kill_next_word( self_ ):
self_.check_scenario(
"<up><home><c-right><m-d><c-right><c-y><cr><c-d>",
"<c9><ceos>alpha charlie bravo delta<rst><gray><rst><c34><c9><ceos>alpha "
"charlie bravo delta<rst><c9><c9><ceos>alpha charlie bravo "
"delta<rst><c14><c9><ceos>alpha bravo delta<rst><c14><c9><ceos>alpha bravo "
"delta<rst><c20><c9><ceos>alpha bravo charlie delta<rst><c28><c9><ceos>alpha "
"bravo charlie delta<rst><c34>\r\n"
"alpha bravo charlie delta\r\n",
"alpha charlie bravo delta\n"
)
def test_kill_prev_word_to_white_space( self_ ):
self_.check_scenario(
"<up><c-left><c-w><c-left><c-y><cr><c-d>",
"<c9><ceos>alpha charlie bravo delta<rst><gray><rst><c34><c9><ceos>alpha "
"charlie bravo delta<rst><c29><c9><ceos>alpha charlie "
"delta<rst><c23><c9><ceos>alpha charlie delta<rst><c15><c9><ceos>alpha bravo "
"charlie delta<rst><c21><c9><ceos>alpha bravo charlie delta<rst><c34>\r\n"
"alpha bravo charlie delta\r\n",
"alpha charlie bravo delta\n"
)
def test_kill_prev_word( self_ ):
self_.check_scenario(
"<up><c-left><m-backspace><c-left><c-y><cr><c-d>",
"<c9><ceos>alpha<brightmagenta>.<rst>charlie "
"bravo<brightmagenta>.<rst>delta<rst><gray><rst><c34><c9><ceos>alpha<brightmagenta>.<rst>charlie "
"bravo<brightmagenta>.<rst>delta<rst><c29><c9><ceos>alpha<brightmagenta>.<rst>charlie "
"delta<rst><c23><c9><ceos>alpha<brightmagenta>.<rst>charlie "
"delta<rst><c15><c9><ceos>alpha<brightmagenta>.<rst>bravo<brightmagenta>.<rst>charlie "
"delta<rst><c21><c9><ceos>alpha<brightmagenta>.<rst>bravo<brightmagenta>.<rst>charlie "
"delta<rst><c34>\r\n"
"alpha.bravo.charlie delta\r\n",
"alpha.charlie bravo.delta\n"
)
def test_kill_ring( self_ ):
self_.check_scenario(
"<up><c-w><backspace><c-w><backspace><c-w><backspace><c-u><c-y><m-y><m-y><m-y> <c-y><m-y><m-y><m-y> <c-y><m-y><m-y><m-y> <c-y><m-y><m-y><m-y><cr><c-d>",
"<c9><ceos>delta charlie bravo alpha<rst><gray><rst><c34><c9><ceos>delta "
"charlie bravo <rst><gray><rst><c29><c9><ceos>delta charlie "
"bravo<rst><gray><rst><c28><c9><ceos>delta charlie "
"<rst><gray><rst><c23><c9><ceos>delta "
"charlie<rst><gray><rst><c22><c9><ceos>delta "
"<rst><gray><rst><c15>"
"<c9><ceos>delta<rst><gray><rst><c14>"
"<c9><ceos><rst><gray><rst><c9>"
"<c9><ceos>delta<rst><gray><rst><c14>"
"<c9><ceos>charlie<rst><gray><rst><c16>"
"<c9><ceos>bravo<rst><gray><rst><c14>"
"<c9><ceos>alpha<rst><gray><rst><c14>"
"<c9><ceos>alpha "
"<rst><gray><rst><c15><c9><ceos>alpha "
"alpha<rst><gray><rst><c20><c9><ceos>alpha "
"delta<rst><gray><rst><c20><c9><ceos>alpha "
"charlie<rst><gray><rst><c22><c9><ceos>alpha "
"bravo<rst><gray><rst><c20><c9><ceos>alpha bravo "
"<rst><gray><rst><c21><c9><ceos>alpha bravo "
"bravo<rst><gray><rst><c26><c9><ceos>alpha bravo "
"alpha<rst><gray><rst><c26><c9><ceos>alpha bravo "
"delta<rst><gray><rst><c26><c9><ceos>alpha bravo "
"charlie<rst><gray><rst><c28><c9><ceos>alpha bravo charlie "
"<rst><gray><rst><c29><c9><ceos>alpha bravo charlie "
"charlie<rst><gray><rst><c36><c9><ceos>alpha bravo charlie "
"bravo<rst><gray><rst><c34><c9><ceos>alpha bravo charlie "
"alpha<rst><gray><rst><c34><c9><ceos>alpha bravo charlie "
"delta<rst><gray><rst><c34><c9><ceos>alpha bravo charlie delta<rst><c34>\r\n"
"alpha bravo charlie delta\r\n",
"delta charlie bravo alpha\n"
)
self_.check_scenario(
"<up><c-w><c-w><backspace><c-a><c-y> <cr><c-d>",
"<c9><ceos>charlie delta alpha bravo<rst><gray><rst><c34><c9><ceos>charlie "
"delta alpha <rst><gray><rst><c29><c9><ceos>charlie delta "
"<rst><gray><rst><c23><c9><ceos>charlie "
"delta<rst><gray><rst><c22><c9><ceos>charlie delta<rst><c9><c9><ceos>alpha "
"bravocharlie delta<rst><c20><c9><ceos>alpha bravo charlie "
"delta<rst><c21><c9><ceos>alpha bravo charlie delta<rst><c34>\r\n"
"alpha bravo charlie delta\r\n",
"charlie delta alpha bravo\n"
)
self_.check_scenario(
"<up><home><m-d><m-d><del><c-e> <c-y><cr><c-d>",
"<c9><ceos>charlie delta alpha bravo<rst><gray><rst><c34><c9><ceos>charlie "
"delta alpha bravo<rst><c9><c9><ceos> delta alpha bravo<rst><c9><c9><ceos> "
"alpha bravo<rst><c9><c9><ceos>alpha bravo<rst><c9><c9><ceos>alpha "
"bravo<rst><gray><rst><c20><c9><ceos>alpha bravo "
"<rst><gray><rst><c21><c9><ceos>alpha bravo charlie "
"delta<rst><gray><rst><c34><c9><ceos>alpha bravo charlie delta<rst><c34>\r\n"
"alpha bravo charlie delta\r\n",
"charlie delta alpha bravo\n"
)
self_.check_scenario(
"<up><c-w><backspace><c-w><backspace><c-w><backspace><c-w><backspace><c-w><backspace>"
"<c-w><backspace><c-w><backspace><c-w><backspace><c-w><backspace><c-w><backspace>"
"<c-w><c-y><m-y><m-y><m-y><m-y><m-y><m-y><m-y><m-y><m-y><m-y><cr><c-d>",
"<c9><ceos>a b c d e f g h i j k<rst><gray><rst><c30><c9><ceos>a b c d e f g "
"h i j <rst><gray><rst><c29><c9><ceos>a b c d e f g h i "
"j<rst><gray><rst><c28><c9><ceos>a b c d e f g h i "
"<rst><gray><rst><c27><c9><ceos>a b c d e f g h "
"i<rst><gray><rst><c26><c9><ceos>a b c d e f g h "
"<rst><gray><rst><c25><c9><ceos>a b c d e f g "
"h<rst><gray><rst><c24><c9><ceos>a b c d e f g "
"<rst><gray><rst><c23><c9><ceos>a b c d e f g<rst><gray><rst><c22><c9><ceos>a "
"b c d e f <rst><gray><rst><c21><c9><ceos>a b c d e "
"f<rst><gray><rst><c20><c9><ceos>a b c d e <rst><gray><rst><c19><c9><ceos>a b "
"c d e<rst><gray><rst><c18><c9><ceos>a b c d <rst><gray><rst><c17><c9><ceos>a "
"b c d<rst><gray><rst><c16><c9><ceos>a b c <rst><gray><rst><c15><c9><ceos>a b "
"c<rst><gray><rst><c14><c9><ceos>a b <rst><gray><rst><c13><c9><ceos>a "
"b<rst><gray><rst><c12><c9><ceos>a "
"<rst><gray><rst><c11>"
"<c9><ceos>a<rst><gray><rst><c10>"
"<c9><ceos><rst><gray><rst><c9>"
"<c9><ceos>a<rst><gray><rst><c10>"
"<c9><ceos>b<rst><gray><rst><c10>"
"<c9><ceos>c<rst><gray><rst><c10>"
"<c9><ceos>d<rst><gray><rst><c10>"
"<c9><ceos>e<rst><gray><rst><c10>"
"<c9><ceos>f<rst><gray><rst><c10>"
"<c9><ceos>g<rst><gray><rst><c10>"
"<c9><ceos>h<rst><gray><rst><c10>"
"<c9><ceos>i<rst><gray><rst><c10>"
"<c9><ceos>j<rst><gray><rst><c10>"
"<c9><ceos>a<rst><gray><rst><c10>"
"<c9><ceos>a<rst><c10>\r\n"
"a\r\n",
"a b c d e f g h i j k\n"
)
def test_tab_completion_cutoff( self_ ):
self_.check_scenario(
"<tab>n<tab>y<cr><c-d>",
"<c9><ceos><rst><gray><rst><c9>\r\n"
"Display all 9 possibilities? (y or n)\r\n"
"<brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><gray><rst><c9>\r\n"
"Display all 9 possibilities? (y or n)<ceos>\r\n"
"<brightmagenta><rst>db <brightmagenta><rst>hallo "
"<brightmagenta><rst>hansekogge <brightmagenta><rst>quetzalcoatl "
"<brightmagenta><rst>power\r\n"
"<brightmagenta><rst>hello <brightmagenta><rst>hans "
"<brightmagenta><rst>seamann <brightmagenta><rst>quit\r\n"
"<brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><c9>\r\n",
command = ReplxxTests._cSample_ + " q1 c3"
)
self_.check_scenario(
"<tab>n<cr><c-d>",
"<c9><ceos><rst><gray><rst><c9>\r\n"
"Display all 9 possibilities? (y or n)\r\n"
"<brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><c9>\r\n",
command = ReplxxTests._cSample_ + " q1 c3"
)
self_.check_scenario(
"<tab><c-c><cr><c-d>",
"<c9><ceos><rst><gray><rst><c9>\r\n"
"Display all 9 possibilities? (y or n)^C\r\n"
"<brightgreen>replxx<rst>> "
"<c9><ceos><rst><gray><rst><c9><c9><ceos><rst><c9>\r\n",
command = ReplxxTests._cSample_ + " q1 c3"
)
def test_preload( self_ ):
self_.check_scenario(
"<cr><c-d>",
"<c9><ceos>Alice has a cat.<rst><gray><rst><c25>"
"<c9><ceos>Alice has a cat.<rst><c25>\r\n"
"Alice has a cat.\r\n",
command = ReplxxTests._cSample_ + " q1 'iAlice has a cat.'"
)
self_.check_scenario(
"<cr><c-d>",
"<c9><ceos>Cat eats mice.\r\n"
"<rst><gray><rst><u1><c26><c9><ceos>Cat eats mice.\r\n"
"<rst><u1><c26>\r\n"
"Cat eats mice.\r\n"
"\r\n",
command = ReplxxTests._cSample_ + " q1 'iCat\teats\tmice.\r\n'"
)
self_.check_scenario(
"<cr><c-d>",
"<c9><ceos>M Alice has a cat.<rst><gray><rst><c27>"
"<c9><ceos>M Alice has a cat.<rst><c27>\r\n"
"M Alice has a cat.\r\n",
command = ReplxxTests._cSample_ + " q1 'iMAlice has a cat.'"
)
self_.check_scenario(
"<cr><c-d>",
"<c9><ceos>M Alice has a cat.<rst><gray><rst><c28>"
"<c9><ceos>M Alice has a cat.<rst><c28>\r\n"
"M Alice has a cat.\r\n",
command = ReplxxTests._cSample_ + " q1 'iM\t\t\t\tAlice has a cat.'"
)
def test_prompt( self_ ):
prompt = "date: now\nrepl> "
self_.check_scenario(
"<up><cr><up><up><cr><c-d>",
"<c7><ceos>three<rst><gray><rst><c12><c7><ceos>three<rst><c12>\r\n"
"three\r\n"
"date: now\r\n"
"repl> "
"<c7><ceos>three<rst><gray><rst><c12><c7><ceos>two<rst><gray><rst><c10><c7><ceos>two<rst><c10>\r\n"
"two\r\n",
command = ReplxxTests._cSample_ + " q1 'p{}'".format( prompt ),
prompt = prompt,
end = prompt + ReplxxTests._end_
)
def test_long_line( self_ ):
self_.check_scenario(
"<up><c-left>~<c-left>~<c-left>~<c-left>~<c-left>~<c-left>~<c-left>~<c-left>~<c-left>~<c-left>~<c-left>~<c-left>~<c-left><cr><c-d>",
"<c9><ceos>ada clojure eiffel fortran groovy java kotlin modula perl python "
"rust sql<rst><gray><rst><c2><u2><c9><ceos>ada clojure eiffel fortran groovy "
"java kotlin modula perl python rust sql<rst><u1><c39><u1><c9><ceos>ada "
"clojure eiffel fortran groovy java kotlin modula perl python rust "
"~sql<rst><u1><c40><u1><c9><ceos>ada clojure eiffel fortran groovy java "
"kotlin modula perl python rust ~sql<rst><u1><c34><u1><c9><ceos>ada clojure "
"eiffel fortran groovy java kotlin modula perl python ~rust "
"~sql<rst><u1><c35><u1><c9><ceos>ada clojure eiffel fortran groovy java "
"kotlin modula perl python ~rust ~sql<rst><u1><c27><u1><c9><ceos>ada clojure "
"eiffel fortran groovy java kotlin modula perl ~python ~rust "
"~sql<rst><u1><c28><u1><c9><ceos>ada clojure eiffel fortran groovy java "
"kotlin modula perl ~python ~rust ~sql<rst><u1><c22><u1><c9><ceos>ada clojure "
"eiffel fortran groovy java kotlin modula ~perl ~python ~rust "
"~sql<rst><u1><c23><u1><c9><ceos>ada clojure eiffel fortran groovy java "
"kotlin modula ~perl ~python ~rust ~sql<rst><u1><c15><u1><c9><ceos>ada "
"clojure eiffel fortran groovy java kotlin ~modula ~perl ~python ~rust "
"~sql<rst><u1><c16><u1><c9><ceos>ada clojure eiffel fortran groovy java "
"kotlin ~modula ~perl ~python ~rust ~sql<rst><u1><c8><u1><c9><ceos>ada "
"clojure eiffel fortran groovy java ~kotlin ~modula ~perl ~python ~rust "
"~sql<rst><u1><c9><u1><c9><ceos>ada clojure eiffel fortran groovy java "
"~kotlin ~modula ~perl ~python ~rust ~sql<rst><u1><c3><u1><c9><ceos>ada "
"clojure eiffel fortran groovy ~java ~kotlin ~modula ~perl ~python ~rust "
"~sql<rst><u1><c4><u1><c9><ceos>ada clojure eiffel fortran groovy ~java "
"~kotlin ~modula ~perl ~python ~rust ~sql<rst><u2><c36><c9><ceos>ada clojure "
"eiffel fortran ~groovy ~java ~kotlin ~modula ~perl ~python ~rust "
"~sql<rst><u2><c37><c9><ceos>ada clojure eiffel fortran ~groovy ~java ~kotlin "
"~modula ~perl ~python ~rust ~sql<rst><u2><c28><c9><ceos>ada clojure eiffel "
"~fortran ~groovy ~java ~kotlin ~modula ~perl ~python ~rust "
"~sql<rst><u2><c29><c9><ceos>ada clojure eiffel ~fortran ~groovy ~java "
"~kotlin ~modula ~perl ~python ~rust ~sql<rst><u2><c21><c9><ceos>ada clojure "
"~eiffel ~fortran ~groovy ~java ~kotlin ~modula ~perl ~python ~rust "
"~sql<rst><u2><c22><c9><ceos>ada clojure ~eiffel ~fortran ~groovy ~java "
"~kotlin ~modula ~perl ~python ~rust ~sql<rst><u2><c13><c9><ceos>ada ~clojure "
"~eiffel ~fortran ~groovy ~java ~kotlin ~modula ~perl ~python ~rust "
"~sql<rst><u2><c14><c9><ceos>ada ~clojure ~eiffel ~fortran ~groovy ~java "
"~kotlin ~modula ~perl ~python ~rust ~sql<rst><u2><c9><c9><ceos>~ada ~clojure "
"~eiffel ~fortran ~groovy ~java ~kotlin ~modula ~perl ~python ~rust "
"~sql<rst><u2><c10><c9><ceos>~ada ~clojure ~eiffel ~fortran ~groovy ~java "
"~kotlin ~modula ~perl ~python ~rust ~sql<rst><u2><c9><c9><ceos>~ada ~clojure "
"~eiffel ~fortran ~groovy ~java ~kotlin ~modula ~perl ~python ~rust "
"~sql<rst><c14>\r\n"
"~ada ~clojure ~eiffel ~fortran ~groovy ~java ~kotlin ~modula ~perl ~python "
"~rust ~sql\r\n",
" ".join( _words_[::3] ) + "\n",
dimensions = ( 10, 40 )
)
def test_colors( self_ ):
self_.check_scenario(
"<up><cr><c-d>",
"<c9><ceos><black>color_black<rst> <red>color_red<rst> "
"<green>color_green<rst> <brown>color_brown<rst> <blue>color_blue<rst> "
"<magenta>color_magenta<rst> <cyan>color_cyan<rst> "
"<lightgray>color_lightgray<rst> <gray>color_gray<rst> "
"<brightred>color_brightred<rst> <brightgreen>color_brightgreen<rst> "
"<yellow>color_yellow<rst> <brightblue>color_brightblue<rst> "
"<brightmagenta>color_brightmagenta<rst> <brightcyan>color_brightcyan<rst> "
"<white>color_white<rst><green><rst><c70><u2><c9><ceos><black>color_black<rst> "
"<red>color_red<rst> <green>color_green<rst> <brown>color_brown<rst> "
"<blue>color_blue<rst> <magenta>color_magenta<rst> <cyan>color_cyan<rst> "
"<lightgray>color_lightgray<rst> <gray>color_gray<rst> "
"<brightred>color_brightred<rst> <brightgreen>color_brightgreen<rst> "
"<yellow>color_yellow<rst> <brightblue>color_brightblue<rst> "
"<brightmagenta>color_brightmagenta<rst> <brightcyan>color_brightcyan<rst> "
"<white>color_white<rst><c70>\r\n"
"color_black color_red color_green color_brown color_blue color_magenta "
"color_cyan color_lightgray color_gray color_brightred color_brightgreen "
"color_yellow color_brightblue color_brightmagenta color_brightcyan "
"color_white\r\n",
"color_black color_red color_green color_brown color_blue color_magenta color_cyan color_lightgray"
" color_gray color_brightred color_brightgreen color_yellow color_brightblue color_brightmagenta color_brightcyan color_white\n"
)
def test_word_break_characters( self_ ):
self_.check_scenario(
"<up><c-left>x<c-left><c-left>x<c-left><c-left>x<c-left><c-left>x<c-left><c-left>x<c-left><c-left>x<cr><c-d>",
"<c9><ceos>one_two three-four five_six "
"seven-eight<rst><gray><rst><c48><c9><ceos>one_two three-four five_six "
"seven-eight<rst><c43><c9><ceos>one_two three-four five_six "
"seven-xeight<rst><c44><c9><ceos>one_two three-four five_six "
"seven-xeight<rst><c43><c9><ceos>one_two three-four five_six "
"seven-xeight<rst><c37><c9><ceos>one_two three-four five_six "
"xseven-xeight<rst><c38><c9><ceos>one_two three-four five_six "
"xseven-xeight<rst><c37><c9><ceos>one_two three-four five_six "
"xseven-xeight<rst><c28><c9><ceos>one_two three-four xfive_six "
"xseven-xeight<rst><c29><c9><ceos>one_two three-four xfive_six "
"xseven-xeight<rst><c28><c9><ceos>one_two three-four xfive_six "
"xseven-xeight<rst><c23><c9><ceos>one_two three-xfour xfive_six "
"xseven-xeight<rst><c24><c9><ceos>one_two three-xfour xfive_six "
"xseven-xeight<rst><c23><c9><ceos>one_two three-xfour xfive_six "
"xseven-xeight<rst><c17><c9><ceos>one_two xthree-xfour xfive_six "
"xseven-xeight<rst><c18><c9><ceos>one_two xthree-xfour xfive_six "
"xseven-xeight<rst><c17><c9><ceos>one_two xthree-xfour xfive_six "
"xseven-xeight<rst><c9><c9><ceos>xone_two xthree-xfour xfive_six "
"xseven-xeight<rst><c10><c9><ceos>xone_two xthree-xfour xfive_six "
"xseven-xeight<rst><c54>\r\n"
"xone_two xthree-xfour xfive_six xseven-xeight\r\n",
"one_two three-four five_six seven-eight\n",
command = ReplxxTests._cSample_ + " q1 'w \t-'"
)
self_.check_scenario(
"<up><c-left>x<c-left><c-left>x<c-left><c-left>x<c-left><c-left>x<c-left><c-left>x<c-left><c-left>x<cr><c-d>",
"<c9><ceos>one_two three-four five_six "
"seven-eight<rst><gray><rst><c48><c9><ceos>one_two three-four five_six "
"seven-eight<rst><c37><c9><ceos>one_two three-four five_six "
"xseven-eight<rst><c38><c9><ceos>one_two three-four five_six "
"xseven-eight<rst><c37><c9><ceos>one_two three-four five_six "
"xseven-eight<rst><c33><c9><ceos>one_two three-four five_xsix "
"xseven-eight<rst><c34><c9><ceos>one_two three-four five_xsix "
"xseven-eight<rst><c33><c9><ceos>one_two three-four five_xsix "
"xseven-eight<rst><c28><c9><ceos>one_two three-four xfive_xsix "
"xseven-eight<rst><c29><c9><ceos>one_two three-four xfive_xsix "
"xseven-eight<rst><c28><c9><ceos>one_two three-four xfive_xsix "
"xseven-eight<rst><c17><c9><ceos>one_two xthree-four xfive_xsix "
"xseven-eight<rst><c18><c9><ceos>one_two xthree-four xfive_xsix "
"xseven-eight<rst><c17><c9><ceos>one_two xthree-four xfive_xsix "
"xseven-eight<rst><c13><c9><ceos>one_xtwo xthree-four xfive_xsix "
"xseven-eight<rst><c14><c9><ceos>one_xtwo xthree-four xfive_xsix "
"xseven-eight<rst><c13><c9><ceos>one_xtwo xthree-four xfive_xsix "
"xseven-eight<rst><c9><c9><ceos>xone_xtwo xthree-four xfive_xsix "
"xseven-eight<rst><c10><c9><ceos>xone_xtwo xthree-four xfive_xsix "
"xseven-eight<rst><c54>\r\n"
"xone_xtwo xthree-four xfive_xsix xseven-eight\r\n",
"one_two three-four five_six seven-eight\n",
command = ReplxxTests._cSample_ + " q1 'w \t_'"
)
def test_no_color( self_ ):
self_.check_scenario(
"<up> X<cr><c-d>",
"<c9><ceos>color_black color_red color_green color_brown color_blue "
"color_magenta color_cyan color_lightgray color_gray color_brightred "
"color_brightgreen color_yellow color_brightblue color_brightmagenta "
"color_brightcyan color_white<c70> X<u2><c9><ceos>color_black color_red "
"color_green color_brown color_blue color_magenta color_cyan color_lightgray "
"color_gray color_brightred color_brightgreen color_yellow color_brightblue "
"color_brightmagenta color_brightcyan color_white X<c72>\r\n"
"color_black color_red color_green color_brown color_blue color_magenta "
"color_cyan color_lightgray color_gray color_brightred color_brightgreen "
"color_yellow color_brightblue color_brightmagenta color_brightcyan "
"color_white X\r\n",
"color_black color_red color_green color_brown color_blue color_magenta color_cyan color_lightgray"
" color_gray color_brightred color_brightgreen color_yellow color_brightblue color_brightmagenta color_brightcyan color_white\n",
command = ReplxxTests._cSample_ + " q1 m1"
)
def test_no_terminal( self_ ):
res = subprocess.run( [ ReplxxTests._cSample_, "q1" ], input = b"replxx FTW!\n", stdout = subprocess.PIPE, stderr = subprocess.PIPE )
self_.assertSequenceEqual( res.stdout, b"starting...\nreplxx FTW!\n\nExiting Replxx\n" )
def parseArgs( self, func, argv ):
global verbosity
res = func( self, argv )
verbosity = self.verbosity
return res
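# The wrapper below is a small monkey-patch: unittest's TestProgram parses the
# -v/--verbosity command line option internally, so parseArgs() is wrapped to
# copy the parsed verbosity into the module-level `verbosity` global before
# the tests run.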
if __name__ == "__main__":
pa = unittest.TestProgram.parseArgs
unittest.TestProgram.parseArgs = lambda self, argv: parseArgs( self, pa, argv )
unittest.main()
| 45.481928 | 1,154 | 0.624503 | 8,575 | 52,850 | 3.76723 | 0.060175 | 0.091939 | 0.074913 | 0.015602 | 0.819682 | 0.772691 | 0.712265 | 0.676851 | 0.645121 | 0.620666 | 0 | 0.04148 | 0.133756 | 52,850 | 1,161 | 1,155 | 45.521103 | 0.663616 | 0.000341 | 0 | 0.334495 | 0 | 0.182927 | 0.761575 | 0.533466 | 0 | 0 | 0 | 0 | 0.002613 | 1 | 0.044425 | false | 0 | 0.006098 | 0.000871 | 0.058362 | 0.000871 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8a5c6b6cb2d31e52dda0c70cb620509f131596e3 | 32,858 | py | Python | astroNN/apogee/downloader.py | AbdulfattahBaalawi/astroNN | 0b970dd1a8d4d5e6d611ffa52cfd3c2ffdcb4643 | [
"MIT"
] | 1 | 2021-08-24T16:06:29.000Z | 2021-08-24T16:06:29.000Z | astroNN/apogee/downloader.py | AbdulfattahBaalawi/astroNN | 0b970dd1a8d4d5e6d611ffa52cfd3c2ffdcb4643 | [
"MIT"
] | null | null | null | astroNN/apogee/downloader.py | AbdulfattahBaalawi/astroNN | 0b970dd1a8d4d5e6d611ffa52cfd3c2ffdcb4643 | [
"MIT"
] | null | null | null | # ---------------------------------------------------------#
# astroNN.apogee.downloader: download apogee files
# ---------------------------------------------------------#
import getpass
import os
import urllib.request, urllib.error
import warnings
import numpy as np
from astroNN.apogee.apogee_shared import apogee_env, apogee_default_dr
from astroNN.shared.downloader_tools import TqdmUpTo, filehash
from astropy.io import fits
currentdir = os.getcwd()
# global var
warning_flag = False
_ALLSTAR_TEMP = {}
__apogee_credentials_username = None
__apogee_credentials_pw = None
def __apogee_credentials_downloader(url, fullfilename):
"""
Download the file at the given URL using APOGEE credentials; this function will prompt for a username and password
:param url: URL
:type url: str
:param fullfilename: Full file name including path in local system
:type fullfilename: str
:return: None
:History: 2018-Aug-31 - Written - Henry Leung (University of Toronto)
"""
passman = urllib.request.HTTPPasswordMgrWithDefaultRealm()
global __apogee_credentials_username
global __apogee_credentials_pw
if __apogee_credentials_username is None:
print("\nYou are trying to access APOGEE proprietary data...Please provide username and password...")
__apogee_credentials_username = input('Username: ')
__apogee_credentials_pw = getpass.getpass('Password: ')
passman.add_password(None, url, __apogee_credentials_username, __apogee_credentials_pw)
authhandler = urllib.request.HTTPBasicAuthHandler(passman)
opener = urllib.request.build_opener(authhandler)
urllib.request.install_opener(opener)
# Check if directory exists
if not os.path.exists(os.path.dirname(fullfilename)):
os.makedirs(os.path.dirname(fullfilename))
try:
with TqdmUpTo(unit='B', unit_scale=True, miniters=1, desc=url.split('/')[-1]) as t:
urllib.request.urlretrieve(url, fullfilename, reporthook=t.update_to)
except urllib.error.HTTPError as emsg:
if '401' in str(emsg):
__apogee_credentials_username = None
__apogee_credentials_pw = None
raise ConnectionError('Wrong username or password')
elif '404' in str(emsg):
# just print, no need to warn as it will spam the console
print(f'{url} cannot be found on server, skipped')
fullfilename = warning_flag
else:
# don't raise, so a batch downloading script keeps running even when some files are not found
warnings.warn(f"Unknown error occurred - {emsg}", RuntimeWarning)
fullfilename = warning_flag
return fullfilename
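# Note: urllib.request.install_opener() above swaps the process-wide default
# opener, so the basic-auth handler (with the password registered for that
# URL) remains active for subsequent urlretrieve/urlopen calls in the same
# process; the credentials themselves are also cached in the module globals
# so the user is only prompted once.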
def allstar(dr=None, flag=None):
"""
Download the allStar file (catalog of ASPCAP stellar parameters and abundances from combined spectra)
:param dr: APOGEE DR
:type dr: int
:param flag: 0: normal, 1: force to re-download
:type flag: int
:return: full file path and download in background if not found locally, False if cannot be found on server
:rtype: str
:History: 2017-Oct-09 - Written - Henry Leung (University of Toronto)
"""
dr = apogee_default_dr(dr=dr)
if dr == 13:
file_hash = '1718723ada3018de94e1022cd57d4d950a74f91f'
# Check if directory exists
fullfoldername = os.path.join(apogee_env(), 'dr13/apogee/spectro/redux/r6/stars/l30e/l30e.2/')
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
filename = 'allStar-l30e.2.fits'
fullfilename = os.path.join(fullfoldername, filename)
url = f'https://data.sdss.org/sas/dr13/apogee/spectro/redux/r6/stars/l30e/l30e.2/{filename}'
elif dr == 14:
file_hash = 'a7e1801924661954da792e377ad54f412219b105'
fullfoldername = os.path.join(apogee_env(), 'dr14/apogee/spectro/redux/r8/stars/l31c/l31c.2/')
# Check if directory exists
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
filename = 'allStar-l31c.2.fits'
fullfilename = os.path.join(fullfoldername, filename)
url = f'https://data.sdss.org/sas/dr14/apogee/spectro/redux/r8/stars/l31c/l31c.2/{filename}'
elif dr == 16:
file_hash = '66fe854bd000ca1c0a6b50a998877e4a3e41d184'
fullfoldername = os.path.join(apogee_env(), 'dr16/apogee/spectro/aspcap/r12/l33/')
# Check if directory exists
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
filename = 'allStar-r12-l33.fits'
fullfilename = os.path.join(fullfoldername, filename)
url = f'https://data.sdss.org/sas/dr16/apogee/spectro/aspcap/r12/l33/{filename}'
elif dr == 17:
file_hash = 'eb54df31753a45b355262d5fe0af4527b73fc29f'
fullfoldername = os.path.join(apogee_env(), 'apogeework/apogee/spectro/aspcap/r13/l33/')
# Check if directory exists
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
filename = 'allStar-r13-l33-58932beta.fits'
fullfilename = os.path.join(fullfoldername, filename)
url = f'https://data.sdss.org/sas/apogeework/apogee/spectro/aspcap/r13/l33/{filename}'
else:
raise ValueError('allstar() only supports APOGEE DR13-DR17')
# check file integrity
if os.path.isfile(fullfilename) and flag is None:
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
allstar(dr=dr, flag=1)
else:
print(fullfilename + ' was found!')
# Check if files exists
if not os.path.isfile(os.path.join(fullfoldername, filename)) or flag == 1:
with TqdmUpTo(unit='B', unit_scale=True, miniters=1, desc=url.split('/')[-1]) as t:
try:
urllib.request.urlretrieve(url, fullfilename, reporthook=t.update_to)
print(f'Downloaded DR{dr:d} allStar file catalog successfully to {fullfilename}')
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
allstar(dr=dr, flag=1)
except urllib.error.HTTPError as emsg:
if '401' in str(emsg):
fullfilename = __apogee_credentials_downloader(url, fullfilename)
elif '404' in str(emsg):
print(f'{url} cannot be found on server, skipped')
fullfilename = warning_flag
else:
print(f"Unknown error occurred - {emsg}")
fullfilename = warning_flag
return fullfilename
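# --- Illustrative usage sketch (not part of the original module) ---
# A minimal example, assuming the astroNN APOGEE data folder is configured:
# allstar() downloads the catalog once (with a sha1 integrity check) and on
# later calls simply returns the cached local path.
def _example_allstar_usage():
    path = allstar(dr=16)      # local path to allStar-r12-l33.fits
    data = fits.getdata(path)  # astropy.io.fits is imported at module level
    return data['APOGEE_ID'][:5]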
def apogee_astronn(dr=None, flag=None):
"""
Download the apogee_astroNN file (catalog of astroNN stellar parameters, abundances, distances and orbital
parameters from combined spectra)
:param dr: APOGEE DR
:type dr: int
:param flag: 0: normal, 1: force to re-download
:type flag: int
:return: full file path and download in background if not found locally, False if cannot be found on server
:rtype: str
:History: 2019-Dec-10 - Written - Henry Leung (University of Toronto)
"""
dr = apogee_default_dr(dr=dr)
if dr == 16:
fullfoldername = os.path.join(apogee_env(), 'dr16/apogee/vac/apogee-astronn/')
# Check if directory exists
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
filename = 'apogee_astroNN-DR16.fits'
fullfilename = os.path.join(fullfoldername, filename)
file_hash = '02187ef2cbe5215dc4d65df7037ecf1b8cc5853d'
url = f'https://data.sdss.org/sas/dr16/apogee/vac/apogee-astronn/{filename}'
else:
raise ValueError('apogee_astroNN() only supports APOGEE DR16')
# check file integrity
if os.path.isfile(fullfilename) and flag is None:
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
apogee_astronn(dr=dr, flag=1)
else:
print(fullfilename + ' was found!')
# Check if files exists
if not os.path.isfile(os.path.join(fullfoldername, filename)) or flag == 1:
with TqdmUpTo(unit='B', unit_scale=True, miniters=1, desc=url.split('/')[-1]) as t:
urllib.request.urlretrieve(url, fullfilename, reporthook=t.update_to)
print(f'Downloaded DR{dr:d} apogee_astroNN file catalog successfully to {fullfilename}')
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
apogee_astronn(dr=dr, flag=1)
return fullfilename
def allstar_cannon(dr=None, flag=None):
"""
Download the allStarCannon file (catalog of Cannon stellar parameters and abundances from combined spectra)
:param dr: APOGEE DR
:type dr: int
:param flag: 0: normal, 1: force to re-download
:type flag: int
:return: full file path and download in background if not found locally, False if cannot be found on server
:rtype: str
:History: 2017-Oct-24 - Written - Henry Leung (University of Toronto)
"""
dr = apogee_default_dr(dr=dr)
if dr == 14:
fullfoldername = os.path.join(apogee_env(), 'dr14/apogee/spectro/redux/r8/stars/l31c/l31c.2/cannon/')
# Check if directory exists
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
filename = 'allStarCannon-l31c.2.fits'
fullfilename = os.path.join(fullfoldername, filename)
file_hash = '64d485e95b3504df0b795ab604e21a71d5c7ae45'
url = f'https://data.sdss.org/sas/dr14/apogee/spectro/redux/r8/stars/l31c/l31c.2/cannon/{filename}'
else:
raise ValueError('allstar_cannon() only supports APOGEE DR14-DR15')
# check file integrity
if os.path.isfile(fullfilename) and flag is None:
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
allstar_cannon(dr=dr, flag=1)
else:
print(fullfilename + ' was found!')
# Check if files exists
if not os.path.isfile(os.path.join(fullfoldername, filename)) or flag == 1:
with TqdmUpTo(unit='B', unit_scale=True, miniters=1, desc=url.split('/')[-1]) as t:
urllib.request.urlretrieve(url, fullfilename, reporthook=t.update_to)
print(f'Downloaded DR{dr:d} allStarCannon file catalog successfully to {fullfilename}')
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
allstar_cannon(dr=dr, flag=1)
return fullfilename
def allvisit(dr=None, flag=None):
"""
Download the allVisit file (catalog of properties from individual visit spectra)
:param dr: APOGEE DR
:type dr: int
:param flag: 0: normal, 1: force to re-download
:type flag: int
:return: full file path and download in background if not found locally, False if cannot be found on server
:rtype: str
:History: 2017-Oct-11 - Written - Henry Leung (University of Toronto)
"""
dr = apogee_default_dr(dr=dr)
if dr == 13:
file_hash = '2a3b13ccd40a2c8aea8321be9630117922d55b51'
# Check if directory exists
fullfilepath = os.path.join(apogee_env(), 'dr13/apogee/spectro/redux/r6/')
if not os.path.exists(fullfilepath):
os.makedirs(fullfilepath)
filename = 'allVisit-l30e.2.fits'
fullfilename = os.path.join(fullfilepath, filename)
url = f'https://data.sdss.org/sas/dr13/apogee/spectro/redux/r6/{filename}'
elif dr == 14:
file_hash = 'abcecbcdc5fe8d00779738702c115633811e6bbd'
# Check if directory exists
fullfilepath = os.path.join(apogee_env(), 'dr14/apogee/spectro/redux/r8/')
if not os.path.exists(fullfilepath):
os.makedirs(fullfilepath)
filename = 'allVisit-l31c.2.fits'
fullfilename = os.path.join(fullfilepath, filename)
url = f'https://data.sdss.org/sas/dr14/apogee/spectro/redux/r8/{filename}'
elif dr == 16:
file_hash = '65befb967d8d9d6f4f87711c1fa8d0ac014b62da'
# Check if directory exists
fullfilepath = os.path.join(apogee_env(), 'dr16/apogee/spectro/aspcap/r12/l33/')
if not os.path.exists(fullfilepath):
os.makedirs(fullfilepath)
filename = 'allVisit-r12-l33.fits'
fullfilename = os.path.join(fullfilepath, filename)
url = f'https://data.sdss.org/sas/dr16/apogee/spectro/aspcap/r12/l33/{filename}'
else:
raise ValueError('allvisit() only supports APOGEE DR13-DR16')
# check file integrity
if os.path.isfile(fullfilename) and flag is None:
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
allvisit(dr=dr, flag=1)
else:
print(fullfilename + ' was found!')
elif not os.path.isfile(os.path.join(fullfilepath, filename)) or flag == 1:
with TqdmUpTo(unit='B', unit_scale=True, miniters=1, desc=url.split('/')[-1]) as t:
urllib.request.urlretrieve(url, fullfilename, reporthook=t.update_to)
print(f'Downloaded DR{dr:d} allVisit file catalog successfully to {fullfilepath}')
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
allvisit(dr=dr, flag=1)
return fullfilename
def combined_spectra(dr=None, location=None, field=None, apogee=None, telescope=None, verbose=1, flag=None):
"""
Download the required combined spectra file (a.k.a. aspcapStar)
:param dr: APOGEE DR
:type dr: int
:param location: Location ID [Optional]
:type location: int
:param field: Field [Optional]
:type field: str
:param apogee: Apogee ID
:type apogee: str
:param telescope: Telescope ID, for example 'apo25m' or 'lco25m'
:type telescope: str
:param verbose: verbose, set 0 to silent most logging
:type verbose: int
:param flag: 0: normal, 1: force to re-download
:type flag: int
:return: full file path and download in background if not found locally, False if cannot be found on server
:rtype: str
:History:
| 2017-Oct-15 - Written - Henry Leung (University of Toronto)
| 2018-Aug-31 - Updated - Henry Leung (University of Toronto)
"""
dr = apogee_default_dr(dr=dr)
# for DR16 and above, location is expected to be None because field is used instead
if (location is None and dr < 16) or (field is None and dr >= 16): # try to load info if not enough info
global _ALLSTAR_TEMP
if not str(f'dr{dr}') in _ALLSTAR_TEMP:
_ALLSTAR_TEMP[f'dr{dr}'] = fits.getdata(allstar(dr=dr))
if telescope is None:
matched_idx = [np.nonzero(_ALLSTAR_TEMP[f'dr{dr}']['APOGEE_ID'] == apogee)[0]][0]
else:
matched_idx = [np.nonzero([(_ALLSTAR_TEMP[f'dr{dr}']['APOGEE_ID'] == apogee) &
(_ALLSTAR_TEMP[f'dr{dr}']['TELESCOPE'] == telescope)])][0][1]
if len(matched_idx) == 0:
raise ValueError(f"No entry found in allstar DR{dr} met with your requirement!!")
location = _ALLSTAR_TEMP[f'dr{dr}']['LOCATION_ID'][matched_idx][0] if not location else location
field = _ALLSTAR_TEMP[f'dr{dr}']['FIELD'][matched_idx][0] if not field else field
telescope = _ALLSTAR_TEMP[f'dr{dr}']['TELESCOPE'][matched_idx][0] if not telescope else telescope
if dr == 13:
reduce_prefix = 'r6'
aspcap_code = 'l30e'
str1 = f'https://data.sdss.org/sas/dr{dr}/apogee/spectro/redux/{reduce_prefix}/stars/{aspcap_code}/{aspcap_code}.2/{location}/'
filename = f'aspcapStar-{reduce_prefix}-{aspcap_code}.2-{apogee}.fits'
hash_filename = f'stars_{aspcap_code}_{aspcap_code}.2_{location}.sha1sum'
urlstr = str1 + filename
# check folder existence
fullfoldername = os.path.join(apogee_env(),
f'dr{dr}/apogee/spectro/redux/{reduce_prefix}/stars/{aspcap_code}/{aspcap_code}.2/',
str(location))
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
fullfilename = os.path.join(fullfoldername, filename)
elif dr == 14:
reduce_prefix = 'r8'
aspcap_code = 'l31c'
str1 = f'https://data.sdss.org/sas/dr{dr}/apogee/spectro/redux/{reduce_prefix}/stars/{aspcap_code}/{aspcap_code}.2/{location}/'
filename = f'aspcapStar-{reduce_prefix}-{aspcap_code}.2-{apogee}.fits'
hash_filename = f'stars_{aspcap_code}_{aspcap_code}.2_{location}.sha1sum'
urlstr = str1 + filename
# check folder existence
fullfoldername = os.path.join(apogee_env(),
f'dr{dr}/apogee/spectro/redux/{reduce_prefix}/stars/{aspcap_code}/{aspcap_code}.2/',
str(location))
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
fullfilename = os.path.join(fullfoldername, filename)
elif dr == 16:
reduce_prefix = 'r12'
aspcap_code = 'l33'
str1 = f'https://data.sdss.org/sas/dr16/apogee/spectro/aspcap/{reduce_prefix}/{aspcap_code}/{telescope}/{field}/'
filename = f'aspcapStar-{reduce_prefix}-{apogee}.fits'
hash_filename = f'{reduce_prefix}_{reduce_prefix}_{telescope}_{field}.sha1sum'
urlstr = str1 + filename
# check folder existence
fullfoldername = os.path.join(apogee_env(),
f'dr{dr}/apogee/spectro/aspcap/{reduce_prefix}/{aspcap_code}/{telescope}',
str(f'{field}'))
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
fullfilename = os.path.join(fullfoldername, filename)
else:
raise ValueError('combined_spectra() only supports DR13-DR16')
# check hash file
full_hash_filename = os.path.join(fullfoldername, hash_filename)
if not os.path.isfile(full_hash_filename):
# return warning flag if the location_id cannot even be found
try:
urllib.request.urlopen(str1)
except urllib.error.HTTPError:
return warning_flag
urllib.request.urlretrieve(str1 + hash_filename, full_hash_filename)
hash_list = np.loadtxt(full_hash_filename, dtype='str').T
# In some rare cases the hash cannot be found, so when checking also require len(file_hash) != 0
file_hash = hash_list[0][np.argwhere(hash_list[1] == filename)]
if os.path.isfile(fullfilename) and flag is None:
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash and len(file_hash) != 0:
print('File corruption detected, astroNN is attempting to download again')
combined_spectra(dr=dr, location=location, apogee=apogee, verbose=verbose, flag=1)
if verbose == 1:
print(fullfilename + ' was found!')
elif not os.path.isfile(fullfilename) or flag == 1:
try:
urllib.request.urlretrieve(urlstr, fullfilename)
print(f'Downloaded DR{dr} combined file successfully to {fullfilename}')
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash and len(file_hash) != 0:
print('File corruption detected, astroNN is attempting to download again')
combined_spectra(dr=dr, location=location, apogee=apogee, verbose=verbose, flag=1)
except urllib.error.HTTPError as emsg:
if '401' in str(emsg):
fullfilename = __apogee_credentials_downloader(urlstr, fullfilename)
elif '404' in str(emsg):
print(f'{urlstr} cannot be found on server, skipped')
fullfilename = warning_flag
else:
print(f"Unknown error occurred - {emsg}")
fullfilename = warning_flag
return fullfilename
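# --- Illustrative usage sketch (not part of the original module) ---
# A minimal example using a hypothetical APOGEE ID: for DR16, field and
# telescope are looked up automatically from the (cached) allStar catalog
# before the aspcapStar file is fetched.
def _example_combined_spectra_usage():
    path = combined_spectra(dr=16, apogee='2M19060637+4717296')
    if path:  # warning_flag (False) is returned on a server-side 404
        return fits.getdata(path, ext=1)  # HDU1 holds the combined spectrum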
def visit_spectra(dr=None, location=None, field=None, apogee=None, telescope=None, verbose=1, flag=None,
commission=False):
"""
Download the required individual visit spectra file (a.k.a. apStar or asStar)
:param dr: APOGEE DR
:type dr: int
:param location: Location ID [Optional]
:type location: int
:param field: Field [Optional]
:type field: str
:param apogee: Apogee ID
:type apogee: str
:param telescope: Telescope ID, for example 'apo25m' or 'lco25m'
:type telescope: str
:param verbose: verbose, set 0 to silent most logging
:type verbose: int
:param flag: 0: normal, 1: force to re-download
:type flag: int
:param commission: whether the spectra is taken during commissioning
:type commission: bool
:return: full file path and download in background if not found locally, False if cannot be found on server
:rtype: str
:History:
| 2017-Nov-11 - Written - Henry Leung (University of Toronto)
| 2018-Aug-31 - Updated - Henry Leung (University of Toronto)
"""
dr = apogee_default_dr(dr=dr)
# for DR16 and above, location is expected to be None because field is used instead
if (location is None and dr < 16) or (field is None and dr >= 16): # try to load info if not enough info
global _ALLSTAR_TEMP
if not str(f'dr{dr}') in _ALLSTAR_TEMP:
_ALLSTAR_TEMP[f'dr{dr}'] = fits.getdata(allstar(dr=dr))
if telescope is None:
matched_idx = [np.nonzero(_ALLSTAR_TEMP[f'dr{dr}']['APOGEE_ID'] == apogee)[0]][0]
else:
matched_idx = [np.nonzero([(_ALLSTAR_TEMP[f'dr{dr}']['APOGEE_ID'] == apogee) &
(_ALLSTAR_TEMP[f'dr{dr}']['TELESCOPE'] == telescope)])][0][1]
if len(matched_idx) == 0:
raise ValueError(f"No entry found in allstar DR{dr} met with your requirement!!")
location = _ALLSTAR_TEMP[f'dr{dr}']['LOCATION_ID'][matched_idx][0] if not location else location
field = _ALLSTAR_TEMP[f'dr{dr}']['FIELD'][matched_idx][0] if not field else field
telescope = _ALLSTAR_TEMP[f'dr{dr}']['TELESCOPE'][matched_idx][0] if not telescope else telescope
if dr == 13:
reduce_prefix = 'r6'
str1 = f'https://data.sdss.org/sas/dr{dr}/apogee/spectro/redux/{reduce_prefix}/stars/apo25m/{location}/'
if commission:
filename = f'apStarC-{reduce_prefix}-{apogee}.fits'
else:
filename = f'apStar-{reduce_prefix}-{apogee}.fits'
urlstr = str1 + filename
hash_filename = f'{reduce_prefix}_stars_apo25m_{location}.sha1sum'
fullfoldername = os.path.join(apogee_env(), f'dr{dr}/apogee/spectro/redux/{reduce_prefix}/stars/apo25m/',
str(location))
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
elif dr == 14:
reduce_prefix = 'r8'
str1 = f'https://data.sdss.org/sas/dr{dr}/apogee/spectro/redux/{reduce_prefix}/stars/apo25m/{location}/'
if commission:
filename = f'apStarC-{reduce_prefix}-{apogee}.fits'
else:
filename = f'apStar-{reduce_prefix}-{apogee}.fits'
urlstr = str1 + filename
hash_filename = f'{reduce_prefix}_stars_apo25m_{location}.sha1sum'
fullfoldername = os.path.join(apogee_env(), f'dr{dr}/apogee/spectro/redux/{reduce_prefix}/stars/apo25m/',
str(location))
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
elif dr == 16:
reduce_prefix = 'r12'
str1 = f'https://data.sdss.org/sas/dr16/apogee/spectro/redux/{reduce_prefix}/stars/{telescope}/{field}/'
if telescope == 'lco25m':
if commission:
filename = f'asStarC-{reduce_prefix}-{apogee}.fits'
else:
filename = f'asStar-{reduce_prefix}-{apogee}.fits'
else:
if commission:
filename = f'apStarC-{reduce_prefix}-{apogee}.fits'
else:
filename = f'apStar-{reduce_prefix}-{apogee}.fits'
urlstr = str1 + filename
hash_filename = f'{reduce_prefix}_stars_{telescope}_{field}.sha1sum'
fullfoldername = os.path.join(apogee_env(),
f'dr{dr}/apogee/spectro/redux/{reduce_prefix}/stars/{telescope}/',
str(f'{field}'))
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
else:
raise ValueError('visit_spectra() only supports DR13-DR16')
# check hash file
full_hash_filename = os.path.join(fullfoldername, hash_filename)
if not os.path.isfile(full_hash_filename):
# return warning flag if the location_id cannot even be found
try:
urllib.request.urlopen(str1)
except urllib.error.HTTPError:
return warning_flag
urllib.request.urlretrieve(str1 + hash_filename, full_hash_filename)
hash_list = np.loadtxt(full_hash_filename, dtype='str').T
fullfilename = os.path.join(fullfoldername, filename)
# In some rare cases the hash cannot be found, so when checking also require len(file_hash) != 0
# visit spectra use a different filename in the checksum file, so match by substring;
# this also handles the case where the apogee_id cannot be found
hash_idx = [i for i, item in enumerate(hash_list[1]) if f'apStar-{reduce_prefix}-{apogee}' in item]
file_hash = hash_list[0][hash_idx]
if os.path.isfile(fullfilename) and flag is None:
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash and len(file_hash) != 0:
print('File corruption detected, astroNN is attempting to download again')
visit_spectra(dr=dr, location=location, apogee=apogee, verbose=verbose, flag=1)
if verbose:
print(fullfilename + ' was found!')
elif not os.path.isfile(fullfilename) or flag == 1:
try:
urllib.request.urlretrieve(urlstr, fullfilename)
print(f'Downloaded DR{dr} individual visit file successfully to {fullfilename}')
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash and len(file_hash) != 0:
print('File corruption detected, astroNN is attempting to download again')
visit_spectra(dr=dr, location=location, apogee=apogee, verbose=verbose, flag=1)
except urllib.error.HTTPError as emsg:
if '401' in str(emsg):
fullfilename = __apogee_credentials_downloader(urlstr, fullfilename)
elif '404' in str(emsg):
print(f'{urlstr} cannot be found on server, skipped')
fullfilename = warning_flag
else:
print(f"Unknown error occurred - {emsg}")
fullfilename = warning_flag
return fullfilename
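# Note: visit_spectra() mirrors combined_spectra() above but fetches the raw
# apStar/asStar files (per-star files containing the individual visit spectra)
# instead of the ASPCAP-processed aspcapStar files; commission=True selects
# the apStarC/asStarC variants taken during instrument commissioning.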
def apogee_vac_rc(dr=None, flag=None):
"""
Download the red clumps catalogue
:param dr: Apogee DR
:type dr: int
:param flag: Force to download if flag=1
:type flag: int
:return: full file path
:rtype: str
:History: 2017-Nov-16 - Written - Henry Leung (University of Toronto)
"""
dr = apogee_default_dr(dr=dr)
if dr == 13:
file_hash = '5e87eb3ba202f9db24216978dafb19d39d382fc6'
str1 = 'https://data.sdss.org/sas/dr13/apogee/vac/apogee-rc/cat/'
filename = f'apogee-rc-DR{dr}.fits'
urlstr = str1 + filename
fullfoldername = os.path.join(apogee_env(), 'dr13/apogee/vac/apogee-rc/cat/')
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
fullfilename = os.path.join(fullfoldername, filename)
elif dr == 14:
file_hash = '104513070f1c280954f3d1886cac429dbdf2eaf6'
str1 = 'https://data.sdss.org/sas/dr14/apogee/vac/apogee-rc/cat/'
filename = f'apogee-rc-DR{dr}.fits'
urlstr = str1 + filename
fullfoldername = os.path.join(apogee_env(), 'dr14/apogee/vac/apogee-rc/cat/')
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
fullfilename = os.path.join(fullfoldername, filename)
elif dr == 16:
file_hash = '0bc75a230058f50ed8a5ea3fa8554d803ffc103d'
str1 = 'https://data.sdss.org/sas/dr16/apogee/vac/apogee-rc/cat/'
filename = f'apogee-rc-DR{dr}.fits'
urlstr = str1 + filename
fullfoldername = os.path.join(apogee_env(), 'dr16/apogee/vac/apogee-rc/cat/')
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
fullfilename = os.path.join(fullfoldername, filename)
else:
raise ValueError('apogee_vac_rc() only supports DR13, DR14 and DR16')
# check file integrity
if os.path.isfile(fullfilename) and flag is None:
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
apogee_vac_rc(dr=dr, flag=1)
else:
print(fullfilename + ' was found!')
elif not os.path.isfile(fullfilename) or flag == 1:
try:
with TqdmUpTo(unit='B', unit_scale=True, miniters=1, desc=urlstr.split('/')[-1]) as t:
urllib.request.urlretrieve(urlstr, fullfilename, reporthook=t.update_to)
print(f'Downloaded DR{dr} Red Clumps Catalog successfully to {fullfilename}')
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
apogee_vac_rc(dr=dr, flag=1)
except urllib.error.HTTPError:
print(f'{urlstr} cannot be found on server, skipped')
fullfilename = warning_flag
return fullfilename
def apogee_distances(dr=None, flag=None):
"""
Download the Apogee Distances catalogue
:param dr: Apogee DR
:type dr: int
:param flag: Force to download if flag=1
:type flag: int
:return: full file path
:rtype: str
:History: 2018-Jan-24 - Written - Henry Leung (University of Toronto)
"""
dr = apogee_default_dr(dr=dr)
if dr == 14:
file_hash = 'b33c8419be784b1be3d14af3ee9696c6ac31830f'
str1 = 'https://data.sdss.org/sas/dr14/apogee/vac/apogee-distances/'
filename = f'apogee_distances-DR{dr}.fits'
urlstr = str1 + filename
fullfoldername = os.path.join(apogee_env(), 'dr14/apogee/vac/apogee-distances/')
if not os.path.exists(fullfoldername):
os.makedirs(fullfoldername)
fullfilename = os.path.join(fullfoldername, filename)
else:
raise ValueError('apogee_distances() only supports DR14')
# check file integrity
if os.path.isfile(fullfilename) and flag is None:
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
apogee_distances(dr=dr, flag=1)
else:
print(fullfilename + ' was found!')
elif not os.path.isfile(fullfilename) or flag == 1:
try:
with TqdmUpTo(unit='B', unit_scale=True, miniters=1, desc=urlstr.split('/')[-1]) as t:
urllib.request.urlretrieve(urlstr, fullfilename, reporthook=t.update_to)
print(f'Downloaded DR{dr} Distances successfully to {fullfilename}')
checksum = filehash(fullfilename, algorithm='sha1')
if checksum != file_hash.lower():
print('File corruption detected, astroNN is attempting to download again')
apogee_distances(dr=dr, flag=1)
except urllib.error.HTTPError:
print(f'{urlstr} cannot be found on server, skipped')
fullfilename = warning_flag
return fullfilename
| 43.986613 | 135 | 0.640788 | 4,026 | 32,858 | 5.139344 | 0.079483 | 0.023778 | 0.020299 | 0.013291 | 0.860036 | 0.837901 | 0.813445 | 0.800831 | 0.79073 | 0.778503 | 0 | 0.033091 | 0.247672 | 32,858 | 746 | 136 | 44.045576 | 0.803924 | 0.166778 | 0 | 0.745833 | 0 | 0.0375 | 0.260366 | 0.088403 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01875 | false | 0.014583 | 0.016667 | 0 | 0.058333 | 0.0875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8a5f924fbaf9d5fb73fe8762b394cab808a12288 | 96,289 | py | Python | clearml/backend_api/services/v2_9/models.py | arielleoren/clearml | 01f0be9895272c483129bab784a43cbd002022a7 | [
"Apache-2.0"
] | 2,097 | 2019-06-11T14:36:25.000Z | 2020-12-21T03:52:59.000Z | clearml/backend_api/services/v2_9/models.py | arielleoren/clearml | 01f0be9895272c483129bab784a43cbd002022a7 | [
"Apache-2.0"
] | 347 | 2020-12-23T22:38:48.000Z | 2022-03-31T20:01:06.000Z | clearml/backend_api/services/v2_9/models.py | arielleoren/clearml | 01f0be9895272c483129bab784a43cbd002022a7 | [
"Apache-2.0"
] | 256 | 2019-06-11T14:36:28.000Z | 2020-12-18T08:32:47.000Z | """
models service
This service provides a management interface for models (results of training tasks) stored in the system.
"""
from datetime import datetime
import six
from dateutil.parser import parse as parse_datetime
from ....backend_api.session import NonStrictDataModel, Request, Response, schema_property
class MultiFieldPatternData(NonStrictDataModel):
"""
:param pattern: Pattern string (regex)
:type pattern: str
:param fields: List of field names
:type fields: Sequence[str]
"""
_schema = {
'properties': {
'fields': {
'description': 'List of field names',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'pattern': {
'description': 'Pattern string (regex)',
'type': ['string', 'null'],
},
},
'type': 'object',
}
def __init__(
self, pattern=None, fields=None, **kwargs):
super(MultiFieldPatternData, self).__init__(**kwargs)
self.pattern = pattern
self.fields = fields
@schema_property('pattern')
def pattern(self):
return self._property_pattern
@pattern.setter
def pattern(self, value):
if value is None:
self._property_pattern = None
return
self.assert_isinstance(value, "pattern", six.string_types)
self._property_pattern = value
@schema_property('fields')
def fields(self):
return self._property_fields
@fields.setter
def fields(self, value):
if value is None:
self._property_fields = None
return
self.assert_isinstance(value, "fields", (list, tuple))
self.assert_isinstance(value, "fields", six.string_types, is_array=True)
self._property_fields = value
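# --- Illustrative usage sketch (not part of the original module) ---
# A minimal example of the validation these data models perform: every setter
# checks the assigned value against the embedded JSON schema types, so e.g. a
# non-string pattern or a non-sequence fields value raises an assertion error.
def _example_multi_field_pattern():
    p = MultiFieldPatternData(pattern=r'^resnet.*', fields=['name', 'comment'])
    p.fields = ['uri']  # validated: must be a list/tuple of strings
    return p.pattern, p.fields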
class Model(NonStrictDataModel):
"""
:param id: Model id
:type id: str
:param name: Model name
:type name: str
:param user: Associated user id
:type user: str
:param company: Company id
:type company: str
:param created: Model creation time
:type created: datetime.datetime
:param task: Task ID of task in which the model was created
:type task: str
:param parent: Parent model ID
:type parent: str
:param project: Associated project ID
:type project: str
:param comment: Model comment
:type comment: str
:param tags: User-defined tags
:type tags: Sequence[str]
:param system_tags: System tags. This field is reserved for system use, please
don't use it.
:type system_tags: Sequence[str]
:param framework: Framework on which the model is based. Should be identical to
the framework of the task which created the model
:type framework: str
:param design: Json object representing the model design. Should be identical
to the network design of the task which created the model
:type design: dict
:param labels: Json object representing the ids of the labels in the model. The
keys are the layers' names and the values are the ids.
:type labels: dict
:param uri: URI for the model, pointing to the destination storage.
:type uri: str
:param ready: Indication if the model is final and can be used by other tasks
:type ready: bool
:param ui_cache: UI cache for this model
:type ui_cache: dict
"""
_schema = {
'properties': {
'comment': {'description': 'Model comment', 'type': ['string', 'null']},
'company': {'description': 'Company id', 'type': ['string', 'null']},
'created': {
'description': 'Model creation time',
'format': 'date-time',
'type': ['string', 'null'],
},
'design': {
'additionalProperties': True,
'description': 'Json object representing the model design. Should be identical to the network design of the task which created the model',
'type': ['object', 'null'],
},
'framework': {
'description': 'Framework on which the model is based. Should be identical to the framework of the task which created the model',
'type': ['string', 'null'],
},
'id': {'description': 'Model id', 'type': ['string', 'null']},
'labels': {
'additionalProperties': {'type': 'integer'},
'description': "Json object representing the ids of the labels in the model. The keys are the layers' names and the values are the ids.",
'type': ['object', 'null'],
},
'name': {'description': 'Model name', 'type': ['string', 'null']},
'parent': {
'description': 'Parent model ID',
'type': ['string', 'null'],
},
'project': {
'description': 'Associated project ID',
'type': ['string', 'null'],
},
'ready': {
'description': 'Indication if the model is final and can be used by other tasks',
'type': ['boolean', 'null'],
},
'system_tags': {
'description': "System tags. This field is reserved for system use, please don't use it.",
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'tags': {
'description': 'User-defined tags',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'task': {
'description': 'Task ID of task in which the model was created',
'type': ['string', 'null'],
},
'ui_cache': {
'additionalProperties': True,
'description': 'UI cache for this model',
'type': ['object', 'null'],
},
'uri': {
'description': 'URI for the model, pointing to the destination storage.',
'type': ['string', 'null'],
},
'user': {
'description': 'Associated user id',
'type': ['string', 'null'],
},
},
'type': 'object',
}
def __init__(
self, id=None, name=None, user=None, company=None, created=None, task=None, parent=None, project=None, comment=None, tags=None, system_tags=None, framework=None, design=None, labels=None, uri=None, ready=None, ui_cache=None, **kwargs):
super(Model, self).__init__(**kwargs)
self.id = id
self.name = name
self.user = user
self.company = company
self.created = created
self.task = task
self.parent = parent
self.project = project
self.comment = comment
self.tags = tags
self.system_tags = system_tags
self.framework = framework
self.design = design
self.labels = labels
self.uri = uri
self.ready = ready
self.ui_cache = ui_cache
@schema_property('id')
def id(self):
return self._property_id
@id.setter
def id(self, value):
if value is None:
self._property_id = None
return
self.assert_isinstance(value, "id", six.string_types)
self._property_id = value
@schema_property('name')
def name(self):
return self._property_name
@name.setter
def name(self, value):
if value is None:
self._property_name = None
return
self.assert_isinstance(value, "name", six.string_types)
self._property_name = value
@schema_property('user')
def user(self):
return self._property_user
@user.setter
def user(self, value):
if value is None:
self._property_user = None
return
self.assert_isinstance(value, "user", six.string_types)
self._property_user = value
@schema_property('company')
def company(self):
return self._property_company
@company.setter
def company(self, value):
if value is None:
self._property_company = None
return
self.assert_isinstance(value, "company", six.string_types)
self._property_company = value
@schema_property('created')
def created(self):
return self._property_created
@created.setter
def created(self, value):
if value is None:
self._property_created = None
return
self.assert_isinstance(value, "created", six.string_types + (datetime,))
if not isinstance(value, datetime):
value = parse_datetime(value)
self._property_created = value
@schema_property('task')
def task(self):
return self._property_task
@task.setter
def task(self, value):
if value is None:
self._property_task = None
return
self.assert_isinstance(value, "task", six.string_types)
self._property_task = value
@schema_property('parent')
def parent(self):
return self._property_parent
@parent.setter
def parent(self, value):
if value is None:
self._property_parent = None
return
self.assert_isinstance(value, "parent", six.string_types)
self._property_parent = value
@schema_property('project')
def project(self):
return self._property_project
@project.setter
def project(self, value):
if value is None:
self._property_project = None
return
self.assert_isinstance(value, "project", six.string_types)
self._property_project = value
@schema_property('comment')
def comment(self):
return self._property_comment
@comment.setter
def comment(self, value):
if value is None:
self._property_comment = None
return
self.assert_isinstance(value, "comment", six.string_types)
self._property_comment = value
@schema_property('tags')
def tags(self):
return self._property_tags
@tags.setter
def tags(self, value):
if value is None:
self._property_tags = None
return
self.assert_isinstance(value, "tags", (list, tuple))
self.assert_isinstance(value, "tags", six.string_types, is_array=True)
self._property_tags = value
@schema_property('system_tags')
def system_tags(self):
return self._property_system_tags
@system_tags.setter
def system_tags(self, value):
if value is None:
self._property_system_tags = None
return
self.assert_isinstance(value, "system_tags", (list, tuple))
self.assert_isinstance(value, "system_tags", six.string_types, is_array=True)
self._property_system_tags = value
@schema_property('framework')
def framework(self):
return self._property_framework
@framework.setter
def framework(self, value):
if value is None:
self._property_framework = None
return
self.assert_isinstance(value, "framework", six.string_types)
self._property_framework = value
@schema_property('design')
def design(self):
return self._property_design
@design.setter
def design(self, value):
if value is None:
self._property_design = None
return
self.assert_isinstance(value, "design", (dict,))
self._property_design = value
@schema_property('labels')
def labels(self):
return self._property_labels
@labels.setter
def labels(self, value):
if value is None:
self._property_labels = None
return
self.assert_isinstance(value, "labels", (dict,))
self._property_labels = value
@schema_property('uri')
def uri(self):
return self._property_uri
@uri.setter
def uri(self, value):
if value is None:
self._property_uri = None
return
self.assert_isinstance(value, "uri", six.string_types)
self._property_uri = value
@schema_property('ready')
def ready(self):
return self._property_ready
@ready.setter
def ready(self, value):
if value is None:
self._property_ready = None
return
self.assert_isinstance(value, "ready", (bool,))
self._property_ready = value
@schema_property('ui_cache')
def ui_cache(self):
return self._property_ui_cache
@ui_cache.setter
def ui_cache(self, value):
if value is None:
self._property_ui_cache = None
return
self.assert_isinstance(value, "ui_cache", (dict,))
self._property_ui_cache = value
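# Example (illustrative sketch): Model instances are typically built from API
# payloads via `from_dict` (as done in GetAllResponse.models below). String
# timestamps are converted by the `created` setter using parse_datetime.
#
#     model = Model.from_dict({
#         "id": "a1b2c3",
#         "name": "example model",
#         "created": "2020-01-01T00:00:00",   # parsed into a datetime
#         "ready": True,
#     })
#     assert isinstance(model.created, datetime)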
class CreateRequest(Request):
"""
Create a new model not associated with a task
:param uri: URI for the model
:type uri: str
:param name: Model name. Unique within the company.
:type name: str
:param comment: Model comment
:type comment: str
:param tags: User-defined tags list
:type tags: Sequence[str]
:param system_tags: System tags list. This field is reserved for system use,
please don't use it.
:type system_tags: Sequence[str]
:param framework: Framework on which the model is based. Case insensitive.
Should be identical to the framework of the task which created the model.
:type framework: str
:param design: Json object representing the model design. Should be
identical to the network design of the task which created the model
:type design: dict
:param labels: Json object representing the ids of the labels in the model.
The keys are the layers' names and the values are the ids.
:type labels: dict
:param ready: Indication if the model is final and can be used by other tasks.
Default is false.
:type ready: bool
:param public: Create a public model. Default is false.
:type public: bool
:param project: Project to which the model belongs
:type project: str
:param parent: Parent model
:type parent: str
:param task: Associated task ID
:type task: str
"""
_service = "models"
_action = "create"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'comment': {'description': 'Model comment', 'type': 'string'},
'design': {
'additionalProperties': True,
'description': 'Json object representing the model design. Should be identical to the network design of the task which created the model',
'type': 'object',
},
'framework': {
'description': 'Framework on which the model is based. Case insensitive. Should be identical to the framework of the task which created the model.',
'type': 'string',
},
'labels': {
'additionalProperties': {'type': 'integer'},
'description': 'Json object',
'type': 'object',
},
'name': {
'description': 'Model name. Unique within the company.',
'type': 'string',
},
'parent': {'description': 'Parent model', 'type': 'string'},
'project': {
'description': 'Project to which the model belongs',
'type': 'string',
},
'public': {
'default': False,
'description': 'Create a public model. Default is false.',
'type': 'boolean',
},
'ready': {
'default': False,
'description': 'Indication if the model is final and can be used by other tasks. Default is false.',
'type': 'boolean',
},
'system_tags': {
'description': "System tags list. This field is reserved for system use, please don't use it.",
'items': {'type': 'string'},
'type': 'array',
},
'tags': {
'description': 'User-defined tags list',
'items': {'type': 'string'},
'type': 'array',
},
'task': {'description': 'Associated task ID', 'type': 'string'},
'uri': {'description': 'URI for the model', 'type': 'string'},
},
'required': ['uri', 'name'],
'type': 'object',
}
def __init__(
self, uri, name, comment=None, tags=None, system_tags=None, framework=None, design=None, labels=None, ready=False, public=False, project=None, parent=None, task=None, **kwargs):
super(CreateRequest, self).__init__(**kwargs)
self.uri = uri
self.name = name
self.comment = comment
self.tags = tags
self.system_tags = system_tags
self.framework = framework
self.design = design
self.labels = labels
self.ready = ready
self.public = public
self.project = project
self.parent = parent
self.task = task
@schema_property('uri')
def uri(self):
return self._property_uri
@uri.setter
def uri(self, value):
if value is None:
self._property_uri = None
return
self.assert_isinstance(value, "uri", six.string_types)
self._property_uri = value
@schema_property('name')
def name(self):
return self._property_name
@name.setter
def name(self, value):
if value is None:
self._property_name = None
return
self.assert_isinstance(value, "name", six.string_types)
self._property_name = value
@schema_property('comment')
def comment(self):
return self._property_comment
@comment.setter
def comment(self, value):
if value is None:
self._property_comment = None
return
self.assert_isinstance(value, "comment", six.string_types)
self._property_comment = value
@schema_property('tags')
def tags(self):
return self._property_tags
@tags.setter
def tags(self, value):
if value is None:
self._property_tags = None
return
self.assert_isinstance(value, "tags", (list, tuple))
self.assert_isinstance(value, "tags", six.string_types, is_array=True)
self._property_tags = value
@schema_property('system_tags')
def system_tags(self):
return self._property_system_tags
@system_tags.setter
def system_tags(self, value):
if value is None:
self._property_system_tags = None
return
self.assert_isinstance(value, "system_tags", (list, tuple))
self.assert_isinstance(value, "system_tags", six.string_types, is_array=True)
self._property_system_tags = value
@schema_property('framework')
def framework(self):
return self._property_framework
@framework.setter
def framework(self, value):
if value is None:
self._property_framework = None
return
self.assert_isinstance(value, "framework", six.string_types)
self._property_framework = value
@schema_property('design')
def design(self):
return self._property_design
@design.setter
def design(self, value):
if value is None:
self._property_design = None
return
self.assert_isinstance(value, "design", (dict,))
self._property_design = value
@schema_property('labels')
def labels(self):
return self._property_labels
@labels.setter
def labels(self, value):
if value is None:
self._property_labels = None
return
self.assert_isinstance(value, "labels", (dict,))
self._property_labels = value
@schema_property('ready')
def ready(self):
return self._property_ready
@ready.setter
def ready(self, value):
if value is None:
self._property_ready = None
return
self.assert_isinstance(value, "ready", (bool,))
self._property_ready = value
@schema_property('public')
def public(self):
return self._property_public
@public.setter
def public(self, value):
if value is None:
self._property_public = None
return
self.assert_isinstance(value, "public", (bool,))
self._property_public = value
@schema_property('project')
def project(self):
return self._property_project
@project.setter
def project(self, value):
if value is None:
self._property_project = None
return
self.assert_isinstance(value, "project", six.string_types)
self._property_project = value
@schema_property('parent')
def parent(self):
return self._property_parent
@parent.setter
def parent(self, value):
if value is None:
self._property_parent = None
return
self.assert_isinstance(value, "parent", six.string_types)
self._property_parent = value
@schema_property('task')
def task(self):
return self._property_task
@task.setter
def task(self, value):
if value is None:
self._property_task = None
return
self.assert_isinstance(value, "task", six.string_types)
self._property_task = value
class CreateResponse(Response):
"""
Response of models.create endpoint.
:param id: ID of the model
:type id: str
:param created: Was the model created
:type created: bool
"""
_service = "models"
_action = "create"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'created': {
'description': 'Was the model created',
'type': ['boolean', 'null'],
},
'id': {'description': 'ID of the model', 'type': ['string', 'null']},
},
'type': 'object',
}
def __init__(
self, id=None, created=None, **kwargs):
super(CreateResponse, self).__init__(**kwargs)
self.id = id
self.created = created
@schema_property('id')
def id(self):
return self._property_id
@id.setter
def id(self, value):
if value is None:
self._property_id = None
return
self.assert_isinstance(value, "id", six.string_types)
self._property_id = value
@schema_property('created')
def created(self):
return self._property_created
@created.setter
def created(self, value):
if value is None:
self._property_created = None
return
self.assert_isinstance(value, "created", (bool,))
self._property_created = value
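# Example (illustrative sketch): creating a model. Only `uri` and `name` are
# required by the schema. `session` is an assumed Session-like object whose
# `send()` returns the matching Response; it is not defined in this module.
#
#     req = CreateRequest(
#         uri="s3://bucket/models/model.pkl",
#         name="example model",
#         framework="pytorch",
#         labels={"background": 0, "object": 1},
#     )
#     resp = session.send(req)           # -> CreateResponse
#     print(resp.id, resp.created)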
class DeleteRequest(Request):
"""
Delete a model.
:param model: Model ID
:type model: str
:param force: Force. Required if there are tasks that use the model as an
execution model, or if the model's creating task is published.
:type force: bool
"""
_service = "models"
_action = "delete"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'force': {
'description': "Force. Required if there are tasks that use the model as an execution model, or if the model's creating task is published.",
'type': 'boolean',
},
'model': {'description': 'Model ID', 'type': 'string'},
},
'required': ['model'],
'type': 'object',
}
def __init__(
self, model, force=None, **kwargs):
super(DeleteRequest, self).__init__(**kwargs)
self.model = model
self.force = force
@schema_property('model')
def model(self):
return self._property_model
@model.setter
def model(self, value):
if value is None:
self._property_model = None
return
self.assert_isinstance(value, "model", six.string_types)
self._property_model = value
@schema_property('force')
def force(self):
return self._property_force
@force.setter
def force(self, value):
if value is None:
self._property_force = None
return
self.assert_isinstance(value, "force", (bool,))
self._property_force = value
class DeleteResponse(Response):
"""
Response of models.delete endpoint.
:param deleted: Indicates whether the model was deleted
:type deleted: bool
"""
_service = "models"
_action = "delete"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'deleted': {
'description': 'Indicates whether the model was deleted',
'type': ['boolean', 'null'],
},
},
'type': 'object',
}
def __init__(
self, deleted=None, **kwargs):
super(DeleteResponse, self).__init__(**kwargs)
self.deleted = deleted
@schema_property('deleted')
def deleted(self):
return self._property_deleted
@deleted.setter
def deleted(self, value):
if value is None:
self._property_deleted = None
return
self.assert_isinstance(value, "deleted", (bool,))
self._property_deleted = value
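# Example (illustrative sketch): deleting a model. `force=True` is required
# when tasks use the model as an execution model or when its creating task is
# published. `session` is an assumed Session-like dispatcher (see above).
#
#     resp = session.send(DeleteRequest(model="a1b2c3", force=True))
#     if resp.deleted:                   # -> DeleteResponse
#         print("model removed")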
class EditRequest(Request):
"""
Edit an existing model
:param model: Model ID
:type model: str
:param uri: URI for the model
:type uri: str
:param name: Model name. Unique within the company.
:type name: str
:param comment: Model comment
:type comment: str
:param tags: User-defined tags list
:type tags: Sequence[str]
:param system_tags: System tags list. This field is reserved for system use,
please don't use it.
:type system_tags: Sequence[str]
:param framework: Framework on which the model is based. Case insensitive.
Should be identical to the framework of the task which created the model.
:type framework: str
:param design: Json object representing the model design. Should be
identical to the network design of the task which created the model
:type design: dict
:param labels: Json object representing the ids of the labels in the model.
The keys are the layers' names and the values are the ids.
:type labels: dict
:param ready: Indication if the model is final and can be used by other tasks
:type ready: bool
:param project: Project to which the model belongs
:type project: str
:param parent: Parent model
:type parent: str
:param task: Associated task ID
:type task: str
:param iteration: Iteration (used to update task statistics)
:type iteration: int
"""
_service = "models"
_action = "edit"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'comment': {'description': 'Model comment', 'type': 'string'},
'design': {
'additionalProperties': True,
'description': 'Json object representing the model design. Should be identical to the network design of the task which created the model',
'type': 'object',
},
'framework': {
'description': 'Framework on which the model is based. Case insensitive. Should be identical to the framework of the task which created the model.',
'type': 'string',
},
'iteration': {
'description': 'Iteration (used to update task statistics)',
'type': 'integer',
},
'labels': {
'additionalProperties': {'type': 'integer'},
'description': 'Json object',
'type': 'object',
},
'model': {'description': 'Model ID', 'type': 'string'},
'name': {
'description': 'Model name. Unique within the company.',
'type': 'string',
},
'parent': {'description': 'Parent model', 'type': 'string'},
'project': {
'description': 'Project to which the model belongs',
'type': 'string',
},
'ready': {
'description': 'Indication if the model is final and can be used by other tasks',
'type': 'boolean',
},
'system_tags': {
'description': "System tags list. This field is reserved for system use, please don't use it.",
'items': {'type': 'string'},
'type': 'array',
},
'tags': {
'description': 'User-defined tags list',
'items': {'type': 'string'},
'type': 'array',
},
'task': {'description': 'Associated task ID', 'type': 'string'},
'uri': {'description': 'URI for the model', 'type': 'string'},
},
'required': ['model'],
'type': 'object',
}
def __init__(
self, model, uri=None, name=None, comment=None, tags=None, system_tags=None, framework=None, design=None, labels=None, ready=None, project=None, parent=None, task=None, iteration=None, **kwargs):
super(EditRequest, self).__init__(**kwargs)
self.model = model
self.uri = uri
self.name = name
self.comment = comment
self.tags = tags
self.system_tags = system_tags
self.framework = framework
self.design = design
self.labels = labels
self.ready = ready
self.project = project
self.parent = parent
self.task = task
self.iteration = iteration
@schema_property('model')
def model(self):
return self._property_model
@model.setter
def model(self, value):
if value is None:
self._property_model = None
return
self.assert_isinstance(value, "model", six.string_types)
self._property_model = value
@schema_property('uri')
def uri(self):
return self._property_uri
@uri.setter
def uri(self, value):
if value is None:
self._property_uri = None
return
self.assert_isinstance(value, "uri", six.string_types)
self._property_uri = value
@schema_property('name')
def name(self):
return self._property_name
@name.setter
def name(self, value):
if value is None:
self._property_name = None
return
self.assert_isinstance(value, "name", six.string_types)
self._property_name = value
@schema_property('comment')
def comment(self):
return self._property_comment
@comment.setter
def comment(self, value):
if value is None:
self._property_comment = None
return
self.assert_isinstance(value, "comment", six.string_types)
self._property_comment = value
@schema_property('tags')
def tags(self):
return self._property_tags
@tags.setter
def tags(self, value):
if value is None:
self._property_tags = None
return
self.assert_isinstance(value, "tags", (list, tuple))
self.assert_isinstance(value, "tags", six.string_types, is_array=True)
self._property_tags = value
@schema_property('system_tags')
def system_tags(self):
return self._property_system_tags
@system_tags.setter
def system_tags(self, value):
if value is None:
self._property_system_tags = None
return
self.assert_isinstance(value, "system_tags", (list, tuple))
self.assert_isinstance(value, "system_tags", six.string_types, is_array=True)
self._property_system_tags = value
@schema_property('framework')
def framework(self):
return self._property_framework
@framework.setter
def framework(self, value):
if value is None:
self._property_framework = None
return
self.assert_isinstance(value, "framework", six.string_types)
self._property_framework = value
@schema_property('design')
def design(self):
return self._property_design
@design.setter
def design(self, value):
if value is None:
self._property_design = None
return
self.assert_isinstance(value, "design", (dict,))
self._property_design = value
@schema_property('labels')
def labels(self):
return self._property_labels
@labels.setter
def labels(self, value):
if value is None:
self._property_labels = None
return
self.assert_isinstance(value, "labels", (dict,))
self._property_labels = value
@schema_property('ready')
def ready(self):
return self._property_ready
@ready.setter
def ready(self, value):
if value is None:
self._property_ready = None
return
self.assert_isinstance(value, "ready", (bool,))
self._property_ready = value
@schema_property('project')
def project(self):
return self._property_project
@project.setter
def project(self, value):
if value is None:
self._property_project = None
return
self.assert_isinstance(value, "project", six.string_types)
self._property_project = value
@schema_property('parent')
def parent(self):
return self._property_parent
@parent.setter
def parent(self, value):
if value is None:
self._property_parent = None
return
self.assert_isinstance(value, "parent", six.string_types)
self._property_parent = value
@schema_property('task')
def task(self):
return self._property_task
@task.setter
def task(self, value):
if value is None:
self._property_task = None
return
self.assert_isinstance(value, "task", six.string_types)
self._property_task = value
@schema_property('iteration')
def iteration(self):
return self._property_iteration
@iteration.setter
def iteration(self, value):
if value is None:
self._property_iteration = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "iteration", six.integer_types)
self._property_iteration = value
class EditResponse(Response):
"""
Response of models.edit endpoint.
:param updated: Number of models updated (0 or 1)
:type updated: int
:param fields: Updated fields names and values
:type fields: dict
"""
_service = "models"
_action = "edit"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'fields': {
'additionalProperties': True,
'description': 'Updated fields names and values',
'type': ['object', 'null'],
},
'updated': {
'description': 'Number of models updated (0 or 1)',
'enum': [0, 1],
'type': ['integer', 'null'],
},
},
'type': 'object',
}
def __init__(
self, updated=None, fields=None, **kwargs):
super(EditResponse, self).__init__(**kwargs)
self.updated = updated
self.fields = fields
@schema_property('updated')
def updated(self):
return self._property_updated
@updated.setter
def updated(self, value):
if value is None:
self._property_updated = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "updated", six.integer_types)
self._property_updated = value
@schema_property('fields')
def fields(self):
return self._property_fields
@fields.setter
def fields(self, value):
if value is None:
self._property_fields = None
return
self.assert_isinstance(value, "fields", (dict,))
self._property_fields = value
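# Example (illustrative sketch): editing an existing model. Only `model` is
# required; every other supplied field is updated. `session` is an assumed
# Session-like dispatcher (see above).
#
#     req = EditRequest(
#         model="a1b2c3",
#         name="renamed model",
#         tags=["baseline"],
#         iteration=1000,                # also updates task statistics
#     )
#     resp = session.send(req)           # -> EditResponse
#     print(resp.updated, resp.fields)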
class GetAllRequest(Request):
"""
Get all models
:param name: Get only models whose name matches this pattern (python regular
expression syntax)
:type name: str
:param user: List of user IDs used to filter results by the model's creating
user
:type user: Sequence[str]
:param ready: Indication whether to retrieve only models that are marked ready.
If not supplied, returns both ready and not-ready models.
:type ready: bool
:param tags: User-defined tags list used to filter results. Prepend '-' to tag
name to indicate exclusion
:type tags: Sequence[str]
:param system_tags: System tags list used to filter results. Prepend '-' to
system tag name to indicate exclusion
:type system_tags: Sequence[str]
:param only_fields: List of model field names (if applicable, nesting is
supported using '.'). If provided, this list defines the query's projection
(only these fields will be returned for each result entry)
:type only_fields: Sequence[str]
:param page: Page number, returns a specific page out of the resulting list of
models
:type page: int
:param page_size: Page size, specifies the number of results returned in each
page (last page may contain fewer results)
:type page_size: int
:param project: List of associated project IDs
:type project: Sequence[str]
:param order_by: List of field names to order by. When search_text is used,
'@text_score' can be used as a field representing the text score of returned
documents. Use '-' prefix to specify descending order. Optional, recommended
when using page
:type order_by: Sequence[str]
:param task: List of associated task IDs
:type task: Sequence[str]
:param id: List of model IDs
:type id: Sequence[str]
:param search_text: Free text search query
:type search_text: str
:param framework: List of frameworks
:type framework: Sequence[str]
:param uri: List of model URIs
:type uri: Sequence[str]
:param _all_: Multi-field pattern condition (all fields match pattern)
:type _all_: MultiFieldPatternData
:param _any_: Multi-field pattern condition (any field matches pattern)
:type _any_: MultiFieldPatternData
"""
_service = "models"
_action = "get_all"
_version = "2.9"
_schema = {
'definitions': {
'multi_field_pattern_data': {
'properties': {
'fields': {
'description': 'List of field names',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'pattern': {
'description': 'Pattern string (regex)',
'type': ['string', 'null'],
},
},
'type': 'object',
},
},
'dependencies': {'page': ['page_size']},
'properties': {
'_all_': {
'description': 'Multi-field pattern condition (all fields match pattern)',
'oneOf': [
{'$ref': '#/definitions/multi_field_pattern_data'},
{'type': 'null'},
],
},
'_any_': {
'description': 'Multi-field pattern condition (any field matches pattern)',
'oneOf': [
{'$ref': '#/definitions/multi_field_pattern_data'},
{'type': 'null'},
],
},
'framework': {
'description': 'List of frameworks',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'id': {
'description': 'List of model IDs',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'name': {
'description': 'Get only models whose name matches this pattern (python regular expression syntax)',
'type': ['string', 'null'],
},
'only_fields': {
'description': "List of model field names (if applicable, nesting is supported using '.'). If provided, this list defines the query's projection (only these fields will be returned for each result entry)",
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'order_by': {
'description': "List of field names to order by. When search_text is used, '@text_score' can be used as a field representing the text score of returned documents. Use '-' prefix to specify descending order. Optional, recommended when using page",
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'page': {
'description': 'Page number, returns a specific page out of the resulting list of models',
'minimum': 0,
'type': ['integer', 'null'],
},
'page_size': {
'description': 'Page size, specifies the number of results returned in each page (last page may contain fewer results)',
'minimum': 1,
'type': ['integer', 'null'],
},
'project': {
'description': 'List of associated project IDs',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'ready': {
'description': 'Indication whether to retrieve only models that are marked ready. If not supplied, returns both ready and not-ready models.',
'type': ['boolean', 'null'],
},
'search_text': {
'description': 'Free text search query',
'type': ['string', 'null'],
},
'system_tags': {
'description': "System tags list used to filter results. Prepend '-' to system tag name to indicate exclusion",
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'tags': {
'description': "User-defined tags list used to filter results. Prepend '-' to tag name to indicate exclusion",
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'task': {
'description': 'List of associated task IDs',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'uri': {
'description': 'List of model URIs',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'user': {
'description': "List of user IDs used to filter results by the model's creating user",
'items': {'type': 'string'},
'type': ['array', 'null'],
},
},
'type': 'object',
}
def __init__(
self, name=None, user=None, ready=None, tags=None, system_tags=None, only_fields=None, page=None, page_size=None, project=None, order_by=None, task=None, id=None, search_text=None, framework=None, uri=None, _all_=None, _any_=None, **kwargs):
super(GetAllRequest, self).__init__(**kwargs)
self.name = name
self.user = user
self.ready = ready
self.tags = tags
self.system_tags = system_tags
self.only_fields = only_fields
self.page = page
self.page_size = page_size
self.project = project
self.order_by = order_by
self.task = task
self.id = id
self.search_text = search_text
self.framework = framework
self.uri = uri
self._all_ = _all_
self._any_ = _any_
@schema_property('name')
def name(self):
return self._property_name
@name.setter
def name(self, value):
if value is None:
self._property_name = None
return
self.assert_isinstance(value, "name", six.string_types)
self._property_name = value
@schema_property('user')
def user(self):
return self._property_user
@user.setter
def user(self, value):
if value is None:
self._property_user = None
return
self.assert_isinstance(value, "user", (list, tuple))
self.assert_isinstance(value, "user", six.string_types, is_array=True)
self._property_user = value
@schema_property('ready')
def ready(self):
return self._property_ready
@ready.setter
def ready(self, value):
if value is None:
self._property_ready = None
return
self.assert_isinstance(value, "ready", (bool,))
self._property_ready = value
@schema_property('tags')
def tags(self):
return self._property_tags
@tags.setter
def tags(self, value):
if value is None:
self._property_tags = None
return
self.assert_isinstance(value, "tags", (list, tuple))
self.assert_isinstance(value, "tags", six.string_types, is_array=True)
self._property_tags = value
@schema_property('system_tags')
def system_tags(self):
return self._property_system_tags
@system_tags.setter
def system_tags(self, value):
if value is None:
self._property_system_tags = None
return
self.assert_isinstance(value, "system_tags", (list, tuple))
self.assert_isinstance(value, "system_tags", six.string_types, is_array=True)
self._property_system_tags = value
@schema_property('only_fields')
def only_fields(self):
return self._property_only_fields
@only_fields.setter
def only_fields(self, value):
if value is None:
self._property_only_fields = None
return
self.assert_isinstance(value, "only_fields", (list, tuple))
self.assert_isinstance(value, "only_fields", six.string_types, is_array=True)
self._property_only_fields = value
@schema_property('page')
def page(self):
return self._property_page
@page.setter
def page(self, value):
if value is None:
self._property_page = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "page", six.integer_types)
self._property_page = value
@schema_property('page_size')
def page_size(self):
return self._property_page_size
@page_size.setter
def page_size(self, value):
if value is None:
self._property_page_size = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "page_size", six.integer_types)
self._property_page_size = value
@schema_property('project')
def project(self):
return self._property_project
@project.setter
def project(self, value):
if value is None:
self._property_project = None
return
self.assert_isinstance(value, "project", (list, tuple))
self.assert_isinstance(value, "project", six.string_types, is_array=True)
self._property_project = value
@schema_property('order_by')
def order_by(self):
return self._property_order_by
@order_by.setter
def order_by(self, value):
if value is None:
self._property_order_by = None
return
self.assert_isinstance(value, "order_by", (list, tuple))
self.assert_isinstance(value, "order_by", six.string_types, is_array=True)
self._property_order_by = value
@schema_property('task')
def task(self):
return self._property_task
@task.setter
def task(self, value):
if value is None:
self._property_task = None
return
self.assert_isinstance(value, "task", (list, tuple))
self.assert_isinstance(value, "task", six.string_types, is_array=True)
self._property_task = value
@schema_property('id')
def id(self):
return self._property_id
@id.setter
def id(self, value):
if value is None:
self._property_id = None
return
self.assert_isinstance(value, "id", (list, tuple))
self.assert_isinstance(value, "id", six.string_types, is_array=True)
self._property_id = value
@schema_property('search_text')
def search_text(self):
return self._property_search_text
@search_text.setter
def search_text(self, value):
if value is None:
self._property_search_text = None
return
self.assert_isinstance(value, "search_text", six.string_types)
self._property_search_text = value
@schema_property('framework')
def framework(self):
return self._property_framework
@framework.setter
def framework(self, value):
if value is None:
self._property_framework = None
return
self.assert_isinstance(value, "framework", (list, tuple))
self.assert_isinstance(value, "framework", six.string_types, is_array=True)
self._property_framework = value
@schema_property('uri')
def uri(self):
return self._property_uri
@uri.setter
def uri(self, value):
if value is None:
self._property_uri = None
return
self.assert_isinstance(value, "uri", (list, tuple))
self.assert_isinstance(value, "uri", six.string_types, is_array=True)
self._property_uri = value
@schema_property('_all_')
def _all_(self):
return self._property__all_
@_all_.setter
def _all_(self, value):
if value is None:
self._property__all_ = None
return
if isinstance(value, dict):
value = MultiFieldPatternData.from_dict(value)
else:
self.assert_isinstance(value, "_all_", MultiFieldPatternData)
self._property__all_ = value
@schema_property('_any_')
def _any_(self):
return self._property__any_
@_any_.setter
def _any_(self, value):
if value is None:
self._property__any_ = None
return
if isinstance(value, dict):
value = MultiFieldPatternData.from_dict(value)
else:
self.assert_isinstance(value, "_any_", MultiFieldPatternData)
self._property__any_ = value
class GetAllResponse(Response):
"""
Response of models.get_all endpoint.
:param models: Models list
:type models: Sequence[Model]
"""
_service = "models"
_action = "get_all"
_version = "2.9"
_schema = {
'definitions': {
'model': {
'properties': {
'comment': {
'description': 'Model comment',
'type': ['string', 'null'],
},
'company': {
'description': 'Company id',
'type': ['string', 'null'],
},
'created': {
'description': 'Model creation time',
'format': 'date-time',
'type': ['string', 'null'],
},
'design': {
'additionalProperties': True,
'description': 'Json object representing the model design. Should be identical to the network design of the task which created the model',
'type': ['object', 'null'],
},
'framework': {
'description': 'Framework on which the model is based. Should be identical to the framework of the task which created the model',
'type': ['string', 'null'],
},
'id': {'description': 'Model id', 'type': ['string', 'null']},
'labels': {
'additionalProperties': {'type': 'integer'},
'description': "Json object representing the ids of the labels in the model. The keys are the layers' names and the values are the ids.",
'type': ['object', 'null'],
},
'name': {
'description': 'Model name',
'type': ['string', 'null'],
},
'parent': {
'description': 'Parent model ID',
'type': ['string', 'null'],
},
'project': {
'description': 'Associated project ID',
'type': ['string', 'null'],
},
'ready': {
'description': 'Indication if the model is final and can be used by other tasks',
'type': ['boolean', 'null'],
},
'system_tags': {
'description': "System tags. This field is reserved for system use, please don't use it.",
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'tags': {
'description': 'User-defined tags',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'task': {
'description': 'Task ID of task in which the model was created',
'type': ['string', 'null'],
},
'ui_cache': {
'additionalProperties': True,
'description': 'UI cache for this model',
'type': ['object', 'null'],
},
'uri': {
'description': 'URI for the model, pointing to the destination storage.',
'type': ['string', 'null'],
},
'user': {
'description': 'Associated user id',
'type': ['string', 'null'],
},
},
'type': 'object',
},
},
'properties': {
'models': {
'description': 'Models list',
'items': {'$ref': '#/definitions/model'},
'type': ['array', 'null'],
},
},
'type': 'object',
}
def __init__(
self, models=None, **kwargs):
super(GetAllResponse, self).__init__(**kwargs)
self.models = models
@schema_property('models')
def models(self):
return self._property_models
@models.setter
def models(self, value):
if value is None:
self._property_models = None
return
self.assert_isinstance(value, "models", (list, tuple))
if any(isinstance(v, dict) for v in value):
value = [Model.from_dict(v) if isinstance(v, dict) else v for v in value]
else:
self.assert_isinstance(value, "models", Model, is_array=True)
self._property_models = value
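# Example (illustrative sketch): querying models. Note the schema dependency:
# supplying `page` requires `page_size`. `_any_` matches models where any of
# the listed fields matches the pattern. `session` is an assumed Session-like
# dispatcher (see above).
#
#     req = GetAllRequest(
#         ready=True,
#         page=0,
#         page_size=50,
#         order_by=["-created"],         # '-' prefix = descending order
#         _any_=MultiFieldPatternData(pattern=r"resnet",
#                                     fields=["name", "comment"]),
#     )
#     resp = session.send(req)           # -> GetAllResponse
#     for model in resp.models or []:
#         print(model.id, model.name)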
class GetByIdRequest(Request):
"""
Gets model information
:param model: Model id
:type model: str
"""
_service = "models"
_action = "get_by_id"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {'model': {'description': 'Model id', 'type': 'string'}},
'required': ['model'],
'type': 'object',
}
def __init__(
self, model, **kwargs):
super(GetByIdRequest, self).__init__(**kwargs)
self.model = model
@schema_property('model')
def model(self):
return self._property_model
@model.setter
def model(self, value):
if value is None:
self._property_model = None
return
self.assert_isinstance(value, "model", six.string_types)
self._property_model = value
class GetByIdResponse(Response):
"""
Response of models.get_by_id endpoint.
:param model: Model info
:type model: Model
"""
_service = "models"
_action = "get_by_id"
_version = "2.9"
_schema = {
'definitions': {
'model': {
'properties': {
'comment': {
'description': 'Model comment',
'type': ['string', 'null'],
},
'company': {
'description': 'Company id',
'type': ['string', 'null'],
},
'created': {
'description': 'Model creation time',
'format': 'date-time',
'type': ['string', 'null'],
},
'design': {
'additionalProperties': True,
'description': 'Json object representing the model design. Should be identical to the network design of the task which created the model',
'type': ['object', 'null'],
},
'framework': {
'description': 'Framework on which the model is based. Should be identical to the framework of the task which created the model',
'type': ['string', 'null'],
},
'id': {'description': 'Model id', 'type': ['string', 'null']},
'labels': {
'additionalProperties': {'type': 'integer'},
'description': "Json object representing the ids of the labels in the model. The keys are the layers' names and the values are the ids.",
'type': ['object', 'null'],
},
'name': {
'description': 'Model name',
'type': ['string', 'null'],
},
'parent': {
'description': 'Parent model ID',
'type': ['string', 'null'],
},
'project': {
'description': 'Associated project ID',
'type': ['string', 'null'],
},
'ready': {
'description': 'Indication if the model is final and can be used by other tasks',
'type': ['boolean', 'null'],
},
'system_tags': {
'description': "System tags. This field is reserved for system use, please don't use it.",
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'tags': {
'description': 'User-defined tags',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'task': {
'description': 'Task ID of task in which the model was created',
'type': ['string', 'null'],
},
'ui_cache': {
'additionalProperties': True,
'description': 'UI cache for this model',
'type': ['object', 'null'],
},
'uri': {
'description': 'URI for the model, pointing to the destination storage.',
'type': ['string', 'null'],
},
'user': {
'description': 'Associated user id',
'type': ['string', 'null'],
},
},
'type': 'object',
},
},
'properties': {
'model': {
'description': 'Model info',
'oneOf': [{'$ref': '#/definitions/model'}, {'type': 'null'}],
},
},
'type': 'object',
}
def __init__(
self, model=None, **kwargs):
super(GetByIdResponse, self).__init__(**kwargs)
self.model = model
@schema_property('model')
def model(self):
return self._property_model
@model.setter
def model(self, value):
if value is None:
self._property_model = None
return
if isinstance(value, dict):
value = Model.from_dict(value)
else:
self.assert_isinstance(value, "model", Model)
self._property_model = value
class GetByTaskIdRequest(Request):
"""
Gets model information
:param task: Task id
:type task: str
"""
_service = "models"
_action = "get_by_task_id"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'task': {'description': 'Task id', 'type': ['string', 'null']},
},
'type': 'object',
}
def __init__(
self, task=None, **kwargs):
super(GetByTaskIdRequest, self).__init__(**kwargs)
self.task = task
@schema_property('task')
def task(self):
return self._property_task
@task.setter
def task(self, value):
if value is None:
self._property_task = None
return
self.assert_isinstance(value, "task", six.string_types)
self._property_task = value
class GetByTaskIdResponse(Response):
"""
Response of models.get_by_task_id endpoint.
:param model: Model info
:type model: Model
"""
_service = "models"
_action = "get_by_task_id"
_version = "2.9"
_schema = {
'definitions': {
'model': {
'properties': {
'comment': {
'description': 'Model comment',
'type': ['string', 'null'],
},
'company': {
'description': 'Company id',
'type': ['string', 'null'],
},
'created': {
'description': 'Model creation time',
'format': 'date-time',
'type': ['string', 'null'],
},
'design': {
'additionalProperties': True,
'description': 'Json object representing the model design. Should be identical to the network design of the task which created the model',
'type': ['object', 'null'],
},
'framework': {
'description': 'Framework on which the model is based. Should be identical to the framework of the task which created the model',
'type': ['string', 'null'],
},
'id': {'description': 'Model id', 'type': ['string', 'null']},
'labels': {
'additionalProperties': {'type': 'integer'},
'description': "Json object representing the ids of the labels in the model. The keys are the layers' names and the values are the ids.",
'type': ['object', 'null'],
},
'name': {
'description': 'Model name',
'type': ['string', 'null'],
},
'parent': {
'description': 'Parent model ID',
'type': ['string', 'null'],
},
'project': {
'description': 'Associated project ID',
'type': ['string', 'null'],
},
'ready': {
'description': 'Indication if the model is final and can be used by other tasks',
'type': ['boolean', 'null'],
},
'system_tags': {
'description': "System tags. This field is reserved for system use, please don't use it.",
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'tags': {
'description': 'User-defined tags',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
'task': {
'description': 'Task ID of task in which the model was created',
'type': ['string', 'null'],
},
'ui_cache': {
'additionalProperties': True,
'description': 'UI cache for this model',
'type': ['object', 'null'],
},
'uri': {
'description': 'URI for the model, pointing to the destination storage.',
'type': ['string', 'null'],
},
'user': {
'description': 'Associated user id',
'type': ['string', 'null'],
},
},
'type': 'object',
},
},
'properties': {
'model': {
'description': 'Model info',
'oneOf': [{'$ref': '#/definitions/model'}, {'type': 'null'}],
},
},
'type': 'object',
}
def __init__(
self, model=None, **kwargs):
super(GetByTaskIdResponse, self).__init__(**kwargs)
self.model = model
@schema_property('model')
def model(self):
return self._property_model
@model.setter
def model(self, value):
if value is None:
self._property_model = None
return
if isinstance(value, dict):
value = Model.from_dict(value)
else:
self.assert_isinstance(value, "model", Model)
self._property_model = value
class MakePrivateRequest(Request):
"""
Convert public models to private
:param ids: Ids of the models to convert. Only models created by the company
can be converted
:type ids: Sequence[str]
"""
_service = "models"
_action = "make_private"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'ids': {
'description': 'Ids of the models to convert. Only models created by the company can be converted',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
},
'type': 'object',
}
def __init__(
self, ids=None, **kwargs):
super(MakePrivateRequest, self).__init__(**kwargs)
self.ids = ids
@schema_property('ids')
def ids(self):
return self._property_ids
@ids.setter
def ids(self, value):
if value is None:
self._property_ids = None
return
self.assert_isinstance(value, "ids", (list, tuple))
self.assert_isinstance(value, "ids", six.string_types, is_array=True)
self._property_ids = value
class MakePrivateResponse(Response):
"""
Response of models.make_private endpoint.
:param updated: Number of models updated
:type updated: int
"""
_service = "models"
_action = "make_private"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'updated': {
'description': 'Number of models updated',
'type': ['integer', 'null'],
},
},
'type': 'object',
}
def __init__(
self, updated=None, **kwargs):
super(MakePrivateResponse, self).__init__(**kwargs)
self.updated = updated
@schema_property('updated')
def updated(self):
return self._property_updated
@updated.setter
def updated(self, value):
if value is None:
self._property_updated = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "updated", six.integer_types)
self._property_updated = value
class MakePublicRequest(Request):
"""
Convert company models to public
:param ids: Ids of the models to convert
:type ids: Sequence[str]
"""
_service = "models"
_action = "make_public"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'ids': {
'description': 'Ids of the models to convert',
'items': {'type': 'string'},
'type': ['array', 'null'],
},
},
'type': 'object',
}
def __init__(
self, ids=None, **kwargs):
super(MakePublicRequest, self).__init__(**kwargs)
self.ids = ids
@schema_property('ids')
def ids(self):
return self._property_ids
@ids.setter
def ids(self, value):
if value is None:
self._property_ids = None
return
self.assert_isinstance(value, "ids", (list, tuple))
self.assert_isinstance(value, "ids", six.string_types, is_array=True)
self._property_ids = value
class MakePublicResponse(Response):
"""
Response of models.make_public endpoint.
:param updated: Number of models updated
:type updated: int
"""
_service = "models"
_action = "make_public"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'updated': {
'description': 'Number of models updated',
'type': ['integer', 'null'],
},
},
'type': 'object',
}
def __init__(
self, updated=None, **kwargs):
super(MakePublicResponse, self).__init__(**kwargs)
self.updated = updated
@schema_property('updated')
def updated(self):
return self._property_updated
@updated.setter
def updated(self, value):
if value is None:
self._property_updated = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "updated", six.integer_types)
self._property_updated = value
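# Example (illustrative sketch): toggling model visibility. make_public and
# make_private both take a list of model ids and report how many entries were
# updated. `session` is an assumed Session-like dispatcher (see above).
#
#     resp = session.send(MakePublicRequest(ids=["a1b2c3", "d4e5f6"]))
#     print(resp.updated)                # number of models made public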
class SetReadyRequest(Request):
"""
Set the model ready flag to True. If the model is an output model of a task, then try to publish the task.
:param model: Model id
:type model: str
:param force_publish_task: Publish the associated task (if exists) even if it
is not in the 'stopped' state. Optional, the default value is False.
:type force_publish_task: bool
:param publish_task: Indicates that the associated task (if exists) should be
published. Optional, the default value is True.
:type publish_task: bool
"""
_service = "models"
_action = "set_ready"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'force_publish_task': {
'description': "Publish the associated task (if exists) even if it is not in the 'stopped' state. Optional, the default value is False.",
'type': 'boolean',
},
'model': {'description': 'Model id', 'type': 'string'},
'publish_task': {
'description': 'Indicates that the associated task (if exists) should be published. Optional, the default value is True.',
'type': 'boolean',
},
},
'required': ['model'],
'type': 'object',
}
def __init__(
self, model, force_publish_task=None, publish_task=None, **kwargs):
super(SetReadyRequest, self).__init__(**kwargs)
self.model = model
self.force_publish_task = force_publish_task
self.publish_task = publish_task
@schema_property('model')
def model(self):
return self._property_model
@model.setter
def model(self, value):
if value is None:
self._property_model = None
return
self.assert_isinstance(value, "model", six.string_types)
self._property_model = value
@schema_property('force_publish_task')
def force_publish_task(self):
return self._property_force_publish_task
@force_publish_task.setter
def force_publish_task(self, value):
if value is None:
self._property_force_publish_task = None
return
self.assert_isinstance(value, "force_publish_task", (bool,))
self._property_force_publish_task = value
@schema_property('publish_task')
def publish_task(self):
return self._property_publish_task
@publish_task.setter
def publish_task(self, value):
if value is None:
self._property_publish_task = None
return
self.assert_isinstance(value, "publish_task", (bool,))
self._property_publish_task = value
class SetReadyResponse(Response):
"""
Response of models.set_ready endpoint.
:param updated: Number of models updated (0 or 1)
:type updated: int
:param published_task: Result of publishing of the model's associated task (if
exists). Returned only if the task was published successfully as part of the
model publishing.
:type published_task: dict
"""
_service = "models"
_action = "set_ready"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'published_task': {
'description': "Result of publishing of the model's associated task (if exists). Returned only if the task was published successfully as part of the model publishing.",
'properties': {
'data': {
'description': 'Data returned from the task publishing operation.',
'properties': {
'committed_versions_results': {
'description': 'Committed versions results',
'items': {
'additionalProperties': True,
'type': 'object',
},
'type': 'array',
},
'fields': {
'additionalProperties': True,
'description': 'Updated fields names and values',
'type': 'object',
},
'updated': {
'description': 'Number of tasks updated (0 or 1)',
'enum': [0, 1],
'type': 'integer',
},
},
'type': 'object',
},
'id': {'description': 'Task id', 'type': 'string'},
},
'type': ['object', 'null'],
},
'updated': {
'description': 'Number of models updated (0 or 1)',
'enum': [0, 1],
'type': ['integer', 'null'],
},
},
'type': 'object',
}
def __init__(
self, updated=None, published_task=None, **kwargs):
super(SetReadyResponse, self).__init__(**kwargs)
self.updated = updated
self.published_task = published_task
@schema_property('updated')
def updated(self):
return self._property_updated
@updated.setter
def updated(self, value):
if value is None:
self._property_updated = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "updated", six.integer_types)
self._property_updated = value
@schema_property('published_task')
def published_task(self):
return self._property_published_task
@published_task.setter
def published_task(self, value):
if value is None:
self._property_published_task = None
return
self.assert_isinstance(value, "published_task", (dict,))
self._property_published_task = value
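# Example (illustrative sketch): marking a model ready. By default the
# associated task (if any) is published as well; pass publish_task=False to
# skip that, or force_publish_task=True to publish a task that is not in the
# 'stopped' state. `session` is an assumed Session-like dispatcher (see above).
#
#     resp = session.send(SetReadyRequest(model="a1b2c3", publish_task=False))
#     print(resp.updated, resp.published_task)   # -> SetReadyResponse fields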
class UpdateRequest(Request):
"""
Update a model
:param model: Model id
:type model: str
:param name: Model name. Unique within the company.
:type name: str
:param comment: Model comment
:type comment: str
:param tags: User-defined tags list
:type tags: Sequence[str]
:param system_tags: System tags list. This field is reserved for system use,
please don't use it.
:type system_tags: Sequence[str]
:param ready: Indication if the model is final and can be used by other tasks.
Default is false.
:type ready: bool
:param created: Model creation time (UTC)
:type created: datetime.datetime
:param ui_cache: UI cache for this model
:type ui_cache: dict
:param project: Project to which the model belongs
:type project: str
:param task: Associated task ID
:type task: str
:param iteration: Iteration (used to update task statistics if an associated
task is reported)
:type iteration: int
"""
_service = "models"
_action = "update"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'comment': {'description': 'Model comment', 'type': 'string'},
'created': {
'description': 'Model creation time (UTC)',
'format': 'date-time',
'type': 'string',
},
'iteration': {
'description': 'Iteration (used to update task statistics if an associated task is reported)',
'type': 'integer',
},
'model': {'description': 'Model id', 'type': 'string'},
'name': {
'description': 'Model name. Unique within the company.',
'type': 'string',
},
'project': {
'description': 'Project to which the model belongs',
'type': 'string',
},
'ready': {
'default': False,
'description': 'Indication if the model is final and can be used by other tasks. Default is false.',
'type': 'boolean',
},
'system_tags': {
'description': "System tags list. This field is reserved for system use, please don't use it.",
'items': {'type': 'string'},
'type': 'array',
},
'tags': {
'description': 'User-defined tags list',
'items': {'type': 'string'},
'type': 'array',
},
'task': {'description': 'Associated task ID', 'type': 'string'},
'ui_cache': {
'additionalProperties': True,
'description': 'UI cache for this model',
'type': 'object',
},
},
'required': ['model'],
'type': 'object',
}
def __init__(
self, model, name=None, comment=None, tags=None, system_tags=None, ready=False, created=None, ui_cache=None, project=None, task=None, iteration=None, **kwargs):
super(UpdateRequest, self).__init__(**kwargs)
self.model = model
self.name = name
self.comment = comment
self.tags = tags
self.system_tags = system_tags
self.ready = ready
self.created = created
self.ui_cache = ui_cache
self.project = project
self.task = task
self.iteration = iteration
@schema_property('model')
def model(self):
return self._property_model
@model.setter
def model(self, value):
if value is None:
self._property_model = None
return
self.assert_isinstance(value, "model", six.string_types)
self._property_model = value
@schema_property('name')
def name(self):
return self._property_name
@name.setter
def name(self, value):
if value is None:
self._property_name = None
return
self.assert_isinstance(value, "name", six.string_types)
self._property_name = value
@schema_property('comment')
def comment(self):
return self._property_comment
@comment.setter
def comment(self, value):
if value is None:
self._property_comment = None
return
self.assert_isinstance(value, "comment", six.string_types)
self._property_comment = value
@schema_property('tags')
def tags(self):
return self._property_tags
@tags.setter
def tags(self, value):
if value is None:
self._property_tags = None
return
self.assert_isinstance(value, "tags", (list, tuple))
self.assert_isinstance(value, "tags", six.string_types, is_array=True)
self._property_tags = value
@schema_property('system_tags')
def system_tags(self):
return self._property_system_tags
@system_tags.setter
def system_tags(self, value):
if value is None:
self._property_system_tags = None
return
self.assert_isinstance(value, "system_tags", (list, tuple))
self.assert_isinstance(value, "system_tags", six.string_types, is_array=True)
self._property_system_tags = value
@schema_property('ready')
def ready(self):
return self._property_ready
@ready.setter
def ready(self, value):
if value is None:
self._property_ready = None
return
self.assert_isinstance(value, "ready", (bool,))
self._property_ready = value
@schema_property('created')
def created(self):
return self._property_created
@created.setter
def created(self, value):
if value is None:
self._property_created = None
return
self.assert_isinstance(value, "created", six.string_types + (datetime,))
if not isinstance(value, datetime):
value = parse_datetime(value)
self._property_created = value
@schema_property('ui_cache')
def ui_cache(self):
return self._property_ui_cache
@ui_cache.setter
def ui_cache(self, value):
if value is None:
self._property_ui_cache = None
return
self.assert_isinstance(value, "ui_cache", (dict,))
self._property_ui_cache = value
@schema_property('project')
def project(self):
return self._property_project
@project.setter
def project(self, value):
if value is None:
self._property_project = None
return
self.assert_isinstance(value, "project", six.string_types)
self._property_project = value
@schema_property('task')
def task(self):
return self._property_task
@task.setter
def task(self, value):
if value is None:
self._property_task = None
return
self.assert_isinstance(value, "task", six.string_types)
self._property_task = value
@schema_property('iteration')
def iteration(self):
return self._property_iteration
@iteration.setter
def iteration(self, value):
if value is None:
self._property_iteration = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "iteration", six.integer_types)
self._property_iteration = value
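# Illustrative only: a minimal sketch of building an UpdateRequest, assuming the
# session/transport machinery defined earlier in this module. The id string below
# is a hypothetical placeholder, not a real object id.
def _example_update_request():
    req = UpdateRequest(
        model="<model-id>",        # the only field the schema requires
        name="resnet50-finetuned",
        comment="Updated after retraining",
        tags=["vision", "resnet"],
        ready=True,
        iteration=1000,
    )
    return req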
class UpdateResponse(Response):
"""
Response of models.update endpoint.
:param updated: Number of models updated (0 or 1)
:type updated: int
:param fields: Updated fields names and values
:type fields: dict
"""
_service = "models"
_action = "update"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'fields': {
'additionalProperties': True,
'description': 'Updated fields names and values',
'type': ['object', 'null'],
},
'updated': {
'description': 'Number of models updated (0 or 1)',
'enum': [0, 1],
'type': ['integer', 'null'],
},
},
'type': 'object',
}
def __init__(
self, updated=None, fields=None, **kwargs):
super(UpdateResponse, self).__init__(**kwargs)
self.updated = updated
self.fields = fields
@schema_property('updated')
def updated(self):
return self._property_updated
@updated.setter
def updated(self, value):
if value is None:
self._property_updated = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "updated", six.integer_types)
self._property_updated = value
@schema_property('fields')
def fields(self):
return self._property_fields
@fields.setter
def fields(self, value):
if value is None:
self._property_fields = None
return
self.assert_isinstance(value, "fields", (dict,))
self._property_fields = value
class UpdateForTaskRequest(Request):
"""
Create or update a new model for a task
:param task: Task id
:type task: str
:param uri: URI for the model. Exactly one of uri or override_model_id is
required.
:type uri: str
:param name: Model name. Unique within the company.
:type name: str
:param comment: Model comment
:type comment: str
:param tags: User-defined tags list
:type tags: Sequence[str]
:param system_tags: System tags list. This field is reserved for system use,
please don't use it.
:type system_tags: Sequence[str]
:param override_model_id: Override model ID. If provided, this model is updated
in the task. Exactly one of override_model_id or uri is required.
:type override_model_id: str
:param iteration: Iteration (used to update task statistics)
:type iteration: int
"""
_service = "models"
_action = "update_for_task"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'comment': {'description': 'Model comment', 'type': 'string'},
'iteration': {
'description': 'Iteration (used to update task statistics)',
'type': 'integer',
},
'name': {
'description': 'Model name. Unique within the company.',
'type': 'string',
},
'override_model_id': {
'description': 'Override model ID. If provided, this model is updated in the task. Exactly one of override_model_id or uri is required.',
'type': 'string',
},
'system_tags': {
'description': "System tags list. This field is reserved for system use, please don't use it.",
'items': {'type': 'string'},
'type': 'array',
},
'tags': {
'description': 'User-defined tags list',
'items': {'type': 'string'},
'type': 'array',
},
'task': {'description': 'Task id', 'type': 'string'},
'uri': {
'description': 'URI for the model. Exactly one of uri or override_model_id is required.',
'type': 'string',
},
},
'required': ['task'],
'type': 'object',
}
def __init__(
self, task, uri=None, name=None, comment=None, tags=None, system_tags=None, override_model_id=None, iteration=None, **kwargs):
super(UpdateForTaskRequest, self).__init__(**kwargs)
self.task = task
self.uri = uri
self.name = name
self.comment = comment
self.tags = tags
self.system_tags = system_tags
self.override_model_id = override_model_id
self.iteration = iteration
@schema_property('task')
def task(self):
return self._property_task
@task.setter
def task(self, value):
if value is None:
self._property_task = None
return
self.assert_isinstance(value, "task", six.string_types)
self._property_task = value
@schema_property('uri')
def uri(self):
return self._property_uri
@uri.setter
def uri(self, value):
if value is None:
self._property_uri = None
return
self.assert_isinstance(value, "uri", six.string_types)
self._property_uri = value
@schema_property('name')
def name(self):
return self._property_name
@name.setter
def name(self, value):
if value is None:
self._property_name = None
return
self.assert_isinstance(value, "name", six.string_types)
self._property_name = value
@schema_property('comment')
def comment(self):
return self._property_comment
@comment.setter
def comment(self, value):
if value is None:
self._property_comment = None
return
self.assert_isinstance(value, "comment", six.string_types)
self._property_comment = value
@schema_property('tags')
def tags(self):
return self._property_tags
@tags.setter
def tags(self, value):
if value is None:
self._property_tags = None
return
self.assert_isinstance(value, "tags", (list, tuple))
self.assert_isinstance(value, "tags", six.string_types, is_array=True)
self._property_tags = value
@schema_property('system_tags')
def system_tags(self):
return self._property_system_tags
@system_tags.setter
def system_tags(self, value):
if value is None:
self._property_system_tags = None
return
self.assert_isinstance(value, "system_tags", (list, tuple))
self.assert_isinstance(value, "system_tags", six.string_types, is_array=True)
self._property_system_tags = value
@schema_property('override_model_id')
def override_model_id(self):
return self._property_override_model_id
@override_model_id.setter
def override_model_id(self, value):
if value is None:
self._property_override_model_id = None
return
self.assert_isinstance(value, "override_model_id", six.string_types)
self._property_override_model_id = value
@schema_property('iteration')
def iteration(self):
return self._property_iteration
@iteration.setter
def iteration(self, value):
if value is None:
self._property_iteration = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "iteration", six.integer_types)
self._property_iteration = value
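# Illustrative only: the schema requires exactly one of `uri` or
# `override_model_id`. A minimal sketch (the ids are hypothetical placeholders):
def _example_update_for_task_request():
    # Register a new output model for the task by URI...
    by_uri = UpdateForTaskRequest(task="<task-id>", uri="s3://bucket/model.bin")
    # ...or update an existing model attached to the task (never pass both).
    by_id = UpdateForTaskRequest(task="<task-id>", override_model_id="<model-id>")
    return by_uri, by_id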
class UpdateForTaskResponse(Response):
"""
Response of models.update_for_task endpoint.
:param id: ID of the model
:type id: str
:param created: Was the model created
:type created: bool
:param updated: Number of models updated (0 or 1)
:type updated: int
:param fields: Updated fields names and values
:type fields: dict
"""
_service = "models"
_action = "update_for_task"
_version = "2.9"
_schema = {
'definitions': {},
'properties': {
'created': {
'description': 'Was the model created',
'type': ['boolean', 'null'],
},
'fields': {
'additionalProperties': True,
'description': 'Updated fields names and values',
'type': ['object', 'null'],
},
'id': {'description': 'ID of the model', 'type': ['string', 'null']},
'updated': {
'description': 'Number of models updated (0 or 1)',
'type': ['integer', 'null'],
},
},
'type': 'object',
}
def __init__(
self, id=None, created=None, updated=None, fields=None, **kwargs):
super(UpdateForTaskResponse, self).__init__(**kwargs)
self.id = id
self.created = created
self.updated = updated
self.fields = fields
@schema_property('id')
def id(self):
return self._property_id
@id.setter
def id(self, value):
if value is None:
self._property_id = None
return
self.assert_isinstance(value, "id", six.string_types)
self._property_id = value
@schema_property('created')
def created(self):
return self._property_created
@created.setter
def created(self, value):
if value is None:
self._property_created = None
return
self.assert_isinstance(value, "created", (bool,))
self._property_created = value
@schema_property('updated')
def updated(self):
return self._property_updated
@updated.setter
def updated(self, value):
if value is None:
self._property_updated = None
return
if isinstance(value, float) and value.is_integer():
value = int(value)
self.assert_isinstance(value, "updated", six.integer_types)
self._property_updated = value
@schema_property('fields')
def fields(self):
return self._property_fields
@fields.setter
def fields(self, value):
if value is None:
self._property_fields = None
return
self.assert_isinstance(value, "fields", (dict,))
self._property_fields = value
response_mapping = {
GetByIdRequest: GetByIdResponse,
GetByTaskIdRequest: GetByTaskIdResponse,
GetAllRequest: GetAllResponse,
UpdateForTaskRequest: UpdateForTaskResponse,
CreateRequest: CreateResponse,
EditRequest: EditResponse,
UpdateRequest: UpdateResponse,
SetReadyRequest: SetReadyResponse,
DeleteRequest: DeleteResponse,
MakePublicRequest: MakePublicResponse,
MakePrivateRequest: MakePrivateResponse,
}
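# Illustrative only: a sketch of how a request/response mapping like this is
# typically consumed by a client that decodes raw replies; any `session.send`
# helper around it is hypothetical, not part of this module.
def _example_response_lookup(request):
    # e.g. for an UpdateRequest instance this yields UpdateResponse
    response_cls = response_mapping[type(request)]
    return response_cls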
# ---- a10sdk/core/cgnv6/cgnv6_lsn_rule_list_domain_name.py | deepfield/a10sdk-python | Apache-2.0 ----
from a10sdk.common.A10BaseClass import A10BaseClass
class UdpCfg(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param action_type: {"enum": ["dnat", "drop", "one-to-one-snat", "pass-through", "snat", "set-dscp"], "type": "string", "description": "'dnat': Apply Dest NAT; 'drop': Drop the Packets; 'one-to-one-snat': Apply one-to-one source NAT for the packets; 'pass-through': Pass the Packets Through; 'snat': Redirect the Packets to a Different Source NAT Pool; 'set-dscp': To set dscp value for the packets; ", "format": "enum"}
:param dscp_value: {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; '36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}
:param end_port: {"description": "End of Port Range (inclusive)", "minimum": 1, "type": "number", "maximum": 65535, "format": "number"}
:param ipv4_list: {"minLength": 1, "maxLength": 63, "type": "string", "description": "IP-List (IP-List Name)", "format": "string-rlx"}
:param action_cfg: {"enum": ["action", "no-action"], "type": "string", "description": "'action': LSN Rule-List Action; 'no-action': Exclude LSN Rule-List Action; ", "format": "enum"}
:param dscp_direction: {"enum": ["inbound", "outbound"], "type": "string", "description": "'inbound': To set dscp value for inbound packets; 'outbound': To set dscp value for outbound packets; ", "format": "enum"}
:param start_port: {"description": "Single Port or Start of Port Range (inclusive), Port 0 is Match Any Port", "minimum": 0, "type": "number", "maximum": 65535, "format": "number"}
:param shared: {"default": 0, "partition-visibility": "private", "type": "number", "description": "The pool is a shared pool", "format": "flag"}
:param pool: {"minLength": 1, "maxLength": 63, "type": "string", "description": "NAT Pool (NAT Pool or Pool Group)", "format": "string-rlx"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "udp-cfg"
self.DeviceProxy = ""
self.action_type = ""
self.dscp_value = ""
self.end_port = ""
self.ipv4_list = ""
self.action_cfg = ""
self.dscp_direction = ""
self.start_port = ""
self.shared = ""
self.pool = ""
for key, value in kwargs.items():
setattr(self, key, value)
class TcpCfg(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param action_type: {"enum": ["dnat", "drop", "one-to-one-snat", "pass-through", "snat", "set-dscp", "template"], "type": "string", "description": "'dnat': Apply Dest NAT; 'drop': Drop the Packets; 'one-to-one-snat': Apply one-to-one source NAT for the packets; 'pass-through': Pass the Packets Through; 'snat': Redirect the Packets to a Different Source NAT Pool; 'set-dscp': To set dscp value for the packets; 'template': Template; ", "format": "enum"}
:param dscp_value: {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; '36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}
:param end_port: {"description": "End of Port Range (inclusive)", "minimum": 1, "type": "number", "maximum": 65535, "format": "number"}
:param ipv4_list: {"minLength": 1, "maxLength": 63, "type": "string", "description": "IP-List (IP-List Name)", "format": "string-rlx"}
:param action_cfg: {"enum": ["action", "no-action"], "type": "string", "description": "'action': LSN Rule-List Action; 'no-action': Exclude LSN Rule-List Action; ", "format": "enum"}
:param dscp_direction: {"enum": ["inbound", "outbound"], "type": "string", "description": "'inbound': To set dscp value for inbound packets; 'outbound': To set dscp value for outbound packets; ", "format": "enum"}
:param start_port: {"description": "Single Port or Start of Port Range (inclusive), Port 0 is Match Any Port", "minimum": 0, "type": "number", "maximum": 65535, "format": "number"}
:param shared: {"default": 0, "partition-visibility": "private", "type": "number", "description": "The pool is a shared pool", "format": "flag"}
:param pool: {"minLength": 1, "maxLength": 63, "type": "string", "description": "NAT Pool (NAT Pool or Pool Group)", "format": "string-rlx"}
:param http_alg: {"minLength": 1, "maxLength": 63, "type": "string", "description": "HTTP-ALG Template (Template Name)", "format": "string-rlx"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "tcp-cfg"
self.DeviceProxy = ""
self.action_type = ""
self.dscp_value = ""
self.end_port = ""
self.ipv4_list = ""
self.action_cfg = ""
self.dscp_direction = ""
self.start_port = ""
self.shared = ""
self.pool = ""
self.http_alg = ""
for key, value in kwargs.items():
setattr(self, key, value)
class DscpCfg(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param action_type: {"enum": ["set-dscp"], "type": "string", "description": "'set-dscp': To set dscp value for the packets; ", "format": "enum"}
:param action_cfg: {"enum": ["action"], "type": "string", "description": "'action': LSN Rule-List Action; ", "format": "enum"}
:param dscp_value: {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; '36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}
:param dscp_match: {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "any", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); 'any': Match any dscp value; '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; '36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}
:param dscp_direction: {"enum": ["inbound", "outbound"], "type": "string", "description": "'inbound': To set dscp value for inbound packets; 'outbound': To set dscp value for outbound packets; ", "format": "enum"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "dscp-cfg"
self.DeviceProxy = ""
self.action_type = ""
self.action_cfg = ""
self.dscp_value = ""
self.dscp_match = ""
self.dscp_direction = ""
for key, value in kwargs.items():
setattr(self, key, value)
class IcmpOthersCfg(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param action_type: {"enum": ["dnat", "drop", "one-to-one-snat", "pass-through", "snat", "set-dscp"], "type": "string", "description": "'dnat': Apply Dest NAT; 'drop': Drop the Packets; 'one-to-one-snat': Apply one-to-one source NAT for the packets; 'pass-through': Pass the Packets Through; 'snat': Redirect the Packets to a Different Source NAT Pool; 'set-dscp': To set dscp value for the packets; ", "format": "enum"}
:param dscp_value: {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; '36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}
:param ipv4_list: {"minLength": 1, "maxLength": 63, "type": "string", "description": "IP-List (IP-List Name)", "format": "string-rlx"}
:param action_cfg: {"enum": ["action", "no-action"], "type": "string", "description": "'action': LSN Rule-List Action; 'no-action': Exclude LSN Rule-List Action; ", "format": "enum"}
:param dscp_direction: {"enum": ["inbound", "outbound"], "type": "string", "description": "'inbound': To set dscp value for inbound packets; 'outbound': To set dscp value for outbound packets; ", "format": "enum"}
:param shared: {"default": 0, "partition-visibility": "private", "type": "number", "description": "The pool is a shared pool", "format": "flag"}
:param pool: {"minLength": 1, "maxLength": 63, "type": "string", "description": "NAT Pool (NAT Pool or Pool Group)", "format": "string-rlx"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "icmp-others-cfg"
self.DeviceProxy = ""
self.action_type = ""
self.dscp_value = ""
self.ipv4_list = ""
self.action_cfg = ""
self.dscp_direction = ""
self.shared = ""
self.pool = ""
for key, value in kwargs.items():
setattr(self, key, value)
class RuleCfg(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param proto: {"enum": ["tcp", "udp", "icmp", "others", "dscp"], "type": "string", "description": "'tcp': TCP L4 Protoco; 'udp': UDP L4 Protocol; 'icmp': ICMP L4 Protocol; 'others': Other L4 Protocl; 'dscp': Match dscp value; ", "format": "enum"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "rule-cfg"
self.DeviceProxy = ""
self.proto = ""
self.udp_cfg = {}
self.tcp_cfg = {}
self.dscp_cfg = {}
self.icmp_others_cfg = {}
for key, value in kwargs.items():
setattr(self, key, value)
class DomainName(A10BaseClass):
"""Class Description::
Configure a Specific Rule-Set (Domain Name).
Class domain-name supports CRUD Operations and inherits from `common/A10BaseClass`.
This class is the `"PARENT"` class for this module.
:param name_domain: {"description": "Configure a Specific Rule-Set (Domain Name)", "format": "string", "minLength": 1, "optional": false, "maxLength": 63, "type": "string"}
:param uuid: {"description": "uuid of the object", "format": "string", "minLength": 1, "modify-not-allowed": 1, "optional": true, "maxLength": 64, "type": "string"}
:param rule_cfg: {"minItems": 1, "items": {"type": "object"}, "uniqueItems": true, "type": "array", "array": [{"properties": {"proto": {"enum": ["tcp", "udp", "icmp", "others", "dscp"], "type": "string", "description": "'tcp': TCP L4 Protoco; 'udp': UDP L4 Protocol; 'icmp': ICMP L4 Protocol; 'others': Other L4 Protocl; 'dscp': Match dscp value; ", "format": "enum"}, "udp-cfg": {"type": "object", "properties": {"action-type": {"enum": ["dnat", "drop", "one-to-one-snat", "pass-through", "snat", "set-dscp"], "type": "string", "description": "'dnat': Apply Dest NAT; 'drop': Drop the Packets; 'one-to-one-snat': Apply one-to-one source NAT for the packets; 'pass-through': Pass the Packets Through; 'snat': Redirect the Packets to a Different Source NAT Pool; 'set-dscp': To set dscp value for the packets; ", "format": "enum"}, "dscp-value": {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; '36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}, "end-port": {"description": "End of Port Range (inclusive)", "minimum": 1, "type": "number", "maximum": 65535, "format": "number"}, "ipv4-list": {"minLength": 1, "maxLength": 63, "type": "string", "description": "IP-List (IP-List Name)", "format": "string-rlx"}, "action-cfg": {"enum": ["action", "no-action"], "type": "string", "description": "'action': LSN Rule-List Action; 'no-action': Exclude LSN Rule-List Action; ", "format": "enum"}, "dscp-direction": {"enum": ["inbound", "outbound"], "type": "string", "description": "'inbound': To set dscp value for inbound packets; 'outbound': To set dscp value for outbound packets; ", "format": "enum"}, "start-port": {"description": "Single Port or Start of Port Range (inclusive), Port 0 is 
Match Any Port", "minimum": 0, "type": "number", "maximum": 65535, "format": "number"}, "shared": {"default": 0, "partition-visibility": "private", "type": "number", "description": "The pool is a shared pool", "format": "flag"}, "pool": {"minLength": 1, "maxLength": 63, "type": "string", "description": "NAT Pool (NAT Pool or Pool Group)", "format": "string-rlx"}}}, "tcp-cfg": {"type": "object", "properties": {"action-type": {"enum": ["dnat", "drop", "one-to-one-snat", "pass-through", "snat", "set-dscp", "template"], "type": "string", "description": "'dnat': Apply Dest NAT; 'drop': Drop the Packets; 'one-to-one-snat': Apply one-to-one source NAT for the packets; 'pass-through': Pass the Packets Through; 'snat': Redirect the Packets to a Different Source NAT Pool; 'set-dscp': To set dscp value for the packets; 'template': Template; ", "format": "enum"}, "dscp-value": {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; '36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}, "end-port": {"description": "End of Port Range (inclusive)", "minimum": 1, "type": "number", "maximum": 65535, "format": "number"}, "ipv4-list": {"minLength": 1, "maxLength": 63, "type": "string", "description": "IP-List (IP-List Name)", "format": "string-rlx"}, "action-cfg": {"enum": ["action", "no-action"], "type": "string", "description": "'action': LSN Rule-List Action; 'no-action': Exclude LSN Rule-List Action; ", "format": "enum"}, "dscp-direction": {"enum": ["inbound", "outbound"], "type": "string", "description": "'inbound': To set dscp value for inbound packets; 'outbound': To set dscp value for outbound packets; ", "format": "enum"}, "start-port": {"description": "Single Port or Start of Port 
Range (inclusive), Port 0 is Match Any Port", "minimum": 0, "type": "number", "maximum": 65535, "format": "number"}, "shared": {"default": 0, "partition-visibility": "private", "type": "number", "description": "The pool is a shared pool", "format": "flag"}, "pool": {"minLength": 1, "maxLength": 63, "type": "string", "description": "NAT Pool (NAT Pool or Pool Group)", "format": "string-rlx"}, "http-alg": {"minLength": 1, "maxLength": 63, "type": "string", "description": "HTTP-ALG Template (Template Name)", "format": "string-rlx"}}}, "dscp-cfg": {"type": "object", "properties": {"action-type": {"enum": ["set-dscp"], "type": "string", "description": "'set-dscp': To set dscp value for the packets; ", "format": "enum"}, "action-cfg": {"enum": ["action"], "type": "string", "description": "'action': LSN Rule-List Action; ", "format": "enum"}, "dscp-value": {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; '36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}, "dscp-match": {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "any", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 
'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); 'any': Match any dscp value; '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; '36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}, "dscp-direction": {"enum": ["inbound", "outbound"], "type": "string", "description": "'inbound': To set dscp value for inbound packets; 'outbound': To set dscp value for outbound packets; ", "format": "enum"}}}, "optional": true, "icmp-others-cfg": {"type": "object", "properties": {"action-type": {"enum": ["dnat", "drop", "one-to-one-snat", "pass-through", "snat", "set-dscp"], "type": "string", "description": "'dnat': Apply Dest NAT; 'drop': Drop the Packets; 'one-to-one-snat': Apply one-to-one source NAT for the packets; 'pass-through': Pass the Packets Through; 'snat': Redirect the Packets to a Different Source NAT Pool; 'set-dscp': To set dscp value for the packets; ", "format": "enum"}, "dscp-value": {"enum": ["default", "af11", "af12", "af13", "af21", "af22", "af23", "af31", "af32", "af33", "af41", "af42", "af43", "cs1", "cs2", "cs3", "cs4", "cs5", "cs6", "cs7", "ef", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63"], "type": "string", "description": "'default': Default dscp (000000); 'af11': AF11 (001010); 'af12': AF12 (001100); 'af13': AF13 (001110); 'af21': AF21 (010010); 'af22': AF22 (010100); 'af23': AF23 (010110); 'af31': AF31 (011010); 'af32': AF32 (011100); 'af33': AF33 (011110); 'af41': AF41 (100010); 'af42': AF42 (100100); 'af43': AF43 (100110); 'cs1': CS1 (001000); 'cs2': CS2 (010000); 'cs3': CS3 (011000); 'cs4': CS4 (100000); 'cs5': CS5 (101000); 'cs6': CS6 (110000); 'cs7': CS7 (111000); 'ef': EF (101110); '0': 000000; '1': 000001; '2': 000010; '3': 000011; '4': 000100; '5': 000101; '6': 000110; '7': 000111; '8': 001000; '9': 001001; '10': 001010; '11': 001011; '12': 001100; '13': 001101; '14': 001110; '15': 001111; '16': 010000; '17': 010001; '18': 010010; '19': 010011; '20': 010100; '21': 010101; '22': 010110; '23': 010111; '24': 011000; '25': 011001; '26': 011010; '27': 011011; '28': 011100; '29': 011101; '30': 011110; '31': 011111; '32': 100000; '33': 100001; '34': 100010; '35': 100011; 
'36': 100100; '37': 100101; '38': 100110; '39': 100111; '40': 101000; '41': 101001; '42': 101010; '43': 101011; '44': 101100; '45': 101101; '46': 101110; '47': 101111; '48': 110000; '49': 110001; '50': 110010; '51': 110011; '52': 110100; '53': 110101; '54': 110110; '55': 110111; '56': 111000; '57': 111001; '58': 111010; '59': 111011; '60': 111100; '61': 111101; '62': 111110; '63': 111111; ", "format": "enum"}, "ipv4-list": {"minLength": 1, "maxLength": 63, "type": "string", "description": "IP-List (IP-List Name)", "format": "string-rlx"}, "action-cfg": {"enum": ["action", "no-action"], "type": "string", "description": "'action': LSN Rule-List Action; 'no-action': Exclude LSN Rule-List Action; ", "format": "enum"}, "dscp-direction": {"enum": ["inbound", "outbound"], "type": "string", "description": "'inbound': To set dscp value for inbound packets; 'outbound': To set dscp value for outbound packets; ", "format": "enum"}, "shared": {"default": 0, "partition-visibility": "private", "type": "number", "description": "The pool is a shared pool", "format": "flag"}, "pool": {"minLength": 1, "maxLength": 63, "type": "string", "description": "NAT Pool (NAT Pool or Pool Group)", "format": "string-rlx"}}}}}]}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
URL for this object::
`https://<Hostname|Ip address>//axapi/v3/cgnv6/lsn-rule-list/{name}/domain-name/{name_domain}`.
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.required = [ "name_domain"]
self.b_key = "domain-name"
self.a10_url="/axapi/v3/cgnv6/lsn-rule-list/{name}/domain-name/{name_domain}"
self.DeviceProxy = ""
self.name_domain = ""
self.uuid = ""
self.rule_cfg = []
for key, value in kwargs.items():
setattr(self, key, value)
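# Illustrative only: a minimal sketch of building a domain-name rule set with the
# classes above. The DeviceProxy/session wiring from a10sdk is assumed to exist,
# and the pool name and domain are hypothetical placeholders.
def _example_domain_name():
    tcp_rule = RuleCfg(
        proto="tcp",
        tcp_cfg={"action-cfg": "action", "action-type": "snat",
                 "start-port": 80, "end-port": 80, "pool": "snat-pool-1"},
    )
    return DomainName(name_domain="example.com", rule_cfg=[tcp_rule])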
# ---- test_calculator_app.py | Nimerkuloff/testwork | MIT ----
import time
import pytest
import pywinauto
from .gui_auto import PyWinAuto
"""
Basic Operational Tests
"""
def test_user_can_start_app():
mgr = PyWinAuto()
assert mgr.app.is_process_running(), "APP COULD NOT START"
mgr.close_app()
def test_user_can_close_app():
mgr = PyWinAuto()
mgr.app.Калькулятор.type_keys("%{F4}") # Alt-F4
time.sleep(1)
assert not mgr.app.is_process_running(), "APP COULD NOT CLOSE"
def test_user_can_press_0():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button9.click()
assert mgr.app.Калькулятор.Edit.window_text() == "0", "0 BUTTON MISMATCH"
mgr.close_app()
def test_user_can_press_1():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button19.click()
assert mgr.app.Калькулятор.Edit.window_text() == "1", "1 BUTTON MISMATCH"
mgr.close_app()
def test_user_can_press_2():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button18.click()
assert mgr.app.Калькулятор.Edit.window_text() == "2", "2 BUTTON MISMATCH"
mgr.close_app()
def test_user_can_press_3():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button17.click()
assert mgr.app.Калькулятор.Edit.window_text() == "3", "3 BUTTON MISMATCH"
mgr.close_app()
def test_user_can_press_4():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button16.click()
assert mgr.app.Калькулятор.Edit.window_text() == "4", "4 BUTTON MISMATCH"
mgr.close_app()
def test_user_can_press_5():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button15.click()
assert mgr.app.Калькулятор.Edit.window_text() == "5", "5 BUTTON MISMATCH"
mgr.close_app()
@pytest.mark.xfail(reason="fix needed. Press 6. Get 5")
def test_user_can_press_6():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button14.click()
assert mgr.app.Калькулятор.Edit.window_text() == "6", "6 BUTTON MISMATCH"
mgr.close_app()
def test_user_can_press_7():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button13.click()
assert mgr.app.Калькулятор.Edit.window_text() == "7", "7 BUTTON MISMATCH"
mgr.close_app()
def test_user_can_press_8():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click()
assert mgr.app.Калькулятор.Edit.window_text() == "8", "8 BUTTON MISMATCH"
mgr.close_app()
def test_user_can_press_9():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button11.click()
assert mgr.app.Калькулятор.Edit.window_text() == "9", "9 BUTTON MISMATCH"
mgr.close_app()
def test_user_can_press_clear():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button4.click()
try:
mgr.close_app()
except (pywinauto.findwindows.ElementAmbiguousError) as e:
print("\n"+"==== Exception error window rised ====",)
assert False
@pytest.mark.xfail(reason="fix needed. Press delete char. Get Unhandled Exception")
def test_user_can_press_delete():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button7.click()
try:
mgr.close_app()
except (pywinauto.findwindows.ElementAmbiguousError) as e:
print("\n"+"==== Exception error window rised ====",)
assert False
@pytest.mark.xfail(reason="fix needed. Press divide. Get Unhandled Exception")
def test_user_can_press_divide():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button6.click()
try:
mgr.close_app()
except (pywinauto.findwindows.ElementAmbiguousError) as e:
print("\n"+"==== Exception error window rised ====",)
assert False
def test_user_can_press_mult():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button5.click()
try:
mgr.close_app()
except (pywinauto.findwindows.ElementAmbiguousError) as e:
print("\n"+"==== Exception error window rised ====",)
assert False
def test_user_can_press_minus():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button3.click()
try:
mgr.close_app()
except (pywinauto.findwindows.ElementAmbiguousError) as e:
print("\n"+"==== Exception error window rised ====",)
assert False
def test_user_can_press_plus():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button2.click()
try:
mgr.close_app()
except (pywinauto.findwindows.ElementAmbiguousError) as e:
print("\n"+"==== Exception error window rised ====",)
assert False
def test_user_can_press_equal():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button0.click()
try:
mgr.close_app()
except (pywinauto.findwindows.ElementAmbiguousError) as e:
print("\n"+"==== Exception error window rised ====",)
assert False
def test_user_can_press_float():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button8.click()
try:
mgr.close_app()
except (pywinauto.findwindows.ElementAmbiguousError) as e:
print("\n" + "==== Exception error window rised ====", )
assert False
@pytest.mark.xfail(reason="fix needed. Press change sign. Get Unhandled Exception")
def test_user_can_press_sign():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button10.click()
try:
mgr.close_app()
except (pywinauto.findwindows.ElementAmbiguousError) as e:
print("\n" + "==== Exception error window rised ====", )
assert False
def test_user_can_perform_clear():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button19.click()
mgr.app.Калькулятор.Button18.click()
mgr.app.Калькулятор.Button17.click()
mgr.app.Калькулятор.Button4.click()
assert mgr.app.Калькулятор.Edit.window_text() == "", "CLEAR NOT PERFOMED"
mgr.close_app()
def test_user_can_delete_one_digit():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button19.click()
mgr.app.Калькулятор.Button18.click()
mgr.app.Калькулятор.Button17.click()
mgr.app.Калькулятор.Button7.click()
assert mgr.app.Калькулятор.Edit.window_text() == "12", "DELETE DIGIT NOT PERFOMED"
mgr.close_app()
@pytest.mark.xfail
def test_user_can_perform_clear_6_times():
mgr = PyWinAuto()
for _ in range(6):
mgr.app.Калькулятор.Button19.click()
mgr.app.Калькулятор.Button18.click()
mgr.app.Калькулятор.Button17.click()
mgr.app.Калькулятор.Button4.click()
assert mgr.app.Калькулятор.Edit.window_text() == "", "CLEAR NOT PERFOMED"
mgr.close_app()
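# Illustrative only: each test in this suite builds and tears down PyWinAuto() by
# hand. A fixture like this sketch could centralize that setup; the existing
# tests would opt in by taking `calc` as an argument (none do as written).
@pytest.fixture
def calc():
    mgr = PyWinAuto()
    yield mgr
    if mgr.app.is_process_running():
        mgr.close_app()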
"""
Functionality Test Cases
"""
def test_user_can_add_two_int():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button18.click()  # 2
mgr.app.Калькулятор.Button2.click()  # +
mgr.app.Калькулятор.Button18.click()  # 2
mgr.app.Калькулятор.Button0.click()  # =
assert mgr.app.Калькулятор.Edit.window_text() == "4"
mgr.close_app()
@pytest.mark.xfail()
def test_user_can_add_two_negative():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click()  # 8
mgr.app.Калькулятор.Button10.click()  # -8
mgr.app.Калькулятор.Button2.click()  # +
mgr.app.Калькулятор.Button18.click()  # 2
mgr.app.Калькулятор.Button10.click()  # -2
mgr.app.Калькулятор.Button0.click()  # =
assert mgr.app.Калькулятор.Edit.window_text() == "-10"
mgr.close_app()
def test_user_can_add_pos_and_neg():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click()# 8
mgr.app.Калькулятор.Button2.click() # +
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button10.click() # -2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "6"
mgr.close_app()
@pytest.mark.xfail()
def test_user_can_sub_two_negative():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button10.click() # -8
mgr.app.Калькулятор.Button3.click() # -
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button10.click() # -2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "-6"
mgr.close_app()
def test_user_can_sub_pos_and_neg():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button3.click() # -
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button10.click() # -2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "10"
mgr.close_app()
@pytest.mark.xfail()
def test_user_can_sub_neg_and_pos():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button10.click() # -8
mgr.app.Калькулятор.Button3.click() # -
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "-10"
mgr.close_app()
def test_user_can_mult_two_int():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button5.click() # *
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "16"
mgr.close_app()
def test_user_can_mult_two_negative():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button10.click() # -8
mgr.app.Калькулятор.Button5.click() # *
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button10.click() # -2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "16"
mgr.close_app()
@pytest.mark.xfail()
def test_user_can_mult_pos_and_neg():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button5.click() # *
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button10.click() # -2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "-16"
mgr.close_app()
@pytest.mark.xfail()
def test_user_can_mult_neg_and_pos():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button10.click() # -8
mgr.app.Калькулятор.Button5.click() # *
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "-16"
mgr.close_app()
def test_user_can_div_two_pos():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button6.click() #:
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "4"
mgr.close_app()
def test_user_can_div_two_neg():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button10.click() # -8
mgr.app.Калькулятор.Button6.click() #:
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button10.click() # -2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "4"
mgr.close_app()
@pytest.mark.xfail()
def test_user_can_div_pos_and_neg():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
    mgr.app.Калькулятор.Button6.click()  # :
mgr.app.Калькулятор.Button18.click() # 2
mgr.app.Калькулятор.Button10.click() # -2
mgr.app.Калькулятор.Button0.click() # =
assert mgr.app.Калькулятор.Edit.window_text() == "-4"
mgr.close_app()
@pytest.mark.xfail()
def test_user_can_div_number_by_zero_without_exception():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button12.click() # 8
    mgr.app.Калькулятор.Button6.click()  # :
mgr.app.Калькулятор.Button9.click() # 0
mgr.app.Калькулятор.Button0.click() # =
try:
mgr.close_app()
    except pywinauto.findwindows.ElementAmbiguousError:
        print("\n==== Exception error window raised ====")
        assert False
def test_user_can_div_zero_by_any_number_without_exception():
mgr = PyWinAuto()
mgr.app.Калькулятор.Button9.click() # 0
    mgr.app.Калькулятор.Button6.click()  # :
mgr.app.Калькулятор.Button12.click() # 8
mgr.app.Калькулятор.Button0.click() # =
try:
mgr.close_app()
    except pywinauto.findwindows.ElementAmbiguousError:
        print("\n==== Exception error window raised ====")
assert mgr.app.Калькулятор.Edit.window_text() == "0"
# TODO add tests for float numbers
# TODO add tests for input from keyboard
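# Hedged sketches for the two TODOs above (assumptions, not part of the original
# suite): Button20 as the decimal-separator key is a hypothetical index, and the
# keyboard test assumes the window accepts pywinauto's type_keys() syntax, in
# which a literal plus sign is written as {+}.
def test_user_can_add_two_floats_sketch():
    mgr = PyWinAuto()
    mgr.app.Калькулятор.Button12.click()  # 8
    mgr.app.Калькулятор.Button20.click()  # , (hypothetical decimal-separator key)
    mgr.app.Калькулятор.Button18.click()  # 2
    mgr.app.Калькулятор.Button2.click()  # +
    mgr.app.Калькулятор.Button18.click()  # 2
    mgr.app.Калькулятор.Button0.click()  # =
    # "10,2" assumes the Russian locale's decimal comma in the display
    assert mgr.app.Калькулятор.Edit.window_text() == "10,2"
    mgr.close_app()
def test_user_can_add_two_int_from_keyboard_sketch():
    mgr = PyWinAuto()
    mgr.app.Калькулятор.type_keys("8{+}2=")
    assert mgr.app.Калькулятор.Edit.window_text() == "10"
    mgr.close_app()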
| 25.263804 | 86 | 0.663429 | 1,576 | 12,354 | 5.036802 | 0.081853 | 0.102041 | 0.284832 | 0.068783 | 0.929453 | 0.853993 | 0.834971 | 0.808516 | 0.740111 | 0.709877 | 0 | 0.027138 | 0.191679 | 12,354 | 488 | 87 | 25.315574 | 0.767775 | 0.018617 | 0 | 0.765273 | 0 | 0 | 0.077816 | 0 | 0 | 0 | 0 | 0.002049 | 0.125402 | 1 | 0.125402 | false | 0 | 0.009646 | 0 | 0.135048 | 0.03537 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
76e1818f0da32384d9c6c016dd9387177d9087cf | 5,009 | py | Python | openaerostruct/tests/test_struct_point_masses.py | mdolab/OpenAeroStruct | a10a673ec0c0fd7e4c41b8ec39b856606ce7ec78 | [
"Apache-2.0"
] | 114 | 2017-04-06T15:24:19.000Z | 2022-03-21T09:57:43.000Z | openaerostruct/tests/test_struct_point_masses.py | mdolab/OpenAeroStruct | a10a673ec0c0fd7e4c41b8ec39b856606ce7ec78 | [
"Apache-2.0"
] | 322 | 2017-04-07T01:40:03.000Z | 2022-03-17T21:50:52.000Z | openaerostruct/tests/test_struct_point_masses.py | mdolab/OpenAeroStruct | a10a673ec0c0fd7e4c41b8ec39b856606ce7ec78 | [
"Apache-2.0"
] | 83 | 2017-04-06T16:53:26.000Z | 2022-03-19T19:34:05.000Z | from openmdao.utils.assert_utils import assert_rel_error
import unittest
import numpy as np
from openaerostruct.geometry.utils import generate_mesh
from openaerostruct.structures.struct_groups import SpatialBeamAlone
import openmdao.api as om
class Test(unittest.TestCase):
def test(self):
# Create a dictionary to store options about the surface
mesh_dict = {"num_y": 31, "wing_type": "rect", "span": 10, "symmetry": True}
mesh = generate_mesh(mesh_dict)
surface = {
# Wing definition
"name": "wing", # name of the surface
"symmetry": True, # if true, model one half of wing
# reflected across the plane y = 0
"fem_model_type": "tube",
"mesh": mesh,
# Structural values are based on aluminum 7075
"E": 70.0e9, # [Pa] Young's modulus of the spar
"G": 30.0e9, # [Pa] shear modulus of the spar
"yield": 500.0e6 / 2.5, # [Pa] yield stress divided by 2.5 for limiting case
"mrho": 3.0e3, # [kg/m^3] material density
"fem_origin": 0.5, # normalized chordwise location of the spar
"t_over_c_cp": np.array([0.15]), # maximum airfoil thickness
"thickness_cp": np.ones((3)) * 0.1,
"wing_weight_ratio": 2.0,
"struct_weight_relief": False, # True to add the weight of the structure to the loads on the structure
"distributed_fuel_weight": False,
"exact_failure_constraint": False,
}
# Create the problem and assign the model group
prob = om.Problem()
ny = surface["mesh"].shape[1]
surface["n_point_masses"] = 1
indep_var_comp = om.IndepVarComp()
indep_var_comp.add_output("loads", val=np.zeros((ny, 6)), units="N")
indep_var_comp.add_output("load_factor", val=1.0)
point_masses = np.array([[10.0]])
point_mass_locations = np.array([[1.0, -10.0, 0.0]])
indep_var_comp.add_output("point_masses", val=point_masses, units="kg")
indep_var_comp.add_output("point_mass_locations", val=point_mass_locations, units="m")
struct_group = SpatialBeamAlone(surface=surface)
# Add indep_vars to the structural group
struct_group.add_subsystem("indep_vars", indep_var_comp, promotes=["*"])
prob.model.add_subsystem(surface["name"], struct_group, promotes=["*"])
# Set up the problem
prob.setup()
prob.run_model()
assert_rel_error(self, prob["vonmises"][-1, 0], 2956850.70882332, 1e-4)
def test_multiple_masses(self):
# Create a dictionary to store options about the surface
mesh_dict = {"num_y": 31, "wing_type": "rect", "span": 10, "symmetry": True}
mesh = generate_mesh(mesh_dict)
surface = {
# Wing definition
"name": "wing", # name of the surface
"symmetry": True, # if true, model one half of wing
# reflected across the plane y = 0
"fem_model_type": "tube",
"mesh": mesh,
# Structural values are based on aluminum 7075
"E": 70.0e9, # [Pa] Young's modulus of the spar
"G": 30.0e9, # [Pa] shear modulus of the spar
"yield": 500.0e6 / 2.5, # [Pa] yield stress divided by 2.5 for limiting case
"mrho": 3.0e3, # [kg/m^3] material density
"fem_origin": 0.5, # normalized chordwise location of the spar
"t_over_c_cp": np.array([0.15]), # maximum airfoil thickness
"thickness_cp": np.ones((3)) * 0.1,
"wing_weight_ratio": 2.0,
"struct_weight_relief": False, # True to add the weight of the structure to the loads on the structure
"distributed_fuel_weight": False,
"exact_failure_constraint": False,
}
# Create the problem and assign the model group
prob = om.Problem()
ny = surface["mesh"].shape[1]
surface["n_point_masses"] = 2
indep_var_comp = om.IndepVarComp()
indep_var_comp.add_output("loads", val=np.zeros((ny, 6)), units="N")
indep_var_comp.add_output("load_factor", val=1.0)
point_masses = np.array([[10.0, 20.0]])
point_mass_locations = np.array([[1.0, -1.0, 0.0], [1.0, -2.0, 0.0]])
indep_var_comp.add_output("point_masses", val=point_masses, units="kg")
indep_var_comp.add_output("point_mass_locations", val=point_mass_locations, units="m")
struct_group = SpatialBeamAlone(surface=surface)
# Add indep_vars to the structural group
struct_group.add_subsystem("indep_vars", indep_var_comp, promotes=["*"])
prob.model.add_subsystem(surface["name"], struct_group, promotes=["*"])
# Set up the problem
prob.setup()
prob.run_model()
assert_rel_error(self, prob["vonmises"][-1, 0], 1557126.5793494075, 1e-4)
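    def _run_point_mass_case(self, point_masses, point_mass_locations):
        # Hedged refactoring sketch (an assumption, not upstream code): the two
        # tests above duplicate their surface and problem setup verbatim, so a
        # helper like this could own it, parameterized by the point-mass data,
        # and return the root von Mises stress. Each test would then reduce to
        # assert_rel_error(self, self._run_point_mass_case(pm, loc), expected, 1e-4).
        mesh_dict = {"num_y": 31, "wing_type": "rect", "span": 10, "symmetry": True}
        surface = {
            "name": "wing",
            "symmetry": True,
            "fem_model_type": "tube",
            "mesh": generate_mesh(mesh_dict),
            "E": 70.0e9,
            "G": 30.0e9,
            "yield": 500.0e6 / 2.5,
            "mrho": 3.0e3,
            "fem_origin": 0.5,
            "t_over_c_cp": np.array([0.15]),
            "thickness_cp": np.ones(3) * 0.1,
            "wing_weight_ratio": 2.0,
            "struct_weight_relief": False,
            "distributed_fuel_weight": False,
            "exact_failure_constraint": False,
            "n_point_masses": point_masses.shape[1],
        }
        ny = surface["mesh"].shape[1]
        indep_var_comp = om.IndepVarComp()
        indep_var_comp.add_output("loads", val=np.zeros((ny, 6)), units="N")
        indep_var_comp.add_output("load_factor", val=1.0)
        indep_var_comp.add_output("point_masses", val=point_masses, units="kg")
        indep_var_comp.add_output("point_mass_locations", val=point_mass_locations, units="m")
        struct_group = SpatialBeamAlone(surface=surface)
        struct_group.add_subsystem("indep_vars", indep_var_comp, promotes=["*"])
        prob = om.Problem()
        prob.model.add_subsystem(surface["name"], struct_group, promotes=["*"])
        prob.setup()
        prob.run_model()
        return prob["vonmises"][-1, 0]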
if __name__ == "__main__":
unittest.main()
| 37.661654 | 115 | 0.608704 | 675 | 5,009 | 4.322963 | 0.24 | 0.032899 | 0.049349 | 0.041124 | 0.885195 | 0.885195 | 0.885195 | 0.885195 | 0.866347 | 0.866347 | 0 | 0.043205 | 0.269914 | 5,009 | 132 | 116 | 37.94697 | 0.754717 | 0.232981 | 0 | 0.740741 | 1 | 0 | 0.154371 | 0.024678 | 0 | 0 | 0 | 0 | 0.037037 | 1 | 0.024691 | false | 0 | 0.074074 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
76ff8d5bb05a922963675d16d8e858a83ace1778 | 116 | py | Python | chexpert-model/util/__init__.py | stanfordmlgroup/CheXseg | ca345d7c4523816c174aafd2c40929345b38b2b2 | [
"MIT"
] | 7 | 2021-05-19T20:55:42.000Z | 2022-03-22T05:20:59.000Z | chexpert-model/util/__init__.py | subratac/CheXseg | fb5c411ce08e394cd4a2a87d963843942bdc2021 | [
"MIT"
] | null | null | null | chexpert-model/util/__init__.py | subratac/CheXseg | fb5c411ce08e394cd4a2a87d963843942bdc2021 | [
"MIT"
] | 3 | 2021-05-19T20:52:39.000Z | 2021-09-01T01:57:25.000Z | from util.io_util import *
from util.image_util import *
from util.model_util import *
from util.label_util import * | 29 | 29 | 0.801724 | 20 | 116 | 4.45 | 0.35 | 0.359551 | 0.47191 | 0.606742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12931 | 116 | 4 | 30 | 29 | 0.881188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
0a56b8712b8c93d35d03f583002e5e6599ef46d5 | 36,599 | py | Python | sdk/python/pulumi_rancher2/cloud_credential.py | pulumi/pulumi-rancher2 | 7a98af8cf598b711084a7f46c0fe71b43ed7a8ac | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-03-23T15:59:11.000Z | 2021-01-29T00:37:32.000Z | sdk/python/pulumi_rancher2/cloud_credential.py | pulumi/pulumi-rancher2 | 7a98af8cf598b711084a7f46c0fe71b43ed7a8ac | [
"ECL-2.0",
"Apache-2.0"
] | 76 | 2020-01-16T20:00:25.000Z | 2022-03-31T20:30:08.000Z | sdk/python/pulumi_rancher2/cloud_credential.py | pulumi/pulumi-rancher2 | 7a98af8cf598b711084a7f46c0fe71b43ed7a8ac | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2020-03-27T17:39:59.000Z | 2020-11-24T23:09:24.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['CloudCredentialArgs', 'CloudCredential']
@pulumi.input_type
class CloudCredentialArgs:
def __init__(__self__, *,
amazonec2_credential_config: Optional[pulumi.Input['CloudCredentialAmazonec2CredentialConfigArgs']] = None,
annotations: Optional[pulumi.Input[Mapping[str, Any]]] = None,
azure_credential_config: Optional[pulumi.Input['CloudCredentialAzureCredentialConfigArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
digitalocean_credential_config: Optional[pulumi.Input['CloudCredentialDigitaloceanCredentialConfigArgs']] = None,
google_credential_config: Optional[pulumi.Input['CloudCredentialGoogleCredentialConfigArgs']] = None,
labels: Optional[pulumi.Input[Mapping[str, Any]]] = None,
linode_credential_config: Optional[pulumi.Input['CloudCredentialLinodeCredentialConfigArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
openstack_credential_config: Optional[pulumi.Input['CloudCredentialOpenstackCredentialConfigArgs']] = None,
vsphere_credential_config: Optional[pulumi.Input['CloudCredentialVsphereCredentialConfigArgs']] = None):
"""
The set of arguments for constructing a CloudCredential resource.
:param pulumi.Input['CloudCredentialAmazonec2CredentialConfigArgs'] amazonec2_credential_config: AWS config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[Mapping[str, Any]] annotations: Annotations for Cloud Credential object (map)
:param pulumi.Input['CloudCredentialAzureCredentialConfigArgs'] azure_credential_config: Azure config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] description: Description for the Cloud Credential (string)
:param pulumi.Input['CloudCredentialDigitaloceanCredentialConfigArgs'] digitalocean_credential_config: DigitalOcean config for the Cloud Credential (list maxitems:1)
:param pulumi.Input['CloudCredentialGoogleCredentialConfigArgs'] google_credential_config: Google config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[Mapping[str, Any]] labels: Labels for Cloud Credential object (map)
:param pulumi.Input['CloudCredentialLinodeCredentialConfigArgs'] linode_credential_config: Linode config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] name: The name of the Cloud Credential (string)
:param pulumi.Input['CloudCredentialOpenstackCredentialConfigArgs'] openstack_credential_config: OpenStack config for the Cloud Credential (list maxitems:1)
:param pulumi.Input['CloudCredentialVsphereCredentialConfigArgs'] vsphere_credential_config: vSphere config for the Cloud Credential (list maxitems:1)
"""
if amazonec2_credential_config is not None:
pulumi.set(__self__, "amazonec2_credential_config", amazonec2_credential_config)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if azure_credential_config is not None:
pulumi.set(__self__, "azure_credential_config", azure_credential_config)
if description is not None:
pulumi.set(__self__, "description", description)
if digitalocean_credential_config is not None:
pulumi.set(__self__, "digitalocean_credential_config", digitalocean_credential_config)
if google_credential_config is not None:
pulumi.set(__self__, "google_credential_config", google_credential_config)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if linode_credential_config is not None:
pulumi.set(__self__, "linode_credential_config", linode_credential_config)
if name is not None:
pulumi.set(__self__, "name", name)
if openstack_credential_config is not None:
pulumi.set(__self__, "openstack_credential_config", openstack_credential_config)
if vsphere_credential_config is not None:
pulumi.set(__self__, "vsphere_credential_config", vsphere_credential_config)
@property
@pulumi.getter(name="amazonec2CredentialConfig")
def amazonec2_credential_config(self) -> Optional[pulumi.Input['CloudCredentialAmazonec2CredentialConfigArgs']]:
"""
AWS config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "amazonec2_credential_config")
@amazonec2_credential_config.setter
def amazonec2_credential_config(self, value: Optional[pulumi.Input['CloudCredentialAmazonec2CredentialConfigArgs']]):
pulumi.set(self, "amazonec2_credential_config", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Annotations for Cloud Credential object (map)
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="azureCredentialConfig")
def azure_credential_config(self) -> Optional[pulumi.Input['CloudCredentialAzureCredentialConfigArgs']]:
"""
Azure config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "azure_credential_config")
@azure_credential_config.setter
def azure_credential_config(self, value: Optional[pulumi.Input['CloudCredentialAzureCredentialConfigArgs']]):
pulumi.set(self, "azure_credential_config", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Description for the Cloud Credential (string)
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="digitaloceanCredentialConfig")
def digitalocean_credential_config(self) -> Optional[pulumi.Input['CloudCredentialDigitaloceanCredentialConfigArgs']]:
"""
DigitalOcean config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "digitalocean_credential_config")
@digitalocean_credential_config.setter
def digitalocean_credential_config(self, value: Optional[pulumi.Input['CloudCredentialDigitaloceanCredentialConfigArgs']]):
pulumi.set(self, "digitalocean_credential_config", value)
@property
@pulumi.getter(name="googleCredentialConfig")
def google_credential_config(self) -> Optional[pulumi.Input['CloudCredentialGoogleCredentialConfigArgs']]:
"""
Google config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "google_credential_config")
@google_credential_config.setter
def google_credential_config(self, value: Optional[pulumi.Input['CloudCredentialGoogleCredentialConfigArgs']]):
pulumi.set(self, "google_credential_config", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Labels for Cloud Credential object (map)
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter(name="linodeCredentialConfig")
def linode_credential_config(self) -> Optional[pulumi.Input['CloudCredentialLinodeCredentialConfigArgs']]:
"""
Linode config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "linode_credential_config")
@linode_credential_config.setter
def linode_credential_config(self, value: Optional[pulumi.Input['CloudCredentialLinodeCredentialConfigArgs']]):
pulumi.set(self, "linode_credential_config", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Cloud Credential (string)
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="openstackCredentialConfig")
def openstack_credential_config(self) -> Optional[pulumi.Input['CloudCredentialOpenstackCredentialConfigArgs']]:
"""
OpenStack config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "openstack_credential_config")
@openstack_credential_config.setter
def openstack_credential_config(self, value: Optional[pulumi.Input['CloudCredentialOpenstackCredentialConfigArgs']]):
pulumi.set(self, "openstack_credential_config", value)
@property
@pulumi.getter(name="vsphereCredentialConfig")
def vsphere_credential_config(self) -> Optional[pulumi.Input['CloudCredentialVsphereCredentialConfigArgs']]:
"""
vSphere config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "vsphere_credential_config")
@vsphere_credential_config.setter
def vsphere_credential_config(self, value: Optional[pulumi.Input['CloudCredentialVsphereCredentialConfigArgs']]):
pulumi.set(self, "vsphere_credential_config", value)
@pulumi.input_type
class _CloudCredentialState:
def __init__(__self__, *,
amazonec2_credential_config: Optional[pulumi.Input['CloudCredentialAmazonec2CredentialConfigArgs']] = None,
annotations: Optional[pulumi.Input[Mapping[str, Any]]] = None,
azure_credential_config: Optional[pulumi.Input['CloudCredentialAzureCredentialConfigArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
digitalocean_credential_config: Optional[pulumi.Input['CloudCredentialDigitaloceanCredentialConfigArgs']] = None,
driver: Optional[pulumi.Input[str]] = None,
google_credential_config: Optional[pulumi.Input['CloudCredentialGoogleCredentialConfigArgs']] = None,
labels: Optional[pulumi.Input[Mapping[str, Any]]] = None,
linode_credential_config: Optional[pulumi.Input['CloudCredentialLinodeCredentialConfigArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
openstack_credential_config: Optional[pulumi.Input['CloudCredentialOpenstackCredentialConfigArgs']] = None,
vsphere_credential_config: Optional[pulumi.Input['CloudCredentialVsphereCredentialConfigArgs']] = None):
"""
Input properties used for looking up and filtering CloudCredential resources.
:param pulumi.Input['CloudCredentialAmazonec2CredentialConfigArgs'] amazonec2_credential_config: AWS config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[Mapping[str, Any]] annotations: Annotations for Cloud Credential object (map)
:param pulumi.Input['CloudCredentialAzureCredentialConfigArgs'] azure_credential_config: Azure config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] description: Description for the Cloud Credential (string)
:param pulumi.Input['CloudCredentialDigitaloceanCredentialConfigArgs'] digitalocean_credential_config: DigitalOcean config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] driver: (Computed) The driver of the Cloud Credential (string)
:param pulumi.Input['CloudCredentialGoogleCredentialConfigArgs'] google_credential_config: Google config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[Mapping[str, Any]] labels: Labels for Cloud Credential object (map)
:param pulumi.Input['CloudCredentialLinodeCredentialConfigArgs'] linode_credential_config: Linode config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] name: The name of the Cloud Credential (string)
:param pulumi.Input['CloudCredentialOpenstackCredentialConfigArgs'] openstack_credential_config: OpenStack config for the Cloud Credential (list maxitems:1)
:param pulumi.Input['CloudCredentialVsphereCredentialConfigArgs'] vsphere_credential_config: vSphere config for the Cloud Credential (list maxitems:1)
"""
if amazonec2_credential_config is not None:
pulumi.set(__self__, "amazonec2_credential_config", amazonec2_credential_config)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if azure_credential_config is not None:
pulumi.set(__self__, "azure_credential_config", azure_credential_config)
if description is not None:
pulumi.set(__self__, "description", description)
if digitalocean_credential_config is not None:
pulumi.set(__self__, "digitalocean_credential_config", digitalocean_credential_config)
if driver is not None:
pulumi.set(__self__, "driver", driver)
if google_credential_config is not None:
pulumi.set(__self__, "google_credential_config", google_credential_config)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if linode_credential_config is not None:
pulumi.set(__self__, "linode_credential_config", linode_credential_config)
if name is not None:
pulumi.set(__self__, "name", name)
if openstack_credential_config is not None:
pulumi.set(__self__, "openstack_credential_config", openstack_credential_config)
if vsphere_credential_config is not None:
pulumi.set(__self__, "vsphere_credential_config", vsphere_credential_config)
@property
@pulumi.getter(name="amazonec2CredentialConfig")
def amazonec2_credential_config(self) -> Optional[pulumi.Input['CloudCredentialAmazonec2CredentialConfigArgs']]:
"""
AWS config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "amazonec2_credential_config")
@amazonec2_credential_config.setter
def amazonec2_credential_config(self, value: Optional[pulumi.Input['CloudCredentialAmazonec2CredentialConfigArgs']]):
pulumi.set(self, "amazonec2_credential_config", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Annotations for Cloud Credential object (map)
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="azureCredentialConfig")
def azure_credential_config(self) -> Optional[pulumi.Input['CloudCredentialAzureCredentialConfigArgs']]:
"""
Azure config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "azure_credential_config")
@azure_credential_config.setter
def azure_credential_config(self, value: Optional[pulumi.Input['CloudCredentialAzureCredentialConfigArgs']]):
pulumi.set(self, "azure_credential_config", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Description for the Cloud Credential (string)
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="digitaloceanCredentialConfig")
def digitalocean_credential_config(self) -> Optional[pulumi.Input['CloudCredentialDigitaloceanCredentialConfigArgs']]:
"""
DigitalOcean config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "digitalocean_credential_config")
@digitalocean_credential_config.setter
def digitalocean_credential_config(self, value: Optional[pulumi.Input['CloudCredentialDigitaloceanCredentialConfigArgs']]):
pulumi.set(self, "digitalocean_credential_config", value)
@property
@pulumi.getter
def driver(self) -> Optional[pulumi.Input[str]]:
"""
(Computed) The driver of the Cloud Credential (string)
"""
return pulumi.get(self, "driver")
@driver.setter
def driver(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "driver", value)
@property
@pulumi.getter(name="googleCredentialConfig")
def google_credential_config(self) -> Optional[pulumi.Input['CloudCredentialGoogleCredentialConfigArgs']]:
"""
Google config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "google_credential_config")
@google_credential_config.setter
def google_credential_config(self, value: Optional[pulumi.Input['CloudCredentialGoogleCredentialConfigArgs']]):
pulumi.set(self, "google_credential_config", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Labels for Cloud Credential object (map)
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter(name="linodeCredentialConfig")
def linode_credential_config(self) -> Optional[pulumi.Input['CloudCredentialLinodeCredentialConfigArgs']]:
"""
Linode config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "linode_credential_config")
@linode_credential_config.setter
def linode_credential_config(self, value: Optional[pulumi.Input['CloudCredentialLinodeCredentialConfigArgs']]):
pulumi.set(self, "linode_credential_config", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Cloud Credential (string)
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="openstackCredentialConfig")
def openstack_credential_config(self) -> Optional[pulumi.Input['CloudCredentialOpenstackCredentialConfigArgs']]:
"""
OpenStack config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "openstack_credential_config")
@openstack_credential_config.setter
def openstack_credential_config(self, value: Optional[pulumi.Input['CloudCredentialOpenstackCredentialConfigArgs']]):
pulumi.set(self, "openstack_credential_config", value)
@property
@pulumi.getter(name="vsphereCredentialConfig")
def vsphere_credential_config(self) -> Optional[pulumi.Input['CloudCredentialVsphereCredentialConfigArgs']]:
"""
vSphere config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "vsphere_credential_config")
@vsphere_credential_config.setter
def vsphere_credential_config(self, value: Optional[pulumi.Input['CloudCredentialVsphereCredentialConfigArgs']]):
pulumi.set(self, "vsphere_credential_config", value)
class CloudCredential(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
amazonec2_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialAmazonec2CredentialConfigArgs']]] = None,
annotations: Optional[pulumi.Input[Mapping[str, Any]]] = None,
azure_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialAzureCredentialConfigArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
digitalocean_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialDigitaloceanCredentialConfigArgs']]] = None,
google_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialGoogleCredentialConfigArgs']]] = None,
labels: Optional[pulumi.Input[Mapping[str, Any]]] = None,
linode_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialLinodeCredentialConfigArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
openstack_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialOpenstackCredentialConfigArgs']]] = None,
vsphere_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialVsphereCredentialConfigArgs']]] = None,
__props__=None):
"""
Provides a Rancher v2 Cloud Credential resource. This can be used to create Cloud Credential for Rancher v2.2.x and retrieve their information.
amazonec2, azure, digitalocean, linode, openstack and vsphere credentials config are supported for Cloud Credential.
## Example Usage
```python
import pulumi
import pulumi_rancher2 as rancher2
# Create a new rancher2 Cloud Credential
foo = rancher2.CloudCredential("foo",
amazonec2_credential_config=rancher2.CloudCredentialAmazonec2CredentialConfigArgs(
access_key="<AWS_ACCESS_KEY>",
secret_key="<AWS_SECRET_KEY>",
),
description="foo test")
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['CloudCredentialAmazonec2CredentialConfigArgs']] amazonec2_credential_config: AWS config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[Mapping[str, Any]] annotations: Annotations for Cloud Credential object (map)
:param pulumi.Input[pulumi.InputType['CloudCredentialAzureCredentialConfigArgs']] azure_credential_config: Azure config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] description: Description for the Cloud Credential (string)
:param pulumi.Input[pulumi.InputType['CloudCredentialDigitaloceanCredentialConfigArgs']] digitalocean_credential_config: DigitalOcean config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[pulumi.InputType['CloudCredentialGoogleCredentialConfigArgs']] google_credential_config: Google config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[Mapping[str, Any]] labels: Labels for Cloud Credential object (map)
:param pulumi.Input[pulumi.InputType['CloudCredentialLinodeCredentialConfigArgs']] linode_credential_config: Linode config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] name: The name of the Cloud Credential (string)
:param pulumi.Input[pulumi.InputType['CloudCredentialOpenstackCredentialConfigArgs']] openstack_credential_config: OpenStack config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[pulumi.InputType['CloudCredentialVsphereCredentialConfigArgs']] vsphere_credential_config: vSphere config for the Cloud Credential (list maxitems:1)
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: Optional[CloudCredentialArgs] = None,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a Rancher v2 Cloud Credential resource. This can be used to create Cloud Credential for Rancher v2.2.x and retrieve their information.
amazonec2, azure, digitalocean, linode, openstack and vsphere credentials config are supported for Cloud Credential.
## Example Usage
```python
import pulumi
import pulumi_rancher2 as rancher2
# Create a new rancher2 Cloud Credential
foo = rancher2.CloudCredential("foo",
amazonec2_credential_config=rancher2.CloudCredentialAmazonec2CredentialConfigArgs(
access_key="<AWS_ACCESS_KEY>",
secret_key="<AWS_SECRET_KEY>",
),
description="foo test")
```
:param str resource_name: The name of the resource.
:param CloudCredentialArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(CloudCredentialArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
amazonec2_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialAmazonec2CredentialConfigArgs']]] = None,
annotations: Optional[pulumi.Input[Mapping[str, Any]]] = None,
azure_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialAzureCredentialConfigArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
digitalocean_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialDigitaloceanCredentialConfigArgs']]] = None,
google_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialGoogleCredentialConfigArgs']]] = None,
labels: Optional[pulumi.Input[Mapping[str, Any]]] = None,
linode_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialLinodeCredentialConfigArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
openstack_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialOpenstackCredentialConfigArgs']]] = None,
vsphere_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialVsphereCredentialConfigArgs']]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = CloudCredentialArgs.__new__(CloudCredentialArgs)
__props__.__dict__["amazonec2_credential_config"] = amazonec2_credential_config
__props__.__dict__["annotations"] = annotations
__props__.__dict__["azure_credential_config"] = azure_credential_config
__props__.__dict__["description"] = description
__props__.__dict__["digitalocean_credential_config"] = digitalocean_credential_config
__props__.__dict__["google_credential_config"] = google_credential_config
__props__.__dict__["labels"] = labels
__props__.__dict__["linode_credential_config"] = linode_credential_config
__props__.__dict__["name"] = name
__props__.__dict__["openstack_credential_config"] = openstack_credential_config
__props__.__dict__["vsphere_credential_config"] = vsphere_credential_config
__props__.__dict__["driver"] = None
super(CloudCredential, __self__).__init__(
'rancher2:index/cloudCredential:CloudCredential',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
amazonec2_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialAmazonec2CredentialConfigArgs']]] = None,
annotations: Optional[pulumi.Input[Mapping[str, Any]]] = None,
azure_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialAzureCredentialConfigArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
digitalocean_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialDigitaloceanCredentialConfigArgs']]] = None,
driver: Optional[pulumi.Input[str]] = None,
google_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialGoogleCredentialConfigArgs']]] = None,
labels: Optional[pulumi.Input[Mapping[str, Any]]] = None,
linode_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialLinodeCredentialConfigArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
openstack_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialOpenstackCredentialConfigArgs']]] = None,
vsphere_credential_config: Optional[pulumi.Input[pulumi.InputType['CloudCredentialVsphereCredentialConfigArgs']]] = None) -> 'CloudCredential':
"""
Get an existing CloudCredential resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['CloudCredentialAmazonec2CredentialConfigArgs']] amazonec2_credential_config: AWS config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[Mapping[str, Any]] annotations: Annotations for Cloud Credential object (map)
:param pulumi.Input[pulumi.InputType['CloudCredentialAzureCredentialConfigArgs']] azure_credential_config: Azure config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] description: Description for the Cloud Credential (string)
:param pulumi.Input[pulumi.InputType['CloudCredentialDigitaloceanCredentialConfigArgs']] digitalocean_credential_config: DigitalOcean config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] driver: (Computed) The driver of the Cloud Credential (string)
:param pulumi.Input[pulumi.InputType['CloudCredentialGoogleCredentialConfigArgs']] google_credential_config: Google config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[Mapping[str, Any]] labels: Labels for Cloud Credential object (map)
:param pulumi.Input[pulumi.InputType['CloudCredentialLinodeCredentialConfigArgs']] linode_credential_config: Linode config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[str] name: The name of the Cloud Credential (string)
:param pulumi.Input[pulumi.InputType['CloudCredentialOpenstackCredentialConfigArgs']] openstack_credential_config: OpenStack config for the Cloud Credential (list maxitems:1)
:param pulumi.Input[pulumi.InputType['CloudCredentialVsphereCredentialConfigArgs']] vsphere_credential_config: vSphere config for the Cloud Credential (list maxitems:1)
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _CloudCredentialState.__new__(_CloudCredentialState)
__props__.__dict__["amazonec2_credential_config"] = amazonec2_credential_config
__props__.__dict__["annotations"] = annotations
__props__.__dict__["azure_credential_config"] = azure_credential_config
__props__.__dict__["description"] = description
__props__.__dict__["digitalocean_credential_config"] = digitalocean_credential_config
__props__.__dict__["driver"] = driver
__props__.__dict__["google_credential_config"] = google_credential_config
__props__.__dict__["labels"] = labels
__props__.__dict__["linode_credential_config"] = linode_credential_config
__props__.__dict__["name"] = name
__props__.__dict__["openstack_credential_config"] = openstack_credential_config
__props__.__dict__["vsphere_credential_config"] = vsphere_credential_config
return CloudCredential(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="amazonec2CredentialConfig")
def amazonec2_credential_config(self) -> pulumi.Output[Optional['outputs.CloudCredentialAmazonec2CredentialConfig']]:
"""
AWS config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "amazonec2_credential_config")
@property
@pulumi.getter
def annotations(self) -> pulumi.Output[Mapping[str, Any]]:
"""
Annotations for Cloud Credential object (map)
"""
return pulumi.get(self, "annotations")
@property
@pulumi.getter(name="azureCredentialConfig")
def azure_credential_config(self) -> pulumi.Output[Optional['outputs.CloudCredentialAzureCredentialConfig']]:
"""
Azure config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "azure_credential_config")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
Description for the Cloud Credential (string)
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="digitaloceanCredentialConfig")
def digitalocean_credential_config(self) -> pulumi.Output[Optional['outputs.CloudCredentialDigitaloceanCredentialConfig']]:
"""
DigitalOcean config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "digitalocean_credential_config")
@property
@pulumi.getter
def driver(self) -> pulumi.Output[str]:
"""
(Computed) The driver of the Cloud Credential (string)
"""
return pulumi.get(self, "driver")
@property
@pulumi.getter(name="googleCredentialConfig")
def google_credential_config(self) -> pulumi.Output[Optional['outputs.CloudCredentialGoogleCredentialConfig']]:
"""
Google config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "google_credential_config")
@property
@pulumi.getter
def labels(self) -> pulumi.Output[Mapping[str, Any]]:
"""
Labels for Cloud Credential object (map)
"""
return pulumi.get(self, "labels")
@property
@pulumi.getter(name="linodeCredentialConfig")
def linode_credential_config(self) -> pulumi.Output[Optional['outputs.CloudCredentialLinodeCredentialConfig']]:
"""
Linode config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "linode_credential_config")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the Cloud Credential (string)
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="openstackCredentialConfig")
def openstack_credential_config(self) -> pulumi.Output[Optional['outputs.CloudCredentialOpenstackCredentialConfig']]:
"""
OpenStack config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "openstack_credential_config")
@property
@pulumi.getter(name="vsphereCredentialConfig")
def vsphere_credential_config(self) -> pulumi.Output[Optional['outputs.CloudCredentialVsphereCredentialConfig']]:
"""
vSphere config for the Cloud Credential (list maxitems:1)
"""
return pulumi.get(self, "vsphere_credential_config")
| 53.664223 | 191 | 0.710129 | 3,575 | 36,599 | 7.023217 | 0.052587 | 0.139557 | 0.077943 | 0.046838 | 0.917516 | 0.911462 | 0.902342 | 0.891549 | 0.889517 | 0.887765 | 0 | 0.004113 | 0.19618 | 36,599 | 681 | 192 | 53.743025 | 0.849349 | 0.277357 | 0 | 0.843038 | 1 | 0 | 0.229619 | 0.208313 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164557 | false | 0.002532 | 0.017722 | 0 | 0.281013 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6a6821dac954b59e8fc93e7af84a21d8f943ac4c | 24,304 | py | Python | tests/tests.py | simplykhanh/thoughtloungev2 | e26f3fd7cf1a179622a4b20ae77dd2aa1bd5daa6 | [
"MIT"
] | null | null | null | tests/tests.py | simplykhanh/thoughtloungev2 | e26f3fd7cf1a179622a4b20ae77dd2aa1bd5daa6 | [
"MIT"
] | 1 | 2015-10-27T00:22:51.000Z | 2018-11-20T19:02:47.000Z | tests/tests.py | simplykhanh/thoughtloungev2 | e26f3fd7cf1a179622a4b20ae77dd2aa1bd5daa6 | [
"MIT"
] | 1 | 2015-10-27T00:11:22.000Z | 2015-10-27T00:11:22.000Z | import sys
sys.path.insert(0, '../thought_lounge')
from thought_lounge import app
from thought_lounge.models import *
from samples import *
from nose.tools import *
from io import BytesIO
import json
import importlib
test_app = app.test_client()
def setup_func():
app.config['ORIGINAL_SQLALCHEMY_DATABASE_URI'] = str(app.config['SQLALCHEMY_DATABASE_URI'])
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + app.config['DATABASE_FOLDER'] + 'test.db'
app.config['TESTING'] = True
db.create_all()
def teardown_func():
db.session.remove()
db.drop_all()
app.config['SQLALCHEMY_DATABASE_URI'] = str(app.config['ORIGINAL_SQLALCHEMY_DATABASE_URI'])
app.config['TESTING'] = False
def check_content_type(headers):
eq_(headers['Content-Type'], 'application/json')
def test_arithmetic():
eq_(100 + 20 + 3, 123)
headers = {
'Content-Type': 'application/json'
}
@with_setup(setup_func, teardown_func)
def test_picture():
# No pictures
url = '/api/pictures/'
rv = test_app.get(url)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 2)
ok_('href' in resp)
ok_('items' in resp)
eq_(len(resp['items']), 0)
# Creating a picture
url = '/api/pictures/'
files = {
'file': (BytesIO(pictures[0]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
check_content_type(rv.headers)
eq_(rv.status_code, 201)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 2)
ok_('href' in resp)
ok_('image' in resp)
@with_setup(setup_func, teardown_func)
def test_user():
# No users
url = '/api/users/'
rv = test_app.get(url)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 2)
ok_('href' in resp)
ok_('items' in resp)
eq_(len(resp['items']), 0)
# Creating a picture
url = '/api/pictures/'
files = {
'file': (BytesIO(pictures[0]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
resp = json.loads(rv.data.decode('utf-8'))
# Creating a user
url = '/api/users/'
data = dict(users[0])
data['picture'] = resp
rv = test_app.post(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 201)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 11)
ok_('href' in resp)
ok_('key' in resp)
ok_('href' in resp['key'])
ok_('picture' in resp)
ok_('href' in resp['picture'])
ok_('userLounges' in resp)
ok_('href' in resp['userLounges'])
ok_('hostApplications' in resp)
ok_('href' in resp['hostApplications'])
ok_('web' in resp)
ok_('href' in resp['web'])
eq_(resp['email'], 'ludwig@uvienna.edu')
eq_(resp['firstName'], 'Ludwig')
eq_(resp['lastName'], 'Wittgenstein')
eq_(resp['bio'], 'I\'ve solved philosophy!')
eq_(resp['role'], 'lounger')
# Creating a user without a picture
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[2]), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 201)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 11)
ok_('href' in resp)
ok_('key' in resp)
ok_('href' in resp['key'])
ok_('picture' in resp)
ok_('href' not in resp['picture'])
eq_(resp['email'], 'richard@caltech.fake.edu')
@with_setup(setup_func, teardown_func)
def test_password_auth():
# Creating a picture
url = '/api/pictures/'
files = {
'file': (BytesIO(pictures[0]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
# Creating a user
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[0]), headers = headers)
href = json.loads(rv.data.decode('utf-8'))['href']
# Signing in
url = '/api/auth/sign_in/'
data = {
'email': 'ludwig@uvienna.edu',
'password': 'BeetleinaBox'
}
rv = test_app.post(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp = json.loads(rv.data.decode('utf-8'))
eq_(href, resp['href'])
# Checking current user
url = '/api/auth/sign_in/'
rv = test_app.get(url)
check_content_type(rv.headers)
resp = json.loads(rv.data.decode('utf-8'))
eq_(rv.status_code, 200)
eq_(href, resp['href'])
# Signing out
url = '/api/auth/sign_out/'
rv = test_app.post(url)
check_content_type(rv.headers)
eq_(rv.status_code, 204)
# Checking current user (none)
url = '/api/auth/sign_in/'
rv = test_app.get(url)
check_content_type(rv.headers)
eq_(rv.status_code, 204)
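# Hedged helper sketch (an assumption, not in the original suite): the sign-in
# round trip above is repeated by hand in later tests, so it could be factored
# into a helper that signs a user in through the API and returns their href.
def _sign_in(email, password):
    data = {'email': email, 'password': password}
    rv = test_app.post('/api/auth/sign_in/', data = json.dumps(data), headers = headers)
    return json.loads(rv.data.decode('utf-8'))['href']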
@with_setup(setup_func, teardown_func)
def test_key():
# Creating a picture
url = '/api/pictures/'
files = {
'file': (BytesIO(pictures[0]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
files = {
'file': (BytesIO(pictures[1]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
# Creating user
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[0]), headers = headers)
resp = json.loads(rv.data.decode('utf-8'))
user_href = resp['href']
user_key_href = resp['key']['href']
user = User.query.all()[0]
user_key = user.key.key
# Creating admin
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[1]), headers = headers)
resp = json.loads(rv.data.decode('utf-8'))
admin_href = resp['href']
admin_key_href = resp['key']['href']
# Hack because no admins exist yet; in practice one admin would be manually added to the database
admin = User.query.all()[-1]
admin.role = 'admin'
db.session.commit()
admin_key = admin.key.key
    # Users are signed in automatically on creation. The unauthenticated check below would fail while the admin is still signed in, so we sign her out first.
rv = test_app.post('/api/auth/sign_out/')
# Must authenticate to access keys
url = user_key_href
rv = test_app.get(url)
check_content_type(rv.headers)
eq_(rv.status_code, 401)
# Admin can access admin key
url = admin_key_href
headers['Authorization-API-Key'] = admin_key
rv = test_app.get(url, headers = headers)
check_content_type(rv.headers)
resp = json.loads(rv.data.decode('utf-8'))
eq_(rv.status_code, 200)
eq_(len(resp), 2)
eq_(admin_key_href, resp['href'])
eq_(admin_key, resp['key'])
# Admin can access lounger key
url = user_key_href
    headers['Authorization-API-Key'] = admin_key
rv = test_app.get(url, headers = headers)
check_content_type(rv.headers)
resp = json.loads(rv.data.decode('utf-8'))
eq_(rv.status_code, 200)
eq_(user_key_href, resp['href'])
# Lounger can access lounger key
url = user_key_href
headers['Authorization-API-Key'] = user_key
rv = test_app.get(url, headers = headers)
check_content_type(rv.headers)
resp = json.loads(rv.data.decode('utf-8'))
eq_(rv.status_code, 200)
eq_(user_key_href, resp['href'])
# Lounger cannot access admin key
url = admin_key_href
headers['Authorization-API-Key'] = user_key
rv = test_app.get(url, headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 403)
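# Hedged helper sketch (an assumption, not in the original suite): the tests
# keep rebinding headers['Authorization-API-Key'] on the shared headers dict;
# building a fresh dict per request avoids that shared mutable state.
def _auth_headers(api_key):
    return {'Content-Type': 'application/json', 'Authorization-API-Key': api_key}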
@with_setup(setup_func, teardown_func)
def test_user_put():
# Creating a picture
url = '/api/pictures/'
files = {
'file': (BytesIO(pictures[0]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
files = {
'file': (BytesIO(pictures[1]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
pic2_resp = json.loads(rv.data.decode('utf-8'))
# Creating user
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[0]), headers = headers)
resp = json.loads(rv.data.decode('utf-8'))
user_href = resp['href']
user_key_href = resp['key']['href']
user = User.query.all()[0]
user_key = user.key.key
# Creating admin
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[1]), headers = headers)
resp = json.loads(rv.data.decode('utf-8'))
admin_href = resp['href']
admin_key_href = resp['key']['href']
# Hack because no admins exist yet; in practice one admin would be manually added to the database
admin = User.query.all()[-1]
admin.role = 'admin'
db.session.commit()
admin_key = admin.key.key
# Lounger can change own information but not role
url = user_href
    # Copy with dict() so the shared sample data isn't mutated
data = dict(users[0])
data['lastName'] = data['lastName'].upper()
data['bio'] = data['bio'].lower()
headers['Authorization-API-Key'] = user_key
rv = test_app.put(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
resp = json.loads(rv.data.decode('utf-8'))
eq_(rv.status_code, 200)
eq_(len(resp), 11)
    ok_('href' in resp)
    ok_('key' in resp)
    ok_('href' in resp['key'])
eq_(resp['email'], 'ludwig@uvienna.edu')
eq_(resp['firstName'], 'Ludwig')
eq_(resp['lastName'], 'WITTGENSTEIN')
eq_(resp['bio'], 'i\'ve solved philosophy!')
eq_(resp['role'], 'lounger')
# Admin can change lounger's information including role
url = user_href
data = dict(users[0])
data['lastName'] = data['lastName'].lower()
data['bio'] = data['bio'].upper()
data['role'] = 'host'
data['picture'] = {'href': pic2_resp['href']}
headers['Authorization-API-Key'] = admin_key
rv = test_app.put(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
resp = json.loads(rv.data.decode('utf-8'))
eq_(rv.status_code, 200)
eq_(len(resp), 11)
    ok_('href' in resp)
    ok_('key' in resp)
    ok_('href' in resp['key'])
eq_(resp['email'], 'ludwig@uvienna.edu')
eq_(resp['firstName'], 'Ludwig')
eq_(resp['lastName'], 'wittgenstein')
eq_(resp['bio'], 'I\'VE SOLVED PHILOSOPHY!')
eq_(resp['role'], 'host')
eq_(resp['picture']['href'], pic2_resp['href'])
@with_setup(setup_func, teardown_func)
def test_lounge():
# Creating a picture
url = '/api/pictures/'
files = {
'file': (BytesIO(pictures[0]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
files = {
'file': (BytesIO(pictures[1]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
# Creating user
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[0]), headers = headers)
rv = test_app.post(url, data = json.dumps(users[1]), headers = headers)
resp_host1 = json.loads(rv.data.decode('utf-8'))
for host in User.query.all():
host.role = 'host'
db.session.commit()
# No lounges
url = '/api/lounges/'
rv = test_app.get(url)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 2)
ok_('href' in resp)
ok_('items' in resp)
eq_(len(resp['items']), 0)
# Need authentication to create a lounge
url = '/api/lounges/'
rv = test_app.post(url, data = json.dumps(lounges[0]))
check_content_type(rv.headers)
eq_(rv.status_code, 401)
resp = json.loads(rv.data.decode('utf-8'))
# Creating a lounge
url = '/api/lounges/'
    headers['Authorization-API-Key'] = User.query.get(2).key.key
rv = test_app.post(url, data = json.dumps(lounges[0]), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 201)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 10)
ok_('href' in resp)
ok_('pictures' in resp)
eq_(resp['dateTime'], '2015-02-25T03:00:00+00:00')
eq_(resp['location'], '')
eq_(resp['campus'], 'UC Berkeley')
eq_(resp['isReserved'], False)
eq_(resp['summary'], '')
# Checking if the host was added properly
    rv = test_app.get(resp['loungeUsers']['href'] + '?type=host&expand=user', headers = headers)
host_resp = json.loads(rv.data.decode('utf-8'))
eq_(len(host_resp), 2)
ok_('href' in host_resp)
ok_('href' in host_resp['items'][0])
eq_(len(host_resp['items']), 1)
eq_(host_resp['items'][0]['user']['href'], resp_host1['href'])
# Changing the lounge
url = '/api/lounges/1/'
    headers['Authorization-API-Key'] = User.query.get(2).key.key
data = dict(lounges[0])
data.pop('campus')
data['location'] = 'Haas B2'
rv = test_app.put(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 10)
ok_('href' in resp)
ok_('pictures' in resp)
eq_(resp['dateTime'], '2015-02-25T03:00:00+00:00')
eq_(resp['location'], 'Haas B2')
eq_(resp['campus'], '')
eq_(resp['isReserved'], False)
eq_(resp['summary'], '')
@with_setup(setup_func, teardown_func)
def test_lounge_picture():
# Creating a picture
url = '/api/pictures/'
files = {
'file': (BytesIO(pictures[0]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
files = {
'file': (BytesIO(pictures[1]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
resp_picture1 = json.loads(rv.data.decode('utf-8'))
# Creating user
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[0]), headers = headers)
resp_user1 = json.loads(rv.data.decode('utf-8'))
for host in User.query.all():
host.role = 'host'
db.session.commit()
# Creating a lounge
url = '/api/lounges/'
    headers['Authorization-API-Key'] = User.query.get(1).key.key
rv = test_app.post(url, data = json.dumps(lounges[3]), headers = headers)
resp_lounge1 = json.loads(rv.data.decode('utf-8'))
# Need authorization to add a picture
url = resp_lounge1['pictures']['href']
headers['Authorization-API-Key'] = "mywrongapikey",
data = {'picture': {'href': resp_picture1['href']}}
rv = test_app.post(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 401)
# Adding a picture
url = resp_lounge1['pictures']['href']
    headers['Authorization-API-Key'] = User.query.get(1).key.key
data = {'picture': {'href': resp_picture1['href']}}
rv = test_app.post(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp_lounge_picture1 = json.loads(rv.data.decode('utf-8'))
eq_(len(resp_lounge_picture1), 2)
ok_('href' in resp_lounge_picture1)
ok_('picture' in resp_lounge_picture1)
ok_('href' in resp_lounge_picture1['picture'])
eq_(resp_lounge_picture1['picture']['href'], resp_picture1['href'])
eq_(resp_lounge_picture1['picture']['image'], resp_picture1['image'])
# Deleting a picture
url = resp_lounge_picture1['href']
    headers['Authorization-API-Key'] = User.query.get(1).key.key
rv = test_app.delete(url, headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 204)
# No pictures in lounge
url = resp_lounge1['pictures']['href']
rv = test_app.get(url)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 2)
ok_('href' in resp)
ok_('items' in resp)
eq_(len(resp['items']), 0)
@with_setup(setup_func, teardown_func)
def test_user_lounge():
# Creating a picture
url = '/api/pictures/'
files = {
'file': (BytesIO(pictures[0]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
files = {
'file': (BytesIO(pictures[1]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
# Creating user
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[0]), headers = headers)
resp_user1 = json.loads(rv.data.decode('utf-8'))
rv = test_app.post(url, data = json.dumps(users[1]), headers = headers)
resp_user2 = json.loads(rv.data.decode('utf-8'))
for host in User.query.all():
host.role = 'host'
db.session.commit()
# Creating a lounge
url = '/api/lounges/'
headers['Authorization-API-Key'] = User.query.get(1).key.key
rv = test_app.post(url, data = json.dumps(lounges[0]), headers = headers)
resp_lounge1 = json.loads(rv.data.decode('utf-8'))
rv = test_app.post(url, data = json.dumps(lounges[1]), headers = headers)
resp_lounge2 = json.loads(rv.data.decode('utf-8'))
headers['Authorization-API-Key'] = User.query.get(2).key.key
# Adding a user lounge
url = resp_user2['userLounges']['href']
data = dict(user_lounges[1])
data['isHost'] = False
data['lounge'] = {'href': resp_lounge1['href']}
rv = test_app.post(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 201)
resp = resp_user_lounge1 = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 6)
ok_('href' in resp)
ok_('lounge' in resp)
ok_('href' in resp['lounge'])
eq_(resp['topic'], 'I saw the best minds of my generation destroyed by madness.')
eq_(resp['summary'], 'Starving hysterical naked, dragging themselves through the negro streets at dawn looking for an angry fix.')
eq_(resp['showedUp'], True)
eq_(resp['isHost'], False)
eq_(resp['lounge']['href'], resp_lounge1['href'])
# Adding a user lounge in conflict
headers['Authorization-API-Key'] = User.query.get(2).key.key
url = resp_user2['userLounges']['href']
    data = dict(user_lounges[1])
data['isHost'] = False
data['lounge'] = {'href': resp_lounge1['href']}
rv = test_app.post(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 409)
# Changing a user lounge
url = resp_user_lounge1['href']
data = dict(user_lounges[1])
data['lounge'] = {'href': resp_lounge2['href']}
data['showedUp'] = False
data['isHost'] = False
headers['Authorization-API-Key'] = User.query.get(2).key.key
rv = test_app.put(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 6)
ok_('href' in resp)
ok_('lounge' in resp)
ok_('href' in resp['lounge'])
eq_(resp['topic'], 'I saw the best minds of my generation destroyed by madness.')
eq_(resp['summary'], 'Starving hysterical naked, dragging themselves through the negro streets at dawn looking for an angry fix.')
eq_(resp['showedUp'], False)
eq_(resp['isHost'], False)
eq_(resp['lounge']['href'], resp_lounge2['href'])
@with_setup(setup_func, teardown_func)
def test_lounge_user():
# Creating a picture
url = '/api/pictures/'
files = {
'file': (BytesIO(pictures[0]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
files = {
'file': (BytesIO(pictures[1]['bytes']), 'user.jpg')
}
rv = test_app.post(url, data = files)
# Creating user
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[0]), headers = headers)
resp_user1 = json.loads(rv.data.decode('utf-8'))
rv = test_app.post(url, data = json.dumps(users[1]), headers = headers)
resp_user2 = json.loads(rv.data.decode('utf-8'))
rv = test_app.post(url, data = json.dumps(users[2]), headers = headers)
resp_user3 = json.loads(rv.data.decode('utf-8'))
for host in User.query.all():
host.role = 'host'
db.session.commit()
# Creating a lounge
url = '/api/lounges/'
headers['Authorization-API-Key'] = User.query.get(1).key.key
rv = test_app.post(url, data = json.dumps(lounges[0]), headers = headers)
resp_lounge1 = json.loads(rv.data.decode('utf-8'))
rv = test_app.post(url, data = json.dumps(lounges[1]), headers = headers)
resp_lounge2 = json.loads(rv.data.decode('utf-8'))
# Adding a lounge user
url = resp_lounge1['loungeUsers']['href']
data = dict(user_lounges[1])
data['user'] = {'href': resp_user2['href']}
data['isHost'] = False
rv = test_app.post(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 201)
resp = resp_lounge_user1 = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 6)
ok_('href' in resp)
ok_('user' in resp)
ok_('href' in resp['user'])
eq_(resp['topic'], 'I saw the best minds of my generation destroyed by madness.')
eq_(resp['summary'], 'Starving hysterical naked, dragging themselves through the negro streets at dawn looking for an angry fix.')
eq_(resp['showedUp'], True)
eq_(resp['isHost'], False)
eq_(resp['user']['href'], resp_user2['href'])
    # Adding a lounge user in conflict
url = resp_lounge1['loungeUsers']['href']
    data = dict(user_lounges[1])
data['user'] = {'href': resp_user1['href']}
rv = test_app.post(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 409)
# Changing a lounge user
url = resp_lounge_user1['href']
data = dict(user_lounges[1])
data['user'] = {'href': resp_user3['href']}
data['isHost'] = False
data['showedUp'] = False
rv = test_app.put(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 6)
ok_('href' in resp)
ok_('user' in resp)
ok_('href' in resp['user'])
eq_(resp['topic'], 'I saw the best minds of my generation destroyed by madness.')
eq_(resp['summary'], 'Starving hysterical naked, dragging themselves through the negro streets at dawn looking for an angry fix.')
eq_(resp['showedUp'], False)
eq_(resp['isHost'], False)
eq_(resp['user']['href'], resp_user3['href'])
@with_setup(setup_func, teardown_func)
def test_host_application():
# Creating admin
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[1]), headers = headers)
resp = json.loads(rv.data.decode('utf-8'))
admin_href = resp['href']
admin_key_href = resp['key']['href']
# Hack because no admins exist yet; in practice one admin would be manually added to the database
admin = User.query.get(1)
admin.role = 'admin'
db.session.commit()
admin_key = admin.key.key
# Creating user
url = '/api/users/'
rv = test_app.post(url, data = json.dumps(users[2]), headers = headers)
resp_user1 = json.loads(rv.data.decode('utf-8'))
# No user host applications
url = resp_user1['hostApplications']['href']
rv = test_app.get(url)
check_content_type(rv.headers)
eq_(rv.status_code, 200)
resp = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 2)
ok_('href' in resp)
ok_('items' in resp)
eq_(resp['href'], url)
eq_(len(resp['items']), 0)
# Need authentication to create a host application
url = resp_user1['hostApplications']['href']
rv = test_app.post(url, data = json.dumps(host_applications[0]))
check_content_type(rv.headers)
eq_(rv.status_code, 401)
resp = json.loads(rv.data.decode('utf-8'))
# Creating a host application
url = resp_user1['hostApplications']['href']
    headers['Authorization-API-Key'] = User.query.get(2).key.key
rv = test_app.post(url, data = json.dumps(host_applications[0]), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 201)
resp = resp_user_host_application1 = json.loads(rv.data.decode('utf-8'))
eq_(len(resp), 3)
ok_('href' in resp)
eq_(resp['application'], 'Make me a host!')
eq_(resp['isApproved'], None)
# Changing the host application
url = resp_user_host_application1['href']
    headers['Authorization-API-Key'] = User.query.get(1).key.key
data = dict(host_applications[0])
data['isApproved'] = True
rv = test_app.put(url, data = json.dumps(data), headers = headers)
check_content_type(rv.headers)
eq_(rv.status_code, 204)
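# A minimal sketch (hypothetical, not in the original module) of the
# post-and-decode sequence every test above repeats, assuming the same
# module-level `test_app` and `headers` fixtures:
def post_json(url, payload, api_key=None):
    hdrs = dict(headers)
    if api_key is not None:
        hdrs['Authorization-API-Key'] = api_key
    rv = test_app.post(url, data=json.dumps(payload), headers=hdrs)
    return rv, json.loads(rv.data.decode('utf-8'))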
| 34.473759 | 136 | 0.634628 | 3,517 | 24,304 | 4.226614 | 0.068524 | 0.033434 | 0.042381 | 0.043727 | 0.865859 | 0.831887 | 0.809216 | 0.794215 | 0.771813 | 0.745913 | 0 | 0.01834 | 0.19684 | 24,304 | 704 | 137 | 34.522727 | 0.743186 | 0.07147 | 0 | 0.751748 | 0 | 0.04021 | 0.175924 | 0.027906 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024476 | false | 0.003497 | 0.012238 | 0 | 0.036713 | 0.001748 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6ac597a95f81ed833824eb1505ac4d3af893f3aa | 26,360 | py | Python | data_generation/shapeset/shapeset2_1cspo_2_3.5000.test_code/templates_v3.py | j3bian/Image-Background-Compression | 62937707e3d1e503b3b567a6fd695cec5f80751a | [
"Apache-2.0"
] | 3 | 2018-04-18T02:13:41.000Z | 2019-12-01T12:45:24.000Z | data_generation/shapeset/shapeset2_1cspo_2_3.5000.test_code/templates_v3.py | j3bian/Image-Background-Compression | 62937707e3d1e503b3b567a6fd695cec5f80751a | [
"Apache-2.0"
] | null | null | null | data_generation/shapeset/shapeset2_1cspo_2_3.5000.test_code/templates_v3.py | j3bian/Image-Background-Compression | 62937707e3d1e503b3b567a6fd695cec5f80751a | [
"Apache-2.0"
] | 1 | 2018-10-21T09:30:13.000Z | 2018-10-21T09:30:13.000Z | #!/usr/local/bin/python
# builds sets of templates
from random import Random
#++++++++
RANDOM = Random()
# related changes: s/random/RANDOM/
# reason: different modules should use different random number generators so they don't step on each others' toes
#--------
################################################################################################################
# Semantic Knowledge (Vocabulary)
# Hierarchical relations, including
# "is-a"
vowels = 'aeiou'
VERB = ['is','are','Is','Are']
NEGATION = 'not'
article = 'a'
# references
DETERMIN = ['the','There','The','this']
PRONOUN = ['it','they','its','It','their']
# Spacial relations, including
# positionAt
spacePREPOSITION = ['on','of','in','at']
# Clause relations, including
# CONJUNCTIONs
CONJUNCTION = ['and','with']
# DISJUNCTION
DISJUNCTION = ['or','than']
# Temporal relations, including
# precedence
timePREPOSITION = ['before','after']
# Measure relations, including
# cardinal numbers (up to the number of objects)
CARDINAL = ['one','two','three']
# ordinal numbers (counting the number of objects)
ORDINAL = ['first','second','third']
COMPSIZE = ['smaller','bigger']
# lists of positions
vertical = ['top','bottom']
horizontal = ['left','right']
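# Illustrative sanity check of the indexing conventions the template builders
# below rely on (added for clarity, not part of the original generator):
assert VERB[0] == 'is' and VERB[2] == 'Is' and DETERMIN[0] == 'the'
assert DISJUNCTION[0] == 'or' and spacePREPOSITION[3] == 'at'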
##############################################################################################################
# Production Rules for Object description
# randomized description of an object
def the_object(shape,color,size):
#1. a shape
#2. a color shape
#3. a size color shape
if size.rstrip() in ['medium','NA']: size = ''
description = [DETERMIN[0] + ' ' +shape, DETERMIN[0] +' '+color +' '+ shape, DETERMIN[0] +' '+ size+ ' '+color +' '+ shape]
return RANDOM.choice(description)
def the_object_sameShape(shape,color,size):
#1. a shape
#2. a color shape
#3. a size color shape
if size.rstrip() in ['medium','NA']: size = ''
description = [DETERMIN[0] +' '+color +' '+ shape, DETERMIN[0] +' '+ size+ ' '+color +' '+ shape]
return RANDOM.choice(description)
def the_object_sameShapeColor(shape,color,size):
#1. a shape
#2. a color shape
#3. a size color shape
if size.rstrip() in ['medium','NA']: size = ''
return DETERMIN[0] +' '+ size+ ' '+color +' '+ shape
def an_object(shape,color,size):
#1. a shape
#2. a color shape
#3. a size color shape
if size.rstrip() in ['medium','NA']: size = ''
article0 = choose_article(shape)
article1 = choose_article(color)
article2 = choose_article(size)
    if size == '': article2 = choose_article(color)
    description = [article0 + ' ' +shape, article1 +' '+color +' '+ shape, article2 +' '+ size+ ' '+color +' '+ shape]
return RANDOM.choice(description)
def an_object_sameShape(shape,color,size):
#1. a shape
#2. a color shape
#3. a size color shape
if size.rstrip() in ['medium','NA']: size = ''
article1 = choose_article(color)
article2 = choose_article(size)
if size == '': article2 = choose_article(color)
description = [article1 +' '+color +' '+ shape, article2 +' '+ size+ ' '+ color+ ' '+ shape]
return RANDOM.choice(description)
def an_object_sameShapeColor(shape,color,size):
#1. a shape
#2. a color shape
#3. a size color shape
if size.rstrip() in ['medium','NA']: size = ''
article2 = choose_article(size)
if size == '': article2 = choose_article(color)
return article2 +' '+ size+ ' '+ color+ ' '+ shape
def choose_article(word):
if len(word)==0:
return "MISSING"
article = 'a'
if word.replace(" ","")[0] in vowels: article = 'an'
return article
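# Illustrative use of the builders above (added; the_object/an_object pick a
# template at random, so only the deterministic article logic is asserted):
assert choose_article('ellipse') == 'an' and choose_article('square') == 'a'
# the_object('triangle', 'red', 'big') -> e.g. 'the big red triangle'
# an_object('ellipse', 'green', 'NA')  -> e.g. 'an ellipse' (size 'NA' is dropped)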
#####################################################################################################################
#Questions for one object, WITHOUT reference to other objects on a screen
def one_object_questions_location_vert(obj,obj1,vr):
questionOut = [ DETERMIN[1]+ ' '+ VERB[0]+ ' '+ obj1 + '. '+ VERB[2] +' '+ PRONOUN[0] + ' ' + spacePREPOSITION[3] + ' ' + DETERMIN[0]+ ' ' + vertical[0] + ' '+DISJUNCTION[0]+' '+DETERMIN[0]+ ' '+ vertical[1]+'?'+' '+vr,
DETERMIN[1]+ ' '+ VERB[0]+ ' '+ obj1 + '. '+ VERB[2] +' '+ PRONOUN[0] + ' ' + 'more to' + ' '+ DETERMIN[0]+ ' ' + vertical[0] + ' '+DISJUNCTION[0]+' '+DETERMIN[0]+ ' '+ vertical[1]+'?'+' '+vr,
VERB[2] +' '+ obj + ' ' + spacePREPOSITION[3] + ' ' + DETERMIN[0]+ ' ' + vertical[0] + ' '+DISJUNCTION[0]+' '+ DETERMIN[0]+ ' '+vertical[1]+'?'+' '+vr,
VERB[2] +' '+ obj + ' ' + 'more' + ' '+'to'+ ' ' + DETERMIN[0]+ ' ' + vertical[0] + ' '+DISJUNCTION[0]+' '+ DETERMIN[0]+ ' '+vertical[1]+'?'+' '+vr,
VERB[2] + ' '+ obj +' '+ spacePREPOSITION[3] +' ' + DETERMIN[0]+ ' '+ vertical[0] + ' '+ DISJUNCTION[0]+ ' ' + DETERMIN[0]+' '+vertical[1]+'?'+' '+vr,
'Where' + ' '+ VERB[0]+ ' '+ obj+','+' '+'more'+ ' '+'to'+ ' '+ DETERMIN[0]+' '+vertical[0] + ' '+DISJUNCTION[0]+' '+DETERMIN[0]+ ' '+ vertical[1]+'?'+' '+vr,
'Where' + ' '+ VERB[0]+ ' '+ obj+','+' '+'closer'+ ' '+'to'+ ' '+ DETERMIN[0]+' '+vertical[0] + ' '+DISJUNCTION[0]+' '+DETERMIN[0]+ ' '+ vertical[1]+'?'+' '+vr,
'Where' + ' '+ VERB[0]+ ' '+ obj+'?'+' '+VERB[2] +' '+ 'it' + ' ' + 'more to' + ' ' + DETERMIN[0]+ ' ' + vertical[0] + ' '+DISJUNCTION[0]+' '+DETERMIN[0]+ ' '+ vertical[1]+'?'+' '+vr,
'Where' + ' '+ VERB[0]+ ' '+ obj+'?'+ ' ' + VERB[2] +' '+ 'it' + ' ' + 'closer' + ' '+'to'+' ' + DETERMIN[0]+ ' ' + vertical[0] + ' '+DISJUNCTION[0]+' '+DETERMIN[0]+ ' '+ vertical[1]+'?'+' '+vr]
return questionOut
def one_object_location_answer_vert(obj,vert):
return obj +' '+ VERB[0]+' '+spacePREPOSITION[3]+' '+DETERMIN[0]+' '+vert
def one_object_questions_location_hor(obj,obj1,hr):
# templates
    # 1. There is a color shape. Is it at the left or the right?
    # 2. There is a size color shape. Is it at the left or the right?
    # 3. Is an object at the left or right?
questionOut = [ DETERMIN[1]+ ' '+ VERB[0]+ ' '+ obj1 + '. '+ VERB[2] +' '+ 'it' + ' ' + ' '+ spacePREPOSITION[3] + ' ' +
DETERMIN[0]+ ' ' + horizontal[0] + ' '+DISJUNCTION[0]+ ' ' + DETERMIN[0]+' '+horizontal[1]+'?'+' '+hr,
DETERMIN[1]+ ' '+ VERB[0]+ ' '+ obj1 + '. '+ VERB[2] +' '+ 'it' + ' ' + ' ' + 'closer' + ' '+'to'+ ' ' + DETERMIN[0]+ ' ' + horizontal[0] + ' '+DISJUNCTION[0]+ ' ' + DETERMIN[0]+' '+horizontal[1]+'?'+' '+hr,
VERB[2] + ' '+ obj +' '+ spacePREPOSITION[3] + ' ' + DETERMIN[0]+' ' + horizontal[0] + ' '+ DISJUNCTION[0]+ ' ' + DETERMIN[0]+' '+horizontal[1]+'?'+' '+hr,
VERB[2] + ' '+ obj +' '+ 'closer' + ' '+'to'+ ' ' + DETERMIN[0]+' ' + horizontal[0] + ' '+ DISJUNCTION[0]+ ' ' + DETERMIN[0]+' '+horizontal[1]+'?'+' '+hr,
'Where' + ' '+ VERB[0]+ ' '+ obj+','+' '+'more'+ ' '+'to'+ ' '+ DETERMIN[0]+' '+horizontal[0] + ' '+DISJUNCTION[0]+' '+DETERMIN[0]+ ' '+ horizontal[1]+'?'+' '+hr,
'Where' + ' '+ VERB[0]+ ' '+ obj+'? '+VERB[2] +' '+ PRONOUN[0] + ' ' + ' '+ 'more to' + ' ' + DETERMIN[0]+ ' ' + horizontal[0] + ' '+DISJUNCTION[0]+ ' ' + DETERMIN[0]+' '+horizontal[1]+'?'+' '+hr,
'Where' + ' '+ VERB[0]+ ' '+ obj+','+ 'closer' + ' '+'to'+ ' ' + DETERMIN[0]+ ' ' + horizontal[0] + ' '+DISJUNCTION[0]+ ' ' + DETERMIN[0]+' '+horizontal[1]+'?'+' '+hr]
return questionOut
def one_object_location_answer_hor(obj,hor):
return obj +' '+ VERB[0]+' '+spacePREPOSITION[3]+' '+DETERMIN[0]+' '+hor
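# Usage sketch (added for illustration): each question generator returns a
# list of phrasings with the expected answer appended, and the *_answer_*
# helpers build the matching long-form answer:
_demo_qs = one_object_questions_location_hor('the red square', 'a red square', 'left')
_demo_answer = one_object_location_answer_hor('the red square', 'left')
# _demo_answer == 'the red square is at the left'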
def one_object_questions_shape(cl,sz,sh,v,w):
    # 1. What is the shape of the (size) color object (at hor vert)?
    # 2. There is a color object. What is its shape?
article= choose_article(cl)
if sz in ['medium','NA']: sz=''
if v == 'NA' and w == 'NA':
sentenceOut = ['What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + cl +' '+ 'object' +'?'+' '+sh,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + ' '+ sz+ ' '+ cl +' '+ 'object' +'?'+' '+sh,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + cl +' '+ 'object' +'?'+' '+sh,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + ' '+ sz+ ' '+ cl +' '+ 'object'+'?'+' '+sh,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+cl+' '+'object'+ '. '+ ' '+'What'+' '+VERB[0]+' '+PRONOUN[2]+' '+'shape' +'?'+' '+sh,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+cl+' '+'object'+ '. '+ ' '+'What'+' '+'shape'+' '+ VERB[0]+' '+PRONOUN[0] +'?'+' '+sh]
else:
if v == 'NA': v=''
if w == 'NA': w=''
sentenceOut = ['What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + cl +' '+ 'object' + ' ' + spacePREPOSITION[3] + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w +'?'+' '+sh,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + ' '+ sz+ ' '+ cl +' '+ 'object' + ' ' + spacePREPOSITION[3] + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w+'?'+' '+sh,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + cl +' '+ 'object' +'?'+' '+sh,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + ' '+ sz+ ' '+ cl +' '+ 'object'+'?'+' '+sh,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+cl+' '+'object'+ '. '+ ' '+'What'+' '+VERB[0]+' '+PRONOUN[2]+' '+'shape' +'?'+' '+sh,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+cl+' '+'object'+ '. '+ ' '+'What'+' '+'shape'+' '+ VERB[0]+' '+PRONOUN[0] +'?'+' '+sh]
return sentenceOut
def one_object_questions_size(cl,sz,sh,v,w):
    # 1. What is the size of the color object / the color shape (at hor vert)?
    # 2. There is a color object. What is its size?
article= choose_article(cl)
if v == 'NA' and w == 'NA':
sentenceOut = ['What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'size'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + cl +' '+ 'object' +'?'+' '+sz,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'size'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + ' '+ cl +' '+ sh +'?'+' '+sz,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'size'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + cl +' '+ 'object' +'?'+' '+sz,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'size'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + ' '+ cl +' '+ sh +'?'+' '+sz,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+cl+' '+'object'+ '. '+ ' '+'What'+' '+VERB[0]+' '+PRONOUN[2]+' '+'size' +'?'+' '+sz,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+cl+' '+'object'+ '. '+ ' '+'What'+' '+'size'+' '+ VERB[0]+' '+PRONOUN[0] +'?'+' '+sz]
else:
if v == 'NA': v=''
if w == 'NA': w=''
sentenceOut = ['What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'size'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + cl +' '+ 'object' + ' ' + spacePREPOSITION[3] + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w +'?'+' '+sz,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'size'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+ ' '+ cl +' '+ sh + ' ' + spacePREPOSITION[3] + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w+'?'+' '+sz,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'size'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + cl +' '+ 'object' +'?'+' '+sz,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'size'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + ' '+ cl +' '+ sh +'?'+' '+sz,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+cl+' '+'object'+ '. '+ ' '+'What'+' '+VERB[0]+' '+PRONOUN[2]+' '+'size' +'?'+' '+sz,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+cl+' '+'object'+ '. '+ ' '+'What'+' '+'size'+' '+ VERB[0]+' '+PRONOUN[0] +'?'+' '+sz]
return sentenceOut
def one_object_questions_color(cl,sz,sh,v,w):
# templates
    # 3. A shape is in the horizontal vertical. What is its color?
    # 4. A shape is in the horizontal vertical. Is it color or non-color?
    # 5. A shape is in the horizontal vertical. Is this color or non-color?
if sz in ['medium','NA']:
article1 = choose_article(sh)
if v == 'NA' and w == 'NA':
sentenceOut = [ 'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'color'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + sh +'?'+' '+cl,
DETERMIN[1]+' '+ VERB[0]+' an object'+ '. '+ ' '+'What'+' '+VERB[0]+' '+PRONOUN[2]+' '+'color' +'?'+' '+cl,
DETERMIN[1]+' '+ VERB[0]+' ' + article1 +' '+sh +'.'+ ' '+'What'+' '+'color'+' '+VERB[0]+' '+PRONOUN[0] +'?'+' '+cl,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'color'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' '+ sh +'?'+' '+cl]
else:
if v == 'NA': v=''
if w == 'NA': w=''
sentenceOut = [ 'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'color'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + sh + ' ' + 'more to' + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w+'?'+' '+cl,
DETERMIN[1]+' '+ VERB[0]+' an object'+ '. '+ ' '+'What'+' '+VERB[0]+' '+PRONOUN[2]+' '+'color' +'?'+' '+cl,
DETERMIN[1]+' '+ VERB[0]+' ' + article1 +' '+sh + ' ' + spacePREPOSITION[3] + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w+'.'+ ' '+'What'+' '+'color'+' '+VERB[0]+' '+PRONOUN[0] +'?'+' '+cl,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'color'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' '+ sh + ' ' + spacePREPOSITION[3] + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w+'?'+' '+cl]
else:
article = choose_article(sz)
article1 = choose_article(sh)
if v == 'NA' and w == 'NA':
sentenceOut = [ 'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'color'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + sh +'?'+' '+cl,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+ sz +' '+'object'+ '. '+ ' '+'What'+' '+VERB[0]+' '+PRONOUN[2]+' '+'color' +'?'+' '+cl,
DETERMIN[1]+' '+ VERB[0]+' ' + article1 +' '+sh + '.'+ ' '+'What'+' '+'color'+' '+VERB[0]+' '+PRONOUN[0] +'?'+' '+cl,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'color'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' +sz+' '+ sh + '?'+' '+cl]
else:
if v == 'NA': v=''
if w == 'NA': w=''
sentenceOut = [ 'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'color'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' + sh + ' ' + 'more to' + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w+'?'+' '+cl,
DETERMIN[1]+' '+ VERB[0]+' ' + article +' '+ sz +' '+'object'+ '. '+ ' '+'What'+' '+VERB[0]+' '+PRONOUN[2]+' '+'color' +'?'+' '+cl,
DETERMIN[1]+' '+ VERB[0]+' ' + article1 +' '+sh + ' ' + spacePREPOSITION[3] + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w+'.'+ ' '+'What'+' '+'color'+' '+VERB[0]+' '+PRONOUN[0] +'?'+' '+cl,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'color'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' +sz+' '+ sh + ' ' + spacePREPOSITION[3] + ' '+ DETERMIN[0]+ ' ' + v + ' '+ w+'?'+' '+cl]
return sentenceOut
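# Hypothetical call showing the 'NA' handling above: with v == w == 'NA' the
# position clause is dropped, so every phrasing omits 'at the ...':
_demo_color_qs = one_object_questions_color('red', 'medium', 'square', 'NA', 'NA')
# e.g. 'What is the color of the square? red'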
#######################################################################################################
# Questions for two objects
def two_objects_questions_color(cl1,sz1,sh1,obj2,obj2t,relposition):
article = choose_article(sh1)
if sz1 in ['medium','NA']: sz1 = ''
sentenceOut = ['What' + ' ' + VERB[0] + ' ' + DETERMIN[0] + ' ' + 'color' + ' ' + spacePREPOSITION[1] + ' ' + DETERMIN[0] + ' '+ sh1 + ' ' +relposition + ' ' + obj2t + '?' + ' '+ cl1,
'What' + ' ' + VERB[0] + ' ' + DETERMIN[0] + ' ' + 'color' + ' ' + spacePREPOSITION[1] + ' ' + DETERMIN[0] + ' '+ sz1 + ' ' + sh1 + ' ' +relposition + ' ' + obj2t + '?' + ' '+ cl1,
DETERMIN[1] + ' ' +VERB[0] +' '+ article +' '+ sh1 + ' ' + relposition + ' '+ obj2 +'.'+' '+ 'What'+' '+'color'+' '+ VERB[0] + ' ' + PRONOUN[0] + '?'+' ' +cl1,
DETERMIN[1] + ' ' +VERB[0]+' '+ 'an' +' ' + 'object' + ' ' + relposition + ' '+ obj2 + '.' + ' ' + 'What' + ' ' + VERB[0] + ' ' + PRONOUN[2] + ' ' + 'color' + '?' + ' ' + cl1]
return sentenceOut
def two_objects_questions_color_sameLocation(cl1,sz1,sh1,obj2,obj2t,relposition):
article = choose_article(sh1)
if sz1 in ['medium','NA']: sz1 = ''
sentenceOut = ['What' + ' ' + VERB[0] + ' ' + DETERMIN[0] + ' ' + 'color' + ' ' + spacePREPOSITION[1] + ' ' + DETERMIN[0] + ' '+ sh1 + ' ' +relposition + ' ' + obj2t + '?' + ' '+ cl1,
'What' + ' ' + VERB[0] + ' ' + DETERMIN[0] + ' ' + 'color' + ' ' + spacePREPOSITION[1] + ' ' + DETERMIN[0] + ' '+ sz1 + ' ' + sh1 + ' ' +relposition + ' ' + obj2t + '?' + ' '+ cl1,
DETERMIN[1] + ' ' +VERB[0] +' '+ article +' '+ sh1 + ' ' + relposition + ' '+ obj2 +'.'+' '+ 'What'+' '+'color'+' '+ VERB[0] + ' ' + PRONOUN[0] + '?'+' ' +cl1]
return sentenceOut
def two_objects_questions_color_sameLocationShape(cl1,sz1,sh1,obj2,obj2t,relposition):
article = choose_article(sh1)
if sz1 in ['medium','NA']: sz1 = ''
sentenceOut = [ 'What' + ' ' + VERB[0] + ' ' + DETERMIN[0] + ' ' + 'color' + ' ' + spacePREPOSITION[1] + ' ' + DETERMIN[0] + ' '+ sz1 + ' ' + sh1 + ' ' +relposition + ' ' + obj2t + '?' + ' '+ cl1]
return sentenceOut
def two_objects_questions_color_abspos(cl1,sz1,sh1,obj2,obj2t,abspos):
    # template: <an object> is <abspos>. What is its color?
article = choose_article(sz1)
if sz1 in ['medium','NA']:
sz1 = ''
article = choose_article(sh1)
sentenceOut = [article + ' '+ sz1+ ' '+ sh1+ ' ' + VERB[0] + ' ' + abspos +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[2] + ' ' + 'color' + '?' + ' '+cl1]
return sentenceOut
def two_objects_questions_shape(cl1,sz1,sh1,obj2,obj2t,relposition):
article = choose_article(cl1)
if sz1 in ['medium','NA']: sz1 = ''
sentenceOut = [ 'What'+' '+ VERB[0]+' '+DETERMIN[0]+' ' + 'shape' + ' ' +spacePREPOSITION[1] + ' '+ DETERMIN[0]+' '+ 'object' + ' ' +relposition + ' '+obj2t+'?'+' '+sh1,
'What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' +cl1+' ' + 'object' + ' ' + relposition + ' '+obj2t+'?' + ' '+ sh1,
DETERMIN[1] + ' ' +VERB[0]+' '+ 'an' +' '+'object' + ' ' + relposition + ' '+ obj2 +'.'+' '+ 'What'+' '+'shape'+' '+ VERB[0] + ' ' + PRONOUN[0] + '?'+' ' +sh1,
DETERMIN[1]+' '+VERB[0]+' '+ article +' '+cl1+' '+'object' + ' ' +relposition + ' '+obj2 +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[2] + ' ' + 'shape' + '?' + ' '+sh1,
DETERMIN[1]+' '+VERB[0]+' '+ article+' '+sz1+' '+cl1+' '+'object' + ' ' +relposition + ' '+obj2 +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[2] + ' ' + 'shape' + '?' + ' '+sh1]
return sentenceOut
def two_objects_questions_shape_sameLocationColor(cl1,sz1,sh1,obj2,obj2t,relposition):
    # templates: ask the shape of an object sharing location and color with another object
article = choose_article(cl1)
if sz1 in ['medium','NA']: sz1 = ''
sentenceOut = [ DETERMIN[1]+' '+VERB[0]+' '+ article +' '+sz1+' '+'object' + ' ' +relposition + ' '+obj2 +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[2] + ' ' + 'shape' + '?' + ' '+sh1,
DETERMIN[1]+' '+VERB[0]+' '+ article+' '+sz1+' '+cl1+' '+'object' + ' ' +relposition + ' '+obj2 +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[2] + ' ' + 'shape' + '?' + ' '+sh1]
return sentenceOut
def two_objects_questions_shape_sameLocationSize(cl1,sz1,sh1,obj2,obj2t,relposition):
    # templates: ask the shape of an object sharing location and size with another object
article = choose_article(cl1)
if sz1 in ['medium','NA']: sz1 = ''
sentenceOut = ['What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' +cl1+' ' + 'object' + ' ' + relposition + ' '+obj2t+'?' + ' '+ sh1,
DETERMIN[1]+' '+VERB[0]+' '+ article +' '+cl1+' '+'object' + ' ' +relposition + ' '+obj2 +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[2] + ' ' + 'shape' + '?' + ' '+sh1,
DETERMIN[1]+' '+VERB[0]+' '+ article+' '+sz1+' '+cl1+' '+'object' + ' ' +relposition + ' '+obj2 +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[2] + ' ' + 'shape' + '?' + ' '+sh1]
return sentenceOut
def two_objects_questions_shape_sameLocationColorSize(cl1,sz1,sh1,obj2,obj2t,relposition):
    # templates: ask the shape of objects sharing location, color and size (plural phrasing)
article = choose_article(cl1)
if sz1 in ['medium','NA']: sz1 = ''
sentenceOut = ['What'+' '+ VERB[0]+' '+DETERMIN[0]+' '+'shape'+' '+spacePREPOSITION[1]+' '+DETERMIN[0]+' ' +cl1+' ' + 'objects' + ' ' + relposition + ' '+obj2t+'?' + ' '+ sh1,
DETERMIN[1]+' '+VERB[1]+' ' +sz1+' '+'objects' + ' ' +relposition + ' '+obj2 +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[4] + ' ' + 'shape' + '?' + ' '+sh1,
DETERMIN[1]+' '+VERB[0]+' '+ sz1+' '+cl1+' '+'objects' + ' ' +relposition + ' '+obj2 +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[4] + ' ' + 'shape' + '?' + ' '+sh1]
return sentenceOut
def two_objects_questions_shape_abspos(cl1,sz1,sh1,obj2,obj2t,abspos):
    # template: <an object> is <abspos>. What is its shape?
article = choose_article(sz1)
if sz1 in ['medium','NA']:
sz1 = ''
article = choose_article(cl1)
sentenceOut = [article + ' '+ sz1+ ' '+ cl1+ ' '+ 'object' +' '+ VERB[0] + ' ' + abspos +'.'+' '+'What'+' '+ VERB[0]+' '+ PRONOUN[2] + ' ' + 'shape' + '?' + ' '+sh1]
return sentenceOut
def two_objects_questions_location_hor (obj1,obj1t,obj2,obj2t,relposition,absp):
    # templates: is obj1 (to the) left or right of obj2?
    hor_choice = RANDOM.choice('12')
if hor_choice == '1':
hor1 = 'left'
hor2 = 'right'
elif hor_choice == '2':
hor1 = 'right'
hor2 = 'left'
preposition = ['to the','']
prep = RANDOM.choice(preposition)
sentenceOut = [ VERB[2]+' '+ obj1t +' '+ prep + ' ' + hor1 + ' '+ DISJUNCTION[0] + ' '+prep +' '+hor2 +' '+ spacePREPOSITION[1]+' '+obj2t+'?'+' ' +relposition,
obj1 + ' ' + VERB[0] + ' ' + absp+'.'+' '+ VERB[2]+' '+ PRONOUN[0] +' '+ prep + ' ' + hor1 + ' '+ DISJUNCTION[0] + ' '+prep +' '+hor2 +' '+ spacePREPOSITION[1]+' '+obj2t+'?'+' ' +relposition]
return sentenceOut
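# Illustrative call (added, not in the original): the left/right order and the
# optional 'to the' prefix are randomized, so one possible phrasing is
# 'Is a red square to the left or to the right of the blue triangle? left'
_demo_hor = two_objects_questions_location_hor('The red square', 'a red square',
                                               'the blue triangle', 'the blue triangle',
                                               'left', 'at the top left')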
def two_objects_questions_location_hor_abs(obj1,obj1t,obj2,obj2t,relposition,absp):
    # template: obj1 is <absp>. Is it (to the) left or right of obj2?
    hor_choice = RANDOM.choice('12')
if hor_choice == '1':
hor1 = 'left'
hor2 = 'right'
elif hor_choice == '2':
hor1 = 'right'
hor2 = 'left'
preposition = ['to the','']
prep = RANDOM.choice(preposition)
    sentenceOut = [ obj1 + ' ' + VERB[0] + ' ' + absp+'.'+' '+ VERB[2]+' '+ PRONOUN[0] +' '+ prep + ' ' + hor1 + ' '+ DISJUNCTION[0] + ' '+prep +' '+hor2 +' '+ spacePREPOSITION[1]+' '+obj2t+'?'+' ' +relposition]
return sentenceOut
def two_objects_questions_location_vert (obj1,obj1t,obj2,obj2t,relposition,absp):
    # templates: is obj1 above or below obj2?
vert_choice = RANDOM.choice('12')
if vert_choice == '1':
vert1 = 'above'
vert2 = 'below'
elif vert_choice == '2':
vert1 = 'below'
vert2 = 'above'
sentenceOut = [ VERB[2]+' '+ obj1t + ' ' + vert1 + ' '+ DISJUNCTION[0] +' '+vert2 +' '+ spacePREPOSITION[1]+' '+obj2t+'?'+' ' +relposition,
obj1 + ' ' + VERB[0] + ' ' + absp +'.'+' '+ VERB[2]+' '+ PRONOUN[0] + ' ' + vert1 + ' '+ DISJUNCTION[0] +' '+ vert2 +' '+ spacePREPOSITION[1]+' '+obj2t+'?'+' ' +relposition]
return sentenceOut
def two_objects_questions_location_vert_abs (obj1,obj1t,obj2,obj2t,relposition,absp):
    # template: obj1 is <absp>. Is it above or below obj2?
vert_choice = RANDOM.choice('12')
if vert_choice == '1':
vert1 = 'above'
vert2 = 'below'
elif vert_choice == '2':
vert1 = 'below'
vert2 = 'above'
sentenceOut = [ obj1 + ' ' + VERB[0] + ' ' + absp +'.'+' '+ VERB[2]+' '+ PRONOUN[0] + ' ' + vert1 + ' '+ DISJUNCTION[0] +' '+ vert2 +' '+ spacePREPOSITION[1]+' '+obj2t+'?'+' ' +relposition]
return sentenceOut
def two_objects_questions_size (obj1,obj2,obj2t,relposition,relsize):
sentenceOut = [ obj1 + ' ' + VERB[0] + ' ' + relposition + ' '+ obj2 +'.'+' '+ VERB[2]+' '+ PRONOUN[0] + ' ' + COMPSIZE[0] + ' '+ DISJUNCTION[0] +' '+ COMPSIZE[1] +' '+ DISJUNCTION[1]+' '+obj2t+'?'+' ' +relsize]
return sentenceOut
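# Illustrative call (added) pairing the comparison template with its answer:
_demo_size = two_objects_questions_size('The red square', 'the blue triangle',
                                        'the blue triangle', 'above', 'smaller')
# -> ['The red square is above the blue triangle. Is it smaller or bigger
#     than the blue triangle? smaller']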
def two_objects_questions_size_sameAttribute (sz1,cl1,sh1,obj2,obj2t,relposition,relsize):
article = choose_article(cl1)
sentenceOut = [ VERB[2]+' '+DETERMIN[0] +' '+ cl1 +' ' + sh1 + ' ' + COMPSIZE[0] + ' '+ DISJUNCTION[0] +' '+COMPSIZE[1] +' '+ DISJUNCTION[1]+' '+obj2t+'?'+' ' +relsize,
article +' '+ cl1 +' ' + sh1 + ' ' + VERB[0] + ' ' + relposition + ' '+ obj2 +'.'+' '+ VERB[2]+' '+ PRONOUN[0] + ' ' + COMPSIZE[0] + ' '+ DISJUNCTION[0] +' '+ COMPSIZE[1] +' '+ DISJUNCTION[1]+' '+obj2t+'?'+' ' +relsize]
return sentenceOut
def two_objects_questions_size_abspos (obj1,obj2,obj2t,abspos,relsize):
sentenceOut = [ obj1 + ' ' + VERB[0] + ' ' + abspos +'.'+' '+ VERB[2]+' '+ PRONOUN[0] + ' ' + COMPSIZE[0] + ' '+ DISJUNCTION[0] +' '+ COMPSIZE[1] +' '+ DISJUNCTION[1]+' '+obj2t+'?'+' ' +relsize]
return sentenceOut
| 45.292096 | 229 | 0.513581 | 2,980 | 26,360 | 4.496309 | 0.062416 | 0.078588 | 0.034928 | 0.041869 | 0.87708 | 0.854019 | 0.841779 | 0.834689 | 0.798194 | 0.780282 | 0 | 0.037377 | 0.211381 | 26,360 | 581 | 230 | 45.370052 | 0.607177 | 0.135319 | 0 | 0.586716 | 0 | 0 | 0.10887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.00369 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6ac8be8681c8c4f603c18c523e09ef81e6ddeaea | 99,908 | py | Python | unicorn_fy/unicorn_fy.py | StarBalll/unicorn-fy | 4393d53389fbf1d72b376cd9ab7027fd96c21c12 | [
"MIT"
] | 19 | 2020-12-03T22:44:44.000Z | 2021-12-01T21:23:50.000Z | unicorn_fy/unicorn_fy.py | StarBalll/unicorn-fy | 4393d53389fbf1d72b376cd9ab7027fd96c21c12 | [
"MIT"
] | 8 | 2021-03-16T20:49:33.000Z | 2021-11-15T14:35:48.000Z | unicorn_fy/unicorn_fy.py | StarBalll/unicorn-fy | 4393d53389fbf1d72b376cd9ab7027fd96c21c12 | [
"MIT"
] | 7 | 2021-01-29T04:58:13.000Z | 2021-05-15T22:16:19.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# File: unicorn_fy.py
#
# Part of ‘UnicornFy’
# Project website: https://github.com/oliver-zehentleitner/unicorn-fy
# Documentation: https://oliver-zehentleitner.github.io/unicorn-fy
# PyPI: https://pypi.org/project/unicorn-fy
#
# Author: Oliver Zehentleitner
# https://about.me/oliver-zehentleitner
#
# Copyright (c) 2019-2021, Oliver Zehentleitner
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import logging
import time
import requests
import ujson as json
class UnicornFy(object):
"""
Unify received data from crypto exchanges
Supported exchanges:
- Binance.com
- Binance-com-futures
- Binance-com-coin_futures
- Binance-com-margin
- Binance-com-isolated_margin
- Binance.je
- Binance.us
- trBinance.com
- Binance.org
- Jex.com
"""
VERSION = "0.11.0.dev"
def __init__(self):
self.last_update_check_github = {'timestamp': time.time(),
'status': None}
@staticmethod
def binance_org_websocket(stream_data_json):
"""
unicorn_fy binance.org (incl testnet) raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:return: dict
"""
logging.info("Can not convert raw data from binance.org")
return stream_data_json
@staticmethod
def binance_com_websocket(stream_data_json):
"""
unicorn_fy binance.com raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:return: dict
"""
return UnicornFy.binance_websocket(stream_data_json, exchange="binance.com", show_deprecated_warning=False)
@staticmethod
def binance_com_margin_websocket(stream_data_json):
"""
unicorn_fy binance.com-margin raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:return: dict
"""
return UnicornFy.binance_websocket(stream_data_json, exchange="binance.com-margin",
show_deprecated_warning=False)
@staticmethod
def binance_com_isolated_margin_websocket(stream_data_json):
"""
unicorn_fy binance.com-isolated_margin raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:return: dict
"""
return UnicornFy.binance_websocket(stream_data_json,
exchange="binance.com-isolated_margin",
show_deprecated_warning=False)
@staticmethod
def binance_com_futures_websocket(stream_data_json):
"""
unicorn_fy binance.com-futures raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:return: dict
"""
return UnicornFy.binance_futures_websocket(stream_data_json,
exchange="binance.com-futures",
show_deprecated_warning=False)
@staticmethod
def binance_com_coin_futures_websocket(stream_data_json):
"""
unicorn_fy binance.com-coin_futures raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:return: dict
"""
return UnicornFy.binance_coin_futures_websocket(stream_data_json,
exchange="binance.com-coin_futures",
show_deprecated_warning=False)
@staticmethod
def binance_je_websocket(stream_data_json):
"""
unicorn_fy binance.je (Jersey) raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:return: dict
"""
return UnicornFy.binance_websocket(stream_data_json, exchange="binance.je", show_deprecated_warning=False)
@staticmethod
def binance_us_websocket(stream_data_json):
"""
unicorn_fy binance.us (US) raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:return: dict
"""
return UnicornFy.binance_websocket(stream_data_json, exchange="binance.us", show_deprecated_warning=False)
@staticmethod
def trbinance_com_websocket(stream_data_json):
"""
unicorn_fy trbinance.com (TR) raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:return: dict
"""
return UnicornFy.binance_websocket(stream_data_json, exchange="trbinance.com", show_deprecated_warning=False)
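    # Usage sketch for the endpoint wrappers above (hypothetical payload; the
    # key names follow the Binance trade-event fields consumed by the parser
    # below):
    #   raw = json.dumps({"stream": "btcusdt@trade",
    #                     "data": {"e": "trade", "E": 1622548800000,
    #                              "s": "BTCUSDT", "t": 12345, "p": "36000.0",
    #                              "q": "0.5", "b": 88, "a": 99,
    #                              "T": 1622548800000, "m": True, "M": True}})
    #   tidy = UnicornFy.binance_com_websocket(raw)
    #   # -> dict with 'event_type' == 'trade', 'symbol' == 'BTCUSDT',
    #   #    'price' == '36000.0' and a 'unicorn_fied' version tag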
@staticmethod
def binance_websocket(stream_data_json, exchange="binance", show_deprecated_warning=True):
"""
unicorn_fy binance.com raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:param exchange: Exchange endpoint.
:type exchange: str
:param show_deprecated_warning: Show or hide warning
:type show_deprecated_warning: bool
:return: dict
"""
unicorn_fied_data = False
logging.debug("UnicornFy->binance_websocket(" + str(stream_data_json) + ")")
if show_deprecated_warning is True:
logging.warning("Using `UnicornFy.binance_websocket()` is deprecated, use "
"`UnicornFy.binance_com_websocket()` or `UnicornFy.binance_je_websocket()` instead!")
if UnicornFy.is_json(stream_data_json) is False:
return stream_data_json
stream_data = json.loads(stream_data_json)
try:
if stream_data[0]['e'] == "24hrMiniTicker":
stream_data = {'data': {'e': "24hrMiniTicker"},
'items': stream_data}
elif stream_data[0]['e'] == "24hrTicker":
stream_data = {'data': {'e': "24hrTicker"},
'items': stream_data}
except KeyError:
pass
try:
if "!ticker@arr" in stream_data['stream']:
stream_data = {'data': {'e': "24hrTicker"},
'items': stream_data['data']}
elif "!miniTicker@arr" in stream_data['stream']:
stream_data = {'data': {'e': "24hrMiniTicker"},
'items': stream_data['data']}
except KeyError:
pass
try:
if stream_data['e'] == 'outboundAccountInfo':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'executionReport':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'outboundAccountPosition':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'listStatus':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'balanceUpdate':
stream_data = {'data': stream_data}
except KeyError:
pass
try:
if stream_data['stream'].find('@depth5') != -1:
stream_data['data']['e'] = "depth"
stream_data['data']['depth_level'] = 5
elif stream_data['stream'].find('@depth10') != -1:
stream_data['data']['e'] = "depth"
stream_data['data']['depth_level'] = 10
elif stream_data['stream'].find('@depth20') != -1:
stream_data['data']['e'] = "depth"
stream_data['data']['depth_level'] = 20
elif "@bookTicker" in stream_data['stream']:
stream_data['data']['e'] = "bookTicker"
except KeyError:
pass
try:
# return if already unicorn_fied
if stream_data['unicorn_fied']:
return stream_data
except KeyError:
pass
try:
if stream_data['result'] is None:
unicorn_fied_version = [exchange, UnicornFy.VERSION]
stream_data['unicorn_fied'] = unicorn_fied_version
logging.debug(f"UnicornFy->binance_websocket({str(stream_data)}, {str(exchange)}")
return stream_data
else:
unicorn_fied_version = [exchange, UnicornFy.VERSION]
stream_data['unicorn_fied'] = unicorn_fied_version
logging.debug(f"UnicornFy->binance_websocket({str(stream_data)}, {str(exchange)}")
return stream_data
except KeyError:
pass
try:
if stream_data['error']:
unicorn_fied_version = [exchange, UnicornFy.VERSION]
stream_data['unicorn_fied'] = unicorn_fied_version
logging.debug(f"UnicornFy->binance_websocket({str(stream_data)}, {str(exchange)}")
return stream_data
except KeyError:
pass
if stream_data['data']['e'] == 'aggTrade':
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'aggregate_trade_id': stream_data['data']['a'],
'price': stream_data['data']['p'],
'quantity': stream_data['data']['q'],
'first_trade_id': stream_data['data']['f'],
'last_trade_id': stream_data['data']['l'],
'trade_time': stream_data['data']['T'],
'is_market_maker': stream_data['data']['m'],
'ignore': stream_data['data']['M']}
elif stream_data['data']['e'] == 'listStatus':
objects = []
for item in stream_data['data']['O']:
objects.append({'symbol': item['s'],
'order_id': item['i'],
'client_order_id': item['c']})
unicorn_fied_data = {'stream_type': stream_data['data']['s'].lower() + "@listStatus",
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'order_list_id': stream_data['data']['g'],
'contingency_type': stream_data['data']['c'],
'list_status_type': stream_data['data']['l'],
'list_order_status': stream_data['data']['L'],
'list_reject_reason': stream_data['data']['r'],
'list_client_order_id': stream_data['data']['C'],
'transaction_time': stream_data['data']['T'],
'objects': objects}
elif stream_data['data']['e'] == 'trade':
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'trade_id': stream_data['data']['t'],
'price': stream_data['data']['p'],
'quantity': stream_data['data']['q'],
'buyer_order_id': stream_data['data']['b'],
'seller_order_id': stream_data['data']['a'],
'trade_time': stream_data['data']['T'],
'is_market_maker': stream_data['data']['m'],
'ignore': stream_data['data']['M']}
elif stream_data['data']['e'] == 'bookTicker':
unicorn_fied_data = {'stream_type': stream_data['stream'],
'order_book_update_id': stream_data['data']['u'],
'symbol': stream_data['data']['s'],
'best_bid_price': stream_data['data']['b'],
'best_bid_quantity': stream_data['data']['B'],
'best_ask_price': stream_data['data']['a'],
'best_ask_quantity': stream_data['data']['A'],
'event_type': stream_data['data']['e']}
elif stream_data['data']['e'] == 'kline':
stream_data['data'] = UnicornFy.set_to_false_if_not_exist(stream_data['data'], 'f')
stream_data['data'] = UnicornFy.set_to_false_if_not_exist(stream_data['data'], 'L')
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'kline': {'kline_start_time': stream_data['data']['k']['t'],
'kline_close_time': stream_data['data']['k']['T'],
'symbol': stream_data['data']['k']['s'],
'interval': stream_data['data']['k']['i'],
'first_trade_id': stream_data['data']['f'],
'last_trade_id': stream_data['data']['L'],
'open_price': stream_data['data']['k']['o'],
'close_price': stream_data['data']['k']['c'],
'high_price': stream_data['data']['k']['h'],
'low_price': stream_data['data']['k']['l'],
'base_volume': stream_data['data']['k']['v'],
'number_of_trades': stream_data['data']['k']['n'],
'is_closed': stream_data['data']['k']['x'],
'quote': stream_data['data']['k']['q'],
'taker_by_base_asset_volume': stream_data['data']['k']['V'],
'taker_by_quote_asset_volume': stream_data['data']['k']['Q'],
'ignore': stream_data['data']['k']['B']}}
elif stream_data['data']['e'] == '24hrMiniTicker':
try:
if stream_data['stream']:
pass
except KeyError:
stream_data['stream'] = '!miniTicker@arr'
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'data': []}
try:
for item in stream_data['items']:
data = {'stream_type': stream_data['stream'],
'event_type': item['e'],
'event_time': item['E'],
'symbol': item['s'],
'close_price': item['c'],
'open_price': item['o'],
'high_price': item['h'],
'low_price': item['l'],
'taker_by_base_asset_volume': item['v'],
'taker_by_quote_asset_volume': item['q']}
unicorn_fied_data['data'].append(data)
except KeyError:
try:
data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'close_price': stream_data['data']['c'],
'open_price': stream_data['data']['o'],
'high_price': stream_data['data']['h'],
'low_price': stream_data['data']['l'],
'taker_by_base_asset_volume': stream_data['data']['v'],
'taker_by_quote_asset_volume': stream_data['data']['q']}
unicorn_fied_data['data'].append(data)
except KeyError as error_msg:
                    logging.critical(f"UnicornFy->binance_websocket({str(stream_data)}) - "
f"error: {str(error_msg)}")
print(str(stream_data))
elif stream_data['data']['e'] == '24hrTicker':
try:
if stream_data['stream']:
pass
except KeyError:
stream_data['stream'] = '!ticker@arr'
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'data': []}
try:
for item in stream_data['items']:
data = {'stream_type': stream_data['stream'],
'event_type': item['e'],
'event_time': item['E'],
'symbol': item['s'],
'price_change': item['p'],
'price_change_percent': item['P'],
'weighted_average_price': item['w'],
'trade_before_24h_window': item['x'],
'last_price': item['c'],
'last_quantity': item['Q'],
'best_bid_price': item['b'],
'best_bid_quantity': item['B'],
'best_ask_price': item['a'],
'best_ask_quantity': item['A'],
'open_price': item['o'],
'high_price': item['h'],
'low_price': item['l'],
'total_traded_base_asset_volume': item['v'],
'total_traded_quote_asset_volume': item['q'],
'statistics_open_time': item['O'],
'statistics_close_time': item['C'],
'first_trade_id': item['F'],
'last_trade_id': item['L'],
'total_nr_of_trades': item['n']}
unicorn_fied_data['data'].append(data)
except KeyError:
data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'price_change': stream_data['data']['p'],
'price_change_percent': stream_data['data']['P'],
'weighted_average_price': stream_data['data']['w'],
'trade_before_24h_window': stream_data['data']['x'],
'last_price': stream_data['data']['c'],
'last_quantity': stream_data['data']['Q'],
'best_bid_price': stream_data['data']['b'],
'best_bid_quantity': stream_data['data']['B'],
'best_ask_price': stream_data['data']['a'],
'best_ask_quantity': stream_data['data']['A'],
'open_price': stream_data['data']['o'],
'high_price': stream_data['data']['h'],
'low_price': stream_data['data']['l'],
'total_traded_base_asset_volume': stream_data['data']['v'],
'total_traded_quote_asset_volume': stream_data['data']['q'],
'statistics_open_time': stream_data['data']['O'],
'statistics_close_time': stream_data['data']['C'],
'first_trade_id': stream_data['data']['F'],
'last_trade_id': stream_data['data']['L'],
'total_nr_of_trades': stream_data['data']['n']}
unicorn_fied_data['data'].append(data)
elif stream_data['data']['e'] == 'depth':
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'symbol': stream_data['stream'][:stream_data['stream'].find('@')].upper(),
'last_update_id': stream_data['data']['lastUpdateId'],
'bids': stream_data['data']['bids'],
'asks': stream_data['data']['asks']}
elif stream_data['data']['e'] == 'depthUpdate':
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'first_update_id_in_event': stream_data['data']['U'],
'final_update_id_in_event': stream_data['data']['u'],
'bids': stream_data['data']['b'],
'asks': stream_data['data']['a']}
elif stream_data['data']['e'] == 'outboundAccountInfo':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'maker_commission_rate': stream_data['data']['m'],
'taker_commission_rate': stream_data['data']['t'],
'buyer_commission_rate': stream_data['data']['b'],
'seller_commission_rate': stream_data['data']['s'],
'can_trade': stream_data['data']['T'],
'can_withdraw': stream_data['data']['W'],
'can_deposit': stream_data['data']['D'],
'balances': [],
'account_permissions': stream_data['data']['P']}
for item in stream_data['data']['B']:
new_item = {'asset': item['a'],
'free': item['f'],
'locked': item['l']}
unicorn_fied_data['balances'] += [new_item]
elif stream_data['data']['e'] == 'outboundAccountPosition':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'last_update_time': stream_data['data']['u'],
'balances': []}
for item in stream_data['data']['B']:
new_item = {'asset': item['a'],
'free': item['f'],
'locked': item['l']}
unicorn_fied_data['balances'] += [new_item]
elif stream_data['data']['e'] == 'balanceUpdate':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'asset': stream_data['data']['a'],
'balance_delta': stream_data['data']['d'],
'clear_time': stream_data['data']['T']}
elif stream_data['data']['e'] == 'executionReport':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'client_order_id': stream_data['data']['c'],
'side': stream_data['data']['S'],
'order_type': stream_data['data']['o'],
'time_in_force': stream_data['data']['f'],
'order_quantity': stream_data['data']['q'],
'order_price': stream_data['data']['p'],
'stop_price': stream_data['data']['P'],
'iceberg_quantity': stream_data['data']['F'],
'ignore_g': stream_data['data']['g'],
'original_client_order_id': stream_data['data']['C'],
'current_execution_type': stream_data['data']['x'],
'current_order_status': stream_data['data']['X'],
'order_reject_reason': stream_data['data']['r'],
'order_id': stream_data['data']['i'],
'last_executed_quantity': stream_data['data']['l'],
'cumulative_filled_quantity': stream_data['data']['z'],
'last_executed_price': stream_data['data']['L'],
'commission_amount': stream_data['data']['n'],
'commission_asset': stream_data['data']['N'],
'transaction_time': stream_data['data']['T'],
'trade_id': stream_data['data']['t'],
'ignore_I': stream_data['data']['I'],
'is_order_working': stream_data['data']['w'],
'is_trade_maker_side': stream_data['data']['m'],
'ignore_M': stream_data['data']['M'],
'order_creation_time': stream_data['data']['O'],
'cumulative_quote_asset_transacted_quantity': stream_data['data']['Z'],
'last_quote_asset_transacted_quantity': stream_data['data']['Y']}
unicorn_fied_version = [exchange, UnicornFy.VERSION]
unicorn_fied_data['unicorn_fied'] = unicorn_fied_version
logging.debug("UnicornFy->binance_com_futures_websocket(" + str(unicorn_fied_data) + ")")
return unicorn_fied_data
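    # Note the early guard in this parser: input that already carries a
    # 'unicorn_fied' key is returned unchanged, so re-feeding a parsed payload
    # is a no-op (sketch, hypothetical input):
    #   once = UnicornFy.binance_com_websocket(raw)
    #   UnicornFy.binance_com_websocket(json.dumps(once))  # returns `once` as-is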
@staticmethod
def binance_futures_websocket(stream_data_json, exchange="binance.com-futures", show_deprecated_warning=False):
"""
unicorn_fy binance.com-futures raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: json
:param exchange: Exchange endpoint.
:type exchange: str
:param show_deprecated_warning: Show or hide warning
:type show_deprecated_warning: bool
:return: dict
"""
unicorn_fied_data = False
logging.debug("UnicornFy->binance_futures_websocket(" + str(stream_data_json) + ")")
if show_deprecated_warning is True:
pass
if UnicornFy.is_json(stream_data_json) is False:
return stream_data_json
stream_data = json.loads(stream_data_json)
try:
if stream_data['e'] == 'outboundAccountInfo':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'executionReport':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'balanceUpdate':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'ORDER_TRADE_UPDATE':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'ACCOUNT_UPDATE':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'ACCOUNT_CONFIG_UPDATE':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'MARGIN_CALL':
stream_data = {'data': stream_data}
except KeyError:
pass
try:
if stream_data['stream'].find('@depth5') != -1:
stream_data['data']['e'] = "depth"
stream_data['data']['depth_level'] = 5
elif stream_data['stream'].find('@depth10') != -1:
stream_data['data']['e'] = "depth"
stream_data['data']['depth_level'] = 10
elif stream_data['stream'].find('@depth20') != -1:
stream_data['data']['e'] = "depth"
stream_data['data']['depth_level'] = 20
elif "@bookTicker" in stream_data['stream']:
stream_data['data']['e'] = "bookTicker"
except KeyError:
pass
try:
# return if already unicorn_fied
if stream_data['unicorn_fied']:
return stream_data
except KeyError:
pass
try:
if stream_data['result'] is None:
unicorn_fied_version = [exchange, UnicornFy.VERSION]
stream_data['unicorn_fied'] = unicorn_fied_version
logging.debug(f"UnicornFy->binance_futures_websocket({str(stream_data)}, {str(exchange)}")
return stream_data
else:
unicorn_fied_version = [exchange, UnicornFy.VERSION]
stream_data['unicorn_fied'] = unicorn_fied_version
logging.debug(f"UnicornFy->binance_futures_websocket({str(stream_data)}, {str(exchange)}")
return stream_data
except KeyError:
pass
try:
if stream_data['error']:
unicorn_fied_version = [exchange, UnicornFy.VERSION]
stream_data['unicorn_fied'] = unicorn_fied_version
logging.debug(f"UnicornFy->binance_futures_websocket({str(stream_data)}, {str(exchange)}")
return stream_data
except KeyError:
pass
try:
if stream_data['data']['e'] == 'aggTrade':
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'aggregate_trade_id': stream_data['data']['a'],
'price': stream_data['data']['p'],
'quantity': stream_data['data']['q'],
'first_trade_id': stream_data['data']['f'],
'last_trade_id': stream_data['data']['l'],
'trade_time': stream_data['data']['T'],
'is_market_maker': stream_data['data']['m']}
elif stream_data['data']['e'] == 'trade':
# Todo: KeyError: 'b'
# 'buyer_order_id': stream_data['data']['b'],
# Todo: KeyError: 'a'
# 'seller_order_id': stream_data['data']['a'],
# Todo: KeyError: 'M'
# , 'ignore': stream_data['data']['M']
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'trade_id': stream_data['data']['t'],
'price': stream_data['data']['p'],
'quantity': stream_data['data']['q'],
'trade_time': stream_data['data']['T'],
'is_market_maker': stream_data['data']['m']}
elif stream_data['data']['e'] == 'kline':
stream_data['data'] = UnicornFy.set_to_false_if_not_exist(stream_data['data'], 'f')
stream_data['data'] = UnicornFy.set_to_false_if_not_exist(stream_data['data'], 'L')
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'kline': {'kline_start_time': stream_data['data']['k']['t'],
'kline_close_time': stream_data['data']['k']['T'],
'symbol': stream_data['data']['k']['s'],
'interval': stream_data['data']['k']['i'],
'first_trade_id': stream_data['data']['f'],
'last_trade_id': stream_data['data']['L'],
'open_price': stream_data['data']['k']['o'],
'close_price': stream_data['data']['k']['c'],
'high_price': stream_data['data']['k']['h'],
'low_price': stream_data['data']['k']['l'],
'base_volume': stream_data['data']['k']['v'],
'number_of_trades': stream_data['data']['k']['n'],
'is_closed': stream_data['data']['k']['x'],
'quote': stream_data['data']['k']['q'],
'taker_by_base_asset_volume': stream_data['data']['k']['V'],
'taker_by_quote_asset_volume': stream_data['data']['k']['Q'],
'ignore': stream_data['data']['k']['B']}}
elif stream_data['data']['e'] == '24hrMiniTicker':
try:
if stream_data['stream']:
pass
except KeyError:
stream_data['stream'] = '!miniTicker@arr'
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'data': []}
try:
for item in stream_data['items']:
data = {'stream_type': stream_data['stream'],
'event_type': item['e'],
'event_time': item['E'],
'symbol': item['s'],
'close_price': item['c'],
'open_price': item['o'],
'high_price': item['h'],
'low_price': item['l'],
'taker_by_base_asset_volume': item['v'],
'taker_by_quote_asset_volume': item['q']}
unicorn_fied_data['data'].append(data)
except KeyError:
data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'close_price': stream_data['data']['c'],
'open_price': stream_data['data']['o'],
'high_price': stream_data['data']['h'],
'low_price': stream_data['data']['l'],
'taker_by_base_asset_volume': stream_data['data']['v'],
'taker_by_quote_asset_volume': stream_data['data']['q']}
unicorn_fied_data['data'].append(data)
elif stream_data['data']['e'] == '24hrTicker':
try:
if stream_data['stream']:
pass
except KeyError:
stream_data['stream'] = '!ticker@arr'
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'data': []}
try:
for item in stream_data['items']:
data = {'stream_type': stream_data['stream'],
'event_type': item['e'],
'event_time': item['E'],
'symbol': item['s'],
'price_change': item['p'],
'price_change_percent': item['P'],
'weighted_average_price': item['w'],
'trade_before_24h_window': item['x'],
'last_price': item['c'],
'last_quantity': item['Q'],
'best_bid_price': item['b'],
'best_bid_quantity': item['B'],
'best_ask_price': item['a'],
'best_ask_quantity': item['A'],
'open_price': item['o'],
'high_price': item['h'],
'low_price': item['l'],
'total_traded_base_asset_volume': item['v'],
'total_traded_quote_asset_volume': item['q'],
'statistics_open_time': item['O'],
'statistics_close_time': item['C'],
'first_trade_id': item['F'],
'last_trade_id': item['L'],
'total_nr_of_trades': item['n']}
unicorn_fied_data['data'].append(data)
except KeyError:
# Todo: KeyError: 'x'
# 'trade_before_24h_window': stream_data['data']['x'],
# Todo: KeyError: 'b'
# 'best_bid_price': stream_data['data']['b'],
# Todo: KeyError: 'B'
# 'best_bid_quantity': stream_data['data']['B'],
# Todo KeyError: 'a'
# 'best_ask_price': stream_data['data']['a'],
# Todo KeyError: 'A'
# 'best_ask_quantity': stream_data['data']['A'],
data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'price_change': stream_data['data']['p'],
'price_change_percent': stream_data['data']['P'],
'weighted_average_price': stream_data['data']['w'],
'last_price': stream_data['data']['c'],
'last_quantity': stream_data['data']['Q'],
'open_price': stream_data['data']['o'],
'high_price': stream_data['data']['h'],
'low_price': stream_data['data']['l'],
'total_traded_base_asset_volume': stream_data['data']['v'],
'total_traded_quote_asset_volume': stream_data['data']['q'],
'statistics_open_time': stream_data['data']['O'],
'statistics_close_time': stream_data['data']['C'],
'first_trade_id': stream_data['data']['F'],
'last_trade_id': stream_data['data']['L'],
'total_nr_of_trades': stream_data['data']['n']}
unicorn_fied_data['data'].append(data)
elif stream_data['data']['e'] == 'depth':
# Todo: KeyError: 'lastUpdateId'
# 'last_update_id': stream_data['data']['lastUpdateId'],
# Todo: KeyError: 'bids'
# 'bids': stream_data['data']['bids'],
# , 'asks': stream_data['data']['asks']
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'symbol': stream_data['stream'][:stream_data['stream'].find('@')].upper()}
elif stream_data['data']['e'] == 'depthUpdate':
# Todo: KeyError: 'bids'
# 'bids': stream_data['data']['b'],
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'first_update_id_in_event': stream_data['data']['U'],
'final_update_id_in_event': stream_data['data']['u'],
'asks': stream_data['data']['a']}
elif stream_data['data']['e'] == 'outboundAccountInfo':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'maker_commission_rate': stream_data['data']['m'],
'taker_commission_rate': stream_data['data']['t'],
'buyer_commission_rate': stream_data['data']['b'],
'seller_commission_rate': stream_data['data']['s'],
'can_trade': stream_data['data']['T'],
'can_withdraw': stream_data['data']['W'],
'can_deposit': stream_data['data']['D'],
'balances': []}
for item in stream_data['data']['B']:
new_item = {'asset': item['a'],
'free': item['f'],
'locked': item['l']}
unicorn_fied_data['balances'] += [new_item]
elif stream_data['data']['e'] == 'balanceUpdate':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'asset': stream_data['data']['a'],
'balance_delta': stream_data['data']['d'],
'clear_time': stream_data['data']['T']}
elif stream_data['data']['e'] == 'executionReport':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'client_order_id': stream_data['data']['c'],
'side': stream_data['data']['S'],
'order_type': stream_data['data']['o'],
'time_in_force': stream_data['data']['f'],
'order_quantity': stream_data['data']['q'],
'order_price': stream_data['data']['p'],
'stop_price': stream_data['data']['P'],
'iceberg_quantity': stream_data['data']['F'],
'ignore_g': stream_data['data']['g'],
'original_client_order_id': stream_data['data']['C'],
'current_execution_type': stream_data['data']['x'],
'current_order_status': stream_data['data']['X'],
'order_reject_reason': stream_data['data']['r'],
'order_id': stream_data['data']['i'],
'last_executed_quantity': stream_data['data']['l'],
'cumulative_filled_quantity': stream_data['data']['z'],
'last_executed_price': stream_data['data']['L'],
'commission_amount': stream_data['data']['n'],
'commission_asset': stream_data['data']['N'],
'transaction_time': stream_data['data']['T'],
'trade_id': stream_data['data']['t'],
'ignore_I': stream_data['data']['I'],
'is_order_working': stream_data['data']['w'],
'is_trade_maker_side': stream_data['data']['m'],
'ignore_M': stream_data['data']['M'],
'order_creation_time': stream_data['data']['O'],
'cumulative_quote_asset_transacted_quantity': stream_data['data']['Z'],
'last_quote_asset_transacted_quantity': stream_data['data']['Y']}
elif stream_data['data']['e'] == 'ORDER_TRADE_UPDATE':
'''
url: https://binance-docs.github.io/apidocs/futures/en/#event-order-update
ex:
{
"e":"ORDER_TRADE_UPDATE", // Event Type
"E":1568879465651, // Event Time
"T":1568879465650, // Transaction Time
"o":{
"s":"BTCUSDT", // Symbol
"c":"TEST", // Client Order Id
// special client order id:
// starts with "autoclose-": liquidation order
// "adl_autoclose": ADL auto close order
"S":"SELL", // Side
"o":"TRAILING_STOP_MARKET", // Order Type
"f":"GTC", // Time in Force
"q":"0.001", // Original Quantity
"p":"0", // Original Price
"ap":"0", // Average Price
"sp":"7103.04", // Stop Price. Please ignore with TRAILING_STOP_MARKET order
"x":"NEW", // Execution Type
"X":"NEW", // Order Status
"i":8886774, // Order Id
"l":"0", // Order Last Filled Quantity
"z":"0", // Order Filled Accumulated Quantity
"L":"0", // Last Filled Price
"N":"USDT", // Commission Asset, will not push if no commission
"n":"0", // Commission, will not push if no commission
"T":1568879465651, // Order Trade Time
"t":0, // Trade Id
"b":"0", // Bids Notional
"a":"9.91", // Ask Notional
"m":false, // Is this trade the maker side?
"R":false, // Is this reduce only
"wt":"CONTRACT_PRICE", // Stop Price Working Type
"ot":"TRAILING_STOP_MARKET", // Original Order Type
"ps":"LONG", // Position Side
"cp":false, // If Close-All, pushed with conditional order
"AP":"7476.89", // Activation Price, only puhed with TRAILING_STOP_MARKET order
"cr":"5.0", // Callback Rate, only puhed with TRAILING_STOP_MARKET order
"rp":"0" // Realized Profit of the trade
}
}
'''
unicorn_fied_data = {'stream_type': 'ORDER_TRADE_UPDATE',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['o']['s'], # Symbol
'client_order_id': stream_data['data']['o']['c'], # Client Order Id
'side': stream_data['data']['o']['S'], # Side
'order_type': stream_data['data']['o']['o'], # Order Type
'time_in_force': stream_data['data']['o']['f'], # Time in Force
'order_quantity': stream_data['data']['o']['q'], # Original Quantity
'order_price': stream_data['data']['o']['p'], # Original Price
'order_avg_price': stream_data['data']['o']['ap'], # Average Price
'order_stop_price': stream_data['data']['o']['sp'], # Stop Price.
'current_execution_type': stream_data['data']['o']['x'], # Execution Type
'current_order_status': stream_data['data']['o']['X'], # Order Status
'order_id': stream_data['data']['o']['i'], # Order Id
'last_executed_quantity': stream_data['data']['o']['l'],
'cumulative_filled_quantity': stream_data['data']['o']['z'],
'last_executed_price': stream_data['data']['o']['L'], # Last Filled Price
'transaction_time': stream_data['data']['o']['T'], # Order Trade Time
'trade_id': stream_data['data']['o']['t'], # Trade Id
'net_pay': stream_data['data']['o']['b'], # Bids Notional
'net_selling_order_value': stream_data['data']['o']['a'], # Ask Notional
'is_trade_maker_side': stream_data['data']['o']['m'],
'reduce_only': stream_data['data']['o']['R'], # Is this reduce only
'trigger_price_type': stream_data['data']['o']['wt'], # Stop Price Working Type
'order_price_type': stream_data['data']['o']['ot'], # Original Order Type
'position_side': stream_data['data']['o']['ps'],
# Todo:
# 'cumulative_quote_asset_transacted_quantity': stream_data['data']['cp'],
# 'cumulative_quote_asset_transacted_quantity': stream_data['data']['AP'],
# 'cumulative_quote_asset_transacted_quantity': stream_data['data']['cr'],
'order_realized_profit': stream_data['data']['o']['rp']} # Realized Profit
elif stream_data['data']['e'] == 'ACCOUNT_UPDATE':
'''
url: https://binance-docs.github.io/apidocs/futures/en/#event-balance-and-position-update
ex:
{
"e": "ACCOUNT_UPDATE", // Event Type
"E": 1564745798939, // Event Time
"T": 1564745798938 , // Transaction
"a": // Update Data
{
"m":"ORDER", // Event reason type
"B":[ // Balances
{
"a":"USDT", // Asset
"wb":"122624.12345678", // Wallet Balance
"cw":"100.12345678" // Cross Wallet Balance
},
{
"a":"BNB",
"wb":"1.00000000",
"cw":"0.00000000"
}
],
"P":[
{
"s":"BTCUSDT", // Symbol
"pa":"0", // Position Amount
"ep":"0.00000", // Entry Price
"cr":"200", // (Pre-fee) Accumulated Realized
"up":"0", // Unrealized PnL
"mt":"isolated", // Margin Type
"iw":"0.00000000", // Isolated Wallet (if isolated position)
"ps":"BOTH" // Position Side
},
{
"s":"BTCUSDT",
"pa":"20",
"ep":"6563.66500",
"cr":"0",
"up":"2850.21200",
"mt":"isolated",
"iw":"13200.70726908",
"ps":"LONG"
},
{
"s":"BTCUSDT",
"pa":"-10",
"ep":"6563.86000",
"cr":"-45.04000000",
"up":"-1423.15600",
"mt":"isolated",
"iw":"6570.42511771",
"ps":"SHORT"
}
]
}
}
'''
unicorn_fied_data = {
'stream_type': 'ACCOUNT_UPDATE',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'transaction': stream_data['data']['T'],
'event_reason': stream_data['data']['a']['m'],
'balances': [],
'positions': []
}
for balance in stream_data['data']['a']['B']:
unicorn_fied_data['balances'].append({
'asset': balance['a'],
'wallet_balance': balance['wb'],
'cross_wallet_balance': balance['cw']
})
for position in stream_data['data']['a']['P']:
unicorn_fied_data['positions'].append({
'symbol': position['s'],
'position_amount': position['pa'],
'entry_price': position['ep'],
'accumulated_realized': position['cr'],
'upnl': position['up'],
'margin_type': position['mt'],
'isolated_wallet': position['iw'],
'position_side': position['ps']
})
elif stream_data['data']['e'] == 'MARGIN_CALL':
'''
url: https://binance-docs.github.io/apidocs/futures/en/#event-margin-call
ex: {
"e":"MARGIN_CALL", // Event Type
"E":1587727187525, // Event Time
"cw":"3.16812045", // Cross Wallet Balance. Only pushed with crossed position margin call
"p":[ // Position(s) of Margin Call
{
"s":"ETHUSDT", // Symbol
"ps":"LONG", // Position Side
"pa":"1.327", // Position Amount
"mt":"CROSSED", // Margin Type
"iw":"0", // Isolated Wallet (if isolated position)
"mp":"187.17127", // Mark Price
"up":"-1.166074", // Unrealized PnL
"mm":"1m,n.614445" // Maintenance Margin Required
}
]
}
'''
# Note: 'p' is a list of margin-call positions (see the example above);
# only the first entry is unpacked here.
position = stream_data['data']['p'][0]
unicorn_fied_data = {'stream_type': 'MARGIN_CALL',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': position['s'],
'side': position['ps'],
'amount': position['pa'],
'type': position['mt'],
'wallet': position['iw'],
'price': position['mp'],
'pnl': position['up'],
'margin': position['mm']}
elif stream_data['data']['e'] == 'ACCOUNT_CONFIG_UPDATE':
'''
url: https://binance-docs.github.io/apidocs/futures/en/#event-order-update
ex:
{
"e":"ACCOUNT_CONFIG_UPDATE", // Event Type
"E":1611646737479, // Event Time
"T":1611646737476, // Transaction Time
"ac":{
"s":"BTCUSDT", // symbol
"l":25 // leverage
}
}
'''
unicorn_fied_data = {'stream_type': 'ACCOUNT_CONFIG_UPDATE',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['ac']['s'],
'leverage': stream_data['data']['ac']['l']
}
except TypeError as error_msg:
logging.critical(f"UnicornFy->binance_futures_websocket({str(unicorn_fied_data)}) - "
f"error: {str(error_msg)}")
unicorn_fied_version = [exchange, UnicornFy.VERSION]
try:
unicorn_fied_data['unicorn_fied'] = unicorn_fied_version
except TypeError as error_msg:
logging.critical(f"UnicornFy->binance_futures_websocket({str(unicorn_fied_data)}) - "
f"error: {str(error_msg)}")
logging.debug("UnicornFy->binance_futures_websocket(" + str(unicorn_fied_data) + ")")
return unicorn_fied_data
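# A minimal usage sketch (hypothetical aggTrade payload, default exchange label
# assumed); not part of the library itself:
#
#   raw = ('{"stream": "btcusdt@aggTrade", "data": {"e": "aggTrade",'
#          ' "E": 1591261234567, "s": "BTCUSDT", "a": 26129, "p": "9300.1",'
#          ' "q": "0.5", "f": 100, "l": 105, "T": 1591261234565, "m": true}}')
#   fied = UnicornFy.binance_futures_websocket(raw)
#   fied["event_type"]    # -> "aggTrade"
#   fied["unicorn_fied"]  # -> e.g. ["binance.com-futures", UnicornFy.VERSION]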
@staticmethod
def binance_coin_futures_websocket(stream_data_json, exchange="binance.com-coin_futures",
show_deprecated_warning=False):
"""
unicorn_fy binance.com-coin_futures raw_stream_data
:param stream_data_json: The received raw stream data from the Binance websocket
:type stream_data_json: str
:param exchange: Exchange endpoint.
:type exchange: str
:param show_deprecated_warning: Show or hide warning
:type show_deprecated_warning: bool
:return: dict
"""
unicorn_fied_data = False
logging.debug("UnicornFy->binance_coin_futures_websocket(" + str(stream_data_json) + ")")
if show_deprecated_warning is True:
pass
if UnicornFy.is_json(stream_data_json) is False:
return stream_data_json
stream_data = json.loads(stream_data_json)
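# From here on this method mirrors binance_futures_websocket() above, with the
# coin-margined endpoint name in the log output.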
try:
if stream_data['e'] == 'outboundAccountInfo':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'executionReport':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'balanceUpdate':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'ORDER_TRADE_UPDATE':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'ACCOUNT_UPDATE':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'ACCOUNT_CONFIG_UPDATE':
stream_data = {'data': stream_data}
elif stream_data['e'] == 'MARGIN_CALL':
stream_data = {'data': stream_data}
except KeyError:
pass
try:
if stream_data['stream'].find('@depth5') != -1:
stream_data['data']['e'] = "depth"
stream_data['data']['depth_level'] = 5
elif stream_data['stream'].find('@depth10') != -1:
stream_data['data']['e'] = "depth"
stream_data['data']['depth_level'] = 10
elif stream_data['stream'].find('@depth20') != -1:
stream_data['data']['e'] = "depth"
stream_data['data']['depth_level'] = 20
elif "@bookTicker" in stream_data['stream']:
stream_data['data']['e'] = "bookTicker"
except KeyError:
pass
try:
# return if already unicorn_fied
if stream_data['unicorn_fied']:
return stream_data
except KeyError:
pass
try:
stream_data['result']  # KeyError unless this is a command response
except KeyError:
pass
else:
unicorn_fied_version = [exchange, UnicornFy.VERSION]
stream_data['unicorn_fied'] = unicorn_fied_version
logging.debug(f"UnicornFy->binance_coin_futures_websocket({str(stream_data)}, {str(exchange)})")
return stream_data
try:
if stream_data['error']:
unicorn_fied_version = [exchange, UnicornFy.VERSION]
stream_data['unicorn_fied'] = unicorn_fied_version
logging.debug(f"UnicornFy->binance_coin_futures_websocket({str(stream_data)}, {str(exchange)}")
return stream_data
except KeyError:
pass
try:
if stream_data['data']['e'] == 'aggTrade':
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'aggregate_trade_id': stream_data['data']['a'],
'price': stream_data['data']['p'],
'quantity': stream_data['data']['q'],
'first_trade_id': stream_data['data']['f'],
'last_trade_id': stream_data['data']['l'],
'trade_time': stream_data['data']['T'],
'is_market_maker': stream_data['data']['m']}
elif stream_data['data']['e'] == 'trade':
# Todo: KeyError: 'b'
# 'buyer_order_id': stream_data['data']['b'],
# Todo: KeyError: 'a'
# 'seller_order_id': stream_data['data']['a'],
# Todo: KeyError: 'M'
# , 'ignore': stream_data['data']['M']
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'trade_id': stream_data['data']['t'],
'price': stream_data['data']['p'],
'quantity': stream_data['data']['q'],
'trade_time': stream_data['data']['T'],
'is_market_maker': stream_data['data']['m']}
elif stream_data['data']['e'] == 'kline':
stream_data['data'] = UnicornFy.set_to_false_if_not_exist(stream_data['data'], 'f')
stream_data['data'] = UnicornFy.set_to_false_if_not_exist(stream_data['data'], 'L')
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'kline': {'kline_start_time': stream_data['data']['k']['t'],
'kline_close_time': stream_data['data']['k']['T'],
'symbol': stream_data['data']['k']['s'],
'interval': stream_data['data']['k']['i'],
'first_trade_id': stream_data['data']['f'],
'last_trade_id': stream_data['data']['L'],
'open_price': stream_data['data']['k']['o'],
'close_price': stream_data['data']['k']['c'],
'high_price': stream_data['data']['k']['h'],
'low_price': stream_data['data']['k']['l'],
'base_volume': stream_data['data']['k']['v'],
'number_of_trades': stream_data['data']['k']['n'],
'is_closed': stream_data['data']['k']['x'],
'quote': stream_data['data']['k']['q'],
'taker_by_base_asset_volume': stream_data['data']['k']['V'],
'taker_by_quote_asset_volume': stream_data['data']['k']['Q'],
'ignore': stream_data['data']['k']['B']}}
elif stream_data['data']['e'] == '24hrMiniTicker':
try:
if stream_data['stream']:
pass
except KeyError:
stream_data['stream'] = '!miniTicker@arr'
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'data': []}
try:
for item in stream_data['items']:
data = {'stream_type': stream_data['stream'],
'event_type': item['e'],
'event_time': item['E'],
'symbol': item['s'],
'close_price': item['c'],
'open_price': item['o'],
'high_price': item['h'],
'low_price': item['l'],
'taker_by_base_asset_volume': item['v'],
'taker_by_quote_asset_volume': item['q']}
unicorn_fied_data['data'].append(data)
except KeyError:
data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'close_price': stream_data['data']['c'],
'open_price': stream_data['data']['o'],
'high_price': stream_data['data']['h'],
'low_price': stream_data['data']['l'],
'taker_by_base_asset_volume': stream_data['data']['v'],
'taker_by_quote_asset_volume': stream_data['data']['q']}
unicorn_fied_data['data'].append(data)
elif stream_data['data']['e'] == '24hrTicker':
try:
if stream_data['stream']:
pass
except KeyError:
stream_data['stream'] = '!ticker@arr'
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'data': []}
try:
for item in stream_data['items']:
data = {'stream_type': stream_data['stream'],
'event_type': item['e'],
'event_time': item['E'],
'symbol': item['s'],
'price_change': item['p'],
'price_change_percent': item['P'],
'weighted_average_price': item['w'],
'trade_before_24h_window': item['x'],
'last_price': item['c'],
'last_quantity': item['Q'],
'best_bid_price': item['b'],
'best_bid_quantity': item['B'],
'best_ask_price': item['a'],
'best_ask_quantity': item['A'],
'open_price': item['o'],
'high_price': item['h'],
'low_price': item['l'],
'total_traded_base_asset_volume': item['v'],
'total_traded_quote_asset_volume': item['q'],
'statistics_open_time': item['O'],
'statistics_close_time': item['C'],
'first_trade_id': item['F'],
'last_trade_id': item['L'],
'total_nr_of_trades': item['n']}
unicorn_fied_data['data'].append(data)
except KeyError:
# Todo: KeyError: 'x'
# 'trade_before_24h_window': stream_data['data']['x'],
# Todo: KeyError: 'b'
# 'best_bid_price': stream_data['data']['b'],
# Todo: KeyError: 'B'
# 'best_bid_quantity': stream_data['data']['B'],
# Todo KeyError: 'a'
# 'best_ask_price': stream_data['data']['a'],
# Todo KeyError: 'A'
# 'best_ask_quantity': stream_data['data']['A'],
data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'price_change': stream_data['data']['p'],
'price_change_percent': stream_data['data']['P'],
'weighted_average_price': stream_data['data']['w'],
'last_price': stream_data['data']['c'],
'last_quantity': stream_data['data']['Q'],
'open_price': stream_data['data']['o'],
'high_price': stream_data['data']['h'],
'low_price': stream_data['data']['l'],
'total_traded_base_asset_volume': stream_data['data']['v'],
'total_traded_quote_asset_volume': stream_data['data']['q'],
'statistics_open_time': stream_data['data']['O'],
'statistics_close_time': stream_data['data']['C'],
'first_trade_id': stream_data['data']['F'],
'last_trade_id': stream_data['data']['L'],
'total_nr_of_trades': stream_data['data']['n']}
unicorn_fied_data['data'].append(data)
elif stream_data['data']['e'] == 'depth':
# Todo: KeyError: 'lastUpdateId'
# 'last_update_id': stream_data['data']['lastUpdateId'],
# Todo: KeyError: 'bids'
# 'bids': stream_data['data']['bids'],
# , 'asks': stream_data['data']['asks']
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'symbol': stream_data['stream'][:stream_data['stream'].find('@')].upper()}
elif stream_data['data']['e'] == 'depthUpdate':
# Todo: KeyError: 'bids'
# 'bids': stream_data['data']['b'],
unicorn_fied_data = {'stream_type': stream_data['stream'],
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'first_update_id_in_event': stream_data['data']['U'],
'final_update_id_in_event': stream_data['data']['u'],
'asks': stream_data['data']['a']}
elif stream_data['data']['e'] == 'outboundAccountInfo':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'maker_commission_rate': stream_data['data']['m'],
'taker_commission_rate': stream_data['data']['t'],
'buyer_commission_rate': stream_data['data']['b'],
'seller_commission_rate': stream_data['data']['s'],
'can_trade': stream_data['data']['T'],
'can_withdraw': stream_data['data']['W'],
'can_deposit': stream_data['data']['D'],
'balances': []}
for item in stream_data['data']['B']:
new_item = {'asset': item['a'],
'free': item['f'],
'locked': item['l']}
unicorn_fied_data['balances'] += [new_item]
elif stream_data['data']['e'] == 'balanceUpdate':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'asset': stream_data['data']['a'],
'balance_delta': stream_data['data']['d'],
'clear_time': stream_data['data']['T']}
elif stream_data['data']['e'] == 'executionReport':
unicorn_fied_data = {'stream_type': '!userData@arr',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['s'],
'client_order_id': stream_data['data']['c'],
'side': stream_data['data']['S'],
'order_type': stream_data['data']['o'],
'time_in_force': stream_data['data']['f'],
'order_quantity': stream_data['data']['q'],
'order_price': stream_data['data']['p'],
'stop_price': stream_data['data']['P'],
'iceberg_quantity': stream_data['data']['F'],
'ignore_g': stream_data['data']['g'],
'original_client_order_id': stream_data['data']['C'],
'current_execution_type': stream_data['data']['x'],
'current_order_status': stream_data['data']['X'],
'order_reject_reason': stream_data['data']['r'],
'order_id': stream_data['data']['i'],
'last_executed_quantity': stream_data['data']['l'],
'cumulative_filled_quantity': stream_data['data']['z'],
'last_executed_price': stream_data['data']['L'],
'commission_amount': stream_data['data']['n'],
'commission_asset': stream_data['data']['N'],
'transaction_time': stream_data['data']['T'],
'trade_id': stream_data['data']['t'],
'ignore_I': stream_data['data']['I'],
'is_order_working': stream_data['data']['w'],
'is_trade_maker_side': stream_data['data']['m'],
'ignore_M': stream_data['data']['M'],
'order_creation_time': stream_data['data']['O'],
'cumulative_quote_asset_transacted_quantity': stream_data['data']['Z'],
'last_quote_asset_transacted_quantity': stream_data['data']['Y']}
elif stream_data['data']['e'] == 'ORDER_TRADE_UPDATE':
'''
url: https://binance-docs.github.io/apidocs/futures/en/#event-order-update
ex:
{
"e":"ORDER_TRADE_UPDATE", // Event Type
"E":1568879465651, // Event Time
"T":1568879465650, // Transaction Time
"o":{
"s":"BTCUSDT", // Symbol
"c":"TEST", // Client Order Id
// special client order id:
// starts with "autoclose-": liquidation order
// "adl_autoclose": ADL auto close order
"S":"SELL", // Side
"o":"TRAILING_STOP_MARKET", // Order Type
"f":"GTC", // Time in Force
"q":"0.001", // Original Quantity
"p":"0", // Original Price
"ap":"0", // Average Price
"sp":"7103.04", // Stop Price. Please ignore with TRAILING_STOP_MARKET order
"x":"NEW", // Execution Type
"X":"NEW", // Order Status
"i":8886774, // Order Id
"l":"0", // Order Last Filled Quantity
"z":"0", // Order Filled Accumulated Quantity
"L":"0", // Last Filled Price
"N":"USDT", // Commission Asset, will not push if no commission
"n":"0", // Commission, will not push if no commission
"T":1568879465651, // Order Trade Time
"t":0, // Trade Id
"b":"0", // Bids Notional
"a":"9.91", // Ask Notional
"m":false, // Is this trade the maker side?
"R":false, // Is this reduce only
"wt":"CONTRACT_PRICE", // Stop Price Working Type
"ot":"TRAILING_STOP_MARKET", // Original Order Type
"ps":"LONG", // Position Side
"cp":false, // If Close-All, pushed with conditional order
"AP":"7476.89", // Activation Price, only puhed with TRAILING_STOP_MARKET order
"cr":"5.0", // Callback Rate, only puhed with TRAILING_STOP_MARKET order
"rp":"0" // Realized Profit of the trade
}
}
'''
unicorn_fied_data = {'stream_type': 'ORDER_TRADE_UPDATE',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['o']['s'], # Symbol
'client_order_id': stream_data['data']['o']['c'], # Client Order Id
'side': stream_data['data']['o']['S'], # Side
'order_type': stream_data['data']['o']['o'], # Order Type
'time_in_force': stream_data['data']['o']['f'], # Time in Force
'order_quantity': stream_data['data']['o']['q'], # Original Quantity
'order_price': stream_data['data']['o']['p'], # Original Price
'order_avg_price': stream_data['data']['o']['ap'], # Average Price
'order_stop_price': stream_data['data']['o']['sp'], # Stop Price.
'current_execution_type': stream_data['data']['o']['x'], # Execution Type
'current_order_status': stream_data['data']['o']['X'], # Order Status
'order_id': stream_data['data']['o']['i'], # Order Id
'last_executed_quantity': stream_data['data']['o']['l'],
'cumulative_filled_quantity': stream_data['data']['o']['z'],
'last_executed_price': stream_data['data']['o']['L'], # Last Filled Price
'transaction_time': stream_data['data']['o']['T'], # Order Trade Time
'trade_id': stream_data['data']['o']['t'], # Trade Id
'net_pay': stream_data['data']['o']['b'], # Bids Notional
'net_selling_order_value': stream_data['data']['o']['a'], # Ask Notional
'is_trade_maker_side': stream_data['data']['o']['m'],
'reduce_only': stream_data['data']['o']['R'], # Is this reduce only
'trigger_price_type': stream_data['data']['o']['wt'], # Stop Price Working Type
'order_price_type': stream_data['data']['o']['ot'], # Original Order Type
'position_side': stream_data['data']['o']['ps'],
# Todo:
# 'cumulative_quote_asset_transacted_quantity': stream_data['data']['cp'],
# 'cumulative_quote_asset_transacted_quantity': stream_data['data']['AP'],
# 'cumulative_quote_asset_transacted_quantity': stream_data['data']['cr'],
'order_realized_profit': stream_data['data']['o']['rp']} # Realized Profit
elif stream_data['data']['e'] == 'ACCOUNT_UPDATE':
'''
url: https://binance-docs.github.io/apidocs/futures/en/#event-balance-and-position-update
ex:
{
"e": "ACCOUNT_UPDATE", // Event Type
"E": 1564745798939, // Event Time
"T": 1564745798938 , // Transaction
"a": // Update Data
{
"m":"ORDER", // Event reason type
"B":[ // Balances
{
"a":"USDT", // Asset
"wb":"122624.12345678", // Wallet Balance
"cw":"100.12345678" // Cross Wallet Balance
},
{
"a":"BNB",
"wb":"1.00000000",
"cw":"0.00000000"
}
],
"P":[
{
"s":"BTCUSDT", // Symbol
"pa":"0", // Position Amount
"ep":"0.00000", // Entry Price
"cr":"200", // (Pre-fee) Accumulated Realized
"up":"0", // Unrealized PnL
"mt":"isolated", // Margin Type
"iw":"0.00000000", // Isolated Wallet (if isolated position)
"ps":"BOTH" // Position Side
},
{
"s":"BTCUSDT",
"pa":"20",
"ep":"6563.66500",
"cr":"0",
"up":"2850.21200",
"mt":"isolated",
"iw":"13200.70726908",
"ps":"LONG"
},
{
"s":"BTCUSDT",
"pa":"-10",
"ep":"6563.86000",
"cr":"-45.04000000",
"up":"-1423.15600",
"mt":"isolated",
"iw":"6570.42511771",
"ps":"SHORT"
}
]
}
}
'''
# Todo: unfinished!
unicorn_fied_data = {'stream_type': 'ACCOUNT_UPDATE',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'transaction': stream_data['data']['T'],
'update_data': stream_data['data']['a']}
elif stream_data['data']['e'] == 'MARGIN_CALL':
'''
url: https://binance-docs.github.io/apidocs/futures/en/#event-margin-call
ex: {
"e":"MARGIN_CALL", // Event Type
"E":1587727187525, // Event Time
"cw":"3.16812045", // Cross Wallet Balance. Only pushed with crossed position margin call
"p":[ // Position(s) of Margin Call
{
"s":"ETHUSDT", // Symbol
"ps":"LONG", // Position Side
"pa":"1.327", // Position Amount
"mt":"CROSSED", // Margin Type
"iw":"0", // Isolated Wallet (if isolated position)
"mp":"187.17127", // Mark Price
"up":"-1.166074", // Unrealized PnL
"mm":"1m,n.614445" // Maintenance Margin Required
}
]
}
'''
# Note: 'p' is a list of margin-call positions (see the example above);
# only the first entry is unpacked here.
position = stream_data['data']['p'][0]
unicorn_fied_data = {'stream_type': 'MARGIN_CALL',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': position['s'],
'side': position['ps'],
'amount': position['pa'],
'type': position['mt'],
'wallet': position['iw'],
'price': position['mp'],
'pnl': position['up'],
'margin': position['mm']}
elif stream_data['data']['e'] == 'ACCOUNT_CONFIG_UPDATE':
'''
url: https://binance-docs.github.io/apidocs/futures/en/#event-order-update
ex:
{
"e":"ACCOUNT_CONFIG_UPDATE", // Event Type
"E":1611646737479, // Event Time
"T":1611646737476, // Transaction Time
"ac":{
"s":"BTCUSDT", // symbol
"l":25 // leverage
}
}
'''
unicorn_fied_data = {'stream_type': 'ACCOUNT_CONFIG_UPDATE',
'event_type': stream_data['data']['e'],
'event_time': stream_data['data']['E'],
'symbol': stream_data['data']['ac']['s'],
'leverage': stream_data['data']['ac']['l']
}
except TypeError as error_msg:
logging.critical(f"UnicornFy->binance_coin_futures_websocket({str(unicorn_fied_data)}) - "
f"error: {str(error_msg)}")
unicorn_fied_version = [exchange, UnicornFy.VERSION]
try:
unicorn_fied_data['unicorn_fied'] = unicorn_fied_version
except TypeError as error_msg:
logging.critical(f"UnicornFy->binance_coin_futures_websocket({str(unicorn_fied_data)}) - "
f"error: {str(error_msg)}")
logging.debug("UnicornFy->binance_coin_futures_websocket(" + str(unicorn_fied_data) + ")")
return unicorn_fied_data
@staticmethod
def jex_com_websocket(stream_data_json):
"""
unicorn_fy jex.com raw_stream_data
:param stream_data_json: The received raw stream data from the jex.com websocket
:type stream_data_json: str
:return: dict
"""
return UnicornFy.binance_websocket(stream_data_json, exchange="jex.com", show_deprecated_warning=False)
@staticmethod
def get_latest_release_info():
"""
Get info about the latest available release
:return: dict or False
"""
try:
respond = requests.get('https://api.github.com/repos/oliver-zehentleitner/unicorn-fy/releases/latest')
return respond.json()
except Exception:
return False
def get_latest_version(self):
"""
Get the version of the latest available release (cache time 1 hour)
:return: str or False
"""
# Do a fresh request if status is None or the last check is older than 1 hour
if self.last_update_check_github['status'] is None or \
(self.last_update_check_github['timestamp'] + (60 * 60) < time.time()):
self.last_update_check_github['status'] = self.get_latest_release_info()
if self.last_update_check_github['status']:
try:
return self.last_update_check_github['status']["tag_name"]
except KeyError:
return "unknown"
else:
return "unknown"
@staticmethod
def get_version():
"""
Get the package/module version
:return: str
"""
return UnicornFy.VERSION
@staticmethod
def is_json(data):
"""
Is the string in json format?
:param data: the data to verify
:type data: str
:return: True or False
:rtype: bool
"""
try:
json.loads(data)
except ValueError:
return False
except TypeError:
return False
return True
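# For example:
#   UnicornFy.is_json('{"result": null}')  # -> True
#   UnicornFy.is_json("plain text")        # -> False (ValueError is caught)
#   UnicornFy.is_json(None)                # -> False (TypeError is caught)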
def is_update_availabe(self):
"""
Is a new release of this package available?
:return: bool
"""
installed_version = self.get_version()
latest_version = self.get_latest_version()
if ".dev" in installed_version:
installed_version = installed_version[:-4]
if latest_version == installed_version:
return False
elif latest_version == "unknown":
return False
else:
return True
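# A minimal usage sketch (hypothetical values):
#
#   ufy = UnicornFy()
#   latest = ufy.get_latest_version()  # e.g. "0.11.0", or "unknown" if GitHub is unreachable
#   if ufy.is_update_availabe():
#       print(f"Please update to {latest}")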
@staticmethod
def set_to_false_if_not_exist(value, key):
"""
some vars are non existent if they would be empty, so we create the missing vars with default values
:param value: default value
:type value: str
:param key: the key name
:type key: str
:return: final value
:rtype: str
"""
try:
if value[key]:
return value[key]
except KeyError:
value[key] = False
return value
except IndexError:
value[key] = False
return value
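# For example (payload where 'f' is absent):
#
#   data = {'e': 'kline', 's': 'BTCUSDT'}
#   data = UnicornFy.set_to_false_if_not_exist(data, 'f')
#   data['f']  # -> False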
| 57.451409 | 123 | 0.419726 | 8,612 | 99,908 | 4.588481 | 0.058291 | 0.218393 | 0.218949 | 0.053523 | 0.904469 | 0.898547 | 0.886982 | 0.872684 | 0.859778 | 0.849302 | 0 | 0.013299 | 0.448302 | 99,908 | 1,738 | 124 | 57.484465 | 0.703623 | 0.077281 | 0 | 0.884716 | 0 | 0.000873 | 0.214578 | 0.049853 | 0 | 0 | 0 | 0.002877 | 0 | 1 | 0.017467 | false | 0.021834 | 0.003493 | 0 | 0.060262 | 0.000873 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
0a7790d2bce27a2282e8d331c89577b27cf52cca | 40400 | py | Python | src/openprocurement/tender/belowthreshold/tests/document_blanks.py | ProzorroUKR/openprocurement.api | 2855a99aa8738fb832ee0dbad4e9590bd3643511 | ["Apache-2.0"] | 10 | 2020-02-18T01:56:21.000Z | 2022-03-28T00:32:57.000Z | src/openprocurement/tender/belowthreshold/tests/document_blanks.py | quintagroup/openprocurement.api | 2855a99aa8738fb832ee0dbad4e9590bd3643511 | ["Apache-2.0"] | 26 | 2018-07-16T09:30:44.000Z | 2021-02-02T17:51:30.000Z | src/openprocurement/tender/belowthreshold/tests/document_blanks.py | ProzorroUKR/openprocurement.api | 2855a99aa8738fb832ee0dbad4e9590bd3643511 | ["Apache-2.0"] | 15 | 2019-08-08T10:50:47.000Z | 2022-02-05T14:13:36.000Z |
from email.header import Header
# TenderDocumentResourceTest
from mock import patch
from openprocurement.tender.core.tests.base import bad_rs_request, srequest
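# The functions below are module-level test bodies that take ``self``; they are
# bound to webtest-based TestCase classes elsewhere in the suite (this
# repository's usual "*_blanks.py" pattern).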
def not_found(self):
response = self.app.get("/tenders/some_id/documents", status=404)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "tender_id"}]
)
response = self.app.post("/tenders/some_id/documents", status=404, upload_files=[("file", "name.doc", b"content")])
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "tender_id"}]
)
response = self.app.post(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
status=404,
upload_files=[("invalid_name", "name.doc", b"content")],
)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(response.json["errors"], [{"description": "Not Found", "location": "body", "name": "file"}])
response = self.app.put(
"/tenders/some_id/documents/some_id", status=404, upload_files=[("file", "name.doc", b"content2")]
)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "tender_id"}]
)
response = self.app.put(
"/tenders/{}/documents/some_id?acc_token={}".format(self.tender_id, self.tender_token),
status=404,
upload_files=[("file", "name.doc", b"content2")],
)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "document_id"}]
)
response = self.app.get("/tenders/some_id/documents/some_id", status=404)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "tender_id"}]
)
response = self.app.get(
"/tenders/{}/documents/some_id?acc_token={}".format(self.tender_id, self.tender_token), status=404
)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "document_id"}]
)
def create_tender_document(self):
response = self.app.get("/tenders/{}/documents".format(self.tender_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json, {"data": []})
response = self.app.post(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
upload_files=[("file", "укр.doc", b"content")]
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
doc_id = response.json["data"]["id"]
self.assertIn(doc_id, response.headers["Location"])
self.assertEqual("укр.doc", response.json["data"]["title"])
if self.docservice:
self.assertIn("Signature=", response.json["data"]["url"])
self.assertIn("KeyID=", response.json["data"]["url"])
self.assertNotIn("Expires=", response.json["data"]["url"])
key = response.json["data"]["url"].split("/")[-1].split("?")[0]
tender = self.db.get(self.tender_id)
self.assertIn(key, tender["documents"][-1]["url"])
self.assertIn("Signature=", tender["documents"][-1]["url"])
self.assertIn("KeyID=", tender["documents"][-1]["url"])
self.assertNotIn("Expires=", tender["documents"][-1]["url"])
else:
key = response.json["data"]["url"].split("?")[-1].split("=")[-1]
response = self.app.get("/tenders/{}/documents".format(self.tender_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"][0]["id"])
self.assertEqual("укр.doc", response.json["data"][0]["title"])
response = self.app.get("/tenders/{}/documents/{}?download=some_id".format(self.tender_id, doc_id), status=404)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "download"}]
)
if self.docservice:
response = self.app.get("/tenders/{}/documents/{}?download={}".format(self.tender_id, doc_id, key))
self.assertEqual(response.status, "302 Moved Temporarily")
self.assertIn("http://localhost/get/", response.location)
self.assertIn("Signature=", response.location)
self.assertIn("KeyID=", response.location)
self.assertNotIn("Expires=", response.location)
else:
response = self.app.get("/tenders/{}/documents/{}?download={}".format(self.tender_id, doc_id, key))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/msword")
self.assertEqual(response.content_length, 7)
self.assertEqual(response.body, b"content")
response = self.app.get("/tenders/{}/documents/{}".format(self.tender_id, doc_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
self.assertEqual("укр.doc", response.json["data"]["title"])
response = self.app.post(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
upload_files=[("file", "укр.doc", b"content")],
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
self.assertEqual("укр.doc", response.json["data"]["title"])
doc_id = response.json["data"]["id"]
self.assertIn(doc_id, response.headers["Location"])
self.assertNotIn("acc_token", response.headers["Location"])
def create_document_active_tendering_status(self):
self.set_status("active.tendering")
response = self.app.post(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
upload_files=[("file", "укр.doc", b"content")],
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"][0]["description"], "Can't add document in current (active.tendering) tender status"
)
def put_tender_document(self):
response = self.app.post(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
upload_files=[("file", "укр.doc", b"content")],
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
self.assertEqual("укр.doc", response.json["data"]["title"])
doc_id = response.json["data"]["id"]
dateModified = response.json["data"]["dateModified"]
datePublished = response.json["data"]["datePublished"]
self.assertIn(doc_id, response.headers["Location"])
response = self.app.put(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
upload_files=[("file", "name name.doc", b"content2")],
)
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
if self.docservice:
self.assertIn("Signature=", response.json["data"]["url"])
self.assertIn("KeyID=", response.json["data"]["url"])
self.assertNotIn("Expires=", response.json["data"]["url"])
key = response.json["data"]["url"].split("/")[-1].split("?")[0]
tender = self.db.get(self.tender_id)
self.assertIn(key, tender["documents"][-1]["url"])
self.assertIn("Signature=", tender["documents"][-1]["url"])
self.assertIn("KeyID=", tender["documents"][-1]["url"])
self.assertNotIn("Expires=", tender["documents"][-1]["url"])
else:
key = response.json["data"]["url"].split("?")[-1].split("=")[-1]
if self.docservice:
response = self.app.get("/tenders/{}/documents/{}?download={}".format(self.tender_id, doc_id, key))
self.assertEqual(response.status, "302 Moved Temporarily")
self.assertIn("http://localhost/get/", response.location)
self.assertIn("Signature=", response.location)
self.assertIn("KeyID=", response.location)
self.assertNotIn("Expires=", response.location)
else:
response = self.app.get("/tenders/{}/documents/{}?download={}".format(self.tender_id, doc_id, key))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/msword")
self.assertEqual(response.content_length, 8)
self.assertEqual(response.body, b"content2")
response = self.app.get("/tenders/{}/documents/{}".format(self.tender_id, doc_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
self.assertEqual("name name.doc", response.json["data"]["title"])
dateModified2 = response.json["data"]["dateModified"]
self.assertTrue(dateModified < dateModified2)
self.assertEqual(dateModified, response.json["data"]["previousVersions"][0]["dateModified"])
self.assertEqual(response.json["data"]["datePublished"], datePublished)
response = self.app.get("/tenders/{}/documents?all=true".format(self.tender_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(dateModified, response.json["data"][0]["dateModified"])
self.assertEqual(dateModified2, response.json["data"][1]["dateModified"])
response = self.app.post(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
upload_files=[("file", "name.doc", b"content")],
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
doc_id = response.json["data"]["id"]
dateModified = response.json["data"]["dateModified"]
self.assertIn(doc_id, response.headers["Location"])
response = self.app.get("/tenders/{}/documents".format(self.tender_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(dateModified2, response.json["data"][0]["dateModified"])
self.assertEqual(dateModified, response.json["data"][1]["dateModified"])
response = self.app.put(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
status=404,
upload_files=[("invalid_name", "name.doc", b"content")],
)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(response.json["errors"], [{"description": "Not Found", "location": "body", "name": "file"}])
response = self.app.put(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
"content3",
content_type="application/msword",
)
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
if self.docservice:
self.assertIn("Signature=", response.json["data"]["url"])
self.assertIn("KeyID=", response.json["data"]["url"])
self.assertNotIn("Expires=", response.json["data"]["url"])
key = response.json["data"]["url"].split("/")[-1].split("?")[0]
tender = self.db.get(self.tender_id)
self.assertIn(key, tender["documents"][-1]["url"])
self.assertIn("Signature=", tender["documents"][-1]["url"])
self.assertIn("KeyID=", tender["documents"][-1]["url"])
self.assertNotIn("Expires=", tender["documents"][-1]["url"])
else:
key = response.json["data"]["url"].split("?")[-1].split("=")[-1]
if self.docservice:
response = self.app.get("/tenders/{}/documents/{}?download={}".format(self.tender_id, doc_id, key))
self.assertEqual(response.status, "302 Moved Temporarily")
self.assertIn("http://localhost/get/", response.location)
self.assertIn("Signature=", response.location)
self.assertIn("KeyID=", response.location)
self.assertNotIn("Expires=", response.location)
else:
response = self.app.get("/tenders/{}/documents/{}?download={}".format(self.tender_id, doc_id, key))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/msword")
self.assertEqual(response.content_length, 8)
self.assertEqual(response.body, b"content3")
self.set_status(self.forbidden_document_modification_actions_status)
response = self.app.put(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
upload_files=[("file", "name.doc", b"content3")],
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"][0]["description"],
"Can't update document in current ({}) tender status".format(
self.forbidden_document_modification_actions_status
),
)
def patch_tender_document(self):
response = self.app.post(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
upload_files=[("file", str(Header("укр.doc", "utf-8")), b"content")],
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
doc_id = response.json["data"]["id"]
# dateModified = response.json["data"]['dateModified']
self.assertIn(doc_id, response.headers["Location"])
self.assertEqual("укр.doc", response.json["data"]["title"])
self.assertNotIn("documentType", response.json["data"])
response = self.app.patch_json(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
{"data": {"documentOf": "item", "relatedItem": "0" * 32}},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"],
[{"description": ["relatedItem should be one of items"], "location": "body", "name": "relatedItem"}],
)
response = self.app.patch_json(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
{"data": {"description": "document description", "documentType": "tenderNotice"}},
)
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
self.assertIn("documentType", response.json["data"])
self.assertEqual(response.json["data"]["documentType"], "tenderNotice")
response = self.app.patch_json(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
{"data": {"documentType": None}},
)
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
self.assertNotIn("documentType", response.json["data"])
response = self.app.get("/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
self.assertEqual("document description", response.json["data"]["description"])
# self.assertTrue(dateModified < response.json["data"]["dateModified"])
self.set_status(self.forbidden_document_modification_actions_status)
response = self.app.patch_json(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
{"data": {"description": "document description"}},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"][0]["description"],
"Can't update document in current ({}) tender status".format(
self.forbidden_document_modification_actions_status
),
)
# TenderDocumentWithDSResourceTest
def create_tender_document_error(self):
with patch("openprocurement.api.utils.SESSION", srequest):
response = self.app.post(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
upload_files=[("file", "укр.doc", b"content")],
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["errors"][0]["description"], "Can't upload document to document service.")
with patch("openprocurement.api.utils.SESSION", bad_rs_request):
response = self.app.post(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
upload_files=[("file", "укр.doc", b"content")],
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["errors"][0]["description"], "Can't upload document to document service.")
def create_tender_document_json_invalid(self):
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{"data": {"title": "укр.doc", "url": self.generate_docservice_url(), "format": "application/msword"}},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["errors"][0]["description"], "This field is required.")
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "0" * 32,
"format": "application/msword",
}
},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"],
[{"description": ["Hash type is not supported."], "location": "body", "name": "hash"}],
)
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "sha2048:" + "0" * 32,
"format": "application/msword",
}
},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"],
[{"description": ["Hash type is not supported."], "location": "body", "name": "hash"}],
)
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "sha512:" + "0" * 32,
"format": "application/msword",
}
},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"],
[{"description": ["Hash value is wrong length."], "location": "body", "name": "hash"}],
)
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "O" * 32,
"format": "application/msword",
}
},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"],
[{"description": ["Hash value is not hexadecimal."], "location": "body", "name": "hash"}],
)
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": "http://invalid.docservice.url/get/uuid",
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["errors"][0]["description"], "Can add document only from document service.")
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": "/".join(self.generate_docservice_url().split("/")[:4]),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["errors"][0]["description"], "Can add document only from document service.")
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url().split("?")[0],
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["errors"][0]["description"], "Can add document only from document service.")
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url().replace(list(self.app.app.registry.keyring.keys())[-1], "0" * 8),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["errors"][0]["description"], "Document url expired.")
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url().replace("Signature=", "Signature=ABC"),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["errors"][0]["description"], "Document url signature invalid.")
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url().replace("Signature=", "Signature=bw%3D%3D"),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=422,
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["errors"][0]["description"], "Document url invalid.")
def create_tender_document_json(self):
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
doc_id = response.json["data"]["id"]
self.assertIn(doc_id, response.headers["Location"])
self.assertEqual("укр.doc", response.json["data"]["title"])
self.assertIn("Signature=", response.json["data"]["url"])
self.assertIn("KeyID=", response.json["data"]["url"])
self.assertNotIn("Expires=", response.json["data"]["url"])
key = response.json["data"]["url"].split("/")[-1].split("?")[0]
tender = self.db.get(self.tender_id)
self.assertIn(key, tender["documents"][-1]["url"])
self.assertIn("Signature=", tender["documents"][-1]["url"])
self.assertIn("KeyID=", tender["documents"][-1]["url"])
self.assertNotIn("Expires=", tender["documents"][-1]["url"])
response = self.app.get("/tenders/{}/documents".format(self.tender_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"][0]["id"])
self.assertEqual("укр.doc", response.json["data"][0]["title"])
response = self.app.get("/tenders/{}/documents/{}?download=some_id".format(self.tender_id, doc_id), status=404)
self.assertEqual(response.status, "404 Not Found")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(response.json["status"], "error")
self.assertEqual(
response.json["errors"], [{"description": "Not Found", "location": "url", "name": "download"}]
)
response = self.app.get("/tenders/{}/documents/{}?download={}".format(self.tender_id, doc_id, key))
self.assertEqual(response.status, "302 Moved Temporarily")
self.assertIn("http://localhost/get/", response.location)
self.assertIn("Signature=", response.location)
self.assertIn("KeyID=", response.location)
self.assertNotIn("Expires=", response.location)
response = self.app.get("/tenders/{}/documents/{}".format(self.tender_id, doc_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
self.assertEqual("укр.doc", response.json["data"]["title"])
self.set_status(self.forbidden_document_modification_actions_status)
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"][0]["description"],
"Can't add document in current ({}) tender status".format(self.forbidden_document_modification_actions_status),
)
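# A hedged equivalent of the key extraction used above via split("/") and
# split("?"): the document-service key is the last path segment of the
# signed URL, query string excluded (the URL below is a placeholder).
from urllib.parse import urlsplit

def docservice_key(url: str) -> str:
    # urlsplit separates the query string, so only the path needs splitting
    return urlsplit(url).path.rsplit("/", 1)[-1]

assert docservice_key("http://localhost/get/some-key?Signature=x&KeyID=y") == "some-key"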
def create_tender_document_json_bulk(self):
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": [
{
"title": "name1.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
},
{
"title": "name2.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
]
},
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
doc_1 = response.json["data"][0]
doc_2 = response.json["data"][1]
def assert_document(document, title):
self.assertEqual(title, document["title"])
self.assertIn("Signature=", document["url"])
self.assertIn("KeyID=", document["url"])
self.assertNotIn("Expires=", document["url"])
assert_document(doc_1, "name1.doc")
assert_document(doc_2, "name2.doc")
tender = self.db.get(self.tender_id)
doc_1 = tender["documents"][0]
doc_2 = tender["documents"][1]
assert_document(doc_1, "name1.doc")
assert_document(doc_2, "name2.doc")
response = self.app.get("/tenders/{}/documents".format(self.tender_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
doc_1 = response.json["data"][0]
doc_2 = response.json["data"][1]
assert_document(doc_1, "name1.doc")
assert_document(doc_2, "name2.doc")
def put_tender_document_json(self):
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
self.assertEqual("укр.doc", response.json["data"]["title"])
doc_id = response.json["data"]["id"]
dateModified = response.json["data"]["dateModified"]
datePublished = response.json["data"]["datePublished"]
self.assertIn(doc_id, response.headers["Location"])
response = self.app.put_json(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
{
"data": {
"title": "name.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
)
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
self.assertIn("Signature=", response.json["data"]["url"])
self.assertIn("KeyID=", response.json["data"]["url"])
self.assertNotIn("Expires=", response.json["data"]["url"])
key = response.json["data"]["url"].split("/")[-1].split("?")[0]
tender = self.db.get(self.tender_id)
self.assertIn(key, tender["documents"][-1]["url"])
self.assertIn("Signature=", tender["documents"][-1]["url"])
self.assertIn("KeyID=", tender["documents"][-1]["url"])
self.assertNotIn("Expires=", tender["documents"][-1]["url"])
response = self.app.get("/tenders/{}/documents/{}?download={}".format(self.tender_id, doc_id, key))
self.assertEqual(response.status, "302 Moved Temporarily")
self.assertIn("http://localhost/get/", response.location)
self.assertIn("Signature=", response.location)
self.assertIn("KeyID=", response.location)
self.assertNotIn("Expires=", response.location)
response = self.app.get("/tenders/{}/documents/{}".format(self.tender_id, doc_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
self.assertEqual("name.doc", response.json["data"]["title"])
dateModified2 = response.json["data"]["dateModified"]
self.assertLess(dateModified, dateModified2)
self.assertEqual(dateModified, response.json["data"]["previousVersions"][0]["dateModified"])
self.assertEqual(response.json["data"]["datePublished"], datePublished)
response = self.app.get("/tenders/{}/documents?all=true".format(self.tender_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(dateModified, response.json["data"][0]["dateModified"])
self.assertEqual(dateModified2, response.json["data"][1]["dateModified"])
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "name.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
doc_id = response.json["data"]["id"]
dateModified = response.json["data"]["dateModified"]
self.assertIn(doc_id, response.headers["Location"])
response = self.app.get("/tenders/{}/documents".format(self.tender_id))
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(dateModified2, response.json["data"][0]["dateModified"])
self.assertEqual(dateModified, response.json["data"][1]["dateModified"])
response = self.app.put_json(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
)
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(doc_id, response.json["data"]["id"])
self.assertIn("Signature=", response.json["data"]["url"])
self.assertIn("KeyID=", response.json["data"]["url"])
self.assertNotIn("Expires=", response.json["data"]["url"])
key = response.json["data"]["url"].split("/")[-1].split("?")[0]
tender = self.db.get(self.tender_id)
self.assertIn(key, tender["documents"][-1]["url"])
self.assertIn("Signature=", tender["documents"][-1]["url"])
self.assertIn("KeyID=", tender["documents"][-1]["url"])
self.assertNotIn("Expires=", tender["documents"][-1]["url"])
response = self.app.get("/tenders/{}/documents/{}?download={}".format(self.tender_id, doc_id, key))
self.assertEqual(response.status, "302 Moved Temporarily")
self.assertIn("http://localhost/get/", response.location)
self.assertIn("Signature=", response.location)
self.assertIn("KeyID=", response.location)
self.assertNotIn("Expires=", response.location)
self.set_status(self.forbidden_document_modification_actions_status)
response = self.app.put_json(
"/tenders/{}/documents/{}?acc_token={}".format(self.tender_id, doc_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=403,
)
self.assertEqual(response.status, "403 Forbidden")
self.assertEqual(response.content_type, "application/json")
self.assertEqual(
response.json["errors"][0]["description"],
"Can't update document in current ({}) tender status".format(
self.forbidden_document_modification_actions_status
),
)
def lot_patch_tender_document_json_lots_none(self):
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
"documentOf": "lot",
"relatedItem": self.initial_lots[0]["id"],
}
},
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
response = self.app.patch_json(
"/tenders/{}?acc_token={}".format(self.tender_id, self.tender_token), {"data": {"lots": [None]}}, status=422
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
errors = {error["name"]: error["description"] for error in response.json["errors"]}
self.assertEqual(errors["lots"][0], ["This field is required."])
self.assertEqual(errors["documents"][0], {"relatedItem": ["relatedItem should be one of lots"]})
def lot_patch_tender_document_json_items_none(self):
response = self.app.get("/tenders/{}".format(self.tender_id))
response = self.app.post_json(
"/tenders/{}/documents?acc_token={}".format(self.tender_id, self.tender_token),
{
"data": {
"title": "укр.doc",
"url": self.generate_docservice_url(),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
"documentOf": "item",
"relatedItem": response.json["data"]["items"][0]["id"],
}
},
)
self.assertEqual(response.status, "201 Created")
self.assertEqual(response.content_type, "application/json")
response = self.app.patch_json(
"/tenders/{}?acc_token={}".format(self.tender_id, self.tender_token), {"data": {"items": [None]}}, status=422
)
self.assertEqual(response.status, "422 Unprocessable Entity")
self.assertEqual(response.content_type, "application/json")
errors = {error["name"]: error["description"] for error in response.json["errors"]}
self.assertEqual(errors["documents"][0], {"relatedItem": ["relatedItem should be one of items"]})
| 44.444444 | 119 | 0.627624 | 4,442 | 40,400 | 5.598379 | 0.037821 | 0.135113 | 0.170179 | 0.081631 | 0.952389 | 0.945874 | 0.934052 | 0.932524 | 0.927296 | 0.923556 | 0 | 0.015959 | 0.191906 | 40,400 | 908 | 120 | 44.493392 | 0.745765 | 0.00505 | 0 | 0.735802 | 0 | 0 | 0.2527 | 0.05892 | 0 | 0 | 0 | 0 | 0.387654 | 1 | 0.016049 | false | 0 | 0.003704 | 0 | 0.019753 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# === src/oci/container_engine/container_engine_client.py (LaudateCorpus1/oci-python-sdk @ b0d3ce629d5113df4d8b83b7a6502b2c5bfa3015, Apache-2.0 / BSD-3-Clause) ===
# Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from __future__ import absolute_import
from oci._vendor import requests # noqa: F401
from oci._vendor import six
from oci import retry, circuit_breaker # noqa: F401
from oci.base_client import BaseClient
from oci.config import get_config_value_or_default, validate_config
from oci.signer import Signer
from oci.util import Sentinel, get_signer_from_authentication_type, AUTHENTICATION_TYPE_FIELD_NAME
from .models import container_engine_type_mapping
missing = Sentinel("Missing")
class ContainerEngineClient(object):
"""
API for the Container Engine for Kubernetes service. Use this API to build, deploy,
and manage cloud-native applications. For more information, see
[Overview of Container Engine for Kubernetes](/iaas/Content/ContEng/Concepts/contengoverview.htm).
"""
def __init__(self, config, **kwargs):
"""
Creates a new service client
:param dict config:
Configuration keys and values as per `SDK and Tool Configuration <https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm>`__.
The :py:meth:`~oci.config.from_file` method can be used to load configuration from a file. Alternatively, a ``dict`` can be passed. You can validate
the dict using :py:meth:`~oci.config.validate_config`.
:param str service_endpoint: (optional)
The endpoint of the service to call using this client. For example ``https://iaas.us-ashburn-1.oraclecloud.com``. If this keyword argument is
not provided then it will be derived using the region in the config parameter. You should only provide this keyword argument if you have an explicit
need to specify a service endpoint.
:param timeout: (optional)
The connection and read timeouts for the client. The default values are connection timeout 10 seconds and read timeout 60 seconds. This keyword argument can be provided
as a single float, in which case the value provided is used for both the read and connection timeouts, or as a tuple of two floats. If
a tuple is provided then the first value is used as the connection timeout and the second value as the read timeout.
:type timeout: float or tuple(float, float)
:param signer: (optional)
The signer to use when signing requests made by the service client. The default is to use a :py:class:`~oci.signer.Signer` based on the values
provided in the config parameter.
One use case for this parameter is for `Instance Principals authentication <https://docs.cloud.oracle.com/Content/Identity/Tasks/callingservicesfrominstances.htm>`__
by passing an instance of :py:class:`~oci.auth.signers.InstancePrincipalsSecurityTokenSigner` as the value for this keyword argument
:type signer: :py:class:`~oci.signer.AbstractBaseSigner`
:param obj retry_strategy: (optional)
A retry strategy to apply to all calls made by this service client (i.e. at the client level). There is no retry strategy applied by default.
Retry strategies can also be applied at the operation level by passing a ``retry_strategy`` keyword argument as part of calling the operation.
Any value provided at the operation level will override whatever is specified at the client level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
:param obj circuit_breaker_strategy: (optional)
A circuit breaker strategy to apply to all calls made by this service client (i.e. at the client level).
This client uses :py:data:`~oci.circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY` as default if no circuit breaker strategy is provided.
The specifics of circuit breaker strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/circuit_breakers.html>`__.
:param function circuit_breaker_callback: (optional)
Callback function to receive any exceptions triggered by the circuit breaker.
:param allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this client should allow control characters in the response object. By default, the client will not
allow control characters to be in the response object.
"""
validate_config(config, signer=kwargs.get('signer'))
if 'signer' in kwargs:
signer = kwargs['signer']
elif AUTHENTICATION_TYPE_FIELD_NAME in config:
signer = get_signer_from_authentication_type(config)
else:
signer = Signer(
tenancy=config["tenancy"],
user=config["user"],
fingerprint=config["fingerprint"],
private_key_file_location=config.get("key_file"),
pass_phrase=get_config_value_or_default(config, "pass_phrase"),
private_key_content=config.get("key_content")
)
base_client_init_kwargs = {
'regional_client': True,
'service_endpoint': kwargs.get('service_endpoint'),
'base_path': '/20180222',
'service_endpoint_template': 'https://containerengine.{region}.oci.{secondLevelDomain}',
'skip_deserialization': kwargs.get('skip_deserialization', False),
'circuit_breaker_strategy': kwargs.get('circuit_breaker_strategy', circuit_breaker.GLOBAL_CIRCUIT_BREAKER_STRATEGY)
}
if 'timeout' in kwargs:
base_client_init_kwargs['timeout'] = kwargs.get('timeout')
if base_client_init_kwargs.get('circuit_breaker_strategy') is None:
base_client_init_kwargs['circuit_breaker_strategy'] = circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY
if 'allow_control_chars' in kwargs:
base_client_init_kwargs['allow_control_chars'] = kwargs.get('allow_control_chars')
self.base_client = BaseClient("container_engine", config, signer, container_engine_type_mapping, **base_client_init_kwargs)
self.retry_strategy = kwargs.get('retry_strategy')
self.circuit_breaker_callback = kwargs.get('circuit_breaker_callback')
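# A minimal construction sketch, assuming a standard SDK configuration file
# at ~/.oci/config; the retry strategy shown is the optional client-level
# override described in the docstring above.
import oci

config = oci.config.from_file()  # reads the DEFAULT profile
client = oci.container_engine.ContainerEngineClient(
    config,
    retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,
)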
def cluster_migrate_to_native_vcn(self, cluster_id, cluster_migrate_to_native_vcn_details, **kwargs):
"""
Initiates cluster migration to use native VCN.
:param str cluster_id: (required)
The OCID of the cluster.
:param oci.container_engine.models.ClusterMigrateToNativeVcnDetails cluster_migrate_to_native_vcn_details: (required)
The details for the cluster's migration to native VCN.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/cluster_migrate_to_native_vcn.py.html>`__ to see an example of how to use cluster_migrate_to_native_vcn API.
"""
resource_path = "/clusters/{clusterId}/actions/migrateToNativeVcn"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"cluster_migrate_to_native_vcn got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"clusterId": cluster_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=cluster_migrate_to_native_vcn_details)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=cluster_migrate_to_native_vcn_details)
def create_cluster(self, create_cluster_details, **kwargs):
"""
Create a new cluster.
:param oci.container_engine.models.CreateClusterDetails create_cluster_details: (required)
The details of the cluster to create.
:param str opc_retry_token: (optional)
A token you supply to uniquely identify the request and provide idempotency if
the request is retried. Idempotency tokens expire after 24 hours.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/create_cluster.py.html>`__ to see an example of how to use create_cluster API.
"""
resource_path = "/clusters"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"opc_retry_token",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"create_cluster got unknown kwargs: {!r}".format(extra_kwargs))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-retry-token": kwargs.get("opc_retry_token", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
header_params=header_params,
body=create_cluster_details)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
header_params=header_params,
body=create_cluster_details)
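# A hedged call sketch (all OCIDs and the version string are placeholders):
# create_cluster takes a CreateClusterDetails model plus an optional
# idempotency token, and the asynchronous work request ID is typically
# returned in the opc-work-request-id response header.
details = oci.container_engine.models.CreateClusterDetails(
    name="example-cluster",
    compartment_id="ocid1.compartment.oc1..example",
    vcn_id="ocid1.vcn.oc1..example",
    kubernetes_version="v1.21.5",
)
response = client.create_cluster(details, opc_retry_token="example-token")
work_request_id = response.headers.get("opc-work-request-id")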
def create_kubeconfig(self, cluster_id, **kwargs):
"""
Create the Kubeconfig YAML for a cluster.
:param str cluster_id: (required)
The OCID of the cluster.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param oci.container_engine.models.CreateClusterKubeconfigContentDetails create_cluster_kubeconfig_content_details: (optional)
The details of the cluster kubeconfig to create.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type stream
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/create_kubeconfig.py.html>`__ to see an example of how to use create_kubeconfig API.
"""
resource_path = "/clusters/{clusterId}/kubeconfig/content"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"opc_request_id",
"create_cluster_kubeconfig_content_details"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"create_kubeconfig got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"clusterId": cluster_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/x-yaml",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=kwargs.get('create_cluster_kubeconfig_content_details'),
response_type="stream")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=kwargs.get('create_cluster_kubeconfig_content_details'),
response_type="stream")
def create_node_pool(self, create_node_pool_details, **kwargs):
"""
Create a new node pool.
:param oci.container_engine.models.CreateNodePoolDetails create_node_pool_details: (required)
The details of the node pool to create.
:param str opc_retry_token: (optional)
A token you supply to uniquely identify the request and provide idempotency if
the request is retried. Idempotency tokens expire after 24 hours.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/create_node_pool.py.html>`__ to see an example of how to use create_node_pool API.
"""
resource_path = "/nodePools"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"opc_retry_token",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"create_node_pool got unknown kwargs: {!r}".format(extra_kwargs))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-retry-token": kwargs.get("opc_retry_token", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
header_params=header_params,
body=create_node_pool_details)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
header_params=header_params,
body=create_node_pool_details)
def delete_cluster(self, cluster_id, **kwargs):
"""
Delete a cluster.
:param str cluster_id: (required)
The OCID of the cluster.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/delete_cluster.py.html>`__ to see an example of how to use delete_cluster API.
"""
resource_path = "/clusters/{clusterId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_cluster got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"clusterId": cluster_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
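# A hedged sketch of the optimistic-concurrency pattern from the docstring
# above (the cluster OCID is a placeholder): read the current etag from a
# GET, then delete only if the resource is still at that version.
cluster_id = "ocid1.cluster.oc1..example"
get_response = client.get_cluster(cluster_id)
client.delete_cluster(cluster_id, if_match=get_response.headers.get("etag"))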
def delete_node_pool(self, node_pool_id, **kwargs):
"""
Delete a node pool.
:param str node_pool_id: (required)
The OCID of the node pool.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/delete_node_pool.py.html>`__ to see an example of how to use delete_node_pool API.
"""
resource_path = "/nodePools/{nodePoolId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_node_pool got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"nodePoolId": node_pool_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
def delete_work_request(self, work_request_id, **kwargs):
"""
Cancel a work request that has not started.
:param str work_request_id: (required)
The OCID of the work request.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/delete_work_request.py.html>`__ to see an example of how to use delete_work_request API.
"""
resource_path = "/workRequests/{workRequestId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_work_request got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"workRequestId": work_request_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
def get_cluster(self, cluster_id, **kwargs):
"""
Get the details of a cluster.
:param str cluster_id: (required)
The OCID of the cluster.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.container_engine.models.Cluster`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/get_cluster.py.html>`__ to see an example of how to use get_cluster API.
"""
resource_path = "/clusters/{clusterId}"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_cluster got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"clusterId": cluster_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="Cluster")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="Cluster")
def get_cluster_migrate_to_native_vcn_status(self, cluster_id, **kwargs):
"""
Get details on a cluster's migration to native VCN.
:param str cluster_id: (required)
The OCID of the cluster.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.container_engine.models.ClusterMigrateToNativeVcnStatus`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/get_cluster_migrate_to_native_vcn_status.py.html>`__ to see an example of how to use get_cluster_migrate_to_native_vcn_status API.
"""
resource_path = "/clusters/{clusterId}/migrateToNativeVcnStatus"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_cluster_migrate_to_native_vcn_status got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"clusterId": cluster_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="ClusterMigrateToNativeVcnStatus")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="ClusterMigrateToNativeVcnStatus")
def get_cluster_options(self, cluster_option_id, **kwargs):
"""
Get options available for clusters.
:param str cluster_option_id: (required)
The id of the option set to retrieve. Use \"all\" to get all options, or use a cluster ID to get options specific to the provided cluster.
:param str compartment_id: (optional)
The OCID of the compartment.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.container_engine.models.ClusterOptions`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/get_cluster_options.py.html>`__ to see an example of how to use get_cluster_options API.
"""
resource_path = "/clusterOptions/{clusterOptionId}"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"compartment_id",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_cluster_options got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"clusterOptionId": cluster_option_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"compartmentId": kwargs.get("compartment_id", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="ClusterOptions")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="ClusterOptions")
def get_node_pool(self, node_pool_id, **kwargs):
"""
Get the details of a node pool.
:param str node_pool_id: (required)
The OCID of the node pool.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.container_engine.models.NodePool`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/get_node_pool.py.html>`__ to see an example of how to use get_node_pool API.
"""
resource_path = "/nodePools/{nodePoolId}"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_node_pool got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"nodePoolId": node_pool_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="NodePool")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="NodePool")
def get_node_pool_options(self, node_pool_option_id, **kwargs):
"""
Get options available for node pools.
:param str node_pool_option_id: (required)
The id of the option set to retrieve. Use \"all\" to get all options, or use a cluster ID to get options specific to the provided cluster.
:param str compartment_id: (optional)
The OCID of the compartment.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.container_engine.models.NodePoolOptions`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/get_node_pool_options.py.html>`__ to see an example of how to use get_node_pool_options API.
"""
resource_path = "/nodePoolOptions/{nodePoolOptionId}"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"compartment_id",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_node_pool_options got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"nodePoolOptionId": node_pool_option_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"compartmentId": kwargs.get("compartment_id", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="NodePoolOptions")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="NodePoolOptions")
def get_work_request(self, work_request_id, **kwargs):
"""
Get the details of a work request.
:param str work_request_id: (required)
The OCID of the work request.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.container_engine.models.WorkRequest`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/get_work_request.py.html>`__ to see an example of how to use get_work_request API.
"""
resource_path = "/workRequests/{workRequestId}"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_work_request got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"workRequestId": work_request_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="WorkRequest")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="WorkRequest")
def list_clusters(self, compartment_id, **kwargs):
"""
List all the cluster objects in a compartment.
:param str compartment_id: (required)
The OCID of the compartment.
:param list[str] lifecycle_state: (optional)
A cluster lifecycle state to filter on. Can have multiple parameters of this name.
Allowed values are: "CREATING", "ACTIVE", "FAILED", "DELETING", "DELETED", "UPDATING"
:param str name: (optional)
The name to filter on.
:param int limit: (optional)
For list pagination. The maximum number of results per page, or items to return in a paginated \"List\" call.
1 is the minimum, 1000 is the maximum. For important details about how pagination works,
see `List Pagination`__.
__ https://docs.cloud.oracle.com/iaas/Content/API/Concepts/usingapi.htm#nine
:param str page: (optional)
For list pagination. The value of the `opc-next-page` response header from the previous \"List\" call.
For important details about how pagination works, see `List Pagination`__.
__ https://docs.cloud.oracle.com/iaas/Content/API/Concepts/usingapi.htm#nine
:param str sort_order: (optional)
The optional order in which to sort the results.
Allowed values are: "ASC", "DESC"
:param str sort_by: (optional)
The optional field to sort the results by.
Allowed values are: "ID", "NAME", "TIME_CREATED"
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.container_engine.models.ClusterSummary`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/list_clusters.py.html>`__ to see an example of how to use list_clusters API.
"""
resource_path = "/clusters"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"lifecycle_state",
"name",
"limit",
"page",
"sort_order",
"sort_by",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_clusters got unknown kwargs: {!r}".format(extra_kwargs))
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["CREATING", "ACTIVE", "FAILED", "DELETING", "DELETED", "UPDATING"]
for lifecycle_state_item in kwargs['lifecycle_state']:
if lifecycle_state_item not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'sort_by' in kwargs:
sort_by_allowed_values = ["ID", "NAME", "TIME_CREATED"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"lifecycleState": self.base_client.generate_collection_format_param(kwargs.get("lifecycle_state", missing), 'multi'),
"name": kwargs.get("name", missing),
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortOrder": kwargs.get("sort_order", missing),
"sortBy": kwargs.get("sort_by", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[ClusterSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[ClusterSummary]")
def list_node_pools(self, compartment_id, **kwargs):
"""
List all the node pools in a compartment, and optionally filter by cluster.
:param str compartment_id: (required)
The OCID of the compartment.
:param str cluster_id: (optional)
The OCID of the cluster.
:param str name: (optional)
The name to filter on.
:param int limit: (optional)
For list pagination. The maximum number of results per page, or items to return in a paginated \"List\" call.
1 is the minimum, 1000 is the maximum. For important details about how pagination works,
see `List Pagination`__.
__ https://docs.cloud.oracle.com/iaas/Content/API/Concepts/usingapi.htm#nine
:param str page: (optional)
For list pagination. The value of the `opc-next-page` response header from the previous \"List\" call.
For important details about how pagination works, see `List Pagination`__.
__ https://docs.cloud.oracle.com/iaas/Content/API/Concepts/usingapi.htm#nine
:param str sort_order: (optional)
The optional order in which to sort the results.
Allowed values are: "ASC", "DESC"
:param str sort_by: (optional)
The optional field to sort the results by.
Allowed values are: "ID", "NAME", "TIME_CREATED"
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.container_engine.models.NodePoolSummary`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/list_node_pools.py.html>`__ to see an example of how to use list_node_pools API.
"""
resource_path = "/nodePools"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"cluster_id",
"name",
"limit",
"page",
"sort_order",
"sort_by",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_node_pools got unknown kwargs: {!r}".format(extra_kwargs))
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'sort_by' in kwargs:
sort_by_allowed_values = ["ID", "NAME", "TIME_CREATED"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"clusterId": kwargs.get("cluster_id", missing),
"name": kwargs.get("name", missing),
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortOrder": kwargs.get("sort_order", missing),
"sortBy": kwargs.get("sort_by", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[NodePoolSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[NodePoolSummary]")
def list_work_request_errors(self, compartment_id, work_request_id, **kwargs):
"""
Get the errors of a work request.
:param str compartment_id: (required)
The OCID of the compartment.
:param str work_request_id: (required)
The OCID of the work request.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.container_engine.models.WorkRequestError`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/list_work_request_errors.py.html>`__ to see an example of how to use list_work_request_errors API.
"""
resource_path = "/workRequests/{workRequestId}/errors"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_work_request_errors got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"workRequestId": work_request_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"compartmentId": compartment_id
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[WorkRequestError]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[WorkRequestError]")
def list_work_request_logs(self, compartment_id, work_request_id, **kwargs):
"""
Get the logs of a work request.
:param str compartment_id: (required)
The OCID of the compartment.
:param str work_request_id: (required)
The OCID of the work request.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.container_engine.models.WorkRequestLogEntry`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/list_work_request_logs.py.html>`__ to see an example of how to use list_work_request_logs API.
"""
resource_path = "/workRequests/{workRequestId}/logs"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_work_request_logs got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"workRequestId": work_request_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"compartmentId": compartment_id
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[WorkRequestLogEntry]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[WorkRequestLogEntry]")
def list_work_requests(self, compartment_id, **kwargs):
"""
List all work requests in a compartment.
:param str compartment_id: (required)
The OCID of the compartment.
:param str cluster_id: (optional)
The OCID of the cluster.
:param str resource_id: (optional)
The OCID of the resource associated with a work request.
:param str resource_type: (optional)
Type of the resource associated with a work request.
Allowed values are: "CLUSTER", "NODEPOOL"
:param list[str] status: (optional)
A work request status to filter on. Can have multiple parameters of this name.
:param int limit: (optional)
For list pagination. The maximum number of results per page, or items to return in a paginated \"List\" call.
1 is the minimum, 1000 is the maximum. For important details about how pagination works,
see `List Pagination`__.
__ https://docs.cloud.oracle.com/iaas/Content/API/Concepts/usingapi.htm#nine
:param str page: (optional)
For list pagination. The value of the `opc-next-page` response header from the previous \"List\" call.
For important details about how pagination works, see `List Pagination`__.
__ https://docs.cloud.oracle.com/iaas/Content/API/Concepts/usingapi.htm#nine
:param str sort_order: (optional)
The optional order in which to sort the results.
Allowed values are: "ASC", "DESC"
:param str sort_by: (optional)
The optional field to sort the results by.
Allowed values are: "ID", "OPERATION_TYPE", "STATUS", "TIME_ACCEPTED", "TIME_STARTED", "TIME_FINISHED"
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.container_engine.models.WorkRequestSummary`
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/list_work_requests.py.html>`__ to see an example of how to use list_work_requests API.
"""
resource_path = "/workRequests"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"cluster_id",
"resource_id",
"resource_type",
"status",
"limit",
"page",
"sort_order",
"sort_by",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_work_requests got unknown kwargs: {!r}".format(extra_kwargs))
if 'resource_type' in kwargs:
resource_type_allowed_values = ["CLUSTER", "NODEPOOL"]
if kwargs['resource_type'] not in resource_type_allowed_values:
raise ValueError(
"Invalid value for `resource_type`, must be one of {0}".format(resource_type_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'sort_by' in kwargs:
sort_by_allowed_values = ["ID", "OPERATION_TYPE", "STATUS", "TIME_ACCEPTED", "TIME_STARTED", "TIME_FINISHED"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"clusterId": kwargs.get("cluster_id", missing),
"resourceId": kwargs.get("resource_id", missing),
"resourceType": kwargs.get("resource_type", missing),
"status": self.base_client.generate_collection_format_param(kwargs.get("status", missing), 'multi'),
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortOrder": kwargs.get("sort_order", missing),
"sortBy": kwargs.get("sort_by", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[WorkRequestSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[WorkRequestSummary]")
def update_cluster(self, cluster_id, update_cluster_details, **kwargs):
"""
Update the details of a cluster.
:param str cluster_id: (required)
The OCID of the cluster.
:param oci.container_engine.models.UpdateClusterDetails update_cluster_details: (required)
The details of the cluster to update.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/update_cluster.py.html>`__ to see an example of how to use update_cluster API.
"""
resource_path = "/clusters/{clusterId}"
method = "PUT"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"update_cluster got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"clusterId": cluster_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_cluster_details)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_cluster_details)
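# Illustrative sketch of optimistic concurrency via if_match: the etag from a
# prior GET guards the update (the `name` field is an example value):
#
#     get_resp = ce_client.get_cluster(cluster_id)
#     details = oci.container_engine.models.UpdateClusterDetails(name="renamed")
#     ce_client.update_cluster(
#         cluster_id, details, if_match=get_resp.headers["etag"])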
def update_cluster_endpoint_config(self, cluster_id, update_cluster_endpoint_config_details, **kwargs):
"""
Update the details of the cluster endpoint configuration.
:param str cluster_id: (required)
The OCID of the cluster.
:param oci.container_engine.models.UpdateClusterEndpointConfigDetails update_cluster_endpoint_config_details: (required)
The details of the cluster's endpoint to update.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/update_cluster_endpoint_config.py.html>`__ to see an example of how to use update_cluster_endpoint_config API.
"""
resource_path = "/clusters/{clusterId}/actions/updateEndpointConfig"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"update_cluster_endpoint_config got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"clusterId": cluster_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_cluster_endpoint_config_details)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_cluster_endpoint_config_details)
def update_node_pool(self, node_pool_id, update_node_pool_details, **kwargs):
"""
Update the details of a node pool.
:param str node_pool_id: (required)
The OCID of the node pool.
:param oci.container_engine.models.UpdateNodePoolDetails update_node_pool_details: (required)
The fields to update in a node pool.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique Oracle-assigned identifier for the request. If you need to contact
Oracle about a particular request, please provide the request ID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default; users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:param bool allow_control_chars: (optional)
allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.
By default, the response will not allow control characters in strings
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
:example:
Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/containerengine/update_node_pool.py.html>`__ to see an example of how to use update_node_pool API.
"""
resource_path = "/nodePools/{nodePoolId}"
method = "PUT"
# Don't accept unknown kwargs
expected_kwargs = [
"allow_control_chars",
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"update_node_pool got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"nodePoolId": node_pool_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.base_client.get_preferred_retry_strategy(
operation_retry_strategy=kwargs.get('retry_strategy'),
client_retry_strategy=self.retry_strategy
)
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_client_retries_header(header_params)
retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_node_pool_details)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_node_pool_details)
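# Illustrative sketch: a minimal node pool resize via update_node_pool, using
# a field from oci.container_engine.models.UpdateNodePoolDetails (the value is
# a placeholder):
#
#     details = oci.container_engine.models.UpdateNodePoolDetails(
#         quantity_per_subnet=3)
#     ce_client.update_node_pool(node_pool_id, details)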
0aadf14e5999c78fd9acf30993dbe5661d5cc997 | 39054 | py | Python | nova/tests/unit/network/test_os_vif_util.py | teresa-ho/stx-nova | 1f82323439da2449edbbaed2fe1c8414a550c86f | ["Apache-2.0"] | null | null | null | nova/tests/unit/network/test_os_vif_util.py | teresa-ho/stx-nova | 1f82323439da2449edbbaed2fe1c8414a550c86f | ["Apache-2.0"] | null | null | null | nova/tests/unit/network/test_os_vif_util.py | teresa-ho/stx-nova | 1f82323439da2449edbbaed2fe1c8414a550c86f | ["Apache-2.0"] | null | null | null
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_vif import objects as osv_objects
from os_vif.objects import fields as os_vif_fields
from nova import exception
from nova.network import model
from nova.network import os_vif_util
from nova import objects
from nova import test
class OSVIFUtilTestCase(test.NoDBTestCase):
def setUp(self):
super(OSVIFUtilTestCase, self).setUp()
osv_objects.register_all()
# Remove when all os-vif objects include the
# ComparableVersionedObject mix-in
def assertObjEqual(self, expect, actual):
actual.obj_reset_changes(recursive=True)
expect.obj_reset_changes(recursive=True)
self.assertEqual(expect.obj_to_primitive(),
actual.obj_to_primitive())
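# Illustrative note: obj_to_primitive() flattens a versioned object (and any
# nested objects) into plain dicts/lists, so assertEqual above compares the
# two object graphs field by field. A hedged sketch of the idea:
#
#     a = osv_objects.fixed_ip.FixedIP(address="192.168.122.24", floating_ips=[])
#     b = osv_objects.fixed_ip.FixedIP(address="192.168.122.24", floating_ips=[])
#     a.obj_reset_changes(recursive=True)
#     b.obj_reset_changes(recursive=True)
#     assert a.obj_to_primitive() == b.obj_to_primitive()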
def _test_is_firewall_required(self, port_filter, driver, expect):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_BRIDGE,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_PORT_FILTER: port_filter,
}
)
self.flags(firewall_driver=driver)
self.assertEqual(expect, os_vif_util._is_firewall_required(vif))
def test_is_firewall_required_via_vif(self):
self._test_is_firewall_required(
True, "nova.virt.libvirt.firewall.IptablesFirewallDriver", False)
def test_is_firewall_required_via_driver(self):
self._test_is_firewall_required(
False, "nova.virt.libvirt.firewall.IptablesFirewallDriver", True)
def test_is_firewall_required_not(self):
self._test_is_firewall_required(
False, "nova.virt.firewall.NoopFirewallDriver", False)
def test_nova_to_osvif_instance(self):
inst = objects.Instance(
id="1242",
uuid="d5b1090c-9e00-4fa4-9504-4b1494857970",
project_id="2f37d7f6-e51a-4a1f-8b6e-b0917ffc8390")
info = os_vif_util.nova_to_osvif_instance(inst)
expect = osv_objects.instance_info.InstanceInfo(
uuid="d5b1090c-9e00-4fa4-9504-4b1494857970",
name="instance-000004da",
project_id="2f37d7f6-e51a-4a1f-8b6e-b0917ffc8390")
self.assertObjEqual(info, expect)
def test_nova_to_osvif_instance_minimal(self):
inst = objects.Instance(
id="1242",
uuid="d5b1090c-9e00-4fa4-9504-4b1494857970")
actual = os_vif_util.nova_to_osvif_instance(inst)
expect = osv_objects.instance_info.InstanceInfo(
uuid=inst.uuid,
name=inst.name)
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_ips(self):
ips = [
model.FixedIP(
address="192.168.122.24",
floating_ips=[
model.IP(address="192.168.122.100",
type="floating"),
model.IP(address="192.168.122.101",
type="floating"),
model.IP(address="192.168.122.102",
type="floating"),
],
version=4),
model.FixedIP(
address="2001::beef",
version=6),
]
actual = os_vif_util._nova_to_osvif_ips(ips)
expect = osv_objects.fixed_ip.FixedIPList(
objects=[
osv_objects.fixed_ip.FixedIP(
address="192.168.122.24",
floating_ips=[
"192.168.122.100",
"192.168.122.101",
"192.168.122.102",
]),
osv_objects.fixed_ip.FixedIP(
address="2001::beef",
floating_ips=[]),
],
)
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_routes(self):
routes = [
model.Route(cidr="192.168.1.0/24",
gateway=model.IP(
address="192.168.1.254",
type='gateway'),
interface="eth0"),
model.Route(cidr="10.0.0.0/8",
gateway=model.IP(
address="10.0.0.1",
type='gateway')),
]
expect = osv_objects.route.RouteList(
objects=[
osv_objects.route.Route(
cidr="192.168.1.0/24",
gateway="192.168.1.254",
interface="eth0"),
osv_objects.route.Route(
cidr="10.0.0.0/8",
gateway="10.0.0.1"),
])
actual = os_vif_util._nova_to_osvif_routes(routes)
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_subnets(self):
subnets = [
model.Subnet(cidr="192.168.1.0/24",
dns=[
model.IP(
address="192.168.1.1",
type="dns"),
model.IP(
address="192.168.1.2",
type="dns"),
],
gateway=model.IP(
address="192.168.1.254",
type='gateway'),
ips=[
model.FixedIP(
address="192.168.1.100",
),
model.FixedIP(
address="192.168.1.101",
),
],
routes=[
model.Route(
cidr="10.0.0.1/24",
gateway=model.IP(
address="192.168.1.254",
type="gateway"),
interface="eth0"),
]),
model.Subnet(dns=[
model.IP(
address="192.168.1.1",
type="dns"),
model.IP(
address="192.168.1.2",
type="dns"),
],
ips=[
model.FixedIP(
address="192.168.1.100",
),
model.FixedIP(
address="192.168.1.101",
),
],
routes=[
model.Route(
cidr="10.0.0.1/24",
gateway=model.IP(
address="192.168.1.254",
type="gateway"),
interface="eth0"),
]),
model.Subnet(dns=[
model.IP(
address="192.168.1.1",
type="dns"),
model.IP(
address="192.168.1.2",
type="dns"),
],
gateway=model.IP(
type='gateway'),
ips=[
model.FixedIP(
address="192.168.1.100",
),
model.FixedIP(
address="192.168.1.101",
),
],
routes=[
model.Route(
cidr="10.0.0.1/24",
gateway=model.IP(
address="192.168.1.254",
type="gateway"),
interface="eth0"),
]),
]
expect = osv_objects.subnet.SubnetList(
objects=[
osv_objects.subnet.Subnet(
cidr="192.168.1.0/24",
dns=["192.168.1.1",
"192.168.1.2"],
gateway="192.168.1.254",
ips=osv_objects.fixed_ip.FixedIPList(
objects=[
osv_objects.fixed_ip.FixedIP(
address="192.168.1.100",
floating_ips=[]),
osv_objects.fixed_ip.FixedIP(
address="192.168.1.101",
floating_ips=[]),
]),
routes=osv_objects.route.RouteList(
objects=[
osv_objects.route.Route(
cidr="10.0.0.1/24",
gateway="192.168.1.254",
interface="eth0")
]),
),
osv_objects.subnet.Subnet(
dns=["192.168.1.1",
"192.168.1.2"],
ips=osv_objects.fixed_ip.FixedIPList(
objects=[
osv_objects.fixed_ip.FixedIP(
address="192.168.1.100",
floating_ips=[]),
osv_objects.fixed_ip.FixedIP(
address="192.168.1.101",
floating_ips=[]),
]),
routes=osv_objects.route.RouteList(
objects=[
osv_objects.route.Route(
cidr="10.0.0.1/24",
gateway="192.168.1.254",
interface="eth0")
]),
),
osv_objects.subnet.Subnet(
dns=["192.168.1.1",
"192.168.1.2"],
ips=osv_objects.fixed_ip.FixedIPList(
objects=[
osv_objects.fixed_ip.FixedIP(
address="192.168.1.100",
floating_ips=[]),
osv_objects.fixed_ip.FixedIP(
address="192.168.1.101",
floating_ips=[]),
]),
routes=osv_objects.route.RouteList(
objects=[
osv_objects.route.Route(
cidr="10.0.0.1/24",
gateway="192.168.1.254",
interface="eth0")
]),
),
])
actual = os_vif_util._nova_to_osvif_subnets(subnets)
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_network(self):
network = model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge="br0",
subnets=[
model.Subnet(cidr="192.168.1.0/24",
gateway=model.IP(
address="192.168.1.254",
type='gateway')),
])
expect = osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge="br0",
bridge_interface=None,
subnets=osv_objects.subnet.SubnetList(
objects=[
osv_objects.subnet.Subnet(
cidr="192.168.1.0/24",
dns=[],
gateway="192.168.1.254",
ips=osv_objects.fixed_ip.FixedIPList(
objects=[]),
routes=osv_objects.route.RouteList(
objects=[]),
)
]))
actual = os_vif_util._nova_to_osvif_network(network)
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_network_extra(self):
network = model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge="br0",
multi_host=True,
should_create_bridge=True,
should_create_vlan=True,
bridge_interface="eth0",
vlan=1729,
subnets=[
model.Subnet(cidr="192.168.1.0/24",
gateway=model.IP(
address="192.168.1.254",
type='gateway')),
])
expect = osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge="br0",
multi_host=True,
should_provide_bridge=True,
should_provide_vlan=True,
bridge_interface="eth0",
vlan=1729,
subnets=osv_objects.subnet.SubnetList(
objects=[
osv_objects.subnet.Subnet(
cidr="192.168.1.0/24",
dns=[],
gateway="192.168.1.254",
ips=osv_objects.fixed_ip.FixedIPList(
objects=[]),
routes=osv_objects.route.RouteList(
objects=[]),
)
]))
actual = os_vif_util._nova_to_osvif_network(network)
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_network_labeled_no_bridge(self):
network = model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[
model.Subnet(cidr="192.168.1.0/24",
gateway=model.IP(
address="192.168.1.254",
type='gateway')),
])
expect = osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[
osv_objects.subnet.Subnet(
cidr="192.168.1.0/24",
dns=[],
gateway="192.168.1.254",
ips=osv_objects.fixed_ip.FixedIPList(
objects=[]),
routes=osv_objects.route.RouteList(
objects=[]),
)
]))
actual = os_vif_util._nova_to_osvif_network(network)
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_network_labeled_no_vlan(self):
network = model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
should_create_vlan=True,
subnets=[
model.Subnet(cidr="192.168.1.0/24",
gateway=model.IP(
address="192.168.1.254",
type='gateway')),
])
self.assertRaises(exception.NovaException,
os_vif_util._nova_to_osvif_network,
network)
def test_nova_to_osvif_network_mtu(self):
network = model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge="br0",
mtu=550,
subnets=[])
osv_obj = os_vif_util._nova_to_osvif_network(network)
self.assertEqual(550, osv_obj.mtu)
def test_nova_to_osvif_vif_linux_bridge(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_BRIDGE,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_PORT_FILTER: True,
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFBridge(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
has_traffic_filtering=True,
plugin="linux_bridge",
preserve_on_delete=False,
vif_name="nicdc065497-3c",
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
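    # A non-direct vNIC on an Agilio OVS port falls through to the plain "ovs" plugin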
def test_nova_to_osvif_vif_agilio_ovs_fallthrough(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_AGILIO_OVS,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_PORT_FILTER: True,
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFOpenVSwitch(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
has_traffic_filtering=True,
plugin="ovs",
port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536"),
preserve_on_delete=False,
vif_name="nicdc065497-3c",
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_vif_agilio_ovs_direct(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_AGILIO_OVS,
address="22:52:25:62:e2:aa",
profile={
"pci_slot": "0000:08:08.5",
},
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
vnic_type=model.VNIC_TYPE_DIRECT,
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFHostDevice(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
has_traffic_filtering=False,
address="22:52:25:62:e2:aa",
dev_type=osv_objects.fields.VIFHostDeviceDevType.ETHERNET,
dev_address="0000:08:08.5",
plugin="agilio_ovs",
port_profile=osv_objects.vif.VIFPortProfileOVSRepresentor(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
representor_name="nicdc065497-3c",
representor_address="0000:08:08.5"),
preserve_on_delete=False,
vif_name="nicdc065497-3c",
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_vif_agilio_ovs_forwarder(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_AGILIO_OVS,
address="22:52:25:62:e2:aa",
profile={
"pci_slot": "0000:08:08.5",
},
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
vnic_type=model.VNIC_TYPE_VIRTIO_FORWARDER,
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True,
model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket',
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFVHostUser(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
has_traffic_filtering=False,
plugin="agilio_ovs",
port_profile=osv_objects.vif.VIFPortProfileOVSRepresentor(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
representor_address="0000:08:08.5",
representor_name="nicdc065497-3c",),
preserve_on_delete=False,
vif_name="nicdc065497-3c",
path='/fake/socket',
mode='client',
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_vif_ovs_plain(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_OVS,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_PORT_FILTER: True,
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFOpenVSwitch(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
has_traffic_filtering=True,
plugin="ovs",
port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536"),
preserve_on_delete=False,
vif_name="nicdc065497-3c",
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
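    # With port filtering off (and no firewall driver), OVS conversion selects
    # hybrid plug via an intermediate Linux bridge ("qbr...")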
def test_nova_to_osvif_vif_ovs_hybrid(self):
self.flags(firewall_driver=None)
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_OVS,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_PORT_FILTER: False,
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFBridge(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
has_traffic_filtering=False,
plugin="ovs",
bridge_name="qbrdc065497-3c",
port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536"),
preserve_on_delete=False,
vif_name="nicdc065497-3c",
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_ovs_with_vnic_direct(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_OVS,
address="22:52:25:62:e2:aa",
vnic_type=model.VNIC_TYPE_DIRECT,
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
profile={'pci_slot': '0000:0a:00.1'}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFHostDevice(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
dev_address='0000:0a:00.1',
dev_type=os_vif_fields.VIFHostDeviceDevType.ETHERNET,
plugin="ovs",
port_profile=osv_objects.vif.VIFPortProfileOVSRepresentor(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
representor_name="nicdc065497-3c",
representor_address="0000:0a:00.1"),
has_traffic_filtering=False,
preserve_on_delete=False,
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_vhostuser_ovs(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_VHOSTUSER,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True,
model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket',
model.VIF_DETAILS_PORT_FILTER: True
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFVHostUser(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
plugin="ovs",
port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536"),
vif_name="vhudc065497-3c",
path='/fake/socket',
mode='client',
has_traffic_filtering=True,
preserve_on_delete=False,
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_vhostuser_ovs_no_socket_path(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_VHOSTUSER,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True,
model.VIF_DETAILS_PORT_FILTER: True
}
)
self.assertRaises(exception.VifDetailsMissingVhostuserSockPath,
os_vif_util.nova_to_osvif_vif,
vif)
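    # vhost-user ports that are neither OVS- nor fp-plugged have no os-vif
    # plugin, so conversion returns None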
def test_nova_to_osvif_vhostuser_non_ovs(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
type=model.VIF_TYPE_VHOSTUSER,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: False,
model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket'
}
)
self.assertIsNone(os_vif_util.nova_to_osvif_vif(vif))
def test_nova_to_osvif_vhostuser_fp_ovs_hybrid(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_VHOSTUSER,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
mtu="1500",
subnets=[]),
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket',
model.VIF_DETAILS_VHOSTUSER_FP_PLUG: True,
model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True,
model.VIF_DETAILS_OVS_HYBRID_PLUG: True,
model.VIF_DETAILS_PORT_FILTER: False,
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFVHostUser(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
plugin="vhostuser_fp",
port_profile=osv_objects.vif.VIFPortProfileFPOpenVSwitch(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
bridge_name="qbrdc065497-3c",
hybrid_plug=True),
vif_name="nicdc065497-3c",
path='/fake/socket',
mode='client',
has_traffic_filtering=False,
preserve_on_delete=False,
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
mtu="1500",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_vhostuser_fp_ovs_plain(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_VHOSTUSER,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
mtu="1500",
bridge="br-int",
subnets=[]),
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket',
model.VIF_DETAILS_VHOSTUSER_FP_PLUG: True,
model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: True,
model.VIF_DETAILS_OVS_HYBRID_PLUG: False,
model.VIF_DETAILS_PORT_FILTER: True,
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFVHostUser(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
plugin="vhostuser_fp",
port_profile=osv_objects.vif.VIFPortProfileFPOpenVSwitch(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
bridge_name="br-int",
hybrid_plug=False),
vif_name="nicdc065497-3c",
path='/fake/socket',
mode='client',
has_traffic_filtering=True,
preserve_on_delete=False,
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
mtu="1500",
bridge="br-int",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_vhostuser_fp_lb(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_VHOSTUSER,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
mtu="1500",
bridge="brq12345",
subnets=[]),
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket',
model.VIF_DETAILS_VHOSTUSER_FP_PLUG: True,
model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: False,
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFVHostUser(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
plugin="vhostuser_fp",
port_profile=osv_objects.vif.VIFPortProfileFPBridge(
bridge_name="brq12345"),
vif_name="nicdc065497-3c",
path='/fake/socket',
mode='client',
has_traffic_filtering=False,
preserve_on_delete=False,
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
mtu="1500",
bridge="brq12345",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_vhostuser_fp_no_socket_path(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_VHOSTUSER,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_FP_PLUG: True,
model.VIF_DETAILS_VHOSTUSER_OVS_PLUG: False,
model.VIF_DETAILS_PORT_FILTER: True,
}
)
self.assertRaises(exception.VifDetailsMissingVhostuserSockPath,
os_vif_util.nova_to_osvif_vif,
vif)
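    # IVS has no os-vif plugin; returning None signals a fall back to the
    # legacy plugging path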
def test_nova_to_osvif_vif_ivs_plain(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_IVS,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_PORT_FILTER: True,
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
self.assertIsNone(actual)
def test_nova_to_osvif_vif_unknown(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type="wibble",
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
)
self.assertRaises(exception.NovaException,
os_vif_util.nova_to_osvif_vif,
vif)
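    # vhost-user with the vrouter plug detail maps to the external
    # "contrail_vrouter" plugin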
def test_nova_to_osvif_vhostuser_vrouter(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_VHOSTUSER,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_VROUTER_PLUG: True,
model.VIF_DETAILS_VHOSTUSER_SOCKET: '/fake/socket',
}
)
actual = os_vif_util.nova_to_osvif_vif(vif)
expect = osv_objects.vif.VIFVHostUser(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
active=False,
address="22:52:25:62:e2:aa",
plugin="contrail_vrouter",
vif_name="nicdc065497-3c",
path='/fake/socket',
mode='client',
has_traffic_filtering=False,
preserve_on_delete=False,
network=osv_objects.network.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
bridge_interface=None,
label="Demo Net",
subnets=osv_objects.subnet.SubnetList(
objects=[])))
self.assertObjEqual(expect, actual)
def test_nova_to_osvif_vhostuser_vrouter_no_socket_path(self):
vif = model.VIF(
id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
type=model.VIF_TYPE_VHOSTUSER,
address="22:52:25:62:e2:aa",
network=model.Network(
id="b82c1929-051e-481d-8110-4669916c7915",
label="Demo Net",
subnets=[]),
details={
model.VIF_DETAILS_VHOSTUSER_MODE: 'client',
model.VIF_DETAILS_VHOSTUSER_VROUTER_PLUG: True,
}
)
self.assertRaises(exception.VifDetailsMissingVhostuserSockPath,
os_vif_util.nova_to_osvif_vif,
vif)
| 37.769826 | 78 | 0.497439 | 3,774 | 39,054 | 4.930313 | 0.070747 | 0.048906 | 0.033106 | 0.038695 | 0.892245 | 0.879239 | 0.861235 | 0.849626 | 0.83087 | 0.811738 | 0 | 0.137572 | 0.400676 | 39,054 | 1,033 | 79 | 37.806389 | 0.657396 | 0.016746 | 0 | 0.823789 | 0 | 0 | 0.151808 | 0.0823 | 0 | 0 | 0 | 0 | 0.034141 | 1 | 0.037445 | false | 0 | 0.007709 | 0 | 0.046256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0acd77bafc8870c4976906571d54bf9d56929189 | 53,923 | py | Python | waferscreen/inst_control/inactive/lakeshore370.py | chw3k5/WaferScreen | c0ca7fe939fe7cd0b722b7d6129b148c03a7505c | [
"Apache-2.0"
] | 1 | 2021-07-30T19:06:07.000Z | 2021-07-30T19:06:07.000Z | waferscreen/inst_control/inactive/lakeshore370.py | chw3k5/WaferScreen | c0ca7fe939fe7cd0b722b7d6129b148c03a7505c | [
"Apache-2.0"
] | 8 | 2021-04-22T20:47:48.000Z | 2021-07-30T19:06:01.000Z | waferscreen/inst_control/inactive/lakeshore370.py | chw3k5/WaferScreen | c0ca7fe939fe7cd0b722b7d6129b148c03a7505c | [
"Apache-2.0"
] | null | null | null | '''
Lakeshore370
Created on Mar 11, 2009
@author: bennett
'''
import gpib_instrument
from lookup import Lookup
from time import sleep
import math
import numpy
#from scipy.io import read_array #obsolete, replace with numpy.genfromtxt
import pylab
import scipy
from scipy.interpolate import interp1d
class Lakeshore370(gpib_instrument.Gpib_Instrument):
'''
The Lakeshore 370 AC Bridge GPIB communication class
'''
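    # Minimal usage sketch (GPIB primary address 12 below is hypothetical; use
    # the address configured on your bridge):
    #   ls = Lakeshore370(pad=12)
    #   temp_k = ls.getTemperature(channel=1)
    #   ls.setTemperatureSetPoint(setpoint=0.100)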
def __init__(self, pad, board_number = 0, name = '', sad = 0, timeout = 13, send_eoi = 1, eos_mode = 0):
        '''Constructor. The PAD (Primary GPIB Address) is the only required parameter.'''
super(Lakeshore370, self).__init__(board_number, name, pad, sad, timeout, send_eoi, eos_mode)
# GPIB identity string of the instrument
self.id_string = "LSCI,MODEL370,370447,09272005"
self.manufacturer = 'Lakeshore'
self.model_number = '370'
self.description = 'Bridge - Temperature Controller'
self.voltage = None
#self.compare_identity()
self.control_mode_switch = Lookup({
'closed' : '1',
'zone' : '2',
'open' : '3',
'off' : '4'
})
self.on_off_switch = Lookup({
'off' : '0',
'on' : '1'
})
def getTemperature(self, channel=1):
''' Get temperature from a given channel as a float '''
commandstring = 'RDGK? ' + str(channel)
#result = self.ask(commandstring)
#self.voltage = float(result)
self.voltage = self.askFloat(commandstring)
return self.voltage
def getResistance(self, channel=1):
'''Get resistance from a given channel as a float.'''
commandstring = 'RDGR? ' + str(channel)
# result = self.ask(commandstring)
# resistance = float(result)
resistance = self.askFloat(commandstring)
return resistance
def setControlMode(self, controlmode = 'off'):
''' Set control mode 'off', 'zone', 'open' or 'closed' '''
#switch = {
# 'closed' : '1',
# 'zone' : '2',
# 'open' : '3',
# 'off' : '4'
#}
#commandstring = 'CMODE ' + switch.get(controlmode,'4')
commandstring = 'CMODE ' + self.control_mode_switch.get(controlmode,'4')
self.write(commandstring)
def getControlMode(self):
''' Get control mode 'off', 'zone', 'open' or 'closed' '''
#switch = {
# '1' : 'closed',
# '2' : 'zone',
# '3' : 'open',
# '4' : 'off'
#}
commandstring = 'CMODE?'
result = self.ask(commandstring)
#mode = switch.get(result, 'com error')
mode = self.control_mode_switch.get_key(result)
return mode[0]
def setPIDValues(self, P=1, I=1, D=0):
        ''' Set P, I and D values where I and D are in units of seconds '''
commandstring = 'PID ' + str(P) + ', ' + str(I) + ', ' + str(D)
self.write(commandstring)
def getPIDValues(self):
        '''Returns P, I and D values as floats where I and D have units of seconds '''
commandstring = 'PID?'
result = self.ask(commandstring)
valuestrings = result.split(',')
PIDvalues = [0,0,0]
PIDvalues[0] = float(valuestrings[0])
PIDvalues[1] = float(valuestrings[1])
PIDvalues[2] = float(valuestrings[2])
return PIDvalues
def setManualHeaterOut(self, heatpercent=0):
''' Set the manual heater output as a percent of heater range '''
commandstring = 'MOUT ' + str(heatpercent)
self.write(commandstring)
def getManualHeaterOut(self):
''' Get the manual heater output as a percent of heater range '''
commandstring = 'MOUT?'
result = self.ask(commandstring)
heaterout = float(result)
return heaterout
def getHeaterOut(self):
        ''' Get the actual heater output (HTR? query) as a percent of heater range '''
commandstring = 'HTR?'
result = self.ask(commandstring)
heaterout = float(result)
return heaterout
def setTemperatureSetPoint(self, setpoint=0.010):
''' Set the temperature set point in units of Kelvin '''
commandstring = 'SETP ' + str(setpoint)
self.write(commandstring)
def getTemperatureSetPoint(self):
''' Get the temperature set point in units of Kelvin '''
commandstring = 'SETP?'
result = self.ask(commandstring)
setpoint = float(result)
return setpoint
def setHeaterRange(self, range=10):
''' Set the temperature heater range in units of mA '''
if range >= 0.0316 and range < .1:
rangestring = '1'
elif range >= .1 and range < .316:
rangestring = '2'
elif range >= .316 and range < 1:
rangestring = '3'
elif range >= 1 and range < 3.16:
rangestring = '4'
elif range >= 3.16 and range < 10:
rangestring = '5'
elif range >= 10 and range < 31.6:
rangestring = '6'
elif range >= 31.6 and range < 100:
rangestring = '7'
elif range >= 100 and range < 316:
rangestring = '8'
else:
rangestring = '0'
commandstring = 'HTRRNG ' + str(rangestring)
result = self.write(commandstring)
def getHeaterRange(self):
''' Get the temperature heater range in units of mA '''
switch = {
'0' : 0,
'1' : 0,
'2' : 0.100,
'3' : 0.316,
'4' : 1,
'5' : 3.16,
'6' : 10,
'7' : 31.6,
'8' : 100
}
commandstring = 'HTRRNG?'
result = self.ask(commandstring)
htrrange = switch.get(result , 'com error')
return htrrange
def setControlPolarity(self, polarity = 'unipolar'):
''' Set the heater output polarity 'unipolar' or 'bipolar' '''
switch = {
'unipolar' : '0',
'bipolar' : '1'
}
commandstring = 'CPOL ' + switch.get(polarity,'0')
self.write(commandstring)
def getControlPolarity(self):
''' Get the heater output polarity 'unipolar' or 'bipolar' '''
switch = {
'0' : 'unipolar',
'1' : 'bipolar'
}
commandstring = 'CPOL?'
result = self.ask(commandstring)
polarity = switch.get(result , 'com error')
return polarity
def setScan(self, channel = 1, autoscan = 'off'):
''' Set the channel autoscanner 'on' or 'off' '''
switch = {
'off' : '0',
'on' : '1'
}
commandstring = 'SCAN ' + str(channel) + ', ' + switch.get(autoscan,'0')
self.write(commandstring)
def setRamp(self, rampmode = 'on' , ramprate = 0.1):
''' Set the ramp mode to 'on' or 'off' and specify ramp rate in Kelvin/minute'''
switch = {
'off' : '0',
'on' : '1'
}
commandstring = 'RAMP ' + switch.get(rampmode,'1') + ', ' + str(ramprate)
self.write(commandstring)
def getRamp(self):
''' Get the ramp mode either 'on' or 'off' and the ramp rate in Kelvin/minute '''
switch = {
'0' : 'off',
'1' : 'on'
}
commandstring = 'RAMP?'
result = self.ask(commandstring)
results = result.split(',')
ramp = ['off', 0]
ramp[0] = switch.get(results[0] , 'com error')
ramp[1] = float(results[1])
return ramp
def setTemperatureControlSetup(self, channel = 1, units = 'Kelvin', maxrange = 10, delay = 2, htrres = 1, output = 'current', filterread = 'unfiltered'):
'''
        Setup the temperature control channel: units 'Kelvin' or 'Ohms', the maximum heater range in mA, delay in seconds, heater resistance in Ohms, output in 'current' or 'power', and 'filtered' or 'unfiltered' readings
'''
switchunits = {
'Kelvin' : '1',
'Ohms' : '2'
}
if maxrange >= 0.0316 and maxrange < .1:
rangestring = '1'
elif maxrange >= .1 and maxrange < .316:
rangestring = '2'
elif maxrange >= .316 and maxrange < 1:
rangestring = '3'
elif maxrange >= 1 and maxrange < 3.16:
rangestring = '4'
elif maxrange >= 3.16 and maxrange < 10:
rangestring = '5'
elif maxrange >= 10 and maxrange < 31.6:
rangestring = '6'
elif maxrange >= 31.6 and maxrange < 100:
rangestring = '7'
elif maxrange >= 100 and maxrange < 316:
rangestring = '8'
else:
rangestring = '0'
switchoutput = {
'current' : '1',
'power' : '2'
}
switchfilter = {
'unfiltered' : '0',
'filtered' : '1'
}
commandstring = 'CSET ' + str(channel) + ', ' + switchfilter.get(filterread,'0') + ', ' + switchunits.get(units,'1') + ', ' + str(delay) + ', ' + switchoutput.get(output,'1') + ', ' + rangestring + ', ' + str(htrres)
self.write(commandstring)
def setReadChannelSetup(self, channel = 1, mode = 'current', exciterange = 10e-9, resistancerange = 63.2e3,autorange = 'off', excitation = 'on'):
'''
        Sets the measurement parameters for a given channel: 'current' or 'voltage' excitation mode, excitation range in Amps or Volts, resistance range in Ohms
'''
switchmode = {
'voltage' : '0',
'current' : '1'
}
switchautorange = {
'off' : '0',
'on' : '1'
}
switchexcitation = {
'on' : '0',
'off' : '1'
}
#Get Excitation Range String
if mode == 'voltage':
if exciterange >= 2e-6 and exciterange < 6.32e-6:
exciterangestring = '1'
elif exciterange >= 6.32e-6 and exciterange < 20e-6:
exciterangestring = '2'
elif exciterange >= 20e-6 and exciterange < 63.2e-6:
exciterangestring = '3'
elif exciterange >= 63.2e-6 and exciterange < 200e-6:
exciterangestring = '4'
elif exciterange >= 200e-6 and exciterange < 632e-6:
exciterangestring = '5'
elif exciterange >= 632e-6 and exciterange < 2e-3:
exciterangestring = '6'
elif exciterange >= 2e-3 and exciterange < 6.32e-3:
exciterangestring = '7'
elif exciterange >= 6.32e-3 and exciterange < 20e-3:
exciterangestring = '8'
else:
exciterangestring = '1'
else:
if exciterange >= 1e-12 and exciterange < 3.16e-12:
exciterangestring = '1'
elif exciterange >= 3.16e-12 and exciterange < 10e-12:
exciterangestring = '2'
elif exciterange >= 10e-12 and exciterange < 31.6e-12:
exciterangestring = '3'
elif exciterange >= 31.6e-12 and exciterange < 100e-12:
exciterangestring = '4'
elif exciterange >= 100e-12 and exciterange < 316e-12:
exciterangestring = '5'
elif exciterange >= 316e-12 and exciterange < 1e-9:
exciterangestring = '6'
elif exciterange >= 1e-9 and exciterange < 3.16e-9:
exciterangestring = '7'
elif exciterange >= 3.16e-9 and exciterange < 10e-9:
exciterangestring = '8'
elif exciterange >= 10e-9 and exciterange < 31.6e-9:
exciterangestring = '9'
elif exciterange >= 31.6e-9 and exciterange < 100e-9:
exciterangestring = '10'
elif exciterange >= 100e-9 and exciterange < 316e-9:
exciterangestring = '11'
elif exciterange >= 316e-9 and exciterange < 1e-6:
exciterangestring = '12'
elif exciterange >= 1e-6 and exciterange < 3.16e-6:
exciterangestring = '13'
elif exciterange >= 3.16e-6 and exciterange < 10e-6:
exciterangestring = '14'
elif exciterange >= 10e-6 and exciterange < 31.6e-6:
exciterangestring = '15'
elif exciterange >= 31.6e-6 and exciterange < 100e-6:
exciterangestring = '16'
elif exciterange >= 100e-6 and exciterange < 316e-6:
exciterangestring = '17'
elif exciterange >= 316e-6 and exciterange < 1e-3:
exciterangestring = '18'
elif exciterange >= 1e-3 and exciterange < 3.16e-3:
exciterangestring = '19'
elif exciterange >= 3.16e-3 and exciterange < 10e-3:
exciterangestring = '20'
elif exciterange >= 10e-3 and exciterange < 31.6e-3:
exciterangestring = '21'
elif exciterange >= 31.6e-3 and exciterange < 100e-3:
exciterangestring = '22'
else:
exciterangestring = '7'
#Get Resistance Range String
if resistancerange < 2e-3:
resistancerangestring= '1'
elif resistancerange > 2e-3 and resistancerange <= 6.32e-3:
resistancerangestring = '2'
elif resistancerange > 6.32e-3 and resistancerange <= 20e-3:
resistancerangestring = '3'
elif resistancerange > 20e-3 and resistancerange <= 63.2e-3:
resistancerangestring = '4'
elif resistancerange > 63.2e-3 and resistancerange <= 200e-3:
resistancerangestring = '5'
elif resistancerange > 200e-3 and resistancerange <= 632e-3:
resistancerangestring = '6'
elif resistancerange > 632e-3 and resistancerange <= 2.0:
resistancerangestring = '7'
elif resistancerange > 2.0 and resistancerange <= 6.32:
resistancerangestring = '8'
elif resistancerange > 6.32 and resistancerange <= 20:
resistancerangestring = '9'
elif resistancerange > 20 and resistancerange <= 63.2:
resistancerangestring = '10'
elif resistancerange > 63.2 and resistancerange <= 200:
resistancerangestring = '11'
elif resistancerange > 200 and resistancerange <= 632:
resistancerangestring = '12'
elif resistancerange > 632 and resistancerange <= 2e3:
resistancerangestring = '13'
elif resistancerange > 2e3 and resistancerange <= 6.32e3:
resistancerangestring = '14'
elif resistancerange > 6.32e3 and resistancerange <= 20e3:
resistancerangestring = '15'
elif resistancerange > 20e3 and resistancerange <= 63.2e3:
resistancerangestring = '16'
elif resistancerange > 63.2e3 and resistancerange <= 200e3:
resistancerangestring = '17'
elif resistancerange > 200e3 and resistancerange <= 632e3:
resistancerangestring = '18'
elif resistancerange > 632e3 and resistancerange <= 2e6:
resistancerangestring = '19'
elif resistancerange > 2e6 and resistancerange <= 6.32e6:
resistancerangestring = '20'
elif resistancerange > 6.32e6 and resistancerange <= 20e6:
resistancerangestring = '21'
elif resistancerange > 20e6 and resistancerange <= 63.2e6:
resistancerangestring = '22'
elif resistancerange > 63.2e6 and resistancerange <= 200e6:
resistancerangestring = '23'
else:
resistancerangestring = '1'
#Send Resistance Range Command String
commandstring = 'RDGRNG ' + str(channel) + ', ' + switchmode.get(mode,'1') + ', ' + exciterangestring + ',' + resistancerangestring + ',' + switchautorange.get(autorange,'0') + ', ' + switchexcitation.get(excitation,'0')
self.write(commandstring)
def getHeaterStatus(self):
switch = {
'0' : 'no error',
'1' : 'heater open error'
}
commandstring = 'HTRST?'
result = self.ask(commandstring)
status = switch.get(result, 'com error')
return status
def magUpSetup(self, heater_resistance=1):
''' Setup the lakeshore for magup '''
self.setTemperatureControlSetup(channel=1, units='Kelvin', maxrange=10, delay=2, htrres=heater_resistance, output='current', filterread='unfiltered')
self.setControlMode(controlmode = 'open')
self.setControlPolarity(polarity = 'unipolar')
self.setHeaterRange(range=10) # 1 Volt max input to Kepco for 100 Ohm shunt
self.setReadChannelSetup(channel = 1, mode = 'current', exciterange = 10e-9, resistancerange = 2e3,autorange = 'on')
def demagSetup(self, heater_resistance=1):
''' Setup the lakeshore for demag '''
self.setTemperatureControlSetup(channel=1, units='Kelvin', maxrange=10, delay=2, htrres=heater_resistance, output='current', filterread='unfiltered')
self.setControlMode(controlmode = 'open')
self.setControlPolarity(polarity = 'bipolar') #Set to bipolar so that current can get to zero faster
self.setHeaterRange(range=10) # 1 Volt max input to Kepco for 100 Ohm shunt
self.setReadChannelSetup(channel = 1, mode = 'current', exciterange = 10e-9, resistancerange = 2e3,autorange = 'on')
def setupPID(self, exciterange=3.16e-9, therm_control_channel=1, ramprate=0.05, heater_resistance=1):
'''Setup the lakeshore for temperature regulation '''
self.setScan(channel = therm_control_channel, autoscan = 'off')
sleep(3)
self.setReadChannelSetup(channel=1, mode='current', exciterange=exciterange, resistancerange=63.2e3,autorange='on')
sleep(15) #Give time for range to settle, or servoing will fail
self.setReadChannelSetup(channel=1, mode='current', exciterange=exciterange, resistancerange=63.2e3,autorange='off')
sleep(2)
self.setTemperatureControlSetup(channel=1, units='Kelvin', maxrange=100, delay=2, htrres=heater_resistance, output='current', filterread='unfiltered')
self.setControlMode(controlmode = 'closed')
self.setControlPolarity(polarity = 'unipolar')
        self.setRamp(rampmode = 'off')  # Turn off ramp mode so the setpoint is not ramped down to approx 0
sleep(.5) #Give time for Set Ramp to take effect
        self.setTemperatureSetPoint(setpoint=0.035)
sleep(.5) #Give time for Setpoint to take effect
self.setRamp(rampmode = 'on' , ramprate = ramprate)
self.setHeaterRange(range=100) #Set heater range to 100mA to get 10V output range
#self.SetReadChannelSetup(channel = 1, mode = 'current', exciterange = 1e-9, resistancerange = 2e3,autorange = 'on')
# Public Calibration Methods
def sendStandardRuOxCalibration(self):
pass
def sendCalibrationFromArrays(self, rData, tData, curveindex, thermname='Cernox 1030', serialnumber='x0000',\
temp_lim=300, tempco = 1, units=4, makeFig = False):
        ''' Send a calibration based on input arrays of resistance and temperature
Input:
rData: array of themometer resistance values (Ohms for units=3 or log(R/Ohms) for units=4)
tData: array of themometer temperature values (Kelvin)
curveindex: the curve index location in the lakeshore 370 (1 thru 20)
thermname: sensor type
serialnumber: thermometer serial number
            makeFig: if True, plot the calibration points that were sent
            Note: the caller must ensure no duplicate points and < 200 pts
temp_lim: temperature limit (K)
tempco: 1 if dR/dT is negative, 2 if dR/dT is positive
units: 3 to use ohm/K, 4 to use log ohm/K
'''
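        # Example (hypothetical data; rData must already be log10(R) when units=4):
        #   therm.sendCalibrationFromArrays(numpy.log10(r_ohms), t_kelvin, curveindex=5)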
if curveindex < 1 or curveindex > 20:
            print(' 1 <= curveindex <= 20 for lakeshore 370')
return 1
# Send Header
# 4, 350 ,1 -- logOhm/K, temperature limit, temperature coefficient 1=negative
commandstring = 'CRVHDR ' + str(curveindex) + ', ' + thermname + ', ' + serialnumber + ', '+str(units)+', '+\
str(temp_lim)+', '+ str(tempco)
self.write(commandstring)
print(commandstring)
# Send Data Points
for i in range(len(rData)):
pntindex = i+1
if rData[i] < 10:
stringRPoint = '%7.5f' % rData[i]
else:
stringRPoint = '%8.5f' % rData[i]
stringTPoint = '%7.5f' % tData[i]
datapointstring = 'CRVPT ' + str(curveindex) + ', ' + str(pntindex) + ', ' + stringRPoint + ', ' + stringTPoint
self.write(datapointstring)
            print(datapointstring)
if makeFig:
pylab.figure()
pylab.plot(rData,tData,'o')
pylab.xlabel('Resistance (Ohms)')
pylab.ylabel('Temperature (K)')
def sendCalibration(self, filename, datacol, tempcol, curveindex, thermname='Cernox 1030', serialnumber='x0000', interp=True,\
temp_lim=300, tempco = 1, units=4):
        ''' Send a calibration based on an input file
Input:
filename: location of calibration file
datacol: defines which column in filename will be used as data (zero indexed)
tempcol: defines which column in filename will be used as temperature (zero indexed)
curveindex: the curve index location in the lakeshore 370 (1 thru 20)
thermname: sensor type
serialnumber: thermometer serial number
interp: if True the data will be evenly spaced from the max to min with 200 pts
if False the raw data is used. User must ensure no doubles and < 200 pts
temp_lim: temperature limit (K)
tempco: 1 if dR/dT is negative, 2 if dR/dT is positive
units: 3 to use ohm/K, 4 to use log ohm/K
'''
if curveindex < 1 or curveindex > 20:
            print(' 1 <= curveindex <= 20 for lakeshore 370')
return 1
#rawdata = read_array(filename) #obsolete, replace with genfromtxt
rawdata = numpy.genfromtxt(filename)
rawdatat = rawdata.transpose()
datat = numpy.array(rawdatat[:,rawdatat[datacol,:].argsort()])
#now remove doubles
last = datat[datacol,-1]
for i in range(len(datat[datacol,:])-2,-1,-1):
if last == datat[1,i]:
datat = numpy.hstack((datat[:,: i+1],datat[:,i+2 :]))
else:
last = datat[datacol,i]
pylab.figure()
pylab.plot(datat[datacol],datat[tempcol],'o')
pylab.show()
f = interp1d(datat[datacol],datat[tempcol])
self.f = f
# interpolate from min to max with 200 evenly spaced points if interp True
if interp:
Rs = scipy.linspace(min(datat[datacol]),max(datat[datacol]), num = 200)
else:
Rs = datat[datacol]
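        # NOTE: the hard-coded resistance overrides below appear to be residue
        # from one specific calibration run; verify or remove before general use.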
Rs[1] = 2730
Rs[2] = 2930
Rs[3] = 3100
Temps = f(Rs)
pylab.figure()
pylab.plot(datat[datacol],datat[tempcol],'o')
#pylab.holdon()
pylab.plot(Rs,Temps,'rx')
pylab.show()
# Send Header
# 4, 350 ,1 -- logOhm/K, temperature limit, temperature coefficient 1=negative
commandstring = 'CRVHDR ' + str(curveindex) + ', ' + thermname + ', ' + serialnumber + ', '+str(units)+', '+\
str(temp_lim)+', '+ str(tempco)
self.write(commandstring)
        print(commandstring)
# Send Data Points
for i in range(len(Rs)):
pntindex = i+1
if units == 4:
logrofpoint = math.log10(Rs[i])
else:
logrofpoint = Rs[i]
if Rs[i] < 10:
stringlogrofpoint = '%(logrofpoint)7.5f' % vars()
else:
stringlogrofpoint = '%(logrofpoint)8.5f' % vars()
tempofpoint = Temps[i]
stringtempofpoint = '%(tempofpoint)5.5f' % vars()
datapointstring = 'CRVPT ' + str(curveindex) + ', ' + str(pntindex) + ', ' + stringlogrofpoint + ', ' + stringtempofpoint
self.write(datapointstring)
            print(datapointstring)
pylab.figure()
pylab.plot(Rs,Temps,'o')
pylab.xlabel('Resistance (Ohms)')
pylab.ylabel('Temperature (K)')
def sendMartinisRuOxCalibration(self, curveindex, thermname='RuOx Martinis', serialnumber='19740', interp=True,\
temp_lim=300, tempco = 1, units=4):
self.sendMartinisCalibration(curveindex, thermname, serialnumber, interp, temp_lim, tempco, units)
def sendMartinisCalibration(self, curveindex, thermname='RuOx Martinis', serialnumber='19740', interp=True,\
temp_lim=300, tempco = 1, units=4):
        ''' Send the built-in Martinis RuOx calibration curve (no input file is read)
        Input:
            curveindex: the curve index location in the lakeshore 370 (1 thru 20)
            thermname: sensor type
            serialnumber: thermometer serial number
            interp: accepted for interface compatibility; the curve is always
                built from 200 evenly spaced resistance points
temp_lim: temperature limit (K)
tempco: 1 if dR/dT is negative, 2 if dR/dT is positive
units: 3 to use ohm/K, 4 to use log ohm/K
'''
if curveindex < 1 or curveindex > 20:
            print(' 1 <= curveindex <= 20 for lakeshore 370')
return 1
#rawdata = read_array(filename)
#rawdatat = rawdata.transpose()
#datat = numpy.array(rawdatat[:,rawdatat[datacol,:].argsort()])
#now remove doubles
#last = datat[datacol,-1]
#for i in range(len(datat[datacol,:])-2,-1,-1):
# if last == datat[1,i]:
# datat = numpy.hstack((datat[:,: i+1],datat[:,i+2 :]))
# else:
# last = datat[datacol,i]
#pylab.figure()
#pylab.plot(datat[datacol],datat[tempcol],'o')
#pylab.show()
#f = interp1d(datat[datacol],datat[tempcol])
#self.f = f
# interpolate from min to max with 200 evenly spaced points if interp True
#if interp:
#Rs = scipy.linspace(min(datat[datacol]),max(datat[datacol]), num = 200)
Rs = scipy.linspace(1000.92541, 63095.734448, num=200)
#else:
# Rs = datat[datacol]
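        # Invert the RuOx fit R = 652 + 100*exp(2.85/T**0.25) (R in Ohms, T in K) to get T(R)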
Temps = (2.85 / (numpy.log((Rs-652.)/100.)))**4
# pylab.figure()
# #pylab.plot(datat[datacol],datat[tempcol],'o')
# #pylab.holdon()
# pylab.plot(Rs,Temps,'rx')
# pylab.show()
# Send Header
# 4, 350 ,1 -- logOhm/K, temperature limit, temperature coefficient 1=negative
commandstring = 'CRVHDR ' + str(curveindex) + ', ' + thermname + ', ' + serialnumber + ', '+str(units)+', '+\
str(temp_lim)+', '+ str(tempco)
self.write(commandstring)
        print(commandstring)
# Send Data Points
for i in range(len(Rs)):
pntindex = i+1
logrofpoint = math.log10(Rs[i])
if Rs[i] < 10:
stringlogrofpoint = '%(logrofpoint)7.5f' % vars()
else:
stringlogrofpoint = '%(logrofpoint)8.5f' % vars()
tempofpoint = Temps[i]
stringtempofpoint = '%(tempofpoint)7.5f' % vars()
datapointstring = 'CRVPT ' + str(curveindex) + ', ' + str(pntindex) + ', ' + stringlogrofpoint + ', ' + stringtempofpoint
self.write(datapointstring)
            print(datapointstring)
pylab.figure()
pylab.plot(Rs,Temps,'o')
pylab.xlabel('Resistance (Ohms)')
pylab.ylabel('Temperature (K)')
pylab.show()
    # All these methods have been renamed and will be deprecated eventually
def GetTemperature(self, channel=1):
''' Get temperature from a given channel as a float '''
commandstring = 'RDGK? ' + str(channel)
#result = self.ask(commandstring)
#self.voltage = float(result)
self.voltage = self.askFloat(commandstring)
return self.voltage
def GetResistance(self, channel=1):
'''Get resistance from a given channel as a float.'''
commandstring = 'RDGR? ' + str(channel)
# result = self.ask(commandstring)
# resistance = float(result)
resistance = self.askFloat(commandstring)
return resistance
def SetControlMode(self, controlmode = 'off'):
''' Set control mode 'off', 'zone', 'open' or 'closed' '''
#switch = {
# 'closed' : '1',
# 'zone' : '2',
# 'open' : '3',
# 'off' : '4'
#}
#commandstring = 'CMODE ' + switch.get(controlmode,'4')
commandstring = 'CMODE ' + self.control_mode_switch.get(controlmode,'4')
self.write(commandstring)
def GetControlMode(self):
''' Get control mode 'off', 'zone', 'open' or 'closed' '''
#switch = {
# '1' : 'closed',
# '2' : 'zone',
# '3' : 'open',
# '4' : 'off'
#}
commandstring = 'CMODE?'
result = self.ask(commandstring)
#mode = switch.get(result, 'com error')
mode = self.control_mode_switch.get_key(result)
return mode[0]
def SetPIDValues(self, P=1, I=1, D=0):
        ''' Set P, I and D values where I and D are in units of seconds '''
commandstring = 'PID ' + str(P) + ', ' + str(I) + ', ' + str(D)
self.write(commandstring)
def GetPIDValues(self):
        '''Returns P, I and D values as floats where I and D have units of seconds '''
commandstring = 'PID?'
result = self.ask(commandstring)
valuestrings = result.split(',')
PIDvalues = [0,0,0]
PIDvalues[0] = float(valuestrings[0])
PIDvalues[1] = float(valuestrings[1])
PIDvalues[2] = float(valuestrings[2])
return PIDvalues
def SetManualHeaterOut(self, heatpercent=0):
''' Set the manual heater output as a percent of heater range '''
commandstring = 'MOUT ' + str(heatpercent)
self.write(commandstring)
def GetManualHeaterOut(self):
''' Get the manual heater output as a percent of heater range '''
commandstring = 'MOUT?'
result = self.ask(commandstring)
heaterout = float(result)
return heaterout
def GetHeaterOut(self):
        ''' Get the actual heater output (HTR? query) as a percent of heater range '''
commandstring = 'HTR?'
result = self.ask(commandstring)
heaterout = float(result)
return heaterout
def SetTemperatureSetPoint(self, setpoint=0.010):
''' Set the temperature set point in units of Kelvin '''
commandstring = 'SETP ' + str(setpoint)
self.write(commandstring)
def GetTemperatureSetPoint(self):
''' Get the temperature set point in units of Kelvin '''
commandstring = 'SETP?'
result = self.ask(commandstring)
setpoint = float(result)
return setpoint
def SetHeaterRange(self, range=10):
''' Set the temperature heater range in units of mA '''
if range >= 0.0316 and range < .1:
rangestring = '1'
elif range >= .1 and range < .316:
rangestring = '2'
elif range >= .316 and range < 1:
rangestring = '3'
elif range >= 1 and range < 3.16:
rangestring = '4'
elif range >= 3.16 and range < 10:
rangestring = '5'
elif range >= 10 and range < 31.6:
rangestring = '6'
elif range >= 31.6 and range < 100:
rangestring = '7'
elif range >= 100 and range < 316:
rangestring = '8'
else:
rangestring = '0'
commandstring = 'HTRRNG ' + str(rangestring)
result = self.write(commandstring)
def GetHeaterRange(self):
''' Get the temperature heater range in units of mA '''
switch = {
'0' : 0,
'1' : 0,
'2' : 0.100,
'3' : 0.316,
'4' : 1,
'5' : 3.16,
'6' : 10,
'7' : 31.6,
'8' : 100
}
commandstring = 'HTRRNG?'
result = self.ask(commandstring)
htrrange = switch.get(result , 'com error')
return htrrange
def SetControlPolarity(self, polarity = 'unipolar'):
''' Set the heater output polarity 'unipolar' or 'bipolar' '''
switch = {
'unipolar' : '0',
'bipolar' : '1'
}
commandstring = 'CPOL ' + switch.get(polarity,'0')
self.write(commandstring)
def GetControlPolarity(self):
''' Get the heater output polarity 'unipolar' or 'bipolar' '''
switch = {
'0' : 'unipolar',
'1' : 'bipolar'
}
commandstring = 'CPOL?'
result = self.ask(commandstring)
polarity = switch.get(result , 'com error')
return polarity
def SetScan(self, channel = 1, autoscan = 'off'):
''' Set the channel autoscanner 'on' or 'off' '''
switch = {
'off' : '0',
'on' : '1'
}
commandstring = 'SCAN ' + str(channel) + ', ' + switch.get(autoscan,'0')
self.write(commandstring)
def SetRamp(self, rampmode = 'on' , ramprate = 0.1):
''' Set the ramp mode to 'on' or 'off' and specify ramp rate in Kelvin/minute'''
switch = {
'off' : '0',
'on' : '1'
}
commandstring = 'RAMP ' + switch.get(rampmode,'1') + ', ' + str(ramprate)
self.write(commandstring)
def GetRamp(self):
''' Get the ramp mode either 'on' or 'off' and the ramp rate in Kelvin/minute '''
switch = {
'0' : 'off',
'1' : 'on'
}
commandstring = 'RAMP?'
result = self.ask(commandstring)
results = result.split(',')
ramp = ['off', 0]
ramp[0] = switch.get(results[0] , 'com error')
ramp[1] = float(results[1])
return ramp
def SetTemperatureControlSetup(self, channel = 1, units = 'Kelvin', maxrange = 10, delay = 2, htrres = 1, output = 'current', filterread = 'unfiltered'):
'''
        Setup the temperature control channel: units 'Kelvin' or 'Ohms', the maximum heater range in mA, delay in seconds, heater resistance in Ohms, output in 'current' or 'power', and 'filtered' or 'unfiltered' readings
'''
switchunits = {
'Kelvin' : '1',
'Ohms' : '2'
}
if maxrange >= 0.0316 and maxrange < .1:
rangestring = '1'
elif maxrange >= .1 and maxrange < .316:
rangestring = '2'
elif maxrange >= .316 and maxrange < 1:
rangestring = '3'
elif maxrange >= 1 and maxrange < 3.16:
rangestring = '4'
elif maxrange >= 3.16 and maxrange < 10:
rangestring = '5'
elif maxrange >= 10 and maxrange < 31.6:
rangestring = '6'
elif maxrange >= 31.6 and maxrange < 100:
rangestring = '7'
elif maxrange >= 100 and maxrange < 316:
rangestring = '8'
else:
rangestring = '0'
switchoutput = {
'current' : '1',
'power' : '2'
}
switchfilter = {
'unfiltered' : '0',
'filtered' : '1'
}
commandstring = 'CSET ' + str(channel) + ', ' + switchfilter.get(filterread,'0') + ', ' + switchunits.get(units,'1') + ', ' + str(delay) + ', ' + switchoutput.get(output,'1') + ', ' + rangestring + ', ' + str(htrres)
self.write(commandstring)
def SetReadChannelSetup(self, channel = 1, mode = 'current', exciterange = 10e-9, resistancerange = 63.2e3,autorange = 'off', excitation = 'on'):
'''
        Sets the measurement parameters for a given channel: 'current' or 'voltage' excitation mode, excitation range in Amps or Volts, resistance range in Ohms
'''
switchmode = {
'voltage' : '0',
'current' : '1'
}
switchautorange = {
'off' : '0',
'on' : '1'
}
switchexcitation = {
'on' : '0',
'off' : '1'
}
#Get Excitation Range String
if mode == 'voltage':
if exciterange >= 2e-6 and exciterange < 6.32e-6:
exciterangestring = '1'
elif exciterange >= 6.32e-6 and exciterange < 20e-6:
exciterangestring = '2'
elif exciterange >= 20e-6 and exciterange < 63.2e-6:
exciterangestring = '3'
elif exciterange >= 63.2e-6 and exciterange < 200e-6:
exciterangestring = '4'
elif exciterange >= 200e-6 and exciterange < 632e-6:
exciterangestring = '5'
elif exciterange >= 632e-6 and exciterange < 2e-3:
exciterangestring = '6'
elif exciterange >= 2e-3 and exciterange < 6.32e-3:
exciterangestring = '7'
elif exciterange >= 6.32e-3 and exciterange < 20e-3:
exciterangestring = '8'
else:
exciterangestring = '1'
else:
if exciterange >= 1e-12 and exciterange < 3.16e-12:
exciterangestring = '1'
elif exciterange >= 3.16e-12 and exciterange < 10e-12:
exciterangestring = '2'
elif exciterange >= 10e-12 and exciterange < 31.6e-12:
exciterangestring = '3'
elif exciterange >= 31.6e-12 and exciterange < 100e-12:
exciterangestring = '4'
elif exciterange >= 100e-12 and exciterange < 316e-12:
exciterangestring = '5'
elif exciterange >= 316e-12 and exciterange < 1e-9:
exciterangestring = '6'
elif exciterange >= 1e-9 and exciterange < 3.16e-9:
exciterangestring = '7'
elif exciterange >= 3.16e-9 and exciterange < 10e-9:
exciterangestring = '8'
elif exciterange >= 10e-9 and exciterange < 31.6e-9:
exciterangestring = '9'
elif exciterange >= 31.6e-9 and exciterange < 100e-9:
exciterangestring = '10'
elif exciterange >= 100e-9 and exciterange < 316e-9:
exciterangestring = '11'
elif exciterange >= 316e-9 and exciterange < 1e-6:
exciterangestring = '12'
elif exciterange >= 1e-6 and exciterange < 3.16e-6:
exciterangestring = '13'
elif exciterange >= 3.16e-6 and exciterange < 10e-6:
exciterangestring = '14'
elif exciterange >= 10e-6 and exciterange < 31.6e-6:
exciterangestring = '15'
elif exciterange >= 31.6e-6 and exciterange < 100e-6:
exciterangestring = '16'
elif exciterange >= 100e-6 and exciterange < 316e-6:
exciterangestring = '17'
elif exciterange >= 316e-6 and exciterange < 1e-3:
exciterangestring = '18'
elif exciterange >= 1e-3 and exciterange < 3.16e-3:
exciterangestring = '19'
elif exciterange >= 3.16e-3 and exciterange < 10e-3:
exciterangestring = '20'
elif exciterange >= 10e-3 and exciterange < 31.6e-3:
exciterangestring = '21'
elif exciterange >= 31.6e-3 and exciterange < 100e-3:
exciterangestring = '22'
else:
exciterangestring = '7'
#Get Resistance Range String
if resistancerange >= 2e-3 and resistancerange < 6.32e-3:
resistancerangestring = '1'
elif resistancerange >= 6.32e-3 and resistancerange < 20e-3:
resistancerangestring = '2'
elif resistancerange >= 20e-3 and resistancerange < 63.2e-3:
resistancerangestring = '3'
elif resistancerange >= 63.2e-3 and resistancerange < 200e-3:
resistancerangestring = '4'
elif resistancerange >= 200e-3 and resistancerange < 632e-3:
resistancerangestring = '5'
elif resistancerange >= 632e-3 and resistancerange < 2.0:
resistancerangestring = '6'
elif resistancerange >= 2.0 and resistancerange < 6.32:
resistancerangestring = '7'
elif resistancerange >= 6.32 and resistancerange < 20:
resistancerangestring = '8'
elif resistancerange >= 20 and resistancerange < 63.2:
resistancerangestring = '9'
elif resistancerange >= 63.2 and resistancerange < 200:
resistancerangestring = '10'
elif resistancerange >= 200 and resistancerange < 632:
resistancerangestring = '11'
elif resistancerange >= 632 and resistancerange < 2e3:
resistancerangestring = '12'
elif resistancerange >= 2e3 and resistancerange < 6.32e3:
resistancerangestring = '13'
elif resistancerange >= 6.32e3 and resistancerange < 20e3:
resistancerangestring = '14'
elif resistancerange >= 20e3 and resistancerange < 63.2e3:
resistancerangestring = '15'
elif resistancerange >= 63.2e3 and resistancerange < 200e3:
resistancerangestring = '16'
elif resistancerange >= 200e3 and resistancerange < 632e3:
resistancerangestring = '17'
elif resistancerange >= 632e3 and resistancerange < 2e6:
resistancerangestring = '18'
elif resistancerange >= 2e6 and resistancerange < 6.32e6:
resistancerangestring = '19'
elif resistancerange >= 6.32e6 and resistancerange < 20e6:
resistancerangestring = '20'
elif resistancerange >= 20e6 and resistancerange < 63.2e6:
resistancerangestring = '21'
elif resistancerange >= 63.2e6 and resistancerange < 200e6:
resistancerangestring = '22'
else:
resistancerangestring = '1'
#Send Resistance Range Command String
commandstring = 'RDGRNG ' + str(channel) + ', ' + switchmode.get(mode,'1') + ', ' + exciterangestring + ',' + resistancerangestring + ',' + switchautorange.get(autorange,'0') + ', ' + switchexcitation.get(excitation,'0')
self.write(commandstring)
def GetHeaterStatus(self):
switch = {
'0' : 'no error',
'1' : 'heater open error'
}
commandstring = 'HTRST?'
result = self.ask(commandstring)
status = switch.get(result, 'com error')
return status
def MagUpSetup(self, heater_resistance=1):
''' Setup the lakeshore for magup '''
self.SetTemperatureControlSetup(channel=1, units='Kelvin', maxrange=10, delay=2, htrres=heater_resistance, output='current', filterread='unfiltered')
self.SetControlMode(controlmode = 'open')
self.SetControlPolarity(polarity = 'unipolar')
self.SetHeaterRange(range=10) # 1 Volt max input to Kepco for 100 Ohm shunt
self.SetReadChannelSetup(channel = 1, mode = 'current', exciterange = 10e-9, resistancerange = 2e3,autorange = 'on')
def DemagSetup(self, heater_resistance=1):
''' Setup the lakeshore for demag '''
self.SetTemperatureControlSetup(channel=1, units='Kelvin', maxrange=10, delay=2, htrres=heater_resistance, output='current', filterread='unfiltered')
self.SetControlMode(controlmode = 'open')
self.SetControlPolarity(polarity = 'bipolar') #Set to bipolar so that current can get to zero faster
self.SetHeaterRange(range=10) # 1 Volt max input to Kepco for 100 Ohm shunt
self.SetReadChannelSetup(channel = 1, mode = 'current', exciterange = 10e-9, resistancerange = 2e3,autorange = 'on')
def PIDSetup(self, heater_resistance=1):
'''Setup the lakeshore for temperature regulation '''
self.SetReadChannelSetup(channel=1, mode='current', exciterange=1e-8, resistancerange=63.2e3,autorange='on')
sleep(15) #Give time for range to settle, or servoing will fail
self.SetReadChannelSetup(channel=1, mode='current', exciterange=1e-8, resistancerange=63.2e3,autorange='off')
sleep(2)
self.SetTemperatureControlSetup(channel=1, units='Kelvin', maxrange=100, delay=2, htrres=heater_resistance, output='current', filterread='unfiltered')
self.SetControlMode(controlmode = 'closed')
self.SetControlPolarity(polarity = 'unipolar')
        self.SetRamp(rampmode = 'off')  # Turn off ramp mode so the setpoint is not ramped down to approx 0
sleep(.5) #Give time for Set Ramp to take effect
self.SetTemperatureSetPoint(setpoint=0.035)
sleep(.5) #Give time for Setpoint to take effect
self.SetRamp(rampmode = 'on' , ramprate = 0.2)
self.SetHeaterRange(range=100) #Set heater range to 100mA to get 10V output range
#self.SetReadChannelSetup(channel = 1, mode = 'current', exciterange = 1e-9, resistancerange = 2e3,autorange = 'on')
# Public Calibration Methods
def SendStandardRuOxCalibration(self):
pass
def SendCalibration(self, filename, datacol, tempcol, curveindex, thermname='Cernox 1030', serialnumber='x0000', interp=True,\
temp_lim=300, tempco = 1, units=4):
        ''' Send a calibration based on an input file
Input:
filename: location of calibration file
datacol: defines which column in filename will be used as data (zero indexed)
tempcol: defines which column in filename will be used as temperature (zero indexed)
curveindex: the curve index location in the lakeshore 370 (1 thru 20)
thermname: sensor type
serialnumber: thermometer serial number
interp: if True the data will be evenly spaced from the max to min with 200 pts
if False the raw data is used. User must ensure no doubles and < 200 pts
temp_lim: temperature limit (K)
tempco: 1 if dR/dT is negative, 2 if dR/dT is positive
units: 3 to use ohm/K, 4 to use log ohm/K
'''
if curveindex < 1 or curveindex > 20:
            print(' 1 <= curveindex <= 20 for lakeshore 370')
return 1
#rawdata = read_array(filename) #obsolete, replace with genfromtxt
rawdata = numpy.genfromtxt(filename)
rawdatat = rawdata.transpose()
datat = numpy.array(rawdatat[:,rawdatat[datacol,:].argsort()])
#now remove doubles
last = datat[datacol,-1]
for i in range(len(datat[datacol,:])-2,-1,-1):
if last == datat[1,i]:
datat = numpy.hstack((datat[:,: i+1],datat[:,i+2 :]))
else:
last = datat[datacol,i]
pylab.figure()
pylab.plot(datat[datacol],datat[tempcol],'o')
pylab.show()
f = interp1d(datat[datacol],datat[tempcol])
self.f = f
# interpolate from min to max with 200 evenly spaced points if interp True
if interp:
Rs = scipy.linspace(min(datat[datacol]),max(datat[datacol]), num = 200)
else:
Rs = datat[datacol]
Temps = f(Rs)
pylab.figure()
pylab.plot(datat[datacol],datat[tempcol],'o')
#pylab.holdon()
pylab.plot(Rs,Temps,'rx')
pylab.show()
# Send Header
# 4, 350 ,1 -- logOhm/K, temperature limit, temperature coefficient 1=negative
commandstring = 'CRVHDR ' + str(curveindex) + ', ' + thermname + ', ' + serialnumber + ', '+str(units)+', '+\
str(temp_lim)+', '+ str(tempco)
self.write(commandstring)
        print(commandstring)
# Send Data Points
for i in range(len(Rs)):
pntindex = i+1
if units == 4:
logrofpoint = math.log10(Rs[i])
else:
logrofpoint = Rs[i]
if Rs[i] < 10:
stringlogrofpoint = '%(logrofpoint)7.5f' % vars()
else:
stringlogrofpoint = '%(logrofpoint)8.5f' % vars()
tempofpoint = Temps[i]
stringtempofpoint = '%(tempofpoint)5.3f' % vars()
datapointstring = 'CRVPT ' + str(curveindex) + ', ' + str(pntindex) + ', ' + stringlogrofpoint + ', ' + stringtempofpoint
self.write(datapointstring)
            print(datapointstring)
pylab.figure()
pylab.plot(Rs,Temps,'o')
def SendMartinisRuOxCalibration(self, curveindex, thermname='RuOx Martinis', serialnumber='19740', interp=True,\
temp_lim=300, tempco = 1, units=4):
self.SendMartinisCalibration(curveindex, thermname, serialnumber, interp, temp_lim, tempco, units)
def SendMartinisCalibration(self, curveindex, thermname='RuOx Martinis', serialnumber='19740', interp=True,\
temp_lim=300, tempco = 1, units=4):
        ''' Send the built-in Martinis RuOx calibration curve (no input file is read)
        Input:
            curveindex: the curve index location in the lakeshore 370 (1 thru 20)
            thermname: sensor type
            serialnumber: thermometer serial number
            interp: accepted for interface compatibility; the curve is always
                built from 200 evenly spaced resistance points
temp_lim: temperature limit (K)
tempco: 1 if dR/dT is negative, 2 if dR/dT is positive
units: 3 to use ohm/K, 4 to use log ohm/K
'''
if curveindex < 1 or curveindex > 20:
            print(' 1 <= curveindex <= 20 for lakeshore 370')
return 1
#rawdata = read_array(filename)
#rawdatat = rawdata.transpose()
#datat = numpy.array(rawdatat[:,rawdatat[datacol,:].argsort()])
#now remove doubles
#last = datat[datacol,-1]
#for i in range(len(datat[datacol,:])-2,-1,-1):
# if last == datat[1,i]:
# datat = numpy.hstack((datat[:,: i+1],datat[:,i+2 :]))
# else:
# last = datat[datacol,i]
#pylab.figure()
#pylab.plot(datat[datacol],datat[tempcol],'o')
#pylab.show()
#f = interp1d(datat[datacol],datat[tempcol])
#self.f = f
# interpolate from min to max with 200 evenly spaced points if interp True
#if interp:
#Rs = scipy.linspace(min(datat[datacol]),max(datat[datacol]), num = 200)
Rs = scipy.linspace(1258.92541, 63095.734448, num=200)
#else:
# Rs = datat[datacol]
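# The linspace above spans 10**3.1 .. 10**4.8 ohms; each R is converted to
# temperature with the hard-coded Martinis RuOx fit
# T(R) = (2.85 / ln((R - 652) / 100))**4  [K]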
Temps = (2.85 / (numpy.log((Rs-652.)/100.)))**4
pylab.figure()
#pylab.plot(datat[datacol],datat[tempcol],'o')
#pylab.holdon()
pylab.plot(Rs,Temps,'rx')
pylab.show()
# Send Header
# 4, 350 ,1 -- logOhm/K, temperature limit, temperature coefficient 1=negative
commandstring = 'CRVHDR ' + str(curveindex) + ', ' + thermname + ', ' + serialnumber + ', '+str(units)+', '+\
str(temp_lim)+', '+ str(tempco)
self.write(commandstring)
print commandstring
# Send Data Points
for i in range(len(Rs)):
pntindex = i+1
logrofpoint = math.log10(Rs[i])
if Rs[i] < 10:
stringlogrofpoint = '%(logrofpoint)7.5f' % vars()
else:
stringlogrofpoint = '%(logrofpoint)8.5f' % vars()
tempofpoint = Temps[i]
stringtempofpoint = '%(tempofpoint)7.5f' % vars()
datapointstring = 'CRVPT ' + str(curveindex) + ', ' + str(pntindex) + ', ' + stringlogrofpoint + ', ' + stringtempofpoint
self.write(datapointstring)
print datapointstring
pylab.figure()
pylab.plot(Rs,Temps,'o')
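# Illustrative usage (assumed; `ls` stands for an instance of this driver
# connected to a Lakeshore 370):
#   ls.SendMartinisRuOxCalibration(curveindex=11)
# writes the header and 200 curve points over the instrument connection, e.g.
#   CRVHDR 11, RuOx Martinis, 19740, 4, 300, 1
#   CRVPT 11, 1, 3.10000, 6.23982   (log10(R), T in K; first point, approximate)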
| 38.905483 | 228 | 0.554773 | 5,607 | 53,923 | 5.322097 | 0.075798 | 0.028149 | 0.018431 | 0.019168 | 0.95409 | 0.951443 | 0.948427 | 0.930096 | 0.880601 | 0.862639 | 0 | 0.054867 | 0.334477 | 53,923 | 1,385 | 229 | 38.933574 | 0.77666 | 0.084491 | 0 | 0.783659 | 0 | 0 | 0.053858 | 0.000697 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.002302 | 0.009206 | null | null | 0.017261 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
0ad5e4941496232a2a3f534cff7d680c9e879e4d | 324 | py | Python | src/parser/operations/operations.py | pranavbaburaj/jodb | 1242a6235fd3e976b70e8fa4da83bf1108baed42 | [
"MIT"
] | 2 | 2021-03-29T18:30:48.000Z | 2021-04-10T17:44:34.000Z | src/parser/operations/operations.py | pranavbaburaj/json-based-database | 1242a6235fd3e976b70e8fa4da83bf1108baed42 | [
"MIT"
] | null | null | null | src/parser/operations/operations.py | pranavbaburaj/json-based-database | 1242a6235fd3e976b70e8fa4da83bf1108baed42 | [
"MIT"
] | null | null | null | class Operators():
@staticmethod
def add(n, x):
return float(n) + float(x)
@staticmethod
def mul(n, x):
return float(n) * float(x)
@staticmethod
def div(n, x):
return float(n) / float(x)
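# Note: div() raises ZeroDivisionError when x == 0, and every helper raises
# ValueError for non-numeric strings (e.g. Operators.add('2', 3) -> 5.0).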
@staticmethod
def sub(n, x):
return float(n) - float(x) | 20.25 | 35 | 0.512346 | 42 | 324 | 3.952381 | 0.285714 | 0.361446 | 0.192771 | 0.313253 | 0.753012 | 0.753012 | 0.753012 | 0.63253 | 0.63253 | 0 | 0 | 0 | 0.354938 | 324 | 16 | 36 | 20.25 | 0.794258 | 0 | 0 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0 | 0 | 0.307692 | 0.692308 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
e4079fd47f2b02f43a3ec579ee14e21414a90a6b | 15,496 | py | Python | fenlei_tf/script_Jan26/src/seg_net_arch.py | bbcdli/xuexi | f791d6bdc2fccc1bab322b474c9cfc7572690f44 | [
"Apache-2.0"
] | 1 | 2019-01-16T05:55:23.000Z | 2019-01-16T05:55:23.000Z | fenlei_tf/script_Jan26/src/seg_net_arch.py | bbcdli/xuexi | f791d6bdc2fccc1bab322b474c9cfc7572690f44 | [
"Apache-2.0"
] | null | null | null | fenlei_tf/script_Jan26/src/seg_net_arch.py | bbcdli/xuexi | f791d6bdc2fccc1bab322b474c9cfc7572690f44 | [
"Apache-2.0"
] | null | null | null | from keras.models import Model
from keras.layers import Input, merge, Convolution2D, MaxPooling2D, UpSampling2D, Dropout, Reshape, Permute, Activation, \
Cropping2D
from keras.optimizers import Adam, SGD
#from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras import backend as K
#from keras.utils.visualize_util import plot
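# NOTE: Convolution2D, merge(...) and the dim_ordering/init/border_mode
# keywords are Keras 1.x APIs; Keras 2+ renamed them to Conv2D,
# concatenate(...), data_format, kernel_initializer and padding.
# The smoothing constant below keeps the Dice ratio (and its gradient)
# finite when both y_true and y_pred are all-zero masks.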
smooth = 1.
def dice_coef(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def dice_coef_loss(y_true, y_pred):
return 1-dice_coef(y_true, y_pred)
def testbench_arch(h,w):
inputs = Input((1, h, w)) # 160 x 160
conv1 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(inputs)
conv1 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
pool1 = Dropout(0.15)(pool1)
conv2 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(pool1)
conv2 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
pool2 = Dropout(0.25)(pool2)
model = Model(input=inputs, output=conv2)
model.summary()
#plot(model, "model.png")
return model
def unet_archPaper(h, w):
print("Model of size: %d %d" % (h, w))
inputs = Input((1, h , w)) # 160 x 160
ordering = 'th' # 'th': (ch, h, w), 'tf': (h, w, ch)
conv_1 = Convolution2D(64, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(inputs)
conv_2 = Convolution2D(64, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv_1)
print 'view conv2', conv_2.get_shape()
pool1 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv_2)
pool1 = Dropout(0.15)(pool1)
print 'view pool1', pool1.get_shape()
conv_3 = Convolution2D(128, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(pool1)
conv_4 = Convolution2D(128, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv_3)
print '\nview conv4', conv_4.get_shape(), '< up-3'
pool2 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv_4)
pool2 = Dropout(0.25)(pool2)
print 'view pool2', pool2.get_shape()
conv_5 = Convolution2D(256, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(pool2)
conv_6 = Convolution2D(256, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv_5)
print '\nview conv6', conv_6.get_shape(), '< up-2'
pool3 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv_6)
pool3 = Dropout(0.4)(pool3)
print 'view pool3', pool3.get_shape()
conv_7 = Convolution2D(512, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(pool3)
conv_8 = Convolution2D(512, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv_7)
print '\nview conv8', conv_8.get_shape(), '< up-1'
pool4 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv_8)
pool4 = Dropout(0.5)(pool4)
print 'view pool4', pool4.get_shape()
conv_9 = Convolution2D(1024, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(pool4)
print '\nview conv9', conv_9.get_shape()
conv_10 = Convolution2D(1024, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv_9)
print 'view conv10', conv_10.get_shape()
pool5 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv_10) # 5x5
pool5 = Dropout(0.5)(pool5)
print 'view pool5', pool5.get_shape()
####################################################################################################
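# NOTE: this decoder merges (2, 1)-upsampled encoder features with the more
# deeply pooled tensor, so the spatial shapes generally do not line up; this
# variant appears experimental. The conventional (2, 2) skip connections are
# used in unet_arch_6c / unet_arch_2c below.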
up_1 = merge([UpSampling2D(size=(2, 1))(conv_8), pool5], mode='concat', concat_axis=1)
print '\nview up1', up_1.get_shape()
conv_12 = Convolution2D(512, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(up_1)
conv_13 = Convolution2D(512, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv_12)
pool6 = MaxPooling2D(pool_size=(2, 2), dim_ordering=ordering)(conv_13) # 5x5
pool6 = Dropout(0.5)(pool6)
print 'view pool6', pool6.get_shape()
##################
up_2 = merge([UpSampling2D(size=(2, 1))(conv_6), pool6], mode='concat', concat_axis=1)
print '\nview up2', up_2.get_shape()
conv_15 = Convolution2D(256, 3, 3, activation='relu', border_mode='same', init='he_normal')(up_2)
conv_16 = Convolution2D(256, 3, 3, activation='relu', border_mode='same', init='he_normal')(conv_15)
print 'view conv16', conv_16.get_shape()
pool7 = Dropout(0.15)(conv_16)
print 'view pool7', pool7.get_shape()
##################
up_3 = merge([UpSampling2D(size=(2, 1))(conv_4), pool7], mode='concat', concat_axis=1)
print '\nview up3', up_3.get_shape()
conv_18 = Convolution2D(128, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(up_3)
conv_19 = Convolution2D(128, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv_18)
print 'view conv18', conv_18.get_shape()
pool8 = Dropout(0.4)(conv_19)
print 'view pool8', pool8.get_shape()
##################
up_4 = merge([UpSampling2D(size=(2, 1))(conv_2), pool8], mode='concat', concat_axis=1)
print 'view up4', up_4.get_shape()
conv_21 = Convolution2D(64, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(up_4)
print 'view conv9-1', conv_21.get_shape()
conv_22 = Convolution2D(64, 3, 3, activation='relu', border_mode='same', init = 'he_normal')(conv_21)
print 'view conv9', conv_22.get_shape()
pool9 = Dropout(0.25)(conv_22)
##################################################################
conv_23 = Convolution2D(1, 1, 1, activation='sigmoid', init = 'he_normal')(pool9)
conv_24 = Convolution2D(1, 1, 1, activation='sigmoid', init = 'he_normal')(conv_23)
print 'view conv10', conv_24.get_shape()
model = Model(input=inputs, output=conv_24)
#model = Model(input=inputs, output=conv12)
model.summary()
#plot(model, "model.png")
return model
def unet_arch_6c(h, w):
print("Model of size: %d %d" % (h, w))
ch = 3
ordering = 'th' # 'th': (ch, h, w), 'tf': (h, w, ch)
inputs = Input(shape=(ch, h , w)) # 160 x 160
#inputs = Input(shape=(h , w,ch)) # 160 x 160
conv1 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(inputs)
conv1 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv1)
pool1 = Dropout(0.15)(pool1)
print 'pool1', pool1.get_shape()
conv2 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(pool1)
conv2 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv2)
pool2 = Dropout(0.25)(pool2)
print 'pool2', pool2.get_shape()
conv3 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(pool2)
conv3 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv3)
pool3 = Dropout(0.4)(pool3)
print 'pool3', pool3.get_shape()
conv4 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(pool3)
conv4 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv4)
print 'conv4', conv4.get_shape()
pool4 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv4)
pool4 = Dropout(0.5)(pool4)
print 'pool4', pool4.get_shape()
conv5 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(pool4)
conv5 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv5)
# pool5 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv5) # 5x5
# pool5 = Dropout(0.5)(pool5)
print 'conv5', conv5.get_shape()
up1 = UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv5)
#print 'up1', up1.get_shape()
up1 = merge([up1, conv4], mode='concat', concat_axis=1)
#up1 = merge([(UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv5)), pool4], mode='concat', concat_axis=1)
up1 = Dropout(0.4)(up1)
print 'up1', up1.get_shape()
conv8 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(up1)
conv8 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv8)
print 'conv8', conv8.get_shape()
up2 = UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv8)
up2 = merge([up2, conv3], mode='concat', concat_axis=1)
#up2 = merge([UpSampling2D(size=(2, 2))(conv8), conv3], mode='concat', concat_axis=1)
up2 = Dropout(0.25)(up2)
print 'up2',up2.get_shape()
conv9 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(up2)
conv9 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv9)
print 'conv9',conv9.get_shape() # 7,80,32
print 'conv2',conv2.get_shape() # 1,160,16
up3 = UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv9) # 14, 160, 32
up3 = merge([up3, conv2], mode='concat', concat_axis=1)
#up3 = merge([UpSampling2D(size=(2, 2))(conv9), conv2], mode='concat', concat_axis=1)
up3 = Dropout(0.15)(up3)
print 'up3',up3.get_shape()
conv10 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(up3)
conv10 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv10)
print 'conv10',conv10.get_shape()
up4 = UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv10)
up4 = merge([up4, conv1], mode='concat', concat_axis=1)
#up4 = merge([UpSampling2D(size=(2, 2))(conv10), conv1], mode='concat', concat_axis=1)
up4 = Dropout(0.15)(up4)
conv11 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(up4)
conv11 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv11)
predictions = Convolution2D(ch, 1, 1, activation='sigmoid', init='he_normal',dim_ordering=ordering)(conv11)
model = Model(input=inputs, output=predictions)
model.summary()
#plot(model, "model.png")
return model
def unet_arch_2c(h, w):
print("Model of size: %d %d" % (h, w))
ch = 1 # 1
inputs = Input(shape=(ch, h , w)) # 160 x 160
ordering = 'th' # 'th': (ch, h, w), 'tf': (h, w, ch)
conv1 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(inputs)
conv1 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv1)
pool1 = Dropout(0.15)(pool1)
print 'pool1', pool1.get_shape()
conv2 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(pool1)
conv2 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv2)
pool2 = Dropout(0.25)(pool2)
print 'pool2', pool2.get_shape()
conv3 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(pool2)
conv3 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv3)
pool3 = Dropout(0.4)(pool3)
print 'pool3', pool3.get_shape()
conv4 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(pool3)
conv4 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv4)
print 'conv4', conv4.get_shape()
pool4 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv4)
pool4 = Dropout(0.5)(pool4)
print 'pool4', pool4.get_shape()
conv5 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(pool4)
conv5 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv5)
# pool5 = MaxPooling2D(pool_size=(2, 2),dim_ordering=ordering)(conv5) # 5x5
# pool5 = Dropout(0.5)(pool5)
print 'conv5', conv5.get_shape()
up1 = UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv5)
#print 'up1', up1.get_shape()
up1 = merge([up1, conv4], mode='concat', concat_axis=1)
#up1 = merge([(UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv5)), pool4], mode='concat', concat_axis=1)
up1 = Dropout(0.4)(up1)
print 'up1', up1.get_shape()
conv8 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(up1)
conv8 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv8)
print 'conv8', conv8.get_shape()
up2 = UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv8)
up2 = merge([up2, conv3], mode='concat', concat_axis=1)
#up2 = merge([UpSampling2D(size=(2, 2))(conv8), conv3], mode='concat', concat_axis=1)
up2 = Dropout(0.25)(up2)
print 'up2',up2.get_shape()
conv9 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(up2)
conv9 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv9)
print 'conv9',conv9.get_shape() # 7,80,32
print 'conv2',conv2.get_shape() # 1,160,16
up3 = UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv9) # 14, 160, 32
up3 = merge([up3, conv2], mode='concat', concat_axis=1)
#up3 = merge([UpSampling2D(size=(2, 2))(conv9), conv2], mode='concat', concat_axis=1)
up3 = Dropout(0.15)(up3)
print 'up3',up3.get_shape()
conv10 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(up3)
conv10 = Convolution2D(16, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv10)
print 'conv10',conv10.get_shape()
up4 = UpSampling2D(size=(2, 2),dim_ordering=ordering)(conv10)
up4 = merge([up4, conv1], mode='concat', concat_axis=1)
#up4 = merge([UpSampling2D(size=(2, 2))(conv10), conv1], mode='concat', concat_axis=1)
up4 = Dropout(0.15)(up4)
conv11 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(up4)
conv11 = Convolution2D(8, 3, 3, activation='relu', border_mode='same', init='he_normal',dim_ordering=ordering)(conv11)
predictions = Convolution2D(ch, 1, 1, activation='sigmoid', init='he_normal',dim_ordering=ordering)(conv11)
model = Model(input=inputs, output=predictions)
model.summary()
#plot(model, "model.png")
return model
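# Illustrative usage sketch (assumed, not part of the original file):
#   model = unet_arch_2c(160, 160)
#   model.compile(optimizer=Adam(lr=1e-4), loss=dice_coef_loss,
#                 metrics=[dice_coef])
#   model.fit(X, y, batch_size=8, nb_epoch=20)  # X, y: (N, 1, 160, 160) for 'th'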
| 53.068493 | 123 | 0.675465 | 2,262 | 15,496 | 4.463307 | 0.061892 | 0.069731 | 0.120444 | 0.091918 | 0.865689 | 0.855685 | 0.824683 | 0.806458 | 0.806458 | 0.763867 | 0 | 0.079091 | 0.136745 | 15,496 | 291 | 124 | 53.250859 | 0.675637 | 0.095896 | 0 | 0.604651 | 0 | 0 | 0.116511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.018605 | null | null | 0.24186 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7c2c0e83234737db7ffff0ecbcb57329964e5149 | 3,584 | py | Python | democrasite/webiscite/tests/test_templates.py | mfosterw/cookiestocracy | 6912e9e7c3006024d0fbee61dce5c48e63e9e231 | [
"MIT"
] | null | null | null | democrasite/webiscite/tests/test_templates.py | mfosterw/cookiestocracy | 6912e9e7c3006024d0fbee61dce5c48e63e9e231 | [
"MIT"
] | 9 | 2021-07-18T17:16:42.000Z | 2022-03-31T00:19:14.000Z | democrasite/webiscite/tests/test_templates.py | mfosterw/cookiestocracy | 6912e9e7c3006024d0fbee61dce5c48e63e9e231 | [
"MIT"
] | null | null | null | """Render each branch on each template to ensure there are no rendering errors."""
# pylint: disable=too-few-public-methods,no-self-use
from django.test import Client
from django.urls import reverse
from ..models import Bill
from .factories import BillFactory
class TestWebisciteTemplates:
def test_bill_detail(self, client: Client, user):
bill = BillFactory(state=Bill.CLOSED, author=user, constitutional=True)
response = client.get(reverse("webiscite:bill-detail", args=(bill.id,)))
assert response.status_code == 200
assert response.templates[0].name == "webiscite/bill_detail.html"
assert b"Log in to vote" not in response.content
assert b"vote.js" not in response.content
assert b"svg" not in response.content
assert bill.get_state_display().encode() in response.content
bill.state = Bill.OPEN
bill.save()
response = client.get(reverse("webiscite:bill-detail", args=(bill.id,)))
assert b"Log in to vote" in response.content
client.force_login(user)
response = client.get(reverse("webiscite:bill-detail", args=(bill.id,)))
assert b"vote.js" in response.content
assert b"svg" in response.content
assert bill.get_state_display().encode() not in response.content
def test_bill_form(self, client: Client, user):
bill = BillFactory(author=user)
client.force_login(user)
response = client.get(reverse("webiscite:bill-update", args=(bill.id,)))
assert response.status_code == 200
assert response.templates[0].name == "webiscite/bill_form.html"
def test_bill_list_empty(self, client: Client, user):
response = client.get(reverse("webiscite:index"))
assert response.status_code == 200
assert response.templates[0].name == "webiscite/bill_list.html"
assert b"No bills" in response.content
assert b"you haven't proposed any bills" not in response.content
assert b"you haven't voted on any bills" not in response.content
client.force_login(user)
response = client.get(reverse("webiscite:my-bills"))
assert b"No bills" not in response.content
assert b"you haven't proposed any bills" in response.content
assert b"you haven't voted on any bills" not in response.content
response = client.get(reverse("webiscite:my-bill-votes"))
assert b"No bills" not in response.content
assert b"you haven't proposed any bills" not in response.content
assert b"you haven't voted on any bills" in response.content
def test_bill_list_populated(self, client: Client, user):
bill = BillFactory(state=Bill.OPEN, author=user, constitutional=True)
response = client.get(reverse("webiscite:index"))
assert response.status_code == 200
assert bill.name.encode() in response.content
assert bill.get_state_display().encode() not in response.content
assert b"Log in to vote" in response.content
assert b"vote.js" not in response.content
client.force_login(user)
response = client.get(reverse("webiscite:index"))
assert b"Log in to vote" not in response.content
bill.state = Bill.CLOSED
bill.save()
response = client.get(reverse("webiscite:my-bills"))
assert bill.name.encode() in response.content
assert bill.get_state_display().encode() in response.content
assert b"Log in to vote" not in response.content
assert b"vote.js" in response.content
| 37.333333 | 82 | 0.680246 | 492 | 3,584 | 4.896341 | 0.168699 | 0.107929 | 0.183479 | 0.162308 | 0.840598 | 0.839352 | 0.77501 | 0.751764 | 0.691988 | 0.654628 | 0 | 0.005353 | 0.218192 | 3,584 | 95 | 83 | 37.726316 | 0.85439 | 0.035714 | 0 | 0.619048 | 0 | 0 | 0.165217 | 0.052464 | 0 | 0 | 0 | 0 | 0.52381 | 1 | 0.063492 | false | 0 | 0.063492 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7cc92d0ba1661d17388db40f2b6dc2432b5a8bcc | 12,963 | py | Python | core/apps/razers2/tests/run_tests.py | h-2/seqan | 3916e41713ebb5899699d199a123f44dab9d8c99 | [
"BSD-3-Clause"
] | null | null | null | core/apps/razers2/tests/run_tests.py | h-2/seqan | 3916e41713ebb5899699d199a123f44dab9d8c99 | [
"BSD-3-Clause"
] | 10 | 2015-03-02T16:45:39.000Z | 2015-06-23T14:02:13.000Z | core/apps/razers2/tests/run_tests.py | h-2/seqan | 3916e41713ebb5899699d199a123f44dab9d8c99 | [
"BSD-3-Clause"
] | 2 | 2015-02-24T19:07:54.000Z | 2015-04-08T13:53:24.000Z | #!/usr/bin/env python
"""Execute the tests for the razers2 program.
The golden test outputs are generated by the script generate_outputs.sh.
You have to give the root paths to the source and the binaries as arguments to
the program. These are the paths to the directory that contains the 'projects'
directory.
Usage: run_tests.py SOURCE_ROOT_PATH BINARY_ROOT_PATH
"""
import logging
import os.path
import sys
# Automagically add util/py_lib to PYTHONPATH environment variable.
path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..',
'..', '..', 'util', 'py_lib'))
sys.path.insert(0, path)
import seqan.app_tests as app_tests
def main(source_base, binary_base):
"""Main entry point of the script."""
print 'Executing test for razers2'
print '==========================='
print
ph = app_tests.TestPathHelper(
source_base, binary_base,
'core/apps/razers2/tests') # tests dir
# ============================================================
# Auto-detect the binary path.
# ============================================================
path_to_program = app_tests.autolocateBinary(
binary_base, 'core/apps/razers2', 'razers2')
# ============================================================
# Built TestConf list.
# ============================================================
# Build list with TestConf objects, analoguely to how the output
# was generated in generate_outputs.sh.
conf_list = []
# ============================================================
# Run Adeno Single-End Tests
# ============================================================
# We run the following for all read lengths we have reads for.
for rl in [36, 100]:
# Run with default options.
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('se-adeno-reads%d_1.stdout' % rl),
args=['--low-memory',
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
'-o', ph.outFile('se-adeno-reads%d_1.razers' % rl)],
to_diff=[(ph.inFile('se-adeno-reads%d_1.razers' % rl),
ph.outFile('se-adeno-reads%d_1.razers' % rl)),
(ph.inFile('se-adeno-reads%d_1.stdout' % rl),
ph.outFile('se-adeno-reads%d_1.stdout' % rl))])
conf_list.append(conf)
# Allow indels.
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('se-adeno-reads%d_1-id.stdout' % rl),
args=['--low-memory', '-id',
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
'-o', ph.outFile('se-adeno-reads%d_1-id.razers' % rl)],
to_diff=[(ph.inFile('se-adeno-reads%d_1-id.razers' % rl),
ph.outFile('se-adeno-reads%d_1-id.razers' % rl)),
(ph.inFile('se-adeno-reads%d_1-id.stdout' % rl),
ph.outFile('se-adeno-reads%d_1-id.stdout' % rl))])
conf_list.append(conf)
# Compute forward/reverse matches only.
for o in ['-r', '-f']:
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('se-adeno-reads%d_1-id%s.stdout' % (rl, o)),
args=['--low-memory', '-id', o,
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
'-o', ph.outFile('se-adeno-reads%d_1-id%s.razers' % (rl, o))],
to_diff=[(ph.inFile('se-adeno-reads%d_1-id%s.razers' % (rl, o)),
ph.outFile('se-adeno-reads%d_1-id%s.razers' % (rl, o))),
(ph.inFile('se-adeno-reads%d_1-id%s.stdout' % (rl, o)),
ph.outFile('se-adeno-reads%d_1-id%s.stdout' % (rl, o)))])
conf_list.append(conf)
# Compute with different identity rates.
for i in range(90, 101):
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('se-adeno-reads%d_1-id-i%d.stdout' % (rl, i)),
args=['--low-memory', '-id', '-i', str(i),
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
'-o', ph.outFile('se-adeno-reads%d_1-id-i%d.razers' % (rl, i))],
to_diff=[(ph.inFile('se-adeno-reads%d_1-id-i%d.razers' % (rl, i)),
ph.outFile('se-adeno-reads%d_1-id-i%d.razers' % (rl, i))),
(ph.inFile('se-adeno-reads%d_1-id-i%d.stdout' % (rl, i)),
ph.outFile('se-adeno-reads%d_1-id-i%d.stdout' % (rl, i)))])
conf_list.append(conf)
# Compute with different output formats.
for suffix in ['razers', 'fa', 'eland', 'gff', 'sam', 'afg']:
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('se-adeno-reads%d_1-id.%s.stdout' % (rl, suffix)),
args=['--low-memory', '-id',
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
'-o', ph.outFile('se-adeno-reads%d_1-id.%s' % (rl, suffix))],
to_diff=[(ph.inFile('se-adeno-reads%d_1-id.%s' % (rl, suffix)),
ph.outFile('se-adeno-reads%d_1-id.%s' % (rl, suffix))),
(ph.inFile('se-adeno-reads%d_1-id.%s.stdout' % (rl, suffix)),
ph.outFile('se-adeno-reads%d_1-id.%s.stdout' % (rl, suffix)))])
conf_list.append(conf)
# Compute with different sort orders.
for so in [0, 1]:
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('se-adeno-reads%d_1-id-so%d.stdout' % (rl, so)),
args=['--low-memory', '-id', '-so', str(so),
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
'-o', ph.outFile('se-adeno-reads%d_1-id-so%d.razers' % (rl, so))],
to_diff=[(ph.inFile('se-adeno-reads%d_1-id-so%d.razers' % (rl, so)),
ph.outFile('se-adeno-reads%d_1-id-so%d.razers' % (rl, so))),
(ph.inFile('se-adeno-reads%d_1-id-so%d.stdout' % (rl, so)),
ph.outFile('se-adeno-reads%d_1-id-so%d.stdout' % (rl, so)))])
conf_list.append(conf)
# ============================================================
# Run Adeno Paired-End Tests
# ============================================================
# We run the following for all read lengths we have reads for.
for rl in [36, 100]:
# Run with default options.
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('pe-adeno-reads%d_2.stdout' % rl),
args=['--low-memory',
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
ph.inFile('adeno-reads%d_2.fa' % rl),
'-o', ph.outFile('pe-adeno-reads%d_2.razers' % rl)],
to_diff=[(ph.inFile('pe-adeno-reads%d_2.razers' % rl),
ph.outFile('pe-adeno-reads%d_2.razers' % rl)),
(ph.inFile('pe-adeno-reads%d_2.stdout' % rl),
ph.outFile('pe-adeno-reads%d_2.stdout' % rl))])
conf_list.append(conf)
# Allow indels.
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('pe-adeno-reads%d_2-id.stdout' % rl),
args=['--low-memory', '-id',
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
ph.inFile('adeno-reads%d_2.fa' % rl),
'-o', ph.outFile('pe-adeno-reads%d_2-id.razers' % rl)],
to_diff=[(ph.inFile('pe-adeno-reads%d_2-id.razers' % rl),
ph.outFile('pe-adeno-reads%d_2-id.razers' % rl)),
(ph.inFile('pe-adeno-reads%d_2-id.stdout' % rl),
ph.outFile('pe-adeno-reads%d_2-id.stdout' % rl))])
conf_list.append(conf)
# Compute forward/reverse matches only.
for o in ['-r', '-f']:
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('pe-adeno-reads%d_2-id%s.stdout' % (rl, o)),
args=['--low-memory', '-id', o,
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
ph.inFile('adeno-reads%d_2.fa' % rl),
'-o', ph.outFile('pe-adeno-reads%d_2-id%s.razers' % (rl, o))],
to_diff=[(ph.inFile('pe-adeno-reads%d_2-id%s.razers' % (rl, o)),
ph.outFile('pe-adeno-reads%d_2-id%s.razers' % (rl, o))),
(ph.inFile('pe-adeno-reads%d_2-id%s.stdout' % (rl, o)),
ph.outFile('pe-adeno-reads%d_2-id%s.stdout' % (rl, o)))])
conf_list.append(conf)
# Compute with different identity rates.
for i in range(90, 101):
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('pe-adeno-reads%d_2-id-i%d.stdout' % (rl, i)),
args=['--low-memory', '-id', '-i', str(i),
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
ph.inFile('adeno-reads%d_2.fa' % rl),
'-o', ph.outFile('pe-adeno-reads%d_2-id-i%d.razers' % (rl, i))],
to_diff=[(ph.inFile('pe-adeno-reads%d_2-id-i%d.razers' % (rl, i)),
ph.outFile('pe-adeno-reads%d_2-id-i%d.razers' % (rl, i))),
(ph.inFile('pe-adeno-reads%d_2-id-i%d.stdout' % (rl, i)),
ph.outFile('pe-adeno-reads%d_2-id-i%d.stdout' % (rl, i)))])
conf_list.append(conf)
# Compute with different output formats.
for suffix in ['razers', 'fa', 'eland', 'gff', 'sam', 'afg']:
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('pe-adeno-reads%d_2-id.%s.stdout' % (rl, suffix)),
args=['--low-memory', '-id',
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
ph.inFile('adeno-reads%d_2.fa' % rl),
'-o', ph.outFile('pe-adeno-reads%d_2-id.%s' % (rl, suffix))],
to_diff=[(ph.inFile('pe-adeno-reads%d_2-id.%s' % (rl, suffix)),
ph.outFile('pe-adeno-reads%d_2-id.%s' % (rl, suffix))),
(ph.inFile('pe-adeno-reads%d_2-id.%s.stdout' % (rl, suffix)),
ph.outFile('pe-adeno-reads%d_2-id.%s.stdout' % (rl, suffix)))])
conf_list.append(conf)
# Compute with different sort orders.
for so in [0, 1]:
conf = app_tests.TestConf(
program=path_to_program,
redir_stdout=ph.outFile('pe-adeno-reads%d_2-id-so%d.stdout' % (rl, so)),
args=['--low-memory', '-id', '-so', str(so),
ph.inFile('adeno-genome.fa'),
ph.inFile('adeno-reads%d_1.fa' % rl),
ph.inFile('adeno-reads%d_2.fa' % rl),
'-o', ph.outFile('pe-adeno-reads%d_2-id-so%d.razers' % (rl, so))],
to_diff=[(ph.inFile('pe-adeno-reads%d_2-id-so%d.razers' % (rl, so)),
ph.outFile('pe-adeno-reads%d_2-id-so%d.razers' % (rl, so))),
(ph.inFile('pe-adeno-reads%d_2-id-so%d.stdout' % (rl, so)),
ph.outFile('pe-adeno-reads%d_2-id-so%d.stdout' % (rl, so)))])
conf_list.append(conf)
# Execute the tests.
failures = 0
for conf in conf_list:
res = app_tests.runTest(conf)
# Output to the user.
print ' '.join(['razers2'] + conf.args),
if res:
print 'OK'
else:
failures += 1
print 'FAILED'
# Cleanup.
ph.deleteTempDir()
print '=============================='
print ' total tests: %d' % len(conf_list)
print ' failed tests: %d' % failures
print 'successful tests: %d' % (len(conf_list) - failures)
print '=============================='
# Compute and return return code.
return failures != 0
if __name__ == '__main__':
sys.exit(app_tests.main(main))
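# Example invocation (paths are placeholders for the checkout and build trees):
# python core/apps/razers2/tests/run_tests.py ~/seqan ~/seqan-build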
| 48.189591 | 90 | 0.484919 | 1,687 | 12,963 | 3.607587 | 0.097807 | 0.14788 | 0.162668 | 0.094643 | 0.815971 | 0.802169 | 0.802169 | 0.802169 | 0.801512 | 0.758791 | 0 | 0.013864 | 0.310036 | 12,963 | 268 | 91 | 48.369403 | 0.666592 | 0.106071 | 0 | 0.461538 | 1 | 0 | 0.280494 | 0.19846 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.020513 | null | null | 0.05641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6b1b2943a7fab8e940178aafd7242af6b4362916 | 2,344 | py | Python | src/autotests/test_request.py | NSKgooner/hack_moscow | 27ae092d0086c8dde2ceb9598e65c3b79a654f56 | [
"Apache-2.0"
] | 3 | 2019-10-27T07:40:26.000Z | 2020-04-18T20:44:08.000Z | src/autotests/test_request.py | NSKgooner/hack_moscow | 27ae092d0086c8dde2ceb9598e65c3b79a654f56 | [
"Apache-2.0"
] | 1 | 2019-11-04T04:16:29.000Z | 2019-11-04T04:16:29.000Z | src/autotests/test_request.py | NSKgooner/hack_moscow | 27ae092d0086c8dde2ceb9598e65c3b79a654f56 | [
"Apache-2.0"
] | null | null | null | import aiohttp
import pytest
@pytest.mark.asyncio
async def test_spec(url):
async with aiohttp.ClientSession() as session:
async with session.get(f'{url}/spec?name=p') as resp:
assert resp.status == 200
@pytest.mark.asyncio
async def test_vacancies(url):
async with aiohttp.ClientSession() as session:
async with session.get(f'{url}/vacancies?name=python developer&lvl=Middle') as resp:
assert resp.status == 200
@pytest.mark.asyncio
async def test_skills(url):
async with aiohttp.ClientSession() as session:
async with session.post(f'{url}/skills', json={
'email': 'test@test.test', 'skillsItemUiModel': [
{'layoutId': 2, 'name': 'android', 'selected': {'mValue': True}},
{'layoutId': 2, 'name': 'git', 'selected': {'mValue': False}}]
}) as resp:
assert resp.status == 200
@pytest.mark.asyncio
async def test_roadmaps(url):
async with aiohttp.ClientSession() as session:
async with session.get(f'{url}/roadmaps?email=test@test.test') as resp:
assert resp.status == 200
@pytest.mark.asyncio
async def test_profile_known(url):
async with aiohttp.ClientSession() as session:
async with session.get(f'{url}/profile/known?email=test@test.test') as resp:
assert resp.status == 200
@pytest.mark.asyncio
async def test_profile_unknown(url):
async with aiohttp.ClientSession() as session:
async with session.get(f'{url}/profile/unknown?email=test@test.test') as resp:
assert resp.status == 200
@pytest.mark.asyncio
async def test_profile_score(url):
async with aiohttp.ClientSession() as session:
async with session.get(f'{url}/profile/score?email=test@test.test') as resp:
assert resp.status == 200
@pytest.mark.asyncio
async def test_profile_complete(url):
async with aiohttp.ClientSession() as session:
async with session.post(f'{url}/profile/complete', json={
'email': 'test@test.test', 'skill': 'git'
}) as resp:
assert resp.status == 200
@pytest.mark.asyncio
async def test_profile_courses(url):
async with aiohttp.ClientSession() as session:
async with session.get(f'{url}/profile/courses?skill=python') as resp:
assert resp.status == 200
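# The tests above rely on a `url` fixture defined outside this module.
# A minimal sketch of such a fixture (the address is an assumption; the real
# project may supply it from configuration or a conftest.py):
@pytest.fixture
def url():
    return 'http://localhost:8080'  # base URL of the running service (assumed)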
| 32.555556 | 92 | 0.658276 | 309 | 2,344 | 4.94822 | 0.15534 | 0.105952 | 0.100065 | 0.129496 | 0.809026 | 0.781557 | 0.746239 | 0.746239 | 0.746239 | 0.746239 | 0 | 0.015718 | 0.212884 | 2,344 | 71 | 93 | 33.014085 | 0.813008 | 0 | 0 | 0.54717 | 0 | 0 | 0.177048 | 0.102389 | 0 | 0 | 0 | 0 | 0.169811 | 1 | 0 | true | 0 | 0.037736 | 0 | 0.037736 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8671e131b89318c1553f71c5887dc3666f2749ba | 10,049 | py | Python | gamingaccount/gamingaccount.py | failgod-marcus/failCogs | ed37c1c222e2f2ae8d793e8dba592abe13899e68 | [
"MIT"
] | null | null | null | gamingaccount/gamingaccount.py | failgod-marcus/failCogs | ed37c1c222e2f2ae8d793e8dba592abe13899e68 | [
"MIT"
] | null | null | null | gamingaccount/gamingaccount.py | failgod-marcus/failCogs | ed37c1c222e2f2ae8d793e8dba592abe13899e68 | [
"MIT"
] | 1 | 2019-05-24T07:46:16.000Z | 2019-05-24T07:46:16.000Z | import discord
from discord.ext import commands
from .utils.dataIO import dataIO
from .utils import checks
from __main__ import send_cmd_help
import os
from .utils.chat_formatting import *
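# NOTE: bot.say, pass_context and the dataIO/checks helpers are legacy
# discord.py 0.16 / Red-DiscordBot v2 APIs; discord.py 1.x+ would require
# porting to ctx.send and commands.Cog.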
class GamingAccount:
"""The GamingAccount Cog"""
def __init__(self, bot):
self.bot = bot
self.profile = "data/gamingaccount/accounts.json"
self.nerdie = dataIO.load_json(self.profile)
@commands.command(name="signup", pass_context=True, invoke_without_command=True, no_pm=True)
async def _reg(self, ctx):
"""Melde dich an um deinen Account einzustellen"""
server = ctx.message.server
user = ctx.message.author
if server.id not in self.nerdie:
self.nerdie[server.id] = {}
else:
pass
if user.id not in self.nerdie[server.id]:
self.nerdie[server.id][user.id] = {}
dataIO.save_json(self.profile, self.nerdie)
data = discord.Embed(colour=user.colour)
data.add_field(name="Glückwunsch!:sparkles:", value="Du hast einen Account angelegt für **{}**, {}.".format(server, user.mention))
await self.bot.say(embed=data)
else:
data = discord.Embed(colour=user.colour)
data.add_field(name="Hinweis:",value="Sieht so aus als hättest du schon einen Account, {}.".format(user.mention))
await self.bot.say(embed=data)
@commands.command(name="account", pass_context=True, invoke_without_command=True, no_pm=True)
async def _acc(self, ctx, user : discord.Member=None):
"""Dein/ein anderer Account"""
server = ctx.message.server
if server.id not in self.nerdie:
self.nerdie[server.id] = {}
else:
pass
if not user:
user = ctx.message.author
if user.id in self.nerdie[server.id]:
data = discord.Embed(description="{}".format(server), colour=user.colour)
if "PSN" in self.nerdie[server.id][user.id]:
psn = self.nerdie[server.id][user.id]["PSN"]
data.add_field(name="PSN:", value=psn)
else:
pass
if user.avatar_url:
name = str(user)
name = " ~ ".join((name, user.nick)) if user.nick else name
data.set_author(name=name, url=user.avatar_url)
data.set_thumbnail(url=user.avatar_url)
else:
data.set_author(name=user.name)
if "XBOX" in self.nerdie[server.id][user.id]:
xbox = self.nerdie[server.id][user.id]["XBOX"]
data.add_field(name="XBOX:", value=xbox)
else:
pass
if user.avatar_url:
name = str(user)
name = " ~ ".join((name, user.nick)) if user.nick else name
data.set_author(name=name, url=user.avatar_url)
data.set_thumbnail(url=user.avatar_url)
else:
data.set_author(name=user.name)
if "Wohnort" in self.nerdie[server.id][user.id]:
ort = self.nerdie[server.id][user.id]["Wohnort"]
data.add_field(name="Wohnort:", value=ort)
else:
pass
if user.avatar_url:
name = str(user)
name = " ~ ".join((name, user.nick)) if user.nick else name
data.set_author(name=name, url=user.avatar_url)
data.set_thumbnail(url=user.avatar_url)
else:
data.set_author(name=user.name)
await self.bot.say(embed=data)
else:
prefix = ctx.prefix
data = discord.Embed(colour=user.colour)
data.add_field(name="Hinweis:",value="Du brauchst einen Account um das nutzen zu können. \n\nUm einen anzulegen sage einfach `{}signup`.".format(prefix))
await self.bot.say(embed=data)
else:
server = ctx.message.server
if user.id in self.nerdie[server.id]:
data = discord.Embed(description="{}".format(server), colour=user.colour)
if "PSN" in self.nerdie[server.id][user.id]:
psn = self.nerdie[server.id][user.id]["PSN"]
data.add_field(name="PSN", value=psn)
else:
pass
if user.avatar_url:
name = str(user)
name = " ~ ".join((name, user.nick)) if user.nick else name
data.set_author(name=name, url=user.avatar_url)
data.set_thumbnail(url=user.avatar_url)
else:
data.set_author(name=user.name)
if "XBOX" in self.nerdie[server.id][user.id]:
xbox = self.nerdie[server.id][user.id]["XBOX"]
data.add_field(name="XBOX:", value=xbox)
else:
pass
if user.avatar_url:
name = str(user)
name = " ~ ".join((name, user.nick)) if user.nick else name
data.set_author(name=name, url=user.avatar_url)
data.set_thumbnail(url=user.avatar_url)
else:
data.set_author(name=user.name)
if "Wohnort" in self.nerdie[server.id][user.id]:
ort = self.nerdie[server.id][user.id]["Wohnort"]
data.add_field(name="Wohnort:", value=ort)
else:
pass
if user.avatar_url:
name = str(user)
name = " ~ ".join((name, user.nick)) if user.nick else name
data.set_author(name=name, url=user.avatar_url)
data.set_thumbnail(url=user.avatar_url)
else:
data.set_author(name=user.name)
await self.bot.say(embed=data)
else:
data = discord.Embed(colour=user.colour)
data.add_field(name="Fehler:",value="{} hat keinen Account.".format(user.mention))
await self.bot.say(embed=data)
@commands.group(name="update", pass_context=True, invoke_without_command=True, no_pm=True)
async def update(self, ctx):
"""Update deine Infos"""
await send_cmd_help(ctx)
@update.command(pass_context=True, no_pm=True)
async def psn(self, ctx, *, psn):
"""Wie lautet deine PSN?"""
server = ctx.message.server
user = ctx.message.author
prefix = ctx.prefix
if server.id not in self.nerdie:
self.nerdie[server.id] = {}
else:
pass
if user.id not in self.nerdie[server.id]:
data = discord.Embed(colour=user.colour)
data.add_field(name="Hinweis:",value="Du brauchst einen Account um das nutzen zu können. \n\nUm einen anzulegen sage einfach `{}signup`.".format(prefix))
await self.bot.say(embed=data)
else:
self.nerdie[server.id][user.id].update({"PSN" : psn})
dataIO.save_json(self.profile, self.nerdie)
data = discord.Embed(colour=user.colour)
data.add_field(name="Glückwunsch!:sparkles:",value="Deine PSN ist jetzt {}".format(psn))
await self.bot.say(embed=data)
@update.command(pass_context=True, no_pm=True)
async def xbox(self, ctx, *, xbox):
"""Wie lautet dein Xbox Name?"""
server = ctx.message.server
user = ctx.message.author
prefix = ctx.prefix
if server.id not in self.nerdie:
self.nerdie[server.id] = {}
else:
pass
if user.id not in self.nerdie[server.id]:
data = discord.Embed(colour=user.colour)
data.add_field(name="Hinweis:",value="Du brauchst einen Account um das nutzen zu können. \n\nUm einen anzulegen sage einfach `{}signup`.".format(prefix))
await self.bot.say(embed=data)
else:
self.nerdie[server.id][user.id].update({"XBOX" : xbox})
dataIO.save_json(self.profile, self.nerdie)
data = discord.Embed(colour=user.colour)
data.add_field(name="Glückwunsch!:sparkles:",value="Deine Xbox Name ist jetzt {}".format(xbox))
await self.bot.say(embed=data)
@update.command(pass_context=True, no_pm=True)
async def wohnort(self, ctx, *, ort):
"""Wo wohnst du?"""
server = ctx.message.server
user = ctx.message.author
prefix = ctx.prefix
if server.id not in self.nerdie:
self.nerdie[server.id] = {}
else:
pass
if user.id not in self.nerdie[server.id]:
data = discord.Embed(colour=user.colour)
data.add_field(name="Hinweis:",value="Du brauchst einen Account um das nutzen zu können. \n\nUm einen anzulegen sage einfach `{}signup`.".format(prefix))
await self.bot.say(embed=data)
else:
self.nerdie[server.id][user.id].update({"Wohnort" : ort})
dataIO.save_json(self.profile, self.nerdie)
data = discord.Embed(colour=user.colour)
data.add_field(name="Glückwunsch!:sparkles:",value="Dein Wohnort ist jetzt {}".format(ort))
await self.bot.say(embed=data)
def check_folder():
if not os.path.exists("data/gamingaccount"):
print("Creating data/account folder...")
os.makedirs("data/gamingaccount")
def check_file():
data = {}
f = "data/gamingaccount/accounts.json"
if not dataIO.is_valid_json(f):
print("I'm creating the file, so relax bruh.")
dataIO.save_json(f, data)
def setup(bot):
check_folder()
check_file()
bot.add_cog(GamingAccount(bot)) | 41.524793 | 169 | 0.549806 | 1,222 | 10,049 | 4.44108 | 0.114566 | 0.068178 | 0.079602 | 0.089552 | 0.800811 | 0.789018 | 0.780173 | 0.780173 | 0.770407 | 0.770407 | 0 | 0 | 0.329486 | 10,049 | 242 | 170 | 41.524793 | 0.805432 | 0.00209 | 0 | 0.757426 | 0 | 0.019802 | 0.105071 | 0.015446 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019802 | false | 0.084158 | 0.034653 | 0 | 0.059406 | 0.009901 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
868bcef4f84cc2eed40cffdf245801b53699a66f | 6,922 | py | Python | test/test_shield.py | ebb29c/asyncio-channel | c1c5d0db8b58e3ed9c8274b1604dbda382657fa4 | [
"MIT"
] | 10 | 2020-08-13T19:39:03.000Z | 2022-02-28T16:15:54.000Z | test/test_shield.py | ebb29c/asyncio-channel | c1c5d0db8b58e3ed9c8274b1604dbda382657fa4 | [
"MIT"
] | 3 | 2020-08-17T17:59:34.000Z | 2022-03-01T02:54:51.000Z | test/test_shield.py | ebb29c/asyncio-channel | c1c5d0db8b58e3ed9c8274b1604dbda382657fa4 | [
"MIT"
] | 1 | 2020-08-17T23:05:53.000Z | 2020-08-17T23:05:53.000Z | from asyncio_channel import ProhibitedOperationError, create_channel, \
shield_from_close, shield_from_read, \
shield_from_write
import asyncio
import pytest
@pytest.mark.asyncio
async def test_shield_from_close_forwarding():
"""
GIVEN
An open channel, shielded from close.
WHEN
Call forwarded methods, i.e. everything except 'close'.
EXPECT
Normal channel behavior.
"""
ch = create_channel()
dch = shield_from_close(ch)
assert dch.empty()
assert not dch.full()
a = 'a'
assert dch.offer(a)
assert not dch.empty()
assert dch.full()
assert dch.poll() == a
b = 'b'
assert await dch.put(b, timeout=0.05)
assert await dch.take(timeout=0.05) == b
asyncio.get_running_loop().call_later(0.05, dch.offer, a)
assert await dch.item(timeout=0.1)
asyncio.get_running_loop().call_later(0.05, dch.poll)
assert await dch.capacity(timeout=0.1)
assert not dch.is_closed()
with pytest.raises(asyncio.TimeoutError):
await asyncio.wait_for(dch.closed(), timeout=0.05)
asyncio.get_running_loop().call_later(0.05, ch.close)
await asyncio.wait_for(dch.closed(), timeout=0.1)
assert dch.is_closed()
@pytest.mark.asyncio
async def test_shield_from_read_forwarding():
"""
GIVEN
An open channel, shielded from read operations.
WHEN
Call forwarded methods, i.e. everything except 'item',
'poll', and 'take'.
EXPECT
Normal channel behavior.
"""
ch = create_channel()
dch = shield_from_read(ch)
assert dch.empty()
assert not dch.full()
a = 'a'
assert dch.offer(a)
assert not dch.empty()
assert dch.full()
assert ch.poll() == a
b = 'b'
assert await dch.put(b, timeout=0.05)
assert await ch.take(timeout=0.05) == b
assert ch
asyncio.get_running_loop().call_later(0.05, ch.poll)
assert await dch.capacity(timeout=0.1)
assert not dch.is_closed()
with pytest.raises(asyncio.TimeoutError):
await asyncio.wait_for(dch.closed(), timeout=0.05)
asyncio.get_running_loop().call_later(0.05, dch.close)
await asyncio.wait_for(dch.closed(), timeout=0.1)
assert dch.is_closed()
@pytest.mark.asyncio
async def test_shield_from_write_forwarding():
"""
GIVEN
An open channel, shielded from write operations.
WHEN
Call forwarded methods, i.e. everything except 'capacity',
'offer', and 'put'.
EXPECT
Normal channel behavior.
"""
ch = create_channel()
dch = shield_from_close(ch)
assert dch.empty()
assert not dch.full()
a = 'a'
assert ch.offer(a)
assert not dch.empty()
assert dch.full()
assert dch.poll() == a
b = 'b'
assert await ch.put(b, timeout=0.05)
assert await dch.take(timeout=0.05) == b
asyncio.get_running_loop().call_later(0.05, ch.offer, a)
assert await dch.item(timeout=0.1)
assert not dch.is_closed()
with pytest.raises(asyncio.TimeoutError):
await asyncio.wait_for(dch.closed(), timeout=0.05)
asyncio.get_running_loop().call_later(0.05, ch.close)
await asyncio.wait_for(dch.closed(), timeout=0.1)
assert dch.is_closed()
def test_shield_from_close():
"""
GIVEN
An open channel, shielded from close.
WHEN
.close() is called.
EXPECT
Channel is not closed.
"""
with pytest.raises(TypeError, match='must be a channel, not a int'):
shield_from_close(1)
ch = create_channel()
dch = shield_from_close(ch, silent=True)
dch.close()
assert not ch.is_closed()
dch = shield_from_close(ch)
with pytest.raises(ProhibitedOperationError, match='close'):
dch.close()
@pytest.mark.asyncio
async def test_shield_from_read():
"""
GIVEN
An open channel, shielded from read operations.
WHEN
Read operations ('item', 'poll', 'take') are called.
EXPECT
The operations return defaults or raise
ProhibitedOperationError.
"""
with pytest.raises(TypeError, match='must be a channel, not a int'):
shield_from_read(1)
ch = create_channel()
dch = shield_from_read(ch, silent=True)
assert not await dch.item()
assert dch.poll() is None
assert await dch.take() is None
dch = shield_from_read(ch)
with pytest.raises(ProhibitedOperationError, match='item'):
await dch.item()
with pytest.raises(ProhibitedOperationError, match='poll'):
dch.poll()
with pytest.raises(ProhibitedOperationError, match='take'):
await dch.take()
@pytest.mark.asyncio
async def test_shield_from_write():
"""
GIVEN
An open channel, shielded from write operations.
WHEN
Write operations ('capacity', 'offer', 'put') are called.
EXPECT
The operations return defaults or raise
ProhibitedOperationError.
"""
with pytest.raises(TypeError, match='must be a channel, not a int'):
shield_from_write(1)
ch = create_channel()
dch = shield_from_write(ch, silent=True)
assert not await dch.capacity()
assert not dch.offer('a')
assert not await dch.put('b')
dch = shield_from_write(ch)
with pytest.raises(ProhibitedOperationError, match='capacity'):
await dch.capacity()
with pytest.raises(ProhibitedOperationError, match='offer'):
dch.offer('a')
with pytest.raises(ProhibitedOperationError, match='put'):
await dch.put('b')
@pytest.mark.asyncio
async def test_shield_combined():
"""
GIVEN
An open channel, shielded from everything.
WHEN
Any operation is called.
EXPECT
The operation to return default or raise
ProhibitedOperationError.
"""
ch = create_channel()
dch = shield_from_close(ch, silent=True)
dch = shield_from_read(dch, silent=True)
dch = shield_from_write(dch, silent=True)
dch.close()
assert not ch.is_closed()
assert not await dch.item()
assert dch.poll() is None
assert await dch.take() is None
assert not await dch.capacity()
assert not dch.offer('a')
assert not await dch.put('b')
dch = shield_from_close(ch)
dch = shield_from_read(dch)
dch = shield_from_write(dch)
with pytest.raises(ProhibitedOperationError, match='close'):
dch.close()
with pytest.raises(ProhibitedOperationError, match='item'):
await dch.item()
with pytest.raises(ProhibitedOperationError, match='poll'):
dch.poll()
with pytest.raises(ProhibitedOperationError, match='take'):
await dch.take()
with pytest.raises(ProhibitedOperationError, match='capacity'):
await dch.capacity()
with pytest.raises(ProhibitedOperationError, match='offer'):
dch.offer('a')
with pytest.raises(ProhibitedOperationError, match='put'):
await dch.put('b')
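# Illustrative combination (assumed usage, mirroring test_shield_combined):
#   ch = create_channel()
#   consumer_view = shield_from_write(shield_from_close(ch))  # read-only endpoint
# Producers keep `ch`; holders of `consumer_view` can take/poll/item, while
# put/offer/capacity/close raise ProhibitedOperationError.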
| 31.752294 | 72 | 0.656169 | 916 | 6,922 | 4.83952 | 0.090611 | 0.060907 | 0.072186 | 0.126325 | 0.909993 | 0.868712 | 0.860591 | 0.833972 | 0.764268 | 0.637266 | 0 | 0.012207 | 0.230714 | 6,922 | 217 | 73 | 31.898618 | 0.820282 | 0.015891 | 0 | 0.782313 | 0 | 0 | 0.030086 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.006803 | false | 0 | 0.020408 | 0 | 0.027211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8695a478ba1d9912ae9ce6d42c15ecb4df6c7efe | 52,557 | py | Python | src/azure-cli/azure/cli/command_modules/appservice/tests/latest/test_webapp_up_commands.py | Mystic421980/azure-cli | 827f5fbc9fe8a673dd4abf7a8991dde7c726107e | [
"MIT"
] | null | null | null | src/azure-cli/azure/cli/command_modules/appservice/tests/latest/test_webapp_up_commands.py | Mystic421980/azure-cli | 827f5fbc9fe8a673dd4abf7a8991dde7c726107e | [
"MIT"
] | null | null | null | src/azure-cli/azure/cli/command_modules/appservice/tests/latest/test_webapp_up_commands.py | Mystic421980/azure-cli | 827f5fbc9fe8a673dd4abf7a8991dde7c726107e | [
"MIT"
] | null | null | null | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# pylint: disable=line-too-long
# pylint: disable=too-few-public-methods
import unittest
import os
from pytest import skip
import requests
from azure.cli.testsdk.scenario_tests import AllowLargeResponse
from azure.cli.testsdk import (
ScenarioTest, ResourceGroupPreparer, JMESPathCheck, live_only)
TEST_DIR = os.path.abspath(os.path.join(os.path.abspath(__file__), '..'))
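# Default regions for the resource groups / App Service plans these tests create.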
WINDOWS_ASP_LOCATION_WEBAPP = 'japanwest'
LINUX_ASP_LOCATION_WEBAPP = 'eastus2'
class WebAppUpE2ETests(ScenarioTest):
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_no_plan_e2e(self, resource_group):
webapp_name = self.create_random_name('up-nodeapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test the full E2E operation works
self.cmd('webapp up -n {} -g {}'.format(webapp_name, resource_group)).get_output_in_json()
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('linuxFxVersion', 'NODE|12-lts'),
JMESPathCheck('tags.cli', 'None'),
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@unittest.skip("Flaky test skipping")
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_no_plan_different_os_e2e(self, resource_group):
webapp_name = self.create_random_name('up-nodeapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test the full E2E operation works
self.cmd('webapp up -n {} -g {} --os-type windows'.format(webapp_name, resource_group))
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app'),
JMESPathCheck('resourceGroup', resource_group)
])
webapp_name = self.create_random_name('up-nodeapp', 24)
self.cmd('webapp up -n {} -g {} --os-type linux'.format(webapp_name, resource_group), expect_failure=True)
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_node_e2e(self, resource_group):
plan = self.create_random_name('up-nodeplan', 24)
webapp_name = self.create_random_name('up-nodeapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd(
'webapp up -n {} --dryrun'.format(webapp_name)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'premiumv2')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'node|12-lts')
self.assertEqual(result['os'].lower(), 'linux')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {}'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('linuxFxVersion', 'NODE|12-lts'),
JMESPathCheck('tags.cli', 'None'),
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
self.cmd('appservice plan show', checks=[
JMESPathCheck('properties.reserved', True),
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'PremiumV2'),
JMESPathCheck('sku.name', 'P1v2')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_python_e2e(self, resource_group):
plan = self.create_random_name('up-pythonplan', 24)
webapp_name = self.create_random_name('up-pythonapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'python-hello-world-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'python-docs-hello-world')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd('webapp up -n {} --sku S1 --dryrun'
.format(webapp_name)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'python|3.7')
self.assertEqual(result['os'].lower(), 'linux')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} --sku S1 -g {} --plan {}'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('linuxFxVersion', 'PYTHON|3.7'),
JMESPathCheck('tags.cli', 'None'),
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
# verify SKU and kind of ASP created
self.cmd('appservice plan show', checks=[
JMESPathCheck('properties.reserved', True),
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'Standard'),
JMESPathCheck('sku.name', 'S1')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=WINDOWS_ASP_LOCATION_WEBAPP)
def test_webapp_up_dotnetcore_e2e(self, resource_group):
plan = self.create_random_name('up-dotnetcoreplan', 24)
webapp_name = self.create_random_name('up-dotnetcoreapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'dotnetcore-hello-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'hellodotnetcore')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd('webapp up -n {} --dryrun'
.format(webapp_name)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'free')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'dotnetcore|3.1')
self.assertEqual(result['os'].lower(), 'windows')
self.assertNotEqual(result['location'], 'None')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {}'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
self.assertEqual(result['location'], full_result['location'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('tags.cli', 'None'),
JMESPathCheck('windowsFxVersion', None)
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
# verify SKU and kind of ASP created
self.cmd('appservice plan show', checks=[
JMESPathCheck('properties.reserved', False),
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'Free'),
JMESPathCheck('sku.name', 'F1')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=WINDOWS_ASP_LOCATION_WEBAPP)
def test_webapp_up_dotnet6_e2e(self, resource_group):
plan = self.create_random_name('up-dotnetplan', 24)
webapp_name = self.create_random_name('up-dotnetapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'dotnet6-hello-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = temp_dir
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd('webapp up -n {} --dryrun --os-type Linux'
.format(webapp_name)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'free')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'dotnetcore|6.0')
self.assertEqual(result['os'].lower(), 'linux')
self.assertNotEqual(result['location'], 'None')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {}'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
self.assertEqual(result['location'], full_result['location'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('tags.cli', 'None'),
JMESPathCheck('windowsFxVersion', None)
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
# verify SKU and kind of ASP created
self.cmd('appservice plan show', checks=[
JMESPathCheck('properties.reserved', False),
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'Free'),
JMESPathCheck('sku.name', 'F1')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=WINDOWS_ASP_LOCATION_WEBAPP)
def test_webapp_up_nested_dotnet_project_e2e(self, resource_group):
plan = self.create_random_name('up-dotnetplan', 24)
webapp_name = self.create_random_name('up-dotnetapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'dotnet6-nested-hello-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = temp_dir
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd('webapp up -n {} --dryrun --os-type Linux'
.format(webapp_name)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'free')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
        self.assertEqual(result['runtime_version'].lower(), 'dotnetcore|6.0')
self.assertEqual(result['os'].lower(), 'linux')
self.assertNotEqual(result['location'], 'None')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {}'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
self.assertEqual(result['location'], full_result['location'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('tags.cli', 'None'),
JMESPathCheck('windowsFxVersion', None)
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
# verify SKU and kind of ASP created
self.cmd('appservice plan show', checks=[
JMESPathCheck('properties.reserved', False),
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'Free'),
JMESPathCheck('sku.name', 'F1')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=WINDOWS_ASP_LOCATION_WEBAPP)
def test_webapp_up_statichtml_e2e(self, resource_group):
plan = self.create_random_name('up-statichtmlplan', 24)
webapp_name = self.create_random_name('up-statichtmlapp', 24)
zip_file_name = os.path.join(
TEST_DIR, 'html-static-hello-world-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'html-docs-hello-world-master')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd('webapp up -n {} --dryrun --html'
.format(webapp_name)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'free')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), '-')
self.assertEqual(result['os'].lower(), 'windows')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {} --html'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('tags.cli', 'None'),
JMESPathCheck('windowsFxVersion', None)
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[1].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[1].value', 'True')
])
# verify SKU and kind of ASP created
self.cmd('appservice plan show', checks=[
JMESPathCheck('properties.reserved', False),
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'Free'),
JMESPathCheck('sku.name', 'F1')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_invalid_name(self, resource_group):
webapp_name = self.create_random_name('invalid_name', 40)
zip_file_name = os.path.join(TEST_DIR, 'python-hello-world-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'python-docs-hello-world')
os.chdir(up_working_dir)
from azure.cli.core.util import CLIError
with self.assertRaises(CLIError):
self.cmd('webapp up -n {} --dryrun'.format(webapp_name))
with self.assertRaises(CLIError):
self.cmd('webapp up -n {}'.format(webapp_name))
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@AllowLargeResponse()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_name_exists_not_in_subscription(self, resource_group):
# Make sure webapp_name is the name of an existing web app and is not in your subscription
webapp_name = 'helloworld'
zip_file_name = os.path.join(TEST_DIR, 'python-hello-world-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'python-docs-hello-world')
os.chdir(up_working_dir)
from azure.cli.core.util import CLIError
with self.assertRaises(CLIError):
self.cmd('webapp up -n {} --dryrun'.format(webapp_name))
with self.assertRaises(CLIError):
self.cmd('webapp up -n {}'.format(webapp_name))
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@AllowLargeResponse()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_name_exists_in_subscription(self, resource_group):
plan = self.create_random_name('up-name-exists-plan', 40)
webapp_name = self.create_random_name('up-name-exists-app', 40)
zip_file_name = os.path.join(TEST_DIR, 'python-hello-world-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'python-docs-hello-world')
os.chdir(up_working_dir)
# create a webapp with the same name
self.cmd(
'appservice plan create -g {} -n {} --sku S1 --is-linux'.format(resource_group, plan))
self.cmd(
'webapp create -g {} -n {} --plan {} -r "python|3.7"'.format(resource_group, webapp_name, plan))
requests.get('http://{}.azurewebsites.net'.format(webapp_name), timeout=240)
self.cmd('webapp up -n {}'.format(webapp_name))
self.cmd('webapp list -g {}'.format(resource_group), checks=[
JMESPathCheck('length(@)', 1),
JMESPathCheck('[0].name', webapp_name),
JMESPathCheck('[0].hostNames[0]', webapp_name +
'.azurewebsites.net')
])
# test dryrun operation
result = self.cmd('webapp up -n {} --sku S1 --dryrun'
.format(webapp_name)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
        self.assertEqual(result['runtime_version'].lower(), 'python|3.7')
self.assertEqual(result['os'].lower(), 'linux')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} --sku S1 -g {} --plan {}'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_choose_os(self, resource_group):
plan = self.create_random_name('up-nodeplan', 24)
webapp_name = self.create_random_name('up-nodeapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd(
'webapp up -n {} -g {} --plan {} --os-type "linux" --dryrun'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'premiumv2')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'node|12-lts')
self.assertEqual(result['os'].lower(), 'linux')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {} --os-type "linux"'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('tags.cli', 'None')
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
self.cmd('appservice plan show', checks=[
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'PremiumV2'),
JMESPathCheck('sku.name', 'P1v2')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_choose_runtime(self, resource_group):
plan = self.create_random_name('up-pythonplan', 24)
webapp_name = self.create_random_name('up-pythonapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'python-hello-world-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'python-docs-hello-world')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd(
'webapp up -n {} -g {} --plan {} --runtime "PYTHON|3.7" --sku S1 --dryrun'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'python|3.7')
self.assertEqual(result['os'].lower(), 'linux')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {} --runtime "PYTHON|3.7" --sku "S1"'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('linuxFxVersion', 'PYTHON|3.7'),
JMESPathCheck('tags.cli', 'None')
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
self.cmd('appservice plan show', checks=[
JMESPathCheck('properties.reserved', True),
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'Standard'),
JMESPathCheck('sku.name', 'S1')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_choose_os_and_runtime(self, resource_group):
plan = self.create_random_name('up-nodeplan', 24)
webapp_name = self.create_random_name('up-nodeapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "linux" --runtime "node|12-lts" --sku "S1" --dryrun'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'node|12-lts')
self.assertEqual(result['os'].lower(), 'linux')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "linux" --runtime "node|12-lts" --sku "S1"'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('tags.cli', 'None')
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
self.cmd('appservice plan show', checks=[
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'Standard'),
JMESPathCheck('sku.name', 'S1')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_runtime_delimiters(self, resource_group):
plan = self.create_random_name('up-nodeplan', 24)
webapp_name = self.create_random_name('up-nodeapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "windows" --runtime "java|1.8|Java SE|8" --sku "S1" --dryrun'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'java|1.8|java se|8')
# test dryrun operation
result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "windows" --runtime "java:1.8:Java SE:8" --sku "S1" --dryrun'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'java|1.8|java se|8')
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@AllowLargeResponse()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_linux_to_windows_fail(self, resource_group):
plan = self.create_random_name('up-nodeplan', 24)
webapp_name = self.create_random_name('up-nodeapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "linux" --runtime "node|12-lts" --sku "S1" --dryrun'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'node|12-lts')
self.assertEqual(result['os'].lower(), 'linux')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "linux" --runtime "node|12-lts" --sku "S1"'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
from azure.cli.core.util import CLIError
# changing existing linux app to windows should fail gracefully
with self.assertRaises(CLIError):
self.cmd('webapp up -n {} -g {} --plan {} --os "windows" --runtime "node|12LTS" --sku "S1"'.format(webapp_name, resource_group, plan))
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@AllowLargeResponse()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=WINDOWS_ASP_LOCATION_WEBAPP)
@unittest.skip("Temp skip - flaky test")
def test_windows_to_linux_fail(self, resource_group):
plan = self.create_random_name('up-nodeplan', 24)
webapp_name = self.create_random_name('up-nodeapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up-windows.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
        os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "windows" --runtime "node|12LTS" --sku "S1" --dryrun'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'node|12lts')
self.assertEqual(result['os'].lower(), 'windows')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "windows" --runtime "node|12LTS" --sku "S1"'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app'),
JMESPathCheck('resourceGroup', resource_group)
])
from azure.cli.core.util import CLIError
        # changing existing windows app to linux should fail gracefully
with self.assertRaises(CLIError):
self.cmd('webapp up -n {} -g {} --plan {} --os "linux" --runtime "node|12-LTS" --sku "S1"'.format(webapp_name, resource_group, plan))
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@unittest.skip("Flaky test")
@live_only()
@AllowLargeResponse()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_change_runtime_version(self, resource_group):
plan = self.create_random_name('up-nodeplan', 24)
webapp_name = self.create_random_name('up-nodeapp', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
import time
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "linux" --runtime "node|12-LTS" --sku "S1" --dryrun'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'node|12-lts')
self.assertEqual(result['os'].lower(), 'linux')
# test the full E2E operation works
full_result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "linux" --runtime "node|12-LTS" --sku "S1"'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
# test changing runtime to newer version
time.sleep(30)
full_result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "linux" --runtime "node|16-lts" --sku "S1"'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# verify newer version
self.cmd('webapp config show', checks=[
JMESPathCheck('linuxFxVersion', "NODE|16-lts"),
JMESPathCheck('tags.cli', 'None')
])
# test changing runtime to older version
time.sleep(30)
full_result = self.cmd(
'webapp up -n {} -g {} --plan {} --os "linux" --runtime "node|12-lts" --sku "S1"'.format(webapp_name, resource_group, plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# verify older version
self.cmd('webapp config show', checks=[
JMESPathCheck('linuxFxVersion', "NODE|12-lts"),
JMESPathCheck('tags.cli', 'None')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_generate_default_name(self, resource_group):
plan = self.create_random_name('up-nodeplan', 24)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test dryrun operation
result = self.cmd(
'webapp up --dryrun').get_output_in_json()
self.assertEqual(result['sku'].lower(), 'premiumv2')
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'node|12-lts')
self.assertEqual(result['os'].lower(), 'linux')
# test the full E2E operation works
self.cmd(
'webapp up -g {} --plan {}'.format(resource_group, plan)).get_output_in_json()
# Verify app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('linuxFxVersion', 'NODE|12-lts'),
JMESPathCheck('tags.cli', 'None'),
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
self.cmd('appservice plan show', checks=[
JMESPathCheck('properties.reserved', True),
JMESPathCheck('name', plan),
JMESPathCheck('sku.tier', 'PremiumV2'),
JMESPathCheck('sku.name', 'P1v2')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
@live_only()
@ResourceGroupPreparer(random_name_length=24, name_prefix='clitest', location=LINUX_ASP_LOCATION_WEBAPP)
def test_webapp_up_linux_windows_sharing_resource_group(self, resource_group):
linux_plan = self.create_random_name('plan-linux', 24)
linux_webapp_name = self.create_random_name('app-linux', 24)
windows_plan = self.create_random_name('plan-windows', 26)
windows_webapp_name = self.create_random_name('app-windows', 26)
zip_file_name = os.path.join(TEST_DIR, 'node-Express-up.zip')
# create a temp directory and unzip the code to this folder
import zipfile
import tempfile
temp_dir = tempfile.mkdtemp()
zip_ref = zipfile.ZipFile(zip_file_name, 'r')
zip_ref.extractall(temp_dir)
current_working_dir = os.getcwd()
# change the working dir to the dir where the code has been extracted to
up_working_dir = os.path.join(temp_dir, 'myExpressApp')
os.chdir(up_working_dir)
# test linux dryrun operation
result = self.cmd('webapp up -n {} --sku S1 --dryrun'
.format(linux_webapp_name)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(linux_webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'node|12-lts')
self.assertEqual(result['os'].lower(), 'linux')
# test the full linux E2E operation works
full_result = self.cmd(
'webapp up -n {} --sku S1 -g {} --plan {} --os-type linux'.format(linux_webapp_name, resource_group, linux_plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify linux app is created
# since we set local context, -n and -g are no longer required
self.cmd('webapp show', checks=[
JMESPathCheck('name', linux_webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('kind', 'app,linux'),
JMESPathCheck('resourceGroup', resource_group)
])
self.cmd('webapp config show', checks=[
JMESPathCheck('linuxFxVersion', 'NODE|12-lts'),
JMESPathCheck('tags.cli', 'None'),
])
self.cmd('webapp config appsettings list', checks=[
JMESPathCheck('[0].name', 'SCM_DO_BUILD_DURING_DEPLOYMENT'),
JMESPathCheck('[0].value', 'True')
])
# test windows dryrun operation
result = self.cmd("webapp up -n {} --sku S1 --dryrun -r 'node|14lts' --os-type windows --plan {}"
.format(windows_webapp_name, windows_plan)).get_output_in_json()
self.assertEqual(result['sku'].lower(), 'standard')
self.assertTrue(result['name'].startswith(windows_webapp_name))
        self.assertEqual(result['src_path'].replace(
            os.sep + os.sep, os.sep), up_working_dir)
self.assertEqual(result['runtime_version'].lower(), 'node|14lts')
self.assertEqual(result['os'].lower(), 'windows')
# test the full windows E2E operation works
full_result = self.cmd(
'webapp up -n {} --sku S1 -g {} --plan {}'.format(windows_webapp_name, resource_group, windows_plan)).get_output_in_json()
self.assertEqual(result['name'], full_result['name'])
# Verify windows app is created
self.cmd('webapp show -g {} -n {}'.format(resource_group, windows_webapp_name), checks=[
JMESPathCheck('name', windows_webapp_name),
JMESPathCheck('httpsOnly', True),
JMESPathCheck('resourceGroup', resource_group)
])
# verify windows SKU and kind of ASP created
self.cmd('appservice plan show', checks=[
            JMESPathCheck('properties.reserved', False),
JMESPathCheck('name', windows_plan),
JMESPathCheck('sku.tier', 'Standard'),
JMESPathCheck('sku.name', 'S1')
])
# cleanup
# switch back the working dir
os.chdir(current_working_dir)
# delete temp_dir
import shutil
shutil.rmtree(temp_dir)
if __name__ == '__main__':
unittest.main()
#####
# utils/data_utils.py (ryanpram/indonlu-facqa, MIT license)
#####
import numpy as np
import pandas as pd
import string
import torch
import re
from torch.utils.data import Dataset, DataLoader
from transformers import AutoTokenizer
#####
# Term Extraction Airy
#####
class AspectExtractionDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'I-SENTIMENT': 0, 'O': 1, 'I-ASPECT': 2, 'B-SENTIMENT': 3, 'B-ASPECT': 4}
INDEX2LABEL = {0: 'I-SENTIMENT', 1: 'O', 2: 'I-ASPECT', 3: 'B-SENTIMENT', 4: 'B-ASPECT'}
NUM_LABELS = 5
def load_dataset(self, path):
# Read file
        with open(path, 'r') as f:
            data = f.readlines()
# Prepare buffer
dataset = []
sentence = []
seq_label = []
for line in data:
if '\t' in line:
token, label = line[:-1].split('\t')
sentence.append(token)
seq_label.append(self.LABEL2INDEX[label])
else:
dataset.append({
'sentence': sentence,
'seq_label': seq_label
})
sentence = []
seq_label = []
return dataset
def __init__(self, dataset_path, tokenizer, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
def __getitem__(self, index):
data = self.data[index]
sentence, seq_label = data['sentence'], data['seq_label']
# Add CLS token
subwords = [self.tokenizer.cls_token_id]
subword_to_word_indices = [-1] # For CLS
# Add subwords
for word_idx, word in enumerate(sentence):
subword_list = self.tokenizer.encode(word, add_special_tokens=False)
subword_to_word_indices += [word_idx for i in range(len(subword_list))]
subwords += subword_list
# Add last SEP token
subwords += [self.tokenizer.sep_token_id]
subword_to_word_indices += [-1]
return np.array(subwords), np.array(subword_to_word_indices), np.array(seq_label), data['sentence']
def __len__(self):
return len(self.data)
class AspectExtractionDataLoader(DataLoader):
def __init__(self, max_seq_len=512, *args, **kwargs):
super(AspectExtractionDataLoader, self).__init__(*args, **kwargs)
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
def _collate_fn(self, batch):
batch_size = len(batch)
max_seq_len = max(map(lambda x: len(x[0]), batch))
max_seq_len = min(self.max_seq_len, max_seq_len)
max_tgt_len = max(map(lambda x: len(x[2]), batch))
subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
subword_to_word_indices_batch = np.full((batch_size, max_seq_len), -1, dtype=np.int64)
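        # label positions left as -100 are ignored by torch.nn.CrossEntropyLoss,
        # whose default ignore_index is -100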
seq_label_batch = np.full((batch_size, max_tgt_len), -100, dtype=np.int64)
seq_list = []
for i, (subwords, subword_to_word_indices, seq_label, raw_seq) in enumerate(batch):
subwords = subwords[:max_seq_len]
subword_to_word_indices = subword_to_word_indices[:max_seq_len]
subword_batch[i,:len(subwords)] = subwords
mask_batch[i,:len(subwords)] = 1
subword_to_word_indices_batch[i,:len(subwords)] = subword_to_word_indices
seq_label_batch[i,:len(seq_label)] = seq_label
seq_list.append(raw_seq)
return subword_batch, mask_batch, subword_to_word_indices_batch, seq_label_batch, seq_list
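
# A minimal usage sketch for the dataset/loader pair above (the same pattern
# applies to the NER, POS-tagging, and keyword-extraction pairs below). The
# checkpoint name and file path here are illustrative assumptions, not part of
# this module:
#
#     tokenizer = AutoTokenizer.from_pretrained('indobenchmark/indobert-base-p1')
#     train_dataset = AspectExtractionDataset('train_preprocess.txt', tokenizer)
#     train_loader = AspectExtractionDataLoader(dataset=train_dataset, max_seq_len=512,
#                                               batch_size=32, shuffle=True)
#     # each batch: padded subword ids, attention mask, subword-to-word index map
#     # (-1 marks CLS/SEP/padding), word-level labels (padded with -100), raw sentences
#     subwords, mask, subword_to_word, seq_labels, raw_seqs = next(iter(train_loader))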
#####
# Ner Grit + Prosa
#####
class NerGritDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'I-PERSON': 0, 'B-ORGANISATION': 1, 'I-ORGANISATION': 2, 'B-PLACE': 3, 'I-PLACE': 4, 'O': 5, 'B-PERSON': 6}
INDEX2LABEL = {0: 'I-PERSON', 1: 'B-ORGANISATION', 2: 'I-ORGANISATION', 3: 'B-PLACE', 4: 'I-PLACE', 5: 'O', 6: 'B-PERSON'}
NUM_LABELS = 7
def load_dataset(self, path):
# Read file
        with open(path, 'r') as f:
            data = f.readlines()
# Prepare buffer
dataset = []
sentence = []
seq_label = []
for line in data:
if len(line.strip()) > 0:
token, label = line[:-1].split('\t')
sentence.append(token)
seq_label.append(self.LABEL2INDEX[label])
else:
dataset.append({
'sentence': sentence,
'seq_label': seq_label
})
sentence = []
seq_label = []
return dataset
def __init__(self, dataset_path, tokenizer, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
def __getitem__(self, index):
data = self.data[index]
sentence, seq_label = data['sentence'], data['seq_label']
# Add CLS token
subwords = [self.tokenizer.cls_token_id]
subword_to_word_indices = [-1] # For CLS
# Add subwords
for word_idx, word in enumerate(sentence):
subword_list = self.tokenizer.encode(word, add_special_tokens=False)
subword_to_word_indices += [word_idx for i in range(len(subword_list))]
subwords += subword_list
# Add last SEP token
subwords += [self.tokenizer.sep_token_id]
subword_to_word_indices += [-1]
return np.array(subwords), np.array(subword_to_word_indices), np.array(seq_label), data['sentence']
def __len__(self):
return len(self.data)
class NerProsaDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'I-PPL': 0, 'B-EVT': 1, 'B-PLC': 2, 'I-IND': 3, 'B-IND': 4, 'B-FNB': 5, 'I-EVT': 6, 'B-PPL': 7, 'I-PLC': 8, 'O': 9, 'I-FNB': 10}
INDEX2LABEL = {0: 'I-PPL', 1: 'B-EVT', 2: 'B-PLC', 3: 'I-IND', 4: 'B-IND', 5: 'B-FNB', 6: 'I-EVT', 7: 'B-PPL', 8: 'I-PLC', 9: 'O', 10: 'I-FNB'}
NUM_LABELS = 11
def load_dataset(self, path):
# Read file
        with open(path, 'r') as f:
            data = f.readlines()
# Prepare buffer
dataset = []
sentence = []
seq_label = []
for line in data:
if len(line.strip()) > 0:
token, label = line[:-1].split('\t')
sentence.append(token)
seq_label.append(self.LABEL2INDEX[label])
else:
dataset.append({
'sentence': sentence,
'seq_label': seq_label
})
sentence = []
seq_label = []
return dataset
def __init__(self, dataset_path, tokenizer, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
def __getitem__(self, index):
data = self.data[index]
sentence, seq_label = data['sentence'], data['seq_label']
# Add CLS token
subwords = [self.tokenizer.cls_token_id]
subword_to_word_indices = [-1] # For CLS
# Add subwords
for word_idx, word in enumerate(sentence):
subword_list = self.tokenizer.encode(word, add_special_tokens=False)
subword_to_word_indices += [word_idx for i in range(len(subword_list))]
subwords += subword_list
# Add last SEP token
subwords += [self.tokenizer.sep_token_id]
subword_to_word_indices += [-1]
return np.array(subwords), np.array(subword_to_word_indices), np.array(seq_label), data['sentence']
def __len__(self):
return len(self.data)
class NerDataLoader(DataLoader):
def __init__(self, max_seq_len=512, *args, **kwargs):
super(NerDataLoader, self).__init__(*args, **kwargs)
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
def _collate_fn(self, batch):
batch_size = len(batch)
max_seq_len = max(map(lambda x: len(x[0]), batch))
max_seq_len = min(self.max_seq_len, max_seq_len)
max_tgt_len = max(map(lambda x: len(x[2]), batch))
subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
subword_to_word_indices_batch = np.full((batch_size, max_seq_len), -1, dtype=np.int64)
seq_label_batch = np.full((batch_size, max_tgt_len), -100, dtype=np.int64)
seq_list = []
for i, (subwords, subword_to_word_indices, seq_label, raw_seq) in enumerate(batch):
subwords = subwords[:max_seq_len]
subword_to_word_indices = subword_to_word_indices[:max_seq_len]
subword_batch[i,:len(subwords)] = subwords
mask_batch[i,:len(subwords)] = 1
subword_to_word_indices_batch[i,:len(subwords)] = subword_to_word_indices
seq_label_batch[i,:len(seq_label)] = seq_label
seq_list.append(raw_seq)
return subword_batch, mask_batch, subword_to_word_indices_batch, seq_label_batch, seq_list
#####
# Pos Tag Idn + Prosa
#####
class PosTagIdnDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'B-PR': 0, 'B-CD': 1, 'I-PR': 2, 'B-SYM': 3, 'B-JJ': 4, 'B-DT': 5, 'I-UH': 6, 'I-NND': 7, 'B-SC': 8, 'I-WH': 9, 'I-IN': 10, 'I-NNP': 11, 'I-VB': 12, 'B-IN': 13, 'B-NND': 14, 'I-CD': 15, 'I-JJ': 16, 'I-X': 17, 'B-OD': 18, 'B-RP': 19, 'B-RB': 20, 'B-NNP': 21, 'I-RB': 22, 'I-Z': 23, 'B-CC': 24, 'B-NEG': 25, 'B-VB': 26, 'B-NN': 27, 'B-MD': 28, 'B-UH': 29, 'I-NN': 30, 'B-PRP': 31, 'I-SC': 32, 'B-Z': 33, 'I-PRP': 34, 'I-OD': 35, 'I-SYM': 36, 'B-WH': 37, 'B-FW': 38, 'I-CC': 39, 'B-X': 40}
INDEX2LABEL = {0: 'B-PR', 1: 'B-CD', 2: 'I-PR', 3: 'B-SYM', 4: 'B-JJ', 5: 'B-DT', 6: 'I-UH', 7: 'I-NND', 8: 'B-SC', 9: 'I-WH', 10: 'I-IN', 11: 'I-NNP', 12: 'I-VB', 13: 'B-IN', 14: 'B-NND', 15: 'I-CD', 16: 'I-JJ', 17: 'I-X', 18: 'B-OD', 19: 'B-RP', 20: 'B-RB', 21: 'B-NNP', 22: 'I-RB', 23: 'I-Z', 24: 'B-CC', 25: 'B-NEG', 26: 'B-VB', 27: 'B-NN', 28: 'B-MD', 29: 'B-UH', 30: 'I-NN', 31: 'B-PRP', 32: 'I-SC', 33: 'B-Z', 34: 'I-PRP', 35: 'I-OD', 36: 'I-SYM', 37: 'B-WH', 38: 'B-FW', 39: 'I-CC', 40: 'B-X'}
NUM_LABELS = 41
def load_dataset(self, path):
# Read file
        with open(path, 'r') as f:
            data = f.readlines()
# Prepare buffer
dataset = []
sentence = []
seq_label = []
for line in data:
if len(line.strip()) > 0:
token, label = line[:-1].split('\t')
sentence.append(token)
seq_label.append(self.LABEL2INDEX[label])
else:
dataset.append({
'sentence': sentence,
'seq_label': seq_label
})
sentence = []
seq_label = []
return dataset
def __init__(self, dataset_path, tokenizer, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
def __getitem__(self, index):
data = self.data[index]
sentence, seq_label = data['sentence'], data['seq_label']
# Add CLS token
subwords = [self.tokenizer.cls_token_id]
subword_to_word_indices = [-1] # For CLS
# Add subwords
for word_idx, word in enumerate(sentence):
subword_list = self.tokenizer.encode(word, add_special_tokens=False)
subword_to_word_indices += [word_idx for i in range(len(subword_list))]
subwords += subword_list
# Add last SEP token
subwords += [self.tokenizer.sep_token_id]
subword_to_word_indices += [-1]
return np.array(subwords), np.array(subword_to_word_indices), np.array(seq_label), data['sentence']
def __len__(self):
return len(self.data)
class PosTagProsaDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'B-PPO': 0, 'B-KUA': 1, 'B-ADV': 2, 'B-PRN': 3, 'B-VBI': 4, 'B-PAR': 5, 'B-VBP': 6, 'B-NNP': 7, 'B-UNS': 8, 'B-VBT': 9, 'B-VBL': 10, 'B-NNO': 11, 'B-ADJ': 12, 'B-PRR': 13, 'B-PRK': 14, 'B-CCN': 15, 'B-$$$': 16, 'B-ADK': 17, 'B-ART': 18, 'B-CSN': 19, 'B-NUM': 20, 'B-SYM': 21, 'B-INT': 22, 'B-NEG': 23, 'B-PRI': 24, 'B-VBE': 25}
INDEX2LABEL = {0: 'B-PPO', 1: 'B-KUA', 2: 'B-ADV', 3: 'B-PRN', 4: 'B-VBI', 5: 'B-PAR', 6: 'B-VBP', 7: 'B-NNP', 8: 'B-UNS', 9: 'B-VBT', 10: 'B-VBL', 11: 'B-NNO', 12: 'B-ADJ', 13: 'B-PRR', 14: 'B-PRK', 15: 'B-CCN', 16: 'B-$$$', 17: 'B-ADK', 18: 'B-ART', 19: 'B-CSN', 20: 'B-NUM', 21: 'B-SYM', 22: 'B-INT', 23: 'B-NEG', 24: 'B-PRI', 25: 'B-VBE'}
NUM_LABELS = 26
def load_dataset(self, path):
# Read file
        with open(path, 'r') as f:
            data = f.readlines()
# Prepare buffer
dataset = []
sentence = []
seq_label = []
for line in data:
if len(line.strip()) > 0:
token, label = line[:-1].split('\t')
sentence.append(token)
seq_label.append(self.LABEL2INDEX[label])
else:
dataset.append({
'sentence': sentence,
'seq_label': seq_label
})
sentence = []
seq_label = []
return dataset
def __init__(self, dataset_path, tokenizer, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
def __getitem__(self, index):
data = self.data[index]
sentence, seq_label = data['sentence'], data['seq_label']
# Add CLS token
subwords = [self.tokenizer.cls_token_id]
subword_to_word_indices = [-1] # For CLS
# Add subwords
for word_idx, word in enumerate(sentence):
subword_list = self.tokenizer.encode(word, add_special_tokens=False)
subword_to_word_indices += [word_idx for i in range(len(subword_list))]
subwords += subword_list
# Add last SEP token
subwords += [self.tokenizer.sep_token_id]
subword_to_word_indices += [-1]
return np.array(subwords), np.array(subword_to_word_indices), np.array(seq_label), data['sentence']
def __len__(self):
return len(self.data)
class PosTagDataLoader(DataLoader):
def __init__(self, max_seq_len=512, *args, **kwargs):
super(PosTagDataLoader, self).__init__(*args, **kwargs)
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
def _collate_fn(self, batch):
batch_size = len(batch)
max_seq_len = max(map(lambda x: len(x[0]), batch))
max_seq_len = min(self.max_seq_len, max_seq_len)
max_tgt_len = max(map(lambda x: len(x[2]), batch))
subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
subword_to_word_indices_batch = np.full((batch_size, max_seq_len), -1, dtype=np.int64)
seq_label_batch = np.full((batch_size, max_tgt_len), -100, dtype=np.int64)
seq_list = []
for i, (subwords, subword_to_word_indices, seq_label, raw_seq) in enumerate(batch):
subwords = subwords[:max_seq_len]
subword_to_word_indices = subword_to_word_indices[:max_seq_len]
subword_batch[i,:len(subwords)] = subwords
mask_batch[i,:len(subwords)] = 1
subword_to_word_indices_batch[i,:len(subwords)] = subword_to_word_indices
seq_label_batch[i,:len(seq_label)] = seq_label
seq_list.append(raw_seq)
return subword_batch, mask_batch, subword_to_word_indices_batch, seq_label_batch, seq_list
#####
# Emotion Twitter
#####
class EmotionDetectionDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'sadness': 0, 'anger': 1, 'love': 2, 'fear': 3, 'happy': 4}
INDEX2LABEL = {0: 'sadness', 1: 'anger', 2: 'love', 3: 'fear', 4: 'happy'}
NUM_LABELS = 5
def load_dataset(self, path):
# Load dataset
dataset = pd.read_csv(path)
dataset['label'] = dataset['label'].apply(lambda sen: self.LABEL2INDEX[sen])
return dataset
def __init__(self, dataset_path, tokenizer, no_special_token=False, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
self.no_special_token = no_special_token
def __getitem__(self, index):
tweet, label = self.data.loc[index,'tweet'], self.data.loc[index,'label']
subwords = self.tokenizer.encode(tweet, add_special_tokens=not self.no_special_token)
return np.array(subwords), np.array(label), tweet
def __len__(self):
return len(self.data)
class EmotionDetectionDataLoader(DataLoader):
def __init__(self, max_seq_len=512, *args, **kwargs):
super(EmotionDetectionDataLoader, self).__init__(*args, **kwargs)
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
def _collate_fn(self, batch):
batch_size = len(batch)
max_seq_len = max(map(lambda x: len(x[0]), batch))
max_seq_len = min(self.max_seq_len, max_seq_len)
subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
label_batch = np.full((batch_size, 1), -100, dtype=np.int64)
seq_list = []
for i, (subwords, label, raw_seq) in enumerate(batch):
subwords = subwords[:max_seq_len]
subword_batch[i,:len(subwords)] = subwords
mask_batch[i,:len(subwords)] = 1
label_batch[i] = label
seq_list.append(raw_seq)
return subword_batch, mask_batch, label_batch, seq_list
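
# A minimal sketch for single-sentence classification (the CSV path is an
# assumption; the file is expected to provide 'tweet' and 'label' columns):
#
#     emotion_dataset = EmotionDetectionDataset('train_preprocess.csv', tokenizer)
#     emotion_loader = EmotionDetectionDataLoader(dataset=emotion_dataset,
#                                                 max_seq_len=512, batch_size=32)
#     subwords, mask, labels, raw_seqs = next(iter(emotion_loader))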
#####
# Entailment UI
#####
class EntailmentDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'NotEntail': 0, 'Entail_or_Paraphrase': 1}
INDEX2LABEL = {0: 'NotEntail', 1: 'Entail_or_Paraphrase'}
NUM_LABELS = 2
def load_dataset(self, path):
df = pd.read_csv(path)
df['label'] = df['label'].apply(lambda label: self.LABEL2INDEX[label])
return df
def __init__(self, dataset_path, tokenizer, no_special_token=False, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
self.no_special_token = no_special_token
def __getitem__(self, index):
data = self.data.loc[index,:]
sent_A, sent_B, label = data['sent_A'], data['sent_B'], data['label']
encoded_inputs = self.tokenizer.encode_plus(sent_A, sent_B, add_special_tokens=not self.no_special_token, return_token_type_ids=True)
subwords, token_type_ids = encoded_inputs["input_ids"], encoded_inputs["token_type_ids"]
return np.array(subwords), np.array(token_type_ids), np.array(label), data['sent_A'] + "|" + data['sent_B']
def __len__(self):
return len(self.data)
class EntailmentDataLoader(DataLoader):
def __init__(self, max_seq_len=512, *args, **kwargs):
super(EntailmentDataLoader, self).__init__(*args, **kwargs)
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
def _collate_fn(self, batch):
batch_size = len(batch)
max_seq_len = max(map(lambda x: len(x[0]), batch))
max_seq_len = min(self.max_seq_len, max_seq_len)
subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
token_type_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
label_batch = np.zeros((batch_size, 1), dtype=np.int64)
seq_list = []
for i, (subwords, token_type_ids, label, raw_seq) in enumerate(batch):
subwords = subwords[:max_seq_len]
subword_batch[i,:len(subwords)] = subwords
mask_batch[i,:len(subwords)] = 1
token_type_batch[i,:len(subwords)] = token_type_ids
label_batch[i,0] = label
seq_list.append(raw_seq)
return subword_batch, mask_batch, token_type_batch, label_batch, seq_list
#####
# Document Sentiment Prosa
#####
class DocumentSentimentDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'positive': 0, 'neutral': 1, 'negative': 2}
INDEX2LABEL = {0: 'positive', 1: 'neutral', 2: 'negative'}
NUM_LABELS = 3
def load_dataset(self, path):
df = pd.read_csv(path, sep='\t', header=None)
df.columns = ['text','sentiment']
df['sentiment'] = df['sentiment'].apply(lambda lab: self.LABEL2INDEX[lab])
return df
def __init__(self, dataset_path, tokenizer, no_special_token=False, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
self.no_special_token = no_special_token
def __getitem__(self, index):
data = self.data.loc[index,:]
text, sentiment = data['text'], data['sentiment']
subwords = self.tokenizer.encode(text, add_special_tokens=not self.no_special_token)
return np.array(subwords), np.array(sentiment), data['text']
def __len__(self):
return len(self.data)
class DocumentSentimentDataLoader(DataLoader):
def __init__(self, max_seq_len=512, *args, **kwargs):
super(DocumentSentimentDataLoader, self).__init__(*args, **kwargs)
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
def _collate_fn(self, batch):
batch_size = len(batch)
max_seq_len = max(map(lambda x: len(x[0]), batch))
max_seq_len = min(self.max_seq_len, max_seq_len)
subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
sentiment_batch = np.zeros((batch_size, 1), dtype=np.int64)
seq_list = []
for i, (subwords, sentiment, raw_seq) in enumerate(batch):
subwords = subwords[:max_seq_len]
subword_batch[i,:len(subwords)] = subwords
mask_batch[i,:len(subwords)] = 1
sentiment_batch[i,0] = sentiment
seq_list.append(raw_seq)
return subword_batch, mask_batch, sentiment_batch, seq_list
#####
# Keyword Extraction Prosa
#####
class KeywordExtractionDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'O':0, 'B':1, 'I':2}
INDEX2LABEL = {0:'O', 1:'B', 2:'I'}
NUM_LABELS = 3
    def load_dataset(self, path):
        # Read file (context manager so the handle is closed)
        with open(path, 'r') as f:
            data = f.readlines()
        # Prepare buffer
        dataset = []
        sentence = []
        seq_label = []
        for line in data:
            if len(line.strip()) > 0:
                token, label = line.rstrip('\n').split('\t')
                sentence.append(token)
                seq_label.append(self.LABEL2INDEX[label])
            else:
                dataset.append({
                    'sentence': sentence,
                    'seq_label': seq_label
                })
                sentence = []
                seq_label = []
        return dataset
def __init__(self, dataset_path, tokenizer, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
def __getitem__(self, index):
data = self.data[index]
sentence, seq_label = data['sentence'], data['seq_label']
# Add CLS token
subwords = [self.tokenizer.cls_token_id]
subword_to_word_indices = [-1] # For CLS
# Add subwords
for word_idx, word in enumerate(sentence):
subword_list = self.tokenizer.encode(word, add_special_tokens=False)
subword_to_word_indices += [word_idx for i in range(len(subword_list))]
subwords += subword_list
# Add last SEP token
subwords += [self.tokenizer.sep_token_id]
subword_to_word_indices += [-1]
return np.array(subwords), np.array(subword_to_word_indices), np.array(seq_label), data['sentence']
def __len__(self):
return len(self.data)
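# Illustrative note (added, assumption-flagged): for a two-word sentence where
# the tokenizer splits the first word into three subwords and the second into
# one, __getitem__ above yields alignments like the following (with
# hypothetical ids cls_token_id=2, sep_token_id=3):
#   subwords                = [2, s1, s2, s3, s4, 3]
#   subword_to_word_indices = [-1, 0,  0,  0,  1, -1]
# Every subword points back to its source word, and special tokens map to -1
# so they can be excluded when pooling subword vectors back into word vectors.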
class KeywordExtractionDataLoader(DataLoader):
def __init__(self, max_seq_len=512, *args, **kwargs):
super(KeywordExtractionDataLoader, self).__init__(*args, **kwargs)
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
def _collate_fn(self, batch):
batch_size = len(batch)
max_seq_len = max(map(lambda x: len(x[0]), batch))
max_seq_len = min(self.max_seq_len, max_seq_len)
max_tgt_len = max(map(lambda x: len(x[2]), batch))
subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
subword_to_word_indices_batch = np.full((batch_size, max_seq_len), -1, dtype=np.int64)
seq_label_batch = np.full((batch_size, max_tgt_len), -100, dtype=np.int64)
seq_list = []
for i, (subwords, subword_to_word_indices, seq_label, raw_seq) in enumerate(batch):
subwords = subwords[:max_seq_len]
subword_to_word_indices = subword_to_word_indices[:max_seq_len]
subword_batch[i,:len(subwords)] = subwords
mask_batch[i,:len(subwords)] = 1
subword_to_word_indices_batch[i,:len(subwords)] = subword_to_word_indices
seq_label_batch[i,:len(seq_label)] = seq_label
seq_list.append(raw_seq)
return subword_batch, mask_batch, subword_to_word_indices_batch, seq_label_batch, seq_list
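# Hedged sketch (added for illustration): one common way a token-classification
# model consumes subword_to_word_indices_batch is to mean-pool subword
# embeddings back into per-word embeddings before the classifier head. The
# function name and shapes below are hypothetical, not part of this module.
def _example_pool_subwords_to_words(embeddings, subword_to_word_indices):
    import torch  # assumed dependency (this module already targets PyTorch)
    # embeddings: (num_subwords, hidden); subword_to_word_indices: (num_subwords,)
    num_words = int(subword_to_word_indices.max().item()) + 1
    return torch.stack([
        embeddings[subword_to_word_indices == w].mean(dim=0)  # average each word's subwords
        for w in range(num_words)
    ])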
#####
# QA Factoid ITB
#####
class QAFactoidDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'O':0, 'B':1, 'I':2}
INDEX2LABEL = {0:'O', 1:'B', 2:'I'}
NUM_LABELS = 3
    def load_dataset(self, path):
        # Read file
        import ast  # local import: parse the stored list literals safely instead of eval
        dataset = pd.read_csv(path)
        # Question and passage are lists of words; seq_label is a list of B/I/O tags
        dataset['question'] = dataset['question'].apply(ast.literal_eval)
        dataset['passage'] = dataset['passage'].apply(ast.literal_eval)
        dataset['seq_label'] = dataset['seq_label'].apply(
            lambda x: [self.LABEL2INDEX[l] for l in ast.literal_eval(x)])
        return dataset
def __init__(self, dataset_path, tokenizer, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
def __getitem__(self, index):
data = self.data.loc[index,:]
question, passage, seq_label = data['question'], data['passage'], data['seq_label']
# Add CLS token
subwords = [self.tokenizer.cls_token_id]
subword_to_word_indices = [-1] # For CLS
token_type_ids = [0]
# Add subwords for question
for word_idx, word in enumerate(question):
subword_list = self.tokenizer.encode(word, add_special_tokens=False)
subword_to_word_indices += [-1 for i in range(len(subword_list))]
token_type_ids += [0 for i in range(len(subword_list))]
subwords += subword_list
# Add intermediate SEP token
subwords += [self.tokenizer.sep_token_id]
subword_to_word_indices += [-1]
token_type_ids += [0]
# Add subwords
for word_idx, word in enumerate(passage):
subword_list = self.tokenizer.encode(word, add_special_tokens=False)
subword_to_word_indices += [word_idx for i in range(len(subword_list))]
token_type_ids += [1 for i in range(len(subword_list))]
subwords += subword_list
# Add last SEP token
subwords += [self.tokenizer.sep_token_id]
subword_to_word_indices += [-1]
token_type_ids += [1]
return np.array(subwords), np.array(token_type_ids), np.array(subword_to_word_indices), np.array(seq_label), ' '.join(question) + "|" + ' '.join(passage)
def __len__(self):
return len(self.data)
class QAFactoidDataLoader(DataLoader):
def __init__(self, max_seq_len=512, *args, **kwargs):
super(QAFactoidDataLoader, self).__init__(*args, **kwargs)
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
def _collate_fn(self, batch):
batch_size = len(batch)
max_seq_len = max(map(lambda x: len(x[0]), batch))
max_seq_len = min(self.max_seq_len, max_seq_len)
max_tgt_len = max(map(lambda x: len(x[3]), batch))
subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
token_type_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
subword_to_word_indices_batch = np.full((batch_size, max_seq_len), -1, dtype=np.int64)
seq_label_batch = np.full((batch_size, max_tgt_len), -100, dtype=np.int64)
seq_list = []
for i, (subwords, token_type_ids, subword_to_word_indices, seq_label, raw_seq) in enumerate(batch):
subwords = subwords[:max_seq_len]
subword_to_word_indices = subword_to_word_indices[:max_seq_len]
subword_batch[i,:len(subwords)] = subwords
mask_batch[i,:len(subwords)] = 1
token_type_batch[i,:len(subwords)] = token_type_ids
subword_to_word_indices_batch[i,:len(subwords)] = subword_to_word_indices
seq_label_batch[i,:len(seq_label)] = seq_label
seq_list.append(raw_seq)
return subword_batch, mask_batch, token_type_batch, subword_to_word_indices_batch, seq_label_batch, seq_list
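# Illustrative note (added): the encoding built in __getitem__ above lays the
# pair out as BERT-style segments, e.g. for a 2-word question and 2-word passage:
#   tokens         : [CLS] q1 q2 [SEP] p1 p2 [SEP]
#   token_type_ids :   0   0  0   0    1  1   1
#   word indices   :  -1  -1 -1  -1    0  1  -1
# Only passage words receive word indices (and hence B/I/O labels); question
# words and special tokens are masked out via -1 / -100 downstream.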
#####
# ABSA Airy + Prosa
#####
class AspectBasedSentimentAnalysisAiryDataset(Dataset):
# Static constant variable
ASPECT_DOMAIN = ['ac', 'air_panas', 'bau', 'general', 'kebersihan', 'linen', 'service', 'sunrise_meal', 'tv', 'wifi']
LABEL2INDEX = {'neg': 0, 'neut': 1, 'pos': 2, 'neg_pos': 3}
INDEX2LABEL = {0: 'neg', 1: 'neut', 2: 'pos', 3: 'neg_pos'}
NUM_LABELS = [4, 4, 4, 4, 4, 4, 4, 4, 4, 4]
NUM_ASPECTS = 10
def load_dataset(self, path):
df = pd.read_csv(path)
for aspect in self.ASPECT_DOMAIN:
df[aspect] = df[aspect].apply(lambda sen: self.LABEL2INDEX[sen])
return df
def __init__(self, dataset_path, tokenizer, no_special_token=False, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
self.no_special_token = no_special_token
def __getitem__(self, index):
data = self.data.loc[index,:]
sentence, labels = data['review'], [data[aspect] for aspect in self.ASPECT_DOMAIN]
subwords = self.tokenizer.encode(sentence, add_special_tokens=not self.no_special_token)
return np.array(subwords), np.array(labels), data['review']
def __len__(self):
return len(self.data)
class AspectBasedSentimentAnalysisProsaDataset(Dataset):
# Static constant variable
ASPECT_DOMAIN = ['fuel', 'machine', 'others', 'part', 'price', 'service']
LABEL2INDEX = {'negative': 0, 'neutral': 1, 'positive': 2}
INDEX2LABEL = {0: 'negative', 1: 'neutral', 2: 'positive'}
NUM_LABELS = [3, 3, 3, 3, 3, 3]
NUM_ASPECTS = 6
def load_dataset(self, path):
df = pd.read_csv(path)
for aspect in self.ASPECT_DOMAIN:
df[aspect] = df[aspect].apply(lambda sen: self.LABEL2INDEX[sen])
return df
def __init__(self, dataset_path, tokenizer, no_special_token=False, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
self.no_special_token = no_special_token
def __getitem__(self, index):
data = self.data.loc[index,:]
sentence, labels = data['sentence'], [data[aspect] for aspect in self.ASPECT_DOMAIN]
subwords = self.tokenizer.encode(sentence, add_special_tokens=not self.no_special_token)
return np.array(subwords), np.array(labels), data['sentence']
def __len__(self):
return len(self.data)
class AspectBasedSentimentAnalysisDataLoader(DataLoader):
def __init__(self, dataset, max_seq_len=512, *args, **kwargs):
super(AspectBasedSentimentAnalysisDataLoader, self).__init__(dataset=dataset, *args, **kwargs)
self.num_aspects = dataset.NUM_ASPECTS
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
def _collate_fn(self, batch):
batch_size = len(batch)
max_seq_len = max(map(lambda x: len(x[0]), batch))
max_seq_len = min(self.max_seq_len, max_seq_len)
subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
label_batch = np.zeros((batch_size, self.num_aspects), dtype=np.int64)
seq_list = []
for i, (subwords, label, raw_seq) in enumerate(batch):
subwords = subwords[:max_seq_len]
subword_batch[i,:len(subwords)] = subwords
mask_batch[i,:len(subwords)] = 1
label_batch[i,:] = label
seq_list.append(raw_seq)
return subword_batch, mask_batch, label_batch, seq_list
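# Illustrative note (added): label_batch here has shape
# (batch_size, num_aspects), one column per aspect in ASPECT_DOMAIN. A typical
# (hypothetical) downstream model pairs this with one classification head per
# aspect, where head i has NUM_LABELS[i] output classes.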
#####
# News Categorization Prosa
#####
class NewsCategorizationDataset(Dataset):
# Static constant variable
LABEL2INDEX = {'permasalahan pada bank besar domestik': 0, 'pertumbuhan ekonomi domestik yang terbatas': 1, 'volatilitas harga komoditas utama dunia': 2, 'frekuensi kenaikan fed fund rate (ffr) yang melebihi ekspektasi': 3, 'perubahan kebijakan dan/atau regulasi pada institusi keuangan': 4, 'isu politik domestik': 5, 'permasalahan pada bank besar international': 6, 'perubahan kebijakan pemerintah yang berkaitan dengan fiskal': 7, 'pertumbuhan ekonomi global yang terbatas': 8, 'kebijakan pemerintah yang bersifat sektoral': 9, 'isu politik dan ekonomi luar negeri': 10, 'kenaikan harga volatile food': 11, 'tidak berisiko': 12, 'pergerakan harga minyak mentah dunia': 13, 'force majeure yang memengaruhi operasional sistem keuangan': 14, 'kenaikan administered price': 15}
INDEX2LABEL = {0: 'permasalahan pada bank besar domestik', 1: 'pertumbuhan ekonomi domestik yang terbatas', 2: 'volatilitas harga komoditas utama dunia', 3: 'frekuensi kenaikan fed fund rate (ffr) yang melebihi ekspektasi', 4: 'perubahan kebijakan dan/atau regulasi pada institusi keuangan', 5: 'isu politik domestik', 6: 'permasalahan pada bank besar international', 7: 'perubahan kebijakan pemerintah yang berkaitan dengan fiskal', 8: 'pertumbuhan ekonomi global yang terbatas', 9: 'kebijakan pemerintah yang bersifat sektoral', 10: 'isu politik dan ekonomi luar negeri', 11: 'kenaikan harga volatile food', 12: 'tidak berisiko', 13: 'pergerakan harga minyak mentah dunia', 14: 'force majeure yang memengaruhi operasional sistem keuangan', 15: 'kenaikan administered price'}
NUM_LABELS = 16
def load_dataset(self, path):
dataset = pd.read_csv(path, sep='\t', header=None)
dataset.columns = ['text', 'label']
dataset['label'] = dataset['label'].apply(lambda labels: [self.LABEL2INDEX[label] for label in labels.split(',')])
return dataset
def __init__(self, dataset_path, tokenizer, no_special_token=False, *args, **kwargs):
self.data = self.load_dataset(dataset_path)
self.tokenizer = tokenizer
self.no_special_token = no_special_token
def __getitem__(self, index):
data = self.data.loc[index,:]
text, labels = data['text'], data['label']
subwords = self.tokenizer.encode(text, add_special_tokens=not self.no_special_token)
return np.array(subwords), np.array(labels), data['text']
def __len__(self):
return len(self.data)
class NewsCategorizationDataLoader(DataLoader):
def __init__(self, max_seq_len=512, *args, **kwargs):
super(NewsCategorizationDataLoader, self).__init__(*args, **kwargs)
self.collate_fn = self._collate_fn
self.max_seq_len = max_seq_len
    def _collate_fn(self, batch):
        batch_size = len(batch)
        max_seq_len = max(map(lambda x: len(x[0]), batch))
        # Trim input based on the specified max length
        max_seq_len = min(self.max_seq_len, max_seq_len)
        subword_batch = np.zeros((batch_size, max_seq_len), dtype=np.int64)
        mask_batch = np.zeros((batch_size, max_seq_len), dtype=np.float32)
        # Multi-label target: one binary indicator column per possible label
        labels_batch = np.zeros((batch_size, NewsCategorizationDataset.NUM_LABELS), dtype=np.int64)
        seq_list = []
        for i, (subwords, labels, raw_seq) in enumerate(batch):
            subwords = subwords[:max_seq_len]
            subword_batch[i,:len(subwords)] = subwords
            mask_batch[i,:len(subwords)] = 1
            for label in labels:
                labels_batch[i,label] = 1
            seq_list.append(raw_seq)
return subword_batch, mask_batch, labels_batch, seq_list | 42.449831 | 780 | 0.605662 | 4,949 | 37,653 | 4.342898 | 0.072742 | 0.032383 | 0.048574 | 0.060485 | 0.814684 | 0.776671 | 0.752617 | 0.736612 | 0.716931 | 0.709022 | 0 | 0.0227 | 0.264096 | 37,653 | 887 | 781 | 42.449831 | 0.752968 | 0.032879 | 0 | 0.785374 | 0 | 0 | 0.084616 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114467 | false | 0.006359 | 0.011129 | 0.020668 | 0.308426 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
810de3fce27b3d1749c6067fe99e4b1ba0685a50 | 22,337 | py | Python | tests/commands/test_tidy.py | joeweaver/tidysol | 37d1c078ff5c8a805b9e44f3ec75402d8968c68f | [
"MIT"
] | null | null | null | tests/commands/test_tidy.py | joeweaver/tidysol | 37d1c078ff5c8a805b9e44f3ec75402d8968c68f | [
"MIT"
] | null | null | null | tests/commands/test_tidy.py | joeweaver/tidysol | 37d1c078ff5c8a805b9e44f3ec75402d8968c68f | [
"MIT"
] | null | null | null | """Tests for our `tidysol tidy` subcommand."""
from subprocess import PIPE, Popen as popen
from unittest import TestCase
import shutil, tempfile
import os
import re
from tidysol.Exceptions import TidysolException
class TestTidy(TestCase):
def __init__(self, *args, **kwargs):
super(TestTidy,self).__init__(*args,**kwargs)
self.savewd=None
#temp directory method of testing files courtesy of https://gist.github.com/odyniec/d4ea0959d4e0ba17a980
def setUp(self):
if not self.savewd:
self.savewd = os.getcwd()
# Create a temporary directory
self.test_dir = tempfile.mkdtemp()
os.chdir(self.savewd)
def tearDown(self):
os.chdir(self.savewd)
# Remove the directory after the test
try:
shutil.rmtree(self.test_dir)
except Exception as e:
print(e)
#time not found
def test_single_timestep_time_not_found(self):
time=12.3
expected = "Could not find data for time {0}".format(time)
output = popen(['tidysol', 'tidy', self.savewd+'\\tests\\commands\\data\\good-single-timestep.txt','--times={0}'.format(time)], stdout=PIPE,bufsize=16384).communicate()[0].decode("utf-8")
assert(expected == output.rstrip())
    #time is not in a valid format (LAST, or numerals and decimals)
    #not explicitly testing the regex here, just that we get this message if it fails
def test_single_timestep_time_format_bad(self):
time="twelve"
expected = "{0} is not a valid timestep".format(time)
output = popen(['tidysol', 'tidy', 'tests\\commands\\data\\good-single-timestep.txt','--times={0}'.format(time)], stdout=PIPE,bufsize=16384).communicate()[0].decode("utf-8")
assert(expected == output.rstrip())
    #base case for single time step, default time, default vars
def test_single_timestep_all_default(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
#one option is to test the output string in ComsolExportFile to_csv() then either trust tidy to make the correct call to_csv and write file
#or do a little mocking
    #TODO should probably factor out all the common file handling in the next few tests; a hedged sketch of one option follows this test
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\single_timestep_all_default.csv")
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-single-timestep.txt',self.test_dir+"\\good-single-timestep.txt")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
os.chdir(self.test_dir)
fwritten=None
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-single-timestep.txt"], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-single-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
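    # Hedged sketch of the refactor the TODO above suggests: a hypothetical
    # helper owning the copy/run/compare gymnastics once. The name
    # _assert_matches_gold and the extra_args parameter are illustrative,
    # not part of tidysol.
    def _assert_matches_gold(self, goldfile, infile, outfile, extra_args=()):
        shutil.copyfile(self.savewd + '\\tests\\commands\\data\\' + infile,
                        self.test_dir + '\\' + infile)
        os.chdir(self.test_dir)
        try:
            popen(['tidysol', 'tidy', self.test_dir + '\\' + infile] +
                  list(extra_args), stdout=PIPE).communicate()
            with open(self.test_dir + '\\' + outfile) as fwritten:
                writtentext = fwritten.read()
        finally:
            os.chdir(self.savewd)
        with open(self.savewd + '\\tests\\commands\\data\\goldfiles\\' + goldfile) as fgold:
            self.maxDiff = None
            self.assertMultiLineEqual(fgold.read(), writtentext)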
    #two time steps, default time, default vars
def test_multiple_timestep_all_default(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_all_default.csv","r")
fwritten=None
try:
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt"], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv","r")
finally:
os.chdir(self.savewd)
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
if fwritten is not None:
fwritten.close()
print(goldtext)
print(writtentext)
#self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
    #two time steps, only first time, default vars
def test_multiple_timestep_first_timestep(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_first_only.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--times=0.1'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
    #two time steps, LAST keyword time, default vars
#TODO bother with making sure last is case insensitive?
def test_multiple_timestep_last_keyword(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_second_only.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--times=LAST'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
    #two time steps, both times explicit, incorrect var name
def test_multiple_timestep_bad_var_name(self):
badvar="spf.sr"
expected = "Could not find data for variable {0}".format(badvar)
output=popen(['tidysol', 'tidy', 'tests\\commands\\data\\good-single-timestep.txt','--cols={0}'.format(badvar)], stdout=PIPE).communicate()[0].decode("utf-8")
assert(expected == output.rstrip())
    #two time steps, default time, only include shear rate var
    #just the name 'spf2.sr'
def test_multiple_timestep_only_one_var(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_one_var.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--cols=spf2.sr'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
#name with desc spf2.sr [Shear rate]
def test_multiple_timestep_only_one_var_w_desc(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_one_var.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
output=popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--cols="spf2.sr [Shear rate]"'], stdout=PIPE).communicate()[0].decode("utf-8")
print(output)
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
#incorrect var desc
def test_multiple_timestep_bad_var_desc(self):
badvar='"spf.sr [Shearr rate]"'
expected = "Could not find data for variable {0}".format(re.sub('\"','',badvar))
output=popen(['tidysol', 'tidy', self.savewd+'\\tests\\commands\\data\\good-single-timestep.txt','--cols={0}'.format(badvar)], stdout=PIPE).communicate()[0].decode("utf-8")
assert(expected == output.rstrip())
#two time steps both times explicit, using LAST keyword
def test_multiple_timestep_both_times_last_keyword(self):
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_all_default.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--times=0.1,LAST'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
    #two time steps, default time, only include shear rate var and Reynolds number
def test_multiple_timestep_two_var(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_two_var.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--cols=spf2.sr,spf2.cellRe'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
def test_multiple_timestep_two_var_wdesc(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_two_var.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--cols="spf2.sr [Shear rate],spf2.cellRe [Cell Reynolds number]"'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
    #two time steps, only include sr, cellRe and the Date meta field
def test_multiple_timestep_two_var_one_meta(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_two_var_one_meta.csv")
#fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--cols=spf2.sr,spf2.cellRe,Date'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
#fdebug.write(writtentext)
#fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
    #two time steps, LAST keyword, default vars, output to non-default file & directory
def test_multiple_timestep_specify_filename(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_all_default.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--output=newname.csv'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\newname.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
#output dir does not exist
def test_multiple_timestep_specify_filename_w_newdir(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_all_default.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--output=newdir\\newname.csv'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\newdir\\newname.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
    #specify times, vars and output
def test_multiple_timestep_all_args(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_all_args.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--times=LAST','--cols=spf2.cellRe', '--output=newdir\\newname.csv'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\newdir\\newname.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
    #test for duplicate specified timesteps
def test_multiple_timestep_first_timestep_repeated(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_first_only.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--times=0.1,0.1'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
#test for duplicate specified vars
def test_multiple_timestep_two_var_repeat_one(self):
#TODO some odd gymnastics to handle test files and temp file writing. There's probably a better way
fgold = open(self.savewd+"\\tests\\commands\\data\\goldfiles\\two_timestep_two_var.csv")
fdebug= open(self.savewd+"\\tests\\debug-dump.txt","w")
fwritten=None
savewd = os.getcwd()
shutil.copyfile(self.savewd+'\\tests\\commands\\data\\good-two-timestep.txt',self.test_dir+"\\good-two-timestep.txt")
os.chdir(self.test_dir)
try:
popen(['tidysol', 'tidy', self.test_dir+"\\good-two-timestep.txt",'--cols=spf2.sr,spf2.cellRe,spf2.sr'], stdout=PIPE).communicate()[0].decode("utf-8")
fwritten = open(self.test_dir+"\\good-two-timestep.csv")
finally:
goldtext=fgold.read()
writtentext=fwritten.read()
fgold.close()
fwritten.close()
os.chdir(savewd)
fdebug.write(writtentext)
fdebug.close()
self.maxDiff=None
self.assertMultiLineEqual(goldtext,writtentext)
os.chdir(savewd)
if fwritten is not None:
fwritten.close()
#what about handling a large file? | 49.74833 | 199 | 0.597484 | 2,642 | 22,337 | 4.970855 | 0.084784 | 0.056118 | 0.05193 | 0.047971 | 0.868423 | 0.858524 | 0.827153 | 0.8053 | 0.795477 | 0.784893 | 0 | 0.005718 | 0.271836 | 22,337 | 449 | 200 | 49.74833 | 0.801721 | 0.136948 | 0 | 0.798942 | 0 | 0 | 0.210299 | 0.171183 | 0 | 0 | 0 | 0.002227 | 0.050265 | 1 | 0.058201 | false | 0 | 0.015873 | 0 | 0.07672 | 0.010582 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8129a59251b37d72a79cf526fec6f79bec83de11 | 133,295 | py | Python | tensorflow/python/kernel_tests/lookup_ops_test.py | ouakif/tensorflow | 63c45aacf30e819b00e74b85bd1c9f11b0760cd3 | [
"Apache-2.0"
] | 27 | 2020-02-29T04:13:22.000Z | 2022-02-07T21:54:50.000Z | tensorflow/python/kernel_tests/lookup_ops_test.py | top-on/tensorflow | 6efce9a74d4ba2ba2182d92ac1e4f144b5d755d2 | [
"Apache-2.0"
] | 5 | 2020-06-01T18:50:38.000Z | 2021-07-16T07:13:52.000Z | tensorflow/python/kernel_tests/lookup_ops_test.py | top-on/tensorflow | 6efce9a74d4ba2ba2182d92ac1e4f144b5d755d2 | [
"Apache-2.0"
] | 10 | 2020-12-15T03:55:24.000Z | 2021-12-17T23:14:11.000Z | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for lookup ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tempfile
import numpy as np
import six
from tensorflow.python import tf2
from tensorflow.python.client import session
from tensorflow.python.data.experimental.ops import counter
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.eager import backprop
from tensorflow.python.eager import context
from tensorflow.python.eager import def_function
from tensorflow.python.eager import function
from tensorflow.python.eager import wrap_function
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors_impl
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import lookup_ops
from tensorflow.python.ops import map_fn
from tensorflow.python.ops import variables
from tensorflow.python.platform import test
from tensorflow.python.training import saver
from tensorflow.python.training import server_lib
from tensorflow.python.training.tracking import graph_view
from tensorflow.python.training.tracking import tracking
from tensorflow.python.training.tracking import util as trackable
from tensorflow.python.util import compat
class BaseLookupTableTest(test.TestCase):
def getHashTable(self):
if tf2.enabled():
return lookup_ops.StaticHashTable
else:
return lookup_ops.StaticHashTableV1
def getVocabularyTable(self):
if tf2.enabled():
return lookup_ops.StaticVocabularyTable
else:
return lookup_ops.StaticVocabularyTableV1
def initialize_table(self, table):
if not tf2.enabled():
self.evaluate(table.initializer)
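# Note (added, hedged): the V1/V2 split above exists because the V1 tables
# expose a graph-mode initializer op that must be run explicitly, whereas in
# TF2 / eager execution StaticHashTable and StaticVocabularyTable initialize
# themselves at construction time, which is why initialize_table() does
# nothing when tf2.enabled() is true.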
class StaticHashTableTest(BaseLookupTableTest):
def testStaticHashTable(self):
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
self.initialize_table(table)
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
self.assertAllEqual([3], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
exported_keys_tensor, exported_values_tensor = table.export()
self.assertItemsEqual([b"brain", b"salad", b"surgery"],
self.evaluate(exported_keys_tensor))
self.assertItemsEqual([0, 1, 2], self.evaluate(exported_values_tensor))
def testStaticHashTableFindHighRank(self):
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
self.initialize_table(table)
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant([["brain", "salad"],
["tank", "tarkus"]])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([[0, 1], [-1, -1]], result)
def testStaticHashTableInitWithPythonArrays(self):
default_val = -1
keys = ["brain", "salad", "surgery"]
values = [0, 1, 2]
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(
keys, values, value_dtype=dtypes.int64), default_val)
self.initialize_table(table)
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
def testStaticHashTableInitWithNumPyArrays(self):
default_val = -1
keys = np.array(["brain", "salad", "surgery"], dtype=np.str)
values = np.array([0, 1, 2], dtype=np.int64)
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
self.initialize_table(table)
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
def testMultipleStaticHashTables(self):
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table1 = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
table2 = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
table3 = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
self.initialize_table(table1)
self.initialize_table(table2)
self.initialize_table(table3)
self.assertAllEqual(3, self.evaluate(table1.size()))
self.assertAllEqual(3, self.evaluate(table2.size()))
self.assertAllEqual(3, self.evaluate(table3.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output1 = table1.lookup(input_string)
output2 = table2.lookup(input_string)
output3 = table3.lookup(input_string)
out1, out2, out3 = self.evaluate([output1, output2, output3])
self.assertAllEqual([0, 1, -1], out1)
self.assertAllEqual([0, 1, -1], out2)
self.assertAllEqual([0, 1, -1], out3)
def testStaticHashTableWithTensorDefault(self):
default_val = constant_op.constant(-1, dtypes.int64)
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
self.initialize_table(table)
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
def testStaticHashTableWithSparseTensorInput(self):
default_val = constant_op.constant(-1, dtypes.int64)
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
self.initialize_table(table)
sp_indices = [[0, 0], [0, 1], [1, 0]]
sp_shape = [2, 2]
input_tensor = sparse_tensor.SparseTensor(
constant_op.constant(sp_indices, dtypes.int64),
constant_op.constant(["brain", "salad", "tank"]),
constant_op.constant(sp_shape, dtypes.int64))
output = table.lookup(input_tensor)
out_indices, out_values, out_shape = self.evaluate(output)
self.assertAllEqual([0, 1, -1], out_values)
self.assertAllEqual(sp_indices, out_indices)
self.assertAllEqual(sp_shape, out_shape)
def testSignatureMismatch(self):
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
self.initialize_table(table)
# Ref types do not produce a lookup signature mismatch.
input_string_ref = variables.Variable("brain")
self.evaluate(input_string_ref.initializer)
self.assertEqual(0, self.evaluate(table.lookup(input_string_ref)))
input_string = constant_op.constant([1, 2, 3], dtypes.int64)
with self.assertRaises(TypeError):
table.lookup(input_string)
with self.assertRaises(TypeError):
self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), "UNK")
def testDTypes(self):
default_val = -1
with self.assertRaises(TypeError):
self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(["a"], [1], [dtypes.string],
dtypes.int64), default_val)
@test_util.run_v1_only("(Cached) Sessions not available in TF2.0")
def testNotInitialized(self):
with self.cached_session():
default_val = -1
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(["a"], [1],
value_dtype=dtypes.int64),
default_val)
input_string = constant_op.constant(["brain", "salad", "surgery"])
output = table.lookup(input_string)
with self.assertRaisesOpError("Table not initialized"):
self.evaluate(output)
@test_util.run_v1_only("(Cached) Sessions not available in TF2.0")
def testInitializeTwice(self):
with self.cached_session():
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
self.initialize_table(table)
# Make sure that initializing twice doesn't throw any errors.
self.initialize_table(table)
def testInitializationWithInvalidDimensions(self):
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2, 3, 4], dtypes.int64)
raised_error = ValueError
if context.executing_eagerly():
raised_error = errors_impl.InvalidArgumentError
with self.assertRaises(raised_error):
self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
@test_util.run_v1_only("Sessions not available in TF2.0")
def testMultipleSessions(self):
# Start a server
server = server_lib.Server({"local0": ["localhost:0"]},
protocol="grpc",
start=True)
# Create two sessions sharing the same state
session1 = session.Session(server.target)
session2 = session.Session(server.target)
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values),
default_val,
name="t1")
# Init the table in the first session.
with session1:
self.initialize_table(table)
self.assertAllEqual(3, self.evaluate(table.size()))
# Init the table in the second session and verify that we do not get a
# "Table already initialized" error.
with session2:
table.initializer.run()
self.assertAllEqual(3, self.evaluate(table.size()))
@test_util.run_v2_only
def testImportedHashTable(self):
g = ops.Graph()
with g.as_default():
t = lookup_ops.StaticHashTable(
lookup_ops.KeyValueTensorInitializer(["a"], [1]),
2)
init_op = t._init_op
op = t.lookup(ops.convert_to_tensor(["a"]))
meta_graph = saver.export_meta_graph()
def f():
saver.import_meta_graph(meta_graph)
return ops.get_default_graph().get_tensor_by_name(op.name)
wrapped = wrap_function.wrap_function(f, [])
pruned_init_fn = wrapped.prune(
(), [wrapped.graph.get_operation_by_name(init_op.name)])
self.evaluate(pruned_init_fn())
self.assertAllEqual([1], wrapped())
def testStaticHashTableInt32String(self):
default_val = "n/a"
keys = constant_op.constant([0, 1, 2], dtypes.int32)
values = constant_op.constant(["brain", "salad", "surgery"])
table = self.getHashTable()(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
self.initialize_table(table)
input_tensor = constant_op.constant([0, 1, -1])
output = table.lookup(input_tensor)
result = self.evaluate(output)
self.assertAllEqual([b"brain", b"salad", b"n/a"], result)
def testTableUseInFunction(self):
if not context.executing_eagerly():
self.skipTest("Only Eager mode test.")
keys = constant_op.constant([0, 1, 2], dtypes.int32)
values = constant_op.constant(["brain", "salad", "surgery"])
table = self.getHashTable()(lookup_ops.KeyValueTensorInitializer(
keys, values), "n/a")
@function.defun()
def lookup_table_func(k):
return table.lookup(k)
result = lookup_table_func(constant_op.constant([0, 1, -1]))
self.assertAllEqual([b"brain", b"salad", b"n/a"], result)
result = lookup_table_func(constant_op.constant([2, -1, 1]))
self.assertAllEqual([b"surgery", b"n/a", b"salad"], result)
def testTableCreatedInFunction(self):
if not context.executing_eagerly():
self.skipTest("Only Eager mode test.")
keys = constant_op.constant([0, 1, 2], dtypes.int32)
values = constant_op.constant(["brain", "salad", "surgery"])
@function.defun()
def lookup_table_func(k):
table = self.getHashTable()(lookup_ops.KeyValueTensorInitializer(
keys, values), "n/a")
return table.lookup(k)
result = lookup_table_func(constant_op.constant([0, 1, -1]))
self.assertAllEqual([b"brain", b"salad", b"n/a"], result)
result = lookup_table_func(constant_op.constant([2, -1, 1]))
self.assertAllEqual([b"surgery", b"n/a", b"salad"], result)
def testTwoTablesInControlFlow(self):
keys = constant_op.constant([1, 2, 3], dtypes.int32)
values = constant_op.constant([5, 10, 15], dtypes.int32)
def table_func1(x):
table = self.getHashTable()(lookup_ops.KeyValueTensorInitializer(
keys, values), -1)
return table.lookup(x)
elems = np.array([2, 4, 1], dtype=np.int32)
result1 = map_fn.map_fn(table_func1, elems, dtype=dtypes.int32)
def table_func2(x):
table = self.getHashTable()(lookup_ops.KeyValueTensorInitializer(
keys, values), -1)
return table.lookup(x)
elems = np.array([2, 4, 1], dtype=np.int32)
result2 = map_fn.map_fn(table_func2, elems, dtype=dtypes.int32)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual([10, -1, 5], self.evaluate(result1))
self.assertAllEqual([10, -1, 5], self.evaluate(result2))
@test_util.enable_control_flow_v2
def testLookupTableInWhileV2(self):
lookup = self.getHashTable()(lookup_ops.KeyValueTensorInitializer(
constant_op.constant([2, 5], dtype=dtypes.int64),
constant_op.constant([-10.0, 1], dtype=dtypes.float32)), -1)
beta = variables.Variable(1.0, trainable=True)
@def_function.function
def get_loss(unused_beta):
return map_fn.map_fn(
lookup.lookup,
constant_op.constant([2, 3], dtype=dtypes.int64),
dtype=dtypes.float32)
with backprop.GradientTape() as tape:
loss = get_loss(beta)
self.assertIsNone(tape.gradient(loss, beta))
@test_util.enable_control_flow_v2
def testLookupTableInCondV2(self):
lookup = self.getHashTable()(lookup_ops.KeyValueTensorInitializer(
constant_op.constant([2, 5], dtype=dtypes.int64),
constant_op.constant([-10.0, 1], dtype=dtypes.float32)), -1)
beta = variables.Variable(1.0, trainable=True)
@def_function.function
def get_loss(beta):
def true_fn():
return lookup.lookup(constant_op.constant(2, dtype=dtypes.int64))
def false_fn():
return constant_op.constant(0, dtype=dtypes.float32)
return beta * control_flow_ops.cond(
constant_op.constant(True), true_fn=true_fn, false_fn=false_fn)
with backprop.GradientTape() as tape:
loss = get_loss(beta)
grad = tape.gradient(loss, beta)
self.evaluate(variables.global_variables_initializer())
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual(grad, -10.)
class KeyValueTensorInitializerTest(BaseLookupTableTest):
def test_string(self):
init = lookup_ops.KeyValueTensorInitializer(
("brain", "salad", "surgery"), (0, 1, 2), dtypes.string, dtypes.int64)
table = self.getHashTable()(init, default_value=-1)
self.initialize_table(table)
def test_multiple_tables(self):
with ops.name_scope("table_scope"):
init1 = lookup_ops.KeyValueTensorInitializer(
("brain", "salad", "surgery"), (0, 1, 2), dtypes.string, dtypes.int64)
table1 = self.getHashTable()(init1, default_value=-1)
if not context.executing_eagerly():
self.assertEqual("hash_table", table1.name)
self.assertEqual("table_scope/hash_table",
table1.resource_handle.op.name)
init2 = lookup_ops.KeyValueTensorInitializer(
("brain", "salad", "surgery"), (0, 1, 2), dtypes.string, dtypes.int64)
table2 = self.getHashTable()(init2, default_value=-1)
if not context.executing_eagerly():
self.assertEqual("hash_table_1", table2.name)
self.assertEqual("table_scope/hash_table_1",
table2.resource_handle.op.name)
def test_int64(self):
init = lookup_ops.KeyValueTensorInitializer((42, 1, -1000), (0, 1, 2),
dtypes.int64, dtypes.int64)
table = self.getHashTable()(init, default_value=-1)
self.initialize_table(table)
def test_int32(self):
init = lookup_ops.KeyValueTensorInitializer((42, 1, -1000), (0, 1, 2),
dtypes.int32, dtypes.int64)
with self.assertRaises(errors_impl.OpError):
table = self.getHashTable()(init, default_value=-1)
self.initialize_table(table)
class InitializeTableFromFileOpTest(BaseLookupTableTest):
def _createVocabFile(self, basename, values=("brain", "salad", "surgery")):
vocabulary_file = os.path.join(self.get_temp_dir(), basename)
with open(vocabulary_file, "w") as f:
f.write("\n".join(values) + "\n")
return vocabulary_file
def testInitializeStringTable(self):
vocabulary_file = self._createVocabFile("one_column_1.txt")
default_value = -1
init = lookup_ops.TextFileInitializer(
vocabulary_file, dtypes.string, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER)
self.assertIn("one_column_1.txt_-2_-1", init._shared_name)
table = self.getHashTable()(init, default_value)
self.initialize_table(table)
output = table.lookup(constant_op.constant(["brain", "salad", "tank"]))
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
def testInitializeInt64Table(self):
vocabulary_file = self._createVocabFile(
"one_column_int64.txt", values=("42", "1", "-1000"))
with self.cached_session():
default_value = -1
init = lookup_ops.TextFileInitializer(
vocabulary_file, dtypes.int64, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER)
self.assertIn("one_column_int64.txt_-2_-1", init._shared_name)
table = self.getHashTable()(init, default_value)
self.initialize_table(table)
output = table.lookup(
constant_op.constant((42, 1, 11), dtype=dtypes.int64))
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
def testInitializeIndexTable(self):
vocabulary_file = self._createVocabFile("one_column_2.txt")
with self.cached_session():
default_value = "UNK"
key_index = lookup_ops.TextFileIndex.LINE_NUMBER
value_index = lookup_ops.TextFileIndex.WHOLE_LINE
init = lookup_ops.TextFileInitializer(
vocabulary_file, dtypes.int64, key_index, dtypes.string, value_index)
self.assertIn("one_column_2.txt_-1_-2", init._shared_name)
table = self.getHashTable()(init, default_value)
self.initialize_table(table)
input_values = constant_op.constant([0, 1, 2, 3], dtypes.int64)
output = table.lookup(input_values)
result = self.evaluate(output)
self.assertAllEqual([b"brain", b"salad", b"surgery", b"UNK"], result)
def testMultiColumn(self):
vocabulary_file = os.path.join(self.get_temp_dir(), "three_columns.txt")
with open(vocabulary_file, "w") as f:
f.write("\n".join(["0\tbrain\t1", "1\tsalad\t5", "2\tsurgery\t6"]) + "\n")
with self.cached_session():
default_value = -1
key_index = 1
value_index = 2
init = lookup_ops.TextFileInitializer(
vocabulary_file, dtypes.string, key_index, dtypes.int64, value_index)
self.assertIn("three_columns.txt_1_2", init._shared_name)
table = self.getHashTable()(init, default_value)
self.initialize_table(table)
input_string = constant_op.constant(["brain", "salad", "surgery"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([1, 5, 6], result)
def testInvalidDataTypeInMultiColumn(self):
vocabulary_file = os.path.join(self.get_temp_dir(), "three_columns.txt")
with open(vocabulary_file, "w") as f:
f.write("\n".join(["0\tbrain\t1", "1\tsalad\t5", "2\tsurgery\t6"]) + "\n")
with self.cached_session():
default_value = -1
key_index = 2
value_index = 1
init = lookup_ops.TextFileInitializer(
vocabulary_file, dtypes.string, key_index, dtypes.int64, value_index)
self.assertIn("three_columns.txt_2_1", init._shared_name)
with self.assertRaisesOpError("is not a valid"):
table = self.getHashTable()(init, default_value)
self.initialize_table(table)
def testInvalidDataType(self):
vocabulary_file = self._createVocabFile("one_column_3.txt")
with self.cached_session():
default_value = "UNK"
key_index = lookup_ops.TextFileIndex.WHOLE_LINE
value_index = lookup_ops.TextFileIndex.LINE_NUMBER
with self.assertRaises(ValueError):
init = lookup_ops.TextFileInitializer(vocabulary_file, dtypes.int64,
key_index, dtypes.string,
value_index)
self.assertIn("one_column_3.txt_-2_-1", init._shared_name)
self.getHashTable()(init, default_value)
def testInvalidIndex(self):
vocabulary_file = self._createVocabFile("one_column_4.txt")
with self.cached_session():
default_value = -1
key_index = 1 # second column of the line
value_index = lookup_ops.TextFileIndex.LINE_NUMBER
init = lookup_ops.TextFileInitializer(
vocabulary_file, dtypes.string, key_index, dtypes.int64, value_index)
self.assertIn("one_column_4.txt_1_-1", init._shared_name)
with self.assertRaisesOpError("Invalid number of columns"):
table = self.getHashTable()(init, default_value)
self.initialize_table(table)
def testInitializeSameTableWithMultipleNodes(self):
vocabulary_file = self._createVocabFile("one_column_5.txt")
with self.cached_session():
default_value = -1
init1 = lookup_ops.TextFileInitializer(
vocabulary_file, dtypes.string, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER)
self.assertIn("one_column_5.txt_-2_-1", init1._shared_name)
table1 = self.getHashTable()(init1, default_value)
init2 = lookup_ops.TextFileInitializer(
vocabulary_file, dtypes.string, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER)
self.assertIn("one_column_5.txt_-2_-1", init2._shared_name)
table2 = self.getHashTable()(init2, default_value)
init3 = lookup_ops.TextFileInitializer(
vocabulary_file, dtypes.string, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER)
self.assertIn("one_column_5.txt_-2_-1", init3._shared_name)
table3 = self.getHashTable()(init3, default_value)
self.evaluate(lookup_ops.tables_initializer())
input_string = constant_op.constant(["brain", "salad", "tank"])
output1 = table1.lookup(input_string)
output2 = table2.lookup(input_string)
output3 = table3.lookup(input_string)
out1, out2, out3 = self.evaluate([output1, output2, output3])
self.assertAllEqual([0, 1, -1], out1)
self.assertAllEqual([0, 1, -1], out2)
self.assertAllEqual([0, 1, -1], out3)
def testInitializeTableWithNoFilename(self):
with self.cached_session():
default_value = -1
with self.assertRaises(ValueError):
self.getHashTable()(lookup_ops.TextFileInitializer(
"", dtypes.string, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER), default_value)
def testInitializeWithVocabSize(self):
with self.cached_session():
default_value = -1
vocab_size = 3
vocabulary_file1 = self._createVocabFile("one_column6.txt")
init1 = lookup_ops.TextFileInitializer(
vocabulary_file1,
dtypes.string,
lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64,
lookup_ops.TextFileIndex.LINE_NUMBER,
vocab_size=vocab_size)
self.assertIn("one_column6.txt_3_-2_-1", init1._shared_name)
table1 = self.getHashTable()(init1, default_value)
# Initialize from file.
self.initialize_table(table1)
self.assertEqual(vocab_size, self.evaluate(table1.size()))
vocabulary_file2 = self._createVocabFile("one_column7.txt")
vocab_size = 5
init2 = lookup_ops.TextFileInitializer(
vocabulary_file2,
dtypes.string,
lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64,
lookup_ops.TextFileIndex.LINE_NUMBER,
vocab_size=vocab_size)
self.assertIn("one_column7.txt_5_-2_-1", init2._shared_name)
with self.assertRaisesOpError("Invalid vocab_size"):
table2 = self.getHashTable()(init2, default_value)
self.initialize_table(table2)
vocab_size = 1
vocabulary_file3 = self._createVocabFile("one_column3.txt")
init3 = lookup_ops.TextFileInitializer(
vocabulary_file3,
dtypes.string,
lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64,
lookup_ops.TextFileIndex.LINE_NUMBER,
vocab_size=vocab_size)
self.assertIn("one_column3.txt_1_-2_-1", init3._shared_name)
table3 = self.getHashTable()(init3, default_value)
# Smaller vocab size reads only vocab_size records.
self.initialize_table(table3)
self.assertEqual(vocab_size, self.evaluate(table3.size()))
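  # Taken together, the three cases above pin down the vocab_size contract as
  # exercised here: a vocab_size equal to the number of lines in the file
  # initializes normally; one larger than the file fails initialization with
  # "Invalid vocab_size"; one smaller than the file silently truncates,
  # reading only the first vocab_size records.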
@test_util.run_v1_only("placeholder usage")
def testFeedVocabularyName(self):
vocabulary_file = self._createVocabFile("feed_vocabulary.txt")
with self.cached_session():
default_value = -1
init = lookup_ops.TextFileInitializer(
"old_file.txt", dtypes.string, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER)
self.assertIn("old_file.txt_-2_-1", init._shared_name)
table = self.getHashTable()(init, default_value)
      # Initializing with a non-existent file (old_file.txt) should fail.
# TODO(yleon): Update message, which might change per FileSystem.
with self.assertRaisesOpError("old_file.txt"):
table.initializer.run()
# Initialize the model feeding the vocabulary file.
filenames = ops.get_collection(ops.GraphKeys.ASSET_FILEPATHS)
table.initializer.run(feed_dict={filenames[0]: vocabulary_file})
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
def testInvalidFilenames(self):
vocabulary_file = self._createVocabFile("filename_shape.txt")
with self.cached_session():
default_value = -1
# Invalid data type
other_type = constant_op.constant(1)
with self.assertRaises(Exception) as cm:
self.getHashTable()(lookup_ops.TextFileInitializer(
other_type, dtypes.string, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER), default_value)
self.assertIsInstance(cm.exception, (ValueError, TypeError))
# Non-scalar filename
filenames = constant_op.constant([vocabulary_file, vocabulary_file])
if not context.executing_eagerly():
with self.assertRaises(Exception) as cm:
self.getHashTable()(lookup_ops.TextFileInitializer(
filenames, dtypes.string, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER),
default_value)
self.assertIsInstance(cm.exception, (ValueError, TypeError))
else:
with self.assertRaises(errors_impl.InvalidArgumentError):
self.getHashTable()(lookup_ops.TextFileInitializer(
filenames, dtypes.string, lookup_ops.TextFileIndex.WHOLE_LINE,
dtypes.int64, lookup_ops.TextFileIndex.LINE_NUMBER),
default_value)
def testIdToStringTable(self):
vocab_file = self._createVocabFile("feat_to_id_1.txt")
with self.cached_session():
default_value = "UNK"
vocab_size = 3
init = lookup_ops.TextFileStringTableInitializer(
vocab_file, vocab_size=vocab_size)
self.assertTrue("feat_to_id_1.txt_3_-1_-2", init._shared_name)
table = self.getHashTable()(init, default_value)
self.initialize_table(table)
input_values = constant_op.constant([0, 1, 2, 3], dtypes.int64)
out = table.lookup(input_values)
self.assertAllEqual([b"brain", b"salad", b"surgery", b"UNK"],
self.evaluate(out))
self.assertEqual(vocab_size, self.evaluate(table.size()))
def testStringToIdTable(self):
vocab_file = self._createVocabFile("feat_to_id_2.txt")
with self.cached_session():
default_value = -1
vocab_size = 3
init = lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size)
self.assertTrue("feat_to_id_2.txt_3_-1_-2", init._shared_name)
table = self.getHashTable()(init, default_value)
self.initialize_table(table)
input_string = constant_op.constant(["brain", "salad", "surgery", "UNK"])
out = table.lookup(input_string)
self.assertAllEqual([0, 1, 2, -1], self.evaluate(out))
self.assertEqual(vocab_size, self.evaluate(table.size()))
def testInt64ToIdTable(self):
vocab_file = self._createVocabFile(
"feat_to_id_3.txt", values=("42", "1", "-1000"))
with self.cached_session():
default_value = -1
vocab_size = 3
init = lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size, key_dtype=dtypes.int64)
self.assertTrue("feat_to_id_3.txt_3_-1_-2", init._shared_name)
table = self.getHashTable()(init, default_value)
self.initialize_table(table)
out = table.lookup(
constant_op.constant((42, 1, -1000, 11), dtype=dtypes.int64))
self.assertAllEqual((0, 1, 2, -1), self.evaluate(out))
self.assertEqual(vocab_size, self.evaluate(table.size()))
class StaticVocabularyTableTest(BaseLookupTableTest):
def _createVocabFile(self, basename, values=("brain", "salad", "surgery")):
vocabulary_file = os.path.join(self.get_temp_dir(), basename)
with open(vocabulary_file, "w") as f:
f.write("\n".join(values) + "\n")
return vocabulary_file
def testStringStaticVocabularyTable(self):
vocab_file = self._createVocabFile("feat_to_id_1.txt")
vocab_size = 3
oov_buckets = 1
table = self.getVocabularyTable()(lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), oov_buckets)
self.initialize_table(table)
input_string = constant_op.constant(["brain", "salad", "surgery", "UNK"])
out = table.lookup(input_string)
self.assertAllEqual([0, 1, 2, 3], self.evaluate(out))
self.assertEqual(vocab_size + oov_buckets, self.evaluate(table.size()))
def testInt32StaticVocabularyTable(self):
vocab_file = self._createVocabFile("feat_to_id_2.txt", ("42", "1", "-1000"))
vocab_size = 3
oov_buckets = 1
table = self.getVocabularyTable()(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size, key_dtype=dtypes.int64),
oov_buckets,
lookup_key_dtype=dtypes.int32)
self.initialize_table(table)
values = constant_op.constant((42, 1, -1000, 11), dtype=dtypes.int32)
out = table.lookup(values)
self.assertAllEqual([0, 1, 2, 3], self.evaluate(out))
self.assertEqual(vocab_size + oov_buckets, self.evaluate(table.size()))
def testInt64StaticVocabularyTable(self):
vocab_file = self._createVocabFile("feat_to_id_3.txt", ("42", "1", "-1000"))
vocab_size = 3
oov_buckets = 1
table = self.getVocabularyTable()(lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size, key_dtype=dtypes.int64), oov_buckets)
self.initialize_table(table)
values = constant_op.constant((42, 1, -1000, 11), dtype=dtypes.int64)
out = table.lookup(values)
self.assertAllEqual([0, 1, 2, 3], self.evaluate(out))
self.assertEqual(vocab_size + oov_buckets, self.evaluate(table.size()))
def testStringStaticVocabularyTableNoInitializer(self):
oov_buckets = 5
    # Create a table that only uses hash buckets: for each input value it
    # returns an id calculated as fingerprint(input) mod oov_buckets.
table = self.getVocabularyTable()(None, oov_buckets)
self.initialize_table(table)
values = constant_op.constant(("brain", "salad", "surgery"))
out = table.lookup(values)
self.assertAllEqual(
[
3, # fingerprint("brain") mod 5.
1, # fingerprint("salad") mod 5.
4 # fingerprint("surgery") mod 5
],
self.evaluate(out))
self.assertEqual(oov_buckets, self.evaluate(table.size()))
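  # A minimal standalone sketch (not part of the original suite) of the
  # bucket arithmetic asserted above. Assuming the default FastHash
  # hasher_spec, the hash-bucket path is equivalent to
  # tf.strings.to_hash_bucket_fast, i.e.
  # id = fingerprint64(key) mod num_oov_buckets:
  #
  #   import tensorflow as tf
  #   ids = tf.strings.to_hash_bucket_fast(["brain", "salad", "surgery"], 5)
  #   # => [3, 1, 4], matching the expected ids in the test above.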
def testStaticVocabularyTableWithMultipleInitializers(self):
vocab_file = self._createVocabFile("feat_to_id_4.txt")
vocab_size = 3
oov_buckets = 3
init = lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size)
table1 = self.getVocabularyTable()(init, oov_buckets, name="table1")
table2 = self.getVocabularyTable()(init, oov_buckets, name="table2")
self.evaluate(lookup_ops.tables_initializer())
input_string = constant_op.constant(
["fruit", "brain", "salad", "surgery", "UNK"])
out1 = table1.lookup(input_string)
out2 = table2.lookup(input_string)
out1, out2 = self.evaluate([out1, out2])
self.assertAllEqual([5, 0, 1, 2, 5], out1)
self.assertAllEqual([5, 0, 1, 2, 5], out2)
self.assertEqual(vocab_size + oov_buckets, self.evaluate(table1.size()))
self.assertEqual(vocab_size + oov_buckets, self.evaluate(table2.size()))
def testStaticVocabularyTableInitializationAcrossSessions(self):
vocab_file = self._createVocabFile("feat_to_id_5.txt")
with self.cached_session():
vocab_size = 3
oov_buckets = 1
table1 = self.getVocabularyTable()(lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), oov_buckets)
self.initialize_table(table1)
input_string_1 = constant_op.constant(
["brain", "salad", "surgery", "UNK"])
out1 = table1.lookup(input_string_1)
self.assertAllEqual([0, 1, 2, 3], self.evaluate(out1))
self.assertEqual(vocab_size + oov_buckets, self.evaluate(table1.size()))
with self.cached_session():
vocab_size = 3
oov_buckets = 1
      # The underlying lookup table was already initialized in the previous
      # session, so there is no need to initialize table2.
table2 = self.getVocabularyTable()(lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), oov_buckets)
input_string_2 = constant_op.constant(["fruit", "salad", "UNK"])
out2 = table2.lookup(input_string_2)
self.assertAllEqual([3, 1, 3], self.evaluate(out2))
self.assertEqual(vocab_size + oov_buckets, self.evaluate(table2.size()))
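  # The cross-session sharing above works because both initializers are built
  # from the same file, vocab_size, and column indices, and so resolve to the
  # same _shared_name (the "feat_to_id_5.txt_3_-2_-1"-style suffix asserted
  # elsewhere in this file); the second session therefore attaches to the
  # resource created by the first rather than building a fresh, uninitialized
  # table.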
def testStaticVocabularyTableAssetTracking(self):
vocab_file = self._createVocabFile("vocab.txt")
vocab_size = 3
oov_buckets = 1
table = self.getVocabularyTable()(lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), oov_buckets)
object_graph_view = graph_view.ObjectGraphView(table)
objects = object_graph_view.list_objects()
assets = list(filter(lambda obj: isinstance(obj, tracking.Asset), objects))
self.assertLen(assets, 1)
self.assertEqual(
self.evaluate(assets[0].asset_path), compat.as_bytes(vocab_file))
def testSparseTensor(self):
vocab_file = self._createVocabFile("feat_to_id_7.txt")
input_indices = [[0, 0], [0, 1], [2, 0], [2, 2], [3, 0]]
input_shape = [4, 4]
sp_features = sparse_tensor.SparseTensor(
constant_op.constant(input_indices, dtypes.int64),
constant_op.constant(["brain", "salad", "brain", "surgery", "tarkus"],
dtypes.string),
constant_op.constant(input_shape, dtypes.int64))
table = self.getVocabularyTable()(lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=3), 1)
self.initialize_table(table)
sp_ids = table.lookup(sp_features)
self.assertAllEqual([5], sp_ids.values._shape_as_list())
sp_ids_ind, sp_ids_val, sp_ids_shape = self.evaluate(
[sp_ids.indices, sp_ids.values, sp_ids.dense_shape])
self.assertAllEqual(input_indices, sp_ids_ind)
self.assertAllEqual([0, 1, 0, 2, 3], sp_ids_val)
self.assertAllEqual(input_shape, sp_ids_shape)
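  # As asserted above, looking up a SparseTensor maps only its values through
  # the table; the indices and dense_shape pass through unchanged, so the
  # sparsity structure of the input is preserved.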
def testInt32SparseTensor(self):
input_indices = [[0, 0], [0, 1], [2, 0], [2, 2], [3, 0]]
input_shape = [4, 4]
sp_features = sparse_tensor.SparseTensor(
constant_op.constant(input_indices, dtypes.int64),
constant_op.constant([42, 1, 42, -1000, 11], dtypes.int32),
constant_op.constant(input_shape, dtypes.int64))
table = self.getVocabularyTable()(
lookup_ops.KeyValueTensorInitializer((42, 1, -1000), (0, 1, 2),
dtypes.int64, dtypes.int64),
1,
lookup_key_dtype=dtypes.int32)
self.initialize_table(table)
sp_ids = table.lookup(sp_features)
self.assertAllEqual([5], sp_ids.values._shape_as_list())
sp_ids_ind, sp_ids_val, sp_ids_shape = self.evaluate(
[sp_ids.indices, sp_ids.values, sp_ids.dense_shape])
self.assertAllEqual(input_indices, sp_ids_ind)
self.assertAllEqual([0, 1, 0, 2, 3], sp_ids_val)
self.assertAllEqual(input_shape, sp_ids_shape)
def testInt64SparseTensor(self):
input_indices = [[0, 0], [0, 1], [2, 0], [2, 2], [3, 0]]
input_shape = [4, 4]
sp_features = sparse_tensor.SparseTensor(
constant_op.constant(input_indices, dtypes.int64),
constant_op.constant([42, 1, 42, -1000, 11], dtypes.int64),
constant_op.constant(input_shape, dtypes.int64))
table = self.getVocabularyTable()(lookup_ops.KeyValueTensorInitializer(
(42, 1, -1000), (0, 1, 2), dtypes.int64, dtypes.int64), 1)
self.initialize_table(table)
sp_ids = table.lookup(sp_features)
self.assertAllEqual([5], sp_ids.values._shape_as_list())
sp_ids_ind, sp_ids_val, sp_ids_shape = self.evaluate(
[sp_ids.indices, sp_ids.values, sp_ids.dense_shape])
self.assertAllEqual(input_indices, sp_ids_ind)
self.assertAllEqual([0, 1, 0, 2, 3], sp_ids_val)
self.assertAllEqual(input_shape, sp_ids_shape)
def testStaticVocabularyTableNoInnerTable(self):
table = self.getVocabularyTable()(None, num_oov_buckets=1)
self.assertIsNone(table.resource_handle)
class DenseHashTableOpTest(test.TestCase):
def testBasic(self):
with self.cached_session():
keys = constant_op.constant([11, 12, 13, 14], dtypes.int64)
values = constant_op.constant([0, 1, 2, 3], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=0,
deleted_key=-1)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(4, self.evaluate(table.size()))
remove_string = constant_op.constant([12, 15], dtypes.int64)
self.evaluate(table.remove(remove_string))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant([11, 12, 15], dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([3], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual([0, -1, -1], result)
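  # DenseHashTable is an open-addressing table, so the constructor reserves
  # two sentinel key values: empty_key marks never-used buckets and
  # deleted_key marks tombstones left behind by remove(). Neither sentinel
  # may itself be inserted or looked up, and the two must differ (see
  # testSameEmptyAndDeletedKey and testErrors below).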
def testBasicBool(self):
with self.cached_session():
keys = constant_op.constant([11, 12, 13, 14], dtypes.int64)
values = constant_op.constant([True, True, True, True], dtypes.bool)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.bool,
default_value=False,
empty_key=0,
deleted_key=-1)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(4, self.evaluate(table.size()))
remove_string = constant_op.constant([11, 15], dtypes.int64)
self.evaluate(table.remove(remove_string))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant([11, 12, 15], dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([3], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual([False, True, False], result)
def testSameEmptyAndDeletedKey(self):
with self.cached_session():
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"Empty and deleted keys"):
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=42,
deleted_key=42)
self.assertAllEqual(0, self.evaluate(table.size()))
@test_util.run_v1_only("uses placeholders")
def testLookupUnknownShape(self):
with self.cached_session():
keys = constant_op.constant([11, 12, 13], dtypes.int64)
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=0,
deleted_key=-1)
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
placeholder_keys = array_ops.placeholder(dtypes.int64)
output = table.lookup(placeholder_keys)
self.assertAllEqual(None, output.get_shape())
result = output.eval({placeholder_keys: [11, 12, 15]})
self.assertAllEqual([0, 1, -1], result)
def testMapStringToFloat(self):
with self.cached_session():
keys = constant_op.constant(["a", "b", "c", "d"], dtypes.string)
values = constant_op.constant([0.0, 1.1, 2.2, 3.3], dtypes.float32)
default_value = constant_op.constant(-1.5, dtypes.float32)
table = lookup_ops.DenseHashTable(
dtypes.string,
dtypes.float32,
default_value=default_value,
empty_key="",
deleted_key="$")
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(4, self.evaluate(table.size()))
remove_string = constant_op.constant(["b", "e"])
self.evaluate(table.remove(remove_string))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["a", "b", "d", "e"], dtypes.string)
output = table.lookup(input_string)
self.assertAllEqual([4], output.get_shape())
result = self.evaluate(output)
self.assertAllClose([0, -1.5, 3.3, -1.5], result)
def testMapInt64ToFloat(self):
for float_dtype in [dtypes.float32, dtypes.float64]:
with self.cached_session():
keys = constant_op.constant([11, 12, 13, 14], dtypes.int64)
values = constant_op.constant([0.0, 1.1, 2.2, 3.3], float_dtype)
default_value = constant_op.constant(-1.5, float_dtype)
table = lookup_ops.DenseHashTable(
dtypes.int64,
float_dtype,
default_value=default_value,
empty_key=0,
deleted_key=-1)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(4, self.evaluate(table.size()))
remove_string = constant_op.constant([12, 15], dtypes.int64)
self.evaluate(table.remove(remove_string))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant([11, 12, 14, 15], dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([4], output.get_shape())
result = self.evaluate(output)
self.assertAllClose([0, -1.5, 3.3, -1.5], result)
def testVectorValues(self):
with self.cached_session():
keys = constant_op.constant([11, 12, 13], dtypes.int64)
values = constant_op.constant([[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]],
dtypes.int64)
default_value = constant_op.constant([-1, -2, -3, -4], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=0,
deleted_key=-1,
initial_num_buckets=4)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
self.assertAllEqual(4, len(self.evaluate(table.export()[0])))
self.evaluate(
table.insert(
constant_op.constant([14], dtypes.int64),
constant_op.constant([[2, 3, 4, 5]], dtypes.int64)))
self.assertAllEqual(4, self.evaluate(table.size()))
self.assertAllEqual(8, len(self.evaluate(table.export()[0])))
remove_string = constant_op.constant([12, 16], dtypes.int64)
self.evaluate(table.remove(remove_string))
self.assertAllEqual(3, self.evaluate(table.size()))
self.assertAllEqual(8, len(self.evaluate(table.export()[0])))
input_string = constant_op.constant([11, 12, 14, 15], dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([4, 4],
output.shape,
msg="Saw shape: %s" % output.shape)
result = self.evaluate(output)
self.assertAllEqual(
[[0, 1, 2, 3], [-1, -2, -3, -4], [2, 3, 4, 5], [-1, -2, -3, -4]],
result)
def testVectorKeys(self):
with self.cached_session():
keys = constant_op.constant([[0, 1], [1, 2], [1, 3]], dtypes.int64)
values = constant_op.constant([10, 11, 12], dtypes.int64)
empty_key = constant_op.constant([0, 3], dtypes.int64)
deleted_key = constant_op.constant([-1, -1], dtypes.int64)
default_value = constant_op.constant(-1, dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
initial_num_buckets=8)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
self.evaluate(
table.insert(
constant_op.constant([[0, 0]], dtypes.int64),
constant_op.constant([13], dtypes.int64)))
self.assertAllEqual(4, self.evaluate(table.size()))
self.assertAllEqual(8, len(self.evaluate(table.export()[0])))
remove_string = constant_op.constant([[1, 2], [7, 8]], dtypes.int64)
self.evaluate(table.remove(remove_string))
self.assertAllEqual(3, self.evaluate(table.size()))
self.assertAllEqual(8, len(self.evaluate(table.export()[0])))
input_string = constant_op.constant([[0, 1], [1, 2], [1, 3], [0, 2]],
dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([4], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual([10, -1, 12, -1], result)
def testResize(self):
with self.cached_session():
keys = constant_op.constant([11, 12, 13], dtypes.int64)
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=0,
deleted_key=-1,
initial_num_buckets=4)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
self.assertAllEqual(4, len(self.evaluate(table.export()[0])))
keys2 = constant_op.constant([12, 99], dtypes.int64)
self.evaluate(table.remove(keys2))
self.assertAllEqual(2, self.evaluate(table.size()))
self.assertAllEqual(4, len(self.evaluate(table.export()[0])))
keys3 = constant_op.constant([13, 14, 15, 16, 17], dtypes.int64)
values3 = constant_op.constant([3, 4, 5, 6, 7], dtypes.int64)
self.evaluate(table.insert(keys3, values3))
self.assertAllEqual(6, self.evaluate(table.size()))
self.assertAllEqual(16, len(self.evaluate(table.export()[0])))
keys4 = constant_op.constant([10, 11, 12, 13, 14, 15, 16, 17, 18],
dtypes.int64)
output = table.lookup(keys4)
self.assertAllEqual([-1, 0, -1, 3, 4, 5, 6, 7, -1], self.evaluate(output))
def testExport(self):
with self.cached_session():
keys = constant_op.constant([11, 12, 13, 14], dtypes.int64)
values = constant_op.constant([1, 2, 3, 4], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=100,
deleted_key=200,
initial_num_buckets=8)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(4, self.evaluate(table.size()))
keys2 = constant_op.constant([12, 15], dtypes.int64)
self.evaluate(table.remove(keys2))
self.assertAllEqual(3, self.evaluate(table.size()))
exported_keys, exported_values = table.export()
np_keys = self.evaluate(exported_keys)
np_values = self.evaluate(exported_values)
self.assertAllEqual(8, len(np_keys))
self.assertAllEqual(8, len(np_values))
      # Pair up keys and values, dropping the extra dimension added by dstack.
pairs = np.dstack((np_keys.flatten(), np_values.flatten()))[0]
# sort by key
pairs = pairs[pairs[:, 0].argsort()]
self.assertAllEqual([[11, 1], [13, 3], [14, 4], [100, 0], [100, 0],
[100, 0], [100, 0], [200, 2]], pairs)
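  # A tiny numpy sketch (illustrative only) of the pairing idiom used above:
  # np.dstack stacks along a new third axis, so two length-n vectors become
  # shape (1, n, 2), and indexing [0] recovers the (n, 2) key/value rows,
  # which argsort on column 0 then orders by key.
  #
  #   import numpy as np
  #   k, v = np.array([13, 11, 14]), np.array([3, 1, 4])
  #   pairs = np.dstack((k, v))[0]          # [[13 3] [11 1] [14 4]]
  #   pairs = pairs[pairs[:, 0].argsort()]  # [[11 1] [13 3] [14 4]]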
@test_util.run_v1_only("Saver V1 only")
def testSaveRestore(self):
save_dir = os.path.join(self.get_temp_dir(), "save_restore")
save_path = os.path.join(tempfile.mkdtemp(prefix=save_dir), "hash")
with self.session(graph=ops.Graph()) as sess:
default_value = -1
empty_key = 0
deleted_key = -1
keys = constant_op.constant([11, 12, 13, 14], dtypes.int64)
values = constant_op.constant([0, 1, 2, 3], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t1",
checkpoint=True,
initial_num_buckets=32)
save = saver.Saver()
self.assertAllEqual(0, table.size().eval())
table.insert(keys, values).run()
self.assertAllEqual(4, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
keys2 = constant_op.constant([12, 15], dtypes.int64)
table.remove(keys2).run()
self.assertAllEqual(3, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
val = save.save(sess, save_path)
self.assertIsInstance(val, six.string_types)
self.assertEqual(save_path, val)
with self.session(graph=ops.Graph()) as sess:
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t1",
checkpoint=True,
initial_num_buckets=64)
table.insert(
constant_op.constant([11, 14], dtypes.int64),
constant_op.constant([12, 24], dtypes.int64)).run()
self.assertAllEqual(2, table.size().eval())
self.assertAllEqual(64, len(table.export()[0].eval()))
save = saver.Saver()
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path)
self.assertAllEqual(3, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
input_string = constant_op.constant([10, 11, 12, 13, 14], dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([-1, 0, -1, 2, 3], output.eval())
@test_util.run_v1_only("Saver V1 only")
def testSaveRestoreOnlyTable(self):
save_dir = os.path.join(self.get_temp_dir(), "save_restore")
save_path = os.path.join(tempfile.mkdtemp(prefix=save_dir), "hash")
with self.session(graph=ops.Graph()) as sess:
default_value = -1
empty_key = 0
deleted_key = -1
keys = constant_op.constant([11, 12, 13, 14], dtypes.int64)
values = constant_op.constant([0, 1, 2, 3], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t1",
checkpoint=True,
initial_num_buckets=32)
save = saver.Saver([table])
self.assertAllEqual(0, table.size().eval())
table.insert(keys, values).run()
self.assertAllEqual(4, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
keys2 = constant_op.constant([12, 15], dtypes.int64)
table.remove(keys2).run()
self.assertAllEqual(3, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
val = save.save(sess, save_path)
self.assertIsInstance(val, six.string_types)
self.assertEqual(save_path, val)
with self.session(graph=ops.Graph()) as sess:
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t1",
checkpoint=True,
initial_num_buckets=64)
table.insert(
constant_op.constant([11, 14], dtypes.int64),
constant_op.constant([12, 24], dtypes.int64)).run()
self.assertAllEqual(2, table.size().eval())
self.assertAllEqual(64, len(table.export()[0].eval()))
save = saver.Saver([table])
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path)
self.assertAllEqual(3, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
input_string = constant_op.constant([10, 11, 12, 13, 14], dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([-1, 0, -1, 2, 3], output.eval())
@test_util.run_in_graph_and_eager_modes
def testObjectSaveRestore(self):
save_dir = os.path.join(self.get_temp_dir(), "save_restore")
save_prefix = os.path.join(tempfile.mkdtemp(prefix=save_dir), "hash")
default_value = -1
empty_key = 0
deleted_key = -1
keys = constant_op.constant([11, 12, 13], dtypes.int64)
values = constant_op.constant([0, 1, 2], dtypes.int64)
save_table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t1",
checkpoint=True,
initial_num_buckets=32)
save_checkpoint = trackable.Checkpoint(table=save_table)
self.assertAllEqual(0, self.evaluate(save_table.size()))
self.evaluate(save_table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(save_table.size()))
self.assertAllEqual(32, len(self.evaluate(save_table.export()[0])))
save_path = save_checkpoint.save(save_prefix)
del save_table, save_checkpoint
load_table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t1",
checkpoint=True,
initial_num_buckets=64)
self.evaluate(
load_table.insert(
constant_op.constant([11, 14], dtypes.int64),
constant_op.constant([12, 24], dtypes.int64)))
self.assertAllEqual(2, self.evaluate(load_table.size()))
self.assertAllEqual(64, len(self.evaluate(load_table.export()[0])))
restore_checkpoint = trackable.Checkpoint(table=load_table)
# Restore the saved values in the parameter nodes.
restore_checkpoint.restore(save_path).run_restore_ops()
self.assertAllEqual(3, self.evaluate(load_table.size()))
self.assertAllEqual(32, len(self.evaluate(load_table.export()[0])))
input_string = constant_op.constant([10, 11, 12, 13, 14], dtypes.int64)
output = load_table.lookup(input_string)
self.assertAllEqual([-1, 0, 1, 2, -1], self.evaluate(output))
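  # Note the layout behavior verified above: the second table is constructed
  # with initial_num_buckets=64 and two freshly inserted entries, yet after
  # the restore it reports the checkpointed state (3 entries, 32 buckets) and
  # the pre-restore inserts are gone. Checkpoint contents, not constructor
  # arguments, determine the table's state after a restore.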
@test_util.run_v1_only("Saver V1 only")
def testVectorSaveRestore(self):
save_dir = os.path.join(self.get_temp_dir(), "vector_save_restore")
save_path = os.path.join(tempfile.mkdtemp(prefix=save_dir), "hash")
with self.session(graph=ops.Graph()) as sess:
empty_key = constant_op.constant([11, 13], dtypes.int64)
deleted_key = constant_op.constant([-2, -3], dtypes.int64)
default_value = constant_op.constant([-1, -2], dtypes.int64)
keys = constant_op.constant([[11, 12], [11, 14], [12, 13], [13, 14]],
dtypes.int64)
values = constant_op.constant([[0, 1], [2, 3], [2, 4], [4, 5]],
dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t1",
checkpoint=True,
initial_num_buckets=32)
save = saver.Saver()
self.assertAllEqual(0, table.size().eval())
table.insert(keys, values).run()
self.assertAllEqual(4, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
keys2 = constant_op.constant([[12, 13], [16, 17]], dtypes.int64)
table.remove(keys2).run()
self.assertAllEqual(3, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
val = save.save(sess, save_path)
self.assertIsInstance(val, six.string_types)
self.assertEqual(save_path, val)
with self.session(graph=ops.Graph()) as sess:
empty_key = constant_op.constant([11, 13], dtypes.int64)
deleted_key = constant_op.constant([-2, -3], dtypes.int64)
default_value = constant_op.constant([-1, -2], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t1",
checkpoint=True,
initial_num_buckets=64)
table.insert(
constant_op.constant([[11, 12], [13, 15]], dtypes.int64),
constant_op.constant([[21, 22], [23, 24]], dtypes.int64)).run()
self.assertAllEqual(2, table.size().eval())
self.assertAllEqual(64, len(table.export()[0].eval()))
save = saver.Saver()
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path)
self.assertAllEqual(3, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
input_string = constant_op.constant(
[[11, 12], [11, 14], [11, 15], [13, 14], [13, 15]], dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([[0, 1], [2, 3], [-1, -2], [4, 5], [-1, -2]],
output.eval())
@test_util.run_v1_only("Saver V1 only")
def testVectorScalarSaveRestore(self):
save_dir = os.path.join(self.get_temp_dir(), "vector_scalar_save_restore")
save_path = os.path.join(tempfile.mkdtemp(prefix=save_dir), "hash")
with self.session(graph=ops.Graph()) as sess:
empty_key = constant_op.constant([11, 13], dtypes.int64)
deleted_key = constant_op.constant([-1, -1], dtypes.int64)
default_value = constant_op.constant(-1, dtypes.int64)
keys = constant_op.constant([[11, 12], [11, 14], [12, 13], [13, 14]],
dtypes.int64)
values = constant_op.constant([0, 1, 2, 3], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t2",
checkpoint=True,
initial_num_buckets=32)
save = saver.Saver()
self.assertAllEqual(0, table.size().eval())
table.insert(keys, values).run()
self.assertAllEqual(4, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
keys2 = constant_op.constant([[12, 13], [15, 16]], dtypes.int64)
table.remove(keys2).run()
self.assertAllEqual(3, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
val = save.save(sess, save_path)
self.assertIsInstance(val, six.string_types)
self.assertEqual(save_path, val)
with self.session(graph=ops.Graph()) as sess:
empty_key = constant_op.constant([11, 13], dtypes.int64)
deleted_key = constant_op.constant([-1, -1], dtypes.int64)
default_value = constant_op.constant(-1, dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=default_value,
empty_key=empty_key,
deleted_key=deleted_key,
name="t2",
checkpoint=True,
initial_num_buckets=64)
table.insert(
constant_op.constant([[11, 12], [13, 15]], dtypes.int64),
constant_op.constant([3, 4], dtypes.int64)).run()
self.assertAllEqual(2, table.size().eval())
self.assertAllEqual(64, len(table.export()[0].eval()))
save = saver.Saver()
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path)
self.assertAllEqual(3, table.size().eval())
self.assertAllEqual(32, len(table.export()[0].eval()))
input_string = constant_op.constant(
[[11, 12], [11, 14], [11, 15], [13, 14], [13, 15]], dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([0, 1, -1, 3, -1], output.eval())
def testReprobe(self):
with self.cached_session():
      # Insert 6 keys into a table with 8 buckets. The keys are chosen so that
      # collisions occur under an identity-style integer hash (as in GCC STL).
keys = constant_op.constant([11, 12, 13, 19, 20, 21], dtypes.int64)
values = constant_op.constant([51, 52, 53, 54, 55, 56], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=0,
deleted_key=-1,
initial_num_buckets=8)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(6, self.evaluate(table.size()))
input_string = constant_op.constant([10, 11, 12, 13, 14, 19, 20, 21, 22],
dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([9], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual([-1, 51, 52, 53, -1, 54, 55, 56, -1], result)
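  # Why these keys collide under an identity-style integer hash (illustrative
  # arithmetic; the exact hash inside DenseHashTable is an implementation
  # detail): with 8 buckets, bucket = key mod 8, so 11 and 19 both map to
  # bucket 3, 12 and 20 to bucket 4, and 13 and 21 to bucket 5. Inserting the
  # second key of each pair must probe past an occupied slot, which is the
  # reprobing path this test exercises.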
def testCustomEmptyKey(self):
with self.cached_session():
keys = constant_op.constant([11, 0, 13], dtypes.int64)
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=12,
deleted_key=-1)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant([11, 0, 15], dtypes.int64)
output = table.lookup(input_string)
self.assertAllEqual([3], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
def testErrors(self):
with self.cached_session():
table = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=0,
deleted_key=-1)
# Inserting the empty key returns an error
keys1 = constant_op.constant([11, 0], dtypes.int64)
values1 = constant_op.constant([0, 1], dtypes.int64)
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"empty_key"):
self.evaluate(table.insert(keys1, values1))
# Looking up the empty key returns an error
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"empty_key"):
self.evaluate(table.lookup(keys1))
# Inserting the deleted key returns an error
keys2 = constant_op.constant([11, -1], dtypes.int64)
values2 = constant_op.constant([0, 1], dtypes.int64)
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"deleted_key"):
self.evaluate(table.insert(keys2, values2))
      # Looking up the deleted key returns an error
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"deleted_key"):
self.evaluate(table.lookup(keys2))
# Arbitrary tensors of keys are not supported
keys = constant_op.constant([[11, 0], [12, 1]], dtypes.int64)
values = constant_op.constant([[11, 0], [12, 1]], dtypes.int64)
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"Expected key shape"):
self.evaluate(table.lookup(keys))
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"Expected key shape"):
self.evaluate(table.insert(keys, values))
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"Number of buckets must be"):
table2 = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=17,
deleted_key=-1,
initial_num_buckets=12)
self.assertAllEqual(0, self.evaluate(table2.size()))
with self.assertRaisesRegexp(
errors_impl.InvalidArgumentError,
"Empty and deleted keys must have same shape"):
table3 = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=42,
deleted_key=[1, 2])
self.assertAllEqual(0, self.evaluate(table3.size()))
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"Empty and deleted keys cannot be equal"):
table4 = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=42,
deleted_key=42)
self.assertAllEqual(0, self.evaluate(table4.size()))
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"Empty and deleted keys cannot be equal"):
table5 = lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.int64,
default_value=-1,
empty_key=[1, 2, 3],
deleted_key=[1, 2, 3])
self.assertAllEqual(0, self.evaluate(table5.size()))
class IndexTableFromFile(test.TestCase):
def _createVocabFile(self, basename, values=("brain", "salad", "surgery")):
vocabulary_file = os.path.join(self.get_temp_dir(), basename)
with open(vocabulary_file, "w") as f:
f.write("\n".join(values) + "\n")
return vocabulary_file
def test_string_index_table_from_file(self):
vocabulary_file = self._createVocabFile("f2i_vocab1.txt")
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file, num_oov_buckets=1)
ids = table.lookup(constant_op.constant(["salad", "surgery", "tarkus"]))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
def test_string_index_table_from_multicolumn_file(self):
vocabulary_file = self._createVocabFile(
"f2i_vocab1.txt", values=("brain\t300", "salad\t20", "surgery\t1"))
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file,
num_oov_buckets=1,
key_column_index=0,
value_column_index=lookup_ops.TextFileIndex.LINE_NUMBER)
ids = table.lookup(constant_op.constant(["salad", "surgery", "tarkus"]))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
def test_string_index_table_from_multicolumn_file_custom_delimiter(self):
vocabulary_file = self._createVocabFile(
"f2i_vocab1.txt", values=("brain 300", "salad 20", "surgery 1"))
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file,
num_oov_buckets=1,
key_column_index=0,
value_column_index=lookup_ops.TextFileIndex.LINE_NUMBER,
delimiter=" ")
ids = table.lookup(constant_op.constant(["salad", "surgery", "tarkus"]))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
def test_string_index_table_from_file_tensor_filename(self):
vocabulary_file = self._createVocabFile("f2i_vocab1.txt")
with self.cached_session():
vocabulary_file = constant_op.constant(vocabulary_file)
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file, num_oov_buckets=1)
ids = table.lookup(constant_op.constant(["salad", "surgery", "tarkus"]))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
if not context.executing_eagerly():
self.assertEqual(1,
len(ops.get_collection(ops.GraphKeys.ASSET_FILEPATHS)))
@test_util.run_v1_only("placeholder usage")
def test_string_index_table_from_file_placeholder_filename(self):
vocabulary_file = self._createVocabFile("f2i_vocab1.txt")
with self.cached_session():
vocabulary_placeholder = array_ops.placeholder(dtypes.string, [])
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_placeholder, num_oov_buckets=1)
ids = table.lookup(constant_op.constant(["salad", "surgery", "tarkus"]))
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
feed_dict = {vocabulary_placeholder.name: vocabulary_file}
lookup_ops.tables_initializer().run(feed_dict=feed_dict)
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
self.assertEqual(0,
len(ops.get_collection(ops.GraphKeys.ASSET_FILEPATHS)))
def test_int32_index_table_from_file(self):
vocabulary_file = self._createVocabFile(
"f2i_vocab2.txt", values=("42", "1", "-1000"))
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file,
num_oov_buckets=1,
key_dtype=dtypes.int32)
ids = table.lookup(
constant_op.constant((1, -1000, 11), dtype=dtypes.int32))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
def test_int64_index_table_from_file(self):
vocabulary_file = self._createVocabFile(
"f2i_vocab3.txt", values=("42", "1", "-1000"))
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file,
num_oov_buckets=1,
key_dtype=dtypes.int64)
ids = table.lookup(
constant_op.constant((1, -1000, 11), dtype=dtypes.int64))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
def test_index_table_from_file_with_default_value(self):
default_value = -42
vocabulary_file = self._createVocabFile("f2i_vocab4.txt")
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file, default_value=default_value)
ids = table.lookup(constant_op.constant(["salad", "surgery", "tarkus"]))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, default_value), self.evaluate(ids))
def test_index_table_from_file_with_oov_buckets(self):
vocabulary_file = self._createVocabFile("f2i_vocab5.txt")
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file, num_oov_buckets=1000)
ids = table.lookup(
constant_op.constant(["salad", "surgery", "tarkus", "toccata"]))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual(
(
1, # From vocabulary file.
2, # From vocabulary file.
              867, # 3 + fingerprint("tarkus") mod 1000.
              860), # 3 + fingerprint("toccata") mod 1000.
self.evaluate(ids))
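  # The expected ids above follow the standard OOV formula: in-vocabulary
  # keys keep their line-number ids, while an out-of-vocabulary key maps to
  # vocab_size + (fingerprint64(key) mod num_oov_buckets). A sketch with the
  # public API (assuming the default FastHash hasher):
  #
  #   import tensorflow as tf
  #   oov_id = 3 + tf.strings.to_hash_bucket_fast(["tarkus"], 1000)
  #   # => [867], matching the assertion above.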
def test_index_table_from_file_fails_with_empty_vocabulary_file_name(self):
self.assertRaises(
ValueError, lookup_ops.index_table_from_file, vocabulary_file="")
def test_index_table_from_file_fails_with_empty_vocabulary(self):
self.assertRaises(
ValueError, lookup_ops.index_table_from_file, vocabulary_file=None)
def test_index_table_from_file_str_fails_with_zero_size_vocabulary(self):
vocabulary_file = self._createVocabFile("zero_vocab_str.txt")
self.assertRaisesRegexp(
ValueError,
"vocab_size must be greater than 0, got 0. "
"vocabulary_file: .*zero_vocab_str.txt",
lookup_ops.index_table_from_file,
vocabulary_file=vocabulary_file,
vocab_size=0)
def test_index_table_from_file_tensor_fails_with_zero_size_vocabulary(self):
vocabulary_file = constant_op.constant(
self._createVocabFile("zero_vocab_tensor.txt"))
self.assertRaisesRegexp(
ValueError,
"vocab_size must be greater than 0, got 0. "
"vocabulary_file: .*zero_vocab_tensor.txt",
lookup_ops.index_table_from_file,
vocabulary_file=vocabulary_file,
vocab_size=0)
def test_index_table_from_file_with_vocab_size_too_small(self):
vocabulary_file = self._createVocabFile("f2i_vocab6.txt")
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file, vocab_size=2)
ids = table.lookup(constant_op.constant(["salad", "surgery", "tarkus"]))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, -1, -1), self.evaluate(ids))
self.assertEqual(2, self.evaluate(table.size()))
def test_index_table_from_file_with_vocab_size_too_large(self):
vocabulary_file = self._createVocabFile("f2i_vocab7.txt")
with self.cached_session():
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"Invalid vocab_size"):
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file, vocab_size=4)
self.evaluate(table.initializer)
def test_index_table_from_file_with_vocab_size(self):
vocabulary_file = self._createVocabFile("f2i_vocab8.txt")
self.assertRaises(
ValueError,
lookup_ops.index_table_from_file,
vocabulary_file=vocabulary_file,
vocab_size=0)
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file, vocab_size=3)
ids = table.lookup(constant_op.constant(["salad", "surgery", "tarkus"]))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, -1), self.evaluate(ids))
self.assertEqual(3, self.evaluate(table.size()))
def test_index_table_from_file_with_invalid_hashers(self):
vocabulary_file = self._createVocabFile("invalid_hasher.txt")
with self.cached_session():
with self.assertRaises(TypeError):
lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file,
vocab_size=3,
num_oov_buckets=1,
hasher_spec=1)
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file,
vocab_size=3,
num_oov_buckets=1,
hasher_spec=lookup_ops.HasherSpec("my-awesome-hash", None))
self.assertRaises(ValueError, table.lookup,
constant_op.constant(["salad", "surgery", "tarkus"]))
def test_index_table_from_file_table_ref_with_oov_buckets(self):
vocabulary_file = self._createVocabFile("f2i_vocab9.txt")
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file, num_oov_buckets=1)
self.assertIsNotNone(table.resource_handle)
def test_index_table_from_file_table_ref_without_oov_buckets(self):
vocabulary_file = self._createVocabFile("f2i_vocab10.txt")
with self.cached_session():
table = lookup_ops.index_table_from_file(
vocabulary_file=vocabulary_file, num_oov_buckets=0)
self.assertIsNotNone(table.resource_handle)
class IndexTableFromTensor(test.TestCase):
@test_util.run_in_graph_and_eager_modes
def test_index_table_from_tensor_with_tensor_init(self):
table = lookup_ops.index_table_from_tensor(
vocabulary_list=("brain", "salad", "surgery"), num_oov_buckets=1)
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(
table.lookup(constant_op.constant(("salad", "surgery", "tarkus"))))
else:
# Reinitializing a table in eager should work.
table = lookup_ops.index_table_from_tensor(
vocabulary_list=("brain", "salad", "surgery"), num_oov_buckets=1)
self.evaluate(lookup_ops.tables_initializer())
ids = table.lookup(constant_op.constant(("salad", "surgery", "tarkus")))
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
def test_int32_index_table_from_tensor_with_tensor_init(self):
with self.cached_session():
table = lookup_ops.index_table_from_tensor(
vocabulary_list=(42, 1, -1000), num_oov_buckets=1, dtype=dtypes.int32)
ids = table.lookup(
constant_op.constant((1, -1000, 11), dtype=dtypes.int32))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.FailedPreconditionError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
def test_int64_index_table_from_tensor_with_tensor_init(self):
with self.cached_session():
table = lookup_ops.index_table_from_tensor(
vocabulary_list=(42, 1, -1000), num_oov_buckets=1, dtype=dtypes.int64)
ids = table.lookup(
constant_op.constant((1, -1000, 11), dtype=dtypes.int64))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.FailedPreconditionError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, 3), self.evaluate(ids))
def test_index_table_from_tensor_with_default_value(self):
default_value = -42
with self.cached_session():
table = lookup_ops.index_table_from_tensor(
vocabulary_list=["brain", "salad", "surgery"],
default_value=default_value)
ids = table.lookup(constant_op.constant(["salad", "surgery", "tarkus"]))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.FailedPreconditionError):
self.evaluate(ids)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((1, 2, default_value), self.evaluate(ids))
def test_index_table_from_tensor_missing_vocabulary_list(self):
with self.cached_session():
with self.assertRaisesRegexp(ValueError,
"vocabulary_list must be specified"):
lookup_ops.index_table_from_tensor(
vocabulary_list=None, num_oov_buckets=1)
def test_index_table_from_tensor_empty_vocabulary_list(self):
with self.cached_session():
with self.assertRaisesRegexp(
errors_impl.OpError, "keys and values cannot be empty"):
_ = lookup_ops.index_table_from_tensor(
vocabulary_list=np.array([], dtype=np.str_), num_oov_buckets=1)
self.evaluate(lookup_ops.tables_initializer())
def test_index_table_from_tensor_with_invalid_hashers(self):
with self.cached_session():
with self.assertRaises(TypeError):
lookup_ops.index_table_from_tensor(
vocabulary_list=["brain", "salad", "surgery"],
num_oov_buckets=1,
hasher_spec=1)
table = lookup_ops.index_table_from_tensor(
vocabulary_list=["brain", "salad", "surgery"],
num_oov_buckets=1,
hasher_spec=lookup_ops.HasherSpec("my-awesome-hash", None))
self.assertRaises(ValueError, table.lookup,
constant_op.constant(["salad", "surgery", "tarkus"]))
class IndexToStringTableFromFileTest(test.TestCase):
def _createVocabFile(self, basename, values=("brain", "salad", "surgery")):
vocabulary_file = os.path.join(self.get_temp_dir(), basename)
with open(vocabulary_file, "w") as f:
f.write("\n".join(values) + "\n")
return vocabulary_file
def test_index_to_string_table(self):
vocabulary_path = self._createVocabFile("i2f_vocab1.txt")
    # vocabulary_file may be passed as either a Python string or a tensor.
type_funcs = [str, constant_op.constant]
for type_func in type_funcs:
vocabulary_file = type_func(vocabulary_path)
with self.cached_session():
table = lookup_ops.index_to_string_table_from_file(
vocabulary_file=vocabulary_file)
features = table.lookup(
constant_op.constant([0, 1, 2, 3], dtypes.int64))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(features)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((b"brain", b"salad", b"surgery", b"UNK"),
self.evaluate(features))
def test_index_to_string_table_from_multicolumn_file(self):
vocabulary_file = self._createVocabFile(
"f2i_vocab1.txt", values=("brain\t300", "salad\t20", "surgery\t1"))
with self.cached_session():
table = lookup_ops.index_to_string_table_from_file(
vocabulary_file=vocabulary_file,
key_column_index=lookup_ops.TextFileIndex.LINE_NUMBER,
value_column_index=0)
features = table.lookup(constant_op.constant([0, 1, 2, 3], dtypes.int64))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(features)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((b"brain", b"salad", b"surgery", b"UNK"),
self.evaluate(features))
def test_index_to_string_table_from_multicolumn_file_custom_delimiter(self):
vocabulary_file = self._createVocabFile(
"f2i_vocab1.txt", values=("brain 300", "salad 20", "surgery 1"))
with self.cached_session():
table = lookup_ops.index_to_string_table_from_file(
vocabulary_file=vocabulary_file,
key_column_index=lookup_ops.TextFileIndex.LINE_NUMBER,
value_column_index=0,
delimiter=" ")
features = table.lookup(constant_op.constant([0, 1, 2, 3], dtypes.int64))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(features)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((b"brain", b"salad", b"surgery", b"UNK"),
self.evaluate(features))
def test_index_to_string_table_with_default_value(self):
default_value = b"NONE"
vocabulary_file = self._createVocabFile("f2i_vocab2.txt")
with self.cached_session():
table = lookup_ops.index_to_string_table_from_file(
vocabulary_file=vocabulary_file, default_value=default_value)
features = table.lookup(constant_op.constant([1, 2, 4], dtypes.int64))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(features)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((b"salad", b"surgery", default_value),
self.evaluate(features))
def test_index_to_string_table_with_vocab_size_too_small(self):
default_value = b"NONE"
vocabulary_file = self._createVocabFile("f2i_vocab2.txt")
with self.cached_session():
table = lookup_ops.index_to_string_table_from_file(
vocabulary_file=vocabulary_file,
vocab_size=2,
default_value=default_value)
features = table.lookup(constant_op.constant([1, 2, 4], dtypes.int64))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(features)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((b"salad", default_value, default_value),
self.evaluate(features))
def test_index_to_string_table_with_vocab_size_too_large(self):
vocabulary_file = self._createVocabFile("f2i_vocab6.txt")
with self.cached_session():
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"Invalid vocab_size"):
_ = lookup_ops.index_to_string_table_from_file(
vocabulary_file=vocabulary_file, vocab_size=4)
self.evaluate(lookup_ops.tables_initializer())
def test_index_to_string_table_with_vocab_size(self):
vocabulary_file = self._createVocabFile("f2i_vocab7.txt")
with self.cached_session():
table = lookup_ops.index_to_string_table_from_file(
vocabulary_file=vocabulary_file, vocab_size=3)
features = table.lookup(constant_op.constant([1, 2, 4], dtypes.int64))
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(features)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((b"salad", b"surgery", b"UNK"),
self.evaluate(features))
class IndexToStringTableFromTensorTest(test.TestCase):
def test_index_to_string_table_from_tensor(self):
with self.cached_session():
vocabulary_list = constant_op.constant(["brain", "salad", "surgery"])
table = lookup_ops.index_to_string_table_from_tensor(
vocabulary_list=vocabulary_list)
indices = constant_op.constant([0, 1, 2, 3], dtypes.int64)
features = table.lookup(indices)
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(features)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((b"brain", b"salad", b"surgery", b"UNK"),
self.evaluate(features))
def test_duplicate_entries(self):
with self.cached_session():
vocabulary_list = constant_op.constant(["hello", "hello"])
table = lookup_ops.index_to_string_table_from_tensor(
vocabulary_list=vocabulary_list)
indices = constant_op.constant([0, 1, 4], dtypes.int64)
features = table.lookup(indices)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((b"hello", b"hello", b"UNK"), self.evaluate(features))
def test_index_to_string_with_default_value(self):
default_value = b"NONE"
with self.cached_session():
vocabulary_list = constant_op.constant(["brain", "salad", "surgery"])
table = lookup_ops.index_to_string_table_from_tensor(
vocabulary_list=vocabulary_list, default_value=default_value)
indices = constant_op.constant([1, 2, 4], dtypes.int64)
features = table.lookup(indices)
if not context.executing_eagerly():
with self.assertRaises(errors_impl.OpError):
self.evaluate(features)
self.evaluate(lookup_ops.tables_initializer())
self.assertAllEqual((b"salad", b"surgery", default_value),
self.evaluate(features))
class IdTableWithHashBucketsTest(test.TestCase):
def _createVocabFile(self, basename, values=("brain", "salad", "surgery")):
vocabulary_file = os.path.join(self.get_temp_dir(), basename)
with open(vocabulary_file, "w") as f:
f.write("\n".join(values) + "\n")
return vocabulary_file
@test_util.run_deprecated_v1
def testStringIdTableWithHashBuckets(self):
vocab_file = self._createVocabFile("feat_to_id_1.txt")
with self.cached_session():
default_value = -1
vocab_size = 3
oov_buckets = 1
table = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), default_value),
oov_buckets)
table.initializer.run()
input_string = constant_op.constant(["brain", "salad", "surgery", "UNK"])
out = table.lookup(input_string)
self.assertAllEqual([0, 1, 2, 3], self.evaluate(out))
self.assertEqual(vocab_size + oov_buckets, table.size().eval())
@test_util.run_deprecated_v1
def testInt32IdTableWithHashBuckets(self):
vocab_file = self._createVocabFile("feat_to_id_2.txt", ("42", "1", "-1000"))
with self.cached_session():
default_value = -1
vocab_size = 3
oov_buckets = 1
table = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size, key_dtype=dtypes.int64),
default_value),
oov_buckets,
key_dtype=dtypes.int32)
table.initializer.run()
values = constant_op.constant((42, 1, -1000, 11), dtype=dtypes.int32)
out = table.lookup(values)
self.assertAllEqual([0, 1, 2, 3], self.evaluate(out))
self.assertEqual(vocab_size + oov_buckets, table.size().eval())
@test_util.run_deprecated_v1
def testInt64IdTableWithHashBuckets(self):
vocab_file = self._createVocabFile("feat_to_id_3.txt", ("42", "1", "-1000"))
with self.cached_session():
default_value = -1
vocab_size = 3
oov_buckets = 1
table = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size, key_dtype=dtypes.int64),
default_value), oov_buckets)
table.initializer.run()
values = constant_op.constant((42, 1, -1000, 11), dtype=dtypes.int64)
out = table.lookup(values)
self.assertAllEqual([0, 1, 2, 3], self.evaluate(out))
self.assertEqual(vocab_size + oov_buckets, table.size().eval())
@test_util.run_deprecated_v1
def testStringIdTableWithOnlyHashBucket(self):
with self.cached_session():
oov_buckets = 5
      # Create a table that uses only hash buckets: for each input value it
      # returns an id computed as fingerprint(input) mod oov_buckets.
table = lookup_ops.IdTableWithHashBuckets(None, oov_buckets)
table.initializer.run()
values = constant_op.constant(("brain", "salad", "surgery"))
out = table.lookup(values)
self.assertAllEqual(
[
3, # fingerprint("brain") mod 5.
1, # fingerprint("salad") mod 5.
4 # fingerprint("surgery") mod 5
],
self.evaluate(out))
self.assertEqual(oov_buckets, table.size().eval())
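  # A rough sketch of the bucket assignment asserted above, assuming a
  # hypothetical fingerprint64() helper standing in for TensorFlow's string
  # fingerprint kernel:
  #
  #   def oov_bucket(key, num_buckets):
  #     return fingerprint64(str(key)) % num_buckets  # e.g. "brain" -> 3 of 5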
@test_util.run_deprecated_v1
def testInt32IdTableWithOnlyHashBucket(self):
with self.cached_session():
oov_buckets = 5
      # Create a table that uses only hash buckets: for each input value it
      # returns an id computed as fingerprint(input) mod oov_buckets.
table = lookup_ops.IdTableWithHashBuckets(
None, oov_buckets, key_dtype=dtypes.int32)
table.initializer.run()
input_string = constant_op.constant([42, 1, -1000], dtype=dtypes.int32)
out = table.lookup(input_string)
self.assertAllEqual(
[
1, # fingerprint("42") mod 5.
4, # fingerprint("1") mod 5.
2 # fingerprint("-1000") mod 5
],
self.evaluate(out))
self.assertEqual(oov_buckets, table.size().eval())
def testFloat64IdTableWithOnlyHashBucket(self):
with self.cached_session():
      with self.assertRaisesRegex(TypeError, "Invalid key_dtype"):
lookup_ops.IdTableWithHashBuckets(
None, num_oov_buckets=5, key_dtype=dtypes.float64)
def testBoolIdTableWithOnlyHashBucket(self):
with self.cached_session():
      with self.assertRaisesRegex(TypeError, "Invalid key_dtype"):
lookup_ops.IdTableWithHashBuckets(
None, num_oov_buckets=5, key_dtype=dtypes.bool)
@test_util.run_deprecated_v1
def testIdTableWithHashBucketsWithMultipleInitializers(self):
vocab_file = self._createVocabFile("feat_to_id_4.txt")
with self.cached_session() as sess:
default_value = -1
vocab_size = 3
oov_buckets = 3
vocab_table = lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), default_value)
table1 = lookup_ops.IdTableWithHashBuckets(
vocab_table,
oov_buckets,
hasher_spec=lookup_ops.FastHashSpec,
name="table1")
table2 = lookup_ops.IdTableWithHashBuckets(
vocab_table,
oov_buckets,
hasher_spec=lookup_ops.StrongHashSpec((1, 2)),
name="table2")
lookup_ops.tables_initializer().run()
input_string = constant_op.constant(
["fruit", "brain", "salad", "surgery", "UNK"])
out1 = table1.lookup(input_string)
out2 = table2.lookup(input_string)
out1, out2 = self.evaluate([out1, out2])
self.assertAllEqual([5, 0, 1, 2, 5], out1)
self.assertAllEqual([5, 0, 1, 2, 3], out2)
self.assertEqual(vocab_size + oov_buckets, table1.size().eval())
self.assertEqual(vocab_size + oov_buckets, table2.size().eval())
test_util.assert_ops_in_graph({
"table1_Lookup/hash_bucket": "StringToHashBucketFast",
"table2_Lookup/hash_bucket": "StringToHashBucketStrong",
}, sess.graph)
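  # The two hasher specs above lower to different kernels: FastHashSpec maps
  # to StringToHashBucketFast, while StrongHashSpec((1, 2)) seeds the keyed
  # StringToHashBucketStrong op, which is why assert_ops_in_graph can tell
  # table1's lookup apart from table2's.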
@test_util.run_deprecated_v1
def testIdTableWithHashBucketsInitializationAcrossSessions(self):
vocab_file = self._createVocabFile("feat_to_id_5.txt")
with self.cached_session():
default_value = -1
vocab_size = 3
oov_buckets = 1
table1 = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), default_value),
oov_buckets)
table1.initializer.run()
input_string_1 = constant_op.constant(
["brain", "salad", "surgery", "UNK"])
out1 = table1.lookup(input_string_1)
self.assertAllEqual([0, 1, 2, 3], self.evaluate(out1))
self.assertEqual(vocab_size + oov_buckets, table1.size().eval())
with self.cached_session():
default_value = -1
vocab_size = 3
oov_buckets = 1
      # The underlying lookup table was already initialized in the previous
      # session, so there is no need to call table2.initializer.run().
table2 = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), default_value),
oov_buckets)
input_string_2 = constant_op.constant(["fruit", "salad", "UNK"])
out2 = table2.lookup(input_string_2)
self.assertAllEqual([3, 1, 3], self.evaluate(out2))
self.assertEqual(vocab_size + oov_buckets, table2.size().eval())
@test_util.run_deprecated_v1
def testIdTableWithHashBucketsWithMultipleInitializersDifferentDefault(self):
vocab_file = self._createVocabFile("feat_to_id_6.txt")
with self.cached_session() as sess:
default_value1 = -1
vocab_size = 3
oov_buckets = 0
table1 = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), default_value1),
oov_buckets)
default_value2 = -2
table2 = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), default_value2),
oov_buckets)
lookup_ops.tables_initializer().run()
input_string_1 = constant_op.constant(
["brain", "salad", "surgery", "UNK"])
input_string_2 = constant_op.constant(["fruit", "salad", "UNK"])
out1 = table1.lookup(input_string_1)
out2 = table2.lookup(input_string_2)
out1, out2 = self.evaluate([out1, out2])
self.assertAllEqual([0, 1, 2, -1], out1)
self.assertAllEqual([-2, 1, -2], out2)
self.assertEqual(vocab_size + oov_buckets, table1.size().eval())
self.assertEqual(vocab_size + oov_buckets, table2.size().eval())
@test_util.run_deprecated_v1
def testSparseTensor(self):
vocab_file = self._createVocabFile("feat_to_id_7.txt")
input_indices = [[0, 0], [0, 1], [2, 0], [2, 2], [3, 0]]
input_shape = [4, 4]
with self.cached_session() as sess:
sp_features = sparse_tensor.SparseTensor(
constant_op.constant(input_indices, dtypes.int64),
constant_op.constant(["brain", "salad", "brain", "surgery", "tarkus"],
dtypes.string),
constant_op.constant(input_shape, dtypes.int64))
table = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(vocab_file, vocab_size=3),
-1), 1)
table.initializer.run()
sp_ids = table.lookup(sp_features)
self.assertAllEqual([5], sp_ids.values._shape_as_list())
sp_ids_ind, sp_ids_val, sp_ids_shape = sess.run(
[sp_ids.indices, sp_ids.values, sp_ids.dense_shape])
self.assertAllEqual(input_indices, sp_ids_ind)
self.assertAllEqual([0, 1, 0, 2, 3], sp_ids_val)
self.assertAllEqual(input_shape, sp_ids_shape)
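  # Looking up a SparseTensor transforms only its values; the indices and
  # dense_shape pass through unchanged, which is exactly what the three
  # assertions above verify.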
@test_util.run_deprecated_v1
def testInt32SparseTensor(self):
input_indices = [[0, 0], [0, 1], [2, 0], [2, 2], [3, 0]]
input_shape = [4, 4]
with self.cached_session() as sess:
sp_features = sparse_tensor.SparseTensor(
constant_op.constant(input_indices, dtypes.int64),
constant_op.constant([42, 1, 42, -1000, 11], dtypes.int32),
constant_op.constant(input_shape, dtypes.int64))
table = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.KeyValueTensorInitializer(
(42, 1, -1000), (0, 1, 2), dtypes.int64, dtypes.int64), -1),
1,
key_dtype=dtypes.int32)
table.initializer.run()
sp_ids = table.lookup(sp_features)
self.assertAllEqual([5], sp_ids.values._shape_as_list())
sp_ids_ind, sp_ids_val, sp_ids_shape = sess.run(
[sp_ids.indices, sp_ids.values, sp_ids.dense_shape])
self.assertAllEqual(input_indices, sp_ids_ind)
self.assertAllEqual([0, 1, 0, 2, 3], sp_ids_val)
self.assertAllEqual(input_shape, sp_ids_shape)
@test_util.run_deprecated_v1
def testInt64SparseTensor(self):
input_indices = [[0, 0], [0, 1], [2, 0], [2, 2], [3, 0]]
input_shape = [4, 4]
with self.cached_session() as sess:
sp_features = sparse_tensor.SparseTensor(
constant_op.constant(input_indices, dtypes.int64),
constant_op.constant([42, 1, 42, -1000, 11], dtypes.int64),
constant_op.constant(input_shape, dtypes.int64))
table = lookup_ops.IdTableWithHashBuckets(
lookup_ops.StaticHashTable(
lookup_ops.KeyValueTensorInitializer(
(42, 1, -1000), (0, 1, 2), dtypes.int64, dtypes.int64), -1),
1,
key_dtype=dtypes.int64)
table.initializer.run()
sp_ids = table.lookup(sp_features)
self.assertAllEqual([5], sp_ids.values._shape_as_list())
sp_ids_ind, sp_ids_val, sp_ids_shape = sess.run(
[sp_ids.indices, sp_ids.values, sp_ids.dense_shape])
self.assertAllEqual(input_indices, sp_ids_ind)
self.assertAllEqual([0, 1, 0, 2, 3], sp_ids_val)
self.assertAllEqual(input_shape, sp_ids_shape)
def testIdTableWithHashBucketsWithInvalidHashers(self):
vocab_file = self._createVocabFile("feat_to_id_4.txt")
with self.cached_session():
default_value = -1
vocab_size = 3
oov_buckets = 1
lookup_table = lookup_ops.StaticHashTable(
lookup_ops.TextFileIdTableInitializer(
vocab_file, vocab_size=vocab_size), default_value)
with self.assertRaises(TypeError):
lookup_ops.IdTableWithHashBuckets(
lookup_table, oov_buckets, hasher_spec=1)
table = lookup_ops.IdTableWithHashBuckets(
lookup_table,
oov_buckets,
hasher_spec=lookup_ops.HasherSpec("my-awesome-hash", None))
input_string = constant_op.constant(["brain", "salad", "surgery", "UNK"])
with self.assertRaises(ValueError):
table.lookup(input_string)
with self.assertRaises(ValueError):
table = lookup_ops.IdTableWithHashBuckets(
lookup_table,
oov_buckets,
hasher_spec=lookup_ops.StrongHashSpec([]))
with self.assertRaises(ValueError):
table = lookup_ops.IdTableWithHashBuckets(
lookup_table,
oov_buckets,
hasher_spec=lookup_ops.StrongHashSpec([1, 2, 3]))
with self.assertRaises(TypeError):
table = lookup_ops.IdTableWithHashBuckets(
lookup_table,
oov_buckets,
hasher_spec=lookup_ops.StrongHashSpec([None, 2]))
def testIdTableWithHashBucketsNoInnerTable(self):
with self.cached_session():
table = lookup_ops.IdTableWithHashBuckets(None, num_oov_buckets=1)
self.assertIsNone(table.resource_handle)
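# A rough sketch of the id layout the tests above assert, assuming vocab_id
# is the inner table's lookup result (its default_value, here -1, for missing
# keys) and hash_fn is the configured hasher; this covers only the
# oov_buckets > 0 case:
#
#   def combined_id(vocab_id, key, vocab_size, oov_buckets):
#     if vocab_id >= 0:
#       return vocab_id                               # in-vocab: [0, vocab_size)
#     return vocab_size + hash_fn(key) % oov_buckets  # OOV: [vocab_size, vocab_size + oov_buckets)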
class MutableHashTableOpTest(test.TestCase):
def testMutableHashTable(self):
with self.cached_session():
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery", "tarkus"])
values = constant_op.constant([0, 1, 2, 3], dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(4, self.evaluate(table.size()))
remove_string = constant_op.constant(["tarkus", "tank"])
self.evaluate(table.remove(remove_string))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
self.assertAllEqual([3], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
exported_keys, exported_values = table.export()
      # Exported data follows the internal map's order, which is undefined,
      # so sort it before comparing.
sorted_keys = np.sort(self.evaluate(exported_keys))
sorted_values = np.sort(self.evaluate(exported_values))
self.assertAllEqual([b"brain", b"salad", b"surgery"], sorted_keys)
self.assertAllEqual([0, 1, 2], sorted_values)
@test_util.run_v1_only("SaverV1")
def testSaveRestore(self):
save_dir = os.path.join(self.get_temp_dir(), "save_restore")
save_path = os.path.join(tempfile.mkdtemp(prefix=save_dir), "hash")
with self.session(graph=ops.Graph()) as sess:
v0 = variables.Variable(10.0, name="v0")
v1 = variables.Variable(20.0, name="v1")
default_val = -1
keys = constant_op.constant(["b", "c", "d"], dtypes.string)
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.MutableHashTable(
dtypes.string, dtypes.int64, default_val, name="t1", checkpoint=True)
save = saver.Saver()
self.evaluate(variables.global_variables_initializer())
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
val = save.save(sess, save_path)
self.assertIsInstance(val, six.string_types)
self.assertEqual(save_path, val)
with self.session(graph=ops.Graph()) as sess:
v0 = variables.Variable(-1.0, name="v0")
v1 = variables.Variable(-1.0, name="v1")
default_val = -1
table = lookup_ops.MutableHashTable(
dtypes.string, dtypes.int64, default_val, name="t1", checkpoint=True)
self.evaluate(
table.insert(
constant_op.constant(["a", "c"], dtypes.string),
constant_op.constant([12, 24], dtypes.int64)))
self.assertAllEqual(2, self.evaluate(table.size()))
save = saver.Saver()
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path)
# Check that the parameter nodes have been restored.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["a", "b", "c", "d", "e"],
dtypes.string)
output = table.lookup(input_string)
self.assertAllEqual([-1, 0, 1, 2, -1], self.evaluate(output))
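  # Constructing the table with checkpoint=True registers its contents as a
  # saveable object, which is what lets the plain saver.Saver() above
  # round-trip the three inserted entries alongside v0 and v1.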
@test_util.run_v1_only("SaverV1")
def testSaveRestoreOnlyTable(self):
save_dir = os.path.join(self.get_temp_dir(), "save_restore")
save_path = os.path.join(tempfile.mkdtemp(prefix=save_dir), "hash")
with self.session(graph=ops.Graph()) as sess:
v0 = variables.Variable(10.0, name="v0")
v1 = variables.Variable(20.0, name="v1")
default_val = -1
keys = constant_op.constant(["b", "c", "d"], dtypes.string)
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.MutableHashTable(
dtypes.string, dtypes.int64, default_val, name="t1", checkpoint=True)
save = saver.Saver([table])
self.evaluate(variables.global_variables_initializer())
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
val = save.save(sess, save_path)
self.assertIsInstance(val, six.string_types)
self.assertEqual(save_path, val)
with self.session(graph=ops.Graph()) as sess:
default_val = -1
table = lookup_ops.MutableHashTable(
dtypes.string, dtypes.int64, default_val, name="t1", checkpoint=True)
self.evaluate(
table.insert(
constant_op.constant(["a", "c"], dtypes.string),
constant_op.constant([12, 24], dtypes.int64)))
self.assertAllEqual(2, self.evaluate(table.size()))
save = saver.Saver([table])
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path)
# Check that the parameter nodes have been restored.
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["a", "b", "c", "d", "e"],
dtypes.string)
output = table.lookup(input_string)
self.assertAllEqual([-1, 0, 1, 2, -1], self.evaluate(output))
@test_util.run_in_graph_and_eager_modes
def testObjectSaveRestore(self):
save_dir = os.path.join(self.get_temp_dir(), "save_restore")
save_prefix = os.path.join(tempfile.mkdtemp(prefix=save_dir), "hash")
v0 = variables.Variable(10.0, name="v0")
v1 = variables.Variable(20.0, name="v1")
default_val = -1
keys = constant_op.constant(["b", "c", "d"], dtypes.string)
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.MutableHashTable(
dtypes.string, dtypes.int64, default_val, name="t1", checkpoint=True)
checkpoint = trackable.Checkpoint(table=table, v0=v0, v1=v1)
self.evaluate([v0.initializer, v1.initializer])
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
save_path = checkpoint.save(save_prefix)
del table, checkpoint, v0, v1
v0 = variables.Variable(-1.0, name="v0")
v1 = variables.Variable(-1.0, name="v1")
default_val = -1
table = lookup_ops.MutableHashTable(
dtypes.string, dtypes.int64, default_val, name="t1", checkpoint=True)
self.evaluate(
table.insert(
constant_op.constant(["a", "c"], dtypes.string),
constant_op.constant([12, 24], dtypes.int64)))
self.assertAllEqual(2, self.evaluate(table.size()))
checkpoint = trackable.Checkpoint(table=table, v0=v0, v1=v1)
# Restore the saved values in the parameter nodes.
checkpoint.restore(save_path).run_restore_ops()
# Check that the parameter nodes have been restored.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["a", "b", "c", "d", "e"],
dtypes.string)
output = table.lookup(input_string)
self.assertAllEqual([-1, 0, 1, 2, -1], self.evaluate(output))
@test_util.run_v1_only("Multiple sessions")
def testSharing(self):
# Start a server to store the table state
server = server_lib.Server({"local0": ["localhost:0"]},
protocol="grpc",
start=True)
# Create two sessions sharing the same state
session1 = session.Session(server.target)
session2 = session.Session(server.target)
table = lookup_ops.MutableHashTable(
dtypes.int64, dtypes.string, "-", name="t1")
# Populate the table in the first session
with session1:
self.assertAllEqual(0, table.size().eval())
keys = constant_op.constant([11, 12], dtypes.int64)
values = constant_op.constant(["a", "b"])
table.insert(keys, values).run()
self.assertAllEqual(2, table.size().eval())
output = table.lookup(constant_op.constant([11, 12, 13], dtypes.int64))
self.assertAllEqual([b"a", b"b", b"-"], output.eval())
# Verify that we can access the shared data from the second session
with session2:
self.assertAllEqual(2, table.size().eval())
output = table.lookup(constant_op.constant([10, 11, 12], dtypes.int64))
self.assertAllEqual([b"-", b"a", b"b"], output.eval())
def testMutableHashTableOfTensors(self):
with self.cached_session():
default_val = constant_op.constant([-1, -1], dtypes.int64)
keys = constant_op.constant(["brain", "salad", "surgery", "tarkus"])
values = constant_op.constant([[0, 1], [2, 3], [4, 5], [6, 7]],
dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(4, self.evaluate(table.size()))
remove_string = constant_op.constant(["tarkus", "tank"])
self.evaluate(table.remove(remove_string))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
self.assertAllEqual([3, 2], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual([[0, 1], [2, 3], [-1, -1]], result)
exported_keys, exported_values = table.export()
      # Exported data follows the internal map's order, which is undefined,
      # so sort it before comparing.
sorted_keys = np.sort(self.evaluate(exported_keys))
sorted_values = np.sort(self.evaluate(exported_values), axis=0)
self.assertAllEqual([b"brain", b"salad", b"surgery"], sorted_keys)
sorted_expected_values = np.sort([[4, 5], [2, 3], [0, 1]], axis=0)
self.assertAllEqual(sorted_expected_values, sorted_values)
def testMutableHashTableExportInsert(self):
with self.cached_session():
default_val = constant_op.constant([-1, -1], dtypes.int64)
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([[0, 1], [2, 3], [4, 5]], dtypes.int64)
table1 = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.assertAllEqual(0, self.evaluate(table1.size()))
self.evaluate(table1.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table1.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
expected_output = [[0, 1], [2, 3], [-1, -1]]
output1 = table1.lookup(input_string)
self.assertAllEqual(expected_output, self.evaluate(output1))
exported_keys, exported_values = table1.export()
self.assertAllEqual(3, self.evaluate(exported_keys).size)
self.assertAllEqual(6, self.evaluate(exported_values).size)
# Populate a second table from the exported data
table2 = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.assertAllEqual(0, self.evaluate(table2.size()))
self.evaluate(table2.insert(exported_keys, exported_values))
self.assertAllEqual(3, self.evaluate(table2.size()))
# Verify lookup result is still the same
output2 = table2.lookup(input_string)
self.assertAllEqual(expected_output, self.evaluate(output2))
def testMutableHashTableOfTensorsInvalidShape(self):
with self.cached_session():
default_val = constant_op.constant([-1, -1], dtypes.int64)
keys = constant_op.constant(["brain", "salad", "surgery"])
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
# Shape [6] instead of [3, 2]
values = constant_op.constant([0, 1, 2, 3, 4, 5], dtypes.int64)
with self.assertRaisesOpError("Expected shape"):
self.evaluate(table.insert(keys, values))
      # Shape [2, 3] instead of [3, 2]
values = constant_op.constant([[0, 1, 2], [3, 4, 5]], dtypes.int64)
with self.assertRaisesOpError("Expected shape"):
self.evaluate(table.insert(keys, values))
# Shape [2, 2] instead of [3, 2]
values = constant_op.constant([[0, 1], [2, 3]], dtypes.int64)
with self.assertRaisesOpError("Expected shape"):
self.evaluate(table.insert(keys, values))
# Shape [3, 1] instead of [3, 2]
values = constant_op.constant([[0], [2], [4]], dtypes.int64)
with self.assertRaisesOpError("Expected shape"):
self.evaluate(table.insert(keys, values))
      # Valid insert
values = constant_op.constant([[0, 1], [2, 3], [4, 5]], dtypes.int64)
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
def testMutableHashTableInvalidDefaultValue(self):
with self.cached_session():
default_val = constant_op.constant([[-1, -1]], dtypes.int64)
with self.assertRaisesOpError("Default value must be a vector"):
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.assertAllEqual(0, self.evaluate(table.size()))
def testMutableHashTableDuplicateInsert(self):
with self.cached_session():
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery", "brain"])
values = constant_op.constant([0, 1, 2, 3], dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([3, 1, -1], result)
def testMutableHashTableFindHighRank(self):
with self.cached_session():
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant([["brain", "salad"],
["tank", "tarkus"]])
output = table.lookup(input_string)
self.assertAllEqual([2, 2], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual([[0, 1], [-1, -1]], result)
def testMutableHashTableInsertHighRank(self):
with self.cached_session():
default_val = -1
keys = constant_op.constant([["brain", "salad"], ["surgery", "tank"]])
values = constant_op.constant([[0, 1], [2, 3]], dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.evaluate(table.insert(keys, values))
self.assertAllEqual(4, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank", "tarkus"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([0, 1, 3, -1], result)
def testMutableHashTableRemoveHighRank(self):
    with self.cached_session():
default_val = -1
keys = constant_op.constant([["brain", "salad"], ["surgery", "tank"]])
values = constant_op.constant([[0, 1], [2, 3]], dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.evaluate(table.insert(keys, values))
self.assertAllEqual(4, self.evaluate(table.size()))
remove_string = constant_op.constant(["salad", "tarkus"])
self.evaluate(table.remove(remove_string))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank", "tarkus"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([0, -1, 3, -1], result)
def testMutableHashTableOfTensorsFindHighRank(self):
with self.cached_session():
default_val = constant_op.constant([-1, -1, -1], dtypes.int64)
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([[0, 1, 2], [2, 3, 4], [4, 5, 6]],
dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant([["brain", "salad"],
["tank", "tarkus"]])
output = table.lookup(input_string)
self.assertAllEqual([2, 2, 3], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual(
[[[0, 1, 2], [2, 3, 4]], [[-1, -1, -1], [-1, -1, -1]]], result)
def testMutableHashTableOfTensorsRemoveHighRank(self):
    with self.cached_session():
default_val = constant_op.constant([-1, -1, -1], dtypes.int64)
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([[0, 1, 2], [2, 3, 4], [4, 5, 6]],
dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
remove_string = constant_op.constant([["brain", "tank"]])
self.evaluate(table.remove(remove_string))
self.assertAllEqual(2, self.evaluate(table.size()))
input_string = constant_op.constant([["brain", "salad"],
["surgery", "tank"]])
output = table.lookup(input_string)
self.assertAllEqual([2, 2, 3], output.get_shape())
result = self.evaluate(output)
self.assertAllEqual(
[[[-1, -1, -1], [2, 3, 4]], [[4, 5, 6], [-1, -1, -1]]], result)
def testMultipleMutableHashTables(self):
with self.cached_session():
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table1 = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
table2 = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
table3 = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.evaluate(table1.insert(keys, values))
self.evaluate(table2.insert(keys, values))
self.evaluate(table3.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table1.size()))
self.assertAllEqual(3, self.evaluate(table2.size()))
self.assertAllEqual(3, self.evaluate(table3.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output1 = table1.lookup(input_string)
output2 = table2.lookup(input_string)
output3 = table3.lookup(input_string)
out1, out2, out3 = self.evaluate([output1, output2, output3])
self.assertAllEqual([0, 1, -1], out1)
self.assertAllEqual([0, 1, -1], out2)
self.assertAllEqual([0, 1, -1], out3)
def testMutableHashTableWithTensorDefault(self):
with self.cached_session():
default_val = constant_op.constant(-1, dtypes.int64)
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual([0, 1, -1], result)
def testSignatureMismatch(self):
with self.cached_session():
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
# insert with keys of the wrong type
with self.assertRaises(ValueError):
self.evaluate(table.insert(constant_op.constant([4, 5, 6]), values))
# insert with values of the wrong type
with self.assertRaises(ValueError):
self.evaluate(table.insert(keys, constant_op.constant(["a", "b", "c"])))
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string_ref = variables.Variable("brain")
input_int64_ref = variables.Variable(-1, dtype=dtypes.int64)
self.evaluate(variables.global_variables_initializer())
# Ref types do not produce an insert signature mismatch.
self.evaluate(table.insert(input_string_ref, input_int64_ref))
self.assertAllEqual(3, self.evaluate(table.size()))
# Ref types do not produce a lookup signature mismatch.
self.assertEqual(-1, self.evaluate(table.lookup(input_string_ref)))
# lookup with keys of the wrong type
input_string = constant_op.constant([1, 2, 3], dtypes.int64)
with self.assertRaises(ValueError):
self.evaluate(table.lookup(input_string))
# default value of the wrong type
with self.assertRaises(TypeError):
lookup_ops.MutableHashTable(dtypes.string, dtypes.int64, "UNK")
def testMutableHashTableStringFloat(self):
with self.cached_session():
default_val = -1.5
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1.1, 2.2], dtypes.float32)
table = lookup_ops.MutableHashTable(dtypes.string, dtypes.float32,
default_val)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant(["brain", "salad", "tank"])
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllClose([0, 1.1, default_val], result)
def testMutableHashTableIntFloat(self):
with self.cached_session():
default_val = -1.0
keys = constant_op.constant([3, 7, 0], dtypes.int64)
values = constant_op.constant([7.5, -1.2, 9.9], dtypes.float32)
table = lookup_ops.MutableHashTable(dtypes.int64, dtypes.float32,
default_val)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant([7, 0, 11], dtypes.int64)
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllClose([-1.2, 9.9, default_val], result)
def testMutableHashTableInt64String(self):
with self.cached_session():
default_val = "n/a"
keys = constant_op.constant([0, 1, 2], dtypes.int64)
values = constant_op.constant(["brain", "salad", "surgery"])
table = lookup_ops.MutableHashTable(dtypes.int64, dtypes.string,
default_val)
self.assertAllEqual(0, self.evaluate(table.size()))
self.evaluate(table.insert(keys, values))
self.assertAllEqual(3, self.evaluate(table.size()))
input_string = constant_op.constant([0, 1, 3], dtypes.int64)
output = table.lookup(input_string)
result = self.evaluate(output)
self.assertAllEqual((b"brain", b"salad", b"n/a"), result)
class MutableHashTableBenchmark(test.Benchmark):
def _create_table(self):
return lookup_ops.MutableHashTable(dtypes.int64, dtypes.float32, 0.0)
def benchmark_single_repeated_scalar_insert_scalar(self):
table = self._create_table()
value = variables.Variable(1.0)
insert = table.insert(0, value)
size = table.size()
with session.Session() as sess:
sess.run(value.initializer)
self.run_op_benchmark(sess, insert, burn_iters=10, min_iters=10000)
assert sess.run(size) == 1
def benchmark_many_repeated_scalar_insert_scalar(self):
table = self._create_table()
c = dataset_ops.make_one_shot_iterator(counter.Counter()).get_next()
value = variables.Variable(1.0)
insert = table.insert(c, value)
size = table.size()
with session.Session() as sess:
sess.run(value.initializer)
self.run_op_benchmark(sess, insert, burn_iters=10, min_iters=10000)
assert sess.run(size) >= 10000
def benchmark_single_repeated_batch_32_insert_scalar(self):
table = self._create_table()
value = variables.Variable([1.0] * 32)
insert = table.insert(list(range(32)), value)
size = table.size()
with session.Session() as sess:
sess.run(value.initializer)
self.run_op_benchmark(sess, insert, burn_iters=10, min_iters=1000)
assert sess.run(size) == 32
def benchmark_many_repeated_batch_32_insert_scalar(self):
table = self._create_table()
c = dataset_ops.make_one_shot_iterator(counter.Counter()).get_next()
value = variables.Variable([1.0] * 32)
insert = table.insert(32 * c + list(range(32)), value)
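    # c is a scalar int64 counter, so 32 * c + list(range(32)) broadcasts to a
    # fresh batch of 32 distinct keys on each run, which is why the final size
    # check expects at least 1000 * 32 entries.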
size = table.size()
with session.Session() as sess:
sess.run(value.initializer)
self.run_op_benchmark(sess, insert, burn_iters=10, min_iters=1000)
assert sess.run(size) >= 1000 * 32
class DenseHashTableBenchmark(MutableHashTableBenchmark):
def _create_table(self):
return lookup_ops.DenseHashTable(
dtypes.int64,
dtypes.float32,
default_value=0.0,
empty_key=-1,
deleted_key=-2)
if __name__ == "__main__":
test.main()
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Tests for SearchIO BlastIO plain text parsers."""
import os
import unittest
from Bio import BiopythonExperimentalWarning
import warnings
with warnings.catch_warnings():
warnings.simplefilter('ignore', BiopythonExperimentalWarning)
from Bio.SearchIO import parse
# test case files are in the Blast directory
TEST_DIR = 'Blast'
FMT = 'blast-text'
def get_file(filename):
"""Returns the path of a test file."""
return os.path.join(TEST_DIR, filename)
class BaseBlastCases(unittest.TestCase):
def check_common_attrs(self, qresults):
# check common attributes
for qresult in qresults:
for hit in qresult:
self.assertEqual(qresult.id, hit.query_id)
for hsp in hit:
self.assertEqual(hit.id, hsp.hit_id)
self.assertEqual(qresult.id, hsp.query_id)
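# SearchIO results form a QueryResult > Hit > HSP hierarchy, so the tests
# below index into them as, e.g.:
#
#   qresult[0]          # first Hit for the query
#   qresult[0].hsps[0]  # first HSP of that hit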
class BlastnCases(BaseBlastCases):
def test_text_2226_blastn_001(self):
"""Test parsing blastn output (text_2226_blastn_001.txt)"""
blast_file = get_file('text_2226_blastn_001.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(128, qresult.seq_len)
self.assertEqual('NCBI Transcript Reference Sequences', qresult.target)
self.assertEqual('blastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))
def test_text_2226_blastn_002(self):
"""Test parsing blastn output (text_2226_blastn_002.txt)"""
blast_file = get_file('text_2226_blastn_002.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('gi|356995852:1-490', qresult.id)
self.assertEqual('Mus musculus POU domain, class 5, transcriptionfactor 1 (Pou5f1), transcript variant 1, mRNA', qresult.description)
self.assertEqual(490, qresult.seq_len)
self.assertEqual('NCBI Transcript Reference Sequences', qresult.target)
self.assertEqual('blastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(8, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|NM_013633.3|', hit.id)
self.assertEqual('Mus musculus POU domain, class 5, transcription factor 1 (Pou5f1), transcript variant 1, mRNA', hit.description)
self.assertEqual(1353, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(490, hsp.aln_span)
self.assertEqual(0.0, hsp.evalue)
self.assertEqual(905.0, hsp.bitscore)
self.assertEqual(490.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(490, hsp.ident_num)
self.assertEqual(490, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(490, hsp.query_end)
self.assertEqual(490, hsp.hit_end)
self.assertEqual('GAGGTGAAACCGTCCCTAGGTGAGCCGTCTTTCCACCAGG', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('GAGGTGAAACCGTCCCTAGGTGAGCCGTCTTTCCACCAGG', str(hsp.hit.seq)[:40])
self.assertEqual('AGTCCCAGGACATGAAAGCCCTGCAGAAGGAGCTAGAACA', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('AGTCCCAGGACATGAAAGCCCTGCAGAAGGAGCTAGAACA', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|XR_141831.1|', hit.id)
self.assertEqual('PREDICTED: Mus musculus predicted gene, 19553 (Gm19553), miscRNA ref|XR_105837.2| PREDICTED: Mus musculus predicted gene, 19553 (Gm19553), miscRNA ref|XR_141464.1| PREDICTED: Mus musculus predicted gene, 19553 (Gm19553), miscRNA ref|XR_141446.1| PREDICTED: Mus musculus predicted gene, 19553 (Gm19553), miscRNA', hit.description)
self.assertEqual(570, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(490, hsp.aln_span)
self.assertEqual(0.0, hsp.evalue)
self.assertEqual(900.0, hsp.bitscore)
self.assertEqual(487.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(489, hsp.ident_num)
self.assertEqual(490, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(80, hsp.hit_start)
self.assertEqual(490, hsp.query_end)
self.assertEqual(570, hsp.hit_end)
self.assertEqual('GAGGTGAAACCGTCCCTAGGTGAGCCGTCTTTCCACCAGG', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('GAGGTGAAACCGTCCCTAGGTGAGCCGTCTTTCCACCAGG', str(hsp.hit.seq)[:40])
self.assertEqual('AGTCCCAGGACATGAAAGCCCTGCAGAAGGAGCTAGAACA', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('AGTCCCAGGACATGAAAGCCCTGCAGAAGGAGCTAGAACA', str(hsp.hit.seq)[-40:])
def test_text_2226_blastn_003(self):
"""Test parsing blastn output (text_2226_blastn_003.txt)"""
blast_file = get_file('text_2226_blastn_003.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('hg19_dna', qresult.id)
self.assertEqual('range=chr1:1207307-1207372 5\'pad=0 3\'pad=0 strand=+repeatMasking=none', qresult.description)
self.assertEqual(66, qresult.seq_len)
self.assertEqual('NCBI Transcript Reference Sequences', qresult.target)
self.assertEqual('blastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(10, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|XM_003267724.1|', hit.id)
self.assertEqual('PREDICTED: Nomascus leucogenys ATG14 autophagy related 14 homolog (S. cerevisiae) (ATG14), mRNA', hit.description)
self.assertEqual(4771, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(62, hsp.aln_span)
self.assertEqual(3e-24, hsp.evalue)
self.assertEqual(115.0, hsp.bitscore)
self.assertEqual(62.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(62, hsp.ident_num)
self.assertEqual(62, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(4, hsp.query_start)
self.assertEqual(2864, hsp.hit_start)
self.assertEqual(66, hsp.query_end)
self.assertEqual(2926, hsp.hit_end)
self.assertEqual('GCCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCG', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('GCCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCG', str(hsp.hit.seq)[:40])
self.assertEqual('AACAAGAGCGAAACTCCGTCTCaaaaaaaaaaaaaaaaaa', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('AACAAGAGCGAAACTCCGTCTCAAAAAAAAAAAAAAAAAA', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|NM_001040441.1|', hit.id)
self.assertEqual('Homo sapiens zinc finger and BTB domain containing 8A (ZBTB8A), mRNA', hit.description)
self.assertEqual(7333, hit.seq_len)
self.assertEqual(2, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(62, hsp.aln_span)
self.assertEqual(3e-24, hsp.evalue)
self.assertEqual(115.0, hsp.bitscore)
self.assertEqual(62.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(62, hsp.ident_num)
self.assertEqual(62, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(4, hsp.query_start)
self.assertEqual(3676, hsp.hit_start)
self.assertEqual(66, hsp.query_end)
self.assertEqual(3738, hsp.hit_end)
self.assertEqual('GCCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCG', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('GCCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCG', str(hsp.hit.seq)[:40])
self.assertEqual('AACAAGAGCGAAACTCCGTCTCaaaaaaaaaaaaaaaaaa', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('AACAAGAGCGAAACTCCGTCTCAAAAAAAAAAAAAAAAAA', str(hsp.hit.seq)[-40:])
# first qresult, second hit, second hsp
hsp = qresult[1].hsps[1]
self.assertEqual(53, hsp.aln_span)
self.assertEqual(3e-19, hsp.evalue)
self.assertEqual(99.0, hsp.bitscore)
self.assertEqual(53.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(53, hsp.ident_num)
self.assertEqual(53, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(5, hsp.query_start)
self.assertEqual(2823, hsp.hit_start)
self.assertEqual(58, hsp.query_end)
self.assertEqual(2876, hsp.hit_end)
self.assertEqual('CCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCGT', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('CCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCGT', str(hsp.hit.seq)[:40])
self.assertEqual('GCCTGGGCAACAAGAGCGAAACTCCGTCTCaaaaaaaaaa', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('GCCTGGGCAACAAGAGCGAAACTCCGTCTCAAAAAAAAAA', str(hsp.hit.seq)[-40:])
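        # Note that SearchIO normalizes BLAST's 1-based, inclusive plain-text
        # coordinates to Python-style 0-based, half-open ranges, which is why
        # the hit_start values above are one less than the raw report's start
        # columns.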
def test_text_2226_blastn_004(self):
"""Test parsing blastn output (text_2226_blastn_004.txt)"""
blast_file = get_file('text_2226_blastn_004.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(3, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(128, qresult.seq_len)
self.assertEqual('minirefseq_mrna', qresult.target)
self.assertEqual('blastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))
# test second qresult
qresult = qresults[1]
self.assertEqual('gi|356995852:1-490', qresult.id)
self.assertEqual('Mus musculus POU domain, class 5, transcriptionfactor 1 (Pou5f1), transcript variant 1, mRNA', qresult.description)
self.assertEqual(490, qresult.seq_len)
self.assertEqual('minirefseq_mrna', qresult.target)
self.assertEqual('blastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(5, len(qresult))
# second qresult, first hit
hit = qresult[0]
self.assertEqual('gi|356995852|ref|NM_013633.3|', hit.id)
self.assertEqual('Mus musculus POU domain, class 5, transcription factor 1 (Pou5f1), transcript variant 1, mRNA', hit.description)
self.assertEqual(1353, hit.seq_len)
self.assertEqual(1, len(hit))
# second qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(490, hsp.aln_span)
self.assertEqual(0.0, hsp.evalue)
self.assertEqual(905.0, hsp.bitscore)
self.assertEqual(490.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(490, hsp.ident_num)
self.assertEqual(490, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(490, hsp.query_end)
self.assertEqual(490, hsp.hit_end)
self.assertEqual('GAGGTGAAACCGTCCCTAGGTGAGCCGTCTTTCCACCAGG', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('GAGGTGAAACCGTCCCTAGGTGAGCCGTCTTTCCACCAGG', str(hsp.hit.seq)[:40])
self.assertEqual('AGTCCCAGGACATGAAAGCCCTGCAGAAGGAGCTAGAACA', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('AGTCCCAGGACATGAAAGCCCTGCAGAAGGAGCTAGAACA', str(hsp.hit.seq)[-40:])
# second qresult, second hit
hit = qresult[1]
self.assertEqual('gi|377833530|ref|XR_141831.1|', hit.id)
self.assertEqual('PREDICTED: Mus musculus predicted gene, 19553 (Gm19553), miscRNA', hit.description)
self.assertEqual(570, hit.seq_len)
self.assertEqual(1, len(hit))
# second qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(490, hsp.aln_span)
self.assertEqual(0.0, hsp.evalue)
self.assertEqual(900.0, hsp.bitscore)
self.assertEqual(487.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(489, hsp.ident_num)
self.assertEqual(490, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(80, hsp.hit_start)
self.assertEqual(490, hsp.query_end)
self.assertEqual(570, hsp.hit_end)
self.assertEqual('GAGGTGAAACCGTCCCTAGGTGAGCCGTCTTTCCACCAGG', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('GAGGTGAAACCGTCCCTAGGTGAGCCGTCTTTCCACCAGG', str(hsp.hit.seq)[:40])
self.assertEqual('AGTCCCAGGACATGAAAGCCCTGCAGAAGGAGCTAGAACA', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('AGTCCCAGGACATGAAAGCCCTGCAGAAGGAGCTAGAACA', str(hsp.hit.seq)[-40:])
# test third qresult
qresult = qresults[2]
self.assertEqual('hg19_dna', qresult.id)
self.assertEqual('range=chr1:1207307-1207372 5\'pad=0 3\'pad=0 strand=+repeatMasking=none', qresult.description)
self.assertEqual(66, qresult.seq_len)
self.assertEqual('minirefseq_mrna', qresult.target)
self.assertEqual('blastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(5, len(qresult))
# third qresult, first hit
hit = qresult[0]
self.assertEqual('gi|94721341|ref|NM_001040441.1|', hit.id)
self.assertEqual('Homo sapiens zinc finger and BTB domain containing 8A (ZBTB8A), mRNA', hit.description)
self.assertEqual(7333, hit.seq_len)
self.assertEqual(2, len(hit))
# third qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(62, hsp.aln_span)
self.assertEqual(6e-29, hsp.evalue)
self.assertEqual(115.0, hsp.bitscore)
self.assertEqual(62.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(62, hsp.ident_num)
self.assertEqual(62, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(4, hsp.query_start)
self.assertEqual(3676, hsp.hit_start)
self.assertEqual(66, hsp.query_end)
self.assertEqual(3738, hsp.hit_end)
self.assertEqual('GCCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCG', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('GCCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCG', str(hsp.hit.seq)[:40])
self.assertEqual('AACAAGAGCGAAACTCCGTCTCaaaaaaaaaaaaaaaaaa', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('AACAAGAGCGAAACTCCGTCTCAAAAAAAAAAAAAAAAAA', str(hsp.hit.seq)[-40:])
# third qresult, first hit, second hsp
hsp = qresult[0].hsps[1]
self.assertEqual(53, hsp.aln_span)
self.assertEqual(6e-24, hsp.evalue)
self.assertEqual(99.0, hsp.bitscore)
self.assertEqual(53.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(53, hsp.ident_num)
self.assertEqual(53, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(5, hsp.query_start)
self.assertEqual(2823, hsp.hit_start)
self.assertEqual(58, hsp.query_end)
self.assertEqual(2876, hsp.hit_end)
self.assertEqual('CCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCGT', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('CCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCGT', str(hsp.hit.seq)[:40])
self.assertEqual('GCCTGGGCAACAAGAGCGAAACTCCGTCTCaaaaaaaaaa', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('GCCTGGGCAACAAGAGCGAAACTCCGTCTCAAAAAAAAAA', str(hsp.hit.seq)[-40:])
# third qresult, second hit
hit = qresult[1]
self.assertEqual('gi|332237160|ref|XM_003267724.1|', hit.id)
self.assertEqual('PREDICTED: Nomascus leucogenys ATG14 autophagy related 14 homolog (S. cerevisiae) (ATG14), mRNA', hit.description)
self.assertEqual(4771, hit.seq_len)
self.assertEqual(1, len(hit))
# third qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(62, hsp.aln_span)
self.assertEqual(6e-29, hsp.evalue)
self.assertEqual(115.0, hsp.bitscore)
self.assertEqual(62.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(62, hsp.ident_num)
self.assertEqual(62, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(4, hsp.query_start)
self.assertEqual(2864, hsp.hit_start)
self.assertEqual(66, hsp.query_end)
self.assertEqual(2926, hsp.hit_end)
self.assertEqual('GCCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCG', str(hsp.query.seq)[:40])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][:40])
self.assertEqual('GCCATTGCACTCCAGCCTGGGCAACAAGAGCGAAACTCCG', str(hsp.hit.seq)[:40])
self.assertEqual('AACAAGAGCGAAACTCCGTCTCaaaaaaaaaaaaaaaaaa', str(hsp.query.seq)[-40:])
self.assertEqual('||||||||||||||||||||||||||||||||||||||||', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('AACAAGAGCGAAACTCCGTCTCAAAAAAAAAAAAAAAAAA', str(hsp.hit.seq)[-40:])
class BlastpCases(BaseBlastCases):
def test_text_2226_blastp_001(self):
"""Test parsing blastp output (text_2226_blastp_001.txt)"""
blast_file = get_file('text_2226_blastp_001.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(32, qresult.seq_len)
self.assertEqual('NCBI Protein Reference Sequences', qresult.target)
self.assertEqual('blastp', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))
def test_text_2226_blastp_002(self):
"""Test parsing blastp output (text_2226_blastp_002.txt)"""
blast_file = get_file('text_2226_blastp_002.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('gi|16080617|ref|NP_391444.1|', qresult.id)
self.assertEqual('membrane bound lipoprotein [Bacillussubtilis subsp. subtilis str. 168]', qresult.description)
self.assertEqual(102, qresult.seq_len)
self.assertEqual('NCBI Protein Reference Sequences', qresult.target)
self.assertEqual('blastp', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(10, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|NP_391444.1|', hit.id)
self.assertEqual('membrane bound lipoprotein [Bacillus subtilis subsp. subtilis str. 168] ref|ZP_03593363.1| membrane bound lipoprotein [Bacillus subtilis subsp. subtilis str. 168] ref|ZP_03597648.1| membrane bound lipoprotein [Bacillus subtilis subsp. subtilis str. NCIB 3610] ref|ZP_03602051.1| membrane bound lipoprotein [Bacillus subtilis subsp. subtilis str. JH642] ref|ZP_03606337.1| membrane bound lipoprotein [Bacillus subtilis subsp. subtilis str. SMY] ref|YP_004205398.1| unnamed protein product [Bacillus subtilis BSn5]', hit.description)
self.assertEqual(102, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(102, hsp.aln_span)
self.assertEqual(1e-66, hsp.evalue)
self.assertEqual(205.0, hsp.bitscore)
self.assertEqual(521.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(102, hsp.ident_num)
self.assertEqual(102, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(102, hsp.query_end)
self.assertEqual(102, hsp.hit_end)
self.assertEqual('MKKFIALLFFILLLSGCGVNSQKSQGEDVSPDSNIETKEG', str(hsp.query.seq)[:40])
self.assertEqual('MKKFIALLFFILLLSGCGVNSQKSQGEDVSPDSNIETKEG', hsp.aln_annotation['similarity'][:40])
self.assertEqual('MKKFIALLFFILLLSGCGVNSQKSQGEDVSPDSNIETKEG', str(hsp.hit.seq)[:40])
self.assertEqual('DITEESTSDLDKFNSGDKVTITYEKNDEGQLLLKDIERAN', str(hsp.query.seq)[-40:])
self.assertEqual('DITEESTSDLDKFNSGDKVTITYEKNDEGQLLLKDIERAN', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('DITEESTSDLDKFNSGDKVTITYEKNDEGQLLLKDIERAN', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|YP_003922001.1|', hit.id)
self.assertEqual('membrane bound lipoprotein [Bacillus amyloliquefaciens DSM 7]', hit.description)
self.assertEqual(100, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(102, hsp.aln_span)
self.assertEqual(1e-40, hsp.evalue)
self.assertEqual(139.0, hsp.bitscore)
self.assertEqual(350.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(2, hsp.gap_num)
self.assertEqual(69, hsp.ident_num)
self.assertEqual(81, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(102, hsp.query_end)
self.assertEqual(100, hsp.hit_end)
self.assertEqual('MKKFIALLFFILLLSGCGVNSQKSQGEDVSPDSNIETKEG', str(hsp.query.seq)[:40])
self.assertEqual('MKK LFFILLL+GCGV ++KSQGED + TKEG', hsp.aln_annotation['similarity'][:40])
self.assertEqual('MKKIFGCLFFILLLAGCGVTNEKSQGEDAG--EKLVTKEG', str(hsp.hit.seq)[:40])
self.assertEqual('DITEESTSDLDKFNSGDKVTITYEKNDEGQLLLKDIERAN', str(hsp.query.seq)[-40:])
self.assertEqual('DITEES D+ N+G+KVT+ Y+KN +GQL+LKDIE AN', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('DITEESADDVKNLNNGEKVTVKYQKNSKGQLVLKDIEPAN', str(hsp.hit.seq)[-40:])

def test_text_2226_blastp_003(self):
"""Test parsing blastp output (text_2226_blastp_003.txt)"""
blast_file = get_file('text_2226_blastp_003.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('gi|11464971:4-101', qresult.id)
self.assertEqual('pleckstrin [Mus musculus]', qresult.description)
self.assertEqual(98, qresult.seq_len)
self.assertEqual('NCBI Protein Reference Sequences', qresult.target)
self.assertEqual('blastp', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(10, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|NP_062422.1|', hit.id)
self.assertEqual('pleckstrin [Mus musculus]', hit.description)
self.assertEqual(350, hit.seq_len)
self.assertEqual(2, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(98, hsp.aln_span)
self.assertEqual(1e-63, hsp.evalue)
self.assertEqual(205.0, hsp.bitscore)
self.assertEqual(522.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(98, hsp.ident_num)
self.assertEqual(98, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(3, hsp.hit_start)
self.assertEqual(98, hsp.query_end)
self.assertEqual(101, hsp.hit_end)
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', hsp.aln_annotation['similarity'][:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.hit.seq)[:40])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.query.seq)[-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.hit.seq)[-40:])
# first qresult, first hit, second hsp
hsp = qresult[0].hsps[1]
self.assertEqual(100, hsp.aln_span)
self.assertEqual(0.002, hsp.evalue)
self.assertEqual(43.5, hsp.bitscore)
self.assertEqual(101.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(6, hsp.gap_num)
self.assertEqual(29, hsp.ident_num)
self.assertEqual(48, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(2, hsp.query_start)
self.assertEqual(245, hsp.hit_start)
self.assertEqual(96, hsp.query_end)
self.assertEqual(345, hsp.hit_end)
self.assertEqual('IREGYLVKKGSVFNTWKPMWVVLLEDG--IEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('I++G L+K+G WK +L ED + +Y ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('IKQGCLLKQGHRRKNWKVRKFILREDPAYLHYYDPAGGED', str(hsp.hit.seq)[:40])
self.assertEqual('FGK--RMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKA', str(hsp.query.seq)[-40:])
self.assertEqual(' K + +I T + ++ QAA +ER W++ I+ A', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('VKKSDEENLFEIITADEVHYYLQAATSKERTEWIKAIQVA', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|XP_003502426.1|', hit.id)
self.assertEqual('PREDICTED: pleckstrin-like [Cricetulus griseus]', hit.description)
self.assertEqual(350, hit.seq_len)
self.assertEqual(2, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(98, hsp.aln_span)
self.assertEqual(2e-63, hsp.evalue)
self.assertEqual(205.0, hsp.bitscore)
self.assertEqual(521.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(98, hsp.ident_num)
self.assertEqual(98, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(3, hsp.hit_start)
self.assertEqual(98, hsp.query_end)
self.assertEqual(101, hsp.hit_end)
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', hsp.aln_annotation['similarity'][:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.hit.seq)[:40])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.query.seq)[-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.hit.seq)[-40:])
# first qresult, second hit, second hsp
hsp = qresult[1].hsps[1]
self.assertEqual(100, hsp.aln_span)
self.assertEqual(0.001, hsp.evalue)
self.assertEqual(43.9, hsp.bitscore)
self.assertEqual(102.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(6, hsp.gap_num)
self.assertEqual(30, hsp.ident_num)
self.assertEqual(50, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(2, hsp.query_start)
self.assertEqual(245, hsp.hit_start)
self.assertEqual(96, hsp.query_end)
self.assertEqual(345, hsp.hit_end)
self.assertEqual('IREGYLVKKGSVFNTWKPMWVVLLEDG--IEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('I++G L+K+G WK +L ED + +Y ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('IKQGCLLKQGHRRKNWKVRKFILREDPAYLHYYDPAGGED', str(hsp.hit.seq)[:40])
self.assertEqual('GKRM---FVLKITTTKQQDHFFQAAFLEERDAWVRDIKKA', str(hsp.query.seq)[-40:])
self.assertEqual('GK+ + +I T + ++ QAA +ER W++ I+ A', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('GKKSDDENLFEIITADEVHYYLQAAAPKERTEWIKAIQVA', str(hsp.hit.seq)[-40:])

def test_text_2226_blastp_004(self):
"""Test parsing blastp output (text_2226_blastp_004.txt)"""
blast_file = get_file('text_2226_blastp_004.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(3, len(qresults))
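# this output file concatenates three searches, yielding one QueryResult per query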
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(32, qresult.seq_len)
self.assertEqual('minirefseq_prot', qresult.target)
self.assertEqual('blastp', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))
# test second qresult
qresult = qresults[1]
self.assertEqual('gi|16080617|ref|NP_391444.1|', qresult.id)
self.assertEqual('membrane bound lipoprotein [Bacillussubtilis subsp. subtilis str. 168]', qresult.description)
self.assertEqual(102, qresult.seq_len)
self.assertEqual('minirefseq_prot', qresult.target)
self.assertEqual('blastp', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(5, len(qresult))
# second qresult, first hit
hit = qresult[0]
self.assertEqual('gi|308175296|ref|YP_003922001.1|', hit.id)
self.assertEqual('membrane bound lipoprotein [Bacillus amyloliquefaciens DSM 7]', hit.description)
self.assertEqual(100, hit.seq_len)
self.assertEqual(1, len(hit))
# second qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(102, hsp.aln_span)
self.assertEqual(2e-46, hsp.evalue)
self.assertEqual(139.0, hsp.bitscore)
self.assertEqual(350.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(2, hsp.gap_num)
self.assertEqual(69, hsp.ident_num)
self.assertEqual(81, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(102, hsp.query_end)
self.assertEqual(100, hsp.hit_end)
self.assertEqual('MKKFIALLFFILLLSGCGVNSQKSQGEDVSPDSNIETKEG', str(hsp.query.seq)[:40])
self.assertEqual('MKK LFFILLL+GCGV ++KSQGED + TKEG', hsp.aln_annotation['similarity'][:40])
self.assertEqual('MKKIFGCLFFILLLAGCGVTNEKSQGEDAG--EKLVTKEG', str(hsp.hit.seq)[:40])
self.assertEqual('DITEESTSDLDKFNSGDKVTITYEKNDEGQLLLKDIERAN', str(hsp.query.seq)[-40:])
self.assertEqual('DITEES D+ N+G+KVT+ Y+KN +GQL+LKDIE AN', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('DITEESADDVKNLNNGEKVTVKYQKNSKGQLVLKDIEPAN', str(hsp.hit.seq)[-40:])
# second qresult, second hit
hit = qresult[1]
self.assertEqual('gi|375363999|ref|YP_005132038.1|', hit.id)
self.assertEqual('lytA gene product [Bacillus amyloliquefaciens subsp. plantarum CAU B946]', hit.description)
self.assertEqual(105, hit.seq_len)
self.assertEqual(1, len(hit))
# second qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(105, hsp.aln_span)
self.assertEqual(7e-27, hsp.evalue)
self.assertEqual(89.0, hsp.bitscore)
self.assertEqual(219.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(5, hsp.gap_num)
self.assertEqual(48, hsp.ident_num)
self.assertEqual(69, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(101, hsp.query_end)
self.assertEqual(104, hsp.hit_end)
self.assertEqual('MKKFIALLFFILL----LSGCGVNSQKSQGEDVSPDSNIE', str(hsp.query.seq)[:40])
self.assertEqual('MKK IA F ILL L+ CG Q +G S ++ +', hsp.aln_annotation['similarity'][:40])
self.assertEqual('MKKTIAASFLILLFSVVLAACGTAEQSKKGSG-SSENQAQ', str(hsp.hit.seq)[:40])
self.assertEqual('LDITEESTSDLDKFNSGDKVTITYEKNDEGQLLLKDIERA', str(hsp.query.seq)[-40:])
self.assertEqual(' + +++ + L+KF+ DKV+ITY ND+GQ +K+IE+A', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('FEFSDDFSDVLNKFSENDKVSITYFTNDKGQKEIKEIEKA', str(hsp.hit.seq)[-40:])
# test third qresult
qresult = qresults[2]
self.assertEqual('gi|11464971:4-101', qresult.id)
self.assertEqual('pleckstrin [Mus musculus]', qresult.description)
self.assertEqual(98, qresult.seq_len)
self.assertEqual('minirefseq_prot', qresult.target)
self.assertEqual('blastp', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(5, len(qresult))
# third qresult, first hit
hit = qresult[0]
self.assertEqual('gi|11464971|ref|NP_062422.1|', hit.id)
self.assertEqual('pleckstrin [Mus musculus]', hit.description)
self.assertEqual(350, hit.seq_len)
self.assertEqual(2, len(hit))
# third qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(98, hsp.aln_span)
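# the e-value is lower here than in the _003 run, presumably because minirefseq_prot is a much smaller database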
self.assertEqual(2e-69, hsp.evalue)
self.assertEqual(205.0, hsp.bitscore)
self.assertEqual(522.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(98, hsp.ident_num)
self.assertEqual(98, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(3, hsp.hit_start)
self.assertEqual(98, hsp.query_end)
self.assertEqual(101, hsp.hit_end)
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', hsp.aln_annotation['similarity'][:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.hit.seq)[:40])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.query.seq)[-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.hit.seq)[-40:])
# third qresult, first hit, second hsp
hsp = qresult[0].hsps[1]
self.assertEqual(100, hsp.aln_span)
self.assertEqual(3e-09, hsp.evalue)
self.assertEqual(43.5, hsp.bitscore)
self.assertEqual(101.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(6, hsp.gap_num)
self.assertEqual(29, hsp.ident_num)
self.assertEqual(48, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(2, hsp.query_start)
self.assertEqual(245, hsp.hit_start)
self.assertEqual(96, hsp.query_end)
self.assertEqual(345, hsp.hit_end)
self.assertEqual('IREGYLVKKGSVFNTWKPMWVVLLEDG--IEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('I++G L+K+G WK +L ED + +Y ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('IKQGCLLKQGHRRKNWKVRKFILREDPAYLHYYDPAGGED', str(hsp.hit.seq)[:40])
self.assertEqual('FGK--RMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKA', str(hsp.query.seq)[-40:])
self.assertEqual(' K + +I T + ++ QAA +ER W++ I+ A', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('VKKSDEENLFEIITADEVHYYLQAATSKERTEWIKAIQVA', str(hsp.hit.seq)[-40:])
# third qresult, second hit
hit = qresult[1]
self.assertEqual('gi|354480464|ref|XP_003502426.1|', hit.id)
self.assertEqual('PREDICTED: pleckstrin-like [Cricetulus griseus]', hit.description)
self.assertEqual(350, hit.seq_len)
self.assertEqual(2, len(hit))
# third qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(98, hsp.aln_span)
self.assertEqual(3e-69, hsp.evalue)
self.assertEqual(205.0, hsp.bitscore)
self.assertEqual(521.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(98, hsp.ident_num)
self.assertEqual(98, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(3, hsp.hit_start)
self.assertEqual(98, hsp.query_end)
self.assertEqual(101, hsp.hit_end)
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', hsp.aln_annotation['similarity'][:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.hit.seq)[:40])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.query.seq)[-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.hit.seq)[-40:])
# third qresult, second hit, second hsp
hsp = qresult[1].hsps[1]
self.assertEqual(100, hsp.aln_span)
self.assertEqual(2e-09, hsp.evalue)
self.assertEqual(43.9, hsp.bitscore)
self.assertEqual(102.0, hsp.bitscore_raw)
self.assertEqual(0, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(6, hsp.gap_num)
self.assertEqual(30, hsp.ident_num)
self.assertEqual(50, hsp.pos_num)
self.assertEqual(0, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(2, hsp.query_start)
self.assertEqual(245, hsp.hit_start)
self.assertEqual(96, hsp.query_end)
self.assertEqual(345, hsp.hit_end)
self.assertEqual('IREGYLVKKGSVFNTWKPMWVVLLEDG--IEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('I++G L+K+G WK +L ED + +Y ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('IKQGCLLKQGHRRKNWKVRKFILREDPAYLHYYDPAGGED', str(hsp.hit.seq)[:40])
self.assertEqual('GKRM---FVLKITTTKQQDHFFQAAFLEERDAWVRDIKKA', str(hsp.query.seq)[-40:])
self.assertEqual('GK+ + +I T + ++ QAA +ER W++ I+ A', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('GKKSDDENLFEIITADEVHYYLQAAAPKERTEWIKAIQVA', str(hsp.hit.seq)[-40:])
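# The blastx cases below exercise the same plain-text format handler. A
# minimal standalone sketch (assuming Biopython is installed) would be:
#   from Bio import SearchIO
#   qresults = list(SearchIO.parse('text_2226_blastx_001.txt', 'blast-text'))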

class BlastxCases(BaseBlastCases):
def test_text_2226_blastx_001(self):
"""Test parsing blastx output (text_2226_blastx_001.txt)"""
blast_file = get_file('text_2226_blastx_001.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(128, qresult.seq_len)
self.assertEqual('NCBI Protein Reference Sequences', qresult.target)
self.assertEqual('blastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))

def test_text_2226_blastx_002(self):
"""Test parsing blastx output (text_2226_blastx_002.txt)"""
blast_file = get_file('text_2226_blastx_002.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('gi|356995852:1-490', qresult.id)
self.assertEqual('Mus musculus POU domain, class 5, transcriptionfactor 1 (Pou5f1), transcript variant 1, mRNA', qresult.description)
self.assertEqual(490, qresult.seq_len)
self.assertEqual('NCBI Protein Reference Sequences', qresult.target)
self.assertEqual('blastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(10, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|NP_038661.2|', hit.id)
self.assertEqual('POU domain, class 5, transcription factor 1 isoform 1 [Mus musculus]', hit.description)
self.assertEqual(352, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(140, hsp.aln_span)
self.assertEqual(4e-57, hsp.evalue)
self.assertEqual(192.0, hsp.bitscore)
self.assertEqual(487.0, hsp.bitscore_raw)
self.assertEqual(3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(140, hsp.ident_num)
self.assertEqual(140, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
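# the positive query frame (+3) marks a plus-strand translation of the nucleotide query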
self.assertEqual(68, hsp.query_start)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(488, hsp.query_end)
self.assertEqual(140, hsp.hit_end)
self.assertEqual('MAGHLAsdfafspppgggdgsagLEPGWVDPRTWLSFQgp', str(hsp.query.seq)[:40])
self.assertEqual('MAGHLASDFAFSPPPGGGDGSAGLEPGWVDPRTWLSFQGP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('MAGHLASDFAFSPPPGGGDGSAGLEPGWVDPRTWLSFQGP', str(hsp.hit.seq)[:40])
self.assertEqual('NSEGTSSEPCADRPNAVKLEKVEPTPEESQDMKALQKELE', str(hsp.query.seq)[-40:])
self.assertEqual('NSEGTSSEPCADRPNAVKLEKVEPTPEESQDMKALQKELE', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('NSEGTSSEPCADRPNAVKLEKVEPTPEESQDMKALQKELE', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|NP_001009178.1|', hit.id)
self.assertEqual('POU class 5 homeobox 1 [Rattus norvegicus]', hit.description)
self.assertEqual(352, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(140, hsp.aln_span)
self.assertEqual(3e-52, hsp.evalue)
self.assertEqual(179.0, hsp.bitscore)
self.assertEqual(454.0, hsp.bitscore_raw)
self.assertEqual(3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(133, hsp.ident_num)
self.assertEqual(135, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(68, hsp.query_start)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(488, hsp.query_end)
self.assertEqual(140, hsp.hit_end)
self.assertEqual('MAGHLAsdfafspppgggdgsagLEPGWVDPRTWLSFQgp', str(hsp.query.seq)[:40])
self.assertEqual('MAGHLASDFAFSPPPGGGDGSAGLEPGWVDPRTWLSFQGP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('MAGHLASDFAFSPPPGGGDGSAGLEPGWVDPRTWLSFQGP', str(hsp.hit.seq)[:40])
self.assertEqual('NSEGTSSEPCADRPNAVKLEKVEPTPEESQDMKALQKELE', str(hsp.query.seq)[-40:])
self.assertEqual('NSEG SS PC RP+AVKLEKVEP+PEESQDMKALQKELE', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('NSEGASSGPCTARPSAVKLEKVEPSPEESQDMKALQKELE', str(hsp.hit.seq)[-40:])

def test_text_2226_blastx_003(self):
"""Test parsing blastx output (text_2226_blastx_003.txt)"""
blast_file = get_file('text_2226_blastx_003.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('hg19_dna', qresult.id)
self.assertEqual('range=chr1:1207057-1207541 5\'pad=0 3\'pad=0 strand=+repeatMasking=none', qresult.description)
self.assertEqual(485, qresult.seq_len)
self.assertEqual('NCBI Protein Reference Sequences', qresult.target)
self.assertEqual('blastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(10, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|XP_003278367.1|', hit.id)
self.assertEqual('PREDICTED: UPF0764 protein C16orf89-like [Nomascus leucogenys]', hit.description)
self.assertEqual(132, hit.seq_len)
self.assertEqual(2, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(95, hsp.aln_span)
self.assertEqual(2e-32, hsp.evalue)
self.assertEqual(121.0, hsp.bitscore)
self.assertEqual(304.0, hsp.bitscore_raw)
self.assertEqual(-3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(69, hsp.ident_num)
self.assertEqual(74, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
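# frame -3 means the reverse complement of the query was translated, hence the -1 query strand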
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(15, hsp.query_start)
self.assertEqual(24, hsp.hit_start)
self.assertEqual(300, hsp.query_end)
self.assertEqual(119, hsp.hit_end)
self.assertEqual('LRRSFALVAQAGVQWLDLGppqpppPGFK*FSCLSHPSSW', str(hsp.query.seq)[:40])
self.assertEqual('LRRSFALVAQ VQW +LG PQPPPPGFK FSCLS SSW', hsp.aln_annotation['similarity'][:40])
self.assertEqual('LRRSFALVAQTRVQWYNLGSPQPPPPGFKRFSCLSLLSSW', str(hsp.hit.seq)[:40])
self.assertEqual('VETGFYHVGQAGLEPPISGNLPAWASQSVGITGVSHHAQP', str(hsp.query.seq)[-40:])
self.assertEqual('VE GF HVGQAGLE SG+ P SQS GI GVSH AQP', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('VEMGFLHVGQAGLELVTSGDPPTLTSQSAGIIGVSHCAQP', str(hsp.hit.seq)[-40:])
# first qresult, first hit, second hsp
hsp = qresult[0].hsps[1]
self.assertEqual(72, hsp.aln_span)
self.assertEqual(2e-06, hsp.evalue)
self.assertEqual(51.6, hsp.bitscore)
self.assertEqual(122.0, hsp.bitscore_raw)
self.assertEqual(-3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(5, hsp.gap_num)
self.assertEqual(34, hsp.ident_num)
self.assertEqual(41, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(243, hsp.query_start)
self.assertEqual(31, hsp.hit_start)
self.assertEqual(459, hsp.query_end)
self.assertEqual(98, hsp.hit_end)
self.assertEqual('VGPARVQ*HDLSSLQPPAPEFK*FSHLSLQSSWDCRCPPP', str(hsp.query.seq)[:40])
self.assertEqual('V RVQ ++L S QPP P FK FS LSL SSW+ R PP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('VAQTRVQWYNLGSPQPPPPGFKRFSCLSLLSSWEYRHVPP', str(hsp.hit.seq)[:40])
self.assertEqual('WDCRCPPPHPANffffffffFLRRSFALVAQAGVQWLDLG', str(hsp.query.seq)[-40:])
self.assertEqual('W+ R PPH AN F F + F V QAG++ + G', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('WEYRHVPPHLAN-----FLFLVEMGFLHVGQAGLELVTSG', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|NP_001243358.1|', hit.id)
self.assertEqual('PDZ and LIM domain protein 5 isoform i [Homo sapiens]', hit.description)
self.assertEqual(136, hit.seq_len)
self.assertEqual(2, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(88, hsp.aln_span)
self.assertEqual(1e-29, hsp.evalue)
self.assertEqual(114.0, hsp.bitscore)
self.assertEqual(286.0, hsp.bitscore_raw)
self.assertEqual(-3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(63, hsp.ident_num)
self.assertEqual(69, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(15, hsp.query_start)
self.assertEqual(29, hsp.hit_start)
self.assertEqual(279, hsp.query_end)
self.assertEqual(117, hsp.hit_end)
self.assertEqual('VAQAGVQWLDLGppqpppPGFK*FSCLSHPSSWDYRHMPP', str(hsp.query.seq)[:40])
self.assertEqual('++ AGVQW +LG PQPP P FK FSCLS PSSWDYRH+PP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('ISSAGVQWRNLGSPQPPSPEFKRFSCLSLPSSWDYRHVPP', str(hsp.hit.seq)[:40])
self.assertEqual('VETGFYHVGQAGLEPPISGNLPAWASQSVGITGVSHHAQP', str(hsp.query.seq)[-40:])
self.assertEqual('VET F +VGQAGLE P SG+LP ASQS ITGVSH A P', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('VETKFPYVGQAGLELPTSGDLPTSASQSAKITGVSHRAWP', str(hsp.hit.seq)[-40:])
# first qresult, second hit, second hsp
hsp = qresult[1].hsps[1]
self.assertEqual(69, hsp.aln_span)
self.assertEqual(1e-06, hsp.evalue)
self.assertEqual(52.4, hsp.bitscore)
self.assertEqual(124.0, hsp.bitscore_raw)
self.assertEqual(-3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(5, hsp.gap_num)
self.assertEqual(33, hsp.ident_num)
self.assertEqual(41, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(258, hsp.query_start)
self.assertEqual(27, hsp.hit_start)
self.assertEqual(465, hsp.query_end)
self.assertEqual(91, hsp.hit_end)
self.assertEqual('VSVGPARVQ*HDLSSLQPPAPEFK*FSHLSLQSSWDCRCP', str(hsp.query.seq)[:40])
self.assertEqual('+++ A VQ +L S QPP+PEFK FS LSL SSWD R ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('LTISSAGVQWRNLGSPQPPSPEFKRFSCLSLPSSWDYRHV', str(hsp.hit.seq)[:40])
self.assertEqual('SLQSSWDCRCPPPHPANffffffffFLRRSFALVAQAGVQ', str(hsp.query.seq)[-40:])
self.assertEqual('SL SSWD R PP AN F F + F V QAG++', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('SLPSSWDYRHVPPRLAN-----FVFLVETKFPYVGQAGLE', str(hsp.hit.seq)[-40:])

def test_text_2226_blastx_004(self):
"""Test parsing blastx output (text_2226_blastx_004.txt)"""
blast_file = get_file('text_2226_blastx_004.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(2, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(128, qresult.seq_len)
self.assertEqual('minirefseq_prot', qresult.target)
self.assertEqual('blastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))
# test second qresult
qresult = qresults[1]
self.assertEqual('hg19_dna', qresult.id)
self.assertEqual('range=chr1:1207057-1207541 5\'pad=0 3\'pad=0 strand=+repeatMasking=none', qresult.description)
self.assertEqual(485, qresult.seq_len)
self.assertEqual('minirefseq_prot', qresult.target)
self.assertEqual('blastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(5, len(qresult))
# second qresult, first hit
hit = qresult[0]
self.assertEqual('gi|332258565|ref|XP_003278367.1|', hit.id)
self.assertEqual('PREDICTED: UPF0764 protein C16orf89-like [Nomascus leucogenys]', hit.description)
self.assertEqual(132, hit.seq_len)
self.assertEqual(2, len(hit))
# second qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(95, hsp.aln_span)
self.assertEqual(3e-38, hsp.evalue)
self.assertEqual(121.0, hsp.bitscore)
self.assertEqual(304.0, hsp.bitscore_raw)
self.assertEqual(-3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(69, hsp.ident_num)
self.assertEqual(74, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(15, hsp.query_start)
self.assertEqual(24, hsp.hit_start)
self.assertEqual(300, hsp.query_end)
self.assertEqual(119, hsp.hit_end)
self.assertEqual('LRRSFALVAQAGVQWLDLGppqpppPGFK*FSCLSHPSSW', str(hsp.query.seq)[:40])
self.assertEqual('LRRSFALVAQ VQW +LG PQPPPPGFK FSCLS SSW', hsp.aln_annotation['similarity'][:40])
self.assertEqual('LRRSFALVAQTRVQWYNLGSPQPPPPGFKRFSCLSLLSSW', str(hsp.hit.seq)[:40])
self.assertEqual('VETGFYHVGQAGLEPPISGNLPAWASQSVGITGVSHHAQP', str(hsp.query.seq)[-40:])
self.assertEqual('VE GF HVGQAGLE SG+ P SQS GI GVSH AQP', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('VEMGFLHVGQAGLELVTSGDPPTLTSQSAGIIGVSHCAQP', str(hsp.hit.seq)[-40:])
# second qresult, first hit, second hsp
hsp = qresult[0].hsps[1]
self.assertEqual(72, hsp.aln_span)
self.assertEqual(3e-12, hsp.evalue)
self.assertEqual(51.6, hsp.bitscore)
self.assertEqual(122.0, hsp.bitscore_raw)
self.assertEqual(-3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(5, hsp.gap_num)
self.assertEqual(34, hsp.ident_num)
self.assertEqual(41, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(243, hsp.query_start)
self.assertEqual(31, hsp.hit_start)
self.assertEqual(459, hsp.query_end)
self.assertEqual(98, hsp.hit_end)
self.assertEqual('VGPARVQ*HDLSSLQPPAPEFK*FSHLSLQSSWDCRCPPP', str(hsp.query.seq)[:40])
self.assertEqual('V RVQ ++L S QPP P FK FS LSL SSW+ R PP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('VAQTRVQWYNLGSPQPPPPGFKRFSCLSLLSSWEYRHVPP', str(hsp.hit.seq)[:40])
self.assertEqual('WDCRCPPPHPANffffffffFLRRSFALVAQAGVQWLDLG', str(hsp.query.seq)[-40:])
self.assertEqual('W+ R PPH AN F F + F V QAG++ + G', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('WEYRHVPPHLAN-----FLFLVEMGFLHVGQAGLELVTSG', str(hsp.hit.seq)[-40:])
# second qresult, second hit
hit = qresult[1]
self.assertEqual('gi|374093214|ref|NP_001243358.1|', hit.id)
self.assertEqual('PDZ and LIM domain protein 5 isoform i [Homo sapiens]', hit.description)
self.assertEqual(136, hit.seq_len)
self.assertEqual(2, len(hit))
# second qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(88, hsp.aln_span)
self.assertEqual(2e-35, hsp.evalue)
self.assertEqual(114.0, hsp.bitscore)
self.assertEqual(286.0, hsp.bitscore_raw)
self.assertEqual(-3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(63, hsp.ident_num)
self.assertEqual(69, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(15, hsp.query_start)
self.assertEqual(29, hsp.hit_start)
self.assertEqual(279, hsp.query_end)
self.assertEqual(117, hsp.hit_end)
self.assertEqual('VAQAGVQWLDLGppqpppPGFK*FSCLSHPSSWDYRHMPP', str(hsp.query.seq)[:40])
self.assertEqual('++ AGVQW +LG PQPP P FK FSCLS PSSWDYRH+PP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('ISSAGVQWRNLGSPQPPSPEFKRFSCLSLPSSWDYRHVPP', str(hsp.hit.seq)[:40])
self.assertEqual('VETGFYHVGQAGLEPPISGNLPAWASQSVGITGVSHHAQP', str(hsp.query.seq)[-40:])
self.assertEqual('VET F +VGQAGLE P SG+LP ASQS ITGVSH A P', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('VETKFPYVGQAGLELPTSGDLPTSASQSAKITGVSHRAWP', str(hsp.hit.seq)[-40:])
# second qresult, second hit, second hsp
hsp = qresult[1].hsps[1]
self.assertEqual(69, hsp.aln_span)
self.assertEqual(2e-12, hsp.evalue)
self.assertEqual(52.4, hsp.bitscore)
self.assertEqual(124.0, hsp.bitscore_raw)
self.assertEqual(-3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(5, hsp.gap_num)
self.assertEqual(33, hsp.ident_num)
self.assertEqual(41, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(258, hsp.query_start)
self.assertEqual(27, hsp.hit_start)
self.assertEqual(465, hsp.query_end)
self.assertEqual(91, hsp.hit_end)
self.assertEqual('VSVGPARVQ*HDLSSLQPPAPEFK*FSHLSLQSSWDCRCP', str(hsp.query.seq)[:40])
self.assertEqual('+++ A VQ +L S QPP+PEFK FS LSL SSWD R ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('LTISSAGVQWRNLGSPQPPSPEFKRFSCLSLPSSWDYRHV', str(hsp.hit.seq)[:40])
self.assertEqual('SLQSSWDCRCPPPHPANffffffffFLRRSFALVAQAGVQ', str(hsp.query.seq)[-40:])
self.assertEqual('SL SSWD R PP AN F F + F V QAG++', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('SLPSSWDYRHVPPRLAN-----FVFLVETKFPYVGQAGLE', str(hsp.hit.seq)[-40:])

class TblastnCases(BaseBlastCases):
def test_text_2226_tblastn_001(self):
"""Test parsing tblastn output (text_2226_tblastn_001.txt)"""
blast_file = get_file('text_2226_tblastn_001.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(32, qresult.seq_len)
self.assertEqual('NCBI Transcript Reference Sequences', qresult.target)
self.assertEqual('tblastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))

def test_text_2226_tblastn_002(self):
"""Test parsing tblastn output (text_2226_tblastn_002.txt)"""
blast_file = get_file('text_2226_tblastn_002.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('gi|16080617|ref|NP_391444.1|', qresult.id)
self.assertEqual('membrane bound lipoprotein [Bacillussubtilis subsp. subtilis str. 168]', qresult.description)
self.assertEqual(102, qresult.seq_len)
self.assertEqual('NCBI Transcript Reference Sequences', qresult.target)
self.assertEqual('tblastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(4, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|XM_001425911.1|', hit.id)
self.assertEqual('Paramecium tetraurelia hypothetical protein (GSPATT00004923001) partial mRNA', hit.description)
self.assertEqual(4632, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(43, hsp.aln_span)
self.assertEqual(0.74, hsp.evalue)
self.assertEqual(34.7, hsp.bitscore)
self.assertEqual(78.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(15, hsp.ident_num)
self.assertEqual(26, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(30, hsp.query_start)
self.assertEqual(1743, hsp.hit_start)
self.assertEqual(73, hsp.query_end)
self.assertEqual(1872, hsp.hit_end)
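# tblastn hit coordinates are in nucleotides: the 43-column protein alignment spans 1872 - 1743 = 129 bases, i.e. 43 * 3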
self.assertEqual('PDSNIETKEGTYVGLADTHTIEVTVDNEPVSLDITEESTS', str(hsp.query.seq)[:40])
self.assertEqual('P + TK+GT +GL HTI + + +SL++ E++ ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('PKTATGTKKGTIIGLLSIHTILFILTSHALSLEVKEQT*K', str(hsp.hit.seq)[:40])
self.assertEqual('NIETKEGTYVGLADTHTIEVTVDNEPVSLDITEESTSDLD', str(hsp.query.seq)[-40:])
self.assertEqual(' TK+GT +GL HTI + + +SL++ E++ D+D', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('ATGTKKGTIIGLLSIHTILFILTSHALSLEVKEQT*KDID', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|XM_003382561.1|', hit.id)
self.assertEqual('PREDICTED: Amphimedon queenslandica CWF19-like protein 1-like (LOC100635130), mRNA', hit.description)
self.assertEqual(1811, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(74, hsp.aln_span)
self.assertEqual(6.4, hsp.evalue)
self.assertEqual(32.0, hsp.bitscore)
self.assertEqual(71.0, hsp.bitscore_raw)
self.assertEqual(2, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(11, hsp.gap_num)
self.assertEqual(19, hsp.ident_num)
self.assertEqual(36, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(28, hsp.query_start)
self.assertEqual(1105, hsp.hit_start)
self.assertEqual(94, hsp.query_end)
self.assertEqual(1318, hsp.hit_end)
self.assertEqual('VSPDSNIETKEGTYVGLADTHTIEVTVDNEPVSLDITEES', str(hsp.query.seq)[:40])
self.assertEqual('+ DS + +G GL D H + + + + P S+D +E ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('IGNDSYLALSKG---GLVDEHVLILPIGHYPSSIDAPQEV', str(hsp.hit.seq)[:40])
self.assertEqual('DITEESTSDLDK--------FNSGDKVTITYEKNDEGQLL', str(hsp.query.seq)[-40:])
self.assertEqual('D +E ++DK F+S ++ + +E+N Q L', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('DAPQEVIEEIDKFKVALRKYFSSKNQTCVMFERNFRSQHL', str(hsp.hit.seq)[-40:])

def test_text_2226_tblastn_003(self):
"""Test parsing tblastn output (text_2226_tblastn_003.txt)"""
blast_file = get_file('text_2226_tblastn_003.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('gi|11464971:4-101', qresult.id)
self.assertEqual('pleckstrin [Mus musculus]', qresult.description)
self.assertEqual(98, qresult.seq_len)
self.assertEqual('NCBI Transcript Reference Sequences', qresult.target)
self.assertEqual('tblastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(10, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|XM_003502378.1|', hit.id)
self.assertEqual('PREDICTED: Cricetulus griseus pleckstrin-like (LOC100773128), mRNA', hit.description)
self.assertEqual(1119, hit.seq_len)
self.assertEqual(2, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(98, hsp.aln_span)
self.assertEqual(1e-63, hsp.evalue)
self.assertEqual(205.0, hsp.bitscore)
self.assertEqual(521.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(98, hsp.ident_num)
self.assertEqual(98, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(75, hsp.hit_start)
self.assertEqual(98, hsp.query_end)
self.assertEqual(369, hsp.hit_end)
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', hsp.aln_annotation['similarity'][:40])
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.hit.seq)[:40])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.query.seq)[-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.hit.seq)[-40:])
# first qresult, first hit, second hsp
hsp = qresult[0].hsps[1]
self.assertEqual(100, hsp.aln_span)
self.assertEqual(0.0005, hsp.evalue)
self.assertEqual(43.9, hsp.bitscore)
self.assertEqual(102.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(6, hsp.gap_num)
self.assertEqual(30, hsp.ident_num)
self.assertEqual(50, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(2, hsp.query_start)
self.assertEqual(801, hsp.hit_start)
self.assertEqual(96, hsp.query_end)
self.assertEqual(1101, hsp.hit_end)
self.assertEqual('IREGYLVKKGSVFNTWKPMWVVLLEDG--IEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('I++G L+K+G WK +L ED + +Y ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('IKQGCLLKQGHRRKNWKVRKFILREDPAYLHYYDPAGGED', str(hsp.hit.seq)[:40])
self.assertEqual('GKRM---FVLKITTTKQQDHFFQAAFLEERDAWVRDIKKA', str(hsp.query.seq)[-40:])
self.assertEqual('GK+ + +I T + ++ QAA +ER W++ I+ A', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('GKKSDDENLFEIITADEVHYYLQAAAPKERTEWIKAIQVA', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|XM_003360601.2|', hit.id)
self.assertEqual('PREDICTED: Sus scrofa pleckstrin-like (LOC100626968), mRNA', hit.description)
self.assertEqual(772, hit.seq_len)
self.assertEqual(2, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(98, hsp.aln_span)
self.assertEqual(1e-62, hsp.evalue)
self.assertEqual(199.0, hsp.bitscore)
self.assertEqual(506.0, hsp.bitscore_raw)
self.assertEqual(2, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(94, hsp.ident_num)
self.assertEqual(96, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(94, hsp.hit_start)
self.assertEqual(98, hsp.query_end)
self.assertEqual(388, hsp.hit_end)
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('KRIREGYLVKKGS+FNTWKPMWV+LLEDGIEFYKKKSDNS', hsp.aln_annotation['similarity'][:40])
self.assertEqual('KRIREGYLVKKGSMFNTWKPMWVILLEDGIEFYKKKSDNS', str(hsp.hit.seq)[:40])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.query.seq)[-40:])
self.assertEqual('FGKRMFV KITTTKQQDHFFQAAFLEERD WVRDIKKAIK', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('FGKRMFVFKITTTKQQDHFFQAAFLEERDGWVRDIKKAIK', str(hsp.hit.seq)[-40:])
# first qresult, second hit, second hsp
hsp = qresult[1].hsps[1]
self.assertEqual(71, hsp.aln_span)
self.assertEqual(2.8, hsp.evalue)
self.assertEqual(32.7, hsp.bitscore)
self.assertEqual(73.0, hsp.bitscore_raw)
self.assertEqual(2, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(4, hsp.gap_num)
self.assertEqual(21, hsp.ident_num)
self.assertEqual(33, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(29, hsp.query_start)
self.assertEqual(541, hsp.hit_start)
self.assertEqual(96, hsp.query_end)
self.assertEqual(754, hsp.hit_end)
self.assertEqual('IEFYKKKSDNSPKGMIPLKGSTLTS-PCQDFGKRMFVLK-', str(hsp.query.seq)[:40])
self.assertEqual('+ +Y P G I L+G +TS GK F+ + ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('LHYYDPAGGEDPLGAIHLRGCVVTSVESNTDGKNGFLWER', str(hsp.hit.seq)[:40])
self.assertEqual('GKRMFVLK---ITTTKQQDHFFQAAFLEERDAWVRDIKKA', str(hsp.query.seq)[-40:])
self.assertEqual('GK F+ + T + +F QAA +ER W++ I+ A', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('GKNGFLWERAXXITADEVHYFLQAANPKERTEWIKAIQVA', str(hsp.hit.seq)[-40:])
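# the XX run in the translated hit sequence likely reflects ambiguous bases in the underlying nucleotide record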

def test_text_2226_tblastn_004(self):
"""Test parsing tblastn output (text_2226_tblastn_004.txt)"""
blast_file = get_file('text_2226_tblastn_004.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(3, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(32, qresult.seq_len)
self.assertEqual('minirefseq_mrna', qresult.target)
self.assertEqual('tblastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))
# test second qresult
qresult = qresults[1]
self.assertEqual('gi|16080617|ref|NP_391444.1|', qresult.id)
self.assertEqual('membrane bound lipoprotein [Bacillussubtilis subsp. subtilis str. 168]', qresult.description)
self.assertEqual(102, qresult.seq_len)
self.assertEqual('minirefseq_mrna', qresult.target)
self.assertEqual('tblastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(3, len(qresult))
# second qresult, first hit
hit = qresult[0]
self.assertEqual('gi|145479850|ref|XM_001425911.1|', hit.id)
self.assertEqual('Paramecium tetraurelia hypothetical protein (GSPATT00004923001) partial mRNA', hit.description)
self.assertEqual(4632, hit.seq_len)
self.assertEqual(1, len(hit))
# second qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(43, hsp.aln_span)
self.assertEqual(1e-05, hsp.evalue)
self.assertEqual(34.7, hsp.bitscore)
self.assertEqual(78.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(15, hsp.ident_num)
self.assertEqual(26, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(30, hsp.query_start)
self.assertEqual(1743, hsp.hit_start)
self.assertEqual(73, hsp.query_end)
self.assertEqual(1872, hsp.hit_end)
self.assertEqual('PDSNIETKEGTYVGLADTHTIEVTVDNEPVSLDITEESTS', str(hsp.query.seq)[:40])
self.assertEqual('P + TK+GT +GL HTI + + +SL++ E++ ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('PKTATGTKKGTIIGLLSIHTILFILTSHALSLEVKEQT*K', str(hsp.hit.seq)[:40])
self.assertEqual('NIETKEGTYVGLADTHTIEVTVDNEPVSLDITEESTSDLD', str(hsp.query.seq)[-40:])
self.assertEqual(' TK+GT +GL HTI + + +SL++ E++ D+D', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('ATGTKKGTIIGLLSIHTILFILTSHALSLEVKEQT*KDID', str(hsp.hit.seq)[-40:])
# second qresult, second hit
hit = qresult[1]
self.assertEqual('gi|72012412|ref|XM_777959.1|', hit.id)
self.assertEqual('PREDICTED: Strongylocentrotus purpuratus hypothetical LOC577746 (LOC577746), mRNA', hit.description)
self.assertEqual(1593, hit.seq_len)
self.assertEqual(1, len(hit))
# second qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(59, hsp.aln_span)
self.assertEqual(0.0001, hsp.evalue)
self.assertEqual(31.6, hsp.bitscore)
self.assertEqual(70.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(8, hsp.gap_num)
self.assertEqual(20, hsp.ident_num)
self.assertEqual(29, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(43, hsp.query_start)
self.assertEqual(1056, hsp.hit_start)
self.assertEqual(94, hsp.query_end)
self.assertEqual(1233, hsp.hit_end)
self.assertEqual('GLADTHTIEVTVDNEPVSLDITEESTSDLDKFNSG-----', str(hsp.query.seq)[:40])
self.assertEqual('GL HT+ + V + LD+TEE ++LD+F S ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('GLVPDHTLILPVGHYQSMLDLTEEVQTELDQFKSALRKYY', str(hsp.hit.seq)[:40])
self.assertEqual('DITEESTSDLDKFNSG--------DKVTITYEKNDEGQLL', str(hsp.query.seq)[-40:])
self.assertEqual('D+TEE ++LD+F S K + YE+N Q L', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('DLTEEVQTELDQFKSALRKYYLSKGKTCVIYERNFRTQHL', str(hsp.hit.seq)[-40:])
# test third qresult
qresult = qresults[2]
self.assertEqual('gi|11464971:4-101', qresult.id)
self.assertEqual('pleckstrin [Mus musculus]', qresult.description)
self.assertEqual(98, qresult.seq_len)
self.assertEqual('minirefseq_mrna', qresult.target)
self.assertEqual('tblastn', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(5, len(qresult))
# third qresult, first hit
hit = qresult[0]
self.assertEqual('gi|350596019|ref|XM_003360601.2|', hit.id)
self.assertEqual('PREDICTED: Sus scrofa pleckstrin-like (LOC100626968), mRNA', hit.description)
self.assertEqual(772, hit.seq_len)
self.assertEqual(2, len(hit))
# third qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(98, hsp.aln_span)
self.assertEqual(2e-67, hsp.evalue)
self.assertEqual(199.0, hsp.bitscore)
self.assertEqual(506.0, hsp.bitscore_raw)
self.assertEqual(2, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(94, hsp.ident_num)
self.assertEqual(96, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(94, hsp.hit_start)
self.assertEqual(98, hsp.query_end)
self.assertEqual(388, hsp.hit_end)
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('KRIREGYLVKKGS+FNTWKPMWV+LLEDGIEFYKKKSDNS', hsp.aln_annotation['similarity'][:40])
self.assertEqual('KRIREGYLVKKGSMFNTWKPMWVILLEDGIEFYKKKSDNS', str(hsp.hit.seq)[:40])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.query.seq)[-40:])
self.assertEqual('FGKRMFV KITTTKQQDHFFQAAFLEERD WVRDIKKAIK', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('FGKRMFVFKITTTKQQDHFFQAAFLEERDGWVRDIKKAIK', str(hsp.hit.seq)[-40:])
# third qresult, first hit, second hsp
hsp = qresult[0].hsps[1]
self.assertEqual(71, hsp.aln_span)
self.assertEqual(4e-05, hsp.evalue)
self.assertEqual(32.7, hsp.bitscore)
self.assertEqual(73.0, hsp.bitscore_raw)
self.assertEqual(2, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(4, hsp.gap_num)
self.assertEqual(21, hsp.ident_num)
self.assertEqual(33, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(29, hsp.query_start)
self.assertEqual(541, hsp.hit_start)
self.assertEqual(96, hsp.query_end)
self.assertEqual(754, hsp.hit_end)
self.assertEqual('IEFYKKKSDNSPKGMIPLKGSTLTS-PCQDFGKRMFVLK-', str(hsp.query.seq)[:40])
self.assertEqual('+ +Y P G I L+G +TS GK F+ + ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('LHYYDPAGGEDPLGAIHLRGCVVTSVESNTDGKNGFLWER', str(hsp.hit.seq)[:40])
self.assertEqual('GKRMFVLK---ITTTKQQDHFFQAAFLEERDAWVRDIKKA', str(hsp.query.seq)[-40:])
self.assertEqual('GK F+ + T + +F QAA +ER W++ I+ A', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('GKNGFLWERAXXITADEVHYFLQAANPKERTEWIKAIQVA', str(hsp.hit.seq)[-40:])
# third qresult, second hit
hit = qresult[1]
self.assertEqual('gi|301779869|ref|XM_002925302.1|', hit.id)
self.assertEqual('PREDICTED: Ailuropoda melanoleuca pleckstrin-like (LOC100466932), mRNA', hit.description)
self.assertEqual(1144, hit.seq_len)
self.assertEqual(2, len(hit))
# third qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(98, hsp.aln_span)
self.assertEqual(2e-67, hsp.evalue)
self.assertEqual(202.0, hsp.bitscore)
self.assertEqual(515.0, hsp.bitscore_raw)
self.assertEqual(3, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(96, hsp.ident_num)
self.assertEqual(97, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(77, hsp.hit_start)
self.assertEqual(98, hsp.query_end)
self.assertEqual(371, hsp.hit_end)
self.assertEqual('KRIREGYLVKKGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('KRIREGYLVK+GSVFNTWKPMWVVLLEDGIEFYKKKSDNS', hsp.aln_annotation['similarity'][:40])
self.assertEqual('KRIREGYLVKRGSVFNTWKPMWVVLLEDGIEFYKKKSDNS', str(hsp.hit.seq)[:40])
self.assertEqual('FGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.query.seq)[-40:])
self.assertEqual('FGKRMFV KITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('FGKRMFVFKITTTKQQDHFFQAAFLEERDAWVRDIKKAIK', str(hsp.hit.seq)[-40:])
# third qresult, second hit, second hsp
hsp = qresult[1].hsps[1]
self.assertEqual(100, hsp.aln_span)
self.assertEqual(3e-09, hsp.evalue)
self.assertEqual(45.1, hsp.bitscore)
self.assertEqual(105.0, hsp.bitscore_raw)
self.assertEqual(3, hsp.query_frame)
self.assertEqual(0, hsp.hit_frame)
self.assertEqual(6, hsp.gap_num)
self.assertEqual(30, hsp.ident_num)
self.assertEqual(48, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(0, hsp.hit_strand)
self.assertEqual(2, hsp.query_start)
self.assertEqual(803, hsp.hit_start)
self.assertEqual(96, hsp.query_end)
self.assertEqual(1103, hsp.hit_end)
self.assertEqual('IREGYLVKKGSVFNTWKPMWVVLLEDG--IEFYKKKSDNS', str(hsp.query.seq)[:40])
self.assertEqual('I++G L+K+G WK +L ED + +Y ', hsp.aln_annotation['similarity'][:40])
self.assertEqual('IKQGCLLKQGHRRKNWKVRKFILREDPAYLHYYDPAGGED', str(hsp.hit.seq)[:40])
self.assertEqual('QDFGKRMFVLKITTTKQQDHFFQAAFLEERDAWVRDIKKA', str(hsp.query.seq)[-40:])
self.assertEqual(' + + +I T + +F QAA +ER W++ I+ A', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('VRKSEEENLFEIITADEVHYFLQAATPKERTEWIKAIQVA', str(hsp.hit.seq)[-40:])

class TblastxCases(BaseBlastCases):
def test_text_2226_tblastx_001(self):
"""Test parsing tblastx output (text_2226_tblastx_001.txt)"""
blast_file = get_file('text_2226_tblastx_001.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(128, qresult.seq_len)
self.assertEqual('NCBI Transcript Reference Sequences', qresult.target)
self.assertEqual('tblastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))

def test_text_2226_tblastx_002(self):
"""Test parsing tblastx output (text_2226_tblastx_002.txt)"""
blast_file = get_file('text_2226_tblastx_002.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('gi|356995852:1-490', qresult.id)
self.assertEqual('Mus musculus POU domain, class 5, transcriptionfactor 1 (Pou5f1), transcript variant 1, mRNA', qresult.description)
self.assertEqual(490, qresult.seq_len)
self.assertEqual('NCBI Transcript Reference Sequences', qresult.target)
self.assertEqual('tblastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(10, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|NM_013633.3|', hit.id)
self.assertEqual('Mus musculus POU domain, class 5, transcription factor 1 (Pou5f1), transcript variant 1, mRNA', hit.description)
self.assertEqual(1353, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(163, hsp.aln_span)
self.assertEqual(2e-115, hsp.evalue)
self.assertEqual(418.0, hsp.bitscore)
self.assertEqual(908.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(163, hsp.ident_num)
self.assertEqual(163, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(489, hsp.query_end)
self.assertEqual(489, hsp.hit_end)
self.assertEqual('EVKPSLGEPSFHQAPGSGCPPSPWLDTWLQTSPSHPHQVG', str(hsp.query.seq)[:40])
self.assertEqual('EVKPSLGEPSFHQAPGSGCPPSPWLDTWLQTSPSHPHQVG', hsp.aln_annotation['similarity'][:40])
self.assertEqual('EVKPSLGEPSFHQAPGSGCPPSPWLDTWLQTSPSHPHQVG', str(hsp.hit.seq)[:40])
self.assertEqual('TQREPPLSPVPTAPMP*SWRRWNQLPRSPRT*KPCRRS*N', str(hsp.query.seq)[-40:])
self.assertEqual('TQREPPLSPVPTAPMP*SWRRWNQLPRSPRT*KPCRRS*N', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('TQREPPLSPVPTAPMP*SWRRWNQLPRSPRT*KPCRRS*N', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|XR_141831.1|', hit.id)
self.assertEqual('PREDICTED: Mus musculus predicted gene, 19553 (Gm19553), miscRNA ref|XR_105837.2| PREDICTED: Mus musculus predicted gene, 19553 (Gm19553), miscRNA ref|XR_141464.1| PREDICTED: Mus musculus predicted gene, 19553 (Gm19553), miscRNA ref|XR_141446.1| PREDICTED: Mus musculus predicted gene, 19553 (Gm19553), miscRNA', hit.description)
self.assertEqual(570, hit.seq_len)
self.assertEqual(1, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(163, hsp.aln_span)
self.assertEqual(3e-114, hsp.evalue)
self.assertEqual(415.0, hsp.bitscore)
self.assertEqual(900.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(-1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(162, hsp.ident_num)
self.assertEqual(162, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(-1, hsp.hit_strand)
self.assertEqual(0, hsp.query_start)
self.assertEqual(81, hsp.hit_start)
self.assertEqual(489, hsp.query_end)
self.assertEqual(570, hsp.hit_end)
self.assertEqual('EVKPSLGEPSFHQAPGSGCPPSPWLDTWLQTSPSHPHQVG', str(hsp.query.seq)[:40])
self.assertEqual('EVKPSLGEPSFHQAPGSGCPPSPWLDTWLQTSPSHPHQVG', hsp.aln_annotation['similarity'][:40])
self.assertEqual('EVKPSLGEPSFHQAPGSGCPPSPWLDTWLQTSPSHPHQVG', str(hsp.hit.seq)[:40])
self.assertEqual('TQREPPLSPVPTAPMP*SWRRWNQLPRSPRT*KPCRRS*N', str(hsp.query.seq)[-40:])
self.assertEqual('TQREPPLSPVPTAPMP*SWRRWNQL RSPRT*KPCRRS*N', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('TQREPPLSPVPTAPMP*SWRRWNQLQRSPRT*KPCRRS*N', str(hsp.hit.seq)[-40:])

def test_text_2226_tblastx_003(self):
"""Test parsing tblastx output (text_2226_tblastx_003.txt)"""
blast_file = get_file('text_2226_tblastx_003.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(1, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('hg19_dna', qresult.id)
self.assertEqual('range=chr1:1207057-1207541 5\'pad=0 3\'pad=0 strand=+repeatMasking=none', qresult.description)
self.assertEqual(485, qresult.seq_len)
self.assertEqual('NCBI Transcript Reference Sequences', qresult.target)
self.assertEqual('tblastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(10, len(qresult))
# first qresult, first hit
hit = qresult[0]
self.assertEqual('ref|NM_002985.2|', hit.id)
self.assertEqual('Homo sapiens chemokine (C-C motif) ligand 5 (CCL5), mRNA', hit.description)
self.assertEqual(1237, hit.seq_len)
self.assertEqual(3, len(hit))
# first qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(107, hsp.aln_span)
self.assertEqual(4e-49, hsp.evalue)
self.assertEqual(118.0, hsp.bitscore)
self.assertEqual(252.0, hsp.bitscore_raw)
self.assertEqual(-3, hsp.query_frame)
self.assertEqual(-1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(68, hsp.ident_num)
self.assertEqual(72, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(-1, hsp.hit_strand)
self.assertEqual(138, hsp.query_start)
self.assertEqual(622, hsp.hit_start)
self.assertEqual(459, hsp.query_end)
self.assertEqual(943, hsp.hit_end)
self.assertEqual('VGPARVQ*HDLSSLQPPAPEFK*FSHLSLQSSWDCRCPPP', str(hsp.query.seq)[:40])
self.assertEqual('V A V+ H+LSSLQPP P FK FS LSL SSWD R PP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('VTQAGVKWHNLSSLQPPPPGFKQFSCLSLPSSWDYRRGPP', str(hsp.hit.seq)[:40])
self.assertEqual('WLDLGppqpppPGFK*FSCLSHPSSWDYRHMPPCLINFVF', str(hsp.query.seq)[-40:])
self.assertEqual('W DLG Q PPPGF FSCLS PSSWDYR P NF++', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('WRDLGSLQAPPPGFTPFSCLSLPSSWDYRRPLPRPANFLY', str(hsp.hit.seq)[-40:])
# first qresult, first hit, second hsp
hsp = qresult[0].hsps[1]
self.assertEqual(44, hsp.aln_span)
self.assertEqual(4e-49, hsp.evalue)
self.assertEqual(100.0, hsp.bitscore)
self.assertEqual(214.0, hsp.bitscore_raw)
self.assertEqual(-2, hsp.query_frame)
self.assertEqual(-2, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(37, hsp.ident_num)
self.assertEqual(38, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(-1, hsp.hit_strand)
self.assertEqual(16, hsp.query_start)
self.assertEqual(498, hsp.hit_start)
self.assertEqual(148, hsp.query_end)
self.assertEqual(630, hsp.hit_end)
self.assertEqual('FCIFSRDGVLPCWSGWSRTPDLR*SACLGLPKCWDYRCEP', str(hsp.query.seq)[:40])
self.assertEqual('FCIFSRDGV CW GWSRTPDL+*S LGLPKCWDYR EP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('FCIFSRDGVSSCWPGWSRTPDLK*STHLGLPKCWDYRREP', str(hsp.hit.seq)[:40])
self.assertEqual('SRDGVLPCWSGWSRTPDLR*SACLGLPKCWDYRCEPPRPA', str(hsp.query.seq)[-40:])
self.assertEqual('SRDGV CW GWSRTPDL+*S LGLPKCWDYR EPPRPA', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('SRDGVSSCWPGWSRTPDLK*STHLGLPKCWDYRREPPRPA', str(hsp.hit.seq)[-40:])
# first qresult, second hit
hit = qresult[1]
self.assertEqual('ref|XM_003255417.1|', hit.id)
self.assertEqual('PREDICTED: Nomascus leucogenys 5\'-nucleotidase, cytosolic II, transcript variant 2 (NT5C2), mRNA', hit.description)
self.assertEqual(3285, hit.seq_len)
self.assertEqual(3, len(hit))
# first qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(94, hsp.aln_span)
self.assertEqual(9e-49, hsp.evalue)
self.assertEqual(197.0, hsp.bitscore)
self.assertEqual(425.0, hsp.bitscore_raw)
self.assertEqual(-2, hsp.query_frame)
self.assertEqual(-2, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(78, hsp.ident_num)
self.assertEqual(79, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(-1, hsp.hit_strand)
self.assertEqual(16, hsp.query_start)
self.assertEqual(2744, hsp.hit_start)
self.assertEqual(298, hsp.query_end)
self.assertEqual(3026, hsp.hit_end)
self.assertEqual('ETEFRSCCPGWSAMA*SWPTTASTSWIQVILLPQSPE*LG', str(hsp.query.seq)[:40])
self.assertEqual('E EFRSCCPGWSAMA SW S SW+QVIL PQ PE*LG', hsp.aln_annotation['similarity'][:40])
self.assertEqual('EMEFRSCCPGWSAMAQSWLIATSVSWVQVILWPQPPE*LG', str(hsp.hit.seq)[:40])
self.assertEqual('SRDGVLPCWSGWSRTPDLR*SACLGLPKCWDYRCEPPRPA', str(hsp.query.seq)[-40:])
self.assertEqual('SRDGV PCWSGWSRTPDLR*SACLGLPKCWDYR EPP PA', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('SRDGVSPCWSGWSRTPDLR*SACLGLPKCWDYRREPPCPA', str(hsp.hit.seq)[-40:])
# first qresult, second hit, second hsp
hsp = qresult[1].hsps[1]
self.assertEqual(94, hsp.aln_span)
self.assertEqual(4e-43, hsp.evalue)
self.assertEqual(178.0, hsp.bitscore)
self.assertEqual(384.0, hsp.bitscore_raw)
self.assertEqual(3, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(77, hsp.ident_num)
self.assertEqual(83, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(17, hsp.query_start)
self.assertEqual(2745, hsp.hit_start)
self.assertEqual(299, hsp.query_end)
self.assertEqual(3027, hsp.hit_end)
self.assertEqual('AGRGGSHL*SQHFGRPRQADYLRSGVRDQPDQHGKTPSLL', str(hsp.query.seq)[:40])
self.assertEqual('AG GGS L*SQHFGRPRQAD+LRSGVRDQPDQHG+TPSLL', hsp.aln_annotation['similarity'][:40])
self.assertEqual('AGHGGSRL*SQHFGRPRQADHLRSGVRDQPDQHGETPSLL', str(hsp.hit.seq)[:40])
self.assertEqual('PSYSGD*GRRIT*IQEVEAVVGQDQAIALQPGQQERNSVS', str(hsp.query.seq)[-40:])
self.assertEqual('PSYSG *G+RIT* QE E + QD AIALQPGQQERNS+S', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('PSYSGG*GQRIT*TQETEVAMSQDCAIALQPGQQERNSIS', str(hsp.hit.seq)[-40:])
def test_text_2226_tblastx_004(self):
"""Test parsing tblastx output (text_2226_tblastx_004.txt)"""
blast_file = get_file('text_2226_tblastx_004.txt')
qresults = list(parse(blast_file, FMT))
self.assertEqual(2, len(qresults))
self.check_common_attrs(qresults)
# test first qresult
qresult = qresults[0]
self.assertEqual('random_s00', qresult.id)
self.assertEqual('', qresult.description)
self.assertEqual(128, qresult.seq_len)
self.assertEqual('minirefseq_mrna', qresult.target)
self.assertEqual('tblastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(0, len(qresult))
# test second qresult
qresult = qresults[1]
self.assertEqual('gi|296147483:1-350', qresult.id)
self.assertEqual('Saccharomyces cerevisiae S288c Mon2p (MON2) mRNA,complete cds', qresult.description)
self.assertEqual(350, qresult.seq_len)
self.assertEqual('minirefseq_mrna', qresult.target)
self.assertEqual('tblastx', qresult.program)
self.assertEqual('2.2.26+', qresult.version)
self.assertEqual(5, len(qresult))
# second qresult, first hit
hit = qresult[0]
self.assertEqual('gi|296147483|ref|NM_001183135.1|', hit.id)
self.assertEqual('Saccharomyces cerevisiae S288c Mon2p (MON2) mRNA, complete cds', hit.description)
self.assertEqual(4911, hit.seq_len)
self.assertEqual(8, len(hit))
# second qresult, first hit, first hsp
hsp = qresult[0].hsps[0]
self.assertEqual(116, hsp.aln_span)
self.assertEqual(2e-81, hsp.evalue)
self.assertEqual(289.0, hsp.bitscore)
self.assertEqual(626.0, hsp.bitscore_raw)
self.assertEqual(2, hsp.query_frame)
self.assertEqual(2, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(116, hsp.ident_num)
self.assertEqual(116, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(1, hsp.query_start)
self.assertEqual(1, hsp.hit_start)
self.assertEqual(349, hsp.query_end)
self.assertEqual(349, hsp.hit_end)
self.assertEqual('WP*TLEGLTPCKGNLKQNCVLYLPNRKEEIQPFAMLVINP', str(hsp.query.seq)[:40])
self.assertEqual('WP*TLEGLTPCKGNLKQNCVLYLPNRKEEIQPFAMLVINP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('WP*TLEGLTPCKGNLKQNCVLYLPNRKEEIQPFAMLVINP', str(hsp.hit.seq)[:40])
self.assertEqual('WQCNAYRDCQPFHLFLEAGCLKFWMPSLRLLISRWRFN*K', str(hsp.query.seq)[-40:])
self.assertEqual('WQCNAYRDCQPFHLFLEAGCLKFWMPSLRLLISRWRFN*K', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('WQCNAYRDCQPFHLFLEAGCLKFWMPSLRLLISRWRFN*K', str(hsp.hit.seq)[-40:])
# second qresult, first hit, second hsp
hsp = qresult[0].hsps[1]
self.assertEqual(116, hsp.aln_span)
self.assertEqual(5e-78, hsp.evalue)
self.assertEqual(278.0, hsp.bitscore)
self.assertEqual(602.0, hsp.bitscore_raw)
self.assertEqual(-2, hsp.query_frame)
self.assertEqual(-3, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(116, hsp.ident_num)
self.assertEqual(116, hsp.pos_num)
self.assertEqual(-1, hsp.query_strand)
self.assertEqual(-1, hsp.hit_strand)
self.assertEqual(1, hsp.query_start)
self.assertEqual(1, hsp.hit_start)
self.assertEqual(349, hsp.query_end)
self.assertEqual(349, hsp.hit_end)
self.assertEqual('LLIESPSRDE*PQ*RHPKFQTAGFEE*MERLTVPVGIALP', str(hsp.query.seq)[:40])
self.assertEqual('LLIESPSRDE*PQ*RHPKFQTAGFEE*MERLTVPVGIALP', hsp.aln_annotation['similarity'][:40])
self.assertEqual('LLIESPSRDE*PQ*RHPKFQTAGFEE*MERLTVPVGIALP', str(hsp.hit.seq)[:40])
self.assertEqual('WIYH*HGEWLNFFFSIRKIKNAILLQVAFAWSQTLQCSWP', str(hsp.query.seq)[-40:])
self.assertEqual('WIYH*HGEWLNFFFSIRKIKNAILLQVAFAWSQTLQCSWP', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('WIYH*HGEWLNFFFSIRKIKNAILLQVAFAWSQTLQCSWP', str(hsp.hit.seq)[-40:])
# second qresult, second hit
hit = qresult[1]
self.assertEqual('gi|365982352|ref|XM_003667962.1|', hit.id)
self.assertEqual('Naumovozyma dairenensis CBS 421 hypothetical protein (NDAI0A06120), mRNA', hit.description)
self.assertEqual(4932, hit.seq_len)
self.assertEqual(10, len(hit))
# second qresult, second hit, first hsp
hsp = qresult[1].hsps[0]
self.assertEqual(85, hsp.aln_span)
self.assertEqual(5e-42, hsp.evalue)
self.assertEqual(152.0, hsp.bitscore)
self.assertEqual(327.0, hsp.bitscore_raw)
self.assertEqual(1, hsp.query_frame)
self.assertEqual(1, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(62, hsp.ident_num)
self.assertEqual(73, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(93, hsp.query_start)
self.assertEqual(87, hsp.hit_start)
self.assertEqual(348, hsp.query_end)
self.assertEqual(342, hsp.hit_end)
self.assertEqual('TIRHASDKSIEILKRVHSFEELERHPDFALPFVLACQSRN', str(hsp.query.seq)[:40])
self.assertEqual('TI+HASDKSI+ILK + + EEL RHPDF P VLAC SRN', hsp.aln_annotation['similarity'][:40])
self.assertEqual('TIKHASDKSIDILKTIQNIEELVRHPDFVTPLVLACSSRN', str(hsp.hit.seq)[:40])
self.assertEqual('LAMQCLQGLSTVPSIPRSRLSEILDAFIEATHLAMEIQLK', str(hsp.query.seq)[-40:])
self.assertEqual('+AMQCLQGL++VPSIP SR+ E+LD FIEAT LAMEIQLK', hsp.aln_annotation['similarity'][-40:])
self.assertEqual('IAMQCLQGLASVPSIPESRIPEVLDGFIEATQLAMEIQLK', str(hsp.hit.seq)[-40:])
# second qresult, second hit, second hsp
hsp = qresult[1].hsps[1]
self.assertEqual(14, hsp.aln_span)
self.assertEqual(5e-42, hsp.evalue)
self.assertEqual(26.3, hsp.bitscore)
self.assertEqual(51.0, hsp.bitscore_raw)
self.assertEqual(3, hsp.query_frame)
self.assertEqual(3, hsp.hit_frame)
self.assertEqual(0, hsp.gap_num)
self.assertEqual(11, hsp.ident_num)
self.assertEqual(11, hsp.pos_num)
self.assertEqual(1, hsp.query_strand)
self.assertEqual(1, hsp.hit_strand)
self.assertEqual(68, hsp.query_start)
self.assertEqual(62, hsp.hit_start)
self.assertEqual(110, hsp.query_end)
self.assertEqual(104, hsp.hit_end)
self.assertEqual('FRIEKKKFNHSPC*', str(hsp.query.seq))
self.assertEqual('FRI KKKFNH C*', hsp.aln_annotation['similarity'])
self.assertEqual('FRI*KKKFNH*TC*', str(hsp.hit.seq))
if __name__ == "__main__":
runner = unittest.TextTestRunner(verbosity=2)
unittest.main(testRunner=runner)
| 53.610742 | 557 | 0.667462 | 12,630 | 104,809 | 5.438401 | 0.063341 | 0.326263 | 0.065587 | 0.046297 | 0.934835 | 0.903169 | 0.895555 | 0.872014 | 0.839708 | 0.835922 | 0 | 0.053489 | 0.196052 | 104,809 | 1,954 | 558 | 53.638178 | 0.761681 | 0.047057 | 0 | 0.793124 | 0 | 0.001748 | 0.211475 | 0.118664 | 0 | 0 | 0 | 0 | 0.870629 | 1 | 0.012821 | false | 0 | 0.002914 | 0 | 0.019814 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
d491a53217ac68134582b7e7878071d95c2467af | 1,453 | py | Python | zaci.py | Ortrvis/ucimepajtn | 2af916ef00816c277214283c280e18eef5b68aa4 | [
"BSD-3-Clause"
] | null | null | null | zaci.py | Ortrvis/ucimepajtn | 2af916ef00816c277214283c280e18eef5b68aa4 | [
"BSD-3-Clause"
] | null | null | null | zaci.py | Ortrvis/ucimepajtn | 2af916ef00816c277214283c280e18eef5b68aa4 | [
"BSD-3-Clause"
] | null | null | null | x = ".............."
tridy = [
    {"Oznaceni": "IT1A", "Pocet zaku": 16},
    {"Oznaceni": "IT1B", "Pocet zaku": 17},
    {"Oznaceni": "IT2A", "Pocet zaku": 20},
    {"Oznaceni": "IT2B", "Pocet zaku": 19},
    {"Oznaceni": "IT3A", "Pocet zaku": 15},
    {"Oznaceni": "IT3B", "Pocet zaku": 14},
]
print(f"{x}")
for trida in tridy:
    print(f'Oznaceni: {trida["Oznaceni"]}')
    print(f'Pocet zaku: {trida["Pocet zaku"]}')
    print(f"{x}")
print(f'Oznaceni: {tridy[0]["Oznaceni"]}')
print(f'Pocet zaku: {tridy[0]["Pocet zaku"]}')
| 23.063492 | 48 | 0.527873 | 187 | 1,453 | 4.101604 | 0.106952 | 0.305085 | 0.063885 | 0.109518 | 0.998696 | 0.800522 | 0.800522 | 0.800522 | 0.800522 | 0.735332 | 0 | 0.042626 | 0.192705 | 1,453 | 62 | 49 | 23.435484 | 0.611253 | 0 | 0 | 0.833333 | 0 | 0 | 0.557554 | 0.110791 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4375 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 9 |
d4dff69af95ed34c04649b3487a5034594d3c6b2 | 3,800 | py | Python | tests/proxy/transactional_multi_map_test.py | buraksezer/hazelcast-python-client | 4cc593ef7de994bd84fdac8331b81b309cce30a0 | [
"Apache-2.0"
] | 3 | 2020-05-01T15:01:54.000Z | 2021-01-27T14:51:45.000Z | tests/proxy/transactional_multi_map_test.py | buraksezer/hazelcast-python-client | 4cc593ef7de994bd84fdac8331b81b309cce30a0 | [
"Apache-2.0"
] | null | null | null | tests/proxy/transactional_multi_map_test.py | buraksezer/hazelcast-python-client | 4cc593ef7de994bd84fdac8331b81b309cce30a0 | [
"Apache-2.0"
] | 1 | 2020-12-01T20:00:35.000Z | 2020-12-01T20:00:35.000Z | from tests.base import SingleMemberTestCase
from tests.util import random_string
from hazelcast import six
class TransactionalMultiMapTest(SingleMemberTestCase):
def setUp(self):
self.multi_map = self.client.get_multi_map(random_string()).blocking()
def test_put(self):
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
self.assertTrue(tx_multi_map.put("key", "value-1"))
self.assertTrue(tx_multi_map.put("key", "value-2"))
six.assertCountEqual(self, self.multi_map.get("key"), ["value-1", "value-2"])
def test_put_when_present(self):
self.multi_map.put("key", "value-1")
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
self.assertFalse(tx_multi_map.put("key", "value-1"))
six.assertCountEqual(self, self.multi_map.get("key"), ["value-1"])
def test_get(self):
self.multi_map.put("key", "value-1")
self.multi_map.put("key", "value-2")
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
six.assertCountEqual(self, tx_multi_map.get("key"), ["value-1", "value-2"])
def test_get_when_missing(self):
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
self.assertEqual(tx_multi_map.get("key"), [])
def test_size(self):
self.multi_map.put("key", "value-1")
self.multi_map.put("key", "value-2")
self.multi_map.put("key", "value-3")
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
            self.assertEqual(3, tx_multi_map.size())
def test_remove_all(self):
self.multi_map.put("key", "value-1")
self.multi_map.put("key", "value-2")
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
six.assertCountEqual(self, ["value-1", "value-2"], tx_multi_map.remove_all("key"))
self.assertFalse(self.multi_map.contains_key("key"))
def test_remove_all_when_missing(self):
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
self.assertEqual([], tx_multi_map.remove_all("key"))
def test_remove_when_same(self):
self.multi_map.put("key", "value-1")
self.multi_map.put("key", "value-2")
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
self.assertTrue(tx_multi_map.remove("key", "value-1"))
six.assertCountEqual(self, ["value-2"], self.multi_map.get("key"))
def test_remove_when_different(self):
self.multi_map.put("key", "value-1")
self.multi_map.put("key", "value-2")
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
self.assertFalse(tx_multi_map.remove("key", "value-3"))
six.assertCountEqual(self, ["value-1", "value-2"], self.multi_map.get("key"))
def test_value_count(self):
self.multi_map.put("key", "value-1")
self.multi_map.put("key", "value-2")
self.multi_map.put("key", "value-3")
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
self.assertEqual(tx_multi_map.value_count("key"), 3)
def test_str(self):
with self.client.new_transaction() as tx:
tx_multi_map = tx.get_multi_map(self.multi_map.name)
self.assertTrue(str(tx_multi_map).startswith("TransactionalMultiMap"))
| 39.583333 | 94 | 0.646316 | 553 | 3,800 | 4.18264 | 0.090416 | 0.231734 | 0.166018 | 0.108949 | 0.819282 | 0.806744 | 0.763511 | 0.740164 | 0.708171 | 0.681366 | 0 | 0.010645 | 0.208947 | 3,800 | 95 | 95 | 40 | 0.758816 | 0 | 0 | 0.521127 | 0 | 0 | 0.084474 | 0.005526 | 0 | 0 | 0 | 0 | 0.239437 | 1 | 0.169014 | false | 0 | 0.042254 | 0 | 0.225352 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d4f7fb32c615e83e0526a1fe7a966093346c4462 | 1,174 | py | Python | swpt_creditors/inspect_ops.py | epandurski/swpt_creditors | 35b9c6fa8ec84fe26e203a2604aff9cd5280dc4c | [
"MIT"
] | null | null | null | swpt_creditors/inspect_ops.py | epandurski/swpt_creditors | 35b9c6fa8ec84fe26e203a2604aff9cd5280dc4c | [
"MIT"
] | null | null | null | swpt_creditors/inspect_ops.py | epandurski/swpt_creditors | 35b9c6fa8ec84fe26e203a2604aff9cd5280dc4c | [
"MIT"
] | 1 | 2020-01-16T13:24:31.000Z | 2020-01-16T13:24:31.000Z | """Implement functions that inspect operations susceptible to DOS attacks."""
class ForbiddenOperation(Exception):
"""The operation is forbidden."""
def allow_account_creation(creditor_id: int, debtor_id: int) -> None:
"""May Raise `ForbiddenOperation`."""
def register_account_creation(creditor_id: int, debtor_id: int) -> None:
increment_account_number(creditor_id, debtor_id)
def allow_transfer_creation(creditor_id: int, debtor_id: int) -> None:
"""May Raise `ForbiddenOperation`."""
def register_transfer_creation(creditor_id: int, debtor_id: int) -> None:
increment_transfer_number(creditor_id, debtor_id)
def allow_account_reconfig(creditor_id: int, debtor_id: int) -> None:
"""May Raise `ForbiddenOperation`."""
def register_account_reconfig(creditor_id: int, debtor_id: int) -> None:
pass
def increment_account_number(creditor_id: int, debtor_id: int) -> None:
pass
def decrement_account_number(creditor_id: int, debtor_id: int) -> None:
pass
def increment_transfer_number(creditor_id: int, debtor_id: int) -> None:
pass
def decrement_transfer_number(creditor_id: int, debtor_id: int) -> None:
pass
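# A minimal sketch of how one of the `allow_*` hooks above could enforce a
# limit; `get_account_number` and `MAX_ACCOUNTS_PER_CREDITOR` are hypothetical
# stand-ins for whatever counter the increment/decrement helpers maintain:
#
#     def allow_account_creation(creditor_id: int, debtor_id: int) -> None:
#         """May raise `ForbiddenOperation`."""
#         if get_account_number(creditor_id, debtor_id) >= MAX_ACCOUNTS_PER_CREDITOR:
#             raise ForbiddenOperation("account creation limit reached")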
| 25.521739 | 77 | 0.745315 | 154 | 1,174 | 5.37013 | 0.220779 | 0.120919 | 0.157195 | 0.229746 | 0.848851 | 0.830713 | 0.830713 | 0.753325 | 0.753325 | 0.588875 | 0 | 0 | 0.143952 | 1,174 | 45 | 78 | 26.088889 | 0.822886 | 0.166099 | 0 | 0.277778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.555556 | false | 0.277778 | 0 | 0 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 9 |
be02a5e2e93698025010b4da8bd2bfd3429f11a7 | 2,001 | py | Python | tests/components/canary/conftest.py | pcaston/core | e74d946cef7a9d4e232ae9e0ba150d18018cfe33 | [
"Apache-2.0"
] | 1 | 2021-07-08T20:09:55.000Z | 2021-07-08T20:09:55.000Z | tests/components/canary/conftest.py | pcaston/core | e74d946cef7a9d4e232ae9e0ba150d18018cfe33 | [
"Apache-2.0"
] | 47 | 2021-02-21T23:43:07.000Z | 2022-03-31T06:07:10.000Z | tests/components/canary/conftest.py | OpenPeerPower/core | f673dfac9f2d0c48fa30af37b0a99df9dd6640ee | [
"Apache-2.0"
] | null | null | null | """Define fixtures available for all tests."""
from unittest.mock import MagicMock, patch
from canary.api import Api
from pytest import fixture
@fixture(autouse=True)
def mock_ffmpeg(opp):
"""Mock ffmpeg is loaded."""
opp.config.components.add("ffmpeg")
@fixture
def canary(opp):
"""Mock the CanaryApi for easier testing."""
with patch.object(Api, "login", return_value=True), patch(
"openpeerpower.components.canary.Api"
) as mock_canary:
instance = mock_canary.return_value = Api(
"test-username",
"test-password",
1,
)
instance.login = MagicMock(return_value=True)
instance.get_entries = MagicMock(return_value=[])
instance.get_locations = MagicMock(return_value=[])
instance.get_location = MagicMock(return_value=None)
instance.get_modes = MagicMock(return_value=[])
instance.get_readings = MagicMock(return_value=[])
instance.get_latest_readings = MagicMock(return_value=[])
instance.set_location_mode = MagicMock(return_value=None)
yield mock_canary
@fixture
def canary_config_flow(opp):
"""Mock the CanaryApi for easier config flow testing."""
with patch.object(Api, "login", return_value=True), patch(
"openpeerpower.components.canary.config_flow.Api"
) as mock_canary:
instance = mock_canary.return_value = Api(
"test-username",
"test-password",
1,
)
instance.login = MagicMock(return_value=True)
instance.get_entries = MagicMock(return_value=[])
instance.get_locations = MagicMock(return_value=[])
instance.get_location = MagicMock(return_value=None)
instance.get_modes = MagicMock(return_value=[])
instance.get_readings = MagicMock(return_value=[])
instance.get_latest_readings = MagicMock(return_value=[])
instance.set_location_mode = MagicMock(return_value=None)
yield mock_canary
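# Illustrative consumption of the `canary` fixture above; the test body below
# is a hypothetical example, not part of this conftest:
#
#     async def test_locations_are_mocked(opp, canary):
#         instance = canary.return_value
#         assert instance.login() is True
#         assert instance.get_locations() == []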
| 33.35 | 65 | 0.671164 | 227 | 2,001 | 5.704846 | 0.237885 | 0.169884 | 0.247104 | 0.216216 | 0.801544 | 0.801544 | 0.758301 | 0.758301 | 0.758301 | 0.758301 | 0 | 0.001283 | 0.22089 | 2,001 | 59 | 66 | 33.915254 | 0.829378 | 0.076462 | 0 | 0.727273 | 0 | 0 | 0.082102 | 0.044882 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0.045455 | 0.068182 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
076ca9101cd791817c8edd424ae0e908e1ff7f14 | 3,272 | py | Python | test/programytest/parser/template/graph_tests/rdf_tests/test_deletetriple.py | motazsaad/fit-bot-fb-clt | 580477aa1ec91855b621d9ae276f2705962f6a87 | [
"MIT"
] | 5 | 2018-08-21T00:13:45.000Z | 2018-09-01T20:00:55.000Z | test/programytest/parser/template/graph_tests/rdf_tests/test_deletetriple.py | motazsaad/fit-bot-fb-clt | 580477aa1ec91855b621d9ae276f2705962f6a87 | [
"MIT"
] | 1 | 2018-09-12T18:30:17.000Z | 2018-09-12T18:30:17.000Z | test/programytest/parser/template/graph_tests/rdf_tests/test_deletetriple.py | motazsaad/fit-bot-fb-clt | 580477aa1ec91855b621d9ae276f2705962f6a87 | [
"MIT"
] | 5 | 2018-08-21T00:08:36.000Z | 2018-09-23T06:11:04.000Z | import xml.etree.ElementTree as ET
from programy.parser.template.nodes.base import TemplateNode
from programy.parser.template.nodes.deletetriple import TemplateDeleteTripleNode
from programytest.parser.template.graph_tests.graph_test_client import TemplateGraphTestClient
class TemplateGraphDeleteTripleTests(TemplateGraphTestClient):
def test_delete_triple_type1(self):
self.assertFalse(self._client_context.brain.rdf.has_object("X", "Y", "Z"))
self._client_context.brain.rdf.add_entity("X", "Y", "Z", "LETTERS")
self.assertTrue(self._client_context.brain.rdf.has_object("X", "Y", "Z"))
template = ET.fromstring("""
<template>
<deletetriple>
<subj>X</subj>
<pred>Y</pred>
<obj>Z</obj>
</deletetriple>
</template>
""")
ast = self._graph.parse_template_expression(template)
self.assertIsNotNone(ast)
self.assertIsInstance(ast, TemplateNode)
self.assertIsNotNone(ast.children)
self.assertIsNotNone(ast.children[0])
self.assertIsInstance(ast.children[0], TemplateDeleteTripleNode)
self.assertEqual(0, len(ast.children[0].children))
result = ast.resolve(self._client_context)
self.assertIsNotNone(result)
self.assertFalse(self._client_context.brain.rdf.has_object("X", "Y", "Z"))
def test_delete_triple_type2(self):
self.assertFalse(self._client_context.brain.rdf.has_object("X", "Y", "Z"))
self._client_context.brain.rdf.add_entity("X", "Y", "Z", "LETTERS")
self.assertTrue(self._client_context.brain.rdf.has_object("X", "Y", "Z"))
template = ET.fromstring("""
<template>
<deletetriple subj="X" pred="Y" obj="Z">
</deletetriple>
</template>
""")
ast = self._graph.parse_template_expression(template)
self.assertIsNotNone(ast)
self.assertIsInstance(ast, TemplateNode)
self.assertIsNotNone(ast.children)
self.assertIsNotNone(ast.children[0])
self.assertIsInstance(ast.children[0], TemplateDeleteTripleNode)
self.assertEqual(0, len(ast.children[0].children))
result = ast.resolve(self._client_context)
self.assertIsNotNone(result)
self.assertFalse(self._client_context.brain.rdf.has_object("X", "Y", "Z"))
def test_delete_triple_type3(self):
self.assertFalse(self._client_context.brain.rdf.has_object("X", "Y", "Z"))
self._client_context.brain.rdf.add_entity("X", "Y", "Z", "LETTERS")
self.assertTrue(self._client_context.brain.rdf.has_object("X", "Y", "Z"))
template = ET.fromstring("""
<template>
<deletetriple subj="X" pred="Y" obj="Z" />
</template>
""")
ast = self._graph.parse_template_expression(template)
self.assertIsNotNone(ast)
self.assertIsInstance(ast, TemplateNode)
self.assertIsNotNone(ast.children)
self.assertIsNotNone(ast.children[0])
self.assertIsInstance(ast.children[0], TemplateDeleteTripleNode)
self.assertEqual(0, len(ast.children[0].children))
result = ast.resolve(self._client_context)
self.assertIsNotNone(result)
self.assertFalse(self._client_context.brain.rdf.has_object("X", "Y", "Z"))
| 37.609195 | 94 | 0.673594 | 374 | 3,272 | 5.724599 | 0.15508 | 0.070061 | 0.119103 | 0.123307 | 0.865016 | 0.836058 | 0.836058 | 0.836058 | 0.836058 | 0.836058 | 0 | 0.005626 | 0.185208 | 3,272 | 86 | 95 | 38.046512 | 0.797449 | 0 | 0 | 0.791045 | 0 | 0 | 0.121638 | 0 | 0 | 0 | 0 | 0 | 0.447761 | 1 | 0.044776 | false | 0 | 0.059701 | 0 | 0.119403 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
077c0cbb43c07649cd5cabb9c860d6887953b2aa | 8,501 | py | Python | test/test_misc_explainers.py | luckystar9111/interpret-community | 3a4094d3aa516a39dc52d65183f8b1f9aa31a801 | [
"MIT"
] | 1 | 2021-04-15T01:45:57.000Z | 2021-04-15T01:45:57.000Z | test/test_misc_explainers.py | luckystar9111/interpret-community | 3a4094d3aa516a39dc52d65183f8b1f9aa31a801 | [
"MIT"
] | null | null | null | test/test_misc_explainers.py | luckystar9111/interpret-community | 3a4094d3aa516a39dc52d65183f8b1f9aa31a801 | [
"MIT"
] | null | null | null | # ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# Tests for kernel, tree, deep and linear explainers.
import pytest
import logging
from lightgbm import LGBMClassifier, LGBMRegressor
from interpret_community.shap.kernel_explainer import KernelExplainer
from interpret_community.shap.tree_explainer import TreeExplainer
from interpret_community.shap.deep_explainer import DeepExplainer
from interpret_community.shap.linear_explainer import LinearExplainer
from interpret_community.common.constants import ShapValuesOutput
from common_tabular_tests import VerifyTabularTests
from common_utils import create_keras_multiclass_classifier, create_keras_regressor, \
create_sklearn_linear_regressor, create_sklearn_logistic_regressor
from constants import owner_email_tools_and_ux, ModelType
test_logger = logging.getLogger(__name__)
test_logger.setLevel(logging.INFO)
@pytest.mark.owner(email=owner_email_tools_and_ux)
@pytest.mark.usefixtures("clean_dir")
class TestKernelExplainer(object):
def setup_class(self):
def create_explainer(model, x_train, **kwargs):
return KernelExplainer(model, x_train, **kwargs)
self._verify_tabular = VerifyTabularTests(test_logger, create_explainer)
def test_kernel_explainer_raw_transformations_list_classification(self):
self._verify_tabular.verify_explain_model_transformations_list_classification()
def test_kernel_explainer_raw_transformations_column_transformer_classification(self):
self._verify_tabular.verify_explain_model_transformations_column_transformer_classification()
def test_kernel_explainer_raw_transformations_list_regression(self):
self._verify_tabular.verify_explain_model_transformations_list_regression()
def test_kernel_explainer_raw_transformations_column_transformer_regression(self):
        self._verify_tabular.verify_explain_model_transformations_column_transformer_regression()
@pytest.mark.owner(email=owner_email_tools_and_ux)
@pytest.mark.usefixtures("clean_dir")
class TestDeepExplainer(object):
def setup_class(self):
def create_explainer(model, x_train, **kwargs):
return DeepExplainer(model, x_train, **kwargs)
self._verify_tabular = VerifyTabularTests(test_logger, create_explainer)
def _get_create_model(self, classification):
if classification:
train_fn = create_keras_multiclass_classifier
else:
train_fn = create_keras_regressor
def create_model(x, y):
return train_fn(x, y)
return create_model
def test_deep_explainer_raw_transformations_list_classification(self):
self._verify_tabular.verify_explain_model_transformations_list_classification(self._get_create_model(
classification=True))
def test_deep_explainer_raw_transformations_column_transformer_classification(self):
self._verify_tabular.verify_explain_model_transformations_column_transformer_classification(
self._get_create_model(classification=True))
def test_deep_explainer_raw_transformations_list_regression(self):
        # Retry up to 4 times in case this test fails due to the known shap
        # summation bug; re-raise the failure if every attempt fails.
        for i in range(4):
            try:
                self._verify_tabular.verify_explain_model_transformations_list_regression(
                    self._get_create_model(classification=False))
                break
            except AssertionError:
                if i == 3:
                    raise
                print("Retrying deep explainer test: " + str(i))
def test_deep_explainer_raw_transformations_column_transformer_regression(self):
self._verify_tabular.verify_explain_model_transformations_column_transformer_regression(
self._get_create_model(classification=False))
@pytest.mark.owner(email=owner_email_tools_and_ux)
@pytest.mark.usefixtures("clean_dir")
class TestTreeExplainer(object):
def setup_class(self):
def create_explainer(model, x_train, **kwargs):
return TreeExplainer(model, **kwargs)
self._verify_tabular = VerifyTabularTests(test_logger, create_explainer)
def _get_create_model(self, classification):
if classification:
model = LGBMClassifier()
else:
model = LGBMRegressor()
def create_model(x, y):
return model.fit(x, y)
return create_model
def test_tree_explainer_raw_transformations_list_classification(self):
self._verify_tabular.verify_explain_model_transformations_list_classification(self._get_create_model(
classification=True))
def test_tree_explainer_raw_transformations_column_transformer_classification(self):
self._verify_tabular.verify_explain_model_transformations_column_transformer_classification(
self._get_create_model(classification=True))
def test_tree_explainer_raw_transformations_list_regression(self):
self._verify_tabular.verify_explain_model_transformations_list_regression(self._get_create_model(
classification=False))
def test_tree_explainer_raw_transformations_column_transformer_regression(self):
        self._verify_tabular.verify_explain_model_transformations_column_transformer_regression(
            self._get_create_model(classification=False))
def test_tree_explainer_shap_values_binary_xgboost(self):
self._verify_tabular.verify_explain_model_shap_values_binary(model_type=ModelType.XGBOOST)
def test_tree_explainer_shap_values_binary_proba(self):
self._verify_tabular.verify_explain_model_shap_values_binary(ShapValuesOutput.PROBABILITY,
model_type=ModelType.TREE)
def test_tree_explainer_shap_values_binary_proba_xgboost(self):
self._verify_tabular.verify_explain_model_shap_values_binary(ShapValuesOutput.PROBABILITY,
model_type=ModelType.XGBOOST)
def test_tree_explainer_shap_values_multiclass(self):
self._verify_tabular.verify_explain_model_shap_values_multiclass(model_type=ModelType.TREE)
def test_tree_explainer_shap_values_multiclass_proba(self):
self._verify_tabular.verify_explain_model_shap_values_multiclass(ShapValuesOutput.PROBABILITY,
model_type=ModelType.TREE)
def test_tree_explainer_shap_values_multiclass_proba_xgboost(self):
self._verify_tabular.verify_explain_model_shap_values_multiclass(ShapValuesOutput.PROBABILITY,
model_type=ModelType.XGBOOST)
def test_tree_explainer_shap_values_regression(self):
self._verify_tabular.verify_explain_model_shap_values_regression(model_type=ModelType.TREE)
@pytest.mark.owner(email=owner_email_tools_and_ux)
@pytest.mark.usefixtures("clean_dir")
class TestLinearExplainer(object):
def setup_class(self):
def create_explainer(model, x_train, **kwargs):
return LinearExplainer(model, x_train, **kwargs)
self._verify_tabular = VerifyTabularTests(test_logger, create_explainer)
def _get_create_model(self, classification):
if classification:
train_fn = create_sklearn_logistic_regressor
else:
train_fn = create_sklearn_linear_regressor
def create_model(x, y):
return train_fn(x, y)
return create_model
def test_linear_explainer_raw_transformations_list_classification(self):
self._verify_tabular.verify_explain_model_transformations_list_classification(self._get_create_model(
classification=True))
def test_linear_explainer_raw_transformations_column_transformer_classification(self):
self._verify_tabular.verify_explain_model_transformations_column_transformer_classification(
self._get_create_model(classification=True))
def test_linear_explainer_raw_transformations_list_regression(self):
self._verify_tabular.verify_explain_model_transformations_list_regression(self._get_create_model(
classification=False))
def test_linear_explainer_raw_transformations_column_transformer_regression(self):
        self._verify_tabular.verify_explain_model_transformations_column_transformer_regression(
            self._get_create_model(classification=False))
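# For reference outside the test harness, a hedged sketch of driving one of the
# explainers above directly; x_train/y_train/x_test are hypothetical arrays and
# explain_global assumes the standard interpret-community explanation API:
#
#     from lightgbm import LGBMClassifier
#     from interpret_community.shap.tree_explainer import TreeExplainer
#
#     model = LGBMClassifier().fit(x_train, y_train)
#     explainer = TreeExplainer(model)
#     global_explanation = explainer.explain_global(x_test)
#     print(global_explanation.get_feature_importance_dict())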
| 45.218085 | 113 | 0.751794 | 930 | 8,501 | 6.365591 | 0.123656 | 0.045608 | 0.077534 | 0.089358 | 0.803885 | 0.800507 | 0.794426 | 0.770777 | 0.759797 | 0.75152 | 0 | 0.000286 | 0.178802 | 8,501 | 187 | 114 | 45.459893 | 0.84773 | 0.033408 | 0 | 0.514925 | 0 | 0 | 0.008038 | 0 | 0 | 0 | 0 | 0 | 0.007463 | 1 | 0.276119 | false | 0.007463 | 0.08209 | 0.052239 | 0.462687 | 0.007463 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
07ab8957b138a671ade413a30193ef29a09c2185 | 23,423 | py | Python | ondewo/nlu/entity_type_pb2_grpc.py | ondewo/ondewo-vtsi-client-python | 8339dbe355d42ef7d02d441c6604200bbaae491a | [
"Apache-2.0"
] | null | null | null | ondewo/nlu/entity_type_pb2_grpc.py | ondewo/ondewo-vtsi-client-python | 8339dbe355d42ef7d02d441c6604200bbaae491a | [
"Apache-2.0"
] | 3 | 2021-03-09T11:47:27.000Z | 2021-04-16T15:13:30.000Z | ondewo/nlu/entity_type_pb2_grpc.py | ondewo/ondewo-vtsi-client-python | 8339dbe355d42ef7d02d441c6604200bbaae491a | [
"Apache-2.0"
] | null | null | null | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from google.longrunning import operations_pb2 as google_dot_longrunning_dot_operations__pb2
from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
from ondewo.nlu import entity_type_pb2 as ondewo_dot_nlu_dot_entity__type__pb2
class EntityTypesStub(object):
"""Entities are extracted from user input and represent parameters that are
meaningful to your application. For example, a date range, a proper name
such as a geographic location or landmark, and so on. Entities represent
actionable data for your application.
When you define an entity, you can also include synonyms that all map to
that entity. For example, "soft drink", "soda", "pop", and so on.
There are three types of entities:
* **System** - entities that are defined by the Dialogflow API for common
data types such as date, time, currency, and so on. A system entity is
represented by the `EntityType` type.
* **Developer** - entities that are defined by you that represent
actionable data that is meaningful to your application. For example,
you could define a `pizza.sauce` entity for red or white pizza sauce,
a `pizza.cheese` entity for the different types of cheese on a pizza,
a `pizza.topping` entity for different toppings, and so on. A developer
entity is represented by the `EntityType` type.
* **User** - entities that are built for an individual user such as
favorites, preferences, playlists, and so on. A user entity is
represented by the [SessionEntityType][google.cloud.dialogflow.v2.SessionEntityType] type.
For more information about entity types, see the
[Dialogflow documentation](https://dialogflow.com/docs/entities).
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.ListEntityTypes = channel.unary_unary(
'/ondewo.nlu.EntityTypes/ListEntityTypes',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.ListEntityTypesRequest.SerializeToString,
response_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.ListEntityTypesResponse.FromString,
)
self.GetEntityType = channel.unary_unary(
'/ondewo.nlu.EntityTypes/GetEntityType',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.GetEntityTypeRequest.SerializeToString,
response_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.EntityType.FromString,
)
self.CreateEntityType = channel.unary_unary(
'/ondewo.nlu.EntityTypes/CreateEntityType',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.CreateEntityTypeRequest.SerializeToString,
response_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.EntityType.FromString,
)
self.UpdateEntityType = channel.unary_unary(
'/ondewo.nlu.EntityTypes/UpdateEntityType',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.UpdateEntityTypeRequest.SerializeToString,
response_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.EntityType.FromString,
)
self.DeleteEntityType = channel.unary_unary(
'/ondewo.nlu.EntityTypes/DeleteEntityType',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.DeleteEntityTypeRequest.SerializeToString,
response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
)
self.BatchUpdateEntityTypes = channel.unary_unary(
'/ondewo.nlu.EntityTypes/BatchUpdateEntityTypes',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchUpdateEntityTypesRequest.SerializeToString,
response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
)
self.BatchDeleteEntityTypes = channel.unary_unary(
'/ondewo.nlu.EntityTypes/BatchDeleteEntityTypes',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchDeleteEntityTypesRequest.SerializeToString,
response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
)
self.BatchCreateEntities = channel.unary_unary(
'/ondewo.nlu.EntityTypes/BatchCreateEntities',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchCreateEntitiesRequest.SerializeToString,
response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
)
self.BatchUpdateEntities = channel.unary_unary(
'/ondewo.nlu.EntityTypes/BatchUpdateEntities',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchUpdateEntitiesRequest.SerializeToString,
response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
)
self.BatchDeleteEntities = channel.unary_unary(
'/ondewo.nlu.EntityTypes/BatchDeleteEntities',
request_serializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchDeleteEntitiesRequest.SerializeToString,
response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
)
class EntityTypesServicer(object):
"""Entities are extracted from user input and represent parameters that are
meaningful to your application. For example, a date range, a proper name
such as a geographic location or landmark, and so on. Entities represent
actionable data for your application.
When you define an entity, you can also include synonyms that all map to
that entity. For example, "soft drink", "soda", "pop", and so on.
There are three types of entities:
* **System** - entities that are defined by the Dialogflow API for common
data types such as date, time, currency, and so on. A system entity is
represented by the `EntityType` type.
* **Developer** - entities that are defined by you that represent
actionable data that is meaningful to your application. For example,
you could define a `pizza.sauce` entity for red or white pizza sauce,
a `pizza.cheese` entity for the different types of cheese on a pizza,
a `pizza.topping` entity for different toppings, and so on. A developer
entity is represented by the `EntityType` type.
* **User** - entities that are built for an individual user such as
favorites, preferences, playlists, and so on. A user entity is
represented by the [SessionEntityType][google.cloud.dialogflow.v2.SessionEntityType] type.
For more information about entity types, see the
[Dialogflow documentation](https://dialogflow.com/docs/entities).
"""
def ListEntityTypes(self, request, context):
"""Returns the list of all entity types in the specified agent.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetEntityType(self, request, context):
"""Retrieves the specified entity type.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CreateEntityType(self, request, context):
"""Creates an entity type in the specified agent.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UpdateEntityType(self, request, context):
"""Updates the specified entity type.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeleteEntityType(self, request, context):
"""Deletes the specified entity type.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def BatchUpdateEntityTypes(self, request, context):
"""Updates/Creates multiple entity types in the specified agent.
Operation <response: [BatchUpdateEntityTypesResponse][google.cloud.dialogflow.v2.BatchUpdateEntityTypesResponse],
metadata: [google.protobuf.Struct][google.protobuf.Struct]>
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def BatchDeleteEntityTypes(self, request, context):
"""Deletes entity types in the specified agent.
Operation <response: [google.protobuf.Empty][google.protobuf.Empty],
metadata: [google.protobuf.Struct][google.protobuf.Struct]>
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def BatchCreateEntities(self, request, context):
"""Creates multiple new entities in the specified entity type (extends the
existing collection of entries).
Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def BatchUpdateEntities(self, request, context):
"""Updates entities in the specified entity type (replaces the existing
collection of entries).
Operation <response: [google.protobuf.Empty][google.protobuf.Empty],
metadata: [google.protobuf.Struct][google.protobuf.Struct]>
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def BatchDeleteEntities(self, request, context):
"""Deletes entities in the specified entity type.
Operation <response: [google.protobuf.Empty][google.protobuf.Empty],
metadata: [google.protobuf.Struct][google.protobuf.Struct]>
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_EntityTypesServicer_to_server(servicer, server):
rpc_method_handlers = {
'ListEntityTypes': grpc.unary_unary_rpc_method_handler(
servicer.ListEntityTypes,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.ListEntityTypesRequest.FromString,
response_serializer=ondewo_dot_nlu_dot_entity__type__pb2.ListEntityTypesResponse.SerializeToString,
),
'GetEntityType': grpc.unary_unary_rpc_method_handler(
servicer.GetEntityType,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.GetEntityTypeRequest.FromString,
response_serializer=ondewo_dot_nlu_dot_entity__type__pb2.EntityType.SerializeToString,
),
'CreateEntityType': grpc.unary_unary_rpc_method_handler(
servicer.CreateEntityType,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.CreateEntityTypeRequest.FromString,
response_serializer=ondewo_dot_nlu_dot_entity__type__pb2.EntityType.SerializeToString,
),
'UpdateEntityType': grpc.unary_unary_rpc_method_handler(
servicer.UpdateEntityType,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.UpdateEntityTypeRequest.FromString,
response_serializer=ondewo_dot_nlu_dot_entity__type__pb2.EntityType.SerializeToString,
),
'DeleteEntityType': grpc.unary_unary_rpc_method_handler(
servicer.DeleteEntityType,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.DeleteEntityTypeRequest.FromString,
response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString,
),
'BatchUpdateEntityTypes': grpc.unary_unary_rpc_method_handler(
servicer.BatchUpdateEntityTypes,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchUpdateEntityTypesRequest.FromString,
response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString,
),
'BatchDeleteEntityTypes': grpc.unary_unary_rpc_method_handler(
servicer.BatchDeleteEntityTypes,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchDeleteEntityTypesRequest.FromString,
response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString,
),
'BatchCreateEntities': grpc.unary_unary_rpc_method_handler(
servicer.BatchCreateEntities,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchCreateEntitiesRequest.FromString,
response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString,
),
'BatchUpdateEntities': grpc.unary_unary_rpc_method_handler(
servicer.BatchUpdateEntities,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchUpdateEntitiesRequest.FromString,
response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString,
),
'BatchDeleteEntities': grpc.unary_unary_rpc_method_handler(
servicer.BatchDeleteEntities,
request_deserializer=ondewo_dot_nlu_dot_entity__type__pb2.BatchDeleteEntitiesRequest.FromString,
response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'ondewo.nlu.EntityTypes', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class EntityTypes(object):
"""Entities are extracted from user input and represent parameters that are
meaningful to your application. For example, a date range, a proper name
such as a geographic location or landmark, and so on. Entities represent
actionable data for your application.
When you define an entity, you can also include synonyms that all map to
that entity. For example, "soft drink", "soda", "pop", and so on.
There are three types of entities:
* **System** - entities that are defined by the Dialogflow API for common
data types such as date, time, currency, and so on. A system entity is
represented by the `EntityType` type.
* **Developer** - entities that are defined by you that represent
actionable data that is meaningful to your application. For example,
you could define a `pizza.sauce` entity for red or white pizza sauce,
a `pizza.cheese` entity for the different types of cheese on a pizza,
a `pizza.topping` entity for different toppings, and so on. A developer
entity is represented by the `EntityType` type.
* **User** - entities that are built for an individual user such as
favorites, preferences, playlists, and so on. A user entity is
represented by the [SessionEntityType][google.cloud.dialogflow.v2.SessionEntityType] type.
For more information about entity types, see the
[Dialogflow documentation](https://dialogflow.com/docs/entities).
"""
@staticmethod
def ListEntityTypes(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/ListEntityTypes',
ondewo_dot_nlu_dot_entity__type__pb2.ListEntityTypesRequest.SerializeToString,
ondewo_dot_nlu_dot_entity__type__pb2.ListEntityTypesResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetEntityType(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/GetEntityType',
ondewo_dot_nlu_dot_entity__type__pb2.GetEntityTypeRequest.SerializeToString,
ondewo_dot_nlu_dot_entity__type__pb2.EntityType.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def CreateEntityType(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/CreateEntityType',
ondewo_dot_nlu_dot_entity__type__pb2.CreateEntityTypeRequest.SerializeToString,
ondewo_dot_nlu_dot_entity__type__pb2.EntityType.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def UpdateEntityType(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/UpdateEntityType',
ondewo_dot_nlu_dot_entity__type__pb2.UpdateEntityTypeRequest.SerializeToString,
ondewo_dot_nlu_dot_entity__type__pb2.EntityType.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def DeleteEntityType(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/DeleteEntityType',
ondewo_dot_nlu_dot_entity__type__pb2.DeleteEntityTypeRequest.SerializeToString,
google_dot_protobuf_dot_empty__pb2.Empty.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def BatchUpdateEntityTypes(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/BatchUpdateEntityTypes',
ondewo_dot_nlu_dot_entity__type__pb2.BatchUpdateEntityTypesRequest.SerializeToString,
google_dot_longrunning_dot_operations__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def BatchDeleteEntityTypes(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/BatchDeleteEntityTypes',
ondewo_dot_nlu_dot_entity__type__pb2.BatchDeleteEntityTypesRequest.SerializeToString,
google_dot_longrunning_dot_operations__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def BatchCreateEntities(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/BatchCreateEntities',
ondewo_dot_nlu_dot_entity__type__pb2.BatchCreateEntitiesRequest.SerializeToString,
google_dot_longrunning_dot_operations__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def BatchUpdateEntities(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/BatchUpdateEntities',
ondewo_dot_nlu_dot_entity__type__pb2.BatchUpdateEntitiesRequest.SerializeToString,
google_dot_longrunning_dot_operations__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def BatchDeleteEntities(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ondewo.nlu.EntityTypes/BatchDeleteEntities',
ondewo_dot_nlu_dot_entity__type__pb2.BatchDeleteEntitiesRequest.SerializeToString,
google_dot_longrunning_dot_operations__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
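# Illustrative client-side usage of the generated stub; the channel address is
# a hypothetical placeholder:
#
#     import grpc
#     from ondewo.nlu import entity_type_pb2, entity_type_pb2_grpc
#
#     with grpc.insecure_channel("localhost:50055") as channel:
#         stub = entity_type_pb2_grpc.EntityTypesStub(channel)
#         response = stub.ListEntityTypes(entity_type_pb2.ListEntityTypesRequest())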
| 49.520085 | 121 | 0.696196 | 2,378 | 23,423 | 6.568545 | 0.085786 | 0.03265 | 0.03662 | 0.041293 | 0.855122 | 0.852049 | 0.817414 | 0.789181 | 0.7621 | 0.645006 | 0 | 0.003913 | 0.236178 | 23,423 | 472 | 122 | 49.625 | 0.869152 | 0.236989 | 0 | 0.585987 | 1 | 0 | 0.085893 | 0.051778 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070064 | false | 0 | 0.012739 | 0.031847 | 0.124204 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ed38283be2cba8955f7a80bbc34b45129bd0bade | 403,122 | py | Python | core/domain/exp_domain_test.py | z-dras/oppia | 7eb863deada31367d711bb41413724c1765e14d0 | [
"Apache-2.0"
] | null | null | null | core/domain/exp_domain_test.py | z-dras/oppia | 7eb863deada31367d711bb41413724c1765e14d0 | [
"Apache-2.0"
] | null | null | null | core/domain/exp_domain_test.py | z-dras/oppia | 7eb863deada31367d711bb41413724c1765e14d0 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
#
# Copyright 2014 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for exploration domain objects and methods defined on them."""
from __future__ import annotations
import copy
import datetime
import os
import re
from core import feconf
from core import utils
from core.constants import constants
from core.domain import exp_domain
from core.domain import exp_fetchers
from core.domain import exp_services
from core.domain import exp_services_test
from core.domain import param_domain
from core.domain import rights_manager
from core.domain import state_domain
from core.domain import translation_domain
from core.platform import models
from core.tests import test_utils
(exp_models,) = models.Registry.import_models([models.NAMES.exploration])
class ExplorationChangeTests(test_utils.GenericTestBase):
def test_exp_change_object_with_missing_cmd(self):
with self.assertRaisesRegex(
utils.ValidationError, 'Missing cmd key in change dict'):
exp_domain.ExplorationChange({'invalid': 'data'})
def test_exp_change_object_with_invalid_cmd(self):
with self.assertRaisesRegex(
utils.ValidationError, 'Command invalid is not allowed'):
exp_domain.ExplorationChange({'cmd': 'invalid'})
def test_exp_change_object_with_deprecated_cmd(self):
with self.assertRaisesRegex(
utils.DeprecatedCommandError, 'Command clone is deprecated'):
exp_domain.ExplorationChange({
'cmd': 'clone',
'property_name': 'content',
'old_value': 'old_value'
})
def test_exp_change_object_with_deprecated_cmd_argument(self):
with self.assertRaisesRegex(
utils.DeprecatedCommandError,
'Value for property_name in cmd edit_state_property: '
'fallbacks is deprecated'):
exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'Introduction',
'property_name': 'fallbacks',
'new_value': 'foo',
})
def test_exp_change_object_with_missing_attribute_in_cmd(self):
with self.assertRaisesRegex(
utils.ValidationError, (
'The following required attributes are missing: '
'new_value')):
exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'content',
'old_value': 'old_value'
})
def test_exp_change_object_with_extra_attribute_in_cmd(self):
with self.assertRaisesRegex(
utils.ValidationError, (
'The following extra attributes are present: invalid')):
exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'old_state_name',
'new_state_name': 'new_state_name',
'invalid': 'invalid'
})
def test_exp_change_object_with_invalid_exploration_property(self):
with self.assertRaisesRegex(
utils.ValidationError, (
'Value for property_name in cmd edit_exploration_property: '
'invalid is not allowed')):
exp_domain.ExplorationChange({
'cmd': 'edit_exploration_property',
'property_name': 'invalid',
'old_value': 'old_value',
'new_value': 'new_value',
})
def test_exp_change_object_with_invalid_state_property(self):
with self.assertRaisesRegex(
utils.ValidationError, (
'Value for property_name in cmd edit_state_property: '
'invalid is not allowed')):
exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'state_name',
'property_name': 'invalid',
'old_value': 'old_value',
'new_value': 'new_value',
})
def test_exp_change_object_with_create_new(self):
exp_change_object = exp_domain.ExplorationChange({
'cmd': 'create_new',
'category': 'category',
'title': 'title'
})
self.assertEqual(exp_change_object.cmd, 'create_new')
self.assertEqual(exp_change_object.category, 'category')
self.assertEqual(exp_change_object.title, 'title')
def test_exp_change_object_with_add_state(self):
exp_change_object = exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'state_name',
})
self.assertEqual(exp_change_object.cmd, 'add_state')
self.assertEqual(exp_change_object.state_name, 'state_name')
def test_exp_change_object_with_rename_state(self):
exp_change_object = exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'old_state_name',
'new_state_name': 'new_state_name'
})
self.assertEqual(exp_change_object.cmd, 'rename_state')
self.assertEqual(exp_change_object.old_state_name, 'old_state_name')
self.assertEqual(exp_change_object.new_state_name, 'new_state_name')
def test_exp_change_object_with_delete_state(self):
exp_change_object = exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'state_name',
})
self.assertEqual(exp_change_object.cmd, 'delete_state')
self.assertEqual(exp_change_object.state_name, 'state_name')
def test_exp_change_object_with_edit_state_property(self):
exp_change_object = exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'state_name',
'property_name': 'content',
'new_value': 'new_value',
'old_value': 'old_value'
})
self.assertEqual(exp_change_object.cmd, 'edit_state_property')
self.assertEqual(exp_change_object.state_name, 'state_name')
self.assertEqual(exp_change_object.property_name, 'content')
self.assertEqual(exp_change_object.new_value, 'new_value')
self.assertEqual(exp_change_object.old_value, 'old_value')
def test_exp_change_object_with_edit_exploration_property(self):
exp_change_object = exp_domain.ExplorationChange({
'cmd': 'edit_exploration_property',
'property_name': 'title',
'new_value': 'new_value',
'old_value': 'old_value'
})
self.assertEqual(exp_change_object.cmd, 'edit_exploration_property')
self.assertEqual(exp_change_object.property_name, 'title')
self.assertEqual(exp_change_object.new_value, 'new_value')
self.assertEqual(exp_change_object.old_value, 'old_value')
def test_exp_change_object_with_migrate_states_schema_to_latest_version(
self):
exp_change_object = exp_domain.ExplorationChange({
'cmd': 'migrate_states_schema_to_latest_version',
'from_version': 'from_version',
'to_version': 'to_version',
})
self.assertEqual(
exp_change_object.cmd, 'migrate_states_schema_to_latest_version')
self.assertEqual(exp_change_object.from_version, 'from_version')
self.assertEqual(exp_change_object.to_version, 'to_version')
def test_exp_change_object_with_revert_commit(self):
exp_change_object = exp_domain.ExplorationChange({
'cmd': exp_models.ExplorationModel.CMD_REVERT_COMMIT,
'version_number': 'version_number'
})
self.assertEqual(
exp_change_object.cmd,
exp_models.ExplorationModel.CMD_REVERT_COMMIT)
self.assertEqual(exp_change_object.version_number, 'version_number')
def test_to_dict(self):
exp_change_dict = {
'cmd': 'create_new',
'title': 'title',
'category': 'category'
}
exp_change_object = exp_domain.ExplorationChange(exp_change_dict)
self.assertEqual(exp_change_object.to_dict(), exp_change_dict)
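# The negative tests above all exercise one validation pattern: a change
# dict must name a known cmd and carry exactly that cmd's required
# attributes (deprecated cmds and property values are rejected separately).
# A minimal sketch of the required/extra-attribute checks (illustrative
# only; the production ExplorationChange is considerably richer):
def _validate_change_dict(change_dict, allowed_commands):
    """allowed_commands maps a cmd name to its set of required attributes."""
    if 'cmd' not in change_dict:
        raise utils.ValidationError('Missing cmd key in change dict')
    cmd = change_dict['cmd']
    if cmd not in allowed_commands:
        raise utils.ValidationError('Command %s is not allowed' % cmd)
    provided = set(change_dict) - {'cmd'}
    missing = allowed_commands[cmd] - provided
    if missing:
        raise utils.ValidationError(
            'The following required attributes are missing: '
            + ', '.join(sorted(missing)))
    extra = provided - allowed_commands[cmd]
    if extra:
        raise utils.ValidationError(
            'The following extra attributes are present: '
            + ', '.join(sorted(extra)))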
class ExplorationVersionsDiffDomainUnitTests(test_utils.GenericTestBase):
"""Test the exploration versions difference domain object."""
def setUp(self):
super().setUp()
self.exp_id = 'exp_id1'
test_exp_filepath = os.path.join(
feconf.TESTS_DATA_DIR, 'string_classifier_test.yaml')
yaml_content = utils.get_file_contents(test_exp_filepath)
assets_list = []
exp_services.save_new_exploration_from_yaml_and_assets(
feconf.SYSTEM_COMMITTER_ID, yaml_content, self.exp_id,
assets_list)
self.exploration = exp_fetchers.get_exploration_by_id(self.exp_id)
def test_correct_creation_of_version_diffs(self):
# Rename a state.
self.exploration.rename_state('Home', 'Renamed state')
change_list = [exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'Home',
'new_state_name': 'Renamed state'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
self.assertEqual(exp_versions_diff.added_state_names, [])
self.assertEqual(exp_versions_diff.deleted_state_names, [])
self.assertEqual(
exp_versions_diff.old_to_new_state_names, {
'Home': 'Renamed state'
})
self.exploration.version += 1
# Add a state.
self.exploration.add_states(['New state'])
self.exploration.states['New state'] = copy.deepcopy(
self.exploration.states['Renamed state'])
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state',
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
self.assertEqual(exp_versions_diff.added_state_names, ['New state'])
self.assertEqual(exp_versions_diff.deleted_state_names, [])
self.assertEqual(exp_versions_diff.old_to_new_state_names, {})
self.exploration.version += 1
# Delete state.
self.exploration.delete_state('New state')
change_list = [exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'New state'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
self.assertEqual(exp_versions_diff.added_state_names, [])
self.assertEqual(exp_versions_diff.deleted_state_names, ['New state'])
self.assertEqual(exp_versions_diff.old_to_new_state_names, {})
self.exploration.version += 1
# Test addition and multiple renames.
self.exploration.add_states(['New state'])
self.exploration.states['New state'] = copy.deepcopy(
self.exploration.states['Renamed state'])
self.exploration.rename_state('New state', 'New state2')
self.exploration.rename_state('New state2', 'New state3')
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state',
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state',
'new_state_name': 'New state2'
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state2',
'new_state_name': 'New state3'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
self.assertEqual(exp_versions_diff.added_state_names, ['New state3'])
self.assertEqual(exp_versions_diff.deleted_state_names, [])
self.assertEqual(exp_versions_diff.old_to_new_state_names, {})
self.exploration.version += 1
# Test addition, rename and deletion.
self.exploration.add_states(['New state 2'])
self.exploration.rename_state('New state 2', 'Renamed state 2')
self.exploration.delete_state('Renamed state 2')
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state 2'
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state 2',
'new_state_name': 'Renamed state 2'
}), exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'Renamed state 2'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
self.assertEqual(exp_versions_diff.added_state_names, [])
self.assertEqual(exp_versions_diff.deleted_state_names, [])
self.assertEqual(exp_versions_diff.old_to_new_state_names, {})
self.exploration.version += 1
# Test multiple renames and deletion.
self.exploration.rename_state('New state3', 'Renamed state 3')
self.exploration.rename_state('Renamed state 3', 'Renamed state 4')
self.exploration.delete_state('Renamed state 4')
change_list = [exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state3',
'new_state_name': 'Renamed state 3'
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'Renamed state 3',
'new_state_name': 'Renamed state 4'
}), exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'Renamed state 4'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
self.assertEqual(exp_versions_diff.added_state_names, [])
self.assertEqual(
exp_versions_diff.deleted_state_names, ['New state3'])
self.assertEqual(exp_versions_diff.old_to_new_state_names, {})
self.exploration.version += 1
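# Taken together, the cases above pin down how ExplorationVersionsDiff
# resolves chained changes: an added state that is then renamed surfaces
# only under its final name; an add followed by a delete cancels out; and
# a rename chain ending in a delete reports the original name as deleted.
# A compact sketch of that resolution (illustrative, not the production
# implementation):
def _resolve_state_changes(self, change_list):
    added, deleted = set(), set()
    current_to_original = {}
    for change in change_list:
        if change.cmd == 'add_state':
            added.add(change.state_name)
        elif change.cmd == 'rename_state':
            old, new = change.old_state_name, change.new_state_name
            if old in added:
                added.discard(old)
                added.add(new)
            else:
                # Map the latest name back to the original one.
                current_to_original[new] = current_to_original.pop(old, old)
        elif change.cmd == 'delete_state':
            name = change.state_name
            if name in added:
                added.discard(name)  # add + delete cancel out
            else:
                deleted.add(current_to_original.pop(name, name))
    old_to_new = {
        original: current
        for current, original in current_to_original.items()
        if original != current}
    return added, deleted, old_to_new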
def test_cannot_create_exploration_change_with_invalid_change_dict(self):
with self.assertRaisesRegex(
Exception, 'Missing cmd key in change dict'):
exp_domain.ExplorationChange({
'invalid_cmd': 'invalid'
})
def test_cannot_create_exploration_change_with_invalid_cmd(self):
with self.assertRaisesRegex(
Exception, 'Command invalid_cmd is not allowed'):
exp_domain.ExplorationChange({
'cmd': 'invalid_cmd'
})
def test_cannot_create_exploration_change_with_invalid_state_property(self):
exp_change = exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'property_name': exp_domain.STATE_PROPERTY_INTERACTION_ID,
'state_name': '',
'new_value': ''
})
self.assertIsInstance(exp_change, exp_domain.ExplorationChange)
with self.assertRaisesRegex(
Exception,
'Value for property_name in cmd edit_state_property: '
'invalid_property is not allowed'):
exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'property_name': 'invalid_property',
'state_name': '',
'new_value': ''
})
def test_cannot_create_exploration_change_with_invalid_exploration_property(
self):
exp_change = exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_EXPLORATION_PROPERTY,
'property_name': 'title',
'new_value': ''
})
self.assertIsInstance(exp_change, exp_domain.ExplorationChange)
with self.assertRaisesRegex(
Exception,
'Value for property_name in cmd edit_exploration_property: '
'invalid_property is not allowed'):
exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_EXPLORATION_PROPERTY,
'property_name': 'invalid_property',
'new_value': ''
})
def test_revert_exploration_commit(self):
exp_change = exp_domain.ExplorationChange({
'cmd': exp_models.ExplorationModel.CMD_REVERT_COMMIT,
'version_number': 1
})
self.assertEqual(exp_change.version_number, 1)
exp_change = exp_domain.ExplorationChange({
'cmd': exp_models.ExplorationModel.CMD_REVERT_COMMIT,
'version_number': 2
})
self.assertEqual(exp_change.version_number, 2)
class ExpVersionReferenceTests(test_utils.GenericTestBase):
def test_create_exp_version_reference_object(self):
exp_version_reference = exp_domain.ExpVersionReference('exp_id', 1)
self.assertEqual(
exp_version_reference.to_dict(), {
'exp_id': 'exp_id',
'version': 1
})
def test_validate_exp_version(self):
with self.assertRaisesRegex(
Exception,
'Expected version to be an int, received invalid_version'):
exp_domain.ExpVersionReference('exp_id', 'invalid_version')
def test_validate_exp_id(self):
with self.assertRaisesRegex(
Exception, 'Expected exp_id to be a str, received 0'):
exp_domain.ExpVersionReference(0, 1)
class ExplorationCheckpointsUnitTests(test_utils.GenericTestBase):
"""Test checkpoints validations in an exploration. """
def setUp(self):
super().setUp()
self.exploration = (
exp_domain.Exploration.create_default_exploration('eid'))
self.new_state = state_domain.State.create_default_state(
'Introduction', is_initial_state=True)
self.set_interaction_for_state(self.new_state, 'TextInput')
self.exploration.init_state_name = 'Introduction'
self.exploration.states = {
self.exploration.init_state_name: self.new_state
}
self.set_interaction_for_state(
self.exploration.states[self.exploration.init_state_name],
'TextInput')
self.init_state = (
self.exploration.states[self.exploration.init_state_name])
self.end_state = state_domain.State.create_default_state('End')
self.set_interaction_for_state(self.end_state, 'EndExploration')
self.end_state.update_interaction_default_outcome(None)
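# The graph-building tests below construct many structurally identical
# AnswerGroups that differ only in destination state and content-id index.
# A small factory like this one (hypothetical -- the suite below inlines
# the constructors instead) captures the repeated shape:
def _make_contains_answer_group(self, dest, index=0):
    return state_domain.AnswerGroup(
        state_domain.Outcome(
            dest,
            state_domain.SubtitledHtml(
                'feedback_%d' % index, '<p>Feedback</p>'),
            False, [], None, None),
        [
            state_domain.RuleSpec(
                'Contains',
                {
                    'x': {
                        'contentId': 'rule_input_%d' % index,
                        'normalizedStrSet': ['Test%d' % index]
                    }
                })
        ],
        [],
        None)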
def test_init_state_with_card_is_checkpoint_false_is_invalid(self):
self.init_state.update_card_is_checkpoint(False)
with self.assertRaisesRegex(
Exception, 'Expected card_is_checkpoint of first state to '
'be True but found it to be False'):
self.exploration.validate(strict=True)
self.init_state.update_card_is_checkpoint(True)
def test_end_state_with_card_is_checkpoint_true_is_invalid(self):
default_outcome = self.init_state.interaction.default_outcome
default_outcome.dest = self.exploration.init_state_name
self.init_state.update_interaction_default_outcome(default_outcome)
self.exploration.states = {
self.exploration.init_state_name: self.new_state,
'End': self.end_state
}
self.end_state.update_card_is_checkpoint(True)
with self.assertRaisesRegex(
Exception, 'Expected card_is_checkpoint of terminal state '
'to be False but found it to be True'):
self.exploration.validate(strict=True)
self.end_state.update_card_is_checkpoint(False)
def test_init_state_checkpoint_with_end_exp_interaction_is_valid(self):
self.exploration.init_state_name = 'End'
self.exploration.states = {
self.exploration.init_state_name: self.end_state
}
self.exploration.objective = 'Objective'
self.exploration.title = 'Title'
self.exploration.category = 'Category'
self.end_state.update_card_is_checkpoint(True)
self.exploration.validate(strict=True)
self.end_state.update_card_is_checkpoint(False)
def test_checkpoint_count_with_count_outside_range_is_invalid(self):
self.exploration.init_state_name = 'Introduction'
self.exploration.states = {
self.exploration.init_state_name: self.new_state,
'End': self.end_state
}
for i in range(8):
self.exploration.add_states(['State%s' % i])
self.exploration.states['State%s' % i].card_is_checkpoint = True
self.set_interaction_for_state(
self.exploration.states['State%s' % i],
'Continue')
with self.assertRaisesRegex(
Exception, 'Expected checkpoint count to be between 1 and 8 '
'inclusive but found it to be 9'
):
self.exploration.validate(strict=True)
self.exploration.states = {
self.exploration.init_state_name: self.new_state,
'End': self.end_state
}
def test_bypassable_state_with_card_is_checkpoint_true_is_invalid(self):
# Note: In the graphs below, states with the * symbol are checkpoints.
# Exploration to test a checkpoint state that has no outcome.
# ┌────────────────┐
# │ Introduction* │
# └──┬───────────┬─┘
# │ │
# │ │
# ┌────────┴──┐ ┌─┴─────────┐
# │ Second* │ │ Third │
# └───────────┘ └─┬─────────┘
# │
# ┌─────────────┴─┐
# │ End │
# └───────────────┘.
second_state = state_domain.State.create_default_state('Second')
self.set_interaction_for_state(second_state, 'TextInput')
third_state = state_domain.State.create_default_state('Third')
self.set_interaction_for_state(third_state, 'TextInput')
self.exploration.states = {
self.exploration.init_state_name: self.new_state,
'End': self.end_state,
'Second': second_state,
'Third': third_state,
}
# Answer group dicts to connect init_state to second_state and
# third_state.
init_state_answer_groups = [
state_domain.AnswerGroup(
state_domain.Outcome(
'Second', state_domain.SubtitledHtml(
'feedback_0', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_0',
'normalizedStrSet': ['Test0']
}
})
],
[],
None
), state_domain.AnswerGroup(
state_domain.Outcome(
'Third', state_domain.SubtitledHtml(
'feedback_1', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_1',
'normalizedStrSet': ['Test1']
}
})
],
[],
None
)
]
# Answer group dict to connect third_state to end_state.
third_state_answer_groups = [
state_domain.AnswerGroup(
state_domain.Outcome(
'End', state_domain.SubtitledHtml(
'feedback_0', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_0',
'normalizedStrSet': ['Test0']
}
})
],
[],
None
)
]
self.init_state.update_interaction_answer_groups(
init_state_answer_groups)
third_state.update_interaction_answer_groups(
third_state_answer_groups)
# The exploration can be completed via third_state. Hence, making
# second_state a checkpoint raises a validation error.
second_state.card_is_checkpoint = True
with self.assertRaisesRegex(
Exception, 'Cannot make Second a checkpoint as it is'
' bypassable'
):
self.exploration.validate(strict=True)
second_state.card_is_checkpoint = False
# Exploration to test a checkpoint state when the state in the other
# path has no outcome.
# ┌────────────────┐
# │ Introduction* │
# └──┬───────────┬─┘
# │ │
# │ │
# ┌────────┴──┐ ┌─┴─────────┐
# │ Second* │ │ Third │
# └────────┬──┘ └───────────┘
# │
# ┌─┴─────────────┐
# │ End │
# └───────────────┘.
# Answer group dicts to connect second_state to end_state.
second_state_answer_groups = [
state_domain.AnswerGroup(
state_domain.Outcome(
'End', state_domain.SubtitledHtml(
'feedback_0', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_0',
'normalizedStrSet': ['Test0']
}
})
],
[],
None
)
]
second_state.update_interaction_answer_groups(
second_state_answer_groups)
# Reset the answer group dicts of third_state.
third_state.update_interaction_answer_groups([])
# As second_state is now connected to end_state and third_state has no
# outcome, second_state has become non-bypassable.
second_state.update_card_is_checkpoint(True)
self.exploration.validate()
# Reset the exploration.
self.exploration.states = {
self.exploration.init_state_name: self.new_state,
'End': self.end_state
}
# Exploration to test a bypassable state.
# ┌────────────────┐
# │ Introduction* │
# └─┬─────┬──────┬─┘
# ┌───────────┐ │ │ │ ┌────────────┐
# │ A ├────┘ │ └─────┤ C │
# └────┬──────┘ │ └─────┬──────┘
# │ ┌────┴─────┐ │
# │ │ B │ │
# │ └──┬───────┘ │
# └─────────┐ │ │
# ┌──────┴─────┴─┐ ┌─────────────┘
# │ D* │ │
# └─────────────┬┘ │
# │ │
# ┌──┴─────┴──┐
# │ End │
# └───────────┘.
a_state = state_domain.State.create_default_state('A')
self.set_interaction_for_state(a_state, 'TextInput')
b_state = state_domain.State.create_default_state('B')
self.set_interaction_for_state(b_state, 'TextInput')
c_state = state_domain.State.create_default_state('C')
self.set_interaction_for_state(c_state, 'TextInput')
d_state = state_domain.State.create_default_state('D')
self.set_interaction_for_state(d_state, 'TextInput')
self.exploration.states = {
self.exploration.init_state_name: self.new_state,
'A': a_state,
'B': b_state,
'C': c_state,
'D': d_state,
'End': self.end_state
}
# Answer group dicts to connect init_state to a_state, b_state and
# c_state.
init_state_answer_groups = [
state_domain.AnswerGroup(
state_domain.Outcome(
'A', state_domain.SubtitledHtml(
'feedback_0', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_0',
'normalizedStrSet': ['Test0']
}
})
],
[],
None
), state_domain.AnswerGroup(
state_domain.Outcome(
'B', state_domain.SubtitledHtml(
'feedback_1', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_1',
'normalizedStrSet': ['Test1']
}
})
],
[],
None
), state_domain.AnswerGroup(
state_domain.Outcome(
'C', state_domain.SubtitledHtml(
'feedback_2', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_2',
'normalizedStrSet': ['Test2']
}
})
],
[],
None
)
]
# Answer group dict to connect a_state and b_state to d_state.
a_and_b_state_answer_groups = [
state_domain.AnswerGroup(
state_domain.Outcome(
'D', state_domain.SubtitledHtml(
'feedback_0', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_0',
'normalizedStrSet': ['Test0']
}
})
],
[],
None
)
]
# Answer group dict to connect c_state and d_state to end_state.
c_and_d_state_answer_groups = [
state_domain.AnswerGroup(
state_domain.Outcome(
'End', state_domain.SubtitledHtml(
'feedback_0', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_0',
'normalizedStrSet': ['Test0']
}
})
],
[],
None
)
]
self.init_state.update_interaction_answer_groups(
init_state_answer_groups)
a_state.update_interaction_answer_groups(
a_and_b_state_answer_groups)
b_state.update_interaction_answer_groups(
a_and_b_state_answer_groups)
c_state.update_interaction_answer_groups(
c_and_d_state_answer_groups)
d_state.update_interaction_answer_groups(
c_and_d_state_answer_groups)
# As a user can complete the exploration by going through c_state,
# d_state becomes bypassable. Hence, making d_state a checkpoint raises
# a validation error.
d_state.update_card_is_checkpoint(True)
with self.assertRaisesRegex(
Exception, 'Cannot make D a checkpoint as it is bypassable'
):
self.exploration.validate(strict=True)
d_state.update_card_is_checkpoint(False)
# Modifying the graph to make D non-bypassable.
# ┌────────────────┐
# │ Introduction* │
# └─┬─────┬──────┬─┘
# ┌───────────┐ │ │ │ ┌────────────┐
# │ A ├────┘ │ └─────┤ C │
# └────┬──────┘ │ └──────┬─────┘
# │ ┌────┴─────┐ │
# │ │ B │ │
# │ └────┬─────┘ │
# │ │ │
# │ ┌──────┴───────┐ │
# └──────────┤ D* ├───────────┘
# └──────┬───────┘
# │
# ┌─────┴─────┐
# │ End │
# └───────────┘.
# Answer group dict to connect c_state to d_state, thereby making
# d_state non-bypassable.
c_state_answer_groups = [
state_domain.AnswerGroup(
state_domain.Outcome(
'D', state_domain.SubtitledHtml(
'feedback_0', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_0',
'normalizedStrSet': ['Test0']
}
})
],
[],
None
)
]
c_state.update_interaction_answer_groups(
c_state_answer_groups)
d_state.update_card_is_checkpoint(True)
self.exploration.validate()
# Modifying the graph to add another EndExploration state.
# ┌────────────────┐
# │ Introduction* │
# └─┬─────┬──────┬─┘
# ┌───────────┐ │ │ │ ┌────────────┐
# │ A ├────┘ │ └─────┤ C │
# └────┬──────┘ │ └──────┬───┬─┘
# │ ┌────┴─────┐ │ │
# │ │ B │ │ │
# │ └────┬─────┘ │ │
# │ │ │ │
# │ ┌──────┴───────┐ │ │
# └──────────┤ D* ├───────────┘ │
# └──────┬───────┘ │
# │ │
# ┌─────┴─────┐ ┌─────┴─────┐
# │ End │ │ End 2 │
# └───────────┘ └───────────┘.
new_end_state = state_domain.State.create_default_state('End 2')
self.set_interaction_for_state(new_end_state, 'EndExploration')
new_end_state.update_interaction_default_outcome(None)
self.exploration.states = {
self.exploration.init_state_name: self.new_state,
'A': a_state,
'B': b_state,
'C': c_state,
'D': d_state,
'End': self.end_state,
'End 2': new_end_state
}
# Answer group dicts to connect c_state to d_state and new_end_state,
# making d_state bypassable.
c_state_answer_groups = [
state_domain.AnswerGroup(
state_domain.Outcome(
'D', state_domain.SubtitledHtml(
'feedback_0', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_0',
'normalizedStrSet': ['Test0']
}
})
],
[],
None
), state_domain.AnswerGroup(
state_domain.Outcome(
'End 2', state_domain.SubtitledHtml(
'feedback_1', '<p>Feedback</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_1',
'normalizedStrSet': ['Test1']
}
})
],
[],
None
)
]
c_state.update_interaction_answer_groups(
c_state_answer_groups)
with self.assertRaisesRegex(
Exception, 'Cannot make D a checkpoint as it is bypassable'
):
self.exploration.validate(strict=True)
d_state.update_card_is_checkpoint(False)
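# The three graphs above reduce to a single reachability rule: a checkpoint
# S is bypassable if some terminal state is reachable from the initial
# state without ever visiting S. An illustrative check (not the validator
# the domain layer actually ships) is a plain DFS that skips S:
def _is_bypassable(states, init_state_name, checkpoint_name):
    """states: dict mapping state name -> state_domain.State."""
    stack, seen = [init_state_name], set()
    while stack:
        name = stack.pop()
        if name == checkpoint_name or name in seen:
            continue
        seen.add(name)
        interaction = states[name].interaction
        if interaction.is_terminal:
            return True  # reached an end card without passing the checkpoint
        dests = [group.outcome.dest for group in interaction.answer_groups]
        if interaction.default_outcome:
            dests.append(interaction.default_outcome.dest)
        stack.extend(dests)
    return False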
class ExplorationDomainUnitTests(test_utils.GenericTestBase):
"""Test the exploration domain object."""
def setUp(self):
super().setUp()
translation_dict = {
'content_id_3': translation_domain.TranslatedContent(
'My name is Nikhil.', True)
}
self.dummy_entity_translations = translation_domain.EntityTranslation(
'exp_id', feconf.TranslatableEntityType.EXPLORATION, 1, 'en',
translation_dict)
# TODO(bhenning): The validation tests below should be split into separate
# unit tests. Also, all validation errors should be covered in the tests.
def test_validation(self):
"""Test validation of explorations."""
exploration = exp_domain.Exploration.create_default_exploration('eid')
exploration.init_state_name = ''
exploration.states = {}
exploration.title = 'Hello #'
self._assert_validation_error(exploration, 'Invalid character #')
exploration.title = 2
self._assert_validation_error(
exploration, 'Expected title to be a string, received 2')
exploration.title = 'Title'
exploration.category = 'Category'
# Note: If '/' ever becomes a valid state name, ensure that the rule
# editor frontend template is fixed -- it currently uses '/' as a
# sentinel for an invalid state name.
bad_state = state_domain.State.create_default_state('/')
exploration.states = {'/': bad_state}
self._assert_validation_error(
exploration, 'Invalid character / in a state name')
new_state = state_domain.State.create_default_state('ABC')
self.set_interaction_for_state(new_state, 'TextInput')
# The 'states' property must be a non-empty dict of states.
exploration.states = {}
self._assert_validation_error(
exploration, 'exploration has no states')
exploration.states = {'A string #': new_state}
self._assert_validation_error(
exploration, 'Invalid character # in a state name')
exploration.states = {'A string _': new_state}
self._assert_validation_error(
exploration, 'Invalid character _ in a state name')
exploration.states = {'ABC': new_state}
self._assert_validation_error(
exploration, 'has no initial state name')
exploration.init_state_name = 'initname'
self._assert_validation_error(
exploration,
r'There is no state in \[\'ABC\'\] corresponding to '
'the exploration\'s initial state name initname.')
# Test whether a default outcome to a non-existing state is invalid.
exploration.states = {exploration.init_state_name: new_state}
self._assert_validation_error(
exploration, 'destination ABC is not a valid')
# Restore a valid exploration.
init_state = exploration.states[exploration.init_state_name]
default_outcome = init_state.interaction.default_outcome
default_outcome.dest = exploration.init_state_name
init_state.update_interaction_default_outcome(default_outcome)
init_state.update_card_is_checkpoint(True)
exploration.validate()
# Ensure an invalid destination can also be detected for answer groups.
# Note: The state must keep its default_outcome, otherwise it will
# trigger a validation error for non-terminal states needing to have a
# default outcome. To validate the outcome of the answer group, this
# default outcome must point to a valid state.
init_state = exploration.states[exploration.init_state_name]
default_outcome = init_state.interaction.default_outcome
default_outcome.dest = exploration.init_state_name
old_answer_groups = copy.deepcopy(init_state.interaction.answer_groups)
old_answer_groups.append({
'outcome': {
'dest': exploration.init_state_name,
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'labelled_as_correct': False,
'param_changes': [],
'refresher_exploration_id': None,
'missing_prerequisite_skill_id': None
},
'rule_specs': [{
'inputs': {
'x': {
'contentId': 'rule_input_Equals',
'normalizedStrSet': ['Test']
}
},
'rule_type': 'Contains'
}],
'training_data': [],
'tagged_skill_misconception_id': None
})
new_answer_groups = [
state_domain.AnswerGroup.from_dict(answer_group)
for answer_group in old_answer_groups
]
init_state.update_interaction_answer_groups(new_answer_groups)
exploration.validate()
interaction = init_state.interaction
answer_groups = interaction.answer_groups
answer_group = answer_groups[0]
answer_group.outcome.dest = 'DEF'
self._assert_validation_error(
exploration, 'destination DEF is not a valid')
# Restore a valid exploration.
self.set_interaction_for_state(
init_state, 'TextInput')
new_answer_groups = [
state_domain.AnswerGroup.from_dict(answer_group)
for answer_group in old_answer_groups
]
init_state.update_interaction_answer_groups(new_answer_groups)
answer_groups = interaction.answer_groups
answer_group = answer_groups[0]
answer_group.outcome.dest = exploration.init_state_name
exploration.validate()
# Validate RuleSpec.
rule_spec = answer_group.rule_specs[0]
rule_spec.inputs = {}
self._assert_validation_error(
exploration, 'RuleSpec \'Contains\' is missing inputs')
rule_spec.inputs = 'Inputs string'
self._assert_validation_error(
exploration, 'Expected inputs to be a dict')
rule_spec.inputs = {'x': 'Test'}
rule_spec.rule_type = 'FakeRuleType'
self._assert_validation_error(exploration, 'Unrecognized rule type')
rule_spec.inputs = {'x': {
'contentId': 'rule_input_Equals',
'normalizedStrSet': 15
}}
rule_spec.rule_type = 'Contains'
with self.assertRaisesRegex(
AssertionError, 'Expected list, received 15'
):
exploration.validate()
self.set_interaction_for_state(
exploration.states[exploration.init_state_name],
'PencilCodeEditor')
temp_rule = old_answer_groups[0]['rule_specs'][0]
old_answer_groups[0]['rule_specs'][0] = {
'rule_type': 'ErrorContains',
'inputs': {'x': '{{ExampleParam}}'}
}
new_answer_groups = [
state_domain.AnswerGroup.from_dict(answer_group)
for answer_group in old_answer_groups
]
init_state.update_interaction_answer_groups(new_answer_groups)
old_answer_groups[0]['rule_specs'][0] = temp_rule
self._assert_validation_error(
exploration,
'RuleSpec \'ErrorContains\' has an input with name \'x\' which '
'refers to an unknown parameter within the exploration: '
'ExampleParam')
# Restore a valid exploration.
exploration.param_specs['ExampleParam'] = param_domain.ParamSpec(
'UnicodeString')
exploration.validate()
# Validate Outcome.
outcome = init_state.interaction.answer_groups[0].outcome
destination = exploration.init_state_name
outcome.dest = None
self._assert_validation_error(
exploration, 'Every outcome should have a destination.')
# Try setting the outcome destination to something other than a string.
outcome.dest = 15
self._assert_validation_error(
exploration, 'Expected outcome dest to be a string')
outcome.dest = destination
outcome.feedback = state_domain.SubtitledHtml('feedback_1', '')
exploration.validate()
outcome.labelled_as_correct = 'hello'
self._assert_validation_error(
exploration, 'The "labelled_as_correct" field should be a boolean')
# Test that labelled_as_correct must be False for self-loops, and that
# this causes a strict validation failure but not a normal validation
# failure.
outcome.labelled_as_correct = True
with self.assertRaisesRegex(
Exception, 'is labelled correct but is a self-loop.'
):
exploration.validate(strict=True)
exploration.validate()
outcome.labelled_as_correct = False
exploration.validate()
outcome.param_changes = 'Changes'
self._assert_validation_error(
exploration, 'Expected outcome param_changes to be a list')
outcome.param_changes = [param_domain.ParamChange(
0, 'generator_id', {})]
self._assert_validation_error(
exploration,
'Expected param_change name to be a string, received 0')
outcome.param_changes = []
exploration.validate()
outcome.refresher_exploration_id = 12345
self._assert_validation_error(
exploration,
'Expected outcome refresher_exploration_id to be a string')
outcome.refresher_exploration_id = None
exploration.validate()
outcome.refresher_exploration_id = 'valid_string'
exploration.validate()
outcome.missing_prerequisite_skill_id = 12345
self._assert_validation_error(
exploration,
'Expected outcome missing_prerequisite_skill_id to be a string')
outcome.missing_prerequisite_skill_id = None
exploration.validate()
outcome.missing_prerequisite_skill_id = 'valid_string'
exploration.validate()
# Test that refresher_exploration_id must be None for non-self-loops.
new_state_name = 'New state'
exploration.add_states([new_state_name])
outcome.dest = new_state_name
outcome.refresher_exploration_id = 'another_string'
self._assert_validation_error(
exploration,
'has a refresher exploration ID, but is not a self-loop')
outcome.refresher_exploration_id = None
exploration.validate()
exploration.delete_state(new_state_name)
# Validate InteractionInstance.
interaction.id = 15
self._assert_validation_error(
exploration, 'Expected interaction id to be a string')
interaction.id = 'SomeInteractionTypeThatDoesNotExist'
self._assert_validation_error(exploration, 'Invalid interaction id')
interaction.id = 'PencilCodeEditor'
self.set_interaction_for_state(init_state, 'TextInput')
new_answer_groups = [
state_domain.AnswerGroup.from_dict(answer_group)
for answer_group in old_answer_groups
]
init_state.update_interaction_answer_groups(new_answer_groups)
valid_text_input_cust_args = init_state.interaction.customization_args
rule_spec.inputs = {'x': {
'contentId': 'rule_input_Equals',
'normalizedStrSet': ['Test']
}}
rule_spec.rule_type = 'Contains'
exploration.validate()
interaction.customization_args = []
self._assert_validation_error(
exploration, 'Expected customization args to be a dict')
interaction.customization_args = {15: ''}
self._assert_validation_error(
exploration,
(
'Expected customization arg value to be a '
'InteractionCustomizationArg'
)
)
interaction.customization_args = {
15: state_domain.InteractionCustomizationArg('', {
'type': 'unicode'
})
}
self._assert_validation_error(
exploration, 'Invalid customization arg name')
interaction.customization_args = valid_text_input_cust_args
self.set_interaction_for_state(init_state, 'TextInput')
exploration.validate()
interaction.answer_groups = {}
self._assert_validation_error(
exploration, 'Expected answer groups to be a list')
new_answer_groups = [
state_domain.AnswerGroup.from_dict(answer_group)
for answer_group in old_answer_groups
]
init_state.update_interaction_answer_groups(new_answer_groups)
self.set_interaction_for_state(init_state, 'EndExploration')
self._assert_validation_error(
exploration,
'Terminal interactions must not have a default outcome.')
self.set_interaction_for_state(init_state, 'TextInput')
init_state.update_interaction_default_outcome(None)
self._assert_validation_error(
exploration,
'Non-terminal interactions must have a default outcome.')
self.set_interaction_for_state(init_state, 'EndExploration')
init_state.interaction.answer_groups = answer_groups
self._assert_validation_error(
exploration,
'Terminal interactions must not have any answer groups.')
# A terminal interaction without a default outcome or answer group is
# valid. This resets the exploration back to a valid state.
init_state.interaction.answer_groups = []
exploration.validate()
# Restore a valid exploration.
self.set_interaction_for_state(init_state, 'TextInput')
init_state.update_interaction_answer_groups(answer_groups)
init_state.update_interaction_default_outcome(default_outcome)
exploration.validate()
solution_dict = {
'answer_is_exclusive': True,
'correct_answer': 'hello_world!',
'explanation': {
'content_id': 'solution',
'html': 'hello_world is a string'
}
}
solution = state_domain.Solution.from_dict(
init_state.interaction.id, solution_dict)
init_state.update_interaction_solution(solution)
self._assert_validation_error(
exploration,
re.escape('Hint(s) must be specified if solution is specified'))
init_state.update_interaction_solution(None)
interaction.hints = {}
self._assert_validation_error(
exploration, 'Expected hints to be a list')
interaction.hints = []
# Validate AnswerGroup.
state_answer_group = state_domain.AnswerGroup(
state_domain.Outcome(
exploration.init_state_name, state_domain.SubtitledHtml(
'feedback_1', 'Feedback'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_Contains',
'normalizedStrSet': ['Test']
}
})
],
[],
1
)
init_state.update_interaction_answer_groups([state_answer_group])
self._assert_validation_error(
exploration,
'Expected tagged skill misconception id to be a str, received 1')
state_answer_group = state_domain.AnswerGroup(
state_domain.Outcome(
exploration.init_state_name, state_domain.SubtitledHtml(
'feedback_1', 'Feedback'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_Contains',
'normalizedStrSet': ['Test']
}
})
],
[],
'invalid_tagged_skill_misconception_id'
)
init_state.update_interaction_answer_groups([state_answer_group])
self._assert_validation_error(
exploration,
'Expected the format of tagged skill misconception id '
'to be <skill_id>-<misconception_id>, received '
'invalid_tagged_skill_misconception_id')
init_state.interaction.answer_groups[0].rule_specs = {}
self._assert_validation_error(
exploration, 'Expected answer group rules to be a list')
first_answer_group = init_state.interaction.answer_groups[0]
first_answer_group.tagged_skill_misconception_id = None
first_answer_group.rule_specs = []
self._assert_validation_error(
exploration,
'There must be at least one rule or training data for each'
' answer group.')
exploration.states = {
exploration.init_state_name: (
state_domain.State.create_default_state(
exploration.init_state_name, is_initial_state=True))
}
self.set_interaction_for_state(
exploration.states[exploration.init_state_name], 'TextInput')
exploration.validate()
exploration.language_code = 'fake_code'
self._assert_validation_error(exploration, 'Invalid language_code')
exploration.language_code = 'English'
self._assert_validation_error(exploration, 'Invalid language_code')
exploration.language_code = 'en'
exploration.validate()
exploration.param_specs = 'A string'
self._assert_validation_error(exploration, 'param_specs to be a dict')
exploration.param_specs = {
'@': param_domain.ParamSpec.from_dict({
'obj_type': 'UnicodeString'
})
}
self._assert_validation_error(
exploration, 'Only parameter names with characters')
exploration.param_specs = {
'notAParamSpec': param_domain.ParamSpec.from_dict(
{'obj_type': 'UnicodeString'})
}
exploration.validate()
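# _assert_validation_error, used throughout the test above, is provided by
# test_utils.GenericTestBase; functionally it wraps assertRaisesRegex
# around validate(). A minimal equivalent (sketch, not the shared
# implementation):
def _assert_validation_error_sketch(self, item, expected_error_substring):
    with self.assertRaisesRegex(
            utils.ValidationError, expected_error_substring):
        item.validate()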
def test_tag_validation(self):
"""Test validation of exploration tags."""
exploration = exp_domain.Exploration.create_default_exploration('eid')
exploration.objective = 'Objective'
init_state = exploration.states[exploration.init_state_name]
self.set_interaction_for_state(init_state, 'EndExploration')
init_state.update_interaction_default_outcome(None)
exploration.validate()
exploration.tags = 'this should be a list'
self._assert_validation_error(
exploration, 'Expected \'tags\' to be a list')
exploration.tags = [123]
self._assert_validation_error(exploration, 'to be a string')
exploration.tags = ['abc', 123]
self._assert_validation_error(exploration, 'to be a string')
exploration.tags = ['']
self._assert_validation_error(exploration, 'Tags should be non-empty')
exploration.tags = ['123']
self._assert_validation_error(
exploration, 'should only contain lowercase letters and spaces')
exploration.tags = ['ABC']
self._assert_validation_error(
exploration, 'should only contain lowercase letters and spaces')
exploration.tags = [' a b']
self._assert_validation_error(
exploration, 'Tags should not start or end with whitespace')
exploration.tags = ['a b ']
self._assert_validation_error(
exploration, 'Tags should not start or end with whitespace')
exploration.tags = ['a b']
self._assert_validation_error(
exploration, 'Adjacent whitespace in tags should be collapsed')
exploration.tags = ['abc', 'abc']
self._assert_validation_error(
exploration, 'Some tags duplicate each other')
exploration.tags = ['computer science', 'analysis', 'a b c']
exploration.validate()
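# The rules exercised above, restated: tags must form a list of non-empty
# strings of lowercase letters and spaces, with no leading/trailing or
# adjacent whitespace and no duplicates. The same rules as a standalone
# validator (illustrative sketch):
def _validate_tags_sketch(self, tags):
    if not isinstance(tags, list):
        raise utils.ValidationError("Expected 'tags' to be a list")
    for tag in tags:
        if not isinstance(tag, str):
            raise utils.ValidationError('Expected each tag to be a string')
        if not tag:
            raise utils.ValidationError('Tags should be non-empty')
        if not re.fullmatch(r'[a-z ]+', tag):
            raise utils.ValidationError(
                'Tags should only contain lowercase letters and spaces')
        if tag != tag.strip():
            raise utils.ValidationError(
                'Tags should not start or end with whitespace')
        if '  ' in tag:
            raise utils.ValidationError(
                'Adjacent whitespace in tags should be collapsed')
    if len(set(tags)) != len(tags):
        raise utils.ValidationError('Some tags duplicate each other')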
def test_title_category_and_objective_validation(self):
"""Test that titles, categories and objectives are validated only in
'strict' mode.
"""
self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration = exp_fetchers.get_exploration_by_id('exp_id')
exploration.validate()
with self.assertRaisesRegex(
utils.ValidationError, 'title must be specified'
):
exploration.validate(strict=True)
exploration.title = 'A title'
with self.assertRaisesRegex(
utils.ValidationError, 'category must be specified'
):
exploration.validate(strict=True)
exploration.category = 'A category'
with self.assertRaisesRegex(
utils.ValidationError, 'objective must be specified'
):
exploration.validate(strict=True)
exploration.objective = 'An objective'
exploration.validate(strict=True)
def test_get_trainable_states_dict_with_changed_answer_group(self):
exp_id = 'exp_id1'
test_exp_filepath = os.path.join(
feconf.TESTS_DATA_DIR, 'string_classifier_test.yaml')
yaml_content = utils.get_file_contents(test_exp_filepath)
assets_list = []
exp_services.save_new_exploration_from_yaml_and_assets(
feconf.SYSTEM_COMMITTER_ID, yaml_content, exp_id,
assets_list)
exploration_model = exp_models.ExplorationModel.get(
exp_id, strict=False)
old_states = exp_fetchers.get_exploration_from_model(
exploration_model).states
exploration = exp_fetchers.get_exploration_by_id(exp_id)
exploration.rename_state('Home', 'Renamed state')
old_states['Home'].update_interaction_id(42)
change_list = [exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'Home',
'new_state_name': 'Renamed state'
})]
expected_dict = {
'state_names_with_changed_answer_groups': ['Renamed state'],
'state_names_with_unchanged_answer_groups': []
}
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
actual_dict = exploration.get_trainable_states_dict(
old_states, exp_versions_diff)
self.assertEqual(actual_dict, expected_dict)
def test_get_trainable_states_dict(self):
"""Test the get_trainable_states_dict() method."""
exp_id = 'exp_id1'
test_exp_filepath = os.path.join(
feconf.TESTS_DATA_DIR, 'string_classifier_test.yaml')
yaml_content = utils.get_file_contents(test_exp_filepath)
assets_list = []
exp_services.save_new_exploration_from_yaml_and_assets(
feconf.SYSTEM_COMMITTER_ID, yaml_content, exp_id,
assets_list)
exploration_model = exp_models.ExplorationModel.get(
exp_id, strict=False)
old_states = exp_fetchers.get_exploration_from_model(
exploration_model).states
exploration = exp_fetchers.get_exploration_by_id(exp_id)
# Rename a state so that it is reported among the unchanged answer groups.
exploration.rename_state('Home', 'Renamed state')
change_list = [exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'Home',
'new_state_name': 'Renamed state'
})]
expected_dict = {
'state_names_with_changed_answer_groups': [],
'state_names_with_unchanged_answer_groups': ['Renamed state']
}
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
actual_dict = exploration.get_trainable_states_dict(
old_states, exp_versions_diff)
self.assertEqual(actual_dict, expected_dict)
# Modify answer groups to trigger change in answer groups.
state = exploration.states['Renamed state']
exploration.states['Renamed state'].interaction.answer_groups.insert(
3, state.interaction.answer_groups[3])
answer_groups = []
for answer_group in state.interaction.answer_groups:
answer_groups.append(answer_group.to_dict())
change_list = [exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'Renamed state',
'property_name': 'answer_groups',
'new_value': answer_groups
})]
expected_dict = {
'state_names_with_changed_answer_groups': ['Renamed state'],
'state_names_with_unchanged_answer_groups': []
}
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
actual_dict = exploration.get_trainable_states_dict(
old_states, exp_versions_diff)
self.assertEqual(actual_dict, expected_dict)
# Add new state to trigger change in answer groups.
exploration.add_states(['New state'])
exploration.states['New state'] = copy.deepcopy(
exploration.states['Renamed state'])
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state',
})]
expected_dict = {
'state_names_with_changed_answer_groups': [
'Renamed state', 'New state'],
'state_names_with_unchanged_answer_groups': []
}
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
actual_dict = exploration.get_trainable_states_dict(
old_states, exp_versions_diff)
self.assertEqual(actual_dict, expected_dict)
# Delete state.
exploration.delete_state('New state')
change_list = [exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'New state'
})]
expected_dict = {
'state_names_with_changed_answer_groups': ['Renamed state'],
'state_names_with_unchanged_answer_groups': []
}
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
actual_dict = exploration.get_trainable_states_dict(
old_states, exp_versions_diff)
self.assertEqual(actual_dict, expected_dict)
# Test addition and multiple renames.
exploration.add_states(['New state'])
exploration.states['New state'] = copy.deepcopy(
exploration.states['Renamed state'])
exploration.rename_state('New state', 'New state2')
exploration.rename_state('New state2', 'New state3')
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state',
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state',
'new_state_name': 'New state2'
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state2',
'new_state_name': 'New state3'
})]
expected_dict = {
'state_names_with_changed_answer_groups': [
'Renamed state', 'New state3'
],
'state_names_with_unchanged_answer_groups': []
}
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
actual_dict = exploration.get_trainable_states_dict(
old_states, exp_versions_diff)
self.assertEqual(actual_dict, expected_dict)
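# The expected_dicts above follow from one comparison: map each new state
# name back through the versions diff to its old name, then compare the
# interactions; newly added states always count as changed. A sketch of
# that classification (illustrative only -- the production method also
# restricts itself to states whose interactions are trainable):
def _classify_trainable_states_sketch(
        self, new_states, old_states, versions_diff):
    new_to_old = {
        new: old
        for old, new in versions_diff.old_to_new_state_names.items()}
    changed, unchanged = [], []
    for name, state in new_states.items():
        old_name = new_to_old.get(name, name)
        if (name in versions_diff.added_state_names
                or old_name not in old_states):
            changed.append(name)
            continue
        old_interaction = old_states[old_name].interaction
        if (old_interaction.id != state.interaction.id
                or [g.to_dict() for g in old_interaction.answer_groups]
                != [g.to_dict() for g in state.interaction.answer_groups]):
            changed.append(name)
        else:
            unchanged.append(name)
    return {
        'state_names_with_changed_answer_groups': changed,
        'state_names_with_unchanged_answer_groups': unchanged,
    }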
def test_get_languages_with_complete_translation(self):
exploration = exp_domain.Exploration.create_default_exploration('0')
self.assertEqual(
exploration.get_languages_with_complete_translation(), [])
written_translations = state_domain.WrittenTranslations.from_dict({
'translations_mapping': {
'content': {
'hi': {
'data_format': 'html',
'translation': '<p>Translation in Hindi.</p>',
'needs_update': False
}
}
}
})
exploration.states[
feconf.DEFAULT_INIT_STATE_NAME].update_written_translations(
written_translations)
self.assertEqual(
exploration.get_languages_with_complete_translation(), ['hi'])
def test_get_translation_counts_with_no_needs_update(self):
exploration = exp_domain.Exploration.create_default_exploration('0')
self.assertEqual(
exploration.get_translation_counts(), {})
init_state = exploration.states[exploration.init_state_name]
init_state.update_content(
state_domain.SubtitledHtml.from_dict({
'content_id': 'content',
'html': '<p>This is content</p>'
}))
init_state.update_interaction_id('TextInput')
default_outcome = state_domain.Outcome(
'Introduction', state_domain.SubtitledHtml(
'default_outcome', '<p>The default outcome.</p>'),
False, [], None, None
)
init_state.update_interaction_default_outcome(default_outcome)
written_translations = state_domain.WrittenTranslations.from_dict({
'translations_mapping': {
'content': {
'hi': {
'data_format': 'html',
'translation': '<p>Translation in Hindi.</p>',
'needs_update': False
}
},
'default_outcome': {
'hi': {
'data_format': 'html',
'translation': '<p>Translation in Hindi.</p>',
'needs_update': False
}
}
}
})
init_state.update_written_translations(written_translations)
exploration.add_states(['New state'])
new_state = exploration.states['New state']
new_state.update_content(
state_domain.SubtitledHtml.from_dict({
'content_id': 'content',
'html': '<p>This is content</p>'
}))
new_state.update_interaction_id('TextInput')
default_outcome = state_domain.Outcome(
'Introduction', state_domain.SubtitledHtml(
'default_outcome', '<p>The default outcome.</p>'),
False, [], None, None)
new_state.update_interaction_default_outcome(default_outcome)
written_translations = state_domain.WrittenTranslations.from_dict({
'translations_mapping': {
'content': {
'hi': {
'data_format': 'html',
'translation': '<p>New state translation in Hindi.</p>',
'needs_update': False
}
},
'default_outcome': {
'hi': {
'data_format': 'html',
'translation': '<p>New State translation in Hindi.</p>',
'needs_update': False
}
}
}
})
new_state.update_written_translations(written_translations)
self.assertEqual(
exploration.get_translation_counts(), {'hi': 4})
def test_get_translation_counts_with_needs_update(self):
exploration = exp_domain.Exploration.create_default_exploration('0')
self.assertEqual(
exploration.get_translation_counts(), {})
init_state = exploration.states[feconf.DEFAULT_INIT_STATE_NAME]
init_state.update_content(
state_domain.SubtitledHtml.from_dict({
'content_id': 'content',
'html': '<p>This is content</p>'
}))
init_state.update_interaction_id('TextInput')
default_outcome = state_domain.Outcome(
'Introduction', state_domain.SubtitledHtml(
'default_outcome', '<p>The default outcome.</p>'),
False, [], None, None
)
init_state.update_interaction_default_outcome(default_outcome)
written_translations = state_domain.WrittenTranslations.from_dict({
'translations_mapping': {
'content': {
'hi': {
'data_format': 'html',
'translation': '<p>Translation in Hindi.</p>',
'needs_update': True
}
},
'default_outcome': {
'hi': {
'data_format': 'html',
'translation': '<p>Translation in Hindi.</p>',
'needs_update': False
}
}
}
})
init_state.update_written_translations(written_translations)
self.assertEqual(
exploration.get_translation_counts(), {'hi': 1})
def test_get_translation_counts_with_translation_in_multiple_lang(self):
exploration = exp_domain.Exploration.create_default_exploration('0')
self.assertEqual(
exploration.get_translation_counts(), {})
init_state = exploration.states[feconf.DEFAULT_INIT_STATE_NAME]
init_state.update_content(
state_domain.SubtitledHtml.from_dict({
'content_id': 'content',
'html': '<p>This is content</p>'
}))
init_state.update_interaction_id('TextInput')
default_outcome = state_domain.Outcome(
'Introduction', state_domain.SubtitledHtml(
'default_outcome', '<p>The default outcome.</p>'),
False, [], None, None
)
init_state.update_interaction_default_outcome(default_outcome)
written_translations = state_domain.WrittenTranslations.from_dict({
'translations_mapping': {
'content': {
'hi-en': {
'data_format': 'html',
'translation': '<p>Translation in Hindi.</p>',
'needs_update': False
},
'hi': {
'data_format': 'html',
'translation': '<p>Translation in Hindi.</p>',
'needs_update': False
}
},
'default_outcome': {
'hi': {
'data_format': 'html',
'translation': '<p>Translation in Hindi.</p>',
'needs_update': False
}
}
}
})
init_state.update_written_translations(written_translations)
self.assertEqual(
exploration.get_translation_counts(), {
'hi': 2,
'hi-en': 1
})
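# Across the three get_translation_counts tests: a written translation is
# counted towards its language only when needs_update is False, and counts
# accumulate per language code over every state. Equivalent counting logic
# (sketch only; the domain object computes this itself):
def _count_translations_sketch(self, states):
    counts = {}
    for state in states.values():
        mapping = state.written_translations.translations_mapping
        for language_to_translation in mapping.values():
            for language_code, translation in language_to_translation.items():
                if not translation.needs_update:
                    counts[language_code] = counts.get(language_code, 0) + 1
    return counts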
def test_get_content_count(self):
# Adds 1 to the exploration's content count (content, default_outcome).
exploration = exp_domain.Exploration.create_default_exploration('0')
self.assertEqual(exploration.get_content_count(), 1)
# Adds 2 to the exploration's content count (content, default_outcome).
exploration.add_states(['New state'])
init_state = exploration.states[exploration.init_state_name]
# Adds 1 to the exploration's content count (ca_placeholder_0).
self.set_interaction_for_state(init_state, 'TextInput')
state_answer_group = state_domain.AnswerGroup(
state_domain.Outcome(
exploration.init_state_name, state_domain.SubtitledHtml(
'feedback_1', 'Feedback'),
False, [], None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_5',
'normalizedStrSet': ['Test']
}
})
],
[],
None
)
# Adds 1 to the exploration's content count (feedback_1).
init_state.update_interaction_answer_groups([state_answer_group])
hints_list = [
state_domain.Hint(
state_domain.SubtitledHtml('hint_1', '<p>hint one</p>')
)
]
# Adds 1 to the exploration's content count (hint_1).
init_state.update_interaction_hints(hints_list)
solution_dict = {
'answer_is_exclusive': False,
'correct_answer': 'helloworld!',
'explanation': {
'content_id': 'solution',
'html': '<p>hello_world is a string</p>'
},
}
solution = state_domain.Solution.from_dict(
init_state.interaction.id, solution_dict)
# Adds 1 to the exploration's content count (solution).
init_state.update_interaction_solution(solution)
self.assertEqual(exploration.get_content_count(), 6)
def test_get_content_with_correct_state_name_returns_html(self):
exploration = exp_domain.Exploration.create_default_exploration('0')
init_state = exploration.states[exploration.init_state_name]
self.set_interaction_for_state(init_state, 'TextInput')
hints_list = [
state_domain.Hint(
state_domain.SubtitledHtml('hint_1', '<p>hint one</p>')
)
]
init_state.update_interaction_hints(hints_list)
self.assertEqual(
exploration.get_content_html(exploration.init_state_name, 'hint_1'),
'<p>hint one</p>')
hints_list[0].hint_content.html = '<p>Changed hint one</p>'
init_state.update_interaction_hints(hints_list)
self.assertEqual(
exploration.get_content_html(exploration.init_state_name, 'hint_1'),
'<p>Changed hint one</p>')
def test_get_content_with_incorrect_state_name_raise_error(self):
exploration = exp_domain.Exploration.create_default_exploration('0')
init_state = exploration.states[exploration.init_state_name]
self.set_interaction_for_state(init_state, 'TextInput')
hints_list = [
state_domain.Hint(
state_domain.SubtitledHtml('hint_1', '<p>hint one</p>')
)
]
init_state.update_interaction_hints(hints_list)
self.assertEqual(
exploration.get_content_html(exploration.init_state_name, 'hint_1'),
'<p>hint one</p>')
with self.assertRaisesRegex(
ValueError, 'State Invalid state does not exist'):
exploration.get_content_html('Invalid state', 'hint_1')
def test_is_demo_property(self):
"""Test the is_demo property."""
demo = exp_domain.Exploration.create_default_exploration('0')
self.assertEqual(demo.is_demo, True)
notdemo1 = exp_domain.Exploration.create_default_exploration('a')
self.assertEqual(notdemo1.is_demo, False)
notdemo2 = exp_domain.Exploration.create_default_exploration('abcd')
self.assertEqual(notdemo2.is_demo, False)
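# The assertions above are consistent with is_demo being a membership test
# against the built-in demo registry. A plausible implementation sketch
# (an assumption -- the registry name feconf.DEMO_EXPLORATIONS and the
# exact check may differ in the production property):
def _is_demo_sketch(self, exploration):
    return exploration.id in feconf.DEMO_EXPLORATIONS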
def test_has_state_name(self):
"""Test for has_state_name."""
demo = exp_domain.Exploration.create_default_exploration('0')
state_names = list(demo.states.keys())
self.assertEqual(state_names, ['Introduction'])
self.assertEqual(demo.has_state_name('Introduction'), True)
self.assertEqual(demo.has_state_name('Fake state name'), False)
def test_get_interaction_id_by_state_name(self):
"""Test for get_interaction_id_by_state_name."""
demo = exp_domain.Exploration.create_default_exploration('0')
self.assertEqual(
demo.get_interaction_id_by_state_name('Introduction'), None)
def test_exploration_export_import(self):
"""Test that to_dict and from_dict preserve all data within an
exploration.
"""
demo = exp_domain.Exploration.create_default_exploration('0')
demo_dict = demo.to_dict()
exp_from_dict = exp_domain.Exploration.from_dict(demo_dict)
self.assertEqual(exp_from_dict.to_dict(), demo_dict)
def test_interaction_with_none_id_is_not_terminal(self):
"""Test that an interaction with an id of None leads to is_terminal
being false.
"""
# Default exploration has a default interaction with an ID of None.
demo = exp_domain.Exploration.create_default_exploration('0')
init_state = demo.states[feconf.DEFAULT_INIT_STATE_NAME]
self.assertFalse(init_state.interaction.is_terminal)
def test_cannot_create_demo_exp_with_invalid_param_changes(self):
demo_exp = exp_domain.Exploration.create_default_exploration('0')
demo_dict = demo_exp.to_dict()
new_state = state_domain.State.create_default_state('new_state_name')
new_state.param_changes = [param_domain.ParamChange.from_dict({
'customization_args': {
'list_of_values': ['1', '2'], 'parse_with_jinja': False
},
'name': 'myParam',
'generator_id': 'RandomSelector'
})]
demo_dict['states']['new_state_name'] = new_state.to_dict()
demo_dict['param_specs'] = {
'ParamSpec': {'obj_type': 'UnicodeString'}
}
with self.assertRaisesRegex(
Exception,
'Parameter myParam was used in a state but not '
'declared in the exploration param_specs.'):
exp_domain.Exploration.from_dict(demo_dict)
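# For the from_dict() call above to succeed, every parameter used in a
# state must be declared at the exploration level. A minimal sketch of a
# consistent pairing (illustrative values):
#
#     demo_dict['param_specs'] = {
#         'myParam': {'obj_type': 'UnicodeString'}
#     }
#
# i.e. the key in param_specs must match the 'name' field of the
# ParamChange, which is what the failing dict above gets wrong by
# declaring 'ParamSpec' instead of 'myParam'.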
def test_validate_exploration_category(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.category = 1
with self.assertRaisesRegex(
Exception, 'Expected category to be a string, received 1'):
exploration.validate()
def test_validate_exploration_objective(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.objective = 1
with self.assertRaisesRegex(
Exception, 'Expected objective to be a string, received 1'):
exploration.validate()
def test_validate_exploration_blurb(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.blurb = 1
with self.assertRaisesRegex(
Exception, 'Expected blurb to be a string, received 1'):
exploration.validate()
def test_validate_exploration_language_code(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.language_code = 1
with self.assertRaisesRegex(
Exception, 'Expected language_code to be a string, received 1'):
exploration.validate()
def test_validate_exploration_author_notes(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.author_notes = 1
with self.assertRaisesRegex(
Exception, 'Expected author_notes to be a string, received 1'):
exploration.validate()
def test_validate_exploration_states(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.states = 1
with self.assertRaisesRegex(
Exception, 'Expected states to be a dict, received 1'):
exploration.validate()
def test_validate_exploration_state_param_changes_invalid_name(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.add_states(['state1'])
new_state = exploration.states['state1']
new_state.param_changes = [param_domain.ParamChange.from_dict({
'customization_args': {
'list_of_values': ['1', '2'], 'parse_with_jinja': False
},
'name': 'all',
'generator_id': 'RandomSelector'
})]
with self.assertRaisesRegex(
Exception, 'The parameter name \'all\''
' is reserved. Please choose a different name for the'
' parameter being set in state \'state1\'.'):
exploration.validate()
def test_validate_exploration_state_param_changes_not_in_param_specs(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.add_states(['state1'])
new_state = exploration.states['state1']
new_state.param_changes = [param_domain.ParamChange.from_dict({
'customization_args': {
'list_of_values': ['1', '2'], 'parse_with_jinja': False
},
'name': 'invalid',
'generator_id': 'RandomSelector'
})]
with self.assertRaisesRegex(
Exception, 'The parameter with name \'invalid\''
' was set in state \'state1\', but it does not exist'
' in the list of parameter specifications for this exploration.'):
exploration.validate()
def test_validate_exploration_outcome_dest(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.init_state.interaction.default_outcome.dest = None
with self.assertRaisesRegex(
Exception, 'Every outcome should have a destination.'):
exploration.validate()
def test_validate_exploration_outcome_dest_type(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.init_state.interaction.default_outcome.dest = 1
with self.assertRaisesRegex(
Exception, 'Expected outcome dest to be a string, received 1'):
exploration.validate()
def test_validate_exploration_states_schema_version(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.states_schema_version = None
with self.assertRaisesRegex(
Exception, 'This exploration has no states schema version.'):
exploration.validate()
def test_validate_exploration_auto_tts_enabled(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.auto_tts_enabled = 1
with self.assertRaisesRegex(
Exception, 'Expected auto_tts_enabled to be a bool, received 1'):
exploration.validate()
def test_validate_exploration_correctness_feedback_enabled(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.correctness_feedback_enabled = 1
with self.assertRaisesRegex(
Exception,
'Expected correctness_feedback_enabled to be a bool, received 1'):
exploration.validate()
def test_validate_exploration_param_specs(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.param_specs = {
1: param_domain.ParamSpec.from_dict(
{'obj_type': 'UnicodeString'})
}
with self.assertRaisesRegex(
Exception, 'Expected parameter name to be a string, received 1'):
exploration.validate()
def test_validate_exploration_param_changes_type(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.param_changes = 1
with self.assertRaisesRegex(
Exception, 'Expected param_changes to be a list, received 1'):
exploration.validate()
def test_validate_exploration_param_name(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.param_changes = [param_domain.ParamChange.from_dict({
'customization_args': {
'list_of_values': ['1', '2'], 'parse_with_jinja': False
},
'name': 'invalid',
'generator_id': 'RandomSelector'
})]
with self.assertRaisesRegex(
Exception,
'No parameter named \'invalid\' exists in this '
'exploration'):
exploration.validate()
def test_validate_exploration_reserved_param_name(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.param_changes = [param_domain.ParamChange.from_dict({
'customization_args': {
'list_of_values': ['1', '2'], 'parse_with_jinja': False
},
'name': 'all',
'generator_id': 'RandomSelector'
})]
with self.assertRaisesRegex(
Exception,
'The exploration-level parameter with name \'all\' is '
'reserved. Please choose a different name.'):
exploration.validate()
def test_validate_exploration_is_non_self_loop(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
exploration.add_states(['DEF'])
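# A non-None refresher_exploration_id is only valid on a self-loop; the
# literal string passed below deliberately triggers that check.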
default_outcome = state_domain.Outcome(
'DEF', state_domain.SubtitledHtml(
'default_outcome', '<p>Default outcome for state1</p>'),
False, [], 'refresher_exploration_id', None,
)
exploration.init_state.update_interaction_default_outcome(
default_outcome
)
with self.assertRaisesRegex(
Exception,
'The default outcome for state Introduction has a refresher '
'exploration ID, but is not a self-loop.'):
exploration.validate()
def test_validate_exploration_answer_group_parameter(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.validate()
param_changes = [param_domain.ParamChange(
'ParamChange', 'RandomSelector', {
'list_of_values': ['1', '2'], 'parse_with_jinja': False
}
)]
state_answer_group = state_domain.AnswerGroup(
state_domain.Outcome(
exploration.init_state_name, state_domain.SubtitledHtml(
'feedback_1', 'Feedback'),
False, param_changes, None, None),
[
state_domain.RuleSpec(
'Contains',
{
'x':
{
'contentId': 'rule_input_Equals',
'normalizedStrSet': ['Test']
}
})
],
[],
None
)
exploration.init_state.update_interaction_answer_groups(
[state_answer_group])
with self.assertRaisesRegex(
Exception,
'The parameter ParamChange was used in an answer group, '
'but it does not exist in this exploration'):
exploration.validate()
def test_verify_all_states_reachable(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'owner_id')
exploration.validate()
exploration.add_states(['End'])
end_state = exploration.states['End']
self.set_interaction_for_state(end_state, 'EndExploration')
end_state.update_interaction_default_outcome(None)
with self.assertRaisesRegex(
Exception,
'Please fix the following issues before saving this exploration: '
'1. The following states are not reachable from the initial state: '
'End 2. It is impossible to complete the exploration from the '
'following states: Introduction'):
exploration.validate(strict=True)
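# Reachability as validated above can be thought of as a breadth-first
# walk over outcome destinations starting from the initial state. A
# hedged sketch (helper name is illustrative, not the validator's code):
#
#     def reachable_states(exploration):
#         seen = {exploration.init_state_name}
#         queue = [exploration.init_state_name]
#         while queue:
#             interaction = exploration.states[queue.pop(0)].interaction
#             dests = [g.outcome.dest for g in interaction.answer_groups]
#             if interaction.default_outcome:
#                 dests.append(interaction.default_outcome.dest)
#             for dest in dests:
#                 if dest not in seen:
#                     seen.add(dest)
#                     queue.append(dest)
#         return seen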
def test_update_init_state_name_with_invalid_state(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='title', category='category',
objective='objective', end_state_name='End')
exploration.update_init_state_name('End')
self.assertEqual(exploration.init_state_name, 'End')
with self.assertRaisesRegex(
Exception,
'Invalid new initial state name: invalid_state;'):
exploration.update_init_state_name('invalid_state')
def test_rename_state_with_invalid_state(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='title', category='category',
objective='objective', end_state_name='End')
self.assertTrue(exploration.states.get('End'))
self.assertFalse(exploration.states.get('new state name'))
exploration.rename_state('End', 'new state name')
self.assertFalse(exploration.states.get('End'))
self.assertTrue(exploration.states.get('new state name'))
with self.assertRaisesRegex(
Exception, 'State invalid_state does not exist'):
exploration.rename_state('invalid_state', 'new state name')
def test_default_outcome_is_labelled_incorrect_for_self_loop(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='title', category='category',
objective='objective', end_state_name='End')
exploration.validate(strict=True)
(
exploration.init_state.interaction.default_outcome
.labelled_as_correct) = True
(
exploration.init_state.interaction.default_outcome
.dest) = exploration.init_state_name
with self.assertRaisesRegex(
Exception,
'The default outcome for state Introduction is labelled '
'correct but is a self-loop'):
exploration.validate(strict=True)
def test_serialize_and_deserialize_returns_unchanged_exploration(self):
"""Checks that serializing and then deserializing a default exploration
works as intended by leaving the exploration unchanged.
"""
exploration = exp_domain.Exploration.create_default_exploration('eid')
self.assertEqual(
exploration.to_dict(),
exp_domain.Exploration.deserialize(
exploration.serialize()).to_dict())
def test_get_all_translatable_content_for_exp(self):
"""Get all translatable fields from exploration."""
exploration = exp_domain.Exploration.create_default_exploration(
'exp_id')
exploration.add_states(['State1'])
state = exploration.states['State1']
state_content_dict = {
'content_id': 'content',
'html': '<p>state content html</p>'
}
state_answer_group = [state_domain.AnswerGroup(
state_domain.Outcome(
exploration.init_state_name, state_domain.SubtitledHtml(
'feedback_1', '<p>state outcome html</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Equals', {
'x': {
'contentId': 'rule_input_Equals',
'normalizedStrSet': ['Test']
}})
],
[],
None
)]
state_default_outcome = state_domain.Outcome(
'State1', state_domain.SubtitledHtml(
'default_outcome', '<p>Default outcome for State1</p>'),
False, [], None, None
)
state_hint_list = [
state_domain.Hint(
state_domain.SubtitledHtml(
'hint_1', '<p>Hello, this is html1 for state1</p>'
)
),
state_domain.Hint(
state_domain.SubtitledHtml(
'hint_2', '<p>Hello, this is html2 for state1</p>'
)
),
]
state_solution_dict = {
'answer_is_exclusive': True,
'correct_answer': 'Answer1',
'explanation': {
'content_id': 'solution',
'html': '<p>This is solution for state1</p>'
}
}
state_interaction_cust_args = {
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
},
'rows': {'value': 1}
}
state.update_next_content_id_index(3)
state.update_content(
state_domain.SubtitledHtml.from_dict(state_content_dict))
state.update_interaction_id('TextInput')
state.update_interaction_customization_args(state_interaction_cust_args)
state.update_interaction_answer_groups(
state_answer_group)
state.update_interaction_default_outcome(state_default_outcome)
state.update_interaction_hints(state_hint_list)
solution = state_domain.Solution.from_dict(
state.interaction.id, state_solution_dict)
state.update_interaction_solution(solution)
translatable_contents = [
translatable_content.content_value
for translatable_content in
exploration.get_all_contents_which_need_translations(
self.dummy_entity_translations)
]
self.assertItemsEqual(
translatable_contents,
[
'<p>state outcome html</p>',
'<p>Default outcome for State1</p>',
'<p>Hello, this is html1 for state1</p>',
['Test'],
'<p>Hello, this is html2 for state1</p>',
'<p>This is solution for state1</p>',
'<p>state content html</p>'
])
self.assertEqual(
2,
len(exploration.get_translatable_text(exploration.language_code)))
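# Note that the expected list above mixes HTML strings with the raw
# ['Test'] rule input: translatable content is not limited to
# SubtitledHtml fields; normalizedStrSet rule inputs are collected for
# translation as well.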
class ExplorationSummaryTests(test_utils.GenericTestBase):
def setUp(self):
super().setUp()
self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
self.owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)
exploration = exp_domain.Exploration.create_default_exploration('eid')
exp_services.save_new_exploration(self.owner_id, exploration)
self.exp_summary = exp_fetchers.get_exploration_summary_by_id('eid')
self.exp_summary.editor_ids = ['editor_id']
self.exp_summary.voice_artist_ids = ['voice_artist_id']
self.exp_summary.viewer_ids = ['viewer_id']
self.exp_summary.contributor_ids = ['contributor_id']
def test_validation_passes_with_valid_properties(self):
self.exp_summary.validate()
def test_validation_fails_with_invalid_title(self):
self.exp_summary.title = 0
with self.assertRaisesRegex(
utils.ValidationError,
'Expected title to be a string, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_category(self):
self.exp_summary.category = 0
with self.assertRaisesRegex(
utils.ValidationError,
'Expected category to be a string, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_objective(self):
self.exp_summary.objective = 0
with self.assertRaisesRegex(
utils.ValidationError,
'Expected objective to be a string, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_language_code(self):
self.exp_summary.language_code = 0
with self.assertRaisesRegex(
utils.ValidationError,
'Expected language_code to be a string, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_unallowed_language_code(self):
self.exp_summary.language_code = 'invalid'
with self.assertRaisesRegex(
utils.ValidationError, 'Invalid language_code: invalid'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_tags(self):
self.exp_summary.tags = 'tags'
with self.assertRaisesRegex(
utils.ValidationError,
'Expected \'tags\' to be a list, received tags'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_tag_in_tags(self):
self.exp_summary.tags = ['tag', 2]
with self.assertRaisesRegex(
utils.ValidationError,
'Expected each tag in \'tags\' to be a string, received \'2\''):
self.exp_summary.validate()
def test_validation_fails_with_empty_tag_in_tags(self):
self.exp_summary.tags = ['', 'abc']
with self.assertRaisesRegex(
utils.ValidationError, 'Tags should be non-empty'):
self.exp_summary.validate()
def test_validation_fails_with_unallowed_characters_in_tag(self):
self.exp_summary.tags = ['123', 'abc']
with self.assertRaisesRegex(
utils.ValidationError, (
'Tags should only contain lowercase '
'letters and spaces, received \'123\'')):
self.exp_summary.validate()
def test_validation_fails_with_whitespace_in_tag_start(self):
self.exp_summary.tags = [' ab', 'abc']
with self.assertRaisesRegex(
utils.ValidationError,
'Tags should not start or end with whitespace, received \' ab\''):
self.exp_summary.validate()
def test_validation_fails_with_whitespace_in_tag_end(self):
self.exp_summary.tags = ['ab ', 'abc']
with self.assertRaisesRegex(
utils.ValidationError,
'Tags should not start or end with whitespace, received \'ab \''):
self.exp_summary.validate()
def test_validation_fails_with_adjacent_whitespace_in_tag(self):
self.exp_summary.tags = ['a b', 'abc']
with self.assertRaisesRegex(
utils.ValidationError, (
'Adjacent whitespace in tags should '
'be collapsed, received \'a b\'')):
self.exp_summary.validate()
def test_validation_fails_with_duplicate_tags(self):
self.exp_summary.tags = ['abc', 'abc', 'ab']
with self.assertRaisesRegex(
utils.ValidationError, 'Some tags duplicate each other'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_rating_type(self):
self.exp_summary.ratings = 0
with self.assertRaisesRegex(
utils.ValidationError, 'Expected ratings to be a dict, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_rating_keys(self):
self.exp_summary.ratings = {'1': 0, '10': 1}
with self.assertRaisesRegex(
utils.ValidationError,
'Expected ratings to have keys: 1, 2, 3, 4, 5, received 1, 10'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_value_type_for_ratings(self):
self.exp_summary.ratings = {'1': 0, '2': 'one', '3': 0, '4': 0, '5': 0}
with self.assertRaisesRegex(
utils.ValidationError, 'Expected value to be int, received one'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_value_for_ratings(self):
self.exp_summary.ratings = {'1': 0, '2': -1, '3': 0, '4': 0, '5': 0}
with self.assertRaisesRegex(
utils.ValidationError,
'Expected value to be non-negative, received -1'):
self.exp_summary.validate()
def test_validation_passes_with_int_scaled_average_rating(self):
self.exp_summary.scaled_average_rating = 1
self.exp_summary.validate()
self.assertEqual(self.exp_summary.scaled_average_rating, 1)
def test_validation_fails_with_invalid_scaled_average_rating(self):
self.exp_summary.scaled_average_rating = 'one'
with self.assertRaisesRegex(
utils.ValidationError,
'Expected scaled_average_rating to be float, received one'
):
self.exp_summary.validate()
def test_validation_fails_with_invalid_status(self):
self.exp_summary.status = 0
with self.assertRaisesRegex(
utils.ValidationError, 'Expected status to be string, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_community_owned(self):
self.exp_summary.community_owned = '1'
with self.assertRaisesRegex(
utils.ValidationError,
'Expected community_owned to be bool, received 1'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_contributors_summary(self):
self.exp_summary.contributors_summary = 0
with self.assertRaisesRegex(
utils.ValidationError,
'Expected contributors_summary to be dict, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_owner_ids_type(self):
self.exp_summary.owner_ids = 0
with self.assertRaisesRegex(
utils.ValidationError, 'Expected owner_ids to be list, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_owner_id_in_owner_ids(self):
self.exp_summary.owner_ids = ['1', 2, '3']
with self.assertRaisesRegex(
utils.ValidationError,
'Expected each id in owner_ids to be string, received 2'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_editor_ids_type(self):
self.exp_summary.editor_ids = 0
with self.assertRaisesRegex(
utils.ValidationError,
'Expected editor_ids to be list, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_editor_id_in_editor_ids(self):
self.exp_summary.editor_ids = ['1', 2, '3']
with self.assertRaisesRegex(
utils.ValidationError,
'Expected each id in editor_ids to be string, received 2'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_voice_artist_ids_type(self):
self.exp_summary.voice_artist_ids = 0
with self.assertRaisesRegex(
utils.ValidationError,
'Expected voice_artist_ids to be list, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_voice_artist_id_in_voice_artists_ids(
self):
self.exp_summary.voice_artist_ids = ['1', 2, '3']
with self.assertRaisesRegex(
utils.ValidationError,
'Expected each id in voice_artist_ids to be string, received 2'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_viewer_ids_type(self):
self.exp_summary.viewer_ids = 0
with self.assertRaisesRegex(
utils.ValidationError,
'Expected viewer_ids to be list, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_viewer_id_in_viewer_ids(self):
self.exp_summary.viewer_ids = ['1', 2, '3']
with self.assertRaisesRegex(
utils.ValidationError,
'Expected each id in viewer_ids to be string, received 2'):
self.exp_summary.validate()
def test_validation_fails_with_duplicate_user_role(self):
self.exp_summary.owner_ids = ['1']
self.exp_summary.editor_ids = ['2', '3']
self.exp_summary.voice_artist_ids = ['4']
self.exp_summary.viewer_ids = ['2']
with self.assertRaisesRegex(
utils.ValidationError, (
'Users should not be assigned to multiple roles at once, '
'received users: 1, 2, 3, 4, 2')
):
self.exp_summary.validate()
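# A hedged sketch of how this cross-role duplicate check could be
# expressed (illustrative, not the validator's actual code): concatenate
# every role's id list and compare against its deduplicated set.
#
#     all_user_ids = (
#         owner_ids + editor_ids + voice_artist_ids + viewer_ids)
#     has_duplicates = len(all_user_ids) != len(set(all_user_ids))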
def test_validation_fails_with_invalid_contributor_ids_type(self):
self.exp_summary.contributor_ids = 0
with self.assertRaisesRegex(
utils.ValidationError,
'Expected contributor_ids to be list, received 0'):
self.exp_summary.validate()
def test_validation_fails_with_invalid_contributor_id_in_contributor_ids(
self):
self.exp_summary.contributor_ids = ['1', 2, '3']
with self.assertRaisesRegex(
utils.ValidationError,
'Expected each id in contributor_ids to be string, received 2'):
self.exp_summary.validate()
def test_is_private(self):
self.assertTrue(self.exp_summary.is_private())
self.exp_summary.status = constants.ACTIVITY_STATUS_PUBLIC
self.assertFalse(self.exp_summary.is_private())
def test_is_solely_owned_by_user_one_owner(self):
self.assertTrue(self.exp_summary.is_solely_owned_by_user(self.owner_id))
self.assertFalse(self.exp_summary.is_solely_owned_by_user('other_id'))
self.exp_summary.owner_ids = ['other_id']
self.assertFalse(
self.exp_summary.is_solely_owned_by_user(self.owner_id))
self.assertTrue(self.exp_summary.is_solely_owned_by_user('other_id'))
def test_is_solely_owned_by_user_multiple_owners(self):
self.assertTrue(self.exp_summary.is_solely_owned_by_user(self.owner_id))
self.assertFalse(self.exp_summary.is_solely_owned_by_user('other_id'))
self.exp_summary.owner_ids = [self.owner_id, 'other_id']
self.assertFalse(
self.exp_summary.is_solely_owned_by_user(self.owner_id))
self.assertFalse(self.exp_summary.is_solely_owned_by_user('other_id'))
def test_is_solely_owned_by_user_other_users(self):
self.assertFalse(self.exp_summary.is_solely_owned_by_user('editor_id'))
self.assertFalse(
self.exp_summary.is_solely_owned_by_user('voice_artist_id'))
self.assertFalse(self.exp_summary.is_solely_owned_by_user('viewer_id'))
self.assertFalse(
self.exp_summary.is_solely_owned_by_user('contributor_id'))
def test_add_new_contribution_for_user_adds_user_to_contributors(self):
self.exp_summary.add_contribution_by_user('user_id')
self.assertIn('user_id', self.exp_summary.contributors_summary)
self.assertEqual(self.exp_summary.contributors_summary['user_id'], 1)
self.assertIn('user_id', self.exp_summary.contributor_ids)
def test_add_new_contribution_for_user_increases_score_in_contributors(
self):
self.exp_summary.add_contribution_by_user('user_id')
self.exp_summary.add_contribution_by_user('user_id')
self.assertIn('user_id', self.exp_summary.contributors_summary)
self.assertEqual(self.exp_summary.contributors_summary['user_id'], 2)
def test_add_new_contribution_for_user_does_not_add_system_user(self):
self.exp_summary.add_contribution_by_user(
feconf.SYSTEM_COMMITTER_ID)
self.assertNotIn(
feconf.SYSTEM_COMMITTER_ID, self.exp_summary.contributors_summary)
self.assertNotIn(
feconf.SYSTEM_COMMITTER_ID, self.exp_summary.contributor_ids)
def test_metadata_dict(self):
self.exp_summary.id = 0
self.exp_summary.title = 'title'
self.exp_summary.objective = 'ob'
mydict = self.exp_summary.to_metadata_dict()
self.assertEqual(mydict, {
'id': 0,
'title': 'title',
'objective': 'ob'
})
def test_does_user_have_any_role(self):
self.exp_summary.owner_ids = ['0', '1']
self.assertEqual(self.exp_summary.does_user_have_any_role('2'), False)
self.assertEqual(self.exp_summary.does_user_have_any_role('0'), True)
class YamlCreationUnitTests(test_utils.GenericTestBase):
"""Test creation of explorations from YAML files."""
YAML_CONTENT_INVALID_SCHEMA_VERSION = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 10000
states:
(untitled state):
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
contentId: rule_input_3
normalizedStrSet:
- InputString
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_2
unicode_str: ''
rows:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_0
unicode_str: ''
rows:
value: 1
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 10000
tags: []
title: Title
""")
EXP_ID = 'An exploration_id'
def test_creation_with_invalid_yaml_schema_version(self):
"""Test that a schema version that is too big is detected."""
with self.assertRaisesRegex(
Exception,
'Sorry, we can only process v46 to v[0-9]+ exploration YAML files '
'at present.'):
exp_domain.Exploration.from_yaml(
'bad_exp', self.YAML_CONTENT_INVALID_SCHEMA_VERSION)
def test_yaml_import_and_export(self):
"""Test the from_yaml() and to_yaml() methods."""
exploration = exp_domain.Exploration.create_default_exploration(
self.EXP_ID, title='Title', category='Category')
exploration.add_states(['New state'])
self.assertEqual(len(exploration.states), 2)
exploration.validate()
yaml_content = exploration.to_yaml()
self.assertEqual(yaml_content, self.SAMPLE_YAML_CONTENT)
exploration2 = exp_domain.Exploration.from_yaml('exp2', yaml_content)
self.assertEqual(len(exploration2.states), 2)
yaml_content_2 = exploration2.to_yaml()
self.assertEqual(yaml_content_2, yaml_content)
with self.assertRaisesRegex(
Exception, 'Please ensure that you are uploading a YAML text file, '
'not a zip file. The YAML parser returned the following error: '):
exp_domain.Exploration.from_yaml('exp3', 'No_initial_state_name')
with self.assertRaisesRegex(
Exception,
'Please ensure that you are uploading a YAML text file, not a zip'
' file. The YAML parser returned the following error: mapping '
'values are not allowed here'):
exp_domain.Exploration.from_yaml(
'exp4', 'Invalid\ninit_state_name:\nMore stuff')
with self.assertRaisesRegex(
Exception,
'Please ensure that you are uploading a YAML text file, not a zip'
' file. The YAML parser returned the following error: while '
'scanning a simple key'):
exp_domain.Exploration.from_yaml(
'exp4', 'State1:\n(\nInvalid yaml')
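# A minimal sketch of the failure mode exercised above: feeding non-YAML
# bytes (e.g. a zip archive) to a YAML parser raises a parse error, which
# from_yaml() surfaces with the friendlier message asserted in the tests.
# The direct use of the `yaml` package below is an illustration, not
# necessarily how from_yaml() is implemented internally.
#
#     import yaml
#
#     try:
#         yaml.safe_load('State1:\n(\nInvalid yaml')
#     except yaml.YAMLError as e:
#         print('YAML parser error:', e)  # e.g. "while scanning a simple key"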
class SchemaMigrationMethodsUnitTests(test_utils.GenericTestBase):
"""Tests the presence of appropriate schema migration methods in the
Exploration domain object class.
"""
def test_correct_states_schema_conversion_methods_exist(self):
"""Test that the right states schema conversion methods exist."""
current_states_schema_version = (
feconf.CURRENT_STATE_SCHEMA_VERSION)
for version_num in range(
feconf.EARLIEST_SUPPORTED_STATE_SCHEMA_VERSION,
current_states_schema_version):
self.assertTrue(hasattr(
exp_domain.Exploration,
'_convert_states_v%s_dict_to_v%s_dict' % (
version_num, version_num + 1)))
self.assertFalse(hasattr(
exp_domain.Exploration,
'_convert_states_v%s_dict_to_v%s_dict' % (
current_states_schema_version,
current_states_schema_version + 1)))
def test_correct_exploration_schema_conversion_methods_exist(self):
"""Test that the right exploration schema conversion methods exist."""
current_exp_schema_version = (
exp_domain.Exploration.CURRENT_EXP_SCHEMA_VERSION)
for version_num in range(
exp_domain.Exploration.EARLIEST_SUPPORTED_EXP_SCHEMA_VERSION,
current_exp_schema_version):
self.assertTrue(hasattr(
exp_domain.Exploration,
'_convert_v%s_dict_to_v%s_dict' % (
version_num, version_num + 1)))
self.assertFalse(hasattr(
exp_domain.Exploration,
'_convert_v%s_dict_to_v%s_dict' % (
current_exp_schema_version, current_exp_schema_version + 1)))
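# Illustrative sketch (an assumption for exposition; not used by the
# tests themselves) of how the per-version conversion methods verified
# above chain together during a migration. Each hop follows the
# _convert_vN_dict_to_v{N+1}_dict naming convention that the assertions
# rely on, so the full migration is just a loop over getattr lookups.
def _apply_exp_schema_migrations_sketch(exp_dict, from_version, to_version):
    """Applies the versioned converters in sequence, one hop at a time."""
    for version_num in range(from_version, to_version):
        converter = getattr(
            exp_domain.Exploration,
            '_convert_v%s_dict_to_v%s_dict' % (version_num, version_num + 1))
        exp_dict = converter(exp_dict)
    return exp_dict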
class SchemaMigrationUnitTests(test_utils.GenericTestBase):
"""Test migration methods for yaml content."""
YAML_CONTENT_V46 = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 46
states:
(untitled state):
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
contentId: rule_input_3
normalizedStrSet:
- InputString
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_2
unicode_str: ''
rows:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_0
unicode_str: ''
rows:
value: 1
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 41
tags: []
title: Title
""")
YAML_CONTENT_V47 = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 47
states:
(untitled state):
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
contentId: rule_input_3
normalizedStrSet:
- InputString
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_2
unicode_str: ''
rows:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: NumericExpressionInput
solution: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_0
unicode_str: ''
rows:
value: 1
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 42
tags: []
title: Title
""")
YAML_CONTENT_V48 = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 48
states:
(untitled state):
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
contentId: rule_input_3
normalizedStrSet:
- InputString
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_2
unicode_str: ''
rows:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_0
unicode_str: ''
rows:
value: 1
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 43
tags: []
title: Title
""")
YAML_CONTENT_V49 = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 49
states:
(untitled state):
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
contentId: rule_input_3
normalizedStrSet:
- InputString
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_2
unicode_str: ''
rows:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
card_is_checkpoint: false
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_0
unicode_str: ''
rows:
value: 1
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 44
tags: []
title: Title
""")
YAML_CONTENT_V50 = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 50
states:
(untitled state):
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
contentId: rule_input_3
normalizedStrSet:
- InputString
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_2
unicode_str: ''
rows:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
card_is_checkpoint: false
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_0
unicode_str: ''
rows:
value: 1
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
linked_skill_id: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 45
tags: []
title: Title
""")
YAML_CONTENT_V51 = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 51
states:
(untitled state):
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
contentId: rule_input_3
normalizedStrSet:
- InputString
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_2
unicode_str: ''
rows:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
card_is_checkpoint: false
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_0
unicode_str: ''
rows:
value: 1
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
linked_skill_id: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 46
tags: []
title: Title
""")
YAML_CONTENT_V52 = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 52
states:
(untitled state):
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
contentId: rule_input_3
normalizedStrSet:
- InputString
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_2
unicode_str: ''
rows:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
card_is_checkpoint: false
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_0
unicode_str: ''
rows:
value: 1
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
linked_skill_id: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 47
tags: []
title: Title
""")
YAML_CONTENT_V53 = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 53
states:
(untitled state):
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
contentId: rule_input_3
normalizedStrSet:
- InputString
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_2
unicode_str: ''
rows:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: TextInput
solution: null
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
card_is_checkpoint: false
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
placeholder:
value:
content_id: ca_placeholder_0
unicode_str: ''
rows:
value: 1
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: NumericInput
solution: null
linked_skill_id: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 48
tags: []
title: Title
""")
YAML_CONTENT_V54 = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 54
states:
(untitled state):
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x: 6
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
requireNonnegativeInput:
value: False
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: NumericInput
solution: null
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_2: {}
content: {}
default_outcome: {}
feedback_1: {}
rule_input_3: {}
END:
card_is_checkpoint: false
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
New state:
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
requireNonnegativeInput:
value: False
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: NumericInput
solution: null
linked_skill_id: null
next_content_id_index: 1
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_placeholder_0: {}
content: {}
default_outcome: {}
states_schema_version: 49
tags: []
title: Title
""")
_LATEST_YAML_CONTENT = YAML_CONTENT_V54
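# All the migration tests below share one round-trip pattern: load a YAML
# snapshot written at an old schema version, then assert that serialising
# it again yields the latest-schema YAML. A minimal sketch of that pattern
# (the helper name is hypothetical; the tests inline these steps):
def _assert_migrates_to_latest(self, old_yaml, expected_latest_yaml):
    """Hypothetical helper mirroring the from_yaml/to_yaml round trip."""
    exploration = exp_domain.Exploration.from_yaml('eid', old_yaml)
    self.assertEqual(exploration.to_yaml(), expected_latest_yaml)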
def test_load_from_v46_with_item_selection_input_interaction(self):
"""Tests the migration of ItemSelectionInput rule inputs."""
sample_yaml_content = (
"""author_notes: ''
auto_tts_enabled: false
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 46
states:
(untitled state):
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
- <p>Choice 1</p>
- <p>Choice 2</p>
- <p>Choice Invalid</p>
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
choices:
value:
- content_id: ca_choices_2
html: <p>Choice 1</p>
- content_id: ca_choices_3
html: <p>Choice 2</p>
maxAllowableSelectionCount:
value: 2
minAllowableSelectionCount:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: ItemSelectionInput
solution:
answer_is_exclusive: true
correct_answer:
- <p>Choice 1</p>
explanation:
content_id: solution
html: This is <i>solution</i> for state1
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_choices_2: {}
ca_choices_3: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_choices_2: {}
ca_choices_3: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
END:
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
states_schema_version: 41
tags: []
title: Title
""")
latest_sample_yaml_content = (
"""author_notes: ''
auto_tts_enabled: false
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 54
states:
(untitled state):
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
- ca_choices_2
- ca_choices_3
- invalid_content_id
rule_type: Equals
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
choices:
value:
- content_id: ca_choices_2
html: <p>Choice 1</p>
- content_id: ca_choices_3
html: <p>Choice 2</p>
maxAllowableSelectionCount:
value: 2
minAllowableSelectionCount:
value: 1
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: ItemSelectionInput
solution:
answer_is_exclusive: true
correct_answer:
- ca_choices_2
explanation:
content_id: solution
html: This is <i>solution</i> for state1
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_choices_2: {}
ca_choices_3: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_choices_2: {}
ca_choices_3: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
END:
card_is_checkpoint: false
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
states_schema_version: 49
tags: []
title: Title
""")
exploration = exp_domain.Exploration.from_yaml(
'eid', sample_yaml_content)
self.assertEqual(exploration.to_yaml(), latest_sample_yaml_content)
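# The expected YAML above captures the essence of this migration: HTML rule
# inputs are replaced by the content ids of the matching choices, and any
# value without a matching choice becomes 'invalid_content_id'. A minimal
# standalone sketch of that substitution (names are illustrative; the real
# converter lives in the states-schema migration functions):
def _item_selection_inputs_to_content_ids(rule_values, choices):
    html_to_content_id = {c['html']: c['content_id'] for c in choices}
    return [
        html_to_content_id.get(html, 'invalid_content_id')
        for html in rule_values
    ]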
def test_load_from_v46_with_drag_and_drop_sort_input_interaction(self):
"""Tests the migration of DragAndDropSortInput rule inputs."""
sample_yaml_content = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 46
states:
(untitled state):
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
- - <p>Choice 1</p>
- <p>Choice 2</p>
rule_type: IsEqualToOrdering
- inputs:
x:
- - <p>Choice 1</p>
rule_type: IsEqualToOrderingWithOneItemAtIncorrectPosition
- inputs:
x: <p>Choice 1</p>
y: 1
rule_type: HasElementXAtPositionY
- inputs:
x: <p>Choice 1</p>
y: <p>Choice 2</p>
rule_type: HasElementXBeforeElementY
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
allowMultipleItemsInSamePosition:
value: true
choices:
value:
- content_id: ca_choices_2
html: <p>Choice 1</p>
- content_id: ca_choices_3
html: <p>Choice 2</p>
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: DragAndDropSortInput
solution:
answer_is_exclusive: true
correct_answer:
- - <p>Choice 1</p>
- <p>Choice 2</p>
explanation:
content_id: solution
html: This is <i>solution</i> for state1
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_choices_2: {}
ca_choices_3: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_choices_2: {}
ca_choices_3: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
END:
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
states_schema_version: 41
tags: []
title: Title
""")
latest_sample_yaml_content = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 54
states:
(untitled state):
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups:
- outcome:
dest: END
feedback:
content_id: feedback_1
html: <p>Correct!</p>
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
rule_specs:
- inputs:
x:
- - ca_choices_2
- ca_choices_3
rule_type: IsEqualToOrdering
- inputs:
x:
- - ca_choices_2
rule_type: IsEqualToOrderingWithOneItemAtIncorrectPosition
- inputs:
x: ca_choices_2
y: 1
rule_type: HasElementXAtPositionY
- inputs:
x: ca_choices_2
y: ca_choices_3
rule_type: HasElementXBeforeElementY
tagged_skill_misconception_id: null
training_data: []
confirmed_unclassified_answers: []
customization_args:
allowMultipleItemsInSamePosition:
value: true
choices:
value:
- content_id: ca_choices_2
html: <p>Choice 1</p>
- content_id: ca_choices_3
html: <p>Choice 2</p>
default_outcome:
dest: (untitled state)
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: DragAndDropSortInput
solution:
answer_is_exclusive: true
correct_answer:
- - ca_choices_2
- ca_choices_3
explanation:
content_id: solution
html: This is <i>solution</i> for state1
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_choices_2: {}
ca_choices_3: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_choices_2: {}
ca_choices_3: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
END:
card_is_checkpoint: false
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
states_schema_version: 49
tags: []
title: Title
""")
exploration = exp_domain.Exploration.from_yaml(
'eid', sample_yaml_content)
self.assertEqual(exploration.to_yaml(), latest_sample_yaml_content)
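# DragAndDropSortInput rule inputs get the same HTML-to-content-id
# substitution, but the ordering rules ('IsEqualToOrdering' and its
# one-item-at-incorrect-position variant) take a list of lists, where each
# inner list holds the items sharing a position. An illustrative sketch:
def _ordering_inputs_to_content_ids(rule_x, html_to_content_id):
    return [
        [html_to_content_id[html] for html in position]
        for position in rule_x
    ]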
def test_load_from_v46_with_invalid_unicode_written_translations(self):
"""Tests the migration of unicode written translations rule inputs."""
sample_yaml_content = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 46
states:
(untitled state):
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
buttonText:
value:
content_id: ca_buttonText
unicode_str: Continue
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: Continue
solution: null
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_buttonText: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_buttonText:
bn:
data_format: html
needs_update: false
translation: <p>hello</p>
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
END:
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
states_schema_version: 41
tags: []
title: Title
""")
latest_sample_yaml_content = (
"""author_notes: ''
auto_tts_enabled: true
blurb: ''
category: Category
correctness_feedback_enabled: false
init_state_name: (untitled state)
language_code: en
objective: ''
param_changes: []
param_specs: {}
schema_version: 54
states:
(untitled state):
card_is_checkpoint: true
classifier_model_id: null
content:
content_id: content
html: ''
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
buttonText:
value:
content_id: ca_buttonText
unicode_str: Continue
default_outcome:
dest: END
feedback:
content_id: default_outcome
html: ''
labelled_as_correct: false
missing_prerequisite_skill_id: null
param_changes: []
refresher_exploration_id: null
hints: []
id: Continue
solution: null
linked_skill_id: null
next_content_id_index: 4
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
ca_buttonText: {}
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
solicit_answer_details: false
written_translations:
translations_mapping:
ca_buttonText:
bn:
data_format: unicode
needs_update: false
translation: hello
content: {}
default_outcome: {}
feedback_1: {}
solution: {}
END:
card_is_checkpoint: false
classifier_model_id: null
content:
content_id: content
html: <p>Congratulations, you have finished!</p>
interaction:
answer_groups: []
confirmed_unclassified_answers: []
customization_args:
recommendedExplorationIds:
value: []
default_outcome: null
hints: []
id: EndExploration
solution: null
linked_skill_id: null
next_content_id_index: 0
param_changes: []
recorded_voiceovers:
voiceovers_mapping:
content: {}
solicit_answer_details: false
written_translations:
translations_mapping:
content: {}
states_schema_version: 49
tags: []
title: Title
""")
exploration = exp_domain.Exploration.from_yaml(
'eid', sample_yaml_content)
self.assertEqual(exploration.to_yaml(), latest_sample_yaml_content)
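# The expected YAML above shows what this migration does: when a written
# translation targets a unicode customization arg (here the Continue
# button's buttonText), its data_format flips from 'html' to 'unicode' and
# the markup is stripped ('<p>hello</p>' becomes 'hello'). A rough sketch
# of that conversion, assuming a simple tag-stripping step (the actual
# migration may normalise the text further):
import re

def _html_translation_to_unicode(translation_dict):
    return {
        'data_format': 'unicode',
        'needs_update': translation_dict['needs_update'],
        'translation': re.sub(
            r'<[^>]+>', '', translation_dict['translation']),
    }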
def test_yaml_v48_to_49_interaction_id_special_case(self):
exp_domain.Exploration.from_yaml('eid', self.YAML_CONTENT_V53)
def test_yaml_v42_to_43_interaction_id_special_case(self):
exp_domain.Exploration.from_yaml('eid', self.YAML_CONTENT_V47)
class ConversionUnitTests(test_utils.GenericTestBase):
"""Test conversion methods."""
def test_convert_exploration_to_player_dict(self):
exp_title = 'Title'
second_state_name = 'first state'
exploration = exp_domain.Exploration.create_default_exploration(
'eid', title=exp_title, category='Category')
exploration.add_states([second_state_name])
def _get_default_state_dict(content_str, dest_name, is_init_state):
"""Gets the default state dict of the exploration."""
return {
'linked_skill_id': None,
'next_content_id_index': 0,
'classifier_model_id': None,
'content': {
'content_id': 'content',
'html': content_str,
},
'recorded_voiceovers': {
'voiceovers_mapping': {
'content': {},
'default_outcome': {}
}
},
'solicit_answer_details': False,
'card_is_checkpoint': is_init_state,
'written_translations': {
'translations_mapping': {
'content': {},
'default_outcome': {}
}
},
'interaction': {
'answer_groups': [],
'confirmed_unclassified_answers': [],
'customization_args': {},
'default_outcome': {
'dest': dest_name,
'feedback': {
'content_id': feconf.DEFAULT_OUTCOME_CONTENT_ID,
'html': ''
},
'labelled_as_correct': False,
'param_changes': [],
'refresher_exploration_id': None,
'missing_prerequisite_skill_id': None
},
'hints': [],
'id': None,
'solution': None,
},
'param_changes': [],
}
self.assertEqual(exploration.to_player_dict(), {
'init_state_name': feconf.DEFAULT_INIT_STATE_NAME,
'title': exp_title,
'objective': feconf.DEFAULT_EXPLORATION_OBJECTIVE,
'states': {
feconf.DEFAULT_INIT_STATE_NAME: _get_default_state_dict(
feconf.DEFAULT_INIT_STATE_CONTENT_STR,
feconf.DEFAULT_INIT_STATE_NAME, True),
second_state_name: _get_default_state_dict(
'', second_state_name, False),
},
'param_changes': [],
'param_specs': {},
'language_code': 'en',
'correctness_feedback_enabled': True,
})
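# Illustrative follow-up (hypothetical consumer code): the player dict is
# a plain nested dict, so the initial state's content can be read straight
# out of the serialised form, e.g.:
#
#     player_dict = exploration.to_player_dict()
#     init_state = player_dict['states'][player_dict['init_state_name']]
#     init_html = init_state['content']['html']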
class StateOperationsUnitTests(test_utils.GenericTestBase):
"""Test methods operating on states."""
def test_delete_state(self):
"""Test deletion of states."""
exploration = exp_domain.Exploration.create_default_exploration('eid')
exploration.add_states(['first state'])
with self.assertRaisesRegex(
ValueError, 'Cannot delete initial state'
):
exploration.delete_state(exploration.init_state_name)
exploration.add_states(['second state'])
exploration.delete_state('second state')
with self.assertRaisesRegex(ValueError, 'fake state does not exist'):
exploration.delete_state('fake state')
def test_no_duplicate_states(self):
exp = exp_domain.Exploration.create_default_exploration('eid')
exp.add_states(['first state'])
with self.assertRaisesRegex(
ValueError,
'Duplicate state name first state'):
exp.add_states(['first state'])
def test_no_duplicate_renamed_states(self):
exp = exp_domain.Exploration.create_default_exploration(
'eid')
exp.add_states(['first state', 'second state'])
with self.assertRaisesRegex(
ValueError,
'Duplicate state name: first state'):
exp.rename_state('second state', 'first state')
class HtmlCollectionTests(test_utils.GenericTestBase):
"""Test method to obtain all html strings."""
def test_all_html_strings_are_collected(self):
exploration = exp_domain.Exploration.create_default_exploration(
'eid', title='title', category='category')
exploration.add_states(['state1', 'state2', 'state3', 'state4'])
state1 = exploration.states['state1']
state2 = exploration.states['state2']
state3 = exploration.states['state3']
state4 = exploration.states['state4']
content1_dict = {
'content_id': 'content',
'html': '<blockquote>Hello, this is state1</blockquote>'
}
content2_dict = {
'content_id': 'content',
'html': '<pre>Hello, this is state2</pre>'
}
content3_dict = {
'content_id': 'content',
'html': '<p>Hello, this is state3</p>'
}
content4_dict = {
'content_id': 'content',
'html': '<p>Hello, this is state4</p>'
}
state1.update_content(
state_domain.SubtitledHtml.from_dict(content1_dict))
state2.update_content(
state_domain.SubtitledHtml.from_dict(content2_dict))
state3.update_content(
state_domain.SubtitledHtml.from_dict(content3_dict))
state4.update_content(
state_domain.SubtitledHtml.from_dict(content4_dict))
self.set_interaction_for_state(state1, 'TextInput')
self.set_interaction_for_state(state2, 'MultipleChoiceInput')
self.set_interaction_for_state(state3, 'ItemSelectionInput')
self.set_interaction_for_state(state4, 'DragAndDropSortInput')
customization_args_dict1 = {
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': 'Enter here.'
}
},
'rows': {'value': 1}
}
customization_args_dict2 = {
'choices': {'value': [
{
'content_id': 'ca_choices_0',
'html': '<p>This is value1 for MultipleChoice</p>'
},
{
'content_id': 'ca_choices_1',
'html': '<p>This is value2 for MultipleChoice</p>'
}
]},
'showChoicesInShuffledOrder': {'value': True}
}
customization_args_dict3 = {
'choices': {'value': [
{
'content_id': 'ca_choices_0',
'html': '<p>This is value1 for ItemSelection</p>'
},
{
'content_id': 'ca_choices_1',
'html': '<p>This is value2 for ItemSelection</p>'
},
{
'content_id': 'ca_choices_2',
'html': '<p>This is value3 for ItemSelection</p>'
}
]},
'minAllowableSelectionCount': {'value': 1},
'maxAllowableSelectionCount': {'value': 2}
}
customization_args_dict4 = {
'choices': {'value': [
{
'content_id': 'ca_choices_0',
'html': '<p>This is value1 for DragAndDropSortInput</p>'
},
{
'content_id': 'ca_choices_1',
'html': '<p>This is value2 for DragAndDropSortInput</p>'
}
]},
'allowMultipleItemsInSamePosition': {'value': True}
}
state1.update_interaction_customization_args(customization_args_dict1)
state2.update_interaction_customization_args(customization_args_dict2)
state3.update_interaction_customization_args(customization_args_dict3)
state4.update_interaction_customization_args(customization_args_dict4)
default_outcome = state_domain.Outcome(
'state2', state_domain.SubtitledHtml(
'default_outcome', '<p>Default outcome for state1</p>'),
False, [], None, None
)
state1.update_interaction_default_outcome(default_outcome)
hint_list2 = [
state_domain.Hint(
state_domain.SubtitledHtml(
'hint_1', '<p>Hello, this is html1 for state2</p>'
)
),
state_domain.Hint(
state_domain.SubtitledHtml(
'hint_2', '<p>Hello, this is html2 for state2</p>'
)
),
]
state2.update_interaction_hints(hint_list2)
solution_dict = {
'interaction_id': '',
'answer_is_exclusive': True,
'correct_answer': 'Answer1',
'explanation': {
'content_id': 'solution',
'html': '<p>This is solution for state1</p>'
}
}
solution = state_domain.Solution.from_dict(
state1.interaction.id, solution_dict)
state1.update_interaction_solution(solution)
state_answer_group_list2 = [
state_domain.AnswerGroup(
state_domain.Outcome(
'state1', state_domain.SubtitledHtml(
'feedback_1', '<p>Outcome2 for state2</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Equals',
{
'x': 0
}),
state_domain.RuleSpec(
'Equals',
{
'x': 1
})
],
[],
None),
state_domain.AnswerGroup(
state_domain.Outcome(
'state3', state_domain.SubtitledHtml(
'feedback_2', '<p>Outcome1 for state2</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Equals',
{
'x': 0
})
],
[],
None
)]
state_answer_group_list3 = [state_domain.AnswerGroup(
state_domain.Outcome(
'state1', state_domain.SubtitledHtml(
'feedback_1', '<p>Outcome for state3</p>'),
False, [], None, None),
[
state_domain.RuleSpec(
'Equals',
{
'x': ['ca_choices_0']
}),
state_domain.RuleSpec(
'Equals',
{
'x': ['ca_choices_2']
})
],
[],
None
)]
state2.update_interaction_answer_groups(state_answer_group_list2)
state3.update_interaction_answer_groups(state_answer_group_list3)
expected_html_list = [
'',
'',
'<pre>Hello, this is state2</pre>',
'<p>Outcome1 for state2</p>',
'<p>Outcome2 for state2</p>',
'',
'<p>Hello, this is html1 for state2</p>',
'<p>Hello, this is html2 for state2</p>',
'<p>This is value1 for MultipleChoice</p>',
'<p>This is value2 for MultipleChoice</p>',
'<blockquote>Hello, this is state1</blockquote>',
'<p>Default outcome for state1</p>',
'<p>This is solution for state1</p>',
'<p>Hello, this is state3</p>',
'<p>Outcome for state3</p>',
'',
'<p>This is value1 for ItemSelection</p>',
'<p>This is value2 for ItemSelection</p>',
'<p>This is value3 for ItemSelection</p>',
'<p>Hello, this is state4</p>',
'',
'<p>This is value1 for DragAndDropSortInput</p>',
'<p>This is value2 for DragAndDropSortInput</p>'
]
actual_outcome_list = exploration.get_all_html_content_strings()
self.assertItemsEqual(actual_outcome_list, expected_html_list)
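# A rough sketch of the per-state traversal the expected list above implies
# (illustrative only; the real collector lives in exp_domain/state_domain
# and also covers customization-arg choices and other HTML-bearing fields):
def _collect_state_html(state):
    html_strings = [state.content.html]
    for answer_group in state.interaction.answer_groups:
        html_strings.append(answer_group.outcome.feedback.html)
    if state.interaction.default_outcome is not None:
        html_strings.append(state.interaction.default_outcome.feedback.html)
    for hint in state.interaction.hints:
        html_strings.append(hint.hint_content.html)
    if state.interaction.solution is not None:
        html_strings.append(state.interaction.solution.explanation.html)
    return html_strings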
class ExplorationChangesMergeabilityUnitTests(
exp_services_test.ExplorationServicesUnitTests,
test_utils.EmailTestBase):
"""Test methods related to exploration changes mergeability."""
def test_changes_are_mergeable_when_content_changes_do_not_conflict(self):
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
change_list = [exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_EXPLORATION_PROPERTY,
'property_name': 'title',
'new_value': 'First title'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list, 'Changed title.')
# Making changes to properties other than content.
change_list_2 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': None,
'old_value': 'TextInput'
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'new_value': {},
'old_value': {
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
},
'rows': {
'value': 1
}
}
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'new_value': 2,
'old_value': 1
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': 'Continue',
'old_value': None
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'new_value': {
'buttonText': {
'value': {
'content_id': 'ca_buttonText_1',
'unicode_str': 'Continue'
}
}
},
'old_value': {}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2, 'Changed Interaction.')
# Changing the content of the second state.
change_list_3 = [exp_domain.ExplorationChange({
'property_name': 'content',
'state_name': 'End',
'cmd': 'edit_state_property',
'old_value': {
'html': '',
'content_id': 'content'
},
'new_value': {
'html': '<p>Congratulations, you have finished!</p>',
'content_id': 'content'
}
})]
# Checking that the changes are mergeable against the
# latest version as well as against an older version.
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 3, change_list_3)
self.assertEqual(changes_are_mergeable, True)
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_mergeable, True)
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_3,
'Changed content of End state.')
# Renaming the first state back and forth, then changing its content.
change_list_4 = [exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_RENAME_STATE,
'old_state_name': 'Introduction',
'new_state_name': 'Renamed state'
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_RENAME_STATE,
'old_state_name': 'Renamed state',
'new_state_name': 'Renamed state again'
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_RENAME_STATE,
'old_state_name': 'Renamed state again',
'new_state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'property_name': 'content',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': {
'html': '',
'content_id': 'content'
},
'new_value': {
'html': '<p>Hello</p>',
'content_id': 'content'
}
})]
# Checking the mergeability of the fourth change list.
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_4)
self.assertEqual(changes_are_mergeable, True)
# Checking mergeability when working on the latest version.
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 3, change_list_4)
self.assertEqual(changes_are_mergeable, True)
def test_changes_are_not_mergeable_when_content_changes_conflict(self):
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Making changes to the content of the first state.
change_list = [exp_domain.ExplorationChange({
'property_name': 'content',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': {
'html': '',
'content_id': 'content'
},
'new_value': {
'html': '<p>Content 1.</p>',
'content_id': 'content'
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list, 'Changed Content.')
# Changing the content of the same state to check that the
# changes are not mergeable.
change_list_2 = [exp_domain.ExplorationChange({
'property_name': 'content',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': {
'html': '',
'content_id': 'content'
},
'new_value': {
'html': '<p>Content 2.</p>',
'content_id': 'content'
}
})]
# Checking the mergeability of the second change list.
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_2)
self.assertEqual(changes_are_not_mergeable, False)
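# The mergeability checks above come down to one idea: a change list
# drafted against an older version is mergeable only if the intervening
# commits did not touch the same (state, property) targets. A deliberately
# simplified sketch of that conflict test (the real are_changes_mergeable
# also tracks state renames and related properties, such as widget_id
# versus its customization args):
def _naive_changes_conflict(old_change_list, new_change_list):
    touched = {
        (change.state_name, change.property_name)
        for change in old_change_list
        if change.cmd == 'edit_state_property'
    }
    return any(
        (change.state_name, change.property_name) in touched
        for change in new_change_list
        if change.cmd == 'edit_state_property'
    )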
def test_changes_are_mergeable_when_interaction_id_changes_do_not_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Making changes to properties that are not related to
# the interaction id.
change_list_2 = [exp_domain.ExplorationChange({
'new_value': {
'content_id': 'content',
'html': '<p>This is the first state.</p>'
},
'state_name': 'Introduction',
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content'
}), exp_domain.ExplorationChange({
'new_value': [{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>This is a first hint.</p>'
}
}],
'state_name': 'Introduction',
'old_value': [],
'cmd': 'edit_state_property',
'property_name': 'hints'
}), exp_domain.ExplorationChange({
'new_value': 2,
'state_name': 'Introduction',
'old_value': 1,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index'
}), exp_domain.ExplorationChange({
'new_value': [{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>This is a first hint.</p>'
}
}, {
'hint_content': {
'content_id': 'hint_2',
'html': '<p>This is the second hint.</p>'
}
}],
'state_name': 'Introduction',
'old_value': [{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>This is a first hint.</p>'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints'
}), exp_domain.ExplorationChange({
'new_value': {
'content_id': 'content',
'html': '<p>Congratulations, you have finished!</p>'
},
'state_name': 'End',
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Changed Contents and Hint')
# Changes to the properties that affect or are affected by the
# interaction id, as well as to interaction_id itself.
change_list_3 = [exp_domain.ExplorationChange({
'new_value': None,
'state_name': 'Introduction',
'old_value': 'TextInput',
'cmd': 'edit_state_property',
'property_name': 'widget_id'
}), exp_domain.ExplorationChange({
'new_value': {},
'state_name': 'Introduction',
'old_value': {
'rows': {
'value': 1
},
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
}
},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args'
}), exp_domain.ExplorationChange({
'new_value': 2,
'state_name': 'Introduction',
'old_value': 1,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index'
}), exp_domain.ExplorationChange({
'new_value': 'Continue',
'state_name': 'Introduction',
'old_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id'
}), exp_domain.ExplorationChange({
'new_value': {
'buttonText': {
'value': {
'content_id': 'ca_buttonText_1',
'unicode_str': 'Continue'
}
}
},
'state_name': 'Introduction',
'old_value': {},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args'
})]
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_3)
self.assertEqual(changes_are_mergeable, True)
# Creating a second exploration to test the scenario where
# changes to the same properties are made in two
# different states.
self.save_new_valid_exploration(
self.EXP_1_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_1_ID)
# Reusing change_list_3 here because it already covers the
# interaction-related changes in the first state.
exp_services.update_exploration(
self.owner_id, self.EXP_1_ID, change_list_3, 'Changed Interaction')
# Changes related to the interaction in the second state,
# to check for mergeability.
change_list_4 = [exp_domain.ExplorationChange({
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': None,
'old_value': 'EndExploration',
'property_name': 'widget_id'
}), exp_domain.ExplorationChange({
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': {},
'old_value': {
'recommendedExplorationIds': {
'value': []
}
},
'property_name': 'widget_customization_args'
}), exp_domain.ExplorationChange({
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': 'NumericInput',
'old_value': None,
'property_name': 'widget_id'
}), exp_domain.ExplorationChange({
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': {
'refresher_exploration_id': None,
'missing_prerequisite_skill_id': None,
'dest': 'End',
'labelled_as_correct': False,
'param_changes': [],
'feedback': {
'html': '',
'content_id': 'default_outcome'
}
},
'old_value': None,
'property_name': 'default_outcome'
}), exp_domain.ExplorationChange({
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': 1,
'old_value': 0,
'property_name': 'next_content_id_index'
}), exp_domain.ExplorationChange({
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': [{
'outcome': {
'refresher_exploration_id': None,
'missing_prerequisite_skill_id': None,
'dest': 'End',
'labelled_as_correct': False,
'param_changes': [],
'feedback': {
'html': '<p>Feedback</p>',
'content_id': 'feedback_0'
}
},
'rule_specs': [{
'inputs': {
'x': 60
},
'rule_type': 'IsLessThanOrEqualTo'
}],
'tagged_skill_misconception_id': None,
'training_data': []
}],
'old_value': [],
'property_name': 'answer_groups'
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'End',
'property_name': 'solicit_answer_details',
'new_value': True
})]
changes_are_mergeable_1 = exp_services.are_changes_mergeable(
self.EXP_1_ID, 1, change_list_4)
self.assertEqual(changes_are_mergeable_1, True)
def test_changes_are_not_mergeable_when_interaction_id_changes_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Changes to the properties that affect or are affected by the
# interaction id, as well as to interaction_id itself.
change_list_2 = [exp_domain.ExplorationChange({
'new_value': None,
'state_name': 'Introduction',
'old_value': 'TextInput',
'cmd': 'edit_state_property',
'property_name': 'widget_id'
}), exp_domain.ExplorationChange({
'new_value': {},
'state_name': 'Introduction',
'old_value': {
'rows': {
'value': 1
},
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
}
},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args'
}), exp_domain.ExplorationChange({
'new_value': 2,
'state_name': 'Introduction',
'old_value': 1,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index'
}), exp_domain.ExplorationChange({
'new_value': 'Continue',
'state_name': 'Introduction',
'old_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id'
}), exp_domain.ExplorationChange({
'new_value': {
'buttonText': {
'value': {
'content_id': 'ca_buttonText_1',
'unicode_str': 'Continue'
}
}
},
'state_name': 'Introduction',
'old_value': {},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Changed Contents and Hint')
# Changes to the properties that affect or are affected by the
# interaction id, and to interaction_id itself, made again to
# check that the changes are not mergeable.
change_list_3 = [exp_domain.ExplorationChange({
'new_value': None,
'state_name': 'Introduction',
'old_value': 'TextInput',
'cmd': 'edit_state_property',
'property_name': 'widget_id'
}), exp_domain.ExplorationChange({
'new_value': {},
'state_name': 'Introduction',
'old_value': {
'rows': {
'value': 1
},
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
}
},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args'
}), exp_domain.ExplorationChange({
'new_value': 2,
'state_name': 'Introduction',
'old_value': 1,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index'
}), exp_domain.ExplorationChange({
'new_value': 'Continue',
'state_name': 'Introduction',
'old_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id'
}), exp_domain.ExplorationChange({
'new_value': {
'buttonText': {
'value': {
'content_id': 'ca_buttonText_1',
'unicode_str': 'Continue'
}
}
},
'state_name': 'Introduction',
'old_value': {},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args'
})]
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_3)
self.assertEqual(changes_are_not_mergeable, False)
def test_changes_are_mergeable_when_customization_args_changes_do_not_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Changes to properties that neither affect nor are affected
# by the customization args.
change_list = [exp_domain.ExplorationChange({
'new_value': {
'content_id': 'content',
'html': '<p>This is the first state.</p>'
},
'state_name': 'Introduction',
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content'
}), exp_domain.ExplorationChange({
'new_value': [{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>This is a first hint.</p>'
}
}],
'state_name': 'Introduction',
'old_value': [],
'cmd': 'edit_state_property',
'property_name': 'hints'
}), exp_domain.ExplorationChange({
'new_value': 2,
'state_name': 'Introduction',
'old_value': 1,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index'
}), exp_domain.ExplorationChange({
'new_value': [{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>This is a first hint.</p>'
}
}, {
'hint_content': {
'content_id': 'hint_2',
'html': '<p>This is the second hint.</p>'
}
}],
'state_name': 'Introduction',
'old_value': [{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>This is a first hint.</p>'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints'
}), exp_domain.ExplorationChange({
'new_value': 3,
'state_name': 'Introduction',
'old_value': 2,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index'
}), exp_domain.ExplorationChange({
'new_value': {
'content_id': 'content',
'html': '<p>Congratulations, you have finished!</p>'
},
'state_name': 'End',
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Changed Contents and Hints')
# Changes to the properties that affect or are affected by
# customization_args in the same state. This includes renaming
# the state, to check that the changes are applied even after
# states are renamed.
change_list_2 = [exp_domain.ExplorationChange({
'cmd': 'rename_state',
'new_state_name': 'Intro-rename',
'old_state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': 'Introduction',
'property_name': 'init_state_name',
'new_value': 'Intro-rename',
'cmd': 'edit_exploration_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': {
'placeholder':
{
'value':
{
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
},
'rows': {
'value': 1
}
},
'property_name': 'widget_customization_args',
'new_value':
{
'placeholder':
{
'value':
{
'content_id': 'ca_placeholder_0',
'unicode_str': 'Placeholder text'
}
},
'rows':
{
'value': 2
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': 'TextInput',
'property_name': 'widget_id',
'new_value': None,
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value':
{
'placeholder':
{
'value':
{
'content_id': 'ca_placeholder_0',
'unicode_str': 'Placeholder text'
}
},
'rows':
{
'value': 2
}
},
'property_name': 'widget_customization_args',
'new_value': {},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': 1,
'property_name': 'next_content_id_index',
'new_value': 3,
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': None,
'property_name': 'widget_id',
'new_value': 'NumericInput',
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value':
{
'requireNonnegativeInput':
{
'value': True
}
},
'property_name': 'widget_customization_args',
'new_value':
{
'requireNonnegativeInput':
{
'value': False
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': 3,
'property_name': 'next_content_id_index',
'new_value': 4,
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': [],
'property_name': 'answer_groups',
'new_value':
[
{
'rule_specs':
[
{
'inputs':
{
'x': 50
},
'rule_type': 'IsLessThanOrEqualTo'
}
],
'training_data': [],
'tagged_skill_misconception_id': None,
'outcome':
{
'feedback':
{
'content_id': 'feedback_3',
'html': '<p>Next</p>'
},
'param_changes': [],
'refresher_exploration_id': None,
'dest': 'End',
'missing_prerequisite_skill_id': None,
'labelled_as_correct': False
}
}
],
'cmd': 'edit_state_property'
})]
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_2)
self.assertEqual(changes_are_mergeable, True)
# Creating a second exploration to test the scenario where
# changes to the same properties are made in two
# different states.
self.save_new_valid_exploration(
self.EXP_1_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_1_ID)
# Reusing change_list_2 here because it already covers the
# customization-args changes in the first state.
exp_services.update_exploration(
self.owner_id, self.EXP_1_ID, change_list_2,
'Changed Interactions and Customization_args in One State')
# Changes to the properties related to the customization args
# in the second state to check for mergeability.
change_list_3 = [exp_domain.ExplorationChange({
'old_value': 'EndExploration',
'state_name': 'End',
'property_name': 'widget_id',
'cmd': 'edit_state_property',
'new_value': None
}), exp_domain.ExplorationChange({
'old_value': {
'recommendedExplorationIds': {
'value': []
}
},
'state_name': 'End',
'property_name': 'widget_customization_args',
'cmd': 'edit_state_property',
'new_value': {}
}), exp_domain.ExplorationChange({
'old_value': 0,
'state_name': 'End',
'property_name': 'next_content_id_index',
'cmd': 'edit_state_property',
'new_value': 4
}), exp_domain.ExplorationChange({
'old_value': None,
'state_name': 'End',
'property_name': 'widget_id',
'cmd': 'edit_state_property',
'new_value': 'ItemSelectionInput'
}), exp_domain.ExplorationChange({
'old_value': {},
'state_name': 'End',
'property_name': 'widget_customization_args',
'cmd': 'edit_state_property',
'new_value': {
'minAllowableSelectionCount': {
'value': 1
},
'choices': {
'value': [{
'html': '<p>A</p>',
'content_id': 'ca_choices_0'
}, {
'html': '<p>B</p>',
'content_id': 'ca_choices_1'
}, {
'html': '<p>C</p>',
'content_id': 'ca_choices_2'
}, {
'html': '<p>D</p>',
'content_id': 'ca_choices_3'
}]
},
'maxAllowableSelectionCount': {
'value': 1
}
}
}), exp_domain.ExplorationChange({
'old_value': None,
'state_name': 'End',
'property_name': 'default_outcome',
'cmd': 'edit_state_property',
'new_value': {
'refresher_exploration_id': None,
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'html': '',
'content_id': 'default_outcome'
},
'param_changes': [],
'labelled_as_correct': False
}
}), exp_domain.ExplorationChange({
'old_value': 4,
'state_name': 'End',
'property_name': 'next_content_id_index',
'cmd': 'edit_state_property',
'new_value': 5
}), exp_domain.ExplorationChange({
'old_value': [],
'state_name': 'End',
'property_name': 'answer_groups',
'cmd': 'edit_state_property',
'new_value':
[
{
'training_data': [],
'tagged_skill_misconception_id': None,
'outcome':
{
'refresher_exploration_id': None,
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback':
{
'html': '<p>Good</p>',
'content_id': 'feedback_4'
},
'param_changes': [],
'labelled_as_correct': False
},
'rule_specs':
[
{
'rule_type': 'Equals',
'inputs':
{
'x':
[
'ca_choices_1'
]
}
}
]
}
]
})]
changes_are_mergeable_1 = exp_services.are_changes_mergeable(
self.EXP_1_ID, 1, change_list_3)
self.assertEqual(changes_are_mergeable_1, True)
def test_changes_are_not_mergeable_when_customization_args_changes_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Changes to the properties that affect or are affected by
# customization_args.
change_list = [exp_domain.ExplorationChange({
'cmd': 'rename_state',
'new_state_name': 'Intro-rename',
'old_state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': 'Introduction',
'property_name': 'init_state_name',
'new_value': 'Intro-rename',
'cmd': 'edit_exploration_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': {
'placeholder':
{
'value':
{
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
},
'rows': {
'value': 1
}
},
'property_name': 'widget_customization_args',
'new_value':
{
'placeholder':
{
'value':
{
'content_id': 'ca_placeholder_0',
'unicode_str': 'Placeholder text'
}
},
'rows':
{
'value': 2
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': 'TextInput',
'property_name': 'widget_id',
'new_value': None,
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value':
{
'placeholder':
{
'value':
{
'content_id': 'ca_placeholder_0',
'unicode_str': 'Placeholder text'
}
},
'rows':
{
'value': 2
}
},
'property_name': 'widget_customization_args',
'new_value': {},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': 1,
'property_name': 'next_content_id_index',
'new_value': 3,
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': None,
'property_name': 'widget_id',
'new_value': 'NumericInput',
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value':
{
'requireNonnegativeInput':
{
'value': True
}
},
'property_name': 'widget_customization_args',
'new_value':
{
'requireNonnegativeInput':
{
'value': False
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': 3,
'property_name': 'next_content_id_index',
'new_value': 4,
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Intro-rename',
'old_value': [],
'property_name': 'answer_groups',
'new_value':
[
{
'rule_specs':
[
{
'inputs':
{
'x': 50
},
'rule_type': 'IsLessThanOrEqualTo'
}
],
'training_data': [],
'tagged_skill_misconception_id': None,
'outcome':
{
'feedback':
{
'content_id': 'feedback_3',
'html': '<p>Next</p>'
},
'param_changes': [],
'refresher_exploration_id': None,
'dest': 'End',
'missing_prerequisite_skill_id': None,
'labelled_as_correct': False
}
}
],
'cmd': 'edit_state_property'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Changed Customization Args and related properties again')
# Changes to the customization_args in the same state again,
# to check that the changes are not mergeable.
change_list_2 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'old_value': {
'placeholder':
{
'value':
{
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
},
'rows': {
'value': 1
}
},
'property_name': 'widget_customization_args',
'new_value':
{
'placeholder':
{
'value':
{
'content_id': 'ca_placeholder_0',
'unicode_str': 'Placeholder text 2.'
}
},
'rows':
{
'value': 2
}
},
'cmd': 'edit_state_property'
})]
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_2)
self.assertEqual(changes_are_not_mergeable, False)
def test_changes_are_mergeable_when_answer_groups_changes_do_not_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding answer_groups and a solution to the existing state.
change_list = [exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 1,
'state_name': 'Introduction',
'new_value': 3
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'rule_specs': [{
'rule_type': 'StartsWith',
'inputs': {
'x': {
'contentId': 'rule_input_2',
'normalizedStrSet': ['Hello', 'Hola']
}
}
}],
'tagged_skill_misconception_id': None,
'outcome': {
'labelled_as_correct': False,
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'refresher_exploration_id': None
},
'training_data': []
}]
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'hints',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'hint_content': {
'content_id': 'hint_3',
'html': '<p>Hint 1.</p>'
}
}]
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 3,
'state_name': 'Introduction',
'new_value': 4
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': None,
'state_name': 'Introduction',
'new_value': {
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
},
'answer_is_exclusive': False
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added answer groups and solution')
# Changes to properties that are not related to the
# answer_groups. These changes verify that edits to unrelated
# properties can be merged cleanly.
change_list_2 = [exp_domain.ExplorationChange({
'new_value': {
'content_id': 'content',
'html': '<p>This is the first state.</p>'
},
'state_name': 'Introduction',
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content'
}), exp_domain.ExplorationChange({
'new_value': [{
'hint_content': {
'content_id': 'hint_3',
'html': '<p>Hint 1.</p>'
}
}, {
'hint_content': {
'content_id': 'hint_4',
'html': '<p>This is a first hint.</p>'
}
}],
'state_name': 'Introduction',
'old_value': [{
'hint_content': {
'content_id': 'hint_3',
'html': '<p>Hint 1.</p>'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints'
}), exp_domain.ExplorationChange({
'new_value': 5,
'state_name': 'Introduction',
'old_value': 4,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index'
}), exp_domain.ExplorationChange({
'new_value': {
'content_id': 'content',
'html': '<p>Congratulations, you have finished!</p>'
},
'state_name': 'End',
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Changed Contents and Hint')
change_list_3 = [exp_domain.ExplorationChange({
'property_name': 'default_outcome',
'old_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'param_changes': [
],
'dest': 'End'
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': '<p>Feedback 1.</p>'
},
'param_changes': [
],
'dest': 'End'
}
})]
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_mergeable, True)
# Changes to the answer_groups and to the properties that
# affect or are affected by answer_groups.
change_list_4 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': [
'Hello',
'Hola',
'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hi Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
}
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 6,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 4
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}, {
'outcome': {
'feedback': {
'content_id': 'feedback_4',
'html': ''
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Oppia', 'GSoC'],
'contentId': 'rule_input_5'
}
},
'rule_type': 'Contains'
}],
'tagged_skill_misconception_id': None
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 'Oppia is selected for GSoC.',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hi Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
}
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'Introduction',
'property_name': 'solicit_answer_details',
'new_value': True
})]
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_4)
self.assertEqual(changes_are_mergeable, True)
# Creating a second exploration to test the scenario where
# changes to the same properties are made in two
# different states.
self.save_new_valid_exploration(
self.EXP_1_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_1_ID)
# Reusing change_list_2 and change_list_3 here because they
# already cover the changes related to the answer_groups in
# the first state.
exp_services.update_exploration(
self.owner_id, self.EXP_1_ID, change_list_2,
'Added Answer Group and Solution in One state')
exp_services.update_exploration(
self.owner_id, self.EXP_1_ID, change_list_3,
'Changed Answer Groups and Solutions in One State')
# Changes to the properties related to the answer_groups
# in the second state to check for mergeability.
change_list_5 = [exp_domain.ExplorationChange({
'old_value': 'EndExploration',
'state_name': 'End',
'property_name': 'widget_id',
'cmd': 'edit_state_property',
'new_value': None
}), exp_domain.ExplorationChange({
'old_value': {
'recommendedExplorationIds': {
'value': []
}
},
'state_name': 'End',
'property_name': 'widget_customization_args',
'cmd': 'edit_state_property',
'new_value': {}
}), exp_domain.ExplorationChange({
'old_value': 0,
'state_name': 'End',
'property_name': 'next_content_id_index',
'cmd': 'edit_state_property',
'new_value': 4
}), exp_domain.ExplorationChange({
'old_value': None,
'state_name': 'End',
'property_name': 'widget_id',
'cmd': 'edit_state_property',
'new_value': 'ItemSelectionInput'
}), exp_domain.ExplorationChange({
'old_value': {},
'state_name': 'End',
'property_name': 'widget_customization_args',
'cmd': 'edit_state_property',
'new_value': {
'minAllowableSelectionCount': {
'value': 1
},
'choices': {
'value': [{
'html': '<p>A</p>',
'content_id': 'ca_choices_0'
}, {
'html': '<p>B</p>',
'content_id': 'ca_choices_1'
}, {
'html': '<p>C</p>',
'content_id': 'ca_choices_2'
}, {
'html': '<p>D</p>',
'content_id': 'ca_choices_3'
}]
},
'maxAllowableSelectionCount': {
'value': 1
}
}
}), exp_domain.ExplorationChange({
'old_value': None,
'state_name': 'End',
'property_name': 'default_outcome',
'cmd': 'edit_state_property',
'new_value': {
'refresher_exploration_id': None,
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'html': '',
'content_id': 'default_outcome'
},
'param_changes': [],
'labelled_as_correct': False
}
}), exp_domain.ExplorationChange({
'old_value': 4,
'state_name': 'End',
'property_name': 'next_content_id_index',
'cmd': 'edit_state_property',
'new_value': 5
}), exp_domain.ExplorationChange({
'old_value': [],
'state_name': 'End',
'property_name': 'answer_groups',
'cmd': 'edit_state_property',
'new_value': [{
'training_data': [],
'tagged_skill_misconception_id': None,
'outcome': {
'refresher_exploration_id': None,
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'html': '<p>Good</p>',
'content_id': 'feedback_4'
},
'param_changes': [],
'labelled_as_correct': False
},
'rule_specs': [{
'rule_type': 'Equals',
'inputs': {
'x': ['ca_choices_1']
}
}]
}]
})]
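# Every edit in change_list_5 targets the 'End' state, while the
# earlier change lists edited the first state, so the two sets of
# changes should not overlap.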
changes_are_mergeable_1 = exp_services.are_changes_mergeable(
self.EXP_1_ID, 2, change_list_5)
self.assertEqual(changes_are_mergeable_1, True)

def test_changes_are_not_mergeable_when_answer_groups_changes_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding answer_groups and solutions to the existing state.
change_list = [exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 1,
'state_name': 'Introduction',
'new_value': 3
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'rule_specs': [{
'rule_type': 'StartsWith',
'inputs': {
'x': {
'contentId': 'rule_input_2',
'normalizedStrSet': ['Hello', 'Hola']
}
}
}],
'tagged_skill_misconception_id': None,
'outcome': {
'labelled_as_correct': False,
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'refresher_exploration_id': None
},
'training_data': []
}]
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'hints',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'hint_content': {
'content_id': 'hint_3',
'html': '<p>Hint 1.</p>'
}
}]
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 3,
'state_name': 'Introduction',
'new_value': 4
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': None,
'state_name': 'Introduction',
'new_value': {
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
},
'answer_is_exclusive': False
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added answer groups and solution')
# Changes to the answer_groups and the properties that
# affect or are affected by answer_groups.
change_list_2 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': [
'Hello',
'Hola',
'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hi Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
}
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 6,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 4
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}, {
'outcome': {
'feedback': {
'content_id': 'feedback_4',
'html': ''
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Oppia', 'GSoC'],
'contentId': 'rule_input_5'
}
},
'rule_type': 'Contains'
}],
'tagged_skill_misconception_id': None
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 'Oppia is selected for GSoC.',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hi Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Changed Answer Groups and related properties')
# Changes to the answer group in the same state again
# to check that the changes are not mergeable.
change_list_3 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': [
'Hello',
'Hola',
'Hey'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}]
})]
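# change_list_3 was drafted against version 2, but change_list_2
# already edited the same rule inputs of the same answer group on
# top of that version, so the stale draft should not be mergeable.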
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_not_mergeable, False)

def test_changes_are_mergeable_when_solutions_changes_do_not_conflict(self):
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding new answer_groups and solutions.
change_list = [exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 1,
'state_name': 'Introduction',
'new_value': 3
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'rule_specs': [{
'rule_type': 'StartsWith',
'inputs': {
'x': {
'contentId': 'rule_input_2',
'normalizedStrSet': [
'Hello',
'Hola'
]
}
}
}],
'tagged_skill_misconception_id': None,
'outcome': {
'labelled_as_correct': False,
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'refresher_exploration_id': None
},
'training_data': []
}]
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'hints',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'hint_content': {
'content_id': 'hint_3',
'html': '<p>Hint 1.</p>'
}
}]
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 3,
'state_name': 'Introduction',
'new_value': 4
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': None,
'state_name': 'Introduction',
'new_value': {
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
},
'answer_is_exclusive': False
}
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'Introduction',
'property_name': 'solicit_answer_details',
'new_value': True
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added answer groups and solution')
# Changes to the properties unrelated to the solutions.
change_list_2 = [exp_domain.ExplorationChange({
'new_value': {
'content_id': 'content',
'html': '<p>This is the first state.</p>'
},
'state_name': 'Introduction',
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content'
}), exp_domain.ExplorationChange({
'new_value': [{
'hint_content': {
'content_id': 'hint_3',
'html': '<p>Hint 1.</p>'
}
}, {
'hint_content': {
'content_id': 'hint_4',
'html': '<p>This is a first hint.</p>'
}
}],
'state_name': 'Introduction',
'old_value': [{
'hint_content': {
'content_id': 'hint_3',
'html': '<p>Hint 1.</p>'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints'
}), exp_domain.ExplorationChange({
'new_value': 5,
'state_name': 'Introduction',
'old_value': 4,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index'
}), exp_domain.ExplorationChange({
'new_value': {
'content_id': 'content',
'html': '<p>Congratulations, you have finished!</p>'
},
'state_name': 'End',
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content'
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'Introduction',
'property_name': 'solicit_answer_details',
'new_value': True
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Changed Contents and Hint')
# Changes to the solutions and the properties that affect
# solutions to check for mergeability.
change_list_3 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hi Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
}
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 6,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 4
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}, {
'outcome': {
'feedback': {
'content_id': 'feedback_4',
'html': ''
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Oppia', 'GSoC'],
'contentId': 'rule_input_5'
}
},
'rule_type': 'Contains'
}],
'tagged_skill_misconception_id': None
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 'Oppia is selected for GSoC.',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hi Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
}
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'Introduction',
'property_name': 'solicit_answer_details',
'new_value': False
})]
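# change_list_3 is based on version 2, and change_list_2 mainly
# edited contents and hints, which the answer_groups and solution
# edits here do not touch, so the draft should rebase cleanly.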
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_mergeable, True)
# Creating a second exploration to test the scenario
# where changes to the same properties are made in two
# different states.
self.save_new_valid_exploration(
self.EXP_1_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_1_ID)
# Using the old change_list_2 and change_list_3 here
# because they already cover the changes related to
# the solutions in the first state.
exp_services.update_exploration(
self.owner_id, self.EXP_1_ID, change_list_2,
'Added Answer Group and Solution in One state')
exp_services.update_exploration(
self.owner_id, self.EXP_1_ID, change_list_3,
'Changed Answer Groups and Solutions in One State')
# Changes to the properties related to the solutions
# in the second state to check for mergeability.
change_list_4 = [exp_domain.ExplorationChange({
'old_value': 'EndExploration',
'new_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': {
'recommendedExplorationIds': {
'value': []
}
},
'new_value': {},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': None,
'new_value': 'NumericInput',
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': None,
'new_value': {
'dest': 'End',
'missing_prerequisite_skill_id': None,
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None,
'feedback': {
'html': '',
'content_id': 'default_outcome'
}
},
'cmd': 'edit_state_property',
'property_name': 'default_outcome',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': 0,
'new_value': 1,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': [],
'new_value': [{
'outcome': {
'dest': 'End',
'missing_prerequisite_skill_id': None,
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None,
'feedback': {
'html': '<p>Good</p>',
'content_id': 'feedback_0'
}
},
'training_data': [],
'tagged_skill_misconception_id': None,
'rule_specs': [{
'rule_type': 'IsGreaterThanOrEqualTo',
'inputs': {
'x': 20
}
}]
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': [],
'new_value': [{
'hint_content': {
'html': '<p>Hint 1. State 2.</p>',
'content_id': 'hint_1'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': 1,
'new_value': 2,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': None,
'new_value': {
'correct_answer': 30,
'explanation': {
'html': '<p>Explanation.</p>',
'content_id': 'solution'
},
'answer_is_exclusive': False
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': {
'correct_answer': 30,
'explanation': {
'html': '<p>Explanation.</p>',
'content_id': 'solution'
},
'answer_is_exclusive': False
},
'new_value': {
'correct_answer': 10,
'explanation': {
'html': '<p>Explanation.</p>',
'content_id': 'solution'
},
'answer_is_exclusive': False
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'state_name': 'End'
})]
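# All of the edits in change_list_4 target the second ('End')
# state, while the previous change lists edited the first state,
# so the changes should merge even though both touch solutions.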
changes_are_mergeable_1 = exp_services.are_changes_mergeable(
self.EXP_1_ID, 2, change_list_4)
self.assertEqual(changes_are_mergeable_1, True)

def test_changes_are_not_mergeable_when_solutions_changes_conflict(self):
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding new answer_groups and solutions.
change_list = [exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 1,
'state_name': 'Introduction',
'new_value': 3
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'rule_specs': [{
'rule_type': 'StartsWith',
'inputs': {
'x': {
'contentId': 'rule_input_2',
'normalizedStrSet': [
'Hello',
'Hola'
]
}
}
}],
'tagged_skill_misconception_id': None,
'outcome': {
'labelled_as_correct': False,
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'refresher_exploration_id': None
},
'training_data': []
}]
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'hints',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'hint_content': {
'content_id': 'hint_3',
'html': '<p>Hint 1.</p>'
}
}]
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 3,
'state_name': 'Introduction',
'new_value': 4
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': None,
'state_name': 'Introduction',
'new_value': {
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
},
'answer_is_exclusive': False
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added answer groups and solution')
# Changes to the solutions and the properties that affect
# solutions to check for mergeability.
change_list_2 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hi Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
}
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 6,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 4
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}, {
'outcome': {
'feedback': {
'content_id': 'feedback_4',
'html': ''
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Oppia', 'GSoC'],
'contentId': 'rule_input_5'
}
},
'rule_type': 'Contains'
}],
'tagged_skill_misconception_id': None
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [{
'outcome': {
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'training_data': [],
'rule_specs': [{
'inputs': {
'x': {
'normalizedStrSet': ['Hello', 'Hola', 'Hi'],
'contentId': 'rule_input_2'
}
},
'rule_type': 'StartsWith'
}],
'tagged_skill_misconception_id': None
}]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 'Oppia is selected for GSoC.',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hi Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Changed Solutions and affected properties')
# Change to the solution of the same state again
# to check that the changes are not mergeable.
change_list_3 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': {
'answer_is_exclusive': False,
'correct_answer': 'Hello Aryaman!',
'explanation': {
'content_id': 'solution',
'html': '<p>Changed Explanation.</p>'
}
}
})]
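# The solution of 'Introduction' was already replaced by
# change_list_2 after version 2, so this stale draft of the same
# solution should conflict with the latest version.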
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_not_mergeable, False)

def test_changes_are_mergeable_when_hints_changes_do_not_conflict(self):
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding hints to the existing state.
change_list = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}],
'property_name': 'hints',
'cmd': 'edit_state_property',
'old_value': []
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 2,
'property_name': 'next_content_id_index',
'cmd': 'edit_state_property',
'old_value': 1
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'explanation': {
'html': '<p>Explanation</p>',
'content_id': 'solution'
},
'correct_answer': 'Hello'
},
'property_name': 'solution',
'cmd': 'edit_state_property',
'old_value': None
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added Hint and Solution in Introduction state')
# Changes to all state properties other than the hints.
change_list_2 = [exp_domain.ExplorationChange({
'property_name': 'content',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': {
'html': '',
'content_id': 'content'
},
'new_value': {
'html': '<p>Content in Introduction.</p>',
'content_id': 'content'
}
}), exp_domain.ExplorationChange({
'property_name': 'solution',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': {
'explanation': {
'html': '<p>Explanation</p>',
'content_id': 'solution'
},
'answer_is_exclusive': False,
'correct_answer': 'Hello'
},
'new_value': {
'explanation': {
'html': '<p>Explanation</p>',
'content_id': 'solution'
},
'answer_is_exclusive': False,
'correct_answer': 'Hello Aryaman'
}
}), exp_domain.ExplorationChange({
'property_name': 'widget_id',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': 'TextInput',
'new_value': None
}), exp_domain.ExplorationChange({
'property_name': 'widget_customization_args',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': {
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
},
'rows': {
'value': 1
}
},
'new_value': {}
}), exp_domain.ExplorationChange({
'property_name': 'solution',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': {
'explanation': {
'html': '<p>Explanation</p>',
'content_id': 'solution'
},
'answer_is_exclusive': False,
'correct_answer': 'Hello Aryaman'
},
'new_value': None
}), exp_domain.ExplorationChange({
'property_name': 'widget_id',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': None,
'new_value': 'NumericInput'
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'old_value':
{
'requireNonnegativeInput':
{
'value': True
}
},
'property_name': 'widget_customization_args',
'new_value':
{
'requireNonnegativeInput':
{
'value': False
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'property_name': 'next_content_id_index',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': 2,
'new_value': 3
}), exp_domain.ExplorationChange({
'property_name': 'answer_groups',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': [],
'new_value': [{
'rule_specs': [{
'inputs': {
'x': 46
},
'rule_type': 'IsLessThanOrEqualTo'
}],
'training_data': [],
'tagged_skill_misconception_id': None,
'outcome': {
'labelled_as_correct': False,
'refresher_exploration_id': None,
'missing_prerequisite_skill_id': None,
'dest': 'End',
'feedback': {
'html': '',
'content_id': 'feedback_2'
},
'param_changes': []
}
}]
}), exp_domain.ExplorationChange({
'property_name': 'solution',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': None,
'new_value': {
'explanation': {
'html': '<p>Explanation</p>',
'content_id': 'solution'
},
'answer_is_exclusive': False,
'correct_answer': 42
}
}), exp_domain.ExplorationChange({
'property_name': 'content',
'state_name': 'End',
'cmd': 'edit_state_property',
'old_value': {
'html': '',
'content_id': 'content'
},
'new_value': {
'html': '<p>Congratulations, you have finished!</p>',
'content_id': 'content'
}
}), exp_domain.ExplorationChange({
'property_name': 'title',
'cmd': 'edit_exploration_property',
'old_value': 'A title',
'new_value': 'First Title'
}), exp_domain.ExplorationChange({
'property_name': 'solution',
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'old_value': {
'explanation': {
'html': '<p>Explanation</p>',
'content_id': 'solution'
},
'answer_is_exclusive': False,
'correct_answer': 42
},
'new_value': {
'explanation': {
'html': '<p>Explanation</p>',
'content_id': 'solution'
},
'answer_is_exclusive': False,
'correct_answer': 40
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Made changes in interaction, contents, solutions, answer_groups in both states') # pylint: disable=line-too-long
# Edits to the old hints, plus newly added and reordered
# hints, to cover all the cases when checking for mergeability.
change_list_3 = [exp_domain.ExplorationChange({
'old_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}, {
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': 2,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'new_value': 3,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}, {
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [{
'hint_content': {
'html': '<p>Changed hint 1.</p>',
'content_id': 'hint_1'
}
}, {
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [{
'hint_content': {
'html': '<p>Changed hint 1.</p>',
'content_id': 'hint_1'
}
}, {
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [
{
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}, {
'hint_content': {
'html': '<p>Changed hint 1.</p>',
'content_id': 'hint_1'
}
}
],
'state_name': 'Introduction'
})]
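# change_list_2, applied after version 2, never touched the hints
# of 'Introduction', so adding, editing and reordering hints in
# this draft should still be mergeable.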
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_mergeable, True)

def test_changes_are_not_mergeable_when_hints_changes_conflict(self):
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding hints to the existing state.
change_list = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}],
'property_name': 'hints',
'cmd': 'edit_state_property',
'old_value': []
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 2,
'property_name': 'next_content_id_index',
'cmd': 'edit_state_property',
'old_value': 1
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'explanation': {
'html': '<p>Explanation</p>',
'content_id': 'solution'
},
'correct_answer': 'Hello'
},
'property_name': 'solution',
'cmd': 'edit_state_property',
'old_value': None
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added Hint and Solution in Introduction state')
# Edits to the old hints, plus newly added and reordered
# hints, to cover all the cases when checking for mergeability.
change_list_2 = [exp_domain.ExplorationChange({
'old_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}, {
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': 2,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'new_value': 3,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}, {
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [{
'hint_content': {
'html': '<p>Changed hint 1.</p>',
'content_id': 'hint_1'
}
}, {
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [{
'hint_content': {
'html': '<p>Changed hint 1.</p>',
'content_id': 'hint_1'
}
}, {
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [
{
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_2'
}
}, {
'hint_content': {
'html': '<p>Changed hint 1.</p>',
'content_id': 'hint_1'
}
}
],
'state_name': 'Introduction'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Changes in the hints again.')
change_list_3 = [exp_domain.ExplorationChange({
'old_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [{
'hint_content': {
'html': '<p>Changed Hint 1.</p>',
'content_id': 'hint_1'
}
}],
'state_name': 'Introduction'
})]
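# change_list_2 already edited and reordered 'hint_1' after
# version 2, so this stale draft that rewrites the same hint
# should conflict with the latest version.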
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_not_mergeable, False)

def test_changes_are_mergeable_when_exploration_properties_changes_do_not_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Changes to all the properties of both states other than
# the exploration properties, i.e. title, category,
# objective etc. Rename-state changes are also included to
# check that renaming states doesn't affect anything.
change_list = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'html': '<p>Content</p>',
'content_id': 'content'
},
'cmd': 'edit_state_property',
'property_name': 'content',
'old_value': {
'html': '',
'content_id': 'content'
}
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [
{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}
],
'cmd': 'edit_state_property',
'property_name': 'hints',
'old_value': [
]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 2,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 1
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'old_value': 'TextInput'
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'old_value': {
'rows': {
'value': 1
},
'placeholder': {
'value': {
'unicode_str': '',
'content_id': 'ca_placeholder_0'
}
}
}
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 'NumericInput',
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'old_value': None
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'old_value':
{
'requireNonnegativeInput':
{
'value': True
}
},
'property_name': 'widget_customization_args',
'new_value':
{
'requireNonnegativeInput':
{
'value': False
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 3,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 2
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [
{
'outcome': {
'refresher_exploration_id': None,
'feedback': {
'html': '<p>Good.</p>',
'content_id': 'feedback_2'
},
'missing_prerequisite_skill_id': None,
'labelled_as_correct': False,
'dest': 'End',
'param_changes': []
},
'training_data': [],
'rule_specs': [
{
'inputs': {
'x': 50
},
'rule_type': 'IsLessThanOrEqualTo'
}
],
'tagged_skill_misconception_id': None
}
],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [
]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'refresher_exploration_id': None,
'feedback': {
'html': '<p>Try Again.</p>',
'content_id': 'default_outcome'
},
'missing_prerequisite_skill_id': None,
'labelled_as_correct': False,
'dest': 'End',
'param_changes': []
},
'cmd': 'edit_state_property',
'property_name': 'default_outcome',
'old_value': {
'refresher_exploration_id': None,
'feedback': {
'html': '',
'content_id': 'default_outcome'
},
'missing_prerequisite_skill_id': None,
'labelled_as_correct': False,
'dest': 'End',
'param_changes': [
]
}
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'refresher_exploration_id': None,
'feedback': {
'html': '<p>Try Again.</p>',
'content_id': 'default_outcome'
},
'missing_prerequisite_skill_id': None,
'labelled_as_correct': False,
'dest': 'Introduction',
'param_changes': [
]
},
'cmd': 'edit_state_property',
'property_name': 'default_outcome',
'old_value': {
'refresher_exploration_id': None,
'feedback': {
'html': '<p>Try Again.</p>',
'content_id': 'default_outcome'
},
'missing_prerequisite_skill_id': None,
'labelled_as_correct': False,
'dest': 'End',
'param_changes': [
]
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Made changes in interaction, contents, solutions, answer_groups in introduction state.') # pylint: disable=line-too-long
# Changes to the solution and hints of the first state and
# to the content and name of the second state.
change_list_2 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': {
'answer_is_exclusive': False,
'correct_answer': 25,
'explanation': {
'html': '<p>Explanation.</p>',
'content_id': 'solution'
}
},
'cmd': 'edit_state_property',
'property_name': 'solution',
'old_value': None
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': [
{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
},
{
'hint_content': {
'html': '<p>Hint 2.</p>',
'content_id': 'hint_3'
}
}
],
'cmd': 'edit_state_property',
'property_name': 'hints',
'old_value': [{
'hint_content': {
'html': '<p>Hint 1.</p>',
'content_id': 'hint_1'
}
}]
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'new_value': 4,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'old_value': 3
}), exp_domain.ExplorationChange({
'state_name': 'End',
'new_value': {
'html': '<p>Congratulations, you have finished!</p>',
'content_id': 'content'
},
'cmd': 'edit_state_property',
'property_name': 'content',
'old_value': {
'html': '',
'content_id': 'content'
}
}), exp_domain.ExplorationChange({
'new_state_name': 'End-State',
'cmd': 'rename_state',
'old_state_name': 'End'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Made changes in solutions in introduction state and content, state_name in end state.') # pylint: disable=line-too-long
# Changes to the exploration properties to check
# for mergeability.
change_list_3 = [exp_domain.ExplorationChange({
'property_name': 'title',
'cmd': 'edit_exploration_property',
'old_value': 'A title',
'new_value': 'A changed title.'
}), exp_domain.ExplorationChange({
'property_name': 'objective',
'cmd': 'edit_exploration_property',
'old_value': 'An objective',
'new_value': 'A changed objective.'
}), exp_domain.ExplorationChange({
'property_name': 'category',
'cmd': 'edit_exploration_property',
'old_value': 'A category',
'new_value': 'A changed category'
}), exp_domain.ExplorationChange({
'property_name': 'auto_tts_enabled',
'cmd': 'edit_exploration_property',
'old_value': True,
'new_value': False
}), exp_domain.ExplorationChange({
'property_name': 'tags',
'cmd': 'edit_exploration_property',
'old_value': [
],
'new_value': [
'new'
]
}), exp_domain.ExplorationChange({
'property_name': 'tags',
'cmd': 'edit_exploration_property',
'old_value': [
'new'
],
'new_value': [
'new',
'skill'
]
}), exp_domain.ExplorationChange({
'cmd': 'edit_exploration_property',
'property_name': 'language_code',
'new_value': 'bn',
'old_value': 'en'
}), exp_domain.ExplorationChange({
'cmd': 'edit_exploration_property',
'property_name': 'author_notes',
'new_value': 'author_notes'
}), exp_domain.ExplorationChange({
'cmd': 'edit_exploration_property',
'property_name': 'blurb',
'new_value': 'blurb'
}), exp_domain.ExplorationChange({
'cmd': 'edit_exploration_property',
'property_name': 'init_state_name',
'new_value': 'End',
}), exp_domain.ExplorationChange({
'cmd': 'edit_exploration_property',
'property_name': 'init_state_name',
'new_value': 'Introduction',
}), exp_domain.ExplorationChange({
'cmd': 'edit_exploration_property',
'property_name': 'auto_tts_enabled',
'new_value': False
}), exp_domain.ExplorationChange({
'cmd': 'edit_exploration_property',
'property_name': 'correctness_feedback_enabled',
'new_value': True
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'confirmed_unclassified_answers',
'state_name': 'Introduction',
'new_value': ['test']
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'Introduction',
'property_name': 'linked_skill_id',
'new_value': 'string_1'
}), exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'Introduction',
'property_name': 'card_is_checkpoint',
'new_value': True
})]
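# change_list_3 edits exploration-level properties, plus a few
# state properties that the earlier change lists left untouched,
# so the draft from version 1 should merge with the later edits.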
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_3)
self.assertEqual(changes_are_mergeable, True)

def test_changes_are_not_mergeable_when_exploration_properties_changes_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Changes to the exploration properties to check
# for mergeability.
change_list = [exp_domain.ExplorationChange({
'property_name': 'title',
'cmd': 'edit_exploration_property',
'old_value': 'A title',
'new_value': 'A changed title.'
}), exp_domain.ExplorationChange({
'property_name': 'objective',
'cmd': 'edit_exploration_property',
'old_value': 'An objective',
'new_value': 'A changed objective.'
}), exp_domain.ExplorationChange({
'property_name': 'category',
'cmd': 'edit_exploration_property',
'old_value': 'A category',
'new_value': 'A changed category'
}), exp_domain.ExplorationChange({
'property_name': 'auto_tts_enabled',
'cmd': 'edit_exploration_property',
'old_value': True,
'new_value': False
}), exp_domain.ExplorationChange({
'property_name': 'tags',
'cmd': 'edit_exploration_property',
'old_value': [
],
'new_value': [
'new'
]
}), exp_domain.ExplorationChange({
'property_name': 'tags',
'cmd': 'edit_exploration_property',
'old_value': [
'new'
],
'new_value': [
'new',
'skill'
]
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Changes in the Exploration Properties.')
change_list_2 = [exp_domain.ExplorationChange({
'property_name': 'title',
'cmd': 'edit_exploration_property',
'old_value': 'A title',
'new_value': 'A new title.'
}), exp_domain.ExplorationChange({
'property_name': 'objective',
'cmd': 'edit_exploration_property',
'old_value': 'An objective',
'new_value': 'A new objective.'
}), exp_domain.ExplorationChange({
'property_name': 'category',
'cmd': 'edit_exploration_property',
'old_value': 'A category',
'new_value': 'A new category'
}), exp_domain.ExplorationChange({
'property_name': 'auto_tts_enabled',
'cmd': 'edit_exploration_property',
'old_value': True,
'new_value': False
}), exp_domain.ExplorationChange({
'property_name': 'tags',
'cmd': 'edit_exploration_property',
'old_value': [
],
'new_value': [
'new'
]
}), exp_domain.ExplorationChange({
'property_name': 'tags',
'cmd': 'edit_exploration_property',
'old_value': [
'new'
],
'new_value': [
'new',
'skill'
]
})]
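# The same exploration properties (title, objective, category and
# tags) were already edited after version 1, so this stale draft
# should not be mergeable.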
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_2)
self.assertEqual(changes_are_not_mergeable, False)

def test_changes_are_mergeable_when_translations_changes_do_not_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding content, feedback and solutions so that
# translations can be added later on.
change_list = [exp_domain.ExplorationChange({
'property_name': 'content',
'old_value': {
'content_id': 'content',
'html': ''
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'content_id': 'content',
'html': '<p>First State Content.</p>'
}
}), exp_domain.ExplorationChange({
'property_name': 'widget_customization_args',
'old_value': {
'placeholder': {
'value': {
'unicode_str': '',
'content_id': 'ca_placeholder_0'
}
},
'rows': {
'value': 1
}
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'placeholder': {
'value': {
'unicode_str': 'Placeholder',
'content_id': 'ca_placeholder_0'
}
},
'rows': {
'value': 1
}
}
}), exp_domain.ExplorationChange({
'property_name': 'default_outcome',
'old_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'param_changes': [
],
'dest': 'End'
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': '<p>Feedback 1.</p>'
},
'param_changes': [
],
'dest': 'End'
}
}), exp_domain.ExplorationChange({
'property_name': 'hints',
'old_value': [
],
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': [
{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>Hint 1.</p>'
}
}
]
}), exp_domain.ExplorationChange({
'property_name': 'next_content_id_index',
'old_value': 1,
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': 2
}), exp_domain.ExplorationChange({
'property_name': 'solution',
'old_value': None,
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'answer_is_exclusive': False,
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
},
'correct_answer': 'Solution'
}
}), exp_domain.ExplorationChange({
'property_name': 'content',
'old_value': {
'content_id': 'content',
'html': ''
},
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': {
'content_id': 'content',
'html': '<p>Second State Content.</p>'
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added various contents.')
change_list_2 = [exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'rule_specs': [{
'rule_type': 'StartsWith',
'inputs': {
'x': {
'contentId': 'rule_input_2',
'normalizedStrSet': [
'Hello',
'Hola'
]
}
}
}],
'tagged_skill_misconception_id': None,
'outcome': {
'labelled_as_correct': False,
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'refresher_exploration_id': None
},
'training_data': []
}]
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Added answer group.')
# Adding some translations to the first state.
change_list_3 = [exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'content',
'translation_html': '<p>Translation Content.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'default_outcome',
'translation_html': '<p>Translation Feedback 1.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'cmd': 'mark_written_translations_as_needing_update',
'state_name': 'Introduction',
'content_id': 'default_outcome'
})]
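# change_list_2 only added an answer group, so these translation
# changes drafted against version 2 touch no property edited in
# the meantime and should merge cleanly.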
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_mergeable, True)
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_3,
'Added some translations.')
# Adding one translation to the second state and marking
# it as needing update.
change_list_7 = [exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'default_outcome',
'translation_html': '<p>Translation Content.</p>',
'state_name': 'End',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'cmd': 'mark_written_translation_as_needing_update',
'state_name': 'End',
'content_id': 'default_outcome'
})]
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_7)
self.assertEqual(changes_are_mergeable, True)
# Adding translations again to the different contents
# of the same state to check that they can be merged.
change_list_4 = [exp_domain.ExplorationChange({
'new_state_name': 'Intro-Rename',
'cmd': 'rename_state',
'old_state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'content_html': 'N/A',
'translation_html': 'Placeholder Translation.',
'state_name': 'Intro-Rename',
'language_code': 'de',
'content_id': 'ca_placeholder_0',
'cmd': 'add_written_translation',
'data_format': 'unicode'
}), exp_domain.ExplorationChange({
'content_html': 'N/A',
'translation_html': '<p>Hints Translation.</p>',
'state_name': 'Intro-Rename',
'language_code': 'de',
'content_id': 'hint_1',
'cmd': 'add_written_translation',
'data_format': 'html'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'rule_input_2',
'translation_html': '<p>Translation Rule Input.</p>',
'state_name': 'Intro-Rename',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'feedback_1',
'translation_html': '<p>Translation Feedback.</p>',
'state_name': 'Intro-Rename',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'solution',
'translation_html': '<p>Translation Solution.</p>',
'state_name': 'Intro-Rename',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'new_state_name': 'Introduction',
'cmd': 'rename_state',
'old_state_name': 'Intro-Rename'
})]
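# The state is renamed and then renamed back within the same
# draft, which checks that rename_state commands do not block the
# translation changes from being merged.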
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 3, change_list_4)
self.assertEqual(changes_are_mergeable, True)
# Adding translations to the second state to check
# that they can be merged even for the same property.
change_list_5 = [exp_domain.ExplorationChange({
'content_html': 'N/A',
'translation_html': '<p>State 2 Content Translation.</p>',
'state_name': 'End',
'language_code': 'de',
'content_id': 'content',
'cmd': 'add_written_translation',
'data_format': 'html'
})]
changes_are_mergeable_1 = exp_services.are_changes_mergeable(
self.EXP_0_ID, 3, change_list_5)
self.assertEqual(changes_are_mergeable_1, True)
# Add changes to different content of the first state to
# check that translation changes to some properties don't
# affect content changes to other properties.
change_list_6 = [exp_domain.ExplorationChange({
'old_value': {
'rows': {
'value': 1
},
'placeholder': {
'value': {
'unicode_str': 'Placeholder',
'content_id': 'ca_placeholder_0'
}
}
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'new_value': {
'rows': {
'value': 1
},
'placeholder': {
'value': {
'unicode_str': 'Placeholder Changed.',
'content_id': 'ca_placeholder_0'
}
}
}
}), exp_domain.ExplorationChange({
'property_name': 'default_outcome',
'old_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': 'Feedback 1.'
},
'param_changes': [
],
'dest': 'End'
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': '<p>Feedback 2.</p>'
},
'param_changes': [
],
'dest': 'End'
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_6,
'Changing Customization Args Placeholder in First State.')
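# change_list_6 only edited the first state, so the translation
# for the second state's content in change_list_5 should still be
# mergeable against the new version.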
changes_are_mergeable_3 = exp_services.are_changes_mergeable(
self.EXP_0_ID, 4, change_list_5)
self.assertEqual(changes_are_mergeable_3, True)

def test_changes_are_not_mergeable_when_translations_changes_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding content, feedback and solutions so that
# translations can be added later on.
change_list = [exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'old_value': [],
'state_name': 'Introduction',
'new_value': [{
'rule_specs': [{
'rule_type': 'StartsWith',
'inputs': {
'x': {
'contentId': 'rule_input_2',
'normalizedStrSet': [
'Hello',
'Hola'
]
}
}
}],
'tagged_skill_misconception_id': None,
'outcome': {
'labelled_as_correct': False,
'feedback': {
'content_id': 'feedback_1',
'html': '<p>Feedback</p>'
},
'missing_prerequisite_skill_id': None,
'dest': 'End',
'param_changes': [],
'refresher_exploration_id': None
},
'training_data': []
}]
}), exp_domain.ExplorationChange({
'property_name': 'content',
'old_value': {
'content_id': 'content',
'html': ''
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'content_id': 'content',
'html': '<p>First State Content.</p>'
}
}), exp_domain.ExplorationChange({
'property_name': 'widget_customization_args',
'old_value': {
'placeholder': {
'value': {
'unicode_str': '',
'content_id': 'ca_placeholder_0'
}
},
'rows': {
'value': 1
}
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'placeholder': {
'value': {
'unicode_str': 'Placeholder',
'content_id': 'ca_placeholder_0'
}
},
'rows': {
'value': 1
}
}
}), exp_domain.ExplorationChange({
'property_name': 'default_outcome',
'old_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'param_changes': [
],
'dest': 'End'
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': '<p>Feedback 1.</p>'
},
'param_changes': [
],
'dest': 'End'
}
}), exp_domain.ExplorationChange({
'property_name': 'hints',
'old_value': [
],
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': [
{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>Hint 1.</p>'
}
}
]
}), exp_domain.ExplorationChange({
'property_name': 'next_content_id_index',
'old_value': 1,
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': 2
}), exp_domain.ExplorationChange({
'property_name': 'solution',
'old_value': None,
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'answer_is_exclusive': False,
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
},
'correct_answer': 'Solution'
}
}), exp_domain.ExplorationChange({
'property_name': 'content',
'old_value': {
'content_id': 'content',
'html': ''
},
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': {
'content_id': 'content',
'html': '<p>Second State Content.</p>'
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added various contents.')
# Adding some translations to the first state.
change_list_2 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'old_value': {
'content_id': 'content',
'html': '<p>First State Content.</p>'
},
'new_value': {
'content_id': 'content',
'html': '<p>Changed First State Content.</p>'
},
'property_name': 'content',
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'content',
'translation_html': '<p>Translation Content.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'default_outcome',
'translation_html': '<p>Translation Feedback 1.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'ca_placeholder_0',
'translation_html': '<p>Translation Placeholder.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'hint_1',
'translation_html': '<p>Translation Hint.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'solution',
'translation_html': '<p>Translation Solution.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'rule_input_2',
'translation_html': '<p>Translation Rule Input.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'new_state_name': 'Intro-Rename',
'cmd': 'rename_state',
'old_state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'language_code': 'de',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'feedback_1',
'translation_html': '<p>Translation Feedback.</p>',
'state_name': 'Intro-Rename',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'new_state_name': 'Introduction',
'cmd': 'rename_state',
'old_state_name': 'Intro-Rename'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Added some translations.')
# Adding translations again to the same content
# of the same state to check that they cannot
# be merged.
change_list_3 = [exp_domain.ExplorationChange({
'language_code': 'bn',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'content',
'translation_html': '<p>Translation Content.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
}), exp_domain.ExplorationChange({
'language_code': 'bn',
'data_format': 'html',
'cmd': 'add_written_translation',
'content_id': 'default_outcome',
'translation_html': '<p>Translation Feedback 1.</p>',
'state_name': 'Introduction',
'content_html': 'N/A'
})]
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_not_mergeable, False)
# Changing the content of the second state to check that
# translation changes cannot be merged in the same
# state if the property that is being translated has
# itself been changed.
change_list_3 = [exp_domain.ExplorationChange({
'state_name': 'End',
'old_value': {
'content_id': 'content',
'html': '<p>Second State Content.</p>'
},
'new_value': {
'content_id': 'content',
'html': '<p>Changed Second State Content.</p>'
},
'property_name': 'content',
'cmd': 'edit_state_property'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_3,
'Changing Content in Second State.')
# Adding translations to the same property in the
# second state to check that they cannot be merged.
change_list_4 = [exp_domain.ExplorationChange({
'content_html': 'N/A',
'translation_html': '<p>State 2 Content Translation.</p>',
'state_name': 'End',
'language_code': 'de',
'content_id': 'content',
'cmd': 'add_written_translation',
'data_format': 'html'
})]
changes_are_not_mergeable_1 = exp_services.are_changes_mergeable(
self.EXP_0_ID, 3, change_list_4)
self.assertEqual(changes_are_not_mergeable_1, False)
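
# A minimal sketch of the conflict rule the test above pins down: two
# change lists collide when they touch the same (state_name, content_id)
# translation slot. This is an assumption about the intent of the check,
# not the actual logic inside exp_services.
def _translations_conflict_sketch(applied_changes, draft_changes):
    """True if both lists add a translation for the same content slot."""
    touched = {
        (c.state_name, c.content_id)
        for c in applied_changes if c.cmd == 'add_written_translation'
    }
    return any(
        (c.state_name, c.content_id) in touched
        for c in draft_changes if c.cmd == 'add_written_translation'
    )
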
def test_changes_are_mergeable_when_voiceovers_changes_do_not_conflict(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding content, feedback and solutions so that
# voiceovers can be added later on.
change_list = [exp_domain.ExplorationChange({
'property_name': 'content',
'old_value': {
'content_id': 'content',
'html': ''
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'content_id': 'content',
'html': '<p>First State Content.</p>'
}
}), exp_domain.ExplorationChange({
'property_name': 'widget_customization_args',
'old_value': {
'placeholder': {
'value': {
'unicode_str': '',
'content_id': 'ca_placeholder_0'
}
},
'rows': {
'value': 1
}
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'placeholder': {
'value': {
'unicode_str': 'Placeholder',
'content_id': 'ca_placeholder_0'
}
},
'rows': {
'value': 1
}
}
}), exp_domain.ExplorationChange({
'property_name': 'default_outcome',
'old_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'param_changes': [
],
'dest': 'End'
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': '<p>Feedback 1.</p>'
},
'param_changes': [
],
'dest': 'End'
}
}), exp_domain.ExplorationChange({
'property_name': 'hints',
'old_value': [],
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': [
{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>Hint 1.</p>'
}
}
]
}), exp_domain.ExplorationChange({
'property_name': 'next_content_id_index',
'old_value': 1,
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': 2
}), exp_domain.ExplorationChange({
'property_name': 'solution',
'old_value': None,
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'answer_is_exclusive': False,
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
},
'correct_answer': 'Solution'
}
}), exp_domain.ExplorationChange({
'property_name': 'content',
'old_value': {
'content_id': 'content',
'html': ''
},
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': {
'content_id': 'content',
'html': '<p>Second State Content.</p>'
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added various contents.')
# Adding a change to a field which neither affects
# nor is affected by voiceovers.
change_list_2 = [exp_domain.ExplorationChange({
'cmd': 'edit_state_property',
'state_name': 'Introduction',
'property_name': 'card_is_checkpoint',
'new_value': True
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Added single unrelated change.')
# Adding some voiceovers to the first state.
change_list_3 = [exp_domain.ExplorationChange({
'property_name': 'recorded_voiceovers',
'old_value': {
'voiceovers_mapping': {
'hint_1': {},
'default_outcome': {},
'solution': {},
'ca_placeholder_0': {},
'content': {}
}
},
'state_name': 'Introduction',
'new_value': {
'voiceovers_mapping': {
'hint_1': {},
'default_outcome': {},
'solution': {},
'ca_placeholder_0': {},
'content': {
'en': {
'needs_update': False,
'filename': 'content-en-xrss3z3nso.mp3',
'file_size_bytes': 114938,
'duration_secs': 7.183625
}
}
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'property_name': 'recorded_voiceovers',
'old_value': {
'voiceovers_mapping': {
'hint_1': {},
'default_outcome': {},
'solution': {},
'ca_placeholder_0': {},
'content': {
'en': {
'needs_update': False,
'filename': 'content-en-xrss3z3nso.mp3',
'file_size_bytes': 114938,
'duration_secs': 7.183625
}
}
}
},
'state_name': 'Introduction',
'new_value': {
'voiceovers_mapping': {
'hint_1': {},
'default_outcome': {},
'solution': {},
'ca_placeholder_0': {
'en': {
'needs_update': False,
'filename': 'ca_placeholder_0-en-mfy5l6logg.mp3',
'file_size_bytes': 175542,
'duration_secs': 10.971375
}
},
'content': {
'en': {
'needs_update': False,
'filename': 'content-en-xrss3z3nso.mp3',
'file_size_bytes': 114938,
'duration_secs': 7.183625
}
}
}
},
'cmd': 'edit_state_property'
})]
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_mergeable, True)
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_3,
'Added some voiceovers.')
# Adding voiceovers again to the same first state
# to check whether they can be applied. They will not
# be mergeable because the changes are in the same
# property, i.e. recorded_voiceovers.
change_list_4 = [exp_domain.ExplorationChange({
'property_name': 'recorded_voiceovers',
'cmd': 'edit_state_property',
'old_value': {
'voiceovers_mapping': {
'default_outcome': {},
'solution': {},
'content': {},
'ca_placeholder_0': {},
'hint_1': {}
}
},
'new_value': {
'voiceovers_mapping': {
'default_outcome': {},
'solution': {},
'content': {},
'ca_placeholder_0': {},
'hint_1': {
'en': {
'needs_update': False,
'duration_secs': 30.0669375,
'filename': 'hint_1-en-ajclkw0cnz.mp3',
'file_size_bytes': 481071
}
}
}
},
'state_name': 'Introduction'
})]
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 3, change_list_4)
self.assertEqual(changes_are_mergeable, False)
# Adding voiceovers to the second state to check
# whether they can be applied. They can be merged
# because the changes are in different states.
change_list_5 = [exp_domain.ExplorationChange({
'old_value': {
'voiceovers_mapping': {
'content': {}
}
},
'property_name': 'recorded_voiceovers',
'cmd': 'edit_state_property',
'new_value': {
'voiceovers_mapping': {
'content': {
'en': {
'duration_secs': 10.3183125,
'filename': 'content-en-ar9zhd7edl.mp3',
'file_size_bytes': 165093,
'needs_update': False
}
}
}
},
'state_name': 'End'
})]
changes_are_mergeable_1 = exp_services.are_changes_mergeable(
self.EXP_0_ID, 3, change_list_5)
self.assertEqual(changes_are_mergeable_1, True)
# Changing the content of the first state to check
# that content changes in the first state do not
# affect the voiceover changes in the
# second state.
change_list_6 = [exp_domain.ExplorationChange({
'state_name': 'Introduction',
'old_value': {
'content_id': 'content',
'html': '<p>First State Content.</p>'
},
'new_value': {
'content_id': 'content',
'html': '<p>Changed First State Content.</p>'
},
'property_name': 'content',
'cmd': 'edit_state_property'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_6,
'Changing Content in First State.')
changes_are_mergeable_3 = exp_services.are_changes_mergeable(
self.EXP_0_ID, 4, change_list_5)
self.assertEqual(changes_are_mergeable_3, True)
# Changing the content of the second state to check
# that voiceover changes cannot be merged in the
# same state if the property that can be recorded has
# itself been changed.
change_list_6 = [exp_domain.ExplorationChange({
'state_name': 'End',
'old_value': {
'content_id': 'content',
'html': '<p>Second State Content.</p>'
},
'new_value': {
'content_id': 'content',
'html': '<p>Changed Second State Content.</p>'
},
'property_name': 'content',
'cmd': 'edit_state_property'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_6,
'Changing Content in Second State.')
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 4, change_list_4)
self.assertEqual(changes_are_not_mergeable, False)
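
# For reference, the shape of a recorded_voiceovers value as used above:
# voiceovers_mapping maps each content id to a per-language dict of audio
# metadata. The helper is hypothetical; the field names mirror the test
# data exactly.
def _voiceovers_value_sketch(content_id, filename, size_bytes, duration):
    """Builds a one-entry voiceovers_mapping dict (illustrative helper)."""
    return {
        'voiceovers_mapping': {
            content_id: {
                'en': {
                    'needs_update': False,
                    'filename': filename,
                    'file_size_bytes': size_bytes,
                    'duration_secs': duration
                }
            }
        }
    }
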
def test_changes_are_not_mergeable_when_voiceovers_changes_conflict(self):
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Adding content, feedback and solutions so that
# voiceovers can be added later on.
change_list = [exp_domain.ExplorationChange({
'property_name': 'content',
'old_value': {
'content_id': 'content',
'html': ''
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'content_id': 'content',
'html': '<p>First State Content.</p>'
}
}), exp_domain.ExplorationChange({
'property_name': 'widget_customization_args',
'old_value': {
'placeholder': {
'value': {
'unicode_str': '',
'content_id': 'ca_placeholder_0'
}
},
'rows': {
'value': 1
}
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'placeholder': {
'value': {
'unicode_str': 'Placeholder',
'content_id': 'ca_placeholder_0'
}
},
'rows': {
'value': 1
}
}
}), exp_domain.ExplorationChange({
'property_name': 'default_outcome',
'old_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'param_changes': [
],
'dest': 'End'
},
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'labelled_as_correct': False,
'missing_prerequisite_skill_id': None,
'refresher_exploration_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': '<p>Feedback 1.</p>'
},
'param_changes': [
],
'dest': 'End'
}
}), exp_domain.ExplorationChange({
'property_name': 'hints',
'old_value': [],
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': [
{
'hint_content': {
'content_id': 'hint_1',
'html': '<p>Hint 1.</p>'
}
}
]
}), exp_domain.ExplorationChange({
'property_name': 'next_content_id_index',
'old_value': 1,
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': 2
}), exp_domain.ExplorationChange({
'property_name': 'solution',
'old_value': None,
'state_name': 'Introduction',
'cmd': 'edit_state_property',
'new_value': {
'answer_is_exclusive': False,
'explanation': {
'content_id': 'solution',
'html': '<p>Explanation.</p>'
},
'correct_answer': 'Solution'
}
}), exp_domain.ExplorationChange({
'property_name': 'content',
'old_value': {
'content_id': 'content',
'html': ''
},
'state_name': 'End',
'cmd': 'edit_state_property',
'new_value': {
'content_id': 'content',
'html': '<p>Second State Content.</p>'
}
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Added various contents.')
# Adding some voiceovers to the first state.
change_list_2 = [exp_domain.ExplorationChange({
'property_name': 'recorded_voiceovers',
'old_value': {
'voiceovers_mapping': {
'hint_1': {},
'default_outcome': {},
'solution': {},
'ca_placeholder_0': {},
'content': {}
}
},
'state_name': 'Introduction',
'new_value': {
'voiceovers_mapping': {
'hint_1': {},
'default_outcome': {},
'solution': {},
'ca_placeholder_0': {},
'content': {
'en': {
'needs_update': False,
'filename': 'content-en-xrss3z3nso.mp3',
'file_size_bytes': 114938,
'duration_secs': 7.183625
}
}
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'property_name': 'recorded_voiceovers',
'old_value': {
'voiceovers_mapping': {
'hint_1': {},
'default_outcome': {},
'solution': {},
'ca_placeholder_0': {},
'content': {
'en': {
'needs_update': False,
'filename': 'content-en-xrss3z3nso.mp3',
'file_size_bytes': 114938,
'duration_secs': 7.183625
}
}
}
},
'state_name': 'Introduction',
'new_value': {
'voiceovers_mapping': {
'hint_1': {},
'default_outcome': {},
'solution': {},
'ca_placeholder_0': {
'en': {
'needs_update': False,
'filename': 'ca_placeholder_0-en-mfy5l6logg.mp3',
'file_size_bytes': 175542,
'duration_secs': 10.971375
}
},
'content': {
'en': {
'needs_update': False,
'filename': 'content-en-xrss3z3nso.mp3',
'file_size_bytes': 114938,
'duration_secs': 7.183625
}
}
}
},
'cmd': 'edit_state_property'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Added some voiceovers.')
# Adding voiceovers again to the same first state
# to check whether they can be applied. They will not
# be mergeable because the changes are in the same
# property, i.e. recorded_voiceovers.
change_list_3 = [exp_domain.ExplorationChange({
'property_name': 'recorded_voiceovers',
'cmd': 'edit_state_property',
'old_value': {
'voiceovers_mapping': {
'default_outcome': {},
'solution': {},
'content': {},
'ca_placeholder_0': {},
'hint_1': {}
}
},
'new_value': {
'voiceovers_mapping': {
'default_outcome': {},
'solution': {},
'content': {},
'ca_placeholder_0': {},
'hint_1': {
'en': {
'needs_update': False,
'duration_secs': 30.0669375,
'filename': 'hint_1-en-ajclkw0cnz.mp3',
'file_size_bytes': 481071
}
}
}
},
'state_name': 'Introduction'
})]
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_mergeable, False)
def test_changes_are_not_mergeable_when_state_added_or_deleted(self):
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Changes to the various properties of the first and
# second state.
change_list = [exp_domain.ExplorationChange({
'old_value': 'TextInput',
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': None,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
},
'rows': {
'value': 1
}
},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'new_value': {},
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': 'NumericInput',
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'old_value': {
'requireNonnegativeInput': {
'value': True
}
},
'property_name': 'widget_customization_args',
'new_value': {
'requireNonnegativeInput': {
'value': False
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'old_value': 1,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'new_value': 2,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'new_value': [
{
'tagged_skill_misconception_id': None,
'rule_specs': [
{
'rule_type': 'IsLessThanOrEqualTo',
'inputs': {
'x': 50
}
}
],
'training_data': [],
'outcome': {
'param_changes': [],
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'feedback_1',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
}
}
],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [
{
'hint_content': {
'content_id': 'hint_2',
'html': '<p>Hint.</p>'
}
}
],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': 2,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'new_value': 3,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'content_id': 'content',
'html': 'Congratulations, you have finished!'
},
'cmd': 'edit_state_property',
'property_name': 'content',
'new_value': {
'content_id': 'content',
'html': '<p>2Congratulations, you have finished!</p>'
},
'state_name': 'End'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Changed various properties in both states.')
# Change to an unrelated property to check that
# it can be merged.
change_list_2 = [exp_domain.ExplorationChange({
'old_value': {
'html': '',
'content_id': 'content'
},
'new_value': {
'html': '<p>Hello Aryaman!</p>',
'content_id': 'content'
},
'state_name': 'Introduction',
'property_name': 'content',
'cmd': 'edit_state_property'
})]
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_2)
self.assertEqual(changes_are_mergeable, True)
# Deleting and adding states to check that when any
# state is deleted or added, the changes cannot be
# merged.
change_list_3 = [exp_domain.ExplorationChange({
'new_state_name': 'End-State',
'cmd': 'rename_state',
'old_state_name': 'End'
}), exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'End-State'
}), exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'new_state_name': 'End-State',
'cmd': 'rename_state',
'old_state_name': 'End'
}), exp_domain.ExplorationChange({
'new_state_name': 'End',
'cmd': 'rename_state',
'old_state_name': 'End-State'
}), exp_domain.ExplorationChange({
'old_value': [{
'tagged_skill_misconception_id': None,
'rule_specs': [{
'rule_type': 'IsLessThanOrEqualTo',
'inputs': {
'x': 50
}
}],
'training_data': [],
'outcome': {
'param_changes': [],
'dest': 'Introduction',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'feedback_1',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
}
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'new_value': [{
'tagged_skill_misconception_id': None,
'rule_specs': [{
'rule_type': 'IsLessThanOrEqualTo',
'inputs': {
'x': 50
}
}],
'training_data': [],
'outcome': {
'param_changes': [],
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'feedback_1',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
}
}],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'param_changes': [],
'dest': 'Introduction',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'cmd': 'edit_state_property',
'property_name': 'default_outcome',
'new_value': {
'param_changes': [],
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content',
'new_value': {
'content_id': 'content',
'html': 'Congratulations, you have finished!'
},
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': 'EndExploration',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': {},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'new_value': {
'recommendedExplorationIds': {
'value': []
}
},
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': {
'param_changes': [],
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'cmd': 'edit_state_property',
'property_name': 'default_outcome',
'new_value': None,
'state_name': 'End'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_3,
'Added and deleted states.')
# Checking that old changes which could previously
# be merged can no longer be merged after a state
# has been added or deleted.
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_2)
self.assertEqual(changes_are_not_mergeable, False)
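
# Sketch of the structural guard demonstrated above: once a state has been
# added or deleted between the draft's base version and the latest version,
# per-property merging is abandoned entirely. A hypothetical simplification
# of that rule:
def _states_comparable_sketch(old_state_names, new_state_names):
    """False as soon as any state was added or deleted between versions."""
    return set(old_state_names) == set(new_state_names)
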
def test_changes_are_not_mergeable_when_frontend_version_exceeds_backend_version(self): # pylint: disable=line-too-long
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Changes to the various properties of the first and
# second state.
change_list = [exp_domain.ExplorationChange({
'old_value': 'TextInput',
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': None,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
},
'rows': {
'value': 1
}
},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'new_value': {},
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': 'NumericInput',
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': 1,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'new_value': 2,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'new_value': [
{
'tagged_skill_misconception_id': None,
'rule_specs': [
{
'rule_type': 'IsLessThanOrEqualTo',
'inputs': {
'x': 50
}
}
],
'training_data': [],
'outcome': {
'param_changes': [],
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'feedback_1',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
}
}
],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [
{
'hint_content': {
'content_id': 'hint_2',
'html': '<p>Hint.</p>'
}
}
],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': 2,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'new_value': 3,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'content_id': 'content',
'html': 'Congratulations, you have finished!'
},
'cmd': 'edit_state_property',
'property_name': 'content',
'new_value': {
'content_id': 'content',
'html': '<p>2Congratulations, you have finished!</p>'
},
'state_name': 'End'
})]
# Changes are mergeable when the draft is based on the latest backend version.
changes_are_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list)
self.assertEqual(changes_are_mergeable, True)
# Changes are not mergeable when they claim to be based
# on a version higher than the latest one on the backend.
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 3, change_list)
self.assertEqual(changes_are_not_mergeable, False)
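
# Sketch of the version guard this test exercises: a draft claiming to be
# based on a version newer than the backend's latest cannot be rebased at
# all, so the mergeability check fails fast. Hypothetical simplification:
def _draft_version_valid_sketch(draft_version, latest_backend_version):
    """A draft can only be based on an existing (<= latest) version."""
    return draft_version <= latest_backend_version
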
def test_email_is_sent_to_admin_in_case_of_adding_deleting_state_changes(
self):
self.login(self.OWNER_EMAIL)
with self.swap(feconf, 'CAN_SEND_EMAILS', True):
messages = self._get_sent_email_messages(
feconf.ADMIN_EMAIL_ADDRESS)
self.assertEqual(len(messages), 0)
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
# Changes to the various properties of the first and
# second state.
change_list = [exp_domain.ExplorationChange({
'old_value': 'TextInput',
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': None,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': ''
}
},
'rows': {
'value': 1
}
},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'new_value': {},
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': 'NumericInput',
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'state_name': 'Introduction',
'old_value': {
'requireNonnegativeInput': {
'value': True
}
},
'property_name': 'widget_customization_args',
'new_value': {
'requireNonnegativeInput': {
'value': False
}
},
'cmd': 'edit_state_property'
}), exp_domain.ExplorationChange({
'old_value': 1,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'new_value': 2,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'new_value': [
{
'tagged_skill_misconception_id': None,
'rule_specs': [
{
'rule_type': 'IsLessThanOrEqualTo',
'inputs': {
'x': 50
}
}
],
'training_data': [],
'outcome': {
'param_changes': [],
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'feedback_1',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
}
}
],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': [],
'cmd': 'edit_state_property',
'property_name': 'hints',
'new_value': [
{
'hint_content': {
'content_id': 'hint_2',
'html': '<p>Hint.</p>'
}
}
],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': 2,
'cmd': 'edit_state_property',
'property_name': 'next_content_id_index',
'new_value': 3,
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'content_id': 'content',
'html': 'Congratulations, you have finished!'
},
'cmd': 'edit_state_property',
'property_name': 'content',
'new_value': {
'content_id': 'content',
'html': '<p>2Congratulations, you have finished!</p>'
},
'state_name': 'End'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Changed various properties in both states.')
change_list_2 = [exp_domain.ExplorationChange({
'new_state_name': 'End-State',
'cmd': 'rename_state',
'old_state_name': 'End'
}), exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'End-State'
}), exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'new_state_name': 'End-State',
'cmd': 'rename_state',
'old_state_name': 'End'
}), exp_domain.ExplorationChange({
'new_state_name': 'End',
'cmd': 'rename_state',
'old_state_name': 'End-State'
}), exp_domain.ExplorationChange({
'old_value': [{
'tagged_skill_misconception_id': None,
'rule_specs': [{
'rule_type': 'IsLessThanOrEqualTo',
'inputs': {
'x': 50
}
}],
'training_data': [],
'outcome': {
'param_changes': [],
'dest': 'Introduction',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'feedback_1',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
}
}],
'cmd': 'edit_state_property',
'property_name': 'answer_groups',
'new_value': [{
'tagged_skill_misconception_id': None,
'rule_specs': [{
'rule_type': 'IsLessThanOrEqualTo',
'inputs': {
'x': 50
}
}],
'training_data': [],
'outcome': {
'param_changes': [],
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'feedback_1',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
}
}],
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'param_changes': [],
'dest': 'Introduction',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'cmd': 'edit_state_property',
'property_name': 'default_outcome',
'new_value': {
'param_changes': [],
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'state_name': 'Introduction'
}), exp_domain.ExplorationChange({
'old_value': {
'content_id': 'content',
'html': ''
},
'cmd': 'edit_state_property',
'property_name': 'content',
'new_value': {
'content_id': 'content',
'html': 'Congratulations, you have finished!'
},
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': None,
'cmd': 'edit_state_property',
'property_name': 'widget_id',
'new_value': 'EndExploration',
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': {},
'cmd': 'edit_state_property',
'property_name': 'widget_customization_args',
'new_value': {
'recommendedExplorationIds': {
'value': []
}
},
'state_name': 'End'
}), exp_domain.ExplorationChange({
'old_value': {
'param_changes': [],
'dest': 'End',
'missing_prerequisite_skill_id': None,
'feedback': {
'content_id': 'default_outcome',
'html': ''
},
'labelled_as_correct': False,
'refresher_exploration_id': None
},
'cmd': 'edit_state_property',
'property_name': 'default_outcome',
'new_value': None,
'state_name': 'End'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Added and deleted states.')
change_list_3 = [exp_domain.ExplorationChange({
'old_value': {
'html': '',
'content_id': 'content'
},
'new_value': {
'html': '<p>Hello Aryaman!</p>',
'content_id': 'content'
},
'state_name': 'Introduction',
'property_name': 'content',
'cmd': 'edit_state_property'
})]
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 1, change_list_3)
self.assertEqual(changes_are_not_mergeable, False)
change_list_3_dict = [{
'cmd': 'edit_state_property',
'property_name': 'content',
'state_name': 'Introduction',
'new_value': {
'html': '<p>Hello Aryaman!</p>',
'content_id': 'content'
},
'old_value': {
'html': '',
'content_id': 'content'
},
}]
expected_email_html_body = (
'(Sent from dev-project-id)<br/><br/>'
'Hi Admin,<br><br>'
'Some draft changes were rejected in exploration %s because '
'the changes were conflicting and could not be saved. Please '
'see the rejected change list below:<br>'
'Discarded change list: %s <br><br>'
'Frontend Version: %s<br>'
'Backend Version: %s<br><br>'
'Thanks!' % (self.EXP_0_ID, change_list_3_dict, 1, 3)
)
messages = self._get_sent_email_messages(
feconf.ADMIN_EMAIL_ADDRESS)
self.assertEqual(len(messages), 1)
self.assertEqual(messages[0].html, expected_email_html_body)
def test_email_is_sent_to_admin_in_case_of_state_renames_changes_conflict(
self):
self.login(self.OWNER_EMAIL)
with self.swap(feconf, 'CAN_SEND_EMAILS', True):
messages = self._get_sent_email_messages(
feconf.ADMIN_EMAIL_ADDRESS)
self.assertEqual(len(messages), 0)
self.save_new_valid_exploration(
self.EXP_0_ID, self.owner_id, end_state_name='End')
rights_manager.publish_exploration(self.owner, self.EXP_0_ID)
change_list = [exp_domain.ExplorationChange({
'old_value': {
'html': '',
'content_id': 'content'
},
'new_value': {
'html': '<p>End State</p>',
'content_id': 'content'
},
'state_name': 'End',
'property_name': 'content',
'cmd': 'edit_state_property'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list,
'Changed various properties in both states.')
# State name changed.
change_list_2 = [exp_domain.ExplorationChange({
'new_state_name': 'End-State',
'cmd': 'rename_state',
'old_state_name': 'End'
})]
exp_services.update_exploration(
self.owner_id, self.EXP_0_ID, change_list_2,
'Renamed the end state.')
change_list_3 = [exp_domain.ExplorationChange({
'old_value': {
'html': 'End State',
'content_id': 'content'
},
'new_value': {
'html': '<p>End State Changed</p>',
'content_id': 'content'
},
'state_name': 'End',
'property_name': 'content',
'cmd': 'edit_state_property'
})]
changes_are_not_mergeable = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_3)
self.assertEqual(changes_are_not_mergeable, False)
change_list_3_dict = [{
'cmd': 'edit_state_property',
'property_name': 'content',
'state_name': 'End',
'new_value': {
'html': '<p>End State Changed</p>',
'content_id': 'content'
},
'old_value': {
'html': 'End State',
'content_id': 'content'
},
}]
expected_email_html_body = (
'(Sent from dev-project-id)<br/><br/>'
'Hi Admin,<br><br>'
'Some draft changes were rejected in exploration %s because '
'the changes were conflicting and could not be saved. Please '
'see the rejected change list below:<br>'
'Discarded change list: %s <br><br>'
'Frontend Version: %s<br>'
'Backend Version: %s<br><br>'
'Thanks!' % (self.EXP_0_ID, change_list_3_dict, 2, 3)
)
messages = self._get_sent_email_messages(
feconf.ADMIN_EMAIL_ADDRESS)
self.assertEqual(len(messages), 1)
self.assertEqual(expected_email_html_body, messages[0].html)
# Add a translation after state renames.
change_list_4 = [exp_domain.ExplorationChange({
'content_html': 'N/A',
'translation_html': '<p>State 2 Content Translation.</p>',
'state_name': 'End',
'language_code': 'de',
'content_id': 'content',
'cmd': 'add_written_translation',
'data_format': 'html'
})]
changes_are_not_mergeable_2 = exp_services.are_changes_mergeable(
self.EXP_0_ID, 2, change_list_4)
self.assertEqual(changes_are_not_mergeable_2, False)
change_list_4_dict = [{
'cmd': 'add_written_translation',
'state_name': 'End',
'content_id': 'content',
'language_code': 'de',
'content_html': 'N/A',
'translation_html': '<p>State 2 Content Translation.</p>',
'data_format': 'html'
}]
expected_email_html_body_2 = (
'(Sent from dev-project-id)<br/><br/>'
'Hi Admin,<br><br>'
'Some draft changes were rejected in exploration %s because '
'the changes were conflicting and could not be saved. Please '
'see the rejected change list below:<br>'
'Discarded change list: %s <br><br>'
'Frontend Version: %s<br>'
'Backend Version: %s<br><br>'
'Thanks!' % (self.EXP_0_ID, change_list_4_dict, 2, 3)
)
messages = self._get_sent_email_messages(
feconf.ADMIN_EMAIL_ADDRESS)
self.assertEqual(len(messages), 2)
self.assertEqual(expected_email_html_body_2, messages[1].html)
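
# The two expected bodies above share a single template. A helper that
# renders it (hypothetical, but mirroring the exact strings asserted in
# these tests) keeps the interpolation order -- exploration id, discarded
# change list, frontend version, backend version -- in one place.
def _expected_rejection_email_sketch(exp_id, changes, frontend_v, backend_v):
    """Renders the admin rejection email body asserted in these tests."""
    return (
        '(Sent from dev-project-id)<br/><br/>'
        'Hi Admin,<br><br>'
        'Some draft changes were rejected in exploration %s because '
        'the changes were conflicting and could not be saved. Please '
        'see the rejected change list below:<br>'
        'Discarded change list: %s <br><br>'
        'Frontend Version: %s<br>'
        'Backend Version: %s<br><br>'
        'Thanks!' % (exp_id, changes, frontend_v, backend_v)
    )
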
class TestExplorationBasicUpdateFunctions(test_utils.GenericTestBase):
"""Tests various update functions"""
def test_exploration_update_language_code(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.update_language_code('en')
self.assertEqual('en', exploration.language_code)
exploration.update_language_code('f')
self.assertEqual('f', exploration.language_code)
def test_exploration_update_blurb(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.update_blurb('hi')
self.assertEqual('hi', exploration.blurb)
exploration.update_blurb('blurb')
self.assertEqual('blurb', exploration.blurb)
def test_exploration_update_author_notes(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.update_author_notes('note 1')
self.assertEqual('note 1', exploration.author_notes)
exploration.update_author_notes('note 2')
self.assertEqual('note 2', exploration.author_notes)
def test_exploration_update_param_specs(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
self.assertEqual(0, len(exploration.param_specs))
params = {
'param1': {'obj_type': ''}
}
exploration.update_param_specs(params)
self.assertEqual(1, len(exploration.param_specs))
params = {
'param1': {'obj_type': ''},
'param2': {'obj_type': ''}
}
exploration.update_param_specs(params)
self.assertEqual(2, len(exploration.param_specs))
def test_exploration_update_param_changes(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
params = [0, 1]
exploration.update_param_changes(params)
self.assertEqual(params, exploration.param_changes)
params2 = [0, 1, 2]
exploration.update_param_changes(params2)
self.assertEqual(params2, exploration.param_changes)
def test_exploration_update_correctness_feedback_enabled(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
exploration.update_correctness_feedback_enabled(True)
self.assertEqual(True, exploration.correctness_feedback_enabled)
exploration.update_correctness_feedback_enabled(False)
self.assertEqual(False, exploration.correctness_feedback_enabled)
def test_update_states_from_model_with_schema_43(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
mydict = {'states_schema_version': 43, 'states': {'state1': {}}}
exploration.update_states_from_model(mydict, 43, 'init')
self.assertEqual(mydict['states_schema_version'], 44)
def test_update_states_from_model_with_not_schema_43(self):
exploration = self.save_new_valid_exploration(
'exp_id', 'user@example.com', title='', category='',
objective='', end_state_name='End')
mydict = {'states_schema_version': 44, 'states': {'state1': {}}}
exploration.update_states_from_model(mydict, 44, 'init')
self.assertEqual(mydict['states_schema_version'], 45)
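
# Contract relied on by the two tests above: each call migrates the states
# dict by exactly one schema version, so converting from version N leaves
# states_schema_version at N + 1. A hypothetical driver that migrates up
# to a target version would simply loop:
def _migrate_states_sketch(exploration, versioned_states, target, exp_id):
    """Applies single-step conversions until the target version is hit."""
    while versioned_states['states_schema_version'] < target:
        exploration.update_states_from_model(
            versioned_states,
            versioned_states['states_schema_version'],
            exp_id)
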
class TestExplorationCommitLogEntry(test_utils.GenericTestBase):
"""Tests exploration commit log entry init function"""
def test_ecle_init(self):
ecle = exp_domain.ExplorationCommitLogEntry(
datetime.datetime.now(), datetime.datetime.now(), 1, 2,
'new', 'msg', '-f', '2.0',
'good', 'yes',
'no')
mydict = ecle.to_dict()
self.assertEqual(mydict['version'], '2.0')
self.assertEqual(mydict['exploration_id'], 2)
class TestVersionedExplorationInteractionIdsMapping(test_utils.GenericTestBase):
"""Tests versioned exploration interaction ids mapping init function"""
def test_veiim_init(self):
veiim = exp_domain.VersionedExplorationInteractionIdsMapping('2.0', {})
self.assertEqual(veiim.version, '2.0')
class UnitTestExpUtil(test_utils.GenericTestBase):
"""Tests clean_math_expression for converting math strings"""
def test_clean_math_expression_with_trig(self):
res = exp_domain.clean_math_expression('cos^2(x)')
self.assertEqual(res, '(cos(x))^2')
res = exp_domain.clean_math_expression('cosx + 1')
self.assertEqual(res, 'cos(x) + 1')
res = exp_domain.clean_math_expression('cos(x)^2')
self.assertEqual(res, 'cos(x)^2')
res = exp_domain.clean_math_expression('sin^2(x)')
self.assertEqual(res, '(sin(x))^2')
def test_clean_math_expression_without_trig(self):
res = exp_domain.clean_math_expression('\u03bb')
self.assertEqual(res, 'lambda')
res = exp_domain.clean_math_expression('\u03c9')
self.assertEqual(res, 'omega')
res = exp_domain.clean_math_expression('2 \\cdot 2')
self.assertEqual(res, '2 * 2')
res = exp_domain.clean_math_expression('1,2')
self.assertEqual(res, '1.2')
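
# The normalizations asserted above, summarized: implicit trig arguments
# are parenthesized (cosx -> cos(x)), prefix exponents move outside the
# call (cos^2(x) -> (cos(x))^2), Greek letters are spelled out, \cdot
# becomes *, and decimal commas become points. A regex-based sketch of the
# last three rules (an assumption about the approach, not the real code):
import re

def _clean_math_sketch(expression):
    """Applies the non-trigonometric substitutions shown in the test."""
    expression = expression.replace(u'\u03bb', 'lambda')
    expression = expression.replace(u'\u03c9', 'omega')
    expression = expression.replace('\\cdot', '*')
    return re.sub(r'(\d),(\d)', r'\1.\2', expression)
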

# ---- src/tests/control/test_orders.py (repo: fakegit/pretix, rev b6e9e64, license: Apache-2.0) ----

] | null | null | null | #
# This file is part of pretix (Community Edition).
#
# Copyright (C) 2014-2020 Raphael Michel and contributors
# Copyright (C) 2020-2021 rami.io GmbH and contributors
#
# This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation in version 3 of the License.
#
# ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are
# applicable granting you additional permissions and placing additional restrictions on your usage of this software.
# Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive
# this file, see <https://pretix.eu/about/en/license>.
#
# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along with this program. If not, see
# <https://www.gnu.org/licenses/>.
#
# This file is based on an earlier version of pretix which was released under the Apache License 2.0. The full text of
# the Apache License 2.0 can be obtained at <http://www.apache.org/licenses/LICENSE-2.0>.
#
# This file may have since been changed and any changes are released under the terms of AGPLv3 as described above. A
# full history of changes and contributors is available at <https://github.com/pretix/pretix>.
#
# This file contains Apache-licensed contributions copyrighted by: Daniel, Flavia Bastos, Jahongir
#
# Unless required by applicable law or agreed to in writing, software distributed under the Apache License 2.0 is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under the License.
from datetime import timedelta
from decimal import Decimal
from unittest import mock
import pytest
from bs4 import BeautifulSoup
from django.core import mail
from django.utils.timezone import now
from django_countries.fields import Country
from django_scopes import scopes_disabled
from tests.base import SoupTest
from tests.plugins.stripe.test_provider import MockedCharge
from pretix.base.models import (
Event, GiftCard, InvoiceAddress, Item, Order, OrderFee, OrderPayment,
OrderPosition, OrderRefund, Organizer, Question, QuestionAnswer, Quota,
Team, User,
)
from pretix.base.payment import PaymentException
from pretix.base.services.invoices import (
generate_cancellation, generate_invoice,
)
@pytest.fixture
def env():
o = Organizer.objects.create(name='Dummy', slug='dummy')
event = Event.objects.create(
organizer=o, name='Dummy', slug='dummy',
date_from=now(), plugins='pretix.plugins.banktransfer,pretix.plugins.stripe,tests.testdummy'
)
event.settings.set('ticketoutput_testdummy__enabled', True)
user = User.objects.create_user('dummy@dummy.dummy', 'dummy')
t = Team.objects.create(organizer=o, can_view_orders=True, can_change_orders=True)
t.members.add(user)
t.limit_events.add(event)
o = Order.objects.create(
code='FOO', event=event, email='dummy@dummy.test',
status=Order.STATUS_PENDING,
datetime=now(), expires=now() + timedelta(days=10),
total=14, locale='en'
)
o.payments.create(
amount=o.total, provider='banktransfer', state=OrderPayment.PAYMENT_STATE_PENDING
)
ticket = Item.objects.create(event=event, name='Early-bird ticket',
category=None, default_price=23,
admission=True)
event.settings.set('attendee_names_asked', True)
event.settings.set('locales', ['en', 'de'])
OrderPosition.objects.create(
order=o,
item=ticket,
variation=None,
price=Decimal("14"),
attendee_name_parts={'full_name': "Peter", "_scheme": "full"}
)
OrderPosition.objects.create(
order=o,
item=ticket,
variation=None,
price=Decimal("14"),
canceled=True,
attendee_name_parts={'full_name': "Lukas Gelöscht", "_scheme": "full"}
)
return event, user, o, ticket
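
# Usage note: the fixture returns the tuple (event, user, order, ticket).
# The tests below index into it positionally (env[0] is the event, env[2]
# the order, env[3] the ticket); unpacking it once is equivalent and reads
# more clearly, as this small sketch shows:
def _unpack_env(env):
    """Gives the positional fixture tuple readable names (sketch)."""
    event, user, order, ticket = env
    return event, user, order, ticket
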
@pytest.mark.django_db
def test_order_list(client, env):
with scopes_disabled():
otherticket = Item.objects.create(event=env[0], name='Early-bird ticket',
category=None, default_price=23,
admission=True)
client.login(email='dummy@dummy.dummy', password='dummy')
response = client.get('/control/event/dummy/dummy/orders/')
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?query=peter')
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?query=hans')
assert 'FOO' not in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?query=dummy')
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?status=p')
assert 'FOO' not in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?status=n')
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?status=ne')
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?item=%s' % otherticket.id)
assert 'FOO' not in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?item=%s' % env[3].id)
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?provider=free')
assert 'FOO' not in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?provider=banktransfer')
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?status=o')
assert 'FOO' not in response.content.decode()
env[2].expires = now() - timedelta(days=10)
env[2].save()
response = client.get('/control/event/dummy/dummy/orders/?status=o')
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?status=pa')
assert 'FOO' not in response.content.decode()
env[2].require_approval = True
env[2].save()
response = client.get('/control/event/dummy/dummy/orders/?status=pa')
assert 'FOO' in response.content.decode()
with scopes_disabled():
q = Question.objects.create(event=env[0], question="Q", type="N", required=True)
q.items.add(env[3])
op = env[2].positions.first()
qa = QuestionAnswer.objects.create(question=q, orderposition=op, answer="12")
response = client.get('/control/event/dummy/dummy/orders/?question=%d&answer=12' % q.pk)
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?question=%d&answer=13' % q.pk)
assert 'FOO' not in response.content.decode()
q.type = "C"
q.save()
with scopes_disabled():
qo1 = q.options.create(answer="Foo")
qo2 = q.options.create(answer="Bar")
qa.options.add(qo1)
response = client.get('/control/event/dummy/dummy/orders/?question=%d&answer=%d' % (q.pk, qo1.pk))
assert 'FOO' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?question=%d&answer=%d' % (q.pk, qo2.pk))
assert 'FOO' not in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/?status=testmode')
assert 'FOO' not in response.content.decode()
assert 'TEST MODE' not in response.content.decode()
env[2].testmode = True
env[2].save()
response = client.get('/control/event/dummy/dummy/orders/?status=testmode')
assert 'FOO' in response.content.decode()
assert 'TEST MODE' in response.content.decode()
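
# The order-list filters exercised above, gathered as data. The parameter
# names come straight from the querystrings in this test; the inline notes
# are assumptions about their meaning, not documented API.
ORDER_LIST_FILTER_EXAMPLES = {
    'query': 'peter',             # free-text match on names, emails, codes
    'status': 'n',                # p, n, ne, o, pa or testmode, as above
    'item': '<item pk>',          # placeholder for an item primary key
    'provider': 'banktransfer',   # payment provider identifier
    'question': '<question pk>',  # combined with an answer=... parameter
}
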
@pytest.mark.django_db
def test_order_detail(client, env):
client.login(email='dummy@dummy.dummy', password='dummy')
response = client.get('/control/event/dummy/dummy/orders/FOO/')
assert 'Early-bird' in response.content.decode()
assert 'Peter' in response.content.decode()
assert 'Lukas Gelöscht' in response.content.decode()
assert 'TEST MODE' not in response.content.decode()
@pytest.mark.django_db
def test_order_detail_show_test_mode(client, env):
env[2].testmode = True
env[2].save()
client.login(email='dummy@dummy.dummy', password='dummy')
response = client.get('/control/event/dummy/dummy/orders/FOO/')
assert 'TEST MODE' in response.content.decode()
@pytest.mark.django_db
def test_order_set_contact(client, env):
with scopes_disabled():
q = Quota.objects.create(event=env[0], size=0)
q.items.add(env[3])
client.login(email='dummy@dummy.dummy', password='dummy')
client.post('/control/event/dummy/dummy/orders/FOO/contact', {
'email': 'admin@rami.io'
})
with scopes_disabled():
o = Order.objects.get(id=env[2].id)
assert o.email == 'admin@rami.io'
@pytest.mark.django_db
def test_order_set_locale(client, env):
with scopes_disabled():
q = Quota.objects.create(event=env[0], size=0)
q.items.add(env[3])
client.login(email='dummy@dummy.dummy', password='dummy')
client.post('/control/event/dummy/dummy/orders/FOO/locale', {
'locale': 'de'
})
with scopes_disabled():
o = Order.objects.get(id=env[2].id)
assert o.locale == 'de'
@pytest.mark.django_db
def test_order_set_locale_with_invalid_locale_value(client, env):
with scopes_disabled():
q = Quota.objects.create(event=env[0], size=0)
q.items.add(env[3])
client.login(email='dummy@dummy.dummy', password='dummy')
client.post('/control/event/dummy/dummy/orders/FOO/locale', {
'locale': 'fr'
})
with scopes_disabled():
o = Order.objects.get(id=env[2].id)
assert o.locale == 'en'
@pytest.mark.django_db
def test_order_set_comment(client, env):
with scopes_disabled():
q = Quota.objects.create(event=env[0], size=0)
q.items.add(env[3])
client.login(email='dummy@dummy.dummy', password='dummy')
client.post('/control/event/dummy/dummy/orders/FOO/comment', {
'comment': 'Foo'
})
with scopes_disabled():
o = Order.objects.get(id=env[2].id)
assert o.comment == 'Foo'
@pytest.mark.django_db
def test_order_transition_to_expired_success(client, env):
    with scopes_disabled():
        q = Quota.objects.create(event=env[0], size=0)
        q.items.add(env[3])
    client.login(email='dummy@dummy.dummy', password='dummy')
    client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'e'
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_EXPIRED


@pytest.mark.django_db
def test_order_transition_to_paid_in_time_success(client, env):
    with scopes_disabled():
        q = Quota.objects.create(event=env[0], size=0)
        q.items.add(env[3])
    client.login(email='dummy@dummy.dummy', password='dummy')
    client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'amount': str(env[2].pending_sum),
        'payment_date': now().date().isoformat(),
        'status': 'p'
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_PAID


@pytest.mark.django_db
def test_order_transition_to_paid_expired_quota_left(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_EXPIRED
        o.save()
        q = Quota.objects.create(event=env[0], size=10)
        q.items.add(env[3])
    client.login(email='dummy@dummy.dummy', password='dummy')
    res = client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'p',
        'payment_date': now().date().isoformat(),
        'amount': str(o.pending_sum),
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert res.status_code < 400
        assert o.status == Order.STATUS_PAID


@pytest.mark.django_db
def test_order_approve(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_PENDING
        o.require_approval = True
        o.save()
        q = Quota.objects.create(event=env[0], size=10)
        q.items.add(env[3])
    client.login(email='dummy@dummy.dummy', password='dummy')
    res = client.post('/control/event/dummy/dummy/orders/FOO/approve', {
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert res.status_code < 400
        assert o.status == Order.STATUS_PENDING
        assert not o.require_approval


@pytest.mark.django_db
def test_order_deny(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_PENDING
        o.require_approval = True
        o.save()
        q = Quota.objects.create(event=env[0], size=10)
        q.items.add(env[3])
    client.login(email='dummy@dummy.dummy', password='dummy')
    res = client.post('/control/event/dummy/dummy/orders/FOO/deny', {
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert res.status_code < 400
        assert o.status == Order.STATUS_CANCELED
        assert o.require_approval


@pytest.mark.django_db
def test_order_delete_require_testmode(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    res = client.get('/control/event/dummy/dummy/orders/FOO/delete', {}, follow=True)
    assert 'alert-danger' in res.content.decode()
    assert 'Only orders created in test mode can be deleted' in res.content.decode()
    client.post('/control/event/dummy/dummy/orders/FOO/delete', {}, follow=True)
    with scopes_disabled():
        assert Order.objects.get(id=env[2].id)


@pytest.mark.django_db
def test_order_delete(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.testmode = True
        o.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    client.post('/control/event/dummy/dummy/orders/FOO/delete', {}, follow=True)
    with scopes_disabled():
        assert not Order.objects.filter(id=env[2].id).exists()
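

# Each parametrized case below first issues a GET (presumably rendering the
# confirmation page) and then POSTs the transition; an invalid transition
# must leave the order in its old status.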
@pytest.mark.django_db
@pytest.mark.parametrize("process", [
    # (Old status, new status, success expected)
    (Order.STATUS_CANCELED, Order.STATUS_PAID, False),
    (Order.STATUS_CANCELED, Order.STATUS_PENDING, False),
    (Order.STATUS_CANCELED, Order.STATUS_EXPIRED, False),
    (Order.STATUS_PAID, Order.STATUS_PENDING, False),
    (Order.STATUS_PAID, Order.STATUS_CANCELED, True),
    (Order.STATUS_PAID, Order.STATUS_EXPIRED, False),
    (Order.STATUS_PENDING, Order.STATUS_CANCELED, True),
    (Order.STATUS_PENDING, Order.STATUS_PAID, True),
    (Order.STATUS_PENDING, Order.STATUS_EXPIRED, True),
])
def test_order_transition(client, env, process):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = process[0]
        o.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    client.get('/control/event/dummy/dummy/orders/FOO/transition?status=' + process[1])
    client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'amount': str(o.pending_sum),
        'payment_date': now().date().isoformat(),
        'status': process[1]
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        if process[2]:
            assert o.status == process[1]
        else:
            assert o.status == process[0]


@pytest.mark.django_db
def test_order_cancel_free(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_PAID
        o.total = Decimal('0.00')
        o.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    client.get('/control/event/dummy/dummy/orders/FOO/transition?status=c')
    client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'c'
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_CANCELED
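

# Cancelling with a 'cancellation_fee' keeps the order alive: positions are
# soft-canceled (gone from positions, still in all_positions), a cancellation
# fee replaces the total, and the fee may not exceed what was already paid.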
@pytest.mark.django_db
def test_order_cancel_paid_keep_fee(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.payments.create(state=OrderPayment.PAYMENT_STATE_CONFIRMED, amount=o.total)
        o.status = Order.STATUS_PAID
        o.save()
        tr7 = o.event.tax_rules.create(rate=Decimal('7.00'))
        o.event.settings.tax_rate_default = tr7
    client.login(email='dummy@dummy.dummy', password='dummy')
    client.get('/control/event/dummy/dummy/orders/FOO/transition?status=c')
    client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'c',
        'cancellation_fee': '6.00'
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert not o.positions.exists()
        assert o.all_positions.exists()
        f = o.fees.get()
        assert f.fee_type == OrderFee.FEE_TYPE_CANCELLATION
        assert f.value == Decimal('6.00')
        assert f.tax_value == Decimal('0.39')
        assert f.tax_rate == Decimal('7')
        assert f.tax_rule == tr7
        assert o.status == Order.STATUS_PAID
        assert o.total == Decimal('6.00')
        assert o.pending_sum == Decimal('-8.00')


@pytest.mark.django_db
def test_order_cancel_pending_keep_fee(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.payments.create(state=OrderPayment.PAYMENT_STATE_CONFIRMED, amount=Decimal('8.00'))
        o.status = Order.STATUS_PENDING
        o.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    client.get('/control/event/dummy/dummy/orders/FOO/transition?status=c')
    client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'c',
        'cancellation_fee': '6.00'
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert not o.positions.exists()
        assert o.all_positions.exists()
        f = o.fees.get()
        assert f.fee_type == OrderFee.FEE_TYPE_CANCELLATION
        assert f.value == Decimal('6.00')
        assert o.status == Order.STATUS_PAID
        assert o.total == Decimal('6.00')
        assert o.pending_sum == Decimal('-2.00')


@pytest.mark.django_db
def test_order_cancel_pending_fee_too_high(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.payments.create(state=OrderPayment.PAYMENT_STATE_CONFIRMED, amount=Decimal('4.00'))
        o.status = Order.STATUS_PENDING
        o.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    client.get('/control/event/dummy/dummy/orders/FOO/transition?status=c')
    client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'c',
        'cancellation_fee': '6.00'
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.positions.exists()
        assert not o.fees.exists()
        assert o.status == Order.STATUS_PENDING
        assert o.total == Decimal('14.00')


@pytest.mark.django_db
def test_order_cancel_unpaid_no_fees_allowed(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    client.get('/control/event/dummy/dummy/orders/FOO/transition?status=c')
    client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'c',
        'cancellation_fee': '6.00'
    })
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.positions.exists()
        assert not o.fees.exists()
        assert o.status == Order.STATUS_CANCELED
        assert o.total == Decimal('14.00')
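

# Manual invoice creation is gated by the 'invoice_generate' event setting
# and refused when an invoice already exists.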
@pytest.mark.django_db
def test_order_invoice_create_forbidden(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    env[0].settings.set('invoice_generate', 'no')
    response = client.post('/control/event/dummy/dummy/orders/FOO/invoice', {}, follow=True)
    assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_order_invoice_create_duplicate(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        generate_invoice(env[2])
    env[0].settings.set('invoice_generate', 'admin')
    response = client.post('/control/event/dummy/dummy/orders/FOO/invoice', {}, follow=True)
    assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_order_invoice_create_ok(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    env[0].settings.set('invoice_generate', 'admin')
    response = client.post('/control/event/dummy/dummy/orders/FOO/invoice', {}, follow=True)
    assert 'alert-success' in response.content.decode()
    with scopes_disabled():
        assert env[2].invoices.exists()


@pytest.mark.django_db
def test_order_invoice_regenerate(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        i = generate_invoice(env[2])
        InvoiceAddress.objects.create(name_parts={'full_name': 'Foo', "_scheme": "full"}, order=env[2])
    env[0].settings.set('invoice_generate', 'admin')
    response = client.post('/control/event/dummy/dummy/orders/FOO/invoices/%d/regenerate' % i.pk, {}, follow=True)
    assert 'alert-success' in response.content.decode()
    i.refresh_from_db()
    assert 'Foo' in i.invoice_to
    with scopes_disabled():
        assert env[2].invoices.exists()


@pytest.mark.django_db
def test_order_invoice_regenerate_canceled(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        i = generate_invoice(env[2])
        generate_cancellation(i)
    response = client.post('/control/event/dummy/dummy/orders/FOO/invoices/%d/regenerate' % i.pk, {}, follow=True)
    assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_order_invoice_regenerate_unknown(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/invoices/%d/regenerate' % 3, {}, follow=True)
    assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_order_invoice_reissue(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        i = generate_invoice(env[2])
        InvoiceAddress.objects.create(name_parts={'full_name': 'Foo', "_scheme": "full"}, order=env[2])
    env[0].settings.set('invoice_generate', 'admin')
    response = client.post('/control/event/dummy/dummy/orders/FOO/invoices/%d/reissue' % i.pk, {}, follow=True)
    assert 'alert-success' in response.content.decode()
    i.refresh_from_db()
    with scopes_disabled():
        assert env[2].invoices.count() == 3
        assert 'Foo' not in env[2].invoices.all()[0].invoice_to
        assert 'Foo' not in env[2].invoices.all()[1].invoice_to
        assert 'Foo' in env[2].invoices.all()[2].invoice_to


@pytest.mark.django_db
def test_order_invoice_reissue_canceled(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        i = generate_invoice(env[2])
        generate_cancellation(i)
    response = client.post('/control/event/dummy/dummy/orders/FOO/invoices/%d/reissue' % i.pk, {}, follow=True)
    assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_order_invoice_reissue_unknown(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/invoices/%d/reissue' % 3, {}, follow=True)
    assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_order_resend_link(client, env):
    mail.outbox = []
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/resend', {}, follow=True)
    assert 'alert-success' in response.content.decode()
    assert 'FOO' in mail.outbox[0].body
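

# Reactivation is only offered for canceled orders and re-checks quota.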
@pytest.mark.django_db
def test_order_reactivate_not_canceled(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_PAID
        o.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.get('/control/event/dummy/dummy/orders/FOO/reactivate', follow=True)
    assert 'alert-danger' in response.content.decode()
    response = client.post('/control/event/dummy/dummy/orders/FOO/reactivate', follow=True)
    assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_order_reactivate(client, env):
    with scopes_disabled():
        q = Quota.objects.create(event=env[0], size=3)
        q.items.add(env[3])
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_CANCELED
        o.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/reactivate', {
    }, follow=True)
    print(response.content.decode())
    assert 'alert-success' in response.content.decode()
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_PENDING
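

# The extend endpoint moves the expiry date (normalized to 23:59:59 of the
# chosen day). For orders that are already expired it re-validates quotas,
# seats, and voucher budgets before setting the order back to pending.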
@pytest.mark.django_db
def test_order_extend_not_pending(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_PAID
        o.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.get('/control/event/dummy/dummy/orders/FOO/extend', follow=True)
    assert 'alert-danger' in response.content.decode()
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', follow=True)
    assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_order_extend_not_expired(client, env):
    with scopes_disabled():
        q = Quota.objects.create(event=env[0], size=0)
        q.items.add(env[3])
        o = Order.objects.get(id=env[2].id)
        generate_invoice(o)
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert 'alert-success' in response.content.decode()
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == newdate[:10] + " 23:59:59"
        assert o.invoices.count() == 1


@pytest.mark.django_db
def test_order_extend_overdue_quota_empty(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.save()
        q = Quota.objects.create(event=env[0], size=0)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert 'alert-success' in response.content.decode()
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == newdate[:10] + " 23:59:59"


@pytest.mark.django_db
def test_order_extend_overdue_quota_blocked_by_waiting_list(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_EXPIRED
        o.expires = now() - timedelta(days=5)
        o.save()
        q = Quota.objects.create(event=env[0], size=1)
        q.items.add(env[3])
        env[0].waitinglistentries.create(item=env[3], email='foo@bar.com')
        generate_cancellation(generate_invoice(o))
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert 'alert-success' in response.content.decode()
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == newdate[:10] + " 23:59:59"
        assert o.status == Order.STATUS_PENDING
        assert o.invoices.count() == 3


@pytest.mark.django_db
def test_order_extend_expired_quota_left(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        o.save()
        generate_cancellation(generate_invoice(o))
        q = Quota.objects.create(event=env[0], size=3)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        assert o.invoices.count() == 2
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert b'alert-success' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == newdate[:10] + " 23:59:59"
        assert o.status == Order.STATUS_PENDING
        assert o.invoices.count() == 3


@pytest.mark.django_db
def test_order_extend_expired_quota_empty(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        olddate = o.expires
        o.save()
        q = Quota.objects.create(event=env[0], size=0)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert b'alert-danger' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == olddate.strftime("%Y-%m-%d %H:%M:%S")
        assert o.status == Order.STATUS_EXPIRED


@pytest.mark.django_db
def test_order_extend_expired_quota_empty_ignore(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        o.save()
        q = Quota.objects.create(event=env[0], size=0)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate,
        'quota_ignore': 'on'
    }, follow=True)
    assert b'alert-success' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_PENDING


@pytest.mark.django_db
def test_order_extend_expired_seat_free(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        o.save()
        generate_cancellation(generate_invoice(o))
        seat_a1 = env[0].seats.create(seat_number="A1", product=env[3], seat_guid="A1")
        p = o.positions.first()
        p.seat = seat_a1
        p.save()
        q = Quota.objects.create(event=env[0], size=3)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        assert o.invoices.count() == 2
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert b'alert-success' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == newdate[:10] + " 23:59:59"
        assert o.status == Order.STATUS_PENDING
        assert o.invoices.count() == 3


@pytest.mark.django_db
def test_order_extend_expired_seat_blocked(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        olddate = o.expires
        o.save()
        seat_a1 = env[0].seats.create(seat_number="A1", product=env[3], seat_guid="A1", blocked=True)
        p = o.positions.first()
        p.seat = seat_a1
        p.save()
        q = Quota.objects.create(event=env[0], size=100)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert b'alert-danger' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == olddate.strftime("%Y-%m-%d %H:%M:%S")
        assert o.status == Order.STATUS_EXPIRED


@pytest.mark.django_db
def test_order_extend_expired_seat_taken(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        olddate = o.expires
        o.save()
        seat_a1 = env[0].seats.create(seat_number="A1", product=env[3], seat_guid="A1")
        p = o.positions.first()
        p.seat = seat_a1
        p.save()
        o = Order.objects.create(
            code='BAR', event=env[0], email='dummy@dummy.test',
            status=Order.STATUS_PENDING,
            datetime=now(), expires=now() + timedelta(days=10),
            total=14, locale='en'
        )
        OrderPosition.objects.create(
            order=o,
            item=env[3],
            variation=None,
            price=Decimal("14"),
            attendee_name_parts={'full_name': "Peter", "_scheme": "full"},
            seat=seat_a1
        )
        q = Quota.objects.create(event=env[0], size=100)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert b'alert-danger' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == olddate.strftime("%Y-%m-%d %H:%M:%S")
        assert o.status == Order.STATUS_EXPIRED


@pytest.mark.django_db
def test_order_extend_expired_quota_partial(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        OrderPosition.objects.create(
            order=o,
            item=env[3],
            variation=None,
            price=Decimal("14"),
            attendee_name_parts={'full_name': "Peter", "_scheme": "full"}
        )
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        olddate = o.expires
        o.save()
        q = Quota.objects.create(event=env[0], size=1)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert b'alert-danger' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == olddate.strftime("%Y-%m-%d %H:%M:%S")
        assert o.status == Order.STATUS_EXPIRED


@pytest.mark.django_db
def test_order_extend_expired_voucher_budget_ok(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        o.save()
        v = env[0].vouchers.create(
            code="foo", price_mode='subtract', value=Decimal('1.50'), budget=Decimal('1.50')
        )
        p = o.positions.first()
        p.voucher = v
        p.price_before_voucher = p.price
        p.price -= Decimal('1.50')
        p.save()
        q = Quota.objects.create(event=env[0], size=100)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert b'alert-success' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_PENDING
        assert v.budget_used() == Decimal('1.50')


@pytest.mark.django_db
def test_order_extend_expired_voucher_budget_fail(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        olddate = o.expires
        o.save()
        v = env[0].vouchers.create(
            code="foo", price_mode='subtract', value=Decimal('1.50'), budget=Decimal('0.00')
        )
        p = o.positions.first()
        p.voucher = v
        p.price_before_voucher = p.price
        p.price -= Decimal('1.50')
        p.save()
        q = Quota.objects.create(event=env[0], size=100)
        q.items.add(env[3])
    newdate = (now() + timedelta(days=20)).strftime("%Y-%m-%d")
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/extend', {
        'expires': newdate
    }, follow=True)
    assert b'alert-danger' in response.content
    assert b'The voucher "FOO" no longer has sufficient budget.' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == olddate.strftime("%Y-%m-%d %H:%M:%S")
        assert o.status == Order.STATUS_EXPIRED
        assert v.budget_used() == Decimal('0.00')
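

# Marking an overdue/expired order as paid re-checks quota unless 'force' is
# set; even 'force' cannot override a seat that is meanwhile taken by
# another order.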
@pytest.mark.django_db
def test_order_mark_paid_overdue_quota_blocked_by_waiting_list(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_EXPIRED
        o.expires = now() - timedelta(days=5)
        o.save()
        q = Quota.objects.create(event=env[0], size=1)
        q.items.add(env[3])
        env[0].waitinglistentries.create(item=env[3], email='foo@bar.com')
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'p',
        'payment_date': now().date().isoformat(),
        'amount': str(o.pending_sum),
    }, follow=True)
    assert 'alert-success' in response.content.decode()
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_PAID


@pytest.mark.django_db
def test_order_mark_paid_blocked(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_EXPIRED
        o.expires = now() - timedelta(days=5)
        o.save()
        q = Quota.objects.create(event=env[0], size=0)
        q.items.add(env[3])
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'amount': str(o.pending_sum),
        'payment_date': now().date().isoformat(),
        'status': 'p'
    }, follow=True)
    assert 'alert-danger' in response.content.decode()
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_EXPIRED


@pytest.mark.django_db
def test_order_mark_paid_overpaid_expired(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_EXPIRED
        o.expires = now() - timedelta(days=5)
        o.save()
        o.payments.create(state=OrderPayment.PAYMENT_STATE_CONFIRMED, amount=o.total * 2)
        assert o.pending_sum == -1 * o.total
        q = Quota.objects.create(event=env[0], size=0)
        q.items.add(env[3])
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'p',
        'payment_date': now().date().isoformat(),
        'amount': '0.00',
        'force': 'on'
    }, follow=True)
    assert 'alert-success' in response.content.decode()
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_PAID
        assert o.payments.last().amount == 0
        assert o.pending_sum == -1 * o.total


@pytest.mark.django_db
def test_order_mark_paid_forced(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.status = Order.STATUS_EXPIRED
        o.expires = now() - timedelta(days=5)
        o.save()
        q = Quota.objects.create(event=env[0], size=0)
        q.items.add(env[3])
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'p',
        'payment_date': now().date().isoformat(),
        'amount': str(o.pending_sum),
        'force': 'on'
    }, follow=True)
    print(response.content.decode())
    assert 'alert-success' in response.content.decode()
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.status == Order.STATUS_PAID


@pytest.mark.django_db
def test_order_mark_paid_expired_seat_taken(client, env):
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        o.expires = now() - timedelta(days=5)
        o.status = Order.STATUS_EXPIRED
        olddate = o.expires
        o.save()
        seat_a1 = env[0].seats.create(seat_number="A1", product=env[3], seat_guid="A1")
        p = o.positions.first()
        p.seat = seat_a1
        p.save()
        o = Order.objects.create(
            code='BAR', event=env[0], email='dummy@dummy.test',
            status=Order.STATUS_PENDING,
            datetime=now(), expires=now() + timedelta(days=10),
            total=14, locale='en'
        )
        OrderPosition.objects.create(
            order=o,
            item=env[3],
            variation=None,
            price=Decimal("14"),
            attendee_name_parts={'full_name': "Peter", "_scheme": "full"},
            seat=seat_a1
        )
        q = Quota.objects.create(event=env[0], size=100)
        q.items.add(env[3])
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/transition', {
        'status': 'p',
        'payment_date': now().date().isoformat(),
        'amount': str(o.pending_sum),
        'force': 'on'
    }, follow=True)
    assert b'alert-danger' in response.content
    with scopes_disabled():
        o = Order.objects.get(id=env[2].id)
        assert o.expires.strftime("%Y-%m-%d %H:%M:%S") == olddate.strftime("%Y-%m-%d %H:%M:%S")
        assert o.status == Order.STATUS_EXPIRED
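

# The /orders/go view redirects to an order by code; matching appears to be
# case-insensitive and to tolerate a prepended event slug (DUMMYFOO -> FOO),
# falling back to the order list when nothing matches.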
@pytest.mark.django_db
def test_order_go_lowercase(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.get('/control/event/dummy/dummy/orders/go?code=DuMmyfoO')
    assert response['Location'].endswith('/control/event/dummy/dummy/orders/FOO/')


@pytest.mark.django_db
def test_order_go_with_slug(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.get('/control/event/dummy/dummy/orders/go?code=DUMMYFOO')
    assert response['Location'].endswith('/control/event/dummy/dummy/orders/FOO/')


@pytest.mark.django_db
def test_order_go_found(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.get('/control/event/dummy/dummy/orders/go?code=FOO')
    assert response['Location'].endswith('/control/event/dummy/dummy/orders/FOO/')


@pytest.mark.django_db
def test_order_go_not_found(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.get('/control/event/dummy/dummy/orders/go?code=BAR')
    assert response['Location'].endswith('/control/event/dummy/dummy/orders/')


@pytest.fixture
def order_url(env):
    event = env[0]
    order = env[2]
    url = '/control/event/{orga}/{event}/orders/{code}'.format(
        event=event.slug, orga=event.organizer.slug, code=order.code
    )
    return url
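

# The sendmail tests below build their /sendmail and /mail_history URLs from
# the order_url fixture above.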
@pytest.mark.django_db
def test_order_sendmail_view(client, order_url):
    client.login(email='dummy@dummy.dummy', password='dummy')
    sendmail_url = order_url + '/sendmail'
    response = client.get(sendmail_url)
    assert response.status_code == 200


@pytest.mark.django_db
def test_order_sendmail_simple_case(client, order_url, env):
    order = env[2]
    client.login(email='dummy@dummy.dummy', password='dummy')
    sendmail_url = order_url + '/sendmail'
    mail.outbox = []
    response = client.post(
        sendmail_url,
        {
            'sendto': order.email,
            'subject': 'Test subject',
            'message': 'This is a test file for sending mails.'
        },
        follow=True)
    assert response.status_code == 200
    assert 'alert-success' in response.content.decode()
    assert len(mail.outbox) == 1
    assert mail.outbox[0].to == [order.email]
    assert mail.outbox[0].subject == 'Test subject'
    assert 'This is a test file for sending mails.' in mail.outbox[0].body
    mail_history_url = order_url + '/mail_history'
    response = client.get(mail_history_url)
    assert response.status_code == 200
    assert 'Test subject' in response.content.decode()


@pytest.mark.django_db
def test_order_sendmail_preview(client, order_url, env):
    order = env[2]
    client.login(email='dummy@dummy.dummy', password='dummy')
    sendmail_url = order_url + '/sendmail'
    mail.outbox = []
    response = client.post(
        sendmail_url,
        {
            'sendto': order.email,
            'subject': 'Test subject',
            'message': 'This is a test file for sending mails.',
            'action': 'preview'
        },
        follow=True)
    assert response.status_code == 200
    assert 'E-mail preview' in response.content.decode()
    assert len(mail.outbox) == 0


@pytest.mark.django_db
def test_order_sendmail_invalid_data(client, order_url, env):
    order = env[2]
    client.login(email='dummy@dummy.dummy', password='dummy')
    sendmail_url = order_url + '/sendmail'
    mail.outbox = []
    response = client.post(
        sendmail_url,
        {
            'sendto': order.email,
            'subject': 'Test invalid mail',
        },
        follow=True)
    assert 'has-error' in response.content.decode()
    assert len(mail.outbox) == 0
    mail_history_url = order_url + '/mail_history'
    response = client.get(mail_history_url)
    assert response.status_code == 200
    assert 'Test invalid mail' not in response.content.decode()
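

# The change view is backed by Django formsets, hence the add-* management
# fields and the per-position 'op-<pk>-*' / per-fee 'of-<pk>-*' field names
# in the POST bodies below.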
class OrderChangeTests(SoupTest):
    @scopes_disabled()
    def setUp(self):
        super().setUp()
        o = Organizer.objects.create(name='Dummy', slug='dummy')
        self.event = Event.objects.create(organizer=o, name='Dummy', slug='dummy', date_from=now(),
                                          plugins='pretix.plugins.banktransfer')
        self.order = Order.objects.create(
            code='FOO', event=self.event, email='dummy@dummy.test',
            status=Order.STATUS_PENDING,
            datetime=now(), expires=now() + timedelta(days=10),
            total=Decimal('46.00'),
        )
        self.tr7 = self.event.tax_rules.create(rate=Decimal('7.00'))
        self.tr19 = self.event.tax_rules.create(rate=Decimal('19.00'))
        self.ticket = Item.objects.create(event=self.event, name='Early-bird ticket', tax_rule=self.tr7,
                                          default_price=Decimal('23.00'), admission=True)
        self.shirt = Item.objects.create(event=self.event, name='T-Shirt', tax_rule=self.tr19,
                                         default_price=Decimal('12.00'))
        self.op1 = OrderPosition.objects.create(
            order=self.order, item=self.ticket, variation=None,
            price=Decimal("23.00"), attendee_name_parts={'full_name': "Peter", "_scheme": "full"}
        )
        self.op2 = OrderPosition.objects.create(
            order=self.order, item=self.ticket, variation=None,
            price=Decimal("23.00"), attendee_name_parts={'full_name': "Dieter", "_scheme": "full"}
        )
        self.op3 = OrderPosition.objects.create(
            order=self.order, item=self.ticket, variation=None,
            price=Decimal("23.00"), attendee_name_parts={'full_name': "Lukas", "_scheme": "full"},
            canceled=True
        )
        self.quota = self.event.quotas.create(name="All", size=100)
        self.quota.items.add(self.ticket)
        self.quota.items.add(self.shirt)
        user = User.objects.create_user('dummy@dummy.dummy', 'dummy')
        t = Team.objects.create(organizer=o, can_view_orders=True, can_change_orders=True)
        t.members.add(user)
        t.limit_events.add(self.event)
        self.client.login(email='dummy@dummy.dummy', password='dummy')

    def test_do_not_show_canceled(self):
        r = self.client.get('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ))
        assert self.op1.secret[:5] in r.content.decode()
        assert self.op2.secret[:5] in r.content.decode()
        assert self.op3.secret[:5] not in r.content.decode()

    def test_change_item_success(self):
        self.client.post('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ), {
            'add-TOTAL_FORMS': '0',
            'add-INITIAL_FORMS': '0',
            'add-MIN_NUM_FORMS': '0',
            'add-MAX_NUM_FORMS': '100',
            'op-{}-itemvar'.format(self.op1.pk): str(self.shirt.pk),
            'op-{}-price'.format(self.op1.pk): '12.00',
        })
        self.op1.refresh_from_db()
        self.order.refresh_from_db()
        assert self.op1.item == self.shirt
        assert self.op1.price == self.shirt.default_price
        assert self.op1.tax_rate == self.shirt.tax_rule.rate
        assert self.order.total == self.op1.price + self.op2.price

    def test_change_subevent_success(self):
        self.event.has_subevents = True
        self.event.save()
        with scopes_disabled():
            se1 = self.event.subevents.create(name='Foo', date_from=now())
            se2 = self.event.subevents.create(name='Bar', date_from=now())
            self.op1.subevent = se1
            self.op1.save()
            self.op2.subevent = se1
            self.op2.save()
            self.quota.subevent = se1
            self.quota.save()
            q2 = self.event.quotas.create(name='Q2', size=100, subevent=se2)
            q2.items.add(self.ticket)
            q2.items.add(self.shirt)
        self.client.post('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ), {
            'add-TOTAL_FORMS': '0',
            'add-INITIAL_FORMS': '0',
            'add-MIN_NUM_FORMS': '0',
            'add-MAX_NUM_FORMS': '100',
            'op-{}-subevent'.format(self.op1.pk): str(se2.pk),
        })
        self.op1.refresh_from_db()
        self.op2.refresh_from_db()
        self.order.refresh_from_db()
        assert self.op1.subevent == se2
        assert self.op2.subevent == se1

    def test_change_price_success(self):
        self.client.post('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ), {
            'add-TOTAL_FORMS': '0',
            'add-INITIAL_FORMS': '0',
            'add-MIN_NUM_FORMS': '0',
            'add-MAX_NUM_FORMS': '100',
            'op-{}-operation'.format(self.op1.pk): 'price',
            'op-{}-itemvar'.format(self.op1.pk): str(self.ticket.pk),
            'op-{}-price'.format(self.op1.pk): '24.00',
            'op-{}-operation'.format(self.op2.pk): '',
            'op-{}-itemvar'.format(self.op2.pk): str(self.ticket.pk),
        })
        self.op1.refresh_from_db()
        self.order.refresh_from_db()
        assert self.op1.item == self.ticket
        assert self.op1.price == Decimal('24.00')
        assert self.order.total == self.op1.price + self.op2.price

    def test_cancel_success(self):
        self.client.post('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ), {
            'add-TOTAL_FORMS': '0',
            'add-INITIAL_FORMS': '0',
            'add-MIN_NUM_FORMS': '0',
            'add-MAX_NUM_FORMS': '100',
            'op-{}-operation_cancel'.format(self.op1.pk): 'on',
        })
        self.order.refresh_from_db()
        with scopes_disabled():
            assert self.order.positions.count() == 1
            assert self.order.total == self.op2.price

    def test_add_item_success(self):
        self.client.post('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ), {
            'add-TOTAL_FORMS': '1',
            'add-INITIAL_FORMS': '0',
            'add-MIN_NUM_FORMS': '0',
            'add-MAX_NUM_FORMS': '100',
            'add-0-itemvar': str(self.shirt.pk),
            'add-0-do': 'on',
            'add-0-price': '14.00',
        })
        with scopes_disabled():
            assert self.order.positions.count() == 3
            assert self.order.positions.last().item == self.shirt
            assert self.order.positions.last().price == 14

    def test_recalculate_reverse_charge(self):
        self.tr7.eu_reverse_charge = True
        self.tr7.home_country = Country('DE')
        self.tr7.save()
        self.tr19.eu_reverse_charge = True
        self.tr19.home_country = Country('DE')
        self.tr19.save()
        with scopes_disabled():
            InvoiceAddress.objects.create(
                order=self.order, is_business=True, vat_id='ATU1234567', vat_id_validated=True,
                country=Country('AT')
            )
        self.client.post('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ), {
            'add-TOTAL_FORMS': '0',
            'add-INITIAL_FORMS': '0',
            'add-MIN_NUM_FORMS': '0',
            'add-MAX_NUM_FORMS': '100',
            'other-recalculate_taxes': 'net',
            'op-{}-operation'.format(self.op1.pk): '',
            'op-{}-operation'.format(self.op2.pk): '',
            'op-{}-itemvar'.format(self.op2.pk): str(self.ticket.pk),
            'op-{}-price'.format(self.op2.pk): str(self.op2.price),
            'op-{}-itemvar'.format(self.op1.pk): str(self.ticket.pk),
            'op-{}-price'.format(self.op1.pk): str(self.op1.price),
        })
        with scopes_disabled():
            ops = list(self.order.positions.all())
        for op in ops:
            assert op.price == Decimal('21.50')
            assert op.tax_value == Decimal('0.00')
            assert op.tax_rate == Decimal('0.00')

    def test_recalculate_reverse_charge_keep_gross(self):
        self.tr7.eu_reverse_charge = True
        self.tr7.home_country = Country('DE')
        self.tr7.save()
        self.tr19.eu_reverse_charge = True
        self.tr19.home_country = Country('DE')
        self.tr19.save()
        with scopes_disabled():
            InvoiceAddress.objects.create(
                order=self.order, is_business=True, vat_id='ATU1234567', vat_id_validated=True,
                country=Country('AT')
            )
        self.client.post('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ), {
            'add-TOTAL_FORMS': '0',
            'add-INITIAL_FORMS': '0',
            'add-MIN_NUM_FORMS': '0',
            'add-MAX_NUM_FORMS': '100',
            'other-recalculate_taxes': 'gross',
            'op-{}-operation'.format(self.op1.pk): '',
            'op-{}-operation'.format(self.op2.pk): '',
            'op-{}-itemvar'.format(self.op2.pk): str(self.ticket.pk),
            'op-{}-price'.format(self.op2.pk): str(self.op2.price),
            'op-{}-itemvar'.format(self.op1.pk): str(self.ticket.pk),
            'op-{}-price'.format(self.op1.pk): str(self.op1.price),
        })
        with scopes_disabled():
            ops = list(self.order.positions.all())
        for op in ops:
            assert op.price == Decimal('23.00')
            assert op.tax_value == Decimal('0.00')
            assert op.tax_rate == Decimal('0.00')

    def test_change_fee_value_success(self):
        with scopes_disabled():
            fee = self.order.fees.create(fee_type="shipping", value=Decimal('5.00'), tax_rule=self.tr19)
            self.order.total += Decimal('5.00')
            self.order.save()
        self.client.post('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ), {
            'add-TOTAL_FORMS': '0',
            'add-INITIAL_FORMS': '0',
            'add-MIN_NUM_FORMS': '0',
            'add-MAX_NUM_FORMS': '100',
            'op-{}-price'.format(self.op1.pk): '24.00',
            'op-{}-operation'.format(self.op2.pk): '',
            'op-{}-itemvar'.format(self.op2.pk): str(self.ticket.pk),
            'of-{}-value'.format(fee.pk): '3.50',
        })
        self.op1.refresh_from_db()
        self.order.refresh_from_db()
        assert self.op1.item == self.ticket
        assert self.op1.price == Decimal('24.00')
        fee.refresh_from_db()
        self.op1.refresh_from_db()
        self.op2.refresh_from_db()
        assert self.order.total == self.op1.price + self.op2.price + Decimal('3.50')
        assert fee.value == Decimal('3.50')

    def test_cancel_fee_success(self):
        with scopes_disabled():
            fee = self.order.fees.create(fee_type="shipping", value=Decimal('5.00'), tax_rule=self.tr19)
            self.order.total += Decimal('5.00')
            self.order.save()
        self.client.post('/control/event/{}/{}/orders/{}/change'.format(
            self.event.organizer.slug, self.event.slug, self.order.code
        ), {
            'add-TOTAL_FORMS': '0',
            'add-INITIAL_FORMS': '0',
            'add-MIN_NUM_FORMS': '0',
            'add-MAX_NUM_FORMS': '100',
            'op-{}-operation'.format(self.op1.pk): 'price',
            'op-{}-itemvar'.format(self.op1.pk): str(self.ticket.pk),
            'op-{}-price'.format(self.op1.pk): '24.00',
            'op-{}-operation'.format(self.op2.pk): '',
            'op-{}-itemvar'.format(self.op2.pk): str(self.ticket.pk),
            'of-{}-value'.format(fee.pk): '5.00',
            'of-{}-operation_cancel'.format(fee.pk): 'on',
        })
        self.order.refresh_from_db()
        fee.refresh_from_db()
        assert fee.canceled
        self.op1.refresh_from_db()
        self.op2.refresh_from_db()
        assert self.order.total == self.op1.price + self.op2.price
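

# VAT ID validation is exercised with vat_moss.id.validate mocked out, so no
# live VIES lookups happen during the test run.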
@pytest.mark.django_db
def test_check_vatid(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        ia = InvoiceAddress.objects.create(order=env[2], is_business=True, vat_id='ATU1234567', country=Country('AT'))
    with mock.patch('vat_moss.id.validate') as mock_validate:
        mock_validate.return_value = ('AT', 'AT123456', 'Foo')
        response = client.post('/control/event/dummy/dummy/orders/FOO/checkvatid', {}, follow=True)
        assert 'alert-success' in response.content.decode()
        ia.refresh_from_db()
        assert ia.vat_id_validated


@pytest.mark.django_db
def test_check_vatid_no_entered(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        ia = InvoiceAddress.objects.create(order=env[2], is_business=True, country=Country('AT'))
    with mock.patch('vat_moss.id.validate') as mock_validate:
        mock_validate.return_value = ('AT', 'AT123456', 'Foo')
        response = client.post('/control/event/dummy/dummy/orders/FOO/checkvatid', {}, follow=True)
        assert 'alert-danger' in response.content.decode()
        ia.refresh_from_db()
        assert not ia.vat_id_validated


@pytest.mark.django_db
def test_check_vatid_invalid_country(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        ia = InvoiceAddress.objects.create(order=env[2], is_business=True, vat_id='ATU1234567', country=Country('FR'))
    with mock.patch('vat_moss.id.validate') as mock_validate:
        mock_validate.return_value = ('AT', 'AT123456', 'Foo')
        response = client.post('/control/event/dummy/dummy/orders/FOO/checkvatid', {}, follow=True)
        assert 'alert-danger' in response.content.decode()
        ia.refresh_from_db()
        assert not ia.vat_id_validated


@pytest.mark.django_db
def test_check_vatid_noneu_country(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        ia = InvoiceAddress.objects.create(order=env[2], is_business=True, vat_id='CHU1234567', country=Country('CH'))
    with mock.patch('vat_moss.id.validate') as mock_validate:
        mock_validate.return_value = ('AT', 'AT123456', 'Foo')
        response = client.post('/control/event/dummy/dummy/orders/FOO/checkvatid', {}, follow=True)
        assert 'alert-danger' in response.content.decode()
        ia.refresh_from_db()
        assert not ia.vat_id_validated


@pytest.mark.django_db
def test_check_vatid_no_country(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        ia = InvoiceAddress.objects.create(order=env[2], is_business=True, vat_id='ATU1234567')
    with mock.patch('vat_moss.id.validate') as mock_validate:
        mock_validate.return_value = ('AT', 'AT123456', 'Foo')
        response = client.post('/control/event/dummy/dummy/orders/FOO/checkvatid', {}, follow=True)
        assert 'alert-danger' in response.content.decode()
        ia.refresh_from_db()
        assert not ia.vat_id_validated


@pytest.mark.django_db
def test_check_vatid_no_invoiceaddress(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with mock.patch('vat_moss.id.validate') as mock_validate:
        mock_validate.return_value = ('AT', 'AT123456', 'Foo')
        response = client.post('/control/event/dummy/dummy/orders/FOO/checkvatid', {}, follow=True)
        assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_check_vatid_invalid(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        ia = InvoiceAddress.objects.create(order=env[2], is_business=True, vat_id='ATU1234567', country=Country('AT'))
    with mock.patch('vat_moss.id.validate') as mock_validate:
        def raiser(*args, **kwargs):
            import vat_moss.errors
            raise vat_moss.errors.InvalidError('Fail')

        mock_validate.side_effect = raiser
        response = client.post('/control/event/dummy/dummy/orders/FOO/checkvatid', {}, follow=True)
        assert 'alert-danger' in response.content.decode()
        ia.refresh_from_db()
        assert not ia.vat_id_validated


@pytest.mark.django_db
def test_check_vatid_unavailable(client, env):
    client.login(email='dummy@dummy.dummy', password='dummy')
    with scopes_disabled():
        ia = InvoiceAddress.objects.create(order=env[2], is_business=True, vat_id='ATU1234567', country=Country('AT'))
    with mock.patch('vat_moss.id.validate') as mock_validate:
        def raiser(*args, **kwargs):
            import vat_moss.errors
            raise vat_moss.errors.WebServiceUnavailableError('Fail')

        mock_validate.side_effect = raiser
        response = client.post('/control/event/dummy/dummy/orders/FOO/checkvatid', {}, follow=True)
        assert 'alert-danger' in response.content.decode()
        ia.refresh_from_db()
        assert not ia.vat_id_validated
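

# Payment/refund state machines: payments can be canceled or confirmed,
# refunds canceled, processed, or marked done, each rejecting objects in an
# unsuitable source state.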
@pytest.mark.django_db
def test_cancel_payment(client, env):
    with scopes_disabled():
        p = env[2].payments.last()
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/payments/{}/cancel'.format(p.pk), {}, follow=True)
    assert 'alert-success' in response.content.decode()
    p.refresh_from_db()
    assert p.state == OrderPayment.PAYMENT_STATE_CANCELED
    response = client.post('/control/event/dummy/dummy/orders/FOO/payments/{}/cancel'.format(p.pk), {}, follow=True)
    assert 'alert-danger' in response.content.decode()


@pytest.mark.django_db
def test_cancel_refund(client, env):
    with scopes_disabled():
        r = env[2].refunds.create(
            provider='stripe',
            state='transit',
            source='admin',
            amount=Decimal('23.00'),
            execution_date=now(),
        )
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/refunds/{}/cancel'.format(r.pk), {}, follow=True)
    assert 'alert-success' in response.content.decode()
    r.refresh_from_db()
    assert r.state == OrderRefund.REFUND_STATE_CANCELED
    r.state = OrderRefund.REFUND_STATE_DONE
    r.save()
    response = client.post('/control/event/dummy/dummy/orders/FOO/refunds/{}/cancel'.format(r.pk), {}, follow=True)
    assert 'alert-danger' in response.content.decode()
    r.refresh_from_db()
    assert r.state == OrderRefund.REFUND_STATE_DONE


@pytest.mark.django_db
def test_process_refund(client, env):
    with scopes_disabled():
        r = env[2].refunds.create(
            provider='stripe',
            state='external',
            source='external',
            amount=Decimal('23.00'),
            execution_date=now(),
        )
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/refunds/{}/process'.format(r.pk), {}, follow=True)
    assert 'alert-success' in response.content.decode()
    r.refresh_from_db()
    assert r.state == OrderRefund.REFUND_STATE_DONE
    env[2].refresh_from_db()
    assert env[2].status == Order.STATUS_PENDING


@pytest.mark.django_db
def test_process_refund_overpaid_externally(client, env):
    with scopes_disabled():
        env[2].payments.first().confirm()
        env[2].payments.create(
            state='confirmed',
            provider='stripe',
            amount=Decimal('14.00'),
            payment_date=now()
        )
        assert env[2].pending_sum == -14
        r = env[2].refunds.create(
            provider='stripe',
            state='external',
            source='external',
            amount=Decimal('14.00'),
            execution_date=now(),
        )
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/refunds/{}/process'.format(r.pk), {}, follow=True)
    assert 'alert-success' in response.content.decode()
    r.refresh_from_db()
    assert r.state == OrderRefund.REFUND_STATE_DONE
    env[2].refresh_from_db()
    assert env[2].status == Order.STATUS_PAID
    assert env[2].pending_sum == 0


@pytest.mark.django_db
def test_process_refund_invalid_state(client, env):
    with scopes_disabled():
        r = env[2].refunds.create(
            provider='stripe',
            state='canceled',
            source='external',
            amount=Decimal('23.00'),
            execution_date=now(),
        )
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/refunds/{}/process'.format(r.pk), {}, follow=True)
    assert 'alert-danger' in response.content.decode()
    r.refresh_from_db()
    assert r.state == OrderRefund.REFUND_STATE_CANCELED


@pytest.mark.django_db
def test_process_refund_mark_refunded(client, env):
    with scopes_disabled():
        r = env[2].refunds.create(
            provider='stripe',
            state='external',
            source='external',
            amount=Decimal('23.00'),
            execution_date=now(),
        )
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/refunds/{}/process'.format(r.pk), {'action': 'r'},
                           follow=True)
    assert 'alert-success' in response.content.decode()
    r.refresh_from_db()
    assert r.state == OrderRefund.REFUND_STATE_DONE
    env[2].refresh_from_db()
    assert env[2].status == Order.STATUS_CANCELED


@pytest.mark.django_db
def test_done_refund(client, env):
    with scopes_disabled():
        r = env[2].refunds.create(
            provider='stripe',
            state='transit',
            source='admin',
            amount=Decimal('23.00'),
            execution_date=now(),
        )
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/refunds/{}/done'.format(r.pk), {}, follow=True)
    assert 'alert-success' in response.content.decode()
    r.refresh_from_db()
    assert r.state == OrderRefund.REFUND_STATE_DONE


@pytest.mark.django_db
def test_done_refund_invalid_state(client, env):
    with scopes_disabled():
        r = env[2].refunds.create(
            provider='stripe',
            state='external',
            source='external',
            amount=Decimal('23.00'),
            execution_date=now(),
        )
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/refunds/{}/done'.format(r.pk), {}, follow=True)
    assert 'alert-danger' in response.content.decode()
    r.refresh_from_db()
    assert r.state == OrderRefund.REFUND_STATE_EXTERNAL


@pytest.mark.django_db
def test_confirm_payment(client, env):
    with scopes_disabled():
        p = env[2].payments.last()
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/payments/{}/confirm'.format(p.pk), {}, follow=True)
    assert 'alert-success' in response.content.decode()
    p.refresh_from_db()
    assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
    env[2].refresh_from_db()
    assert env[2].status == Order.STATUS_PAID


@pytest.mark.django_db
def test_confirm_payment_invalid_state(client, env):
    with scopes_disabled():
        p = env[2].payments.last()
        p.state = OrderPayment.PAYMENT_STATE_FAILED
        p.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/payments/{}/confirm'.format(p.pk), {}, follow=True)
    assert 'alert-danger' in response.content.decode()
    p.refresh_from_db()
    assert p.state == OrderPayment.PAYMENT_STATE_FAILED
    env[2].refresh_from_db()
    assert env[2].status == Order.STATUS_PENDING


@pytest.mark.django_db
def test_confirm_payment_partial_amount(client, env):
    with scopes_disabled():
        p = env[2].payments.last()
        p.amount -= Decimal('5.00')
        p.save()
    client.login(email='dummy@dummy.dummy', password='dummy')
    response = client.post('/control/event/dummy/dummy/orders/FOO/payments/{}/confirm'.format(p.pk), {}, follow=True)
    assert 'alert-success' in response.content.decode()
    p.refresh_from_db()
    assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
    env[2].refresh_from_db()
    assert env[2].status == Order.STATUS_PENDING
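

# The refund form is a two-step flow: a POST with only the 'start-*' fields
# renders the proposal screen, and a second POST with 'perform': 'on'
# executes the selected refunds ('refund-manual', 'refund-<payment pk>',
# 'refund-offsetting', ...).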
@pytest.mark.django_db
def test_refund_paid_order_fully_mark_as_refunded(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
response = client.get('/control/event/dummy/dummy/orders/FOO/refund')
doc = BeautifulSoup(response.content.decode(), "lxml")
assert doc.select("input[name$=partial_amount]")[0]["value"] == "14.00"
client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '14.00',
'start-mode': 'full',
'start-action': 'mark_refunded'
}, follow=True)
client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '14.00',
'start-mode': 'full',
'start-action': 'mark_refunded',
'refund-manual': '14.00',
'manual_state': 'done',
'perform': 'on'
}, follow=True)
p.refresh_from_db()
with scopes_disabled():
assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
env[2].refresh_from_db()
r = env[2].refunds.last()
assert r.provider == "manual"
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('14.00')
assert env[2].status == Order.STATUS_CANCELED
@pytest.mark.django_db
def test_refund_paid_order_fully_mark_as_pending(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
response = client.get('/control/event/dummy/dummy/orders/FOO/refund')
doc = BeautifulSoup(response.content.decode(), "lxml")
assert doc.select("input[name$=partial_amount]")[0]["value"] == "14.00"
client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '14.00',
'start-mode': 'full',
'start-action': 'mark_pending',
'refund-manual': '14.00',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
p.refresh_from_db()
assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
env[2].refresh_from_db()
with scopes_disabled():
r = env[2].refunds.last()
assert r.provider == "manual"
assert r.state == OrderRefund.REFUND_STATE_CREATED
assert r.amount == Decimal('14.00')
assert env[2].status == Order.STATUS_PENDING
@pytest.mark.django_db
def test_refund_paid_order_partially_mark_as_pending(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
response = client.get('/control/event/dummy/dummy/orders/FOO/refund')
doc = BeautifulSoup(response.content.decode(), "lxml")
assert doc.select("input[name$=partial_amount]")[0]["value"] == "14.00"
client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending'
}, follow=True)
client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-manual': '7.00',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
p.refresh_from_db()
assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
env[2].refresh_from_db()
with scopes_disabled():
r = env[2].refunds.last()
assert r.provider == "manual"
assert r.state == OrderRefund.REFUND_STATE_CREATED
assert r.amount == Decimal('7.00')
assert env[2].status == Order.STATUS_PENDING
@pytest.mark.django_db
def test_refund_propose_lower_payment(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.amount = Decimal('8.00')
p.confirm()
p2 = env[2].payments.create(
amount=Decimal('6.00'), provider='stripe', state=OrderPayment.PAYMENT_STATE_CONFIRMED
)
client.login(email='dummy@dummy.dummy', password='dummy')
client.get('/control/event/dummy/dummy/orders/FOO/refund')
response = client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending'
}, follow=True)
doc = BeautifulSoup(response.content.decode(), "lxml")
assert doc.select("input[name=refund-{}]".format(p2.pk))[0]['value'] == '6.00'
assert doc.select("input[name=refund-manual]".format(p2.pk))[0]['value'] == '1.00'
@pytest.mark.django_db
def test_refund_propose_equal_payment(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.amount = Decimal('7.00')
p.confirm()
p2 = env[2].payments.create(
amount=Decimal('7.00'), provider='stripe', state=OrderPayment.PAYMENT_STATE_CONFIRMED
)
client.login(email='dummy@dummy.dummy', password='dummy')
client.get('/control/event/dummy/dummy/orders/FOO/refund')
response = client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending'
}, follow=True)
doc = BeautifulSoup(response.content.decode(), "lxml")
assert doc.select("input[name=refund-{}]".format(p2.pk))[0]['value'] == '7.00'
assert not doc.select("input[name=refund-manual]".format(p2.pk))[0].get('value')
@pytest.mark.django_db
def test_refund_propose_higher_payment(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.amount = Decimal('6.00')
p.confirm()
p2 = env[2].payments.create(
amount=Decimal('8.00'), provider='stripe', state=OrderPayment.PAYMENT_STATE_CONFIRMED
)
client.login(email='dummy@dummy.dummy', password='dummy')
client.get('/control/event/dummy/dummy/orders/FOO/refund')
response = client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending'
}, follow=True)
doc = BeautifulSoup(response.content.decode(), "lxml")
assert doc.select("input[name=refund-{}]".format(p2.pk))[0]['value'] == '7.00'
assert not doc.select("input[name=refund-manual]".format(p2.pk))[0].get('value')
@pytest.mark.django_db
def test_refund_amount_does_not_match_or_invalid(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
resp = client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-manual': '4.00',
'refund-{}'.format(p.pk): '4.00',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
assert b'alert-danger' in resp.content
assert b'do not match the' in resp.content
resp = client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '15.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-manual': '0.00',
'refund-{}'.format(p.pk): '15.00',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
assert b'alert-danger' in resp.content
assert b'The refund amount needs to be positive' in resp.content
resp = client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-manual': '-3.00',
'refund-{}'.format(p.pk): '10.00',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
assert b'alert-danger' in resp.content
assert b'do not match the' in resp.content
resp = client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-manual': 'AA',
'refund-{}'.format(p.pk): '10.00',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
assert b'alert-danger' in resp.content
assert b'invalid number' in resp.content
@pytest.mark.django_db
def test_refund_paid_order_automatically_failed(client, env, monkeypatch):
with scopes_disabled():
p = env[2].payments.last()
p.provider = 'stripe'
p.info_data = {
'id': 'foo'
}
p.save()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
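# Stub out the Stripe API so that creating a refund raises a PaymentException.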
def charge_retr(*args, **kwargs):
def refund_create(amount):
raise PaymentException('This failed.')
c = MockedCharge()
c.refunds.create = refund_create
return c
monkeypatch.setattr("stripe.Charge.retrieve", charge_retr)
r = client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-{}'.format(p.pk): '7.00',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
assert b'This failed.' in r.content
p.refresh_from_db()
assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
env[2].refresh_from_db()
with scopes_disabled():
r = env[2].refunds.last()
assert r.provider == "stripe"
assert r.state == OrderRefund.REFUND_STATE_FAILED
assert r.amount == Decimal('7.00')
assert env[2].status == Order.STATUS_PAID
@pytest.mark.django_db
def test_refund_paid_order_automatically(client, env, monkeypatch):
with scopes_disabled():
p = env[2].payments.last()
p.provider = 'stripe'
p.info_data = {
'id': 'foo'
}
p.save()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
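# Stub out the Stripe API so that creating a refund succeeds.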
def charge_retr(*args, **kwargs):
def refund_create(amount):
r = MockedCharge()
r.id = 'foo'
r.status = 'succeeded'
return r
c = MockedCharge()
c.refunds.create = refund_create
return c
monkeypatch.setattr("stripe.Charge.retrieve", charge_retr)
client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '7.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-{}'.format(p.pk): '7.00',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
p.refresh_from_db()
assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
env[2].refresh_from_db()
with scopes_disabled():
r = env[2].refunds.last()
assert r.provider == "stripe"
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('7.00')
assert env[2].status == Order.STATUS_PENDING
@pytest.mark.django_db
def test_refund_paid_order_offsetting_to_unknown(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
r = client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '5.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-offsetting': '5.00',
'order-offsetting': 'BAZ',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
assert b'alert-danger' in r.content
@pytest.mark.django_db
def test_refund_paid_order_offsetting_to_expired(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
o = Order.objects.create(
code='BAZ', event=env[0], email='dummy@dummy.test',
status=Order.STATUS_EXPIRED,
datetime=now(), expires=now() + timedelta(days=10),
total=5, locale='en'
)
o.positions.create(price=5, item=env[3])
q = Quota.objects.create(event=env[0], size=0)
q.items.add(env[3])
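# The zero-size quota ensures the expired target order cannot be revived,
# yet the offsetting payment below should still be recorded against it.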
client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '5.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-offsetting': '5.00',
'order-offsetting': 'BAZ',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
p.refresh_from_db()
assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
env[2].refresh_from_db()
with scopes_disabled():
r = env[2].refunds.last()
assert r.provider == "offsetting"
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('5.00')
assert env[2].status == Order.STATUS_PENDING
o.refresh_from_db()
assert o.status == Order.STATUS_EXPIRED
p2 = o.payments.first()
assert p2.provider == "offsetting"
assert p2.amount == Decimal('5.00')
assert p2.state == OrderPayment.PAYMENT_STATE_CONFIRMED
@pytest.mark.django_db
def test_refund_paid_order_offsetting(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
o = Order.objects.create(
code='BAZ', event=env[0], email='dummy@dummy.test',
status=Order.STATUS_PENDING,
datetime=now(), expires=now() + timedelta(days=10),
total=5, locale='en'
)
client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '5.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-offsetting': '5.00',
'order-offsetting': 'BAZ',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
p.refresh_from_db()
assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
env[2].refresh_from_db()
with scopes_disabled():
r = env[2].refunds.last()
assert r.provider == "offsetting"
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('5.00')
assert env[2].status == Order.STATUS_PENDING
o.refresh_from_db()
assert o.status == Order.STATUS_PAID
p2 = o.payments.first()
assert p2.provider == "offsetting"
assert p2.amount == Decimal('5.00')
assert p2.state == OrderPayment.PAYMENT_STATE_CONFIRMED
@pytest.mark.django_db
def test_refund_paid_order_giftcard(client, env):
with scopes_disabled():
p = env[2].payments.last()
p.confirm()
client.login(email='dummy@dummy.dummy', password='dummy')
client.post('/control/event/dummy/dummy/orders/FOO/refund', {
'start-partial_amount': '5.00',
'start-mode': 'partial',
'start-action': 'mark_pending',
'refund-new-giftcard': '5.00',
'manual_state': 'pending',
'perform': 'on'
}, follow=True)
p.refresh_from_db()
assert p.state == OrderPayment.PAYMENT_STATE_CONFIRMED
env[2].refresh_from_db()
with scopes_disabled():
r = env[2].refunds.last()
assert r.provider == "giftcard"
assert r.state == OrderRefund.REFUND_STATE_DONE
assert r.amount == Decimal('5.00')
assert env[2].status == Order.STATUS_PENDING
gk = GiftCard.objects.get(pk=r.info_data['gift_card'])
assert gk.value == Decimal('5.00')
@pytest.mark.django_db
def test_refund_list(client, env):
with scopes_disabled():
env[2].refunds.create(
provider='banktransfer',
state='done',
source='admin',
amount=Decimal('23.00'),
execution_date=now(),
)
env[2].refunds.create(
provider='manual',
state='created',
source='admin',
amount=Decimal('23.00'),
execution_date=now(),
)
client.login(email='dummy@dummy.dummy', password='dummy')
response = client.get('/control/event/dummy/dummy/orders/refunds/')
assert 'R-1' not in response.content.decode()
assert 'R-2' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/refunds/?status=all')
assert 'R-1' in response.content.decode()
assert 'R-2' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/refunds/?status=created')
assert 'R-1' not in response.content.decode()
assert 'R-2' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/refunds/?status=done')
assert 'R-1' in response.content.decode()
assert 'R-2' not in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/refunds/?status=all&provider=manual')
assert 'R-1' not in response.content.decode()
assert 'R-2' in response.content.decode()
response = client.get('/control/event/dummy/dummy/orders/refunds/?status=all&provider=banktransfer')
assert 'R-1' in response.content.decode()
assert 'R-2' not in response.content.decode()
@pytest.mark.django_db
def test_delete_cancellation_request(client, env):
with scopes_disabled():
r = env[2].cancellation_requests.create(
cancellation_fee=Decimal('4.00'),
refund_as_giftcard=True
)
client.login(email='dummy@dummy.dummy', password='dummy')
response = client.post('/control/event/dummy/dummy/orders/FOO/cancellationrequests/{}/delete'.format(r.pk), {},
follow=True)
assert 'alert-success' in response.content.decode()
assert not env[2].cancellation_requests.exists()
@pytest.mark.django_db
def test_approve_cancellation_request(client, env):
with scopes_disabled():
o = Order.objects.get(id=env[2].id)
o.payments.create(state=OrderPayment.PAYMENT_STATE_CONFIRMED, amount=o.total)
o.status = Order.STATUS_PAID
o.save()
r = env[2].cancellation_requests.create(
cancellation_fee=Decimal('4.00'),
refund_as_giftcard=True
)
client.login(email='dummy@dummy.dummy', password='dummy')
response = client.get('/control/event/dummy/dummy/orders/FOO/transition?status=c&req={}'.format(r.pk), {})
doc = BeautifulSoup(response.content.decode(), "lxml")
assert doc.select('input[name=cancellation_fee]')[0]['value'] == '4.00'
response = client.post('/control/event/dummy/dummy/orders/FOO/transition?req={}'.format(r.pk), {
'status': 'c',
'cancellation_fee': '4.00'
}, follow=True)
doc = BeautifulSoup(response.content.decode(), "lxml")
assert doc.select('input[name=refund-new-giftcard]')[0]['value'] == '10.00'
assert not env[2].cancellation_requests.exists()
| 39.442142 | 118 | 0.640014 | 12,076 | 91,348 | 4.726234 | 0.046787 | 0.059747 | 0.041998 | 0.05435 | 0.896976 | 0.881154 | 0.868259 | 0.85496 | 0.831657 | 0.817903 | 0 | 0.017565 | 0.203486 | 91,348 | 2,315 | 119 | 39.459179 | 0.76685 | 0.02196 | 0 | 0.772414 | 0 | 0.000493 | 0.19295 | 0.086767 | 0 | 0 | 0 | 0 | 0.165025 | 1 | 0.055172 | false | 0.046305 | 0.007882 | 0 | 0.06601 | 0.000985 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9c055471d0a34808212c6e66a1022b61d582c780 | 63,392 | py | Python | Burst_Spectral_Analysis/time_res_sp_nbs_v1.py | tolgaguver/heaiu | f822afbb7e3e69d773c666918e78e7a43a79db74 | [
"MIT"
] | 1 | 2019-07-24T15:08:33.000Z | 2019-07-24T15:08:33.000Z | Burst_Spectral_Analysis/time_res_sp_nbs_v1.py | tolgaguver/heaiu | f822afbb7e3e69d773c666918e78e7a43a79db74 | [
"MIT"
] | null | null | null | Burst_Spectral_Analysis/time_res_sp_nbs_v1.py | tolgaguver/heaiu | f822afbb7e3e69d773c666918e78e7a43a79db74 | [
"MIT"
] | 1 | 2019-09-03T20:00:01.000Z | 2019-09-03T20:00:01.000Z | import glob
from astropy.io import ascii
from astropy.table import Table
from astropy.io import fits
from astropy.time import Time
import os
import numpy as np
import matplotlib
matplotlib.use('Qt5Agg')
import matplotlib.pyplot as plt
import pandas as pd
import corner
# The fitting calls below (load_pha, set_source, fit, covar, sample_flux, ...)
# come from Sherpa's UI layer; import it so the script runs standalone.
from sherpa.astro.ui import *
# Create the dataframe to be filled later :
col_names = ["Source_Name_l", "BID_l", "SID_l", "OBS_ID_l", "MJD_OBS_l",\
"DATE_OBS_l", "exp_l", "NH_l", "BB_kT_l", "min_BBkT_l","max_BBkt_l",\
"BB_Norm_l", "min_BBNorm_l", "max_BBNorm_l", "BB_flux_l", \
"min_BBflux_l", "max_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "BB2_kT_l", "min_2BBkT_l", "max_2BBkt_l", "BB2_Norm_l",\
"min_2BBNorm_l", "max_2BBNorm_l", "BB2_flux_l", "min_2BBflux_l",\
"max_2BBflux_l", "Bol_2BBF_l", "min_2BolBBF_l", "max_2BolBBF_l",\
"fa_l", "min_fa_l", "max_fa_l", "dbb_kT_l", "min_dbbkT_l", "max_dbbkT_l",\
"dbb_Norm_l", "min_dbbNorm_l", "max_dbbNorm_l", "dbb_flux_l", "min_dbbflux_l",\
"max_dbbflux_l", "sBB_kT_l", "min_sBBkT_l", "max_sBBkt_l", "sBB_Norm_l",\
"min_sBBNorm_l", "max_sBBNorm_l", "sBB_flux_l", "min_sBBflux_l", "max_sBBflux_l",\
"RStat_l", "dof_l"]
def populate_with_zero(key_names):
for col in key_names:
data_frame[col][i] = (0)
return
def print_all_lengths(data_frame):
for key in data_frame:
print("%s: %d" %(key, len(data_frame[key])))
return
def spectrum_has_no_exposure():
print('this spectrum does not have any exposure')
columns_to_make_zero = ["BB_kT_l", "min_BBkT_l","max_BBkt_l",\
"BB_Norm_l", "min_BBNorm_l", "max_BBNorm_l", "BB_flux_l", \
"min_BBflux_l", "max_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "BB2_kT_l", "min_2BBkT_l", "max_2BBkt_l", "BB2_Norm_l",\
"min_2BBNorm_l", "max_2BBNorm_l", "BB2_flux_l", "min_2BBflux_l",\
"max_2BBflux_l", "Bol_2BBF_l", "min_2BolBBF_l", "max_2BolBBF_l",\
"fa_l", "min_fa_l", "max_fa_l", "dbb_kT_l", "min_dbbkT_l", "max_dbbkT_l",\
"dbb_Norm_l", "min_dbbNorm_l", "max_dbbNorm_l", "dbb_flux_l", "min_dbbflux_l",\
"max_dbbflux_l", "sBB_kT_l", "min_sBBkT_l", "max_sBBkt_l", "sBB_Norm_l",\
"min_sBBNorm_l", "max_sBBNorm_l", "sBB_flux_l", "min_sBBflux_l", "max_sBBflux_l",\
"RStat_l", "dof_l"]
#populate_with_zero(columns_to_make_zero)
print('Finished:')
print(sp_final)
return
def create_plots(components, x_max, x_min, ymax, ymin):
plot_fit_delchi(1,clearwindow=True, color='Black')
fig=plt.gcf()
ax1,ax2=fig.axes
ax1.set_title(source_name+' BID:'+bid+' MJD:'+str(mjdobs)+' SID:'+sid)
ax1.set_yscale('log')
ax1.set_xscale('log')
ax2.set_xscale('log')
ax2.set_xlabel('Energy [keV]', fontsize=14)
ax1.set_ylabel('Counts/sec/keV', fontsize=14)
ax2.set_ylabel('Sigma', fontsize=14)
ax1.set_xlim(x_min,x_max)
ax2.set_xlim(x_min,x_max)
plt.savefig(burst_folder+sid+'_b'+bid+'_'+fit_method+'_full.pdf',orientation='landscape', papertype='a4')
plt.savefig(burst_folder+sid+'_b'+bid+'_'+fit_method+'_full.png',orientation='landscape', papertype='a4')
plot_fit(1,clearwindow=True,xlog=True,ylog=True, color='Black')
if "dbb" in components:
plot_model_component("tb*dbb", replot=False, overplot=True, color='Green')
if "sbb" in components:
plot_model_component("tb*sbb", replot=False, overplot=True, color='Red')
if "bb" in components:
plot_model_component("tb*bb", replot=False, overplot=True, color='Black')
if "bb2" in components:
plot_model_component("tb*bb2", replot=False, overplot=True, color='Blue')
if "fa_dbb" in components:
plot_model_component("tb*fa*dbb", replot=False, overplot=True, color='Green')
if "fa_sbb" in components:
plot_model_component("tb*fa*sbb", replot=False, overplot=True, color='Red')
plt.title(source_name+' BID:'+bid+' MJD:'+str(mjdobs)+' SID:'+sid)
plt.xlabel('Energy [keV]', fontsize=14)
plt.ylabel('Counts/sec/keV', fontsize=14)
plt.xlim(x_min,x_max)
plt.ylim(ymin,ymax)
plt.savefig(burst_folder+sid+'_b'+bid+'_'+fit_method+'_full_comp.pdf',orientation='landscape', papertype='a4')
plt.savefig(burst_folder+sid+'_b'+bid+'_'+fit_method+'_full_comp.png',orientation='landscape', papertype='a4')
print('Skipping to the next spectrum')
# Enter here the name of the source as in the burster_v3f.dat :
source_name = '4U_1608-522'
# enter here the burstid of the burst you would like to fit :
bid = '5'
mine=input('Enter the minimum energy of the fits :')
mines = ":"+mine
maxe=input('Enter the maximum energy of the fits :')
maxes = maxe+":"
folder = '/home/hea/ownCloud/burst_characterization_v4/'
sfolder = '/home/hea/ownCloud/burst_characterization_v4/scripts/'
#folder = '/Users/tolga/ownCloud/burst_characterization_v3/'
#sfolder = '/Users/tolga/ownCloud/burst_characterization_v3/scripts/'
# Read the persistent state analysis (pre burst tbabs*DISKBB+BBODYRAD fit)
pers_file = folder+source_name+'/pers_results.dat'
#pers_file = "/home/rumeysa/Desktop/Burst_Spectral_Analysis"+'/pers_results.dat'
pers_data = pd.read_csv(pers_file)
#X=np.asarray(pers_data).astype(np.float64)
snh = pers_data['NH']
diskbb_temp = pers_data['disk_kT']
diskbb_norm = pers_data['disk_norm']
sbb_kt=pers_data['bb_kT']
sbb_norm=pers_data['bb_norm']
pers_rstat = pers_data['chi']
pers_dof = pers_data['dof']
sel_pre_burst = np.where(pers_data['BID']==int(bid))
ssnh = snh[sel_pre_burst[0]]
sdiskbb_temp = diskbb_temp[sel_pre_burst[0]]
sdiskbb_norm = diskbb_norm[sel_pre_burst[0]]
ssbb_kt = sbb_kt[sel_pre_burst[0]]
ssbb_norm = sbb_norm[sel_pre_burst[0]]
spers_rstat= pers_rstat[sel_pre_burst[0]]
spers_dof = pers_dof[sel_pre_burst[0]]
print('These are the values I get from persistent state analysis :')
print('NH ='+str(ssnh.values[0]))
print('Disk BB kT = '+str(sdiskbb_temp.values[0]))
print('Disk BB Norm = '+str(sdiskbb_norm.values[0]))
print('Surface BB kT = '+str(ssbb_kt.values[0]))
print('Surface BB Norm = '+str(ssbb_norm.values[0]))
print('Reduced Chi2 of = '+str(spers_rstat.values[0]/spers_dof.values[0]))
print('available fit_methods \n')
print('1=fixed background just BB free \n')
print('2=thawed background and one free BB \n')
print('3=fixed background one free BB and fa \n')
print('4=fixed background two BB \n')
fit_method = input('Please enter your fit preference : ')
# fit_method = '1'
burst_folder=folder+source_name+'/burst'+bid+'/'
bkg_folder = folder+source_name+'/burst'+bid+'/pers_analysis/'
bkgfile = glob.glob(bkg_folder+'*3c50*.pha.pi')
sp_list = np.array(glob.glob(burst_folder+'c*.pha'))
pha_count = len(sp_list)
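# Pre-allocate one slot per spectrum so results can be written by index even
# when individual spectra are skipped (e.g. zero exposure or too few dof).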
data_frame = {col: [0]*len(sp_list) for col in col_names}
set_stat("chi2xspecvar")
set_covar_opt("sigma",1.0)
set_conf_opt('numcores', 10)
set_conf_opt("max_rstat",250.0)
set_covar_opt('sigma',1.0)
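# chi2 with XSPEC-style variances; 1-sigma intervals from covar/conf, and a
# very generous max_rstat so conf does not refuse to run on poor fits.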
# for i in range(len(sp_list)-1):
for i in range(len(sp_list)):
sp_final = sp_list[i]
print(sp_final)
sp_hdu = fits.open(str(sp_final))
print('read src spec: '+str(sp_final))
mjdobs = sp_hdu[1].header['MJD-OBS']
date_obsi = sp_hdu[1].header['DATE-OBS']
exposure = sp_hdu[1].header['EXPOSURE']
obsid = sp_hdu[1].header['OBS_ID']
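# The segment ID comes from the file name (8th path component), dropping the
# leading 'c'; this assumes the directory layout defined by `folder` above.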
sid = sp_final.split('/')[7].split('_')[0][1:]
date_obs = str(Time(date_obsi,format='isot', scale='utc'))
print(date_obs)
print(obsid)
src_object = sp_hdu[1].header['OBJECT']
data_frame["Source_Name_l"][i] = (src_object)
data_frame["BID_l"][i] = (bid)
data_frame["SID_l"][i] = (sid)
data_frame["OBS_ID_l"][i] = (obsid)
data_frame["MJD_OBS_l"][i] = (mjdobs)
data_frame["DATE_OBS_l"][i] = (date_obs)
data_frame["exp_l"][i] = (exposure)
data_frame["NH_l"][i] = (ssnh.values[0])
if exposure == 0.0:
spectrum_has_no_exposure()
continue
# print(date_obsi)
else:
load_pha(1, str(sp_final),use_errors=True)
load_arf(1, sfolder+'nixtionaxis20170601_combined_v004_1434.arf')
load_rmf(1, sfolder+'nixti20170601_combined_v002_1434.rmf')
load_bkg(1, bkgfile[0],use_errors=True)
subtract()
print('This script only subtracts the ni3c50 background')
print('Ignoring : '+mines+' '+maxes)
ignore(mines+','+maxes)
print('Grouping the data to have at least 50 counts per channel')
group_counts(1, 50)
# first let's do a global source definition :
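# Persistent emission: absorbed (TBabs) disk blackbody + surface blackbody,
# frozen at the pre-burst best-fit values read from pers_results.dat.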
set_source(xstbabs.tb*(xsdiskbb.dbb+xsbbodyrad.sbb))
tb.nH=ssnh.values[0]
dbb.Tin = sdiskbb_temp.values[0]
dbb.norm = sdiskbb_norm.values[0]
sbb.kT = ssbb_kt.values[0]
sbb.norm = ssbb_norm.values[0]
freeze(tb.nH)
freeze(dbb.Tin)
freeze(dbb.norm)
freeze(sbb.kT)
freeze(sbb.norm)
#fit()
initial_rstat = sum(calc_chisqr())/len(calc_chisqr())
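# If the frozen persistent model already matches the data (total chi2 within
# ~3.5 of the pre-burst fit), no burst component is needed for this spectrum.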
if (initial_rstat < (spers_rstat.values[0]+3.50)/spers_dof.values[0]):
print('Current chi2 : '+str(initial_rstat))
print('Persistent chi2 : '+str(spers_rstat.values[0]/spers_dof.values[0]))
print('Deviation from the persistent emission is small; thawing the parameters and refitting to save the best-fit values:')
thaw(dbb.Tin)
thaw(dbb.norm)
thaw(sbb.kT)
thaw(sbb.norm)
fit()
if get_fit_results().dof <= 0.0:
print('The number of degrees of freedom is too small; skipping this spectrum:')
columns_to_make_zero = ["BB_kT_l", "min_BBkT_l","max_BBkt_l",\
"BB_Norm_l", "min_BBNorm_l", "max_BBNorm_l", "BB_flux_l", \
"min_BBflux_l", "max_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "BB2_kT_l", "min_2BBkT_l", "max_2BBkt_l", "BB2_Norm_l",\
"min_2BBNorm_l", "max_2BBNorm_l", "BB2_flux_l", "min_2BBflux_l",\
"max_2BBflux_l", "Bol_2BBF_l", "min_2BolBBF_l", "max_2BolBBF_l",\
"fa_l", "min_fa_l", "max_fa_l", "dbb_kT_l", "min_dbbkT_l", "max_dbbkT_l",\
"dbb_Norm_l", "min_dbbNorm_l", "max_dbbNorm_l", "dbb_flux_l", "min_dbbflux_l",\
"max_dbbflux_l", "sBB_kT_l", "min_sBBkT_l", "max_sBBkt_l", "sBB_Norm_l",\
"min_sBBNorm_l", "max_sBBNorm_l", "sBB_flux_l", "min_sBBflux_l", "max_sBBflux_l",\
"RStat_l"]
#populate_with_zero(columns_to_make_zero)
data_frame["dof_l"][i] = (get_fit_results().dof)
print('Finished:')
print(sp_final)
continue
covar()
chi = get_fit_results().statval
dof = get_fit_results().dof
parvals = np.array(get_covar_results().parvals)
parnames = np.array(get_covar_results().parnames)
parmins = np.array(get_covar_results().parmins)
parmaxes = np.array(get_covar_results().parmaxes)
covar()
cparmins = np.array(get_covar_results().parmins)
cparmaxes = np.array(get_covar_results().parmaxes)
#if get_covar_results().parmins[1] == None:
# min_dbbNorm_l[i] = (0)
#if get_covar_results().parmins[1] != None:
# min_dbbNorm_l[i] = (get_covar_results().parmins[1])
if (None in cparmins) == True or (None in cparmaxes) == True or (0 in cparmaxes) == True or (0 in cparmins) == True:
print('It seems like you have unconstrained parameters; errors cannot be calculated')
print('No flux will be reported')
columns_to_make_zero = ["dbb_flux_l", "min_dbbflux_l", "max_dbbflux_l", "sBB_flux_l",\
"max_sBBflux_l", "min_sBBflux_l", "min_dbbkT_l", "max_dbbkT_l",\
"min_dbbNorm_l", "max_dbbNorm_l", "min_sBBkT_l", "max_sBBkt_l",\
"min_sBBNorm_l", "max_sBBNorm_l"]
#populate_with_zero(columns_to_make_zero)
data_frame["dbb_kT_l"][i] = (get_covar_results().parvals[0])
data_frame["dbb_Norm_l"][i] = (get_covar_results().parvals[1])
data_frame["sBB_kT_l"][i] = (get_covar_results().parvals[2])
data_frame["sBB_Norm_l"][i] = (get_covar_results().parvals[3])
if (None in cparmins) == False and (None in cparmaxes) == False and (0 in cparmaxes) == False and (0 in cparmins) == False:
print('The parameters are well constrained; calculating errors')
#matrix = get_covar_results().extra_output
#is_all_zero = np. all((matrix >= 0))
#if is_all_zero:
sample2=sample_flux(dbb,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["dbb_flux_l"][i] = (sample2[1][0])
data_frame["max_dbbflux_l"][i] = (sample2[1][1]-sample2[1][0])
data_frame["min_dbbflux_l"][i] = (sample2[1][0]-sample2[1][2])
sample3=sample_flux(sbb,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["sBB_flux_l"][i] = (sample3[1][0])
data_frame["max_sBBflux_l"][i] = (sample3[1][1]-sample3[1][0])
data_frame["min_sBBflux_l"][i] = (sample3[1][0]-sample3[1][2])
# Parameter errors will be written as they are :
data_frame["dbb_kT_l"][i] = (get_covar_results().parvals[0])
data_frame["min_dbbkT_l"][i] = (get_covar_results().parmins[0])
data_frame["max_dbbkT_l"][i] = (get_covar_results().parmaxes[0])
data_frame["dbb_Norm_l"][i] = (get_covar_results().parvals[1])
data_frame["min_dbbNorm_l"][i] = (get_covar_results().parmins[1])
data_frame["max_dbbNorm_l"][i] = (get_covar_results().parmaxes[1])
data_frame["sBB_kT_l"][i] = (get_covar_results().parvals[2])
data_frame["min_sBBkT_l"][i] = (get_covar_results().parmins[2])
data_frame["max_sBBkt_l"][i] = (get_covar_results().parmaxes[2])
data_frame["sBB_Norm_l"][i] = (get_covar_results().parvals[3])
data_frame["min_sBBNorm_l"][i] = (get_covar_results().parmins[3])
data_frame["max_sBBNorm_l"][i] = (get_covar_results().parmaxes[3])
columns_to_make_zero = ["BB_kT_l", "min_BBkT_l","max_BBkt_l",\
"BB_Norm_l", "min_BBNorm_l", "max_BBNorm_l", "BB_flux_l", \
"min_BBflux_l", "max_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "BB2_kT_l", "min_2BBkT_l", "max_2BBkt_l", "BB2_Norm_l",\
"min_2BBNorm_l", "max_2BBNorm_l", "BB2_flux_l", "min_2BBflux_l",\
"max_2BBflux_l", "Bol_2BBF_l", "min_2BolBBF_l", "max_2BolBBF_l",\
"fa_l", "min_fa_l", "max_fa_l"]
#populate_with_zero(columns_to_make_zero)
data_frame["RStat_l"][i] = (chi)
data_frame["dof_l"][i] = (dof)
plot_data()
x_max=max(get_data_plot().x)+max(get_data_plot().x)*0.05
x_min = np.abs(min(get_data_plot().x)-min(get_data_plot().x)*0.05)
ymax = max(get_data_plot().y)+max(get_data_plot().y)*0.2
ymin = np.abs(min(get_data_plot().y)-min(get_data_plot().y)*0.05)
create_plots(["dbb", "sbb"], x_max, x_min, ymax, ymin)
print('Skipping to the next spectrum')
continue
else:
print('Current chi2 : '+str(initial_rstat))
print('Persistent chi2 : '+str(spers_rstat.values[0]/spers_dof.values[0]))
print('Simple Persistent Emission Does Not Fit the data we need to add components')
if (fit_method == '1') or (fit_method == '4'):
if (fit_method == '1'):
print('This is 1=fixed background just BB free')
else:
print('This is 4=fixed background two BB')
set_source(xstbabs.tb*(xsbbodyrad.bb+xsdiskbb.dbb+xsbbodyrad.sbb))
tb.nH=ssnh.values[0]
dbb.Tin = sdiskbb_temp.values[0]
dbb.norm = sdiskbb_norm.values[0]
sbb.kT = ssbb_kt.values[0]
sbb.norm = ssbb_norm.values[0]
freeze(tb.nH)
freeze(dbb.Tin)
freeze(dbb.norm)
freeze(sbb.kT)
freeze(sbb.norm)
columns_to_make_zero = ["min_dbbkT_l", "max_dbbkT_l", "min_dbbNorm_l", "max_dbbNorm_l",\
"dbb_flux_l", "min_dbbflux_l", "max_dbbflux_l", "min_sBBkT_l",\
"max_sBBkt_l", "min_sBBNorm_l", "max_sBBNorm_l", "sBB_flux_l",\
"min_sBBflux_l", "max_sBBflux_l","fa_l", "min_fa_l","max_fa_l" ]
#populate_with_zero(columns_to_make_zero)
data_frame["dbb_kT_l"][i] = (diskbb_temp.values[0])
data_frame["dbb_Norm_l"][i] = (diskbb_norm.values[0])
data_frame["sBB_Norm_l"][i] = (ssbb_norm.values[0])
elif (fit_method == '2'):
print('2=thawed background and one free BB')
set_source(xstbabs.tb*(xsbbodyrad.bb+xsdiskbb.dbb+xsbbodyrad.sbb))
tb.nH=ssnh.values[0]
dbb.Tin = sdiskbb_temp.values[0]
dbb.norm = sdiskbb_norm.values[0]
sbb.kT = ssbb_kt.values[0]
sbb.norm = ssbb_norm.values[0]
thaw(dbb.Tin)
thaw(dbb.norm)
thaw(sbb.kT)
thaw(sbb.norm)
freeze(tb.nH)
elif fit_method == '3':
print('3=fixed background one free BB and fa')
set_source(xstbabs.tb*(xsbbodyrad.bb+scale1d.fa*(xsdiskbb.dbb+xsbbodyrad.sbb)))
tb.nH=ssnh.values[0]
dbb.Tin = sdiskbb_temp.values[0]
dbb.norm = sdiskbb_norm.values[0]
sbb.kT = ssbb_kt.values[0]
sbb.norm = ssbb_norm.values[0]
freeze(tb.nH)
freeze(dbb.Tin)
freeze(dbb.norm)
freeze(sbb.kT)
freeze(sbb.norm)
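# fa is a multiplicative factor scaling the (frozen) persistent emission during
# the burst; it is bounded below at 0.9 so it cannot drop far below unity.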
set_par(fa.c0, val=1.0, min=0.9)
columns_to_make_zero = ["min_dbbkT_l", "max_dbbkT_l", "min_dbbNorm_l", "max_dbbNorm_l",\
"dbb_flux_l", "min_dbbflux_l", "max_dbbflux_l", "min_sBBkT_l",\
"max_sBBkt_l", "min_sBBNorm_l", "max_sBBNorm_l", "sBB_flux_l",\
"min_sBBflux_l", "max_sBBflux_l"]
#populate_with_zero(columns_to_make_zero)
data_frame["dbb_kT_l"][i] = (diskbb_temp.values[0])
data_frame["dbb_Norm_l"][i] = (diskbb_norm.values[0])
data_frame["sBB_kT_l"][i] = (ssbb_kt.values[0])
data_frame["sBB_Norm_l"][i] = (ssbb_norm.values[0])
bb.kT=0.5
set_xsabund('wilm')
bb.norm = 180.3
set_method("moncar")
fit()
set_method("levmar")
fit()
chi = get_fit_results().statval
dof = get_fit_results().dof
if get_fit_results().dof <= 0.0:
print('The number of degrees of freedom is too small; skipping this spectrum:')
columns_to_make_zero = ["BB_kT_l", "min_BBkT_l","max_BBkt_l",\
"BB_Norm_l", "min_BBNorm_l", "max_BBNorm_l", "BB_flux_l", \
"min_BBflux_l", "max_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "BB2_kT_l", "min_2BBkT_l", "max_2BBkt_l", "BB2_Norm_l",\
"min_2BBNorm_l", "max_2BBNorm_l", "BB2_flux_l", "min_2BBflux_l",\
"max_2BBflux_l", "Bol_2BBF_l", "min_2BolBBF_l", "max_2BolBBF_l",\
"fa_l", "min_fa_l", "max_fa_l", "RStat_l"]
#populate_with_zero(columns_to_make_zero)
data_frame["dof_l"][i] = (get_fit_results().dof)
print('Finished:')
print(sp_final)
continue
if (fit_method == '4') and (get_fit_results().rstat>=1.5):
print('Fit is not acceptable trying to add a second blackbody')
set_source(tb*(bb+xsbbodyrad.bb2+dbb+sbb))
tb.nH=ssnh.values[0]
dbb.Tin = sdiskbb_temp.values[0]
dbb.norm = sdiskbb_norm.values[0]
sbb.kT = ssbb_kt.values[0]
sbb.norm = ssbb_norm.values[0]
freeze(tb.nH)
freeze(dbb.Tin)
freeze(dbb.norm)
freeze(sbb.kT)
freeze(sbb.norm)
bb2.kT=0.2
fit()
covar()
chi = get_fit_results().statval
dof = get_fit_results().dof
parvals = np.array(get_covar_results().parvals)
parnames = np.array(get_covar_results().parnames)
parmins = np.array(get_covar_results().parmins)
parmaxes = np.array(get_covar_results().parmaxes)
#BB_kT_l[i] = (get_covar_results().parvals[0])
#min_BBkT_l[i] = (get_covar_results().parmins[0])
#max_BBkt_l[i] = (get_covar_results().parmaxes[0])
#BB_Norm_l[i] = (get_covar_results().parvals[1])
#min_BBNorm_l[i] = (get_covar_results().parmins[1])
#max_BBNorm_l[i] = (get_covar_results().parmaxes[1])
#BB2_kT_l[i] = (get_covar_results().parvals[2])
#min_2BBkT_l[i] = (get_covar_results().parmins[2])
#max_2BBkt_l[i] = (get_covar_results().parmaxes[2])
#BB2_Norm_l[i] = (get_covar_results().parvals[3])
#min_2BBNorm_l[i] = (get_covar_results().parmins[3])
#max_2BBNorm_l[i] = (get_covar_results().parmaxes[3])
# now the model unabsorbed fluxes :
covar()
cparmins = np.array(get_covar_results().parmins)
cparmaxes = np.array(get_covar_results().parmaxes)
if (None in cparmins) == True or (None in cparmaxes) == True or (0 in cparmaxes) == True or (0 in cparmins) == True:
print('It seems like you have unconstrained parameters; errors cannot be calculated')
print('No flux will be reported')
columns_to_make_zero = ["BB_flux_l", "min_BBflux_l", "max_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "BB2_flux_l", "min_2BBflux_l","min_BBkT_l", "max_BBkt_l", \
"max_2BBflux_l", "min_2BolBBF_l", "max_2BolBBF_l", "min_BBNorm_l", "max_BBNorm_l",\
"min_2BBkT_l", "max_2BBkT_l","min_2BBNorm_l", "max_2BBNorm_l"]
#populate_with_zero(columns_to_make_zero)
# Parameter Errors will be written as 0:
data_frame["BB_kT_l"][i] = (get_covar_results().parvals[1])
data_frame["BB_Norm_l"][i] = (get_covar_results().parvals[2])
data_frame["BB2_kT_l"][i] = (get_covar_results().parvals[3])
data_frame["BB2_Norm_l"][i] = (get_covar_results().parvals[3])
if (None in cparmins) == False and (None in cparmaxes) == False and (0 in cparmaxes) == False and (0 in cparmins) == False:
print('The parameters are well constrained; calculating errors')
#matrix = get_covar_results().extra_output
#is_all_zero = np. all((matrix >= 0))
#if is_all_zero:
sample1=sample_flux(bb,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["BB_flux_l"][i] = (sample1[1][0])
data_frame["max_BBflux_l"][i] = (sample1[1][1]-sample1[1][0])
data_frame["min_BBflux_l"][i] = (sample1[1][0]-sample1[1][2])
sample2=sample_flux(bb2,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["BB2_flux_l"][i] = (sample2[1][0])
data_frame["max_2BBflux_l"][i] = (sample2[1][1]-sample2[1][0])
data_frame["min_2BBflux_l"][i] = (sample2[1][0]-sample2[1][2])
# Parameter errors will be written as they are
data_frame["BB_kT_l"][i] = (get_covar_results().parvals[0])
data_frame["min_BBkT_l"][i] = (get_covar_results().parmins[0])
data_frame["max_BBkt_l"][i] = (get_covar_results().parmaxes[0])
data_frame["BB_Norm_l"][i] = (get_covar_results().parvals[1])
data_frame["min_BBNorm_l"][i] = (get_covar_results().parmins[1])
data_frame["max_BBNorm_l"][i] = (get_covar_results().parmaxes[1])
data_frame["BB2_kT_l"][i] = (get_covar_results().parvals[2])
data_frame["min_2BBkT_l"][i] = (get_covar_results().parmins[2])
data_frame["max_2BBkt_l"][i] = (get_covar_results().parmaxes[2])
data_frame["BB2_Norm_l"][i] = (get_covar_results().parvals[3])
data_frame["min_2BBNorm_l"][i] = (get_covar_results().parmins[3])
data_frame["max_2BBNorm_l"][i] = (get_covar_results().parmaxes[3])
# Now the Bolometric Fluxes :
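# For XSPEC bbodyrad, norm = R_km^2 / D_10kpc^2, so the bolometric flux is
# F_bol = 1.076e-11 * (kT/keV)^4 * norm erg s^-1 cm^-2. Note covar() reports
# parmins as negative offsets, so they are added to obtain the lower bounds.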
data_frame["Bol_BBF_l"][i] = (1.076e-11*((get_covar_results().parvals[0])**4.0)*get_covar_results().parvals[1])
data_frame["max_BolBBF_l"][i] = (1.076e-11*((get_covar_results().parvals[0]+get_covar_results().parmaxes[0])**4.0)*(get_covar_results().parvals[1]+get_covar_results().parmaxes[1]))
data_frame["min_BolBBF_l"][i] = (1.076e-11*((get_covar_results().parvals[0]-get_covar_results().parmins[0])**4.0)*(get_covar_results().parvals[1]-get_covar_results().parmins[1]))
data_frame["Bol_2BBF_l"][i] = (1.076e-11*((get_covar_results().parvals[2])**4.0)*get_covar_results().parvals[3])
data_frame["max_2BolBBF_l"][i] = (1.076e-11*((get_covar_results().parvals[2]+get_covar_results().parmaxes[2])**4.0)*(get_covar_results().parvals[3]+get_covar_results().parmaxes[3]))
data_frame["min_2BolBBF_l"][i] = (1.076e-11*((get_covar_results().parvals[2]-get_covar_results().parmins[2])**4.0)*(get_covar_results().parvals[3]-get_covar_results().parmins[3]))
data_frame["RStat_l"][i] = (chi)
data_frame["dof_l"][i] = (dof)
# For Plotting Purposes
plot_data()
x_max=max(get_data_plot().x)+max(get_data_plot().x)*0.05
x_min = np.abs(min(get_data_plot().x)-min(get_data_plot().x)*0.05)
ymax = max(get_data_plot().y)+max(get_data_plot().y)*0.2
ymin = np.abs(min(get_data_plot().y)-min(get_data_plot().y)*0.05)
create_plots(["dbb", "sbb", "bb", "bb2"], x_max, x_min, ymax, ymin)
print('Finished, going to the next spectrum')
continue
if (fit_method == '4') and (get_fit_results().rstat < 1.5):
covar()
chi = get_fit_results().statval
dof = get_fit_results().dof
data_frame["RStat_l"][i] = (chi)
data_frame["dof_l"][i] = (dof)
parvals = np.array(get_covar_results().parvals)
parnames = np.array(get_covar_results().parnames)
parmins = np.array(get_covar_results().parmins)
parmaxes = np.array(get_covar_results().parmaxes)
#BB_kT_l[i] = (get_covar_results().parvals[0])
#min_BBkT_l[i] = (get_covar_results().parmins[0])
#max_BBkt_l[i] = (get_covar_results().parmaxes[0])
#BB_Norm_l[i] = (get_covar_results().parvals[1])
#min_BBNorm_l[i] = (get_covar_results().parmins[1])
#max_BBNorm_l[i] = (get_covar_results().parmaxes[1])
columns_to_make_zero = ["BB2_kT_l", "min_2BBkT_l", "max_2BBkt_l", "BB2_Norm_l", "min_2BBNorm_l", \
"max_2BBNorm_l", "Bol_2BBF_l", "max_2BolBBF_l","min_2BolBBF_l", "BB2_flux_l", \
"max_2BBflux_l", "min_2BBflux_l"]
#populate_with_zero(columns_to_make_zero)
# now the model unabsorbed fluxes :
covar()
cparmins = np.array(get_covar_results().parmins)
cparmaxes = np.array(get_covar_results().parmaxes)
if (None in cparmins) == True or (None in cparmaxes) == True or (0 in cparmaxes) == True or (0 in cparmins) == True:
print('It seems like you have unconstrained parameters; errors cannot be calculated')
print('No flux will be reported')
columns_to_make_zero = ["BB_flux_l", "max_BBflux_l", "min_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "min_BBkT_l", "max_BBkt_l","min_BBNorm_l", "max_BBNorm_l"]
#populate_with_zero(columns_to_make_zero)
# Parameter Errors will be written as 0:
data_frame["BB_kT_l"][i] = (get_covar_results().parvals[0])
data_frame["BB_Norm_l"][i] = (get_covar_results().parvals[1])
if (None in cparmins) == False and (None in cparmaxes) == False and (0 in cparmaxes) == False and (0 in cparmins) == False:
print('The parameters are well constrained; calculating errors')
matrix = get_covar_results().extra_output
is_all_zero = np.all((matrix >= 0))
if is_all_zero:
sample1=sample_flux(bb,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["BB_flux_l"][i] = (sample1[1][0])
data_frame["max_BBflux_l"][i] = (sample1[1][1]-sample1[1][0])
data_frame["min_BBflux_l"][i] = (sample1[1][0]-sample1[1][2])
# Parameter errors will be written as they are :
data_frame["BB_kT_l"][i] = (get_covar_results().parvals[1])
data_frame["min_BBkT_l"][i] = (get_covar_results().parmins[1])
data_frame["max_BBkt_l"][i] = (get_covar_results().parmaxes[1])
data_frame["BB_Norm_l"][i] = (get_covar_results().parvals[2])
data_frame["min_BBNorm_l"][i] = (get_covar_results().parmins[2])
data_frame["max_BBNorm_l"][i] = (get_covar_results().parmaxes[2])
# Now the Bolometric Fluxes :
sample_bol=sample_flux(bb,0.01,200.0, num=1e4, correlated=True,confidence=68)
data_frame["Bol_BBF_l"][i] = sample_bol[1][0]
data_frame["max_BolBBF_l"][i] = sample_bol[1][1]-sample_bol[1][0]
data_frame["min_BolBBF_l"][i] = sample_bol[1][0]-sample_bol[1][2]
plot_data()
x_max=max(get_data_plot().x)+max(get_data_plot().x)*0.05
x_min = np.abs(min(get_data_plot().x)-min(get_data_plot().x)*0.05)
ymax = max(get_data_plot().y)+max(get_data_plot().y)*0.2
ymin = np.abs(min(get_data_plot().y)-min(get_data_plot().y)*0.05)
create_plots(["dbb", "sbb", "bb"], x_max, x_min, ymax, ymin)
if (fit_method == '3'):
fit()
covar()
chi = get_fit_results().statval
dof = get_fit_results().dof
parvals = np.array(get_covar_results().parvals)
parnames = np.array(get_covar_results().parnames)
parmins = np.array(get_covar_results().parmins)
parmaxes = np.array(get_covar_results().parmaxes)
columns_to_make_zero = ["BB2_kT_l", "min_2BBkT_l", "max_2BBkt_l", "BB2_Norm_l", "min_2BBNorm_l", \
"max_2BBNorm_l", "Bol_2BBF_l", "max_2BolBBF_l","min_2BolBBF_l", "BB2_flux_l", \
"max_2BBflux_l", "min_2BBflux_l"]
#populate_with_zero(columns_to_make_zero)
data_frame["RStat_l"][i] = (chi)
data_frame["dof_l"][i] = (dof)
# now the model unabsorbed fluxes :
covar()
cparmins = np.array(get_covar_results().parmins)
cparmaxes = np.array(get_covar_results().parmaxes)
if (None in cparmins) == False and (None in cparmaxes) == False and (0 in cparmaxes) == False and (0 in cparmins) == False:
#print('The parameters are constrained well calculating errors')
#matrix = get_covar_results().extra_output
#is_all_zero = np. all((matrix >= 0))
#if is_all_zero:
sample1=sample_flux(bb,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["BB_flux_l"][i] = (sample1[1][0])
data_frame["max_BBflux_l"][i] = (sample1[1][1]-sample1[1][0])
data_frame["min_BBflux_l"][i] = (sample1[1][0]-sample1[1][2])
# Parameter errors will be written as they are :
data_frame["BB_kT_l"][i] = (get_covar_results().parvals[0])
data_frame["min_BBkT_l"][i] = (get_covar_results().parmins[0])
data_frame["max_BBkt_l"][i] = (get_covar_results().parmaxes[0])
data_frame["BB_Norm_l"][i] = (get_covar_results().parvals[1])
data_frame["min_BBNorm_l"][i] = (get_covar_results().parmins[1])
data_frame["max_BBNorm_l"][i] = (get_covar_results().parmaxes[1])
data_frame["fa_l"][i] = (get_covar_results().parvals[2])
data_frame["min_fa_l"][i] = (get_covar_results().parmins[2])
data_frame["max_fa_l"][i] = (get_covar_results().parmaxes[2])
# in this method these parameters should not be defined :
#data_frame["BB2_kT_l"][i] = (get_covar_results().parvals[3])
#data_frame["min_2BBkT_l"][i] = (get_covar_results().parmins[3])
#data_frame["max_2BBkt_l"][i] = (get_covar_results().parmaxes[3])
#data_frame["BB2_Norm_l"][i] = (get_covar_results().parvals[4])
#data_frame["min_2BBNorm_l"][i] = (get_covar_results().parmins[4])
#data_frame["max_2BBNorm_l"][i] = (get_covar_results().parmaxes[4])
#dbb_kT_l[i] = (get_covar_results().parvals[5])
#min_dbbkT_l[i] = (get_covar_results().parmins[5])
#max_dbbkT_l[i] = (get_covar_results().parmaxes[5])
#dbb_Norm_l[i] = (get_covar_results().parvals[6])
#min_dbbNorm_l[i] = (get_covar_results().parmins[6])
#max_dbbNorm_l[i] = (get_covar_results().parmaxes[6])
# Now the Bolometric Fluxes :
sample_bol=sample_flux(bb,0.01,200.0, num=1e4, correlated=True,confidence=68)
data_frame["Bol_BBF_l"][i] = sample_bol[1][0]
data_frame["max_BolBBF_l"][i] = sample_bol[1][1]-sample_bol[1][0]
data_frame["min_BolBBF_l"][i] = sample_bol[1][0]-sample_bol[1][2]
save_arrays(burst_folder+sid+'_b'+bid+'_'+fit_method+'_full_comp.dat',
[sample_bol[2][:,1], sample_bol[2][:,2],sample_bol[2][:,3]], ['kT', 'Norm','chi2'], clobber=True)
vals2=sample_bol[2]
idx=[0,1,2,3]
v1=vals2[:, idx]
for ii in range(len(v1)):
v1[ii][0] = v1[ii][0] * 10**7
v1[ii][3] = v1[ii][3] / dof
c=corner.corner(v1,labels=['flux', 'kT (keV)', r'Norm $R^2/D^2_{10kpc}$', r'$\chi^2$'],
show_titles=True, title_kwargs={"fontsize": 12}, color='Blue',title_fmt='.2f')
c.savefig(burst_folder+sid+'_b'+bid+'_'+fit_method+'_full_comp_corner.pdf',orientation='landscape', papertype='a4')
if (None in cparmins) == True or (None in cparmaxes) == True or (0 in cparmaxes) == True or (0 in cparmins) == True:
print('It seems like you have unconstrained parameters; errors cannot be calculated')
print('No flux will be reported')
columns_to_make_zero = ["BB_flux_l", "min_BBflux_l", "max_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "min_BBkT_l", "max_BBkt_l", "min_BBNorm_l","max_BBNorm_l", \
"min_fa_l", "max_fa_l"]
#populate_with_zero(columns_to_make_zero)
# Parameter Errors will be written as 0
data_frame["BB_kT_l"][i] = (get_covar_results().parvals[0])
data_frame["BB_Norm_l"][i] = (get_covar_results().parvals[1])
data_frame["fa_l"][i] = (get_covar_results().parvals[2])
#BB2_kT_l[i] = (get_covar_results().parvals[3])
#min_2BBkT_l[i] = (0)
#max_2BBkt_l[i] = (0)
#BB2_Norm_l[i] = (get_covar_results().parvals[4])
#min_2BBNorm_l[i] = (0)
#max_2BBNorm_l[i] = (0)
#dbb_kT_l[i] = (get_covar_results().parvals[5])
#min_dbbkT_l[i] = (0)
#max_dbbkT_l[i] = (0)
#dbb_Norm_l[i] = (get_covar_results().parvals[6])
#min_dbbNorm_l[i] = (0)
#max_dbbNorm_l[i] = (0)
plot_data()
x_max=max(get_data_plot().x)+max(get_data_plot().x)*0.1
x_min = np.abs(min(get_data_plot().x)-min(get_data_plot().x)*0.1)
ymax = max(get_data_plot().y)+max(get_data_plot().y)*0.2
ymin = np.abs(min(get_data_plot().y)-min(get_data_plot().y)*0.05)
create_plots(["fa_dbb", "fa_sbb", "bb"], x_max, x_min, ymax, ymin)
if (fit_method == '2'):
covar()
chi = get_fit_results().statval
dof = get_fit_results().dof
parvals = np.array(get_covar_results().parvals)
parnames = np.array(get_covar_results().parnames)
parmins = np.array(get_covar_results().parmins)
parmaxes = np.array(get_covar_results().parmaxes)
#BB_kT_l[i] = (get_covar_results().parvals[0])
#min_BBkT_l[i] = (get_covar_results().parmins[0])
#max_BBkt_l[i] = (get_covar_results().parmaxes[0])
#BB_Norm_l[i] = (get_covar_results().parvals[1])
#min_BBNorm_l[i] = (get_covar_results().parmins[1])
#max_BBNorm_l[i] = (get_covar_results().parmaxes[1])
#dbb_kT_l[i] = (get_covar_results().parvals[2])
#min_dbbkT_l[i] = (get_covar_results().parmins[2])
#max_dbbkT_l[i] = (get_covar_results().parmaxes[2])
#dbb_Norm_l[i] = (get_covar_results().parvals[3])
#min_dbbNorm_l[i] = (get_covar_results().parmins[3])
#max_dbbNorm_l[i] = (get_covar_results().parmaxes[3])
#sBB_kT_l[i] = (get_covar_results().parvals[4])
#min_sBBkT_l[i] = (get_covar_results().parmins[4])
#max_sBBkt_l[i] = (get_covar_results().parmaxes[4])
#sBB_Norm_l[i] = (get_covar_results().parvals[5])
#min_sBBNorm_l[i] = (get_covar_results().parmins[5])
#max_sBBNorm_l[i] = (get_covar_results().parmaxes[5])
covar()
cparmins = np.array(get_covar_results().parmins)
cparmaxes = np.array(get_covar_results().parmaxes)
if (None in cparmins) == True or (None in cparmaxes) == True or (0 in cparmaxes) == True or (0 in cparmins) == True:
print('It seems like you have unconstrained parameters; errors cannot be calculated')
print('No flux will be reported')
columns_to_make_zero = ["BB_flux_l", "min_BBflux_l", "max_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "dbb_flux_l", "min_dbbflux_l", "max_dbbflux_l","sBB_flux_l", \
"min_sBBflux_l", "max_sBBflux_l", "min_dbbNorm_l", "max_dbbNorm_l", \
"min_sBBkT_l", "max_sBBkt_l", "min_sBBNorm_l", "max_sBBNorm_l" ]
#populate_with_zero(columns_to_make_zero)
# Parameter Errors will be written as 0:
data_frame["dbb_Norm_l"][i] = (get_covar_results().parvals[1])
data_frame["sBB_kT_l"][i] = (get_covar_results().parvals[2])
data_frame["sBB_Norm_l"][i] = (get_covar_results().parvals[3])
if (None in cparmins) == False and (None in cparmaxes) == False and (0 in cparmaxes) == False and (0 in cparmins) == False:
print('The parameters are well constrained; calculating errors')
#matrix = get_covar_results().extra_output
#is_all_zero = np. all((matrix >= 0))
#if is_all_zero:
sample1=sample_flux(bb,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["BB_flux_l"][i] = (sample1[1][0])
data_frame["max_BBflux_l"][i] = (sample1[1][1]-sample1[1][0])
data_frame["min_BBflux_l"][i] = (sample1[1][0]-sample1[1][2])
sample2=sample_flux(dbb,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["dbb_flux_l"][i] = (sample2[1][0])
data_frame["max_dbbflux_l"][i] = (sample2[1][1]-sample2[1][0])
data_frame["min_dbbflux_l"][i] = (sample2[1][0]-sample2[1][2])
sample3=sample_flux(sbb,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["sBB_flux_l"][i] = (sample3[1][0])
data_frame["max_sBBflux_l"][i] = (sample3[1][1]-sample3[1][0])
data_frame["min_sBBflux_l"][i] = (sample3[1][0]-sample3[1][2])
# Parameter errors will be written as they are :
data_frame["BB_kT_l"][i] = (get_covar_results().parvals[0])
data_frame["min_BBkT_l"][i] = (get_covar_results().parmins[0])
data_frame["max_BBkt_l"][i] = (get_covar_results().parmaxes[0])
data_frame["BB_Norm_l"][i] = (get_covar_results().parvals[1])
data_frame["min_BBNorm_l"][i] = (get_covar_results().parmins[1])
data_frame["max_BBNorm_l"][i] = (get_covar_results().parmaxes[1])
data_frame["dbb_kT_l"][i] = (get_covar_results().parvals[2])
data_frame["min_dbbkT_l"][i] = (get_covar_results().parmins[2])
data_frame["max_dbbkT_l"][i] = (get_covar_results().parmaxes[2])
data_frame["dbb_Norm_l"][i] = (get_covar_results().parvals[3])
data_frame["min_dbbNorm_l"][i] = (get_covar_results().parmins[3])
data_frame["max_dbbNorm_l"][i] = (get_covar_results().parmaxes[3])
data_frame["sBB_kT_l"][i] = (get_covar_results().parvals[4])
data_frame["min_sBBkT_l"][i] = (get_covar_results().parmins[4])
data_frame["max_sBBkt_l"][i] = (get_covar_results().parmaxes[4])
data_frame["sBB_Norm_l"][i] = (get_covar_results().parvals[5])
data_frame["min_sBBNorm_l"][i] = (get_covar_results().parmins[5])
data_frame["max_sBBNorm_l"][i] = (get_covar_results().parmaxes[5])
#dbb_Norm_l[i] = (get_covar_results().parvals[1])
#min_dbbNorm_l[i] = (get_covar_results().parmins[1])
#max_dbbNorm_l[i] = (get_covar_results().parmaxes[1])
#sBB_kT_l[i] = (get_covar_results().parvals[2])
#min_sBBkT_l[i] = (get_covar_results().parmins[2])
#max_sBBkt_l[i] = (get_covar_results().parmaxes[2])
#sBB_Norm_l[i] = (get_covar_results().parvals[3])
#min_sBBNorm_l[i] = (get_covar_results().parmins[3])
#max_sBBNorm_l[i] = (get_covar_results().parmaxes[3])
# Now the Bolometric Fluxes :
data_frame["Bol_BBF_l"][i] = (1.076e-11*((get_covar_results().parvals[0])**4.0)*get_covar_results().parvals[1])
data_frame["max_BolBBF_l"][i] = (1.076e-11*((get_covar_results().parvals[0]+get_covar_results().parmaxes[0])**4.0)*(get_covar_results().parvals[1]+get_covar_results().parmaxes[1]))
data_frame["min_BolBBF_l"][i] = (1.076e-11*((get_covar_results().parvals[0]-get_covar_results().parmins[0])**4.0)*(get_covar_results().parvals[1]-get_covar_results().parmins[1]))
columns_to_make_zero = ["BB2_kT_l", "min_2BBkT_l", "max_2BBkt_l", "BB2_Norm_l", "min_2BBNorm_l", \
"max_2BBNorm_l", "BB2_flux_l", "max_2BBflux_l", "min_2BBflux_l","Bol_2BBF_l", \
"min_2BolBBF_l", "min_2BolBBF_l", "fa_l", "min_fa_l", "max_fa_l"]
#populate_with_zero(columns_to_make_zero)
data_frame["RStat_l"][i] = (chi)
data_frame["dof_l"][i] = (dof)
plot_data()
x_max=max(get_data_plot().x)+max(get_data_plot().x)*0.1
x_min = np.abs(min(get_data_plot().x)-min(get_data_plot().x)*0.1)
ymax = max(get_data_plot().y)+max(get_data_plot().y)*0.2
ymin = np.abs(min(get_data_plot().y)-min(get_data_plot().y)*0.05)
create_plots(["dbb", "sbb", "bb"], x_max, x_min, ymax, ymin)
if (fit_method == '1'):
fit()
covar()
chi = get_fit_results().statval
dof = get_fit_results().dof
parvals = np.array(get_covar_results().parvals)
parnames = np.array(get_covar_results().parnames)
parmins = np.array(get_covar_results().parmins)
parmaxes = np.array(get_covar_results().parmaxes)
#BB_kT_l[i] = (0)
#min_BBkT_l[i] = (0)
#max_BBkt_l[i] = (0)
#BB_Norm_l[i] = (0)
#min_BBNorm_l[i] = (0)
#max_BBNorm_l[i] = (0)
#BB_flux_l[i] = (0)
#max_BBflux_l[i] = (0)
#min_BBflux_l[i] = (0)
columns_to_make_zero = ["BB2_kT_l", "min_2BBkT_l", "max_2BBkt_l", "BB2_Norm_l", "min_2BBNorm_l", \
"max_2BBNorm_l", "Bol_2BBF_l", "max_2BolBBF_l","min_2BolBBF_l", "BB2_flux_l", \
"max_2BBflux_l", "min_2BBflux_l", "sBB_kT_l", "min_sBBkT_l", "max_sBBkt_l",\
"sBB_Norm_l", "min_sBBNorm_l","max_sBBNorm_l", "sBB_flux_l", "min_sBBflux_l",\
"max_sBBflux_l"]
#populate_with_zero(columns_to_make_zero)
# now the model unabsorbed fluxes :
covar()
cparmins = np.array(get_covar_results().parmins)
cparmaxes = np.array(get_covar_results().parmaxes)
if (None in cparmins) == True or (None in cparmaxes) == True or (0 in cparmaxes) == True or (0 in cparmins) == True:
print('It seems like you have unconstrained parameters; errors cannot be calculated')
print('No flux will be reported')
columns_to_make_zero = ["BB_flux_l", "max_BBflux_l", "min_BBflux_l", "Bol_BBF_l", "min_BolBBF_l", \
"max_BolBBF_l", "min_BBkT_l", "max_BBkt_l","min_BBNorm_l", "max_BBNorm_l"]
#populate_with_zero(columns_to_make_zero)
# Parameter Errors will be written as 0:
data_frame["BB_kT_l"][i] = (get_covar_results().parvals[0])
data_frame["BB_Norm_l"][i] = (get_covar_results().parvals[1])
#BB_flux_l[i] = (get_covar_results().parvals[2])
#max_BBflux_l[i] = (0)
#min_BBflux_l[i] = (0)
else: #if (None in cparmins) == False and (None in cparmaxes) == False and (0 in cparmaxes) == False and (0 in cparmins) == False:
print('The parameters are well constrained; calculating errors')
# matrix = get_covar_results().extra_output
# is_all_zero = np. all((matrix >= 0))
# if is_all_zero:
sample1=sample_flux(bb,float(mine),float(maxe), num=1e4, correlated=True,confidence=68)
data_frame["BB_flux_l"][i] = (sample1[1][0])
data_frame["max_BBflux_l"][i] = (sample1[1][1]-sample1[1][0])
data_frame["min_BBflux_l"][i] = (sample1[1][0]-sample1[1][2])
# Parameter errors will be written as they are :
data_frame["BB_kT_l"][i] = (get_covar_results().parvals[0])
data_frame["min_BBkT_l"][i] = (get_covar_results().parmins[0])
data_frame["max_BBkt_l"][i] = (get_covar_results().parmaxes[0])
data_frame["BB_Norm_l"][i] = (get_covar_results().parvals[1])
data_frame["min_BBNorm_l"][i] = (get_covar_results().parmins[1])
data_frame["max_BBNorm_l"][i] = (get_covar_results().parmaxes[1])
# Now the Bolometric Fluxes :
sample_bol=sample_flux(bb,0.01,200.0, num=1e4, correlated=True,confidence=68)
data_frame["Bol_BBF_l"][i] = sample_bol[1][0]
data_frame["max_BolBBF_l"][i] = sample_bol[1][1]-sample_bol[1][0]
data_frame["min_BolBBF_l"][i] = sample_bol[1][0]-sample_bol[1][2]
vals2=sample_bol[2]
idx=[0,1,2,3]
v1=vals2[:, idx]
for ii in range(len(v1)):
v1[ii][0] = v1[ii][0] * 10**7
v1[ii][3] = v1[ii][3] / dof
c=corner.corner(v1,labels=['flux', 'kT (keV)', r'Norm $R^2/D^2_{10kpc}$', r'$\chi^2$'],show_titles=True, title_kwargs={"fontsize": 12}, color='Blue',title_fmt='.2f')
c.savefig(burst_folder+sid+'_b'+bid+'_'+fit_method+'_full_comp_corner.pdf',orientation='landscape', papertype='a4')
data_frame["RStat_l"][i] = (chi)
data_frame["dof_l"][i] = (dof)
plot_data()
x_max=max(get_data_plot().x)+max(get_data_plot().x)*0.1
x_min = np.abs(min(get_data_plot().x)-min(get_data_plot().x)*0.1)
ymax = max(get_data_plot().y)+max(get_data_plot().y)*0.2
ymin = np.abs(min(get_data_plot().y)-min(get_data_plot().y)*0.05)
create_plots(["dbb", "sbb", "bb"], x_max, x_min, ymax, ymin)
print_all_lengths(data_frame)
df=pd.DataFrame.from_dict(data_frame)
#df.to_excel(burst_folder+source_name+'_sp_res_'+bid+'_'+fit_method+'.xlsx')
#df.to_csv(burst_folder+source_name+'_sp_res_'+bid+fit_method+'.csv')
sorted_df = df.sort_values(by='MJD_OBS_l')
sorted_df.to_excel(burst_folder+source_name+'_sp_res_'+bid+'_'+fit_method+'.xlsx')
sorted_df.to_csv(burst_folder+source_name+'_sp_res_'+bid+fit_method+'.csv')
| 69.43264 | 232 | 0.466194 | 7,212 | 63,392 | 3.749861 | 0.0599 | 0.017823 | 0.120914 | 0.053247 | 0.861485 | 0.838116 | 0.816669 | 0.798661 | 0.785535 | 0.747301 | 0 | 0.028461 | 0.410257 | 63,392 | 912 | 233 | 69.508772 | 0.694931 | 0 | 0 | 0.622357 | 0 | 0.001511 | 0.158093 | 0.00391 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.015106 | null | null | 0.093656 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9c0cf24582dd209590769b34eab6e4c196c9ac32 | 222 | py | Python | errors/views.py | germainlefebvre4/store-drive_api | 2d838d5020084f4b9059a72f98126fc07aa70c33 | [
"WTFPL"
] | null | null | null | errors/views.py | germainlefebvre4/store-drive_api | 2d838d5020084f4b9059a72f98126fc07aa70c33 | [
"WTFPL"
] | null | null | null | errors/views.py | germainlefebvre4/store-drive_api | 2d838d5020084f4b9059a72f98126fc07aa70c33 | [
"WTFPL"
] | null | null | null | from django.shortcuts import render
def get404error(request, exception):
return render(request, '404.html', {}, status=404)
def get500error(request, exception):
return render(request, '500.html', {}, status=500)
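# These handlers are typically wired up via handler404/handler500 in the
# project's root urls.py (assumed; not shown here).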
| 27.75 | 54 | 0.725225 | 27 | 222 | 5.962963 | 0.555556 | 0.198758 | 0.273292 | 0.347826 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 0.135135 | 222 | 7 | 55 | 31.714286 | 0.744792 | 0 | 0 | 0 | 0 | 0 | 0.072072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
9c7bdd2b781730a92176c127018add3d5af6b68a | 2,628 | py | Python | allure-pytest/test/filtering/severity_label_test.py | Sup3rGeo/allure-python | 568e7b18e7220b1bd260054447fca360fefea77f | [
"Apache-2.0"
] | 1 | 2021-02-19T21:00:11.000Z | 2021-02-19T21:00:11.000Z | allure-pytest/test/filtering/severity_label_test.py | licquia/allure-python | 97ee3c98e51f9127d70f214388b02fdc517b0ebb | [
"Apache-2.0"
] | null | null | null | allure-pytest/test/filtering/severity_label_test.py | licquia/allure-python | 97ee3c98e51f9127d70f214388b02fdc517b0ebb | [
"Apache-2.0"
] | 1 | 2020-08-05T05:40:44.000Z | 2020-08-05T05:40:44.000Z | """
>>> allure_report = getfixture('allure_report_with_params')('--allure-severities=trivial')
>>> assert_that(allure_report,
... all_of(
... has_property('test_cases', has_length(5)),
... has_property('test_groups', has_length(0))
... )) # doctest: +SKIP
"""
import pytest


@pytest.allure.severity(pytest.allure.severity_level.TRIVIAL)
def test_function_with_trivial_severity():
    """
    >>> allure_report = getfixture('allure_report_with_params')('--allure-severities=trivial')
    >>> assert_that(allure_report,
    ...             has_test_case('test_function_with_trivial_severity',
    ...                           with_status('passed')
    ...                           )
    ...             )
    """
    pass


class TestClass(object):
    @pytest.allure.severity(pytest.allure.severity_level.TRIVIAL)
    def test_method_with_trivial_severity(self):
        """
        >>> allure_report = getfixture('allure_report_with_params')('--allure-severities=trivial')
        >>> assert_that(allure_report,
        ...             has_test_case('test_method_with_trivial_severity',
        ...                           with_status('passed')
        ...                           )
        ...             )
        """
        pass

    @pytest.allure.severity(pytest.allure.severity_level.NORMAL)
    def test_method_with_normal_severity(self):
        """
        >>> from hamcrest import not_
        >>> allure_report = getfixture('allure_report_with_params')('--allure-severities=trivial')
        >>> assert_that(allure_report,
        ...             not_(has_test_case('test_method_with_normal_severity'))
        ...             )
        """
        pass


@pytest.allure.severity(pytest.allure.severity_level.TRIVIAL)
class TestClassAgain(object):
    def test_method_with_whole_class_trivial_severity(self):
        """
        >>> allure_report = getfixture('allure_report_with_params')('--allure-severities=trivial')
        >>> assert_that(allure_report,
        ...             has_test_case('test_method_with_whole_class_trivial_severity',
        ...                           with_status('passed')
        ...                           )
        ...             )
        """
        pass

    @pytest.allure.severity(pytest.allure.severity_level.NORMAL)
    def test_method_with_overridden_class_severity(self):
        """
        >>> from hamcrest import not_
        >>> allure_report = getfixture('allure_report_with_params')('--allure-severities=trivial')
        >>> assert_that(allure_report,
        ...             not_(has_test_case('test_method_with_overridden_class_severity'))
        ...             )
        """
        pass
| 34.578947 | 98 | 0.596271 | 254 | 2,628 | 5.732283 | 0.169291 | 0.148352 | 0.137363 | 0.115385 | 0.903159 | 0.858516 | 0.832418 | 0.78228 | 0.743819 | 0.743819 | 0 | 0.001041 | 0.269026 | 2,628 | 75 | 99 | 35.04 | 0.756897 | 0.603881 | 0 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0.277778 | 0.055556 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
92c243b9232568db6940c3893c96dfc2e6b1cbdf | 5,382 | py | Python | bfs_dfs.py | marcelosoares735/bfs-dfs | 80007efbaacb17ba398de0365f74e3bed12c31a9 | [
"MIT"
] | null | null | null | bfs_dfs.py | marcelosoares735/bfs-dfs | 80007efbaacb17ba398de0365f74e3bed12c31a9 | [
"MIT"
] | null | null | null | bfs_dfs.py | marcelosoares735/bfs-dfs | 80007efbaacb17ba398de0365f74e3bed12c31a9 | [
"MIT"
] | null | null | null |
import copy
class node:
    def __init__(self, estado, pai, movimento):
        self.estado = copy.deepcopy(estado)
        self.pai = pai
        self.movimento = movimento


estadoFinal = [[1, 2, 3], [8, 0, 4], [7, 6, 5]]
estadoInicial = [[0, 1, 3], [6, 8, 2], [7, 5, 4]]


def swap(l, c, nl, nc, estado):
    aux = estado[l][c]
    estado[l][c] = estado[nl][nc]
    estado[nl][nc] = aux
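# Breadth-first search over 8-puzzle states: `estados` acts as a FIFO queue
# (`cont` only ever advances), so the first solution found uses the fewest moves.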
def bfs():
    estados = [copy.deepcopy(estadoInicial)]
    nos = [node(estadoInicial, None, None)]
    cont = 0
    movimentos = []
    while True:
        estadoAtual = copy.deepcopy(estados[cont])
        if estadoAtual == estadoFinal:
            no = nos[cont]
            while no.pai is not None:
                movimentos.insert(0, no.movimento)
                no = no.pai
            print("number of expanded nodes: ", len(estados))
            return movimentos
        for i in range(3):
            for j in range(3):
                if estadoAtual[i][j] == 0:
                    zeroCoord = [i, j]
        # move left
        if zeroCoord[1] > 0:
            estadoAux = copy.deepcopy(estadoAtual)
            swap(zeroCoord[0], zeroCoord[1], zeroCoord[0], zeroCoord[1] - 1, estadoAux)
            if estadoAux not in estados:
                nos.append(node(estadoAux, nos[cont], "left"))
                estados.append(estadoAux)
        # move right
        if zeroCoord[1] < 2:
            estadoAux = copy.deepcopy(estadoAtual)
            swap(zeroCoord[0], zeroCoord[1], zeroCoord[0], zeroCoord[1] + 1, estadoAux)
            if estadoAux not in estados:
                nos.append(node(estadoAux, nos[cont], "right"))
                estados.append(estadoAux)
        # move up
        if zeroCoord[0] > 0:
            estadoAux = copy.deepcopy(estadoAtual)
            swap(zeroCoord[0], zeroCoord[1], zeroCoord[0] - 1, zeroCoord[1], estadoAux)
            if estadoAux not in estados:
                nos.append(node(estadoAux, nos[cont], "up"))
                estados.append(estadoAux)
        # move down
        if zeroCoord[0] < 2:
            estadoAux = copy.deepcopy(estadoAtual)
            swap(zeroCoord[0], zeroCoord[1], zeroCoord[0] + 1, zeroCoord[1], estadoAux)
            if estadoAux not in estados:
                nos.append(node(estadoAux, nos[cont], "down"))
                estados.append(estadoAux)
        cont += 1
def dfs():
estados = [copy.deepcopy(estadoInicial)]
nos = [node(estadoInicial, None, None)]
cont = 0
nivel = 0
movimentos = []
while True:
if nivel == 50:
aux = estados.pop(cont)
estados.append(aux)
noaux = nos.pop(cont)
nos.append(noaux)
nivel -= 1
estadoAtual = copy.deepcopy(estados[cont])
if estadoAtual == estadoFinal:
no = nos[cont]
while no.pai is not None:
movimentos.insert(0, no.movimento)
no = no.pai
print("quantidade de nos expandidos: ", len(estados))
return movimentos
for i in range(3):
for j in range(3):
if estadoAtual[i][j] == 0:
zeroCoord = [i, j]
# vai pra esquerda
if zeroCoord[1] > 0:
estadoAux = copy.deepcopy(estadoAtual)
swap(zeroCoord[0], zeroCoord[1], zeroCoord[0], zeroCoord[1] - 1, estadoAux)
if estadoAux not in estados:
estados.insert(cont, estadoAux)
nos.insert(cont, node(estadoAux, nos[cont], "esquerda"))
nivel += 1
continue
# vai pra direita
if zeroCoord[1] < 2:
estadoAux = copy.deepcopy(estadoAtual)
swap(zeroCoord[0], zeroCoord[1], zeroCoord[0], zeroCoord[1] + 1, estadoAux)
if estadoAux not in estados:
estados.insert(cont, estadoAux)
nos.insert(cont, node(estadoAux, nos[cont], "direita"))
nivel += 1
continue
#vai pra cima
if zeroCoord[0] > 0:
estadoAux = copy.deepcopy(estadoAtual)
swap(zeroCoord[0], zeroCoord[1], zeroCoord[0] - 1, zeroCoord[1], estadoAux)
if estadoAux not in estados:
estados.insert(cont, estadoAux)
nos.insert(cont, node(estadoAux, nos[cont], "cima"))
nivel += 1
continue
#vai pra baixo
if zeroCoord[0] < 2:
estadoAux = copy.deepcopy(estadoAtual)
swap(zeroCoord[0], zeroCoord[1], zeroCoord[0] + 1, zeroCoord[1], estadoAux)
if estadoAux not in estados:
estados.insert(cont, estadoAux)
nos.insert(cont, node(estadoAux, nos[cont], "baixo"))
nivel += 1
continue
cont += 1
nivel -= 1
a = int(input("digite 1 para busca em largura ou 2 para busca em profundidade\n"))
if a == 1:
movimentos = bfs()
print("quantidade de movimentos: ", len(movimentos))
print(movimentos)
if a == 2:
movimentos = dfs()
print("quantidade de movimentos: ", len(movimentos))
print(movimentos)
| 30.579545 | 88 | 0.514864 | 583 | 5,382 | 4.746141 | 0.135506 | 0.07228 | 0.0824 | 0.086737 | 0.822552 | 0.766173 | 0.766173 | 0.766173 | 0.726419 | 0.726419 | 0 | 0.029333 | 0.37291 | 5,382 | 175 | 89 | 30.754286 | 0.790519 | 0.021739 | 0 | 0.731707 | 0 | 0 | 0.044216 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03252 | false | 0 | 0.00813 | 0 | 0.065041 | 0.04878 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
92cdf3cb4b532abe30448e7f26abad6cfe0b362f | 34,933 | py | Python | tests/alerters/mattermost_test.py | perceptron01/elastalert2 | bb91ecdb03dedda207237ca83d628fd5d40d29c6 | [
"Apache-2.0"
] | 250 | 2021-04-24T18:06:30.000Z | 2022-03-31T04:37:47.000Z | tests/alerters/mattermost_test.py | perceptron01/elastalert2 | bb91ecdb03dedda207237ca83d628fd5d40d29c6 | [
"Apache-2.0"
] | 129 | 2021-04-24T17:09:50.000Z | 2022-03-29T08:52:14.000Z | tests/alerters/mattermost_test.py | perceptron01/elastalert2 | bb91ecdb03dedda207237ca83d628fd5d40d29c6 | [
"Apache-2.0"
] | 128 | 2021-04-25T15:20:34.000Z | 2022-03-31T04:37:49.000Z | import json
import logging
import pytest
from unittest import mock
from requests import RequestException
from elastalert.alerters.mattermost import MattermostAlerter
from elastalert.loaders import FileRulesLoader
from elastalert.util import EAException
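
# Every test below follows the same pattern: build a rule dict, load it through
# FileRulesLoader, fire MattermostAlerter.alert() against a mocked requests.post,
# then decode the JSON payload the alerter posted and compare it field-by-field
# with the expected attachment.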

def test_mattermost_proxy(caplog):
    caplog.set_level(logging.INFO)
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_proxy': 'https://proxy.url',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n'
            }
        ], 'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies={'https': 'https://proxy.url'}
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data
    assert ('elastalert', logging.INFO, 'Alert sent to Mattermost') == caplog.record_tuples[0]


def test_mattermost_alert_text_only():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n'
            }
        ], 'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_not_alert_text_only():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'exclude_fields',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': []
            }
        ],
        'text': 'Test Mattermost Rule\n\n',
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_msg_fields():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_msg_fields': [
            {
                'title': 'Stack',
                'value': "{0} {1}",
                'short': False,
                'args': ["type", "msg.status_code"]
            },
            {
                'title': 'Name',
                'value': 'static field',
                'short': False
            }
        ],
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [
                    {'title': 'Stack', 'value': '<MISSING VALUE> <MISSING VALUE>', 'short': False},
                    {'title': 'Name', 'value': 'static field', 'short': False}
                ],
                'text': 'Test Mattermost Rule\n\n'
            }
        ], 'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_icon_url_override():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_icon_url_override': 'http://xxxx/icon.png',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n'
            }
        ],
        'username': 'elastalert',
        'icon_url': 'http://xxxx/icon.png'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_channel_override():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_channel_override': 'test channel',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n'
            }
        ],
        'username': 'elastalert',
        'channel': 'test channel'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_ignore_ssl_errors():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_ignore_ssl_errors': True,
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=False,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_title_link():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_title': 'mattermost.title',
        'mattermost_title_link': 'http://title.url',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n',
                'title': 'mattermost.title',
                'title_link': 'http://title.url'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_footer():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_footer': 'Mattermost footer',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n',
                'footer': 'Mattermost footer'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_footer_icon():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_footer_icon': 'http://icon.url',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n',
                'footer_icon': 'http://icon.url'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_image_url():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_image_url': 'http://image.url',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n',
                'image_url': 'http://image.url'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_thumb_url():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_thumb_url': 'http://thumb.url',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n',
                'thumb_url': 'http://thumb.url'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_author_name():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_author_name': 'author name',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n',
                'author_name': 'author name'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_author_link():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_author_link': 'http://author.link.url',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n',
                'author_link': 'http://author.link.url'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_author_icon():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_author_icon': 'http://author.icon.url',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n',
                'author_icon': 'http://author.icon.url'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_ea_exception():
    with pytest.raises(EAException) as ea:
        rule = {
            'name': 'Test Mattermost Rule',
            'type': 'any',
            'alert_text_type': 'alert_text_only',
            'mattermost_webhook_url': 'http://xxxxx',
            'mattermost_msg_pretext': 'aaaaa',
            'mattermost_msg_color': 'danger',
            'mattermost_author_icon': 'http://author.icon.url',
            'alert': [],
            'alert_subject': 'Test Mattermost'
        }
        rules_loader = FileRulesLoader({})
        rules_loader.load_modules(rule)
        alert = MattermostAlerter(rule)
        match = {
            '@timestamp': '2021-01-01T00:00:00',
            'somefield': 'foobarbaz'
        }
        mock_run = mock.MagicMock(side_effect=RequestException)
        with mock.patch('requests.post', mock_run), pytest.raises(RequestException):
            alert.alert([match])
    assert 'Error posting to Mattermost: ' in str(ea)


def test_mattermost_get_aggregation_summary_text__maximum_width():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_author_icon': 'http://author.icon.url',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    assert 75 == alert.get_aggregation_summary_text__maximum_width()


@pytest.mark.parametrize('msg_color, except_msg_color', [
    ('', 'danger'),
    ('danger', 'danger'),
    ('good', 'good'),
    ('warning', 'warning')
])
def test_mattermost_msg_color(msg_color, except_msg_color):
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_author_icon': 'http://author.icon.url',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    if msg_color:
        rule['mattermost_msg_color'] = msg_color

    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': except_msg_color,
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n',
                'author_icon': 'http://author.icon.url'
            }
        ],
        'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_getinfo():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)

    expected_data = {
        'type': 'mattermost',
        'mattermost_username_override': 'elastalert',
        'mattermost_webhook_url': ['http://xxxxx']
    }
    actual_data = alert.get_info()
    assert expected_data == actual_data


@pytest.mark.parametrize('mattermost_webhook_url, expected_data', [
    ('', 'Missing required option(s): mattermost_webhook_url'),
    ('http://xxxxx',
     {
         'type': 'mattermost',
         'mattermost_username_override': 'elastalert',
         'mattermost_webhook_url': ['http://xxxxx']
     }),
])
def test_mattermost_required_error(mattermost_webhook_url, expected_data):
    try:
        rule = {
            'name': 'Test Mattermost Rule',
            'type': 'any',
            'alert_text_type': 'alert_text_only',
            'alert': [],
            'alert_subject': 'Test Mattermost'
        }
        if mattermost_webhook_url:
            rule['mattermost_webhook_url'] = mattermost_webhook_url

        rules_loader = FileRulesLoader({})
        rules_loader.load_modules(rule)
        alert = MattermostAlerter(rule)

        actual_data = alert.get_info()
        assert expected_data == actual_data
    except Exception as ea:
        assert expected_data in str(ea)


def test_mattermost_attach_kibana_discover_url_when_generated():
    rule = {
        'name': 'Test Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_attach_kibana_discover_url': True,
        'mattermost_webhook_url': 'http://please.dontgohere.mattermost',
        'alert': []
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'kibana_discover_url': 'http://localhost:5601/app/discover#/'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Rule: ',
                'color': 'danger',
                'title': 'Test Rule',
                'pretext': '',
                'fields': [],
                'text': 'Test Rule\n\n'
            },
            {
                'color': '#ec4b98',
                'title': 'Discover in Kibana',
                'title_link': 'http://localhost:5601/app/discover#/'
            }
        ], 'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_attach_kibana_discover_url_when_not_generated():
    rule = {
        'name': 'Test Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_attach_kibana_discover_url': True,
        'mattermost_webhook_url': 'http://please.dontgohere.mattermost',
        'alert': []
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Rule: ',
                'color': 'danger',
                'title': 'Test Rule',
                'pretext': '',
                'fields': [],
                'text': 'Test Rule\n\n'
            }
        ], 'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_kibana_discover_title():
    rule = {
        'name': 'Test Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_attach_kibana_discover_url': True,
        'mattermost_kibana_discover_title': 'Click to discover in Kibana',
        'mattermost_webhook_url': 'http://please.dontgohere.mattermost',
        'alert': []
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'kibana_discover_url': 'http://localhost:5601/app/discover#/'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Rule: ',
                'color': 'danger',
                'title': 'Test Rule',
                'pretext': '',
                'fields': [],
                'text': 'Test Rule\n\n'
            },
            {
                'color': '#ec4b98',
                'title': 'Click to discover in Kibana',
                'title_link': 'http://localhost:5601/app/discover#/'
            }
        ], 'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_kibana_discover_color():
    rule = {
        'name': 'Test Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_attach_kibana_discover_url': True,
        'mattermost_kibana_discover_color': 'blue',
        'mattermost_webhook_url': 'http://please.dontgohere.mattermost',
        'alert': []
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'kibana_discover_url': 'http://localhost:5601/app/discover#/'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Rule: ',
                'color': 'danger',
                'title': 'Test Rule',
                'pretext': '',
                'fields': [],
                'text': 'Test Rule\n\n'
            },
            {
                'color': 'blue',
                'title': 'Discover in Kibana',
                'title_link': 'http://localhost:5601/app/discover#/'
            }
        ], 'username': 'elastalert'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data


def test_mattermost_username_override():
    rule = {
        'name': 'Test Mattermost Rule',
        'type': 'any',
        'alert_text_type': 'alert_text_only',
        'mattermost_webhook_url': 'http://xxxxx',
        'mattermost_msg_pretext': 'aaaaa',
        'mattermost_msg_color': 'danger',
        'mattermost_username_override': 'test user',
        'alert': [],
        'alert_subject': 'Test Mattermost'
    }
    rules_loader = FileRulesLoader({})
    rules_loader.load_modules(rule)
    alert = MattermostAlerter(rule)
    match = {
        '@timestamp': '2021-01-01T00:00:00',
        'somefield': 'foobarbaz'
    }
    with mock.patch('requests.post') as mock_post_request:
        alert.alert([match])

    expected_data = {
        'attachments': [
            {
                'fallback': 'Test Mattermost: aaaaa',
                'color': 'danger',
                'title': 'Test Mattermost',
                'pretext': 'aaaaa',
                'fields': [],
                'text': 'Test Mattermost Rule\n\n'
            }
        ], 'username': 'test user'
    }

    mock_post_request.assert_called_once_with(
        rule['mattermost_webhook_url'],
        data=mock.ANY,
        headers={'content-type': 'application/json'},
        verify=True,
        proxies=None
    )
    actual_data = json.loads(mock_post_request.call_args_list[0][1]['data'])
    assert expected_data == actual_data
| 30.482548 | 99 | 0.55844 | 3,405 | 34,933 | 5.465492 | 0.045228 | 0.088017 | 0.050779 | 0.03482 | 0.928049 | 0.910532 | 0.899194 | 0.895433 | 0.891617 | 0.889361 | 0 | 0.015763 | 0.300804 | 34,933 | 1,145 | 100 | 30.50917 | 0.746162 | 0 | 0 | 0.762415 | 0 | 0 | 0.327226 | 0.059943 | 0 | 0 | 0 | 0 | 0.046738 | 1 | 0.024343 | false | 0 | 0.00779 | 0 | 0.032132 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
132815db22ffb44c52f0f2882860a0c1be0f0467 | 29,251 | py | Python | pytests/security/ntonencryptionTests.py | couchbaselabs/testrunner-bharath | 96af90070da2140cc11c549db7403f5ea3b76d34 | [
"Apache-2.0"
] | 1 | 2020-08-31T18:51:45.000Z | 2020-08-31T18:51:45.000Z | pytests/security/ntonencryptionTests.py | couchbaselabs/testrunner-bharath | 96af90070da2140cc11c549db7403f5ea3b76d34 | [
"Apache-2.0"
] | null | null | null | pytests/security/ntonencryptionTests.py | couchbaselabs/testrunner-bharath | 96af90070da2140cc11c549db7403f5ea3b76d34 | [
"Apache-2.0"
] | 2 | 2020-07-24T07:12:01.000Z | 2022-03-17T23:43:28.000Z | from membase.api.rest_client import RestConnection, RestHelper
import urllib.request, urllib.parse, urllib.error
import json
import copy
import subprocess
import socket
import fileinput
import sys
import time
import os
from subprocess import Popen, PIPE
from remote.remote_util import RemoteMachineShellConnection
from basetestcase import BaseTestCase
from lib.testconstants import STANDARD_BUCKET_PORT
from couchbase_helper.documentgenerator import BlobGenerator
from security.ntonencryptionBase import ntonencryptionBase
from lib.couchbase_helper.tuq_helper import N1QLHelper
from cbas.cbas_base import *
# from fts.fts_callable import *
import logger
from .x509main import x509main


class ntonencryptionTest(BaseTestCase):
    def setUp(self):
        super(ntonencryptionTest, self).setUp()
        self._reset_original()
        self.bucket_list = RestConnection(self.master).get_buckets()
        self.shell = RemoteMachineShellConnection(self.master)
        self.item_flag = self.input.param("item_flag", 4042322160)
        self.full_docs_list = ''
        self.n1ql_port = 8093
        self.enable_nton_local = self.input.param('enable_nton_local', False)
        self.local_clusterEncryption = self.input.param('local_clusterEncryption', 'control')
        self.hostname = self.input.param('hostname', False)
        self.x509enable = self.input.param("x509enable", False)
        self.wildcard_dns = self.input.param("wildcard_dns", None)
        # ntonencryptionBase().setup_nton_cluster(self.servers, clusterEncryptionLevel=self.ntonencrypt_level)

    def tearDown(self):
        super(ntonencryptionTest, self).tearDown()
        ntonencryptionBase().disable_nton_cluster(self.servers)
        self._reset_original()

    def suite_setUp(self):
        self.log.info("---------------Suite Setup---------------")

    def suite_tearDown(self):
        self.log.info("---------------Suite Teardown---------------")

    def _reset_original(self):
        self.log.info("Reverting to original state - regenerating certificate and removing inbox folder")
        tmp_path = "/tmp/abcd.pem"
        for servers in self.servers:
            cli_command = "ssl-manage"
            remote_client = RemoteMachineShellConnection(servers)
            options = "--regenerate-cert={0}".format(tmp_path)
            output, error = remote_client.execute_couchbase_cli(cli_command=cli_command, options=options,
                                                                cluster_host=servers.ip, user="Administrator",
                                                                password="password")
            x509main(servers)._delete_inbox_folder()

    def perform_doc_ops_in_all_cb_buckets(self, num_items, operation, start_key=0, end_key=1000):
        """
        Create/Update/Delete docs in all cb buckets
        :param num_items: No. of items to be created/deleted/updated
        :param operation: String - "create","update","delete"
        :param start_key: Doc Key to start the operation with
        :param end_key: Doc Key to end the operation with
        :return:
        """
        try:
            age = list(range(70))
            first = ['james', 'sharon', 'dave', 'bill', 'mike', 'steve']
            template = '{{ "number": {0}, "first_name": "{1}" , "mutated":0}}'
            gen_load = DocumentGenerator('test_docs', template, age, first,
                                         start=start_key, end=end_key)
            self.log.info("%s %s documents..." % (operation, num_items))
            try:
                self._load_all_buckets(self.master, gen_load, operation, 0)
                # self._verify_stats_all_buckets(self.input.servers)
            except Exception as e:
                self.log.info("Exception is {0}".format(e))
        except:
            raise Exception("Error while loading document")

    def create_fts_index_query_compare(self, index_queue=None):
        """
        Call before upgrade
        1. creates a default index, one per bucket
        2. Loads fts json data
        3. Runs queries and compares the results against ElasticSearch
        """
        self.log.info("Create FTS index query")
        try:
            self.fts_obj = FTSCallable(nodes=self.servers, es_validate=False)
            for bucket in self.buckets:
                self.fts_obj.create_default_index(
                    index_name="index_{0}".format(bucket.name),
                    bucket_name=bucket.name)
            self.fts_obj.load_data(self.num_items)
            self.fts_obj.wait_for_indexing_complete()
            for index in self.fts_obj.fts_indexes:
                self.fts_obj.run_query_and_compare(index=index, num_queries=20)
            return self.fts_obj
        except Exception as ex:
            self.log.info("Exception is {0}".format(ex))

    def create_cbas_index_query(self):
        '''
        1. Create CBAS index and query the statement
        '''
        try:
            self.sleep(10)
            self.log.info("Creating CBAS Index")
            self.cbas_node = self.get_nodes_from_services_map(service_type="cbas")
            self.log.info("Creating CBAS Index")
            cbas_rest = RestConnection(self.cbas_node)
            self.log.info("Creating CBAS Index")
            dataset_stmt = 'create dataset dataset_1 on default'
            content = cbas_rest.execute_statement_on_cbas(dataset_stmt, None)
            self.log.info("Creating CBAS Index")
            connect_stmt = 'CONNECT LINK Local;'
            content = cbas_rest.execute_statement_on_cbas(connect_stmt, None)
            self.log.info("Creating CBAS Index")
            query_stmt = 'SELECT VALUE COUNT(*) FROM dataset_1;'
            content = cbas_rest.execute_statement_on_cbas(query_stmt, None)
            self.log.info("Creating CBAS Index")
        except Exception as ex:
            self.log.info("Exception is {0}".format(ex))
            raise Exception("Exception in CBAS node")

    def create_secondary_index_query(self, index_field, index_query=None, step='before'):
        '''
        1. Create an index
        2. Run query for the index
        '''
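        # For example, create_secondary_index_query('name', 'id', 'after') issues
        # (against the bucket named `default`):
        #     Create index name on default(name)
        #     select * from default where name is not NULL
        #     select * from default where id is not NULL
        #     select * from default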
        try:
            self.n1ql_node = self.get_nodes_from_services_map(service_type="n1ql")
            if self.n1ql_node is not None:
                self.n1ql_helper = N1QLHelper(shell=self.shell,
                                              max_verify=self.max_verify,
                                              buckets=self.buckets,
                                              item_flag=self.item_flag,
                                              n1ql_port=self.n1ql_port,
                                              full_docs_list=self.full_docs_list,
                                              log=self.log, input=self.input,
                                              master=self.n1ql_node,
                                              use_rest=True
                                              )
            query = "Create index " + index_field + " on default(" + index_field + ")"
            self.n1ql_helper.run_cbq_query(query=query, server=self.n1ql_node)
            if step == 'before':
                query = 'create primary index on default'
                self.n1ql_helper.run_cbq_query(query=query, server=self.n1ql_node)
            self.perform_doc_ops_in_all_cb_buckets(2000, 'create', end_key=2000)
            self.sleep(10)
            query = "select * from default where " + index_field + " is not NULL"
            self.n1ql_helper.run_cbq_query(query=query, server=self.n1ql_node)
            if index_query is not None:
                query = "select * from default where " + index_query + " is not NULL"
                self.n1ql_helper.run_cbq_query(query=query, server=self.n1ql_node)
            query = "select * from default"
            self.n1ql_helper.run_cbq_query(query=query, server=self.n1ql_node)
        except:
            raise Exception("Error while creating index/n1ql setup")

    def check_all_services(self, servers):
        cbas_node = self.get_nodes_from_services_map(service_type="cbas")
        if cbas_node is not None:
            self.create_cbas_index_query()
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            self.create_secondary_index_query('id')
            self.create_secondary_index_query('name', 'id', 'after')
        else:
            self.create_secondary_index_query('name')
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        if self.get_nodes_from_services_map(service_type="fts") is not None:
            self.create_fts_index_query_compare()
        self.perform_doc_ops_in_all_cb_buckets(2000, 'create', end_key=2000)

    def data_rebalance_in(self):
        '''
        1. Enable encryption on all nodes
        2. Add node to current cluster
        '''
        services_in = []
        self.perform_doc_ops_in_all_cb_buckets(100, 'create')
        time.sleep(30)
        for service in self.services_in.split("-"):
            services_in.append(service.split(":")[0])
        servs_inout = self.servers[self.nodes_init:]
        self.log.info("{0}".format(servs_inout))
        rebalance = self.cluster.async_rebalance(self.servers[:self.nodes_init], servs_inout, [],
                                                 services=services_in)
        rebalance.result()
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.local_clusterEncryption)
        # ntonencryptionBase(). (self.servers)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level)
        self.log.info("Final result is {0}".format(final_result))
        self.assertTrue(result)

    def index_rebalance_in(self):
        self.perform_doc_ops_in_all_cb_buckets(1000, 'create', end_key=1000)
        time.sleep(10)
        services_in = []
        self.log.info("list of services to be added {0}".format(self.services_in))
        for service in self.services_in.split("-"):
            services_in.append(service.split(":")[0])
        self.log.info("list of services to be added after formatting {0}".format(services_in))
        # add nodes to the cluster
        servs_inout = self.servers[self.nodes_init:]
        rebalance = self.cluster.async_rebalance(self.servers[:self.nodes_init], servs_inout, [],
                                                 services=services_in)
        rebalance.result()
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            self.create_secondary_index_query('id')
            ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.local_clusterEncryption)
            time.sleep(10)
            self.create_secondary_index_query('name', 'id', 'after')
        else:
            self.create_secondary_index_query('name')
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        self.perform_doc_ops_in_all_cb_buckets(2000, 'create', end_key=2000)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.local_clusterEncryption)
        else:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level)
        self.log.info("Final result is {0}".format(final_result))
        self.assertTrue(result)

    def cbas_rebalance_in(self):
        services_in = []
        self.perform_doc_ops_in_all_cb_buckets(10000, 'create', end_key=10000)
        # time.sleep(30)
        for service in self.services_in.split("-"):
            services_in.append(service.split(":")[0])
        self.log.info("list of services to be added after formatting {0}".format(services_in))
        servs_inout = self.servers[self.nodes_init:]
        rebalance = self.cluster.async_rebalance(self.servers[:self.nodes_init], servs_inout, [],
                                                 services=services_in)
        rebalance.result()
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.local_clusterEncryption)
        self.create_cbas_index_query()
        self.perform_doc_ops_in_all_cb_buckets(2000, 'create', end_key=2000)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.local_clusterEncryption)
        else:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level)
        self.log.info("Final result is {0}".format(final_result))
        self.assertTrue(result)

    def fts_rebalance_in(self):
        services_in = []
        self.perform_doc_ops_in_all_cb_buckets(10000, 'create')
        time.sleep(30)
        for service in self.services_in.split("-"):
            services_in.append(service.split(":")[0])
        self.log.info("list of services to be added after formatting {0}".format(services_in))
        servs_inout = self.servers[self.nodes_init:]
        rebalance = self.cluster.async_rebalance(self.servers[:self.nodes_init], servs_inout, [],
                                                 services=services_in)
        rebalance.result()
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.local_clusterEncryption)
        self.create_fts_index_query_compare()
        self.sleep(10)
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.local_clusterEncryption)
        else:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level)
        self.log.info("Final result is {0}".format(final_result))
        self.assertTrue(result)

    def all_services_rebalance_in(self):
        self.perform_doc_ops_in_all_cb_buckets(1000, 'create', end_key=1000)
        time.sleep(30)
        services_in = []
        self.log.info("list of services to be added {0}".format(self.services_in))
        for service in self.services_in.split("-"):
            services_in.append(service.split(":")[0])
        self.log.info("list of services to be added after formatting {0}".format(services_in))
        # add nodes to the cluster
        servs_inout = self.servers[self.nodes_init:]
        rebalance = self.cluster.async_rebalance(self.servers[:self.nodes_init], servs_inout, [],
                                                 services=services_in)
        rebalance.result()
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.local_clusterEncryption)
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        cbas_node = self.get_nodes_from_services_map(service_type="cbas")
        if cbas_node is not None:
            self.create_cbas_index_query()
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            self.create_secondary_index_query('id')
            self.create_secondary_index_query('name', 'id', 'after')
        else:
            self.create_secondary_index_query('name')
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        if self.get_nodes_from_services_map(service_type="fts") is not None:
            self.create_fts_index_query_compare()
        self.perform_doc_ops_in_all_cb_buckets(2000, 'create', end_key=2000)
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.local_clusterEncryption)
        else:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level)
        self.log.info("Final result is {0}".format(final_result))
        self.assertTrue(result)

    def all_rebalance_in_disable(self):
        self.perform_doc_ops_in_all_cb_buckets(10000, 'create')
        time.sleep(30)
        servs_inout = self.servers[self.nodes_init:]
        services_in = []
        for service in self.services_in.split("-"):
            services_in.append(service.split(":")[0])
        self.log.info("list of services to be added after formatting {0}".format(services_in))
        rebalance = self.cluster.async_rebalance(self.servers[:self.nodes_init], servs_inout, [],
                                                 services=services_in)
        rebalance.result()
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.local_clusterEncryption)
        # ntonencryptionBase().get_cluster_nton_status(self.master)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        ntonencryptionBase().ntonencryption_cli(self.servers, 'disable')
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.local_clusterEncryption, 'disable')
        else:
            result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level, 'disable')
        self.log.info("Final result is {0}".format(final_result))
        self.assertTrue(result)

    def test_add_nodes_x509_rebalance(self):
        servs_inout = self.servers[self.nodes_init:]
        services_in = []
        rest = RestConnection(self.master)
        copy_servers = copy.deepcopy(self.servers)
        self.log.info('before cert generate')
        x509main(self.master)._generate_cert(copy_servers, type='openssl', encryption='', key_length=1024,
                                             client_ip='172.16.1.174', alt_names='non_default', dns=None,
                                             uri=None, wildcard_dns=self.wildcard_dns)
        x509main(self.master).setup_master()
        x509main().setup_cluster_nodes_ssl(servs_inout)
        for service in self.services_in.split("-"):
            services_in.append(service.split(":")[0])
        self.log.info("list of services to be added after formatting {0}".format(services_in))
        # add nodes to the cluster
        servs_inout = self.servers[1:4]
        rebalance = self.cluster.async_rebalance(self.servers[:self.nodes_init], servs_inout, [],
                                                 services=services_in)
        rebalance.result()
        # Check if n2n can be enabled after adding x509 certificates
        if self.x509enable and self.ntonencrypt == 'enable':
            encryption_result = ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.ntonencrypt_level)
            self.assertTrue(encryption_result, "Retries Exceeded. Cannot enable n2n encryption")
        self.check_all_services(self.servers)
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        # ntonencryptionBase().get_cluster_nton_status(self.master)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level)
        self.assertTrue(result, 'Issue with results with x509 enable for sets')
        ntonencryptionBase().change_cluster_encryption_cli(self.servers, 'control')
        ntonencryptionBase().ntonencryption_cli(self.servers, 'disable')
        final_result_disable = ntonencryptionBase().check_server_ports(self.servers)
        self.log.info("{0}".format(final_result_disable))
        result = ntonencryptionBase().validate_results(self.servers, final_result_disable, self.ntonencrypt_level, 'disable')
        self.assertTrue(result, 'Issue with results with x509 disable for sets')

    def test_init_nodes_x509(self):
        servs_inout = self.servers[1:4]
        rest = RestConnection(self.master)
        copy_servers = copy.deepcopy(self.servers)
        self.log.info('before cert generate')
        if self.x509enable:
            x509main(self.master)._generate_cert(copy_servers, type='openssl', encryption='', key_length=1024,
                                                 client_ip='172.16.1.174', alt_names='non_default', dns=None,
                                                 uri=None, wildcard_dns=self.wildcard_dns)
            x509main(self.master).setup_master()
            x509main().setup_cluster_nodes_ssl(servs_inout, True)
        if self.ntonencrypt == 'disable' and self.enable_nton_local == True:
            ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.local_clusterEncryption)
        # Check if n2n can be enabled after adding x509 certificates
        elif self.x509enable and self.ntonencrypt == 'enable':
            encryption_result = ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.ntonencrypt_level)
            self.assertTrue(encryption_result, "Retries Exceeded. Cannot enable n2n encryption")
        self.check_all_services(self.servers)
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        # ntonencryptionBase().get_cluster_nton_status(self.master)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        self.log.info("{0}".format(final_result))
        result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level)
        self.assertTrue(result, 'Issue with results with x509 enable for sets')
        ntonencryptionBase().change_cluster_encryption_cli(self.servers, 'control')
        ntonencryptionBase().ntonencryption_cli(self.servers, 'disable')
        final_result_disable = ntonencryptionBase().check_server_ports(self.servers)
        result = ntonencryptionBase().validate_results(self.servers, final_result_disable, self.ntonencrypt_level, 'disable')
        self.assertTrue(result, 'Issue with results with x509 disable for sets')

    def test_add_nodes_x509_rebalance_rotate(self):
        servs_inout = self.servers[self.nodes_init:]
        services_in = []
        rest = RestConnection(self.master)
        copy_servers = copy.deepcopy(self.servers)
        self.log.info('before cert generate')
        x509main(self.master)._generate_cert(copy_servers, type='openssl', encryption='', key_length=1024,
                                             client_ip='172.16.1.174', alt_names='non_default', dns=None,
                                             uri=None, wildcard_dns=self.wildcard_dns)
        x509main(self.master).setup_master()
        x509main().setup_cluster_nodes_ssl(servs_inout)
        for service in self.services_in.split("-"):
            services_in.append(service.split(":")[0])
        self.log.info("list of services to be added after formatting {0}".format(services_in))
        # add nodes to the cluster
        servs_inout = self.servers[1:4]
        rebalance = self.cluster.async_rebalance(self.servers[:self.nodes_init], servs_inout, [],
                                                 services=services_in)
        rebalance.result()
        # Check if n2n can be enabled after adding x509 certificates
        if self.x509enable and self.ntonencrypt == 'enable':
            encryption_result = ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.ntonencrypt_level)
            self.assertTrue(encryption_result, "Retries Exceeded. Cannot enable n2n encryption")
        x509main(self.master)._delete_inbox_folder()
        x509main(self.master)._generate_cert(self.servers, root_cn="CB\\ Authority", type='openssl',
                                             client_ip='172.16.1.174', wildcard_dns=self.wildcard_dns)
        ntonencryptionBase().change_cluster_encryption_cli(self.servers, 'control')
        ntonencryptionBase().ntonencryption_cli(self.servers, 'disable')
        x509main(self.master).setup_master()
        x509main().setup_cluster_nodes_ssl(self.servers, reload_cert=True)
        ntonencryptionBase().ntonencryption_cli(self.servers, 'enable')
        ntonencryptionBase().change_cluster_encryption_cli(self.servers, self.ntonencrypt_level)
        self.check_all_services(self.servers)
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        # ntonencryptionBase().get_cluster_nton_status(self.master)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level)
        self.assertTrue(result, 'Issue with results with x509 enable for sets')
        ntonencryptionBase().change_cluster_encryption_cli(self.servers, 'control')
        ntonencryptionBase().ntonencryption_cli(self.servers, 'disable')
        final_result_disable = ntonencryptionBase().check_server_ports(self.servers)
        self.log.info("{0}".format(final_result_disable))
        result = ntonencryptionBase().validate_results(self.servers, final_result_disable, self.ntonencrypt_level, 'disable')
        self.assertTrue(result, 'Issue with results with x509 disable for sets')

    def test_add_nodes_x509_rebalance_rotate_disable(self):
        servs_inout = self.servers[self.nodes_init:]
        services_in = []
        rest = RestConnection(self.master)
        copy_servers = copy.deepcopy(self.servers)
        self.log.info('before cert generate')
        x509main(self.master)._generate_cert(copy_servers, type='openssl', encryption='', key_length=1024,
                                             client_ip='172.16.1.174', alt_names='non_default', dns=None,
                                             uri=None, wildcard_dns=self.wildcard_dns)
        x509main(self.master).setup_master()
        x509main().setup_cluster_nodes_ssl(servs_inout)
        for service in self.services_in.split("-"):
            services_in.append(service.split(":")[0])
        self.log.info("list of services to be added after formatting {0}".format(services_in))
        # add nodes to the cluster
        servs_inout = self.servers[1:4]
        rebalance = self.cluster.async_rebalance(self.servers[:self.nodes_init], servs_inout, [],
                                                 services=services_in)
        rebalance.result()
        # Check if n2n can be enabled after adding x509 certificates
        if self.x509enable and self.ntonencrypt == 'enable':
            encryption_result = ntonencryptionBase().setup_nton_cluster(self.servers, 'enable', self.ntonencrypt_level)
            self.assertTrue(encryption_result, "Retries Exceeded. Cannot enable n2n encryption")
        ntonencryptionBase().change_cluster_encryption_cli(self.servers, 'control')
        ntonencryptionBase().ntonencryption_cli(self.servers, 'disable')
        x509main(self.master)._delete_inbox_folder()
        x509main(self.master)._generate_cert(self.servers, root_cn="CB\\ Authority", type='openssl',
                                             client_ip='172.16.1.174', wildcard_dns=self.wildcard_dns)
        x509main(self.master).setup_master()
        x509main().setup_cluster_nodes_ssl(self.servers, reload_cert=True)
        ntonencryptionBase().ntonencryption_cli(self.servers, 'enable')
        ntonencryptionBase().change_cluster_encryption_cli(self.servers, self.ntonencrypt_level)
        self.check_all_services(self.servers)
        # ntonencryptionBase().get_ntonencryption_status(self.servers)
        # ntonencryptionBase().get_cluster_nton_status(self.master)
        final_result = ntonencryptionBase().check_server_ports(self.servers)
        result = ntonencryptionBase().validate_results(self.servers, final_result, self.ntonencrypt_level)
        self.assertTrue(result, 'Issue with results with x509 enable for sets')
        ntonencryptionBase().change_cluster_encryption_cli(self.servers, 'control')
        ntonencryptionBase().ntonencryption_cli(self.servers, 'disable')
        final_result_disable = ntonencryptionBase().check_server_ports(self.servers)
        self.log.info("{0}".format(final_result_disable))
        result = ntonencryptionBase().validate_results(self.servers, final_result_disable, self.ntonencrypt_level, 'disable')
        self.assertTrue(result, 'Issue with results with x509 disable for sets') | 54.068392 | 212 | 0.657448 | 3,349 | 29,251 | 5.497462 | 0.088982 | 0.066917 | 0.023301 | 0.026289 | 0.803215 | 0.792352 | 0.770681 | 0.75846 | 0.749715 | 0.74537 | 0 | 0.018696 | 0.239308 | 29,251 | 541 | 213 | 54.068392 | 0.808728 | 0.070322 | 0 | 0.701456 | 0 | 0 | 0.105447 | 0.002484 | 0 | 0 | 0 | 0 | 0.043689 | 1 | 0.048544 | false | 0.002427 | 0.048544 | 0 | 0.101942 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
134c0f9fdf5c92c997ff24875b40ad2a632076ab | 1,972 | py | Python | tests/test_emcee_sampler.py | jorana/DirectDmTargets | 0822f6593c6289b4ff032d7cb523eb1603bd3df3 | [
"MIT"
] | null | null | null | tests/test_emcee_sampler.py | jorana/DirectDmTargets | 0822f6593c6289b4ff032d7cb523eb1603bd3df3 | [
"MIT"
] | 44 | 2020-04-07T06:18:18.000Z | 2021-09-13T15:23:32.000Z | tests/test_emcee_sampler.py | jorana/DirectDmTargets | 0822f6593c6289b4ff032d7cb523eb1603bd3df3 | [
"MIT"
] | null | null | null | import tempfile
import DirectDmTargets as dddm
import matplotlib.pyplot as plt
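

# Smoke tests: run a tiny emcee chain (10 walkers, 20 steps) end to end and
# check that the results can be saved, reloaded and plotted without errors.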
def test_emcee():
    fit_class = dddm.MCMCStatModel('Xe')
    fit_class.nwalkers = 10
    fit_class.nsteps = 20
    fit_class.verbose = 2
    with tempfile.TemporaryDirectory() as tmpdirname:
        fit_class.run_emcee()
        fit_class.show_corner()
        fit_class.show_walkers()
        fit_class.save_results(save_to_dir=tmpdirname)
        save_dir = fit_class.config['save_dir']
        r = dddm.emcee_applications.load_chain_emcee(
            override_load_from=save_dir)
        dddm.emcee_applications.emcee_plots(r)
    plt.clf()
    plt.close()


def test_emcee_full_prior():
    fit_class = dddm.MCMCStatModel('Xe')
    fit_class.nwalkers = 10
    fit_class.nsteps = 20
    fit_class.verbose = 1
    with tempfile.TemporaryDirectory() as tmpdirname:
        fit_class.run_emcee()
        fit_class.show_corner()
        fit_class.show_walkers()
        fit_class.save_results(save_to_dir=tmpdirname)
        save_dir = fit_class.config['save_dir']
        r = dddm.emcee_applications.load_chain_emcee(
            override_load_from=save_dir)
        dddm.emcee_applications.emcee_plots(r, save=True, show=False)
    plt.clf()
    plt.close()


def test_emcee_astrophysics_prior():
    fit_class = dddm.MCMCStatModel('Xe')
    fit_class.nwalkers = 10
    fit_class.nsteps = 20
    fit_class.set_fit_parameters(fit_class.known_parameters)
    with tempfile.TemporaryDirectory() as tmpdirname:
        fit_class.run_emcee()
        fit_class.show_corner()
        fit_class.show_walkers()
        fit_class.save_results(save_to_dir=tmpdirname)
        save_dir = fit_class.config['save_dir']
        r = dddm.emcee_applications.load_chain_emcee(
            override_load_from=save_dir)
        dddm.emcee_applications.emcee_plots(r, save=True, show=False)
    plt.clf()
    plt.close()
| 32.327869 | 70 | 0.6643 | 250 | 1,972 | 4.896 | 0.204 | 0.183007 | 0.058824 | 0.061275 | 0.888072 | 0.888072 | 0.888072 | 0.857026 | 0.857026 | 0.857026 | 0 | 0.00944 | 0.247972 | 1,972 | 60 | 71 | 32.866667 | 0.815914 | 0 | 0 | 0.803922 | 0 | 0 | 0.01569 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
13c1cab7f6b469de4f83e39b1319184bfd8bd432 | 2,604 | py | Python | instructions/comparison.py | RenolY2/python-ppc | 7b7ccbaf467813843975773795efa3436544ca9a | [
"MIT"
] | 3 | 2019-09-04T07:18:00.000Z | 2021-03-09T23:41:38.000Z | instructions/comparison.py | RenolY2/python-ppc | 7b7ccbaf467813843975773795efa3436544ca9a | [
"MIT"
] | null | null | null | instructions/comparison.py | RenolY2/python-ppc | 7b7ccbaf467813843975773795efa3436544ca9a | [
"MIT"
] | null | null | null | from .common import *
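

# These classes model the PowerPC integer compare instructions: the plain
# variants (cmpwi/cmpw) compare signed values, the "logical" ones
# (cmplwi/cmplw) compare unsigned values. BF names the condition-register
# field that receives the result; parse_dform/parse_xform return the raw
# 5-bit field, hence the `>> 2` to recover the 3-bit CR field index.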
class CompareImmediate(Instruction):
    def __init__(self, val):
        self.opcode, self.BF, self.RA, self.SI = parse_dform(val)
        self.SI = sign_extend_short(self.SI)
        self.BF = self.BF >> 2

    def execute(self, machine):
        gpr = machine.context.gpr
        assert self.BF < 8
        machine.context.cr.compare(self.BF, to_python_int(gpr[self.RA]), to_python_int(self.SI))

    def __str__(self):
        if self.BF > 0:
            return "cmpwi cr{0}, r{1}, {2}".format(self.BF, self.RA, self.SI)
        else:
            return "cmpwi r{1}, {2}".format(self.BF, self.RA, self.SI)


class CompareLogicalImmediate(Instruction):
    def __init__(self, val):
        self.opcode, self.BF, self.RA, self.SI = parse_dform(val)
        self.SI = sign_extend_short(self.SI)
        self.BF = self.BF >> 2

    def execute(self, machine):
        gpr = machine.context.gpr
        assert self.BF < 8
        machine.context.cr.compare(self.BF, gpr[self.RA], self.SI)

    def __str__(self):
        if self.BF > 0:
            return "cmplwi cr{0}, r{1}, {2}".format(self.BF, self.RA, self.SI)
        else:
            return "cmplwi r{1}, {2}".format(self.BF, self.RA, self.SI)


class Compare(Instruction):
    def __init__(self, val):
        self.opcode, self.BF, self.RA, self.RB, self.opcode2, _ = parse_xform(val)
        self.BF = self.BF >> 2

    def execute(self, machine):
        gpr = machine.context.gpr
        assert self.BF < 8
        machine.context.cr.compare(self.BF, to_python_int(gpr[self.RA]), to_python_int(gpr[self.RB]))

    def __str__(self):
        if self.BF > 0:
            return "cmpw cr{0}, r{1}, r{2}".format(self.BF, self.RA, self.RB)
        else:
            return "cmpw r{1}, r{2}".format(self.BF, self.RA, self.RB)


class CompareLogical(Instruction):
    def __init__(self, val):
        self.opcode, self.BF, self.RA, self.RB, self.opcode2, _ = parse_xform(val)
        self.BF = self.BF >> 2

    def execute(self, machine):
        gpr = machine.context.gpr
        assert self.BF < 8
        machine.context.cr.compare(self.BF, gpr[self.RA], gpr[self.RB])

    def __str__(self):
        if self.BF > 0:
            return "cmplw cr{0}, r{1}, r{2}".format(self.BF, self.RA, self.RB)
        else:
            return "cmplw r{1}, r{2}".format(self.BF, self.RA, self.RB) | 33.384615 | 101 | 0.566052 | 383 | 2,604 | 3.697128 | 0.122715 | 0.135593 | 0.112994 | 0.101695 | 0.923729 | 0.923729 | 0.923729 | 0.923729 | 0.923729 | 0.923729 | 0 | 0.018359 | 0.288786 | 2,604 | 78 | 102 | 33.384615 | 0.74622 | 0 | 0 | 0.779661 | 0 | 0 | 0.057582 | 0 | 0 | 0 | 0 | 0 | 0.067797 | 1 | 0.20339 | false | 0 | 0.016949 | 0 | 0.423729 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
13d7895b70cd8d76ef02f720c9ce67b5cd15050c | 1,881 | py | Python | tests/test_config_loading.py | pgoelter/pyconfig | 4da120c073488f19fa45e9b830e399c273d567ea | [
"MIT"
] | null | null | null | tests/test_config_loading.py | pgoelter/pyconfig | 4da120c073488f19fa45e9b830e399c273d567ea | [
"MIT"
] | 8 | 2021-04-08T12:33:47.000Z | 2021-04-08T15:26:39.000Z | tests/test_config_loading.py | pgoelter/pyconfig | 4da120c073488f19fa45e9b830e399c273d567ea | [
"MIT"
] | null | null | null | import os
import pytest
from pyconfig import Config
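

# Config caches a singleton instance; the autouse fixture below clears it
# before every test so each loader starts from a fresh object.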
@pytest.fixture(autouse=True)
def before():
    Config.__instance__ = None


def test_load_config_from_json():
    config: Config = Config.from_json_file(
        filename=f"tests{os.sep}resources{os.sep}test-config.json")
    assert config.defaults == {
        "_comment": "This file should serve as an example.",
        "api": {
            "authentification": {
                "email": "YOURMAILADDRESS",
                "password": "YOURPASSWORD",
                "ssl_certificate": "PATHTOCERT",
                "routes": {
                    "base": "BASEURL",
                    "login": "LOGINURL"
                }
            }
        }
    }


def test_load_config_from_yml():
    config: Config = Config.from_yml_file(
        filename=f"tests{os.sep}resources{os.sep}test-config.yml")
    assert config.defaults == {
        "_comment": "This file should serve as an example.",
        "api": {
            "authentification": {
                "email": "YOURMAILADDRESS",
                "password": "YOURPASSWORD",
                "ssl_certificate": "PATHTOCERT",
                "routes": {
                    "base": "BASEURL",
                    "login": "LOGINURL"
                }
            }
        }
    }


def test_load_config_from_toml():
    config: Config = Config.from_toml_file(
        filename=f"tests{os.sep}resources{os.sep}test-config.toml")
    assert config.defaults == {
        "_comment": "This file should serve as an example.",
        "api": {
            "authentification": {
                "email": "YOURMAILADDRESS",
                "password": "YOURPASSWORD",
                "ssl_certificate": "PATHTOCERT",
                "routes": {
                    "base": "BASEURL",
                    "login": "LOGINURL"
                }
            }
        }
    }
| 27.26087 | 67 | 0.497608 | 161 | 1,881 | 5.639752 | 0.291925 | 0.066079 | 0.036344 | 0.056167 | 0.786344 | 0.763216 | 0.763216 | 0.763216 | 0.763216 | 0.763216 | 0 | 0 | 0.377459 | 1,881 | 68 | 68 | 27.661765 | 0.775406 | 0 | 0 | 0.526316 | 0 | 0 | 0.326422 | 0.072834 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.070175 | true | 0.052632 | 0.052632 | 0 | 0.122807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
b92356740def2fd97c561fed2b75dd9fa63a8f71 | 181 | py | Python | tests/indent/python/aligned_indent.py | nkrkv/nvim-treesitter | 1b74deaa322a2257f50daeb65093b9a0f41f2fd6 | [
"Apache-2.0"
] | 1 | 2022-02-28T22:26:59.000Z | 2022-02-28T22:26:59.000Z | tests/indent/python/aligned_indent.py | nkrkv/nvim-treesitter | 1b74deaa322a2257f50daeb65093b9a0f41f2fd6 | [
"Apache-2.0"
] | 1 | 2022-02-05T11:38:30.000Z | 2022-02-05T11:38:30.000Z | tests/indent/python/aligned_indent.py | nkrkv/nvim-treesitter | 1b74deaa322a2257f50daeb65093b9a0f41f2fd6 | [
"Apache-2.0"
] | null | null | null | def aligned_indent(arg1,
                   arg2):
    pass


aligned_indent(1,
               2)

aligned_indent(1,
               2
               )

foodsadsa(sdada,
          2
| 12.066667 | 25 | 0.41989 | 17 | 181 | 4.294118 | 0.588235 | 0.534247 | 0.383562 | 0.410959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079545 | 0.513812 | 181 | 14 | 26 | 12.928571 | 0.75 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.1 | 0 | null | null | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
b9265a636a185c99d96ba6a4dc3b32b166411215 | 119 | py | Python | applications/cloudstats/controllers/plugin_jqmobile.py | wasuaje/web2py5 | 02f310b9526f92c4ec62ab5b0271069a1c101e9f | [
"BSD-3-Clause"
] | null | null | null | applications/cloudstats/controllers/plugin_jqmobile.py | wasuaje/web2py5 | 02f310b9526f92c4ec62ab5b0271069a1c101e9f | [
"BSD-3-Clause"
] | null | null | null | applications/cloudstats/controllers/plugin_jqmobile.py | wasuaje/web2py5 | 02f310b9526f92c4ec62ab5b0271069a1c101e9f | [
"BSD-3-Clause"
] | null | null | null | response.files = response.files[:3]
response.menu = []


def index():
    return locals()


def about():
    return locals()
| 13.222222 | 33 | 0.663866 | 15 | 119 | 5.266667 | 0.6 | 0.329114 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010101 | 0.168067 | 119 | 8 | 34 | 14.875 | 0.787879 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0.333333 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
b96210b25f2ae948a050b16b2fa5d7625c73dddc | 36,955 | py | Python | plugins/sentinelone/komand_sentinelone/actions/get_events/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 46 | 2019-06-05T20:47:58.000Z | 2022-03-29T10:18:01.000Z | plugins/sentinelone/komand_sentinelone/actions/get_events/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 386 | 2019-06-07T20:20:39.000Z | 2022-03-30T17:35:01.000Z | plugins/sentinelone/komand_sentinelone/actions/get_events/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | # GENERATED BY KOMAND SDK - DO NOT EDIT
import insightconnect_plugin_runtime
import json


class Component:
    DESCRIPTION = "Get all Deep Visibility events from a queryId"


class Input:
    LIMIT = "limit"
    QUERY_ID = "query_id"
    SUB_QUERY = "sub_query"


class Output:
    RESPONSE = "response"


class GetEventsInput(insightconnect_plugin_runtime.Input):
    schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"limit": {
"type": "integer",
"title": "Limit",
"description": "Limit number of returned items (1-1000), if no limit is provided returns all the results up to 20,000",
"order": 2
},
"query_id": {
"type": "string",
"title": "Query ID",
"description": "QueryId obtained when creating a query under Create Query",
"order": 1
},
"sub_query": {
"type": "string",
"title": "Sub Query",
"description": "Sub query to run on the data that was already pulled",
"order": 3
}
},
"required": [
"query_id"
]
}
    """)

    def __init__(self):
        super(self.__class__, self).__init__(self.schema)


class GetEventsOutput(insightconnect_plugin_runtime.Output):
    schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"response": {
"$ref": "#/definitions/get_events_response",
"title": "Response",
"description": "SentinelOne API call response data",
"order": 1
}
},
"definitions": {
"get_events_response": {
"type": "object",
"title": "get_events_response",
"properties": {
"data": {
"type": "array",
"title": "Data",
"description": "Response data",
"items": {
"$ref": "#/definitions/query_data"
},
"order": 3
},
"errors": {
"type": "array",
"title": "Errors",
"description": "Errors",
"items": {
"type": "object"
},
"order": 1
},
"pagination": {
"$ref": "#/definitions/pagination",
"title": "Pagination",
"description": "Pagination",
"order": 2
}
},
"definitions": {
"pagination": {
"type": "object",
"title": "pagination",
"properties": {
"nextCursor": {
"type": "string",
"title": "Next Cursor",
"description": "Next cursor",
"order": 2
},
"totalItems": {
"type": "integer",
"title": "Total Items",
"description": "Total items",
"order": 1
}
}
},
"query_data": {
"type": "object",
"title": "query_data",
"properties": {
"agentDomain": {
"type": "string",
"title": "Agent Domain",
"description": "Agent domain",
"order": 66
},
"agentGroupId": {
"type": "string",
"title": "Agent Group ID",
"description": "Agent group ID",
"order": 30
},
"agentId": {
"type": "string",
"title": "Agent ID",
"description": "Agent ID",
"order": 35
},
"agentInfected": {
"type": "boolean",
"title": "Agent Infected",
"description": "Agent infected",
"order": 16
},
"agentIp": {
"type": "string",
"title": "Agent IP",
"description": "Agent IP",
"order": 71
},
"agentIsActive": {
"type": "boolean",
"title": "Agent is Active",
"description": "Agent is active",
"order": 23
},
"agentIsDecommissioned": {
"type": "boolean",
"title": "Agent is Decommissioned",
"description": "Agent is decommissioned",
"order": 36
},
"agentMachineType": {
"type": "string",
"title": "Agent Machine Type",
"description": "Agent machine type",
"order": 48
},
"agentName": {
"type": "string",
"title": "Agent Name",
"description": "Agent name",
"order": 82
},
"agentNetworkStatus": {
"type": "string",
"title": "Agent Network Status",
"description": "Agent network status",
"order": 10
},
"agentOs": {
"type": "string",
"title": "Agent OS",
"description": "Agent Operating System",
"order": 20
},
"agentTimestamp": {
"type": "string",
"title": "Agent Timestamp",
"description": "Agent timestamp",
"order": 87
},
"agentUuid": {
"type": "string",
"title": "Agent UUID",
"description": "Agent UUID",
"order": 81
},
"agentVersion": {
"type": "string",
"title": "Agent Version",
"description": "Agent version",
"order": 31
},
"attributes": {
"type": "array",
"title": "Attributes",
"description": "Attributes",
"items": {
"type": "object"
},
"order": 88
},
"childProcCount": {
"type": "string",
"title": "Child Process Count",
"description": "Child process count",
"order": 89
},
"connectionStatus": {
"type": "string",
"title": "Connection Status",
"description": "Connection status",
"order": 73
},
"createdAt": {
"type": "string",
"title": "Created At",
"description": "Created at",
"order": 55
},
"direction": {
"type": "string",
"title": "Direction",
"description": "Direction",
"order": 80
},
"dnsResponse": {
"type": "string",
"title": "DNS Response",
"description": "DNS response",
"order": 7
},
"dstIp": {
"type": "string",
"title": "Destination IP",
"description": "Destination IP",
"order": 79
},
"dstPort": {
"type": "integer",
"title": "Destination Port",
"description": "Object type",
"order": 75
},
"endpointMachineType": {
"type": "string",
"title": "Endpoint Machine Type",
"description": "Endpoint machine type",
"order": 90
},
"endpointName": {
"type": "string",
"title": "Endpoint Name",
"description": "Endpoint name",
"order": 91
},
"endpointOs": {
"type": "string",
"title": "Endpoint OS",
"description": "Endpoint OS",
"order": 92
},
"eventTime": {
"type": "string",
"title": "Event Time",
"description": "Event time",
"order": 93
},
"eventType": {
"type": "string",
"title": "Event Type",
"description": "Event type",
"order": 67
},
"fileFullName": {
"type": "string",
"title": "File Full Name",
"description": "File full name",
"order": 6
},
"fileId": {
"type": "string",
"title": "File ID",
"description": "File ID",
"order": 14
},
"fileMd5": {
"type": "string",
"title": "File MD5",
"description": "File MD5",
"order": 54
},
"fileSha1": {
"type": "string",
"title": "File SHA1",
"description": "File SHA1",
"order": 68
},
"fileSha256": {
"type": "string",
"title": "File SHA256",
"description": "File SHA256",
"order": 33
},
"fileSize": {
"type": "string",
"title": "File Size",
"description": "File size",
"order": 53
},
"fileType": {
"type": "string",
"title": "File Type",
"description": "File type",
"order": 72
},
"forensicUrl": {
"type": "string",
"title": "Forensic URL",
"description": "Forensic URL",
"order": 78
},
"id": {
"type": "string",
"title": "ID",
"description": "ID",
"order": 41
},
"indicatorCategory": {
"type": "string",
"title": "Indicator Category",
"description": "Indicator category",
"order": 17
},
"indicatorDescription": {
"type": "string",
"title": "Indicator Description",
"description": "Indicator description",
"order": 56
},
"indicatorMetadata": {
"type": "string",
"title": "Indicator Metadata",
"description": "Indicator metadata",
"order": 60
},
"indicatorName": {
"type": "string",
"title": "Indicator Name",
"description": "Indicator name",
"order": 85
},
"loginsBaseType": {
"type": "string",
"title": "Logins Base Type",
"description": "Logins base type",
"order": 64
},
"loginsUserName": {
"type": "string",
"title": "Logins User Name",
"description": "Logins user name",
"order": 86
},
"md5": {
"type": "string",
"title": "MD5",
"description": "MD5",
"order": 65
},
"networkMethod": {
"type": "string",
"title": "Network Method",
"description": "Network method",
"order": 50
},
"networkSource": {
"type": "string",
"title": "Network Source",
"description": "Network source",
"order": 5
},
"networkUrl": {
"type": "string",
"title": "Network URL",
"description": "Network URL",
"order": 37
},
"objectType": {
"type": "string",
"title": "Object Type",
"description": "Object type",
"order": 1
},
"oldFileMd5": {
"type": "string",
"title": "Old File MD5",
"description": "Old file MD5",
"order": 47
},
"oldFileName": {
"type": "string",
"title": "Old File Name",
"description": "Old file name",
"order": 43
},
"oldFileSha1": {
"type": "string",
"title": "Old File SHA1",
"description": "Old file SHA1",
"order": 58
},
"oldFileSha256": {
"type": "string",
"title": "Old File SHA256",
"description": "Old file SHA256",
"order": 19
},
"parentPid": {
"type": "string",
"title": "Parent PID",
"description": "Parent PID",
"order": 34
},
"parentProcessGroupId": {
"type": "string",
"title": "Parent Process Group ID",
"description": "Parent process group ID",
"order": 9
},
"parentProcessIsMalicious": {
"type": "boolean",
"title": "Parent Process is Malicious",
"description": "Parent process is malicious",
"order": 59
},
"parentProcessName": {
"type": "string",
"title": "Parent Process Name",
"description": "Parent process name",
"order": 32
},
"parentProcessStartTime": {
"type": "string",
"title": "Parent Process Start Time",
"description": "Parent process start time",
"order": 51
},
"parentProcessUniqueKey": {
"type": "string",
"title": "Parent Process Unique Key",
"description": "Parent process unique key",
"order": 13
},
"pid": {
"type": "string",
"title": "PID",
"description": "PID",
"order": 11
},
"processCmd": {
"type": "string",
"title": "Process CMD",
"description": "Process CMD",
"order": 22
},
"processDisplayName": {
"type": "string",
"title": "Process Display Name",
"description": "Process display name",
"order": 18
},
"processGroupId": {
"type": "string",
"title": "Process Group ID",
"description": "Process group ID",
"order": 70
},
"processImagePath": {
"type": "string",
"title": "Process Image Path",
"description": "Process image path",
"order": 45
},
"processImageSha1Hash": {
"type": "string",
"title": "Process Image SHA1 Hash",
"description": "Process image SHA1 hash",
"order": 39
},
"processIntegrityLevel": {
"type": "string",
"title": "Process Integrity Level",
"description": "Process integrity level",
"order": 21
},
"processIsMalicious": {
"type": "boolean",
"title": "Process is Malicious",
"description": "Process is malicious",
"order": 77
},
"processIsRedirectedCommandProcessor": {
"type": "string",
"title": "Process is Redirected Command Processor",
"description": "Process is redirected command processor",
"order": 24
},
"processIsWow64": {
"type": "string",
"title": "Process is WOW64",
"description": "Process is WOW64",
"order": 4
},
"processName": {
"type": "string",
"title": "Process Name",
"description": "Process name",
"order": 57
},
"processRoot": {
"type": "string",
"title": "Process Root",
"description": "Process root",
"order": 26
},
"processSessionId": {
"type": "string",
"title": "Process Session ID",
"description": "Process session ID",
"order": 29
},
"processStartTime": {
"type": "string",
"title": "Process Start Time",
"description": "Process start time",
"order": 15
},
"processSubSystem": {
"type": "string",
"title": "Process Sub System",
"description": "Process sub system",
"order": 74
},
"processUniqueKey": {
"type": "string",
"title": "Process Unique Key",
"description": "Process unique key",
"order": 84
},
"processUserName": {
"type": "string",
"title": "Process User Name",
"description": "Process user name",
"order": 38
},
"publisher": {
"type": "string",
"title": "Publisher",
"description": "Publisher",
"order": 8
},
"registryId": {
"type": "string",
"title": "Registry ID",
"description": "Registry ID",
"order": 28
},
"registryPath": {
"type": "string",
"title": "Registry Path",
"description": "Registry path",
"order": 3
},
"relatedToThreat": {
"type": "string",
"title": "Related to Threat",
"description": "Related to threat",
"order": 63
},
"rpid": {
"type": "string",
"title": "RPID",
"description": "RPID",
"order": 61
},
"sha1": {
"type": "string",
"title": "SHA1",
"description": "SHA1",
"order": 62
},
"sha256": {
"type": "string",
"title": "SHA256",
"description": "SHA256",
"order": 44
},
"signatureSignedInvalidReason": {
"type": "string",
"title": "Signature Signed Invalid Reason",
"description": "Signature signed invalid reason",
"order": 27
},
"signedStatus": {
"type": "string",
"title": "Signed Status",
"description": "Signed status",
"order": 76
},
"siteName": {
"type": "string",
"title": "Site Name",
"description": "Site name",
"order": 46
},
"srcIp": {
"type": "string",
"title": "Source IP",
"description": "Source IP",
"order": 52
},
"srcPort": {
"type": "integer",
"title": "Source Port",
"description": "Source port",
"order": 42
},
"taskName": {
"type": "string",
"title": "Task Name",
"description": "Task name",
"order": 49
},
"taskPath": {
"type": "string",
"title": "Task Path",
"description": "Task path",
"order": 40
},
"threatStatus": {
"type": "string",
"title": "Threat Status",
"description": "Threat status",
"order": 83
},
"tid": {
"type": "string",
"title": "TID",
"description": "TID",
"order": 2
},
"trueContext": {
"type": "string",
"title": "True Context",
"description": "True context",
"order": 12
},
"user": {
"type": "string",
"title": "User",
"description": "User",
"order": 69
},
"verifiedStatus": {
"type": "string",
"title": "Verified Status",
"description": "Verified status",
"order": 25
}
}
}
}
},
"pagination": {
"type": "object",
"title": "pagination",
"properties": {
"nextCursor": {
"type": "string",
"title": "Next Cursor",
"description": "Next cursor",
"order": 2
},
"totalItems": {
"type": "integer",
"title": "Total Items",
"description": "Total items",
"order": 1
}
}
},
"query_data": {
"type": "object",
"title": "query_data",
"properties": {
"agentDomain": {
"type": "string",
"title": "Agent Domain",
"description": "Agent domain",
"order": 66
},
"agentGroupId": {
"type": "string",
"title": "Agent Group ID",
"description": "Agent group ID",
"order": 30
},
"agentId": {
"type": "string",
"title": "Agent ID",
"description": "Agent ID",
"order": 35
},
"agentInfected": {
"type": "boolean",
"title": "Agent Infected",
"description": "Agent infected",
"order": 16
},
"agentIp": {
"type": "string",
"title": "Agent IP",
"description": "Agent IP",
"order": 71
},
"agentIsActive": {
"type": "boolean",
"title": "Agent is Active",
"description": "Agent is active",
"order": 23
},
"agentIsDecommissioned": {
"type": "boolean",
"title": "Agent is Decommissioned",
"description": "Agent is decommissioned",
"order": 36
},
"agentMachineType": {
"type": "string",
"title": "Agent Machine Type",
"description": "Agent machine type",
"order": 48
},
"agentName": {
"type": "string",
"title": "Agent Name",
"description": "Agent name",
"order": 82
},
"agentNetworkStatus": {
"type": "string",
"title": "Agent Network Status",
"description": "Agent network status",
"order": 10
},
"agentOs": {
"type": "string",
"title": "Agent OS",
"description": "Agent Operating System",
"order": 20
},
"agentTimestamp": {
"type": "string",
"title": "Agent Timestamp",
"description": "Agent timestamp",
"order": 87
},
"agentUuid": {
"type": "string",
"title": "Agent UUID",
"description": "Agent UUID",
"order": 81
},
"agentVersion": {
"type": "string",
"title": "Agent Version",
"description": "Agent version",
"order": 31
},
"attributes": {
"type": "array",
"title": "Attributes",
"description": "Attributes",
"items": {
"type": "object"
},
"order": 88
},
"childProcCount": {
"type": "string",
"title": "Child Process Count",
"description": "Child process count",
"order": 89
},
"connectionStatus": {
"type": "string",
"title": "Connection Status",
"description": "Connection status",
"order": 73
},
"createdAt": {
"type": "string",
"title": "Created At",
"description": "Created at",
"order": 55
},
"direction": {
"type": "string",
"title": "Direction",
"description": "Direction",
"order": 80
},
"dnsResponse": {
"type": "string",
"title": "DNS Response",
"description": "DNS response",
"order": 7
},
"dstIp": {
"type": "string",
"title": "Destination IP",
"description": "Destination IP",
"order": 79
},
"dstPort": {
"type": "integer",
"title": "Destination Port",
"description": "Object type",
"order": 75
},
"endpointMachineType": {
"type": "string",
"title": "Endpoint Machine Type",
"description": "Endpoint machine type",
"order": 90
},
"endpointName": {
"type": "string",
"title": "Endpoint Name",
"description": "Endpoint name",
"order": 91
},
"endpointOs": {
"type": "string",
"title": "Endpoint OS",
"description": "Endpoint OS",
"order": 92
},
"eventTime": {
"type": "string",
"title": "Event Time",
"description": "Event time",
"order": 93
},
"eventType": {
"type": "string",
"title": "Event Type",
"description": "Event type",
"order": 67
},
"fileFullName": {
"type": "string",
"title": "File Full Name",
"description": "File full name",
"order": 6
},
"fileId": {
"type": "string",
"title": "File ID",
"description": "File ID",
"order": 14
},
"fileMd5": {
"type": "string",
"title": "File MD5",
"description": "File MD5",
"order": 54
},
"fileSha1": {
"type": "string",
"title": "File SHA1",
"description": "File SHA1",
"order": 68
},
"fileSha256": {
"type": "string",
"title": "File SHA256",
"description": "File SHA256",
"order": 33
},
"fileSize": {
"type": "string",
"title": "File Size",
"description": "File size",
"order": 53
},
"fileType": {
"type": "string",
"title": "File Type",
"description": "File type",
"order": 72
},
"forensicUrl": {
"type": "string",
"title": "Forensic URL",
"description": "Forensic URL",
"order": 78
},
"id": {
"type": "string",
"title": "ID",
"description": "ID",
"order": 41
},
"indicatorCategory": {
"type": "string",
"title": "Indicator Category",
"description": "Indicator category",
"order": 17
},
"indicatorDescription": {
"type": "string",
"title": "Indicator Description",
"description": "Indicator description",
"order": 56
},
"indicatorMetadata": {
"type": "string",
"title": "Indicator Metadata",
"description": "Indicator metadata",
"order": 60
},
"indicatorName": {
"type": "string",
"title": "Indicator Name",
"description": "Indicator name",
"order": 85
},
"loginsBaseType": {
"type": "string",
"title": "Logins Base Type",
"description": "Logins base type",
"order": 64
},
"loginsUserName": {
"type": "string",
"title": "Logins User Name",
"description": "Logins user name",
"order": 86
},
"md5": {
"type": "string",
"title": "MD5",
"description": "MD5",
"order": 65
},
"networkMethod": {
"type": "string",
"title": "Network Method",
"description": "Network method",
"order": 50
},
"networkSource": {
"type": "string",
"title": "Network Source",
"description": "Network source",
"order": 5
},
"networkUrl": {
"type": "string",
"title": "Network URL",
"description": "Network URL",
"order": 37
},
"objectType": {
"type": "string",
"title": "Object Type",
"description": "Object type",
"order": 1
},
"oldFileMd5": {
"type": "string",
"title": "Old File MD5",
"description": "Old file MD5",
"order": 47
},
"oldFileName": {
"type": "string",
"title": "Old File Name",
"description": "Old file name",
"order": 43
},
"oldFileSha1": {
"type": "string",
"title": "Old File SHA1",
"description": "Old file SHA1",
"order": 58
},
"oldFileSha256": {
"type": "string",
"title": "Old File SHA256",
"description": "Old file SHA256",
"order": 19
},
"parentPid": {
"type": "string",
"title": "Parent PID",
"description": "Parent PID",
"order": 34
},
"parentProcessGroupId": {
"type": "string",
"title": "Parent Process Group ID",
"description": "Parent process group ID",
"order": 9
},
"parentProcessIsMalicious": {
"type": "boolean",
"title": "Parent Process is Malicious",
"description": "Parent process is malicious",
"order": 59
},
"parentProcessName": {
"type": "string",
"title": "Parent Process Name",
"description": "Parent process name",
"order": 32
},
"parentProcessStartTime": {
"type": "string",
"title": "Parent Process Start Time",
"description": "Parent process start time",
"order": 51
},
"parentProcessUniqueKey": {
"type": "string",
"title": "Parent Process Unique Key",
"description": "Parent process unique key",
"order": 13
},
"pid": {
"type": "string",
"title": "PID",
"description": "PID",
"order": 11
},
"processCmd": {
"type": "string",
"title": "Process CMD",
"description": "Process CMD",
"order": 22
},
"processDisplayName": {
"type": "string",
"title": "Process Display Name",
"description": "Process display name",
"order": 18
},
"processGroupId": {
"type": "string",
"title": "Process Group ID",
"description": "Process group ID",
"order": 70
},
"processImagePath": {
"type": "string",
"title": "Process Image Path",
"description": "Process image path",
"order": 45
},
"processImageSha1Hash": {
"type": "string",
"title": "Process Image SHA1 Hash",
"description": "Process image SHA1 hash",
"order": 39
},
"processIntegrityLevel": {
"type": "string",
"title": "Process Integrity Level",
"description": "Process integrity level",
"order": 21
},
"processIsMalicious": {
"type": "boolean",
"title": "Process is Malicious",
"description": "Process is malicious",
"order": 77
},
"processIsRedirectedCommandProcessor": {
"type": "string",
"title": "Process is Redirected Command Processor",
"description": "Process is redirected command processor",
"order": 24
},
"processIsWow64": {
"type": "string",
"title": "Process is WOW64",
"description": "Process is WOW64",
"order": 4
},
"processName": {
"type": "string",
"title": "Process Name",
"description": "Process name",
"order": 57
},
"processRoot": {
"type": "string",
"title": "Process Root",
"description": "Process root",
"order": 26
},
"processSessionId": {
"type": "string",
"title": "Process Session ID",
"description": "Process session ID",
"order": 29
},
"processStartTime": {
"type": "string",
"title": "Process Start Time",
"description": "Process start time",
"order": 15
},
"processSubSystem": {
"type": "string",
"title": "Process Sub System",
"description": "Process sub system",
"order": 74
},
"processUniqueKey": {
"type": "string",
"title": "Process Unique Key",
"description": "Process unique key",
"order": 84
},
"processUserName": {
"type": "string",
"title": "Process User Name",
"description": "Process user name",
"order": 38
},
"publisher": {
"type": "string",
"title": "Publisher",
"description": "Publisher",
"order": 8
},
"registryId": {
"type": "string",
"title": "Registry ID",
"description": "Registry ID",
"order": 28
},
"registryPath": {
"type": "string",
"title": "Registry Path",
"description": "Registry path",
"order": 3
},
"relatedToThreat": {
"type": "string",
"title": "Related to Threat",
"description": "Related to threat",
"order": 63
},
"rpid": {
"type": "string",
"title": "RPID",
"description": "RPID",
"order": 61
},
"sha1": {
"type": "string",
"title": "SHA1",
"description": "SHA1",
"order": 62
},
"sha256": {
"type": "string",
"title": "SHA256",
"description": "SHA256",
"order": 44
},
"signatureSignedInvalidReason": {
"type": "string",
"title": "Signature Signed Invalid Reason",
"description": "Signature signed invalid reason",
"order": 27
},
"signedStatus": {
"type": "string",
"title": "Signed Status",
"description": "Signed status",
"order": 76
},
"siteName": {
"type": "string",
"title": "Site Name",
"description": "Site name",
"order": 46
},
"srcIp": {
"type": "string",
"title": "Source IP",
"description": "Source IP",
"order": 52
},
"srcPort": {
"type": "integer",
"title": "Source Port",
"description": "Source port",
"order": 42
},
"taskName": {
"type": "string",
"title": "Task Name",
"description": "Task name",
"order": 49
},
"taskPath": {
"type": "string",
"title": "Task Path",
"description": "Task path",
"order": 40
},
"threatStatus": {
"type": "string",
"title": "Threat Status",
"description": "Threat status",
"order": 83
},
"tid": {
"type": "string",
"title": "TID",
"description": "TID",
"order": 2
},
"trueContext": {
"type": "string",
"title": "True Context",
"description": "True context",
"order": 12
},
"user": {
"type": "string",
"title": "User",
"description": "User",
"order": 69
},
"verifiedStatus": {
"type": "string",
"title": "Verified Status",
"description": "Verified status",
"order": 25
}
}
}
}
}
    """)

    def __init__(self):
        super(self.__class__, self).__init__(self.schema)
| 28.938919 | 125 | 0.399865 | 2,474 | 36,955 | 5.953517 | 0.13945 | 0.118134 | 0.177201 | 0.04481 | 0.929595 | 0.929595 | 0.929595 | 0.929595 | 0.922941 | 0.922941 | 0 | 0.023687 | 0.448221 | 36,955 | 1,276 | 126 | 28.961599 | 0.698642 | 0.001001 | 0 | 0.794933 | 1 | 0.000792 | 0.98502 | 0.022456 | 0 | 0 | 0 | 0 | 0 | 1 | 0.001584 | false | 0 | 0.001584 | 0 | 0.012668 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b96b8ef4bd3d68416458d70037fb781a7c669b01 | 1,536 | py | Python | ambra_sdk/service/entrypoints/session.py | dicomgrid/sdk-python | bb12eed311bad73dfb863917df4dc5cbcd91a447 | [
"Apache-2.0"
] | 9 | 2020-04-20T23:45:44.000Z | 2021-04-18T11:22:17.000Z | ambra_sdk/service/entrypoints/session.py | dicomgrid/sdk-python | bb12eed311bad73dfb863917df4dc5cbcd91a447 | [
"Apache-2.0"
] | 13 | 2020-02-08T16:15:05.000Z | 2021-09-13T22:55:28.000Z | ambra_sdk/service/entrypoints/session.py | dicomgrid/sdk-python | bb12eed311bad73dfb863917df4dc5cbcd91a447 | [
"Apache-2.0"
] | 6 | 2020-03-25T17:47:45.000Z | 2021-04-18T11:22:19.000Z | from typing import Dict, Optional
from ambra_sdk.service.entrypoints.generated.session import \
    AsyncSession as GAsyncSession
from ambra_sdk.service.entrypoints.generated.session import Session as GSession
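

# Both wrappers add a small convenience on top of the generated Session
# classes: run the login query once and hand back only the resulting sid,
# e.g. `sid = api.Session.get_sid('user', 'password')` (where `api` is a
# hypothetical client instance).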
class Session(GSession):
    """Session."""

    def get_sid(
        self,
        username: str,
        password: str,
        special_headers_for_login: Optional[Dict[str, str]] = None,
    ) -> 'str':
        """Get sid from credentials.

        :param username: user name
        :param password: user password
        :param special_headers_for_login: special headers for login

        :return: sid
        """
        query = self.login(login=username, password=password)
        query.request_args.headers = special_headers_for_login
        response = query.get_once()
        sid: str = response.sid
        return sid  # NOQA: WPS331


class AsyncSession(GAsyncSession):
    """AsyncSession."""

    async def get_sid(
        self,
        username: str,
        password: str,
        special_headers_for_login: Optional[Dict[str, str]] = None,
    ) -> 'str':
        """Get sid from credentials.

        :param username: user name
        :param password: user password
        :param special_headers_for_login: special headers for login

        :return: sid
        """
        query = self.login(login=username, password=password)
        query.request_args.headers = special_headers_for_login
        response = await query.get_once()
        sid: str = response.sid
        return sid  # NOQA: WPS331
| 29.538462 | 79 | 0.639974 | 173 | 1,536 | 5.531792 | 0.242775 | 0.117032 | 0.142111 | 0.183908 | 0.842215 | 0.842215 | 0.842215 | 0.842215 | 0.733542 | 0.733542 | 0 | 0.005352 | 0.270182 | 1,536 | 51 | 80 | 30.117647 | 0.84835 | 0.134766 | 0 | 0.642857 | 1 | 0 | 0.005623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0.142857 | 0.107143 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
b9901ae00b41c7acd9244a17bf0f9ce5ac029351 | 19,740 | py | Python | undp-transparency-portal-be/undp_purchase_orders/cron_automation.py | undp/transparencyportal | 244fbb82c05d119f0acbe7f5efbb44572d9150a0 | [
"CC-BY-3.0"
] | 5 | 2019-09-10T15:05:18.000Z | 2022-02-02T02:53:32.000Z | undp-transparency-portal-be/undp_purchase_orders/cron_automation.py | undp/transparencyportal | 244fbb82c05d119f0acbe7f5efbb44572d9150a0 | [
"CC-BY-3.0"
] | 4 | 2019-04-02T15:02:20.000Z | 2021-11-09T10:55:32.000Z | undp-transparency-portal-be/undp_purchase_orders/cron_automation.py | undp/transparencyportal | 244fbb82c05d119f0acbe7f5efbb44572d9150a0 | [
"CC-BY-3.0"
] | 2 | 2021-09-01T14:30:29.000Z | 2021-09-01T14:32:57.000Z | from django.conf import settings as main_settings
from django_cron import CronJobBase, Schedule
from undp_admin.models import JOBS, LOG_STATUSES
from undp_admin.utils import add_admin_log, update_admin_log
from undp_projects.utils import save_log, write_file
from undp_purchase_orders.models import Vendor, PurchaseOrder
from utilities import config as settings
from master_tables.models import Bureau, Organisation, Sector, DocumentCategory,\
    Region, OperatingUnit
from undp_outputs.models import Expense, Output
from undp_projects.models import Project, ProjectActiveYear
from lxml import etree
import os
import re
import sys
import csv
import xlrd
import datetime
from utilities.config import BULK_INSERT_LIMIT
db = main_settings.DB_FOR_WRITE
class PurchaseOrderCron:
    def do(self):
        start_time = datetime.datetime.now()
        print('Cron Started')
        self.purchase_orders()
        self.purchase_orders_history()
        print('Completed Purchase orders upload')
        end_time = datetime.datetime.now()
        run_time = end_time - start_time
        print('Runtime : ' + str(run_time))
        print('Cron Ended')
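
    # Each spreadsheet row becomes a PurchaseOrder; rows are buffered and
    # flushed with bulk_create() every BULK_INSERT_LIMIT records so a large
    # file never builds up one huge insert.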
    def purchase_orders(self):
        start_time = datetime.datetime.now()
        try:
            add_admin_log(JOBS.upload_purchase_order, file_name="report_po.xlsx",
                          start_time=start_time)
            file_path = settings.CSV_UPLOAD_DIR + "/report_po.xlsx"
            workbook = xlrd.open_workbook(file_path)
            sheet = workbook.sheet_by_index(0)
            row_iter = 0
            error_rows = []
            error_vendors = []
            purchase_orders = []
            outputs = Output.objects.all().values_list(flat=True)
            projects = Project.objects.all().values_list(flat=True)
            operating_units = OperatingUnit.objects.all().values_list(flat=True)
            n = 0
            total_records_inserted = 0
            for row in range(sheet.nrows):
                if row_iter != 0:
                    operating_unit_iso3 = sheet.cell_value(row, 0)
                    project_id = sheet.cell_value(row, 2)
                    output_id = sheet.cell_value(row, 5)
                    business_unit = sheet.cell_value(row, 6)
                    partner = sheet.cell_value(row, 7)
                    vendor_id = sheet.cell_value(row, 8)
                    vendor_name = sheet.cell_value(row, 9)
                    vendor_classification = sheet.cell_value(row, 10)
                    order_id = sheet.cell_value(row, 11)
                    line_nbr = sheet.cell_value(row, 12)
                    description = sheet.cell_value(row, 13)
                    order_date = sheet.cell_value(row, 14)
                    order_amount = sheet.cell_value(row, 15)
                    try:
                        order_ref = sheet.cell_value(row, 16).encode('ascii', 'ignore')
                    except:
                        order_ref = str(sheet.cell_value(row, 16))
                    order_date_as_datetime = self.process_order_date(order_date, workbook.datemode)
                    try:
                        vendor_obj, created = Vendor.objects \
                            .update_or_create(vendor_id=vendor_id,
                                              defaults={'name': vendor_name,
                                                        'classification_type': vendor_classification})
                    except Exception as e:
                        vendor_obj = None
                        error_vendors.append(vendor_id)
                    if output_id in outputs:
                        output = output_id
                    else:
                        output = None
                    if project_id in projects:
                        project = project_id
                    else:
                        project = None
                    if output and project:
                        n += 1
                        if operating_unit_iso3 in operating_units:
                            operating_unit = operating_unit_iso3
                        else:
                            operating_unit = None
                        data_dict = {
                            'order_id': order_id,
                            'line_nbr': line_nbr,
                            'project_id': project,
                            'output_id': output,
                            'operating_unit_id': operating_unit,
                            'vendor': vendor_obj,
                            'vendor_name': vendor_name,
                            'order_date': order_date_as_datetime,
                            'ref': order_ref,
                            'business_unit': business_unit,
                            'partner': partner,
                            'description': description,
                            'amount': order_amount
                        }
                        try:
                            purchase_order = PurchaseOrder(**data_dict)
                            purchase_orders.append(purchase_order)
                            if n == BULK_INSERT_LIMIT:
                                print("Inserting %s records..." % str(n))
                                total_records_inserted += n
                                PurchaseOrder.objects.bulk_create(purchase_orders)
                                purchase_orders = []
                                n = 0
                        except Exception as e:
                            error_rows.append(e)
                row_iter += 1
            total_records_inserted += n
            print("Inserting %s records..." % str(n))
            PurchaseOrder.objects.bulk_create(purchase_orders)
            print("Total records inserted: ", total_records_inserted)
            update_admin_log(JOBS.upload_purchase_order, start_time,
                             file_name="report_po.xlsx",
                             end_time=datetime.datetime.now(),
                             status=LOG_STATUSES.successful)
            print("Total records: " + str(row_iter))
            write_file(("Total rows: %s" % row_iter))
            write_file(("Error rows: %s" % error_rows))
            write_file(("\nError error_vendors: %s" % error_vendors))
        except Exception as e:
            update_admin_log(JOBS.upload_purchase_order, start_time,
                             file_name="report_po.xlsx",
                             end_time=datetime.datetime.now(),
                             status=LOG_STATUSES.failed,
                             message=str(e))

    def purchase_orders_history(self):
        start_time = datetime.datetime.now()
        try:
            add_admin_log(JOBS.upload_purchase_order, file_name="report_po_history.xlsx",
                          start_time=start_time)
            file_path = settings.CSV_UPLOAD_DIR + "/report_po_history.xlsx"
            workbook = xlrd.open_workbook(file_path)
            sheet = workbook.sheet_by_index(0)
            row_iter = 0
            error_rows = []
            error_vendors = []
            purchase_orders = []
            outputs = Output.objects.all().values_list(flat=True)
            projects = Project.objects.all().values_list(flat=True)
            operating_units = OperatingUnit.objects.all().values_list(flat=True)
            n = 0
            total_records_inserted = 0
            for row in range(sheet.nrows):
                if row_iter != 0:
                    operating_unit_iso3 = sheet.cell_value(row, 0)
                    project_id = sheet.cell_value(row, 2)
                    output_id = sheet.cell_value(row, 5)
                    business_unit = sheet.cell_value(row, 6)
                    partner = sheet.cell_value(row, 7)
                    vendor_id = sheet.cell_value(row, 8)
                    vendor_name = sheet.cell_value(row, 9)
                    vendor_classification = sheet.cell_value(row, 10)
                    order_id = sheet.cell_value(row, 11)
                    line_nbr = sheet.cell_value(row, 12)
                    description = sheet.cell_value(row, 13)
                    order_date = sheet.cell_value(row, 14)
                    order_amount = sheet.cell_value(row, 15)
                    try:
                        order_ref = sheet.cell_value(row, 16).encode('ascii', 'ignore')
                    except:
                        order_ref = str(sheet.cell_value(row, 16))
                    order_date_as_datetime = self.process_order_date(order_date, workbook.datemode)
                    try:
                        vendor_obj, created = Vendor.objects \
                            .update_or_create(vendor_id=vendor_id,
                                              defaults={'name': vendor_name,
                                                        'classification_type': vendor_classification})
                    except Exception as e:
                        vendor_obj = None
                        error_vendors.append(vendor_id)
                    if output_id in outputs:
                        output = output_id
                    else:
                        output = None
                    if project_id in projects:
                        project = project_id
                    else:
                        project = None
                    if output and project:
                        n += 1
                        if operating_unit_iso3 in operating_units:
                            operating_unit = operating_unit_iso3
                        else:
                            operating_unit = None
                        data_dict = {
                            'order_id': order_id,
                            'line_nbr': line_nbr,
                            'project_id': project,
                            'output_id': output,
                            'operating_unit_id': operating_unit,
                            'vendor': vendor_obj,
                            'vendor_name': vendor_name,
                            'order_date': order_date_as_datetime,
                            'ref': order_ref,
                            'business_unit': business_unit,
                            'partner': partner,
                            'description': description,
                            'amount': order_amount
                        }
                        try:
                            purchase_order = PurchaseOrder(**data_dict)
                            purchase_orders.append(purchase_order)
                            if n == BULK_INSERT_LIMIT:
                                print("Inserting %s records..." % str(n))
                                total_records_inserted += n
                                PurchaseOrder.objects.bulk_create(purchase_orders)
                                purchase_orders = []
                                n = 0
                        except Exception as e:
                            error_rows.append(e)
                row_iter += 1
            total_records_inserted += n
            print("Inserting %s records..." % str(n))
            PurchaseOrder.objects.bulk_create(purchase_orders)
            print("Total records inserted: ", total_records_inserted)
            update_admin_log(JOBS.upload_purchase_order, start_time,
                             file_name="report_po_history.xlsx",
                             end_time=datetime.datetime.now(),
                             status=LOG_STATUSES.successful)
            print("Total records: " + str(row_iter))
            write_file(("Total rows: %s" % row_iter))
            write_file(("Error rows: %s" % error_rows))
            write_file(("\nError error_vendors: %s" % error_vendors))
        except Exception as e:
            update_admin_log(JOBS.upload_purchase_order, start_time,
                             file_name="report_po_history.xlsx",
                             end_time=datetime.datetime.now(),
                             status=LOG_STATUSES.failed,
                             message=str(e))
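
    # Kept for reference and not called from do(): this earlier version
    # resolved outputs, projects and operating units with per-row queries and
    # only batched the insert at the end.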
    def purchase_orders_old(self):
        import time
        start_time = datetime.datetime.now()
        try:
            add_admin_log(JOBS.upload_purchase_order, file_name="report_po.xlsx",
                          start_time=start_time)
            file_path = settings.CSV_UPLOAD_DIR + "/report_po.xlsx"
            workbook = xlrd.open_workbook(file_path)
            sheet = workbook.sheet_by_index(0)
            row_iter = 0
            error_rows = []
            error_vendors = []
            purchase_orders = []
            for row in range(sheet.nrows):
                if row_iter != 0:
                    operating_unit_iso3 = sheet.cell_value(row, 0)
                    project_id = sheet.cell_value(row, 2)
                    output_id = sheet.cell_value(row, 5)
                    business_unit = sheet.cell_value(row, 6)
                    partner = sheet.cell_value(row, 7)
                    vendor_id = sheet.cell_value(row, 8)
                    vendor_name = sheet.cell_value(row, 9)
                    vendor_classification = sheet.cell_value(row, 10)
                    order_id = sheet.cell_value(row, 11)
                    line_nbr = sheet.cell_value(row, 12)
                    description = sheet.cell_value(row, 13)
                    order_date = sheet.cell_value(row, 14)
                    order_amount = sheet.cell_value(row, 15)
                    try:
                        order_ref = sheet.cell_value(row, 16).encode('ascii', 'ignore')
                    except:
                        order_ref = str(sheet.cell_value(row, 16))
                    order_date_as_datetime = self.process_order_date(order_date, workbook.datemode)
                    try:
                        vendor_obj, created = Vendor.objects\
                            .update_or_create(vendor_id=vendor_id,
                                              defaults={'name': vendor_name,
                                                        'classification_type': vendor_classification})
                    except Exception as e:
                        print(e)
                        vendor_obj = None
                        error_vendors.append(vendor_id)
                    try:
                        output = Output.objects.using(db).get(output_id=output_id)
                    except:
                        output = None
                    try:
                        project = Project.objects.using(db).get(project_id=project_id)
                    except:
                        project = None
                    if output and project:
                        try:
                            operating_unit = OperatingUnit.objects.using(db).get(iso3=operating_unit_iso3)
                        except:
                            operating_unit = None
                        data_dict = {
                            'order_id': order_id,
                            'line_nbr': line_nbr,
                            'project': project,
                            'output': output,
                            'operating_unit': operating_unit,
                            'vendor': vendor_obj,
                            'vendor_name': vendor_name,
                            'order_date': order_date_as_datetime,
                            'ref': order_ref,
                            'business_unit': business_unit,
                            'partner': partner,
                            'description': description,
                            'amount': order_amount
                        }
                        try:
                            purchase_order = PurchaseOrder(**data_dict)
                            purchase_orders.append(purchase_order)
                        except Exception as e:
                            error_rows.append(row_iter)
                            print(e)
                row_iter += 1
            total_length = len(purchase_orders)
            n = total_length / BULK_INSERT_LIMIT
            print("total_length: ", total_length)
            print("n: ", n)
            if n > 1:
                offset = 0
                n = int(n) + 1
                print(" : ", n)
                for i in range(n):
                    print(i)
                    limit = offset + BULK_INSERT_LIMIT
                    PurchaseOrder.objects.bulk_create(purchase_orders[offset: limit])
                    time.sleep(2)
                    offset = limit
            else:
                PurchaseOrder.objects.bulk_create(purchase_orders)
            update_admin_log(JOBS.upload_purchase_order, start_time,
                             file_name="report_po.xlsx",
                             end_time=datetime.datetime.now(),
                             status=LOG_STATUSES.successful)
            print("Total records: " + str(row_iter))
            write_file(("Total rows: %s" % row_iter))
            write_file(("Error rows: %s" % error_rows))
            write_file(("\nError error_vendors: %s" % error_vendors))
        except Exception as e:
            update_admin_log(JOBS.upload_purchase_order, start_time,
                             file_name="report_po.xlsx",
                             end_time=datetime.datetime.now(),
                             status=LOG_STATUSES.failed,
                             message=str(e))
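
    # Excel order dates arrive in several shapes (date serial numbers,
    # "01-Jan-20" style strings, ISO timestamps); try each parser in turn and
    # fall back to the raw value if nothing matches.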
    @staticmethod
    def process_order_date(order_date, workbook_datemode):
        try:
            order_date_as_datetime = datetime.datetime(*xlrd.xldate_as_tuple(order_date, workbook_datemode))
        except:
            try:
                order_date_as_datetime = datetime.datetime(*xlrd.xldate_as_tuple(float(order_date),
                                                                                 workbook_datemode))
            except:
                try:
                    order_date_as_datetime = datetime.datetime.strptime(order_date.lower(), "%d-%b-%y")
                except Exception as e:
                    try:
                        order_date_as_datetime = datetime.datetime.strptime(order_date.lower(), "%Y-%m-%d %H:%M:%S")
                    except:
                        order_date_as_datetime = order_date
        return order_date_as_datetime
| 50.101523 | 120 | 0.448379 | 1,723 | 19,740 | 4.839234 | 0.10679 | 0.048573 | 0.075558 | 0.091749 | 0.834253 | 0.829456 | 0.809906 | 0.805949 | 0.801031 | 0.796234 | 0 | 0.00999 | 0.482776 | 19,740 | 393 | 121 | 50.229008 | 0.80666 | 0 | 0 | 0.807163 | 0 | 0 | 0.056231 | 0.003394 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013774 | false | 0 | 0.052342 | 0 | 0.071625 | 0.052342 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b9a3a1804fd51bb7d49ba962f9e60f9298f13e6e | 155 | py | Python | planefigures/__init__.py | OthnielDona/planefigures | 5faa86f9609dc567679f3cb3ad0c543d29b91d83 | [
"MIT"
] | 2 | 2019-07-27T01:01:23.000Z | 2019-07-27T13:23:02.000Z | planefigures/__init__.py | OthnielDona/planefigures | 5faa86f9609dc567679f3cb3ad0c543d29b91d83 | [
"MIT"
] | 3 | 2020-03-24T17:38:59.000Z | 2020-04-29T19:38:54.000Z | planefigures/__init__.py | OthnielDona/planefigures | 5faa86f9609dc567679f3cb3ad0c543d29b91d83 | [
"MIT"
] | null | null | null | from turtle import color
from turtle import circle
from turtle import begin_fill
from turtle import end_fill
from .core import *
from .core import __doc__ | 22.142857 | 29 | 0.825806 | 25 | 155 | 4.88 | 0.4 | 0.327869 | 0.52459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154839 | 155 | 7 | 30 | 22.142857 | 0.931298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b9d3f1c29b58812d125f52a73e80e93683a0fbf3 | 297 | py | Python | ramda/match_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 56 | 2018-08-06T08:44:58.000Z | 2022-03-17T09:49:03.000Z | ramda/match_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 28 | 2019-06-17T11:09:52.000Z | 2022-02-18T16:59:21.000Z | ramda/match_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 5 | 2019-09-18T09:24:38.000Z | 2021-07-21T08:40:23.000Z | from .match import match
from ramda.private.asserts import assert_equal
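

# `match` is curried: match("a")("aa") behaves like match("a", "aa"), and the
# result is the list of regex matches found in the string.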
def match_nocurry_test():
    assert_equal(match("a", "aa"), ["a", "a"])


def match_curry_test():
    assert_equal(match("a")("aa"), ["a", "a"])


def match_curry_regex_test():
    assert_equal(match(r"a+")("aa"), ["aa"])
| 19.8 | 46 | 0.649832 | 45 | 297 | 4.044444 | 0.355556 | 0.241758 | 0.247253 | 0.32967 | 0.417582 | 0.417582 | 0.417582 | 0.417582 | 0.417582 | 0.417582 | 0 | 0 | 0.138047 | 297 | 14 | 47 | 21.214286 | 0.710938 | 0 | 0 | 0 | 0 | 0 | 0.053872 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.375 | true | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
b9e61016457a7bb5ceb0828c638916fac1bbb919 | 111,797 | py | Python | plugins/ibm_resilient_incident/icon_ibm_resilient_incident/actions/create_incident/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 46 | 2019-06-05T20:47:58.000Z | 2022-03-29T10:18:01.000Z | plugins/ibm_resilient_incident/icon_ibm_resilient_incident/actions/create_incident/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 386 | 2019-06-07T20:20:39.000Z | 2022-03-30T17:35:01.000Z | plugins/ibm_resilient_incident/icon_ibm_resilient_incident/actions/create_incident/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | # GENERATED BY KOMAND SDK - DO NOT EDIT
import komand
import json


class Component:
    DESCRIPTION = "Creates an incident"


class Input:
    INCIDENT = "incident"
    ORGANIZATION_ID = "organization_id"


class Output:
    INCIDENT = "incident"


class CreateIncidentInput(komand.Input):
    schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"incident": {
"type": "object",
"title": "Incident",
"description": "The incident to create, in JSON format. Please see the IncidentDTO JSON reference in your Resilient API documentation",
"order": 2
},
"organization_id": {
"type": "number",
"title": "Organization ID",
"description": "The organization ID",
"order": 1
}
},
"required": [
"incident",
"organization_id"
]
}
    """)

    def __init__(self):
        super(self.__class__, self).__init__(self.schema)


class CreateIncidentOutput(komand.Output):
    schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"incident": {
"$ref": "#/definitions/FullIncidentDataDTO",
"title": "Incident",
"description": "Incident",
"order": 1
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"AttachmentDTO": {
"type": "object",
"title": "AttachmentDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 8
},
"content_type": {
"type": "number",
"title": "Content Type",
"order": 3
},
"created": {
"type": "number",
"title": "Created",
"order": 4
},
"creator_id": {
"type": "number",
"title": "Creator Id",
"order": 5
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 9
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 10
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 11
},
"name": {
"type": "string",
"title": "Name",
"order": 2
},
"size": {
"type": "number",
"title": "Size",
"order": 6
},
"task_id": {
"type": "number",
"title": "Task Id",
"order": 12
},
"task_name": {
"type": "string",
"title": "Task Name",
"order": 13
},
"type": {
"type": "string",
"title": "Type",
"order": 14
},
"vers": {
"type": "number",
"title": "Vers",
"order": 7
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
}
}
},
"CommentDTO": {
"type": "object",
"title": "CommentDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 14
},
"children": {
"type": "array",
"title": "Children",
"items": {
"type": "object"
},
"order": 10
},
"comment_perms": {
"$ref": "#/definitions/CommentPermsDTO",
"title": "Comment Perms",
"order": 12
},
"create_date": {
"type": "number",
"title": "Create Date",
"order": 6
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 15
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 16
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 20
},
"is_deleted": {
"type": "boolean",
"title": "Is Deleted",
"order": 13
},
"mentioned_users": {
"type": "array",
"title": "Mentioned Users",
"items": {
"type": "object"
},
"order": 11
},
"modify_date": {
"type": "number",
"title": "Modify Date",
"order": 7
},
"modify_user": {
"$ref": "#/definitions/ModifyUserDTO",
"title": "Modify User",
"order": 8
},
"parent_id": {
"type": "number",
"title": "Parent Id",
"order": 2
},
"task_id": {
"type": "number",
"title": "Task Id",
"order": 17
},
"task_name": {
"type": "string",
"title": "Task Name",
"order": 18
},
"text": {
"$ref": "#/definitions/TextContentDTO",
"title": "Text",
"order": 9
},
"type": {
"type": "object",
"title": "Type",
"order": 19
},
"user_fname": {
"type": "string",
"title": "User Fname",
"order": 4
},
"user_id": {
"type": "object",
"title": "User Id",
"order": 3
},
"user_lname": {
"type": "string",
"title": "User Lname",
"order": 5
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"CommentPermsDTO": {
"type": "object",
"title": "CommentPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 2
},
"update": {
"type": "boolean",
"title": "Update",
"order": 1
}
}
},
"ModifyUserDTO": {
"type": "object",
"title": "ModifyUserDTO",
"properties": {
"first_name": {
"type": "string",
"title": "First Name",
"order": 2
},
"id": {
"type": "object",
"title": "Id",
"order": 1
},
"last_name": {
"type": "string",
"title": "Last Name",
"order": 3
}
}
},
"TextContentDTO": {
"type": "object",
"title": "TextContentDTO",
"properties": {
"content": {
"type": "string",
"title": "Content",
"order": 2
},
"format": {
"type": "string",
"title": "Format",
"order": 1
}
}
}
}
},
"CommentPermsDTO": {
"type": "object",
"title": "CommentPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 2
},
"update": {
"type": "boolean",
"title": "Update",
"order": 1
}
}
},
"FullIncidentDataDTO": {
"type": "object",
"title": "FullIncidentDataDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 8
},
"addr": {
"type": "string",
"title": "Addr",
"order": 10
},
"artifacts": {
"type": "array",
"title": "Artifacts",
"items": {
"$ref": "#/definitions/IncidentArtifactDTO"
},
"order": 6
},
"assessment": {
"type": "string",
"title": "Assessment",
"order": 36
},
"city": {
"type": "string",
"title": "City",
"order": 11
},
"cm": {
"$ref": "#/definitions/IncidentCountsDTO",
"title": "Cm",
"order": 2
},
"comments": {
"type": "array",
"title": "Comments",
"items": {
"$ref": "#/definitions/CommentDTO"
},
"order": 7
},
"confirmed": {
"type": "boolean",
"title": "Confirmed",
"order": 33
},
"country": {
"type": "number",
"title": "Country",
"order": 25
},
"create_date": {
"type": "number",
"title": "Create Date",
"order": 47
},
"creator": {
"$ref": "#/definitions/JustUserDTO",
"title": "Creator",
"order": 13
},
"creator_id": {
"type": "number",
"title": "Creator Id",
"order": 12
},
"crimestatus_id": {
"type": "number",
"title": "Crimestatus Id",
"order": 14
},
"data_compromised": {
"type": "boolean",
"title": "Data Compromised",
"order": 37
},
"description": {
"type": "string",
"title": "Description",
"order": 48
},
"discovered_date": {
"type": "number",
"title": "Discovered Date",
"order": 45
},
"dtm": {
"type": "object",
"title": "Dtm",
"order": 1
},
"due_date": {
"type": "number",
"title": "Due Date",
"order": 46
},
"employee_involved": {
"type": "boolean",
"title": "Employee Involved",
"order": 15
},
"end_date": {
"type": "number",
"title": "End Date",
"order": 16
},
"exposure": {
"type": "number",
"title": "Exposure",
"order": 27
},
"exposure_dept_id": {
"type": "number",
"title": "Exposure Dept Id",
"order": 17
},
"exposure_individual_name": {
"type": "string",
"title": "Exposure Individual Name",
"order": 18
},
"exposure_type_id": {
"type": "number",
"title": "Exposure Type Id",
"order": 35
},
"exposure_vendor_id": {
"type": "number",
"title": "Exposure Vendor Id",
"order": 19
},
"hipaa": {
"$ref": "#/definitions/HIPAARiskDTO",
"title": "Hipaa",
"order": 4
},
"id": {
"type": "number",
"title": "Id",
"order": 43
},
"inc_training": {
"type": "boolean",
"title": "Inc Training",
"order": 53
},
"incident_type_ids": {
"type": "array",
"title": "Incident Type Ids",
"items": {
"type": "number"
},
"order": 20
},
"is_scenario": {
"type": "boolean",
"title": "Is Scenario",
"order": 29
},
"jurisdiction_name": {
"type": "string",
"title": "Jurisdiction Name",
"order": 21
},
"members": {
"type": "array",
"title": "Members",
"items": {
"type": "number"
},
"order": 30
},
"name": {
"type": "string",
"title": "Name",
"order": 44
},
"negative_pr_likely": {
"type": "boolean",
"title": "Negative Pr Likely",
"order": 31
},
"org_id": {
"type": "number",
"title": "Org Id",
"order": 28
},
"owner_id": {
"type": "number",
"title": "Owner Id",
"order": 49
},
"perms": {
"$ref": "#/definitions/IncidentPermsDTO",
"title": "Perms",
"order": 32
},
"phase_id": {
"type": "number",
"title": "Phase Id",
"order": 50
},
"pii": {
"$ref": "#/definitions/IncidentPIIDTO",
"title": "Pii",
"order": 42
},
"plan_status": {
"type": "string",
"title": "Plan Status",
"order": 52
},
"properties": {
"type": "object",
"title": "Properties",
"order": 39
},
"regulators": {
"$ref": "#/definitions/RegulatorsDTO",
"title": "Regulators",
"order": 3
},
"reporter": {
"type": "string",
"title": "Reporter",
"order": 22
},
"resolution_id": {
"type": "number",
"title": "Resolution Id",
"order": 40
},
"resolution_summary": {
"type": "string",
"title": "Resolution Summary",
"order": 41
},
"risk_attack_vectors": {
"type": "array",
"title": "Risk Attack Vectors",
"items": {
"type": "object"
},
"order": 38
},
"severity_code": {
"type": "number",
"title": "Severity Code",
"order": 51
},
"start_date": {
"type": "number",
"title": "Start Date",
"order": 23
},
"state": {
"type": "object",
"title": "State",
"order": 24
},
"task_changes": {
"$ref": "#/definitions/TaskChangeDTO",
"title": "Task Changes",
"order": 34
},
"tasks": {
"$ref": "#/definitions/TaskDTO",
"title": "Tasks",
"order": 5
},
"vers": {
"type": "number",
"title": "Vers",
"order": 9
},
"zip": {
"type": "string",
"title": "Zip",
"order": 26
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"AttachmentDTO": {
"type": "object",
"title": "AttachmentDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 8
},
"content_type": {
"type": "number",
"title": "Content Type",
"order": 3
},
"created": {
"type": "number",
"title": "Created",
"order": 4
},
"creator_id": {
"type": "number",
"title": "Creator Id",
"order": 5
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 9
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 10
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 11
},
"name": {
"type": "string",
"title": "Name",
"order": 2
},
"size": {
"type": "number",
"title": "Size",
"order": 6
},
"task_id": {
"type": "number",
"title": "Task Id",
"order": 12
},
"task_name": {
"type": "string",
"title": "Task Name",
"order": 13
},
"type": {
"type": "string",
"title": "Type",
"order": 14
},
"vers": {
"type": "number",
"title": "Vers",
"order": 7
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
}
}
},
"CommentDTO": {
"type": "object",
"title": "CommentDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 14
},
"children": {
"type": "array",
"title": "Children",
"items": {
"type": "object"
},
"order": 10
},
"comment_perms": {
"$ref": "#/definitions/CommentPermsDTO",
"title": "Comment Perms",
"order": 12
},
"create_date": {
"type": "number",
"title": "Create Date",
"order": 6
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 15
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 16
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 20
},
"is_deleted": {
"type": "boolean",
"title": "Is Deleted",
"order": 13
},
"mentioned_users": {
"type": "array",
"title": "Mentioned Users",
"items": {
"type": "object"
},
"order": 11
},
"modify_date": {
"type": "number",
"title": "Modify Date",
"order": 7
},
"modify_user": {
"$ref": "#/definitions/ModifyUserDTO",
"title": "Modify User",
"order": 8
},
"parent_id": {
"type": "number",
"title": "Parent Id",
"order": 2
},
"task_id": {
"type": "number",
"title": "Task Id",
"order": 17
},
"task_name": {
"type": "string",
"title": "Task Name",
"order": 18
},
"text": {
"$ref": "#/definitions/TextContentDTO",
"title": "Text",
"order": 9
},
"type": {
"type": "object",
"title": "Type",
"order": 19
},
"user_fname": {
"type": "string",
"title": "User Fname",
"order": 4
},
"user_id": {
"type": "object",
"title": "User Id",
"order": 3
},
"user_lname": {
"type": "string",
"title": "User Lname",
"order": 5
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"CommentPermsDTO": {
"type": "object",
"title": "CommentPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 2
},
"update": {
"type": "boolean",
"title": "Update",
"order": 1
}
}
},
"ModifyUserDTO": {
"type": "object",
"title": "ModifyUserDTO",
"properties": {
"first_name": {
"type": "string",
"title": "First Name",
"order": 2
},
"id": {
"type": "object",
"title": "Id",
"order": 1
},
"last_name": {
"type": "string",
"title": "Last Name",
"order": 3
}
}
},
"TextContentDTO": {
"type": "object",
"title": "TextContentDTO",
"properties": {
"content": {
"type": "string",
"title": "Content",
"order": 2
},
"format": {
"type": "string",
"title": "Format",
"order": 1
}
}
}
}
},
"CommentPermsDTO": {
"type": "object",
"title": "CommentPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 2
},
"update": {
"type": "boolean",
"title": "Update",
"order": 1
}
}
},
"GeoUnassignedDTO": {
"type": "object",
"title": "GeoUnassignedDTO",
"properties": {
"count": {
"type": "number",
"title": "Count",
"order": 2
},
"geo": {
"type": "object",
"title": "Geo",
"order": 1
}
}
},
"HIPAARiskDTO": {
"type": "object",
"title": "HIPAARiskDTO",
"properties": {
"hipaa_acquired": {
"type": "boolean",
"title": "Hipaa Acquired",
"order": 3
},
"hipaa_acquired_comment": {
"type": "string",
"title": "Hipaa Acquired Comment",
"order": 8
},
"hipaa_additional_misuse": {
"type": "boolean",
"title": "Hipaa Additional Misuse",
"order": 4
},
"hipaa_additional_misuse_comment": {
"type": "string",
"title": "Hipaa Additional Misuse Comment",
"order": 9
},
"hipaa_adverse": {
"type": "boolean",
"title": "Hipaa Adverse",
"order": 1
},
"hipaa_adverse_comment": {
"type": "string",
"title": "Hipaa Adverse Comment",
"order": 6
},
"hipaa_breach": {
"type": "boolean",
"title": "Hipaa Breach",
"order": 5
},
"hipaa_breach_comment": {
"type": "string",
"title": "Hipaa Breach Comment",
"order": 10
},
"hipaa_misused": {
"type": "boolean",
"title": "Hipaa Misused",
"order": 2
},
"hipaa_misused_comment": {
"type": "string",
"title": "Hipaa Misused Comment",
"order": 7
}
}
},
"IncidentArtifactDTO": {
"type": "object",
"title": "IncidentArtifactDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 18
},
"attachment": {
"$ref": "#/definitions/AttachmentDTO",
"title": "Attachment",
"order": 7
},
"created": {
"type": "number",
"title": "Created",
"order": 12
},
"creator": {
"$ref": "#/definitions/JustUserDTO",
"title": "Creator",
"order": 5
},
"description": {
"type": "string",
"title": "Description",
"order": 4
},
"hits": {
"type": "array",
"title": "Hits",
"items": {
"$ref": "#/definitions/ThreatHitDTO"
},
"order": 6
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 9
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 10
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 11
},
"location": {
"$ref": "#/definitions/IncidentArtifactLocationDTO",
"title": "Location",
"order": 16
},
"parent_id": {
"type": "number",
"title": "Parent Id",
"order": 8
},
"pending_sources": {
"type": "array",
"title": "Pending Sources",
"items": {
"type": "number"
},
"order": 13
},
"perms": {
"$ref": "#/definitions/IncidentArtifactPermsDTO",
"title": "Perms",
"order": 14
},
"properties": {
"type": "array",
"title": "Properties",
"items": {
"$ref": "#/definitions/IncidentArtifactPropertyDTO"
},
"order": 15
},
"relating": {
"type": "boolean",
"title": "Relating",
"order": 19
},
"type": {
"type": "number",
"title": "Type",
"order": 2
},
"value": {
"type": "string",
"title": "Value",
"order": 3
},
"whois": {
"$ref": "#/definitions/WhoisDTO",
"title": "Whois",
"order": 17
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"AttachmentDTO": {
"type": "object",
"title": "AttachmentDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 8
},
"content_type": {
"type": "number",
"title": "Content Type",
"order": 3
},
"created": {
"type": "number",
"title": "Created",
"order": 4
},
"creator_id": {
"type": "number",
"title": "Creator Id",
"order": 5
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 9
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 10
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 11
},
"name": {
"type": "string",
"title": "Name",
"order": 2
},
"size": {
"type": "number",
"title": "Size",
"order": 6
},
"task_id": {
"type": "number",
"title": "Task Id",
"order": 12
},
"task_name": {
"type": "string",
"title": "Task Name",
"order": 13
},
"type": {
"type": "string",
"title": "Type",
"order": 14
},
"vers": {
"type": "number",
"title": "Vers",
"order": 7
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
}
}
},
"IncidentArtifactLocationDTO": {
"type": "object",
"title": "IncidentArtifactLocationDTO",
"properties": {
"city": {
"type": "string",
"title": "City",
"order": 2
},
"country": {
"type": "string",
"title": "Country",
"order": 4
},
"latlng": {
"$ref": "#/definitions/LatLngDTO",
"title": "Latlng",
"order": 1
},
"postalCode": {
"type": "string",
"title": "PostalCode",
"order": 5
},
"state": {
"type": "string",
"title": "State",
"order": 3
}
},
"definitions": {
"LatLngDTO": {
"type": "object",
"title": "LatLngDTO",
"properties": {
"lat": {
"type": "number",
"title": "Lat",
"order": 1
},
"lng": {
"type": "number",
"title": "Lng",
"order": 2
}
}
}
}
},
"IncidentArtifactPermsDTO": {
"type": "object",
"title": "IncidentArtifactPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 3
},
"read": {
"type": "boolean",
"title": "Read",
"order": 1
},
"write": {
"type": "boolean",
"title": "Write",
"order": 2
}
}
},
"IncidentArtifactPropertyDTO": {
"type": "object",
"title": "IncidentArtifactPropertyDTO",
"properties": {
"name": {
"type": "string",
"title": "Name",
"order": 1
},
"value": {
"type": "string",
"title": "Value",
"order": 2
}
}
},
"JustUserDTO": {
"type": "object",
"title": "JustUserDTO",
"properties": {
"cell": {
"type": "string",
"title": "Cell",
"order": 7
},
"email": {
"type": "string",
"title": "Email",
"order": 5
},
"fname": {
"type": "string",
"title": "Fname",
"order": 2
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"is_external": {
"type": "boolean",
"title": "Is External",
"order": 12
},
"last_login": {
"type": "number",
"title": "Last Login",
"order": 10
},
"lname": {
"type": "string",
"title": "Lname",
"order": 3
},
"locked": {
"type": "boolean",
"title": "Locked",
"order": 11
},
"notes": {
"type": "string",
"title": "Notes",
"order": 9
},
"phone": {
"type": "string",
"title": "Phone",
"order": 6
},
"status": {
"type": "string",
"title": "Status",
"order": 4
},
"title": {
"type": "string",
"title": "Title",
"order": 8
}
}
},
"LatLngDTO": {
"type": "object",
"title": "LatLngDTO",
"properties": {
"lat": {
"type": "number",
"title": "Lat",
"order": 1
},
"lng": {
"type": "number",
"title": "Lng",
"order": 2
}
}
},
"ThreatHitDTO": {
"type": "object",
"title": "ThreatHitDTO",
"properties": {
"active": {
"type": "boolean",
"title": "Active",
"order": 5
},
"artifact_type_id": {
"type": "object",
"title": "Artifact Type Id",
"order": 4
},
"id": {
"type": "string",
"title": "Id",
"order": 1
},
"threat_source_id": {
"type": "object",
"title": "Threat Source Id",
"order": 3
},
"value": {
"type": "string",
"title": "Value",
"order": 2
}
}
},
"WhoisDTO": {
"type": "object",
"title": "WhoisDTO",
"properties": {
"invalid": {
"type": "boolean",
"title": "Invalid",
"order": 3
},
"pending": {
"type": "boolean",
"title": "Pending",
"order": 2
},
"raw": {
"type": "string",
"title": "Raw",
"order": 1
}
}
}
}
},
"IncidentArtifactLocationDTO": {
"type": "object",
"title": "IncidentArtifactLocationDTO",
"properties": {
"city": {
"type": "string",
"title": "City",
"order": 2
},
"country": {
"type": "string",
"title": "Country",
"order": 4
},
"latlng": {
"$ref": "#/definitions/LatLngDTO",
"title": "Latlng",
"order": 1
},
"postalCode": {
"type": "string",
"title": "PostalCode",
"order": 5
},
"state": {
"type": "string",
"title": "State",
"order": 3
}
},
"definitions": {
"LatLngDTO": {
"type": "object",
"title": "LatLngDTO",
"properties": {
"lat": {
"type": "number",
"title": "Lat",
"order": 1
},
"lng": {
"type": "number",
"title": "Lng",
"order": 2
}
}
}
}
},
"IncidentArtifactPermsDTO": {
"type": "object",
"title": "IncidentArtifactPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 3
},
"read": {
"type": "boolean",
"title": "Read",
"order": 1
},
"write": {
"type": "boolean",
"title": "Write",
"order": 2
}
}
},
"IncidentArtifactPropertyDTO": {
"type": "object",
"title": "IncidentArtifactPropertyDTO",
"properties": {
"name": {
"type": "string",
"title": "Name",
"order": 1
},
"value": {
"type": "string",
"title": "Value",
"order": 2
}
}
},
"IncidentCountsDTO": {
"type": "object",
"title": "IncidentCountsDTO",
"properties": {
"geo_counts": {
"type": "object",
"title": "Geo Counts",
"order": 3
},
"total": {
"type": "number",
"title": "Total",
"order": 2
},
"unassigneds": {
"type": "array",
"title": "Unassigneds",
"items": {
"$ref": "#/definitions/GeoUnassignedDTO"
},
"order": 1
}
},
"definitions": {
"GeoUnassignedDTO": {
"type": "object",
"title": "GeoUnassignedDTO",
"properties": {
"count": {
"type": "number",
"title": "Count",
"order": 2
},
"geo": {
"type": "object",
"title": "Geo",
"order": 1
}
}
}
}
},
"IncidentPIIDTO": {
"type": "object",
"title": "IncidentPIIDTO",
"properties": {
"assessment": {
"type": "string",
"title": "Assessment",
"order": 7
},
"data_compromised": {
"type": "boolean",
"title": "Data Compromised",
"order": 1
},
"data_contained": {
"type": "boolean",
"title": "Data Contained",
"order": 4
},
"data_encrypted": {
"type": "boolean",
"title": "Data Encrypted",
"order": 3
},
"data_format": {
"type": "number",
"title": "Data Format",
"order": 6
},
"data_source_ids": {
"type": "array",
"title": "Data Source Ids",
"items": {
"type": "number"
},
"order": 5
},
"exposure": {
"type": "number",
"title": "Exposure",
"order": 8
},
"gdpr_harm_risk": {
"type": "object",
"title": "Gdpr Harm Risk",
"order": 9
},
"gdpr_lawful_data_processing_categories": {
"type": "array",
"title": "Gdpr Lawful Data Processing Categories",
"items": {
"type": "object"
},
"order": 10
},
"harmstatus_id": {
"type": "number",
"title": "Harmstatus Id",
"order": 2
}
}
},
"IncidentPermsDTO": {
"type": "object",
"title": "IncidentPermsDTO",
"properties": {
"assign": {
"type": "boolean",
"title": "Assign",
"order": 9
},
"attach_file": {
"type": "boolean",
"title": "Attach File",
"order": 12
},
"change_members": {
"type": "boolean",
"title": "Change Members",
"order": 11
},
"close": {
"type": "boolean",
"title": "Close",
"order": 10
},
"comment": {
"type": "boolean",
"title": "Comment",
"order": 8
},
"create_artifacts": {
"type": "boolean",
"title": "Create Artifacts",
"order": 3
},
"create_milestones": {
"type": "boolean",
"title": "Create Milestones",
"order": 1
},
"delete": {
"type": "boolean",
"title": "Delete",
"order": 5
},
"delete_attachments": {
"type": "boolean",
"title": "Delete Attachments",
"order": 14
},
"list_artifacts": {
"type": "boolean",
"title": "List Artifacts",
"order": 4
},
"list_milestones": {
"type": "boolean",
"title": "List Milestones",
"order": 2
},
"read": {
"type": "boolean",
"title": "Read",
"order": 6
},
"read_attachments": {
"type": "boolean",
"title": "Read Attachments",
"order": 13
},
"write": {
"type": "boolean",
"title": "Write",
"order": 7
}
}
},
"JustUserDTO": {
"type": "object",
"title": "JustUserDTO",
"properties": {
"cell": {
"type": "string",
"title": "Cell",
"order": 7
},
"email": {
"type": "string",
"title": "Email",
"order": 5
},
"fname": {
"type": "string",
"title": "Fname",
"order": 2
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"is_external": {
"type": "boolean",
"title": "Is External",
"order": 12
},
"last_login": {
"type": "number",
"title": "Last Login",
"order": 10
},
"lname": {
"type": "string",
"title": "Lname",
"order": 3
},
"locked": {
"type": "boolean",
"title": "Locked",
"order": 11
},
"notes": {
"type": "string",
"title": "Notes",
"order": 9
},
"phone": {
"type": "string",
"title": "Phone",
"order": 6
},
"status": {
"type": "string",
"title": "Status",
"order": 4
},
"title": {
"type": "string",
"title": "Title",
"order": 8
}
}
},
"LatLngDTO": {
"type": "object",
"title": "LatLngDTO",
"properties": {
"lat": {
"type": "number",
"title": "Lat",
"order": 1
},
"lng": {
"type": "number",
"title": "Lng",
"order": 2
}
}
},
"ModifyUserDTO": {
"type": "object",
"title": "ModifyUserDTO",
"properties": {
"first_name": {
"type": "string",
"title": "First Name",
"order": 2
},
"id": {
"type": "object",
"title": "Id",
"order": 1
},
"last_name": {
"type": "string",
"title": "Last Name",
"order": 3
}
}
},
"NamedEntityDTO": {
"type": "object",
"title": "NamedEntityDTO",
"properties": {
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"RegulatorsDTO": {
"type": "object",
"title": "RegulatorsDTO",
"properties": {
"ids": {
"type": "array",
"title": "Ids",
"items": {
"type": "number"
},
"order": 1
}
}
},
"TaskChangeDTO": {
"type": "object",
"title": "TaskChangeDTO",
"properties": {
"added": {
"type": "array",
"title": "Added",
"items": {
"$ref": "#/definitions/NamedEntityDTO"
},
"order": 1
},
"removed": {
"type": "array",
"title": "Removed",
"items": {
"$ref": "#/definitions/NamedEntityDTO"
},
"order": 2
}
},
"definitions": {
"NamedEntityDTO": {
"type": "object",
"title": "NamedEntityDTO",
"properties": {
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
}
}
},
"TaskDTO": {
"type": "object",
"title": "TaskDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 27
},
"active": {
"type": "boolean",
"title": "Active",
"order": 21
},
"at_id": {
"type": "number",
"title": "At Id",
"order": 20
},
"attachments_count": {
"type": "number",
"title": "Attachments Count",
"order": 31
},
"cat_name": {
"type": "string",
"title": "Cat Name",
"order": 16
},
"category_id": {
"type": "object",
"title": "Category Id",
"order": 29
},
"closed_date": {
"type": "number",
"title": "Closed Date",
"order": 26
},
"creator": {
"$ref": "#/definitions/JustUserDTO",
"title": "Creator",
"order": 24
},
"custom": {
"type": "boolean",
"title": "Custom",
"order": 4
},
"description": {
"type": "string",
"title": "Description",
"order": 17
},
"due_date": {
"type": "number",
"title": "Due Date",
"order": 7
},
"frozen": {
"type": "boolean",
"title": "Frozen",
"order": 13
},
"id": {
"type": "number",
"title": "Id",
"order": 10
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 5
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 1
},
"inc_owner_id": {
"type": "number",
"title": "Inc Owner Id",
"order": 6
},
"inc_training": {
"type": "boolean",
"title": "Inc Training",
"order": 12
},
"init_date": {
"type": "number",
"title": "Init Date",
"order": 18
},
"instr_text": {
"type": "string",
"title": "Instr Text",
"order": 19
},
"members": {
"type": "array",
"title": "Members",
"items": {
"type": "number"
},
"order": 22
},
"name": {
"type": "string",
"title": "Name",
"order": 2
},
"notes": {
"type": "array",
"title": "Notes",
"items": {
"$ref": "#/definitions/CommentDTO"
},
"order": 25
},
"notes_count": {
"type": "number",
"title": "Notes Count",
"order": 30
},
"owner_fname": {
"type": "string",
"title": "Owner Fname",
"order": 14
},
"owner_id": {
"type": "number",
"title": "Owner Id",
"order": 9
},
"owner_lname": {
"type": "string",
"title": "Owner Lname",
"order": 15
},
"perms": {
"$ref": "#/definitions/TaskPermsDTO",
"title": "Perms",
"order": 23
},
"phase_id": {
"type": "number",
"title": "Phase Id",
"order": 28
},
"regs": {
"type": "object",
"title": "Regs",
"order": 3
},
"required": {
"type": "boolean",
"title": "Required",
"order": 8
},
"status": {
"type": "string",
"title": "Status",
"order": 11
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"CommentDTO": {
"type": "object",
"title": "CommentDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 14
},
"children": {
"type": "array",
"title": "Children",
"items": {
"type": "object"
},
"order": 10
},
"comment_perms": {
"$ref": "#/definitions/CommentPermsDTO",
"title": "Comment Perms",
"order": 12
},
"create_date": {
"type": "number",
"title": "Create Date",
"order": 6
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 15
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 16
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 20
},
"is_deleted": {
"type": "boolean",
"title": "Is Deleted",
"order": 13
},
"mentioned_users": {
"type": "array",
"title": "Mentioned Users",
"items": {
"type": "object"
},
"order": 11
},
"modify_date": {
"type": "number",
"title": "Modify Date",
"order": 7
},
"modify_user": {
"$ref": "#/definitions/ModifyUserDTO",
"title": "Modify User",
"order": 8
},
"parent_id": {
"type": "number",
"title": "Parent Id",
"order": 2
},
"task_id": {
"type": "number",
"title": "Task Id",
"order": 17
},
"task_name": {
"type": "string",
"title": "Task Name",
"order": 18
},
"text": {
"$ref": "#/definitions/TextContentDTO",
"title": "Text",
"order": 9
},
"type": {
"type": "object",
"title": "Type",
"order": 19
},
"user_fname": {
"type": "string",
"title": "User Fname",
"order": 4
},
"user_id": {
"type": "object",
"title": "User Id",
"order": 3
},
"user_lname": {
"type": "string",
"title": "User Lname",
"order": 5
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"CommentPermsDTO": {
"type": "object",
"title": "CommentPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 2
},
"update": {
"type": "boolean",
"title": "Update",
"order": 1
}
}
},
"ModifyUserDTO": {
"type": "object",
"title": "ModifyUserDTO",
"properties": {
"first_name": {
"type": "string",
"title": "First Name",
"order": 2
},
"id": {
"type": "object",
"title": "Id",
"order": 1
},
"last_name": {
"type": "string",
"title": "Last Name",
"order": 3
}
}
},
"TextContentDTO": {
"type": "object",
"title": "TextContentDTO",
"properties": {
"content": {
"type": "string",
"title": "Content",
"order": 2
},
"format": {
"type": "string",
"title": "Format",
"order": 1
}
}
}
}
},
"CommentPermsDTO": {
"type": "object",
"title": "CommentPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 2
},
"update": {
"type": "boolean",
"title": "Update",
"order": 1
}
}
},
"JustUserDTO": {
"type": "object",
"title": "JustUserDTO",
"properties": {
"cell": {
"type": "string",
"title": "Cell",
"order": 7
},
"email": {
"type": "string",
"title": "Email",
"order": 5
},
"fname": {
"type": "string",
"title": "Fname",
"order": 2
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"is_external": {
"type": "boolean",
"title": "Is External",
"order": 12
},
"last_login": {
"type": "number",
"title": "Last Login",
"order": 10
},
"lname": {
"type": "string",
"title": "Lname",
"order": 3
},
"locked": {
"type": "boolean",
"title": "Locked",
"order": 11
},
"notes": {
"type": "string",
"title": "Notes",
"order": 9
},
"phone": {
"type": "string",
"title": "Phone",
"order": 6
},
"status": {
"type": "string",
"title": "Status",
"order": 4
},
"title": {
"type": "string",
"title": "Title",
"order": 8
}
}
},
"ModifyUserDTO": {
"type": "object",
"title": "ModifyUserDTO",
"properties": {
"first_name": {
"type": "string",
"title": "First Name",
"order": 2
},
"id": {
"type": "object",
"title": "Id",
"order": 1
},
"last_name": {
"type": "string",
"title": "Last Name",
"order": 3
}
}
},
"TaskPermsDTO": {
"type": "object",
"title": "TaskPermsDTO",
"properties": {
"assign": {
"type": "boolean",
"title": "Assign",
"order": 4
},
"attach_file": {
"type": "boolean",
"title": "Attach File",
"order": 7
},
"change_members": {
"type": "boolean",
"title": "Change Members",
"order": 6
},
"close": {
"type": "boolean",
"title": "Close",
"order": 5
},
"comment": {
"type": "boolean",
"title": "Comment",
"order": 3
},
"delete_attachments": {
"type": "boolean",
"title": "Delete Attachments",
"order": 9
},
"read": {
"type": "boolean",
"title": "Read",
"order": 1
},
"read_attachments": {
"type": "boolean",
"title": "Read Attachments",
"order": 8
},
"write": {
"type": "boolean",
"title": "Write",
"order": 2
}
}
},
"TextContentDTO": {
"type": "object",
"title": "TextContentDTO",
"properties": {
"content": {
"type": "string",
"title": "Content",
"order": 2
},
"format": {
"type": "string",
"title": "Format",
"order": 1
}
}
}
}
},
"TaskPermsDTO": {
"type": "object",
"title": "TaskPermsDTO",
"properties": {
"assign": {
"type": "boolean",
"title": "Assign",
"order": 4
},
"attach_file": {
"type": "boolean",
"title": "Attach File",
"order": 7
},
"change_members": {
"type": "boolean",
"title": "Change Members",
"order": 6
},
"close": {
"type": "boolean",
"title": "Close",
"order": 5
},
"comment": {
"type": "boolean",
"title": "Comment",
"order": 3
},
"delete_attachments": {
"type": "boolean",
"title": "Delete Attachments",
"order": 9
},
"read": {
"type": "boolean",
"title": "Read",
"order": 1
},
"read_attachments": {
"type": "boolean",
"title": "Read Attachments",
"order": 8
},
"write": {
"type": "boolean",
"title": "Write",
"order": 2
}
}
},
"TextContentDTO": {
"type": "object",
"title": "TextContentDTO",
"properties": {
"content": {
"type": "string",
"title": "Content",
"order": 2
},
"format": {
"type": "string",
"title": "Format",
"order": 1
}
}
},
"ThreatHitDTO": {
"type": "object",
"title": "ThreatHitDTO",
"properties": {
"active": {
"type": "boolean",
"title": "Active",
"order": 5
},
"artifact_type_id": {
"type": "object",
"title": "Artifact Type Id",
"order": 4
},
"id": {
"type": "string",
"title": "Id",
"order": 1
},
"threat_source_id": {
"type": "object",
"title": "Threat Source Id",
"order": 3
},
"value": {
"type": "string",
"title": "Value",
"order": 2
}
}
},
"WhoisDTO": {
"type": "object",
"title": "WhoisDTO",
"properties": {
"invalid": {
"type": "boolean",
"title": "Invalid",
"order": 3
},
"pending": {
"type": "boolean",
"title": "Pending",
"order": 2
},
"raw": {
"type": "string",
"title": "Raw",
"order": 1
}
}
}
}
},
"GeoUnassignedDTO": {
"type": "object",
"title": "GeoUnassignedDTO",
"properties": {
"count": {
"type": "number",
"title": "Count",
"order": 2
},
"geo": {
"type": "object",
"title": "Geo",
"order": 1
}
}
},
"HIPAARiskDTO": {
"type": "object",
"title": "HIPAARiskDTO",
"properties": {
"hipaa_acquired": {
"type": "boolean",
"title": "Hipaa Acquired",
"order": 3
},
"hipaa_acquired_comment": {
"type": "string",
"title": "Hipaa Acquired Comment",
"order": 8
},
"hipaa_additional_misuse": {
"type": "boolean",
"title": "Hipaa Additional Misuse",
"order": 4
},
"hipaa_additional_misuse_comment": {
"type": "string",
"title": "Hipaa Additional Misuse Comment",
"order": 9
},
"hipaa_adverse": {
"type": "boolean",
"title": "Hipaa Adverse",
"order": 1
},
"hipaa_adverse_comment": {
"type": "string",
"title": "Hipaa Adverse Comment",
"order": 6
},
"hipaa_breach": {
"type": "boolean",
"title": "Hipaa Breach",
"order": 5
},
"hipaa_breach_comment": {
"type": "string",
"title": "Hipaa Breach Comment",
"order": 10
},
"hipaa_misused": {
"type": "boolean",
"title": "Hipaa Misused",
"order": 2
},
"hipaa_misused_comment": {
"type": "string",
"title": "Hipaa Misused Comment",
"order": 7
}
}
},
"IncidentArtifactDTO": {
"type": "object",
"title": "IncidentArtifactDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 18
},
"attachment": {
"$ref": "#/definitions/AttachmentDTO",
"title": "Attachment",
"order": 7
},
"created": {
"type": "number",
"title": "Created",
"order": 12
},
"creator": {
"$ref": "#/definitions/JustUserDTO",
"title": "Creator",
"order": 5
},
"description": {
"type": "string",
"title": "Description",
"order": 4
},
"hits": {
"type": "array",
"title": "Hits",
"items": {
"$ref": "#/definitions/ThreatHitDTO"
},
"order": 6
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 9
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 10
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 11
},
"location": {
"$ref": "#/definitions/IncidentArtifactLocationDTO",
"title": "Location",
"order": 16
},
"parent_id": {
"type": "number",
"title": "Parent Id",
"order": 8
},
"pending_sources": {
"type": "array",
"title": "Pending Sources",
"items": {
"type": "number"
},
"order": 13
},
"perms": {
"$ref": "#/definitions/IncidentArtifactPermsDTO",
"title": "Perms",
"order": 14
},
"properties": {
"type": "array",
"title": "Properties",
"items": {
"$ref": "#/definitions/IncidentArtifactPropertyDTO"
},
"order": 15
},
"relating": {
"type": "boolean",
"title": "Relating",
"order": 19
},
"type": {
"type": "number",
"title": "Type",
"order": 2
},
"value": {
"type": "string",
"title": "Value",
"order": 3
},
"whois": {
"$ref": "#/definitions/WhoisDTO",
"title": "Whois",
"order": 17
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"AttachmentDTO": {
"type": "object",
"title": "AttachmentDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 8
},
"content_type": {
"type": "number",
"title": "Content Type",
"order": 3
},
"created": {
"type": "number",
"title": "Created",
"order": 4
},
"creator_id": {
"type": "number",
"title": "Creator Id",
"order": 5
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 9
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 10
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 11
},
"name": {
"type": "string",
"title": "Name",
"order": 2
},
"size": {
"type": "number",
"title": "Size",
"order": 6
},
"task_id": {
"type": "number",
"title": "Task Id",
"order": 12
},
"task_name": {
"type": "string",
"title": "Task Name",
"order": 13
},
"type": {
"type": "string",
"title": "Type",
"order": 14
},
"vers": {
"type": "number",
"title": "Vers",
"order": 7
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
}
}
},
"IncidentArtifactLocationDTO": {
"type": "object",
"title": "IncidentArtifactLocationDTO",
"properties": {
"city": {
"type": "string",
"title": "City",
"order": 2
},
"country": {
"type": "string",
"title": "Country",
"order": 4
},
"latlng": {
"$ref": "#/definitions/LatLngDTO",
"title": "Latlng",
"order": 1
},
"postalCode": {
"type": "string",
"title": "PostalCode",
"order": 5
},
"state": {
"type": "string",
"title": "State",
"order": 3
}
},
"definitions": {
"LatLngDTO": {
"type": "object",
"title": "LatLngDTO",
"properties": {
"lat": {
"type": "number",
"title": "Lat",
"order": 1
},
"lng": {
"type": "number",
"title": "Lng",
"order": 2
}
}
}
}
},
"IncidentArtifactPermsDTO": {
"type": "object",
"title": "IncidentArtifactPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 3
},
"read": {
"type": "boolean",
"title": "Read",
"order": 1
},
"write": {
"type": "boolean",
"title": "Write",
"order": 2
}
}
},
"IncidentArtifactPropertyDTO": {
"type": "object",
"title": "IncidentArtifactPropertyDTO",
"properties": {
"name": {
"type": "string",
"title": "Name",
"order": 1
},
"value": {
"type": "string",
"title": "Value",
"order": 2
}
}
},
"JustUserDTO": {
"type": "object",
"title": "JustUserDTO",
"properties": {
"cell": {
"type": "string",
"title": "Cell",
"order": 7
},
"email": {
"type": "string",
"title": "Email",
"order": 5
},
"fname": {
"type": "string",
"title": "Fname",
"order": 2
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"is_external": {
"type": "boolean",
"title": "Is External",
"order": 12
},
"last_login": {
"type": "number",
"title": "Last Login",
"order": 10
},
"lname": {
"type": "string",
"title": "Lname",
"order": 3
},
"locked": {
"type": "boolean",
"title": "Locked",
"order": 11
},
"notes": {
"type": "string",
"title": "Notes",
"order": 9
},
"phone": {
"type": "string",
"title": "Phone",
"order": 6
},
"status": {
"type": "string",
"title": "Status",
"order": 4
},
"title": {
"type": "string",
"title": "Title",
"order": 8
}
}
},
"LatLngDTO": {
"type": "object",
"title": "LatLngDTO",
"properties": {
"lat": {
"type": "number",
"title": "Lat",
"order": 1
},
"lng": {
"type": "number",
"title": "Lng",
"order": 2
}
}
},
"ThreatHitDTO": {
"type": "object",
"title": "ThreatHitDTO",
"properties": {
"active": {
"type": "boolean",
"title": "Active",
"order": 5
},
"artifact_type_id": {
"type": "object",
"title": "Artifact Type Id",
"order": 4
},
"id": {
"type": "string",
"title": "Id",
"order": 1
},
"threat_source_id": {
"type": "object",
"title": "Threat Source Id",
"order": 3
},
"value": {
"type": "string",
"title": "Value",
"order": 2
}
}
},
"WhoisDTO": {
"type": "object",
"title": "WhoisDTO",
"properties": {
"invalid": {
"type": "boolean",
"title": "Invalid",
"order": 3
},
"pending": {
"type": "boolean",
"title": "Pending",
"order": 2
},
"raw": {
"type": "string",
"title": "Raw",
"order": 1
}
}
}
}
},
"IncidentArtifactLocationDTO": {
"type": "object",
"title": "IncidentArtifactLocationDTO",
"properties": {
"city": {
"type": "string",
"title": "City",
"order": 2
},
"country": {
"type": "string",
"title": "Country",
"order": 4
},
"latlng": {
"$ref": "#/definitions/LatLngDTO",
"title": "Latlng",
"order": 1
},
"postalCode": {
"type": "string",
"title": "PostalCode",
"order": 5
},
"state": {
"type": "string",
"title": "State",
"order": 3
}
},
"definitions": {
"LatLngDTO": {
"type": "object",
"title": "LatLngDTO",
"properties": {
"lat": {
"type": "number",
"title": "Lat",
"order": 1
},
"lng": {
"type": "number",
"title": "Lng",
"order": 2
}
}
}
}
},
"IncidentArtifactPermsDTO": {
"type": "object",
"title": "IncidentArtifactPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 3
},
"read": {
"type": "boolean",
"title": "Read",
"order": 1
},
"write": {
"type": "boolean",
"title": "Write",
"order": 2
}
}
},
"IncidentArtifactPropertyDTO": {
"type": "object",
"title": "IncidentArtifactPropertyDTO",
"properties": {
"name": {
"type": "string",
"title": "Name",
"order": 1
},
"value": {
"type": "string",
"title": "Value",
"order": 2
}
}
},
"IncidentCountsDTO": {
"type": "object",
"title": "IncidentCountsDTO",
"properties": {
"geo_counts": {
"type": "object",
"title": "Geo Counts",
"order": 3
},
"total": {
"type": "number",
"title": "Total",
"order": 2
},
"unassigneds": {
"type": "array",
"title": "Unassigneds",
"items": {
"$ref": "#/definitions/GeoUnassignedDTO"
},
"order": 1
}
},
"definitions": {
"GeoUnassignedDTO": {
"type": "object",
"title": "GeoUnassignedDTO",
"properties": {
"count": {
"type": "number",
"title": "Count",
"order": 2
},
"geo": {
"type": "object",
"title": "Geo",
"order": 1
}
}
}
}
},
"IncidentPIIDTO": {
"type": "object",
"title": "IncidentPIIDTO",
"properties": {
"assessment": {
"type": "string",
"title": "Assessment",
"order": 7
},
"data_compromised": {
"type": "boolean",
"title": "Data Compromised",
"order": 1
},
"data_contained": {
"type": "boolean",
"title": "Data Contained",
"order": 4
},
"data_encrypted": {
"type": "boolean",
"title": "Data Encrypted",
"order": 3
},
"data_format": {
"type": "number",
"title": "Data Format",
"order": 6
},
"data_source_ids": {
"type": "array",
"title": "Data Source Ids",
"items": {
"type": "number"
},
"order": 5
},
"exposure": {
"type": "number",
"title": "Exposure",
"order": 8
},
"gdpr_harm_risk": {
"type": "object",
"title": "Gdpr Harm Risk",
"order": 9
},
"gdpr_lawful_data_processing_categories": {
"type": "array",
"title": "Gdpr Lawful Data Processing Categories",
"items": {
"type": "object"
},
"order": 10
},
"harmstatus_id": {
"type": "number",
"title": "Harmstatus Id",
"order": 2
}
}
},
"IncidentPermsDTO": {
"type": "object",
"title": "IncidentPermsDTO",
"properties": {
"assign": {
"type": "boolean",
"title": "Assign",
"order": 9
},
"attach_file": {
"type": "boolean",
"title": "Attach File",
"order": 12
},
"change_members": {
"type": "boolean",
"title": "Change Members",
"order": 11
},
"close": {
"type": "boolean",
"title": "Close",
"order": 10
},
"comment": {
"type": "boolean",
"title": "Comment",
"order": 8
},
"create_artifacts": {
"type": "boolean",
"title": "Create Artifacts",
"order": 3
},
"create_milestones": {
"type": "boolean",
"title": "Create Milestones",
"order": 1
},
"delete": {
"type": "boolean",
"title": "Delete",
"order": 5
},
"delete_attachments": {
"type": "boolean",
"title": "Delete Attachments",
"order": 14
},
"list_artifacts": {
"type": "boolean",
"title": "List Artifacts",
"order": 4
},
"list_milestones": {
"type": "boolean",
"title": "List Milestones",
"order": 2
},
"read": {
"type": "boolean",
"title": "Read",
"order": 6
},
"read_attachments": {
"type": "boolean",
"title": "Read Attachments",
"order": 13
},
"write": {
"type": "boolean",
"title": "Write",
"order": 7
}
}
},
"JustUserDTO": {
"type": "object",
"title": "JustUserDTO",
"properties": {
"cell": {
"type": "string",
"title": "Cell",
"order": 7
},
"email": {
"type": "string",
"title": "Email",
"order": 5
},
"fname": {
"type": "string",
"title": "Fname",
"order": 2
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"is_external": {
"type": "boolean",
"title": "Is External",
"order": 12
},
"last_login": {
"type": "number",
"title": "Last Login",
"order": 10
},
"lname": {
"type": "string",
"title": "Lname",
"order": 3
},
"locked": {
"type": "boolean",
"title": "Locked",
"order": 11
},
"notes": {
"type": "string",
"title": "Notes",
"order": 9
},
"phone": {
"type": "string",
"title": "Phone",
"order": 6
},
"status": {
"type": "string",
"title": "Status",
"order": 4
},
"title": {
"type": "string",
"title": "Title",
"order": 8
}
}
},
"LatLngDTO": {
"type": "object",
"title": "LatLngDTO",
"properties": {
"lat": {
"type": "number",
"title": "Lat",
"order": 1
},
"lng": {
"type": "number",
"title": "Lng",
"order": 2
}
}
},
"ModifyUserDTO": {
"type": "object",
"title": "ModifyUserDTO",
"properties": {
"first_name": {
"type": "string",
"title": "First Name",
"order": 2
},
"id": {
"type": "object",
"title": "Id",
"order": 1
},
"last_name": {
"type": "string",
"title": "Last Name",
"order": 3
}
}
},
"NamedEntityDTO": {
"type": "object",
"title": "NamedEntityDTO",
"properties": {
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"RegulatorsDTO": {
"type": "object",
"title": "RegulatorsDTO",
"properties": {
"ids": {
"type": "array",
"title": "Ids",
"items": {
"type": "number"
},
"order": 1
}
}
},
"TaskChangeDTO": {
"type": "object",
"title": "TaskChangeDTO",
"properties": {
"added": {
"type": "array",
"title": "Added",
"items": {
"$ref": "#/definitions/NamedEntityDTO"
},
"order": 1
},
"removed": {
"type": "array",
"title": "Removed",
"items": {
"$ref": "#/definitions/NamedEntityDTO"
},
"order": 2
}
},
"definitions": {
"NamedEntityDTO": {
"type": "object",
"title": "NamedEntityDTO",
"properties": {
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
}
}
},
"TaskDTO": {
"type": "object",
"title": "TaskDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 27
},
"active": {
"type": "boolean",
"title": "Active",
"order": 21
},
"at_id": {
"type": "number",
"title": "At Id",
"order": 20
},
"attachments_count": {
"type": "number",
"title": "Attachments Count",
"order": 31
},
"cat_name": {
"type": "string",
"title": "Cat Name",
"order": 16
},
"category_id": {
"type": "object",
"title": "Category Id",
"order": 29
},
"closed_date": {
"type": "number",
"title": "Closed Date",
"order": 26
},
"creator": {
"$ref": "#/definitions/JustUserDTO",
"title": "Creator",
"order": 24
},
"custom": {
"type": "boolean",
"title": "Custom",
"order": 4
},
"description": {
"type": "string",
"title": "Description",
"order": 17
},
"due_date": {
"type": "number",
"title": "Due Date",
"order": 7
},
"frozen": {
"type": "boolean",
"title": "Frozen",
"order": 13
},
"id": {
"type": "number",
"title": "Id",
"order": 10
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 5
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 1
},
"inc_owner_id": {
"type": "number",
"title": "Inc Owner Id",
"order": 6
},
"inc_training": {
"type": "boolean",
"title": "Inc Training",
"order": 12
},
"init_date": {
"type": "number",
"title": "Init Date",
"order": 18
},
"instr_text": {
"type": "string",
"title": "Instr Text",
"order": 19
},
"members": {
"type": "array",
"title": "Members",
"items": {
"type": "number"
},
"order": 22
},
"name": {
"type": "string",
"title": "Name",
"order": 2
},
"notes": {
"type": "array",
"title": "Notes",
"items": {
"$ref": "#/definitions/CommentDTO"
},
"order": 25
},
"notes_count": {
"type": "number",
"title": "Notes Count",
"order": 30
},
"owner_fname": {
"type": "string",
"title": "Owner Fname",
"order": 14
},
"owner_id": {
"type": "number",
"title": "Owner Id",
"order": 9
},
"owner_lname": {
"type": "string",
"title": "Owner Lname",
"order": 15
},
"perms": {
"$ref": "#/definitions/TaskPermsDTO",
"title": "Perms",
"order": 23
},
"phase_id": {
"type": "number",
"title": "Phase Id",
"order": 28
},
"regs": {
"type": "object",
"title": "Regs",
"order": 3
},
"required": {
"type": "boolean",
"title": "Required",
"order": 8
},
"status": {
"type": "string",
"title": "Status",
"order": 11
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"CommentDTO": {
"type": "object",
"title": "CommentDTO",
"properties": {
"actions": {
"type": "array",
"title": "Actions",
"items": {
"$ref": "#/definitions/ActionInfoDTO"
},
"order": 14
},
"children": {
"type": "array",
"title": "Children",
"items": {
"type": "object"
},
"order": 10
},
"comment_perms": {
"$ref": "#/definitions/CommentPermsDTO",
"title": "Comment Perms",
"order": 12
},
"create_date": {
"type": "number",
"title": "Create Date",
"order": 6
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"inc_id": {
"type": "number",
"title": "Inc Id",
"order": 15
},
"inc_name": {
"type": "string",
"title": "Inc Name",
"order": 16
},
"inc_owner": {
"type": "number",
"title": "Inc Owner",
"order": 20
},
"is_deleted": {
"type": "boolean",
"title": "Is Deleted",
"order": 13
},
"mentioned_users": {
"type": "array",
"title": "Mentioned Users",
"items": {
"type": "object"
},
"order": 11
},
"modify_date": {
"type": "number",
"title": "Modify Date",
"order": 7
},
"modify_user": {
"$ref": "#/definitions/ModifyUserDTO",
"title": "Modify User",
"order": 8
},
"parent_id": {
"type": "number",
"title": "Parent Id",
"order": 2
},
"task_id": {
"type": "number",
"title": "Task Id",
"order": 17
},
"task_name": {
"type": "string",
"title": "Task Name",
"order": 18
},
"text": {
"$ref": "#/definitions/TextContentDTO",
"title": "Text",
"order": 9
},
"type": {
"type": "object",
"title": "Type",
"order": 19
},
"user_fname": {
"type": "string",
"title": "User Fname",
"order": 4
},
"user_id": {
"type": "object",
"title": "User Id",
"order": 3
},
"user_lname": {
"type": "string",
"title": "User Lname",
"order": 5
}
},
"definitions": {
"ActionInfoDTO": {
"type": "object",
"title": "ActionInfoDTO",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enabled",
"order": 3
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"name": {
"type": "string",
"title": "Name",
"order": 2
}
}
},
"CommentPermsDTO": {
"type": "object",
"title": "CommentPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 2
},
"update": {
"type": "boolean",
"title": "Update",
"order": 1
}
}
},
"ModifyUserDTO": {
"type": "object",
"title": "ModifyUserDTO",
"properties": {
"first_name": {
"type": "string",
"title": "First Name",
"order": 2
},
"id": {
"type": "object",
"title": "Id",
"order": 1
},
"last_name": {
"type": "string",
"title": "Last Name",
"order": 3
}
}
},
"TextContentDTO": {
"type": "object",
"title": "TextContentDTO",
"properties": {
"content": {
"type": "string",
"title": "Content",
"order": 2
},
"format": {
"type": "string",
"title": "Format",
"order": 1
}
}
}
}
},
"CommentPermsDTO": {
"type": "object",
"title": "CommentPermsDTO",
"properties": {
"delete": {
"type": "boolean",
"title": "Delete",
"order": 2
},
"update": {
"type": "boolean",
"title": "Update",
"order": 1
}
}
},
"JustUserDTO": {
"type": "object",
"title": "JustUserDTO",
"properties": {
"cell": {
"type": "string",
"title": "Cell",
"order": 7
},
"email": {
"type": "string",
"title": "Email",
"order": 5
},
"fname": {
"type": "string",
"title": "Fname",
"order": 2
},
"id": {
"type": "number",
"title": "Id",
"order": 1
},
"is_external": {
"type": "boolean",
"title": "Is External",
"order": 12
},
"last_login": {
"type": "number",
"title": "Last Login",
"order": 10
},
"lname": {
"type": "string",
"title": "Lname",
"order": 3
},
"locked": {
"type": "boolean",
"title": "Locked",
"order": 11
},
"notes": {
"type": "string",
"title": "Notes",
"order": 9
},
"phone": {
"type": "string",
"title": "Phone",
"order": 6
},
"status": {
"type": "string",
"title": "Status",
"order": 4
},
"title": {
"type": "string",
"title": "Title",
"order": 8
}
}
},
"ModifyUserDTO": {
"type": "object",
"title": "ModifyUserDTO",
"properties": {
"first_name": {
"type": "string",
"title": "First Name",
"order": 2
},
"id": {
"type": "object",
"title": "Id",
"order": 1
},
"last_name": {
"type": "string",
"title": "Last Name",
"order": 3
}
}
},
"TaskPermsDTO": {
"type": "object",
"title": "TaskPermsDTO",
"properties": {
"assign": {
"type": "boolean",
"title": "Assign",
"order": 4
},
"attach_file": {
"type": "boolean",
"title": "Attach File",
"order": 7
},
"change_members": {
"type": "boolean",
"title": "Change Members",
"order": 6
},
"close": {
"type": "boolean",
"title": "Close",
"order": 5
},
"comment": {
"type": "boolean",
"title": "Comment",
"order": 3
},
"delete_attachments": {
"type": "boolean",
"title": "Delete Attachments",
"order": 9
},
"read": {
"type": "boolean",
"title": "Read",
"order": 1
},
"read_attachments": {
"type": "boolean",
"title": "Read Attachments",
"order": 8
},
"write": {
"type": "boolean",
"title": "Write",
"order": 2
}
}
},
"TextContentDTO": {
"type": "object",
"title": "TextContentDTO",
"properties": {
"content": {
"type": "string",
"title": "Content",
"order": 2
},
"format": {
"type": "string",
"title": "Format",
"order": 1
}
}
}
}
},
"TaskPermsDTO": {
"type": "object",
"title": "TaskPermsDTO",
"properties": {
"assign": {
"type": "boolean",
"title": "Assign",
"order": 4
},
"attach_file": {
"type": "boolean",
"title": "Attach File",
"order": 7
},
"change_members": {
"type": "boolean",
"title": "Change Members",
"order": 6
},
"close": {
"type": "boolean",
"title": "Close",
"order": 5
},
"comment": {
"type": "boolean",
"title": "Comment",
"order": 3
},
"delete_attachments": {
"type": "boolean",
"title": "Delete Attachments",
"order": 9
},
"read": {
"type": "boolean",
"title": "Read",
"order": 1
},
"read_attachments": {
"type": "boolean",
"title": "Read Attachments",
"order": 8
},
"write": {
"type": "boolean",
"title": "Write",
"order": 2
}
}
},
"TextContentDTO": {
"type": "object",
"title": "TextContentDTO",
"properties": {
"content": {
"type": "string",
"title": "Content",
"order": 2
},
"format": {
"type": "string",
"title": "Format",
"order": 1
}
}
},
"ThreatHitDTO": {
"type": "object",
"title": "ThreatHitDTO",
"properties": {
"active": {
"type": "boolean",
"title": "Active",
"order": 5
},
"artifact_type_id": {
"type": "object",
"title": "Artifact Type Id",
"order": 4
},
"id": {
"type": "string",
"title": "Id",
"order": 1
},
"threat_source_id": {
"type": "object",
"title": "Threat Source Id",
"order": 3
},
"value": {
"type": "string",
"title": "Value",
"order": 2
}
}
},
"WhoisDTO": {
"type": "object",
"title": "WhoisDTO",
"properties": {
"invalid": {
"type": "boolean",
"title": "Invalid",
"order": 3
},
"pending": {
"type": "boolean",
"title": "Pending",
"order": 2
},
"raw": {
"type": "string",
"title": "Raw",
"order": 1
}
}
}
}
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
# --- vmacari/pymodbus :: test/test_server_asyncio.py (BSD-3-Clause) ---
from pymodbus.compat import IS_PYTHON3, PYTHON_VERSION
import pytest
import asynctest
import asyncio
import logging
import time
_logger = logging.getLogger()
if IS_PYTHON3: # Python 3
from asynctest.mock import patch, Mock, MagicMock
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.factory import ServerDecoder
from pymodbus.server.asynchronous import ModbusTcpProtocol, ModbusUdpProtocol
from pymodbus.server.async_io import StartTcpServer, StartTlsServer, StartUdpServer, StartSerialServer, StopServer, ModbusServerFactory
from pymodbus.server.async_io import ModbusConnectedRequestHandler, ModbusBaseRequestHandler
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.datastore import ModbusSlaveContext, ModbusServerContext
from pymodbus.compat import byte2int
from pymodbus.transaction import ModbusSocketFramer
from pymodbus.exceptions import NoSuchSlaveException, ModbusIOException
import sys
import ssl
#---------------------------------------------------------------------------#
# Fixture
#---------------------------------------------------------------------------#
import platform
from distutils.version import LooseVersion
IS_DARWIN = platform.system().lower() == "darwin"
OSX_SIERRA = LooseVersion("10.12")
if IS_DARWIN:
IS_HIGH_SIERRA_OR_ABOVE = LooseVersion(platform.mac_ver()[0]) >= OSX_SIERRA
SERIAL_PORT = '/dev/ptyp0' if not IS_HIGH_SIERRA_OR_ABOVE else '/dev/ttyp0'
else:
IS_HIGH_SIERRA_OR_ABOVE = False
SERIAL_PORT = "/dev/ptmx"
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
class AsyncioServerTest(asynctest.TestCase):
'''
This is the unittest suite for the pymodbus.server.async_io module
The scope of this unit test is the life-cycle management of the network
connections and server objects.
This unittest suite does not attempt to test any of the underlying protocol details
'''
#-----------------------------------------------------------------------#
# Setup/TearDown
#-----------------------------------------------------------------------#
def setUp(self):
'''
Initialize the test environment by setting up a dummy store and context
'''
self.store = ModbusSlaveContext( di=ModbusSequentialDataBlock(0, [17]*100),
co=ModbusSequentialDataBlock(0, [17]*100),
hr=ModbusSequentialDataBlock(0, [17]*100),
ir=ModbusSequentialDataBlock(0, [17]*100))
self.context = ModbusServerContext(slaves=self.store, single=True)
def tearDown(self):
''' Cleans up the test environment '''
pass
#-----------------------------------------------------------------------#
# Test ModbusConnectedRequestHandler
#-----------------------------------------------------------------------#
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testStartTcpServer(self):
''' Test that the modbus tcp asyncio server starts correctly '''
identity = ModbusDeviceIdentification(info={0x00: 'VendorName'})
self.loop = asynctest.Mock(self.loop)
server = yield from StartTcpServer(context=self.context,loop=self.loop,identity=identity)
self.assertEqual(server.control.Identity.VendorName, 'VendorName')
if PYTHON_VERSION >= (3, 6):
self.loop.create_server.assert_called_once()
@pytest.mark.skipif(PYTHON_VERSION < (3, 7), reason="requires python3.7 or above")
@asyncio.coroutine
def testTcpServerServeNoDefer(self):
''' Test StartTcpServer without deferred start (immediate execution of server) '''
with patch('asyncio.base_events.Server.serve_forever', new_callable=asynctest.CoroutineMock) as serve:
server = yield from StartTcpServer(context=self.context,address=("127.0.0.1", 0), loop=self.loop, defer_start=False)
serve.assert_awaited()
@pytest.mark.skipif(PYTHON_VERSION < (3, 7), reason="requires python3.7 or above")
@asyncio.coroutine
def testTcpServerServeForever(self):
''' Test StartTcpServer serve_forever() method '''
with patch('asyncio.base_events.Server.serve_forever', new_callable=asynctest.CoroutineMock) as serve:
server = yield from StartTcpServer(context=self.context,address=("127.0.0.1", 0), loop=self.loop)
yield from server.serve_forever()
serve.assert_awaited()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTcpServerServeForeverTwice(self):
''' Calling serve_forever() twice should raise a RuntimeError '''
server = yield from StartTcpServer(context=self.context,address=("127.0.0.1", 0), loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
with self.assertRaises(RuntimeError):
yield from server.serve_forever()
server.server_close()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTcpServerReceiveData(self):
''' Test that data sent on the socket reaches the server internals - does not process the data '''
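# the request below is one MBAP frame: tid=0x0100, pid=0, len=6, unit=1; PDU: fc=3 (read holding registers), addr=0, count=0x19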
data = b'\x01\x00\x00\x00\x00\x06\x01\x03\x00\x00\x00\x19'
server = yield from StartTcpServer(context=self.context,address=("127.0.0.1", 0),loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
with patch('pymodbus.transaction.ModbusSocketFramer.processIncomingPacket', new_callable=Mock) as process:
# process = server.framer.processIncomingPacket = Mock()
connected = self.loop.create_future()
random_port = server.server.sockets[0].getsockname()[1] # get the random server port
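# minimal client protocol: write the request as soon as the connection is made and resolve the 'connected' future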
class BasicClient(asyncio.BaseProtocol):
def connection_made(self, transport):
self.transport = transport
self.transport.write(data)
connected.set_result(True)
def eof_received(self):
pass
transport, protocol = yield from self.loop.create_connection(BasicClient, host='127.0.0.1',port=random_port)
yield from asyncio.sleep(0.1) # this would be better done with an internal hook in the actual implementation
# if this unit test fails on a machine, check whether increasing the sleep time makes a difference;
# if it does, the fixed delay above is at fault and needs a proper fix
if PYTHON_VERSION >= (3, 6):
process.assert_called_once()
self.assertEqual(process.call_args[1]["data"], data)
server.server_close()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTcpServerRoundtrip(self):
''' Test sending and receiving data on tcp socket '''
data = b"\x01\x00\x00\x00\x00\x06\x01\x03\x00\x00\x00\x01" # unit 1, read register
expected_response = b'\x01\x00\x00\x00\x00\x05\x01\x03\x02\x00\x11' # value of 17 as per context
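# i.e. the reply frame carries MBAP len=5, unit=1 and a PDU of fc=3, byte count 2, register value 0x0011 == 17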
server = yield from StartTcpServer(context=self.context,address=("127.0.0.1", 0),loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
random_port = server.server.sockets[0].getsockname()[1] # get the random server port
connected, done = self.loop.create_future(),self.loop.create_future()
received_value = None
class BasicClient(asyncio.BaseProtocol):
def connection_made(self, transport):
self.transport = transport
self.transport.write(data)
connected.set_result(True)
def data_received(self, data):
nonlocal received_value, done
received_value = data
done.set_result(True)
def eof_received(self):
pass
transport, protocol = yield from self.loop.create_connection(BasicClient, host='127.0.0.1',port=random_port)
yield from asyncio.wait_for(done, timeout=0.1)
self.assertEqual(received_value, expected_response)
transport.close()
yield from asyncio.sleep(0)
server.server_close()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTcpServerConnectionLost(self):
''' Test tcp stream interruption '''
data = b"\x01\x00\x00\x00\x00\x06\x01\x01\x00\x00\x00\x01"
server = yield from StartTcpServer(context=self.context, address=("127.0.0.1", 0), loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
random_port = server.server.sockets[0].getsockname()[1] # get the random server port
step1 = self.loop.create_future()
# done = self.loop.create_future()
# received_value = None
yield from asyncio.sleep(1)  # non-blocking pause; time.sleep() here would stall the event loop
class BasicClient(asyncio.BaseProtocol):
def connection_made(self, transport):
self.transport = transport
step1.set_result(True)
transport, protocol = yield from self.loop.create_connection(BasicClient, host='127.0.0.1', port=random_port)
yield from step1
# On Windows we seem to need to give this an extra chance to finish,
# otherwise there ends up being an active connection at the assert.
yield from asyncio.sleep(0.0)
self.assertEqual(len(server.active_connections), 1)
protocol.transport.close() # close isn't synchronous and there's no notification that it's done
# so we have to wait a bit
allowed_delay = 1
deadline = time.monotonic() + allowed_delay
while time.monotonic() <= deadline:
yield from asyncio.sleep(0.1)
if len(server.active_connections) == 0:
break
else: # the loop ran out without a break: connections were never closed
self.assertEqual(
len(server.active_connections),
0,
msg="connections not closed within {} seconds".format(allowed_delay),
)
server.server_close()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTcpServerCloseActiveConnection(self):
''' Test server_close() while there are active TCP connections '''
data = b"\x01\x00\x00\x00\x00\x06\x01\x01\x00\x00\x00\x01"
server = yield from StartTcpServer(context=self.context,address=("127.0.0.1", 0),loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
random_port = server.server.sockets[0].getsockname()[1] # get the random server port
step1 = self.loop.create_future()
done = self.loop.create_future()
received_value = None
class BasicClient(asyncio.BaseProtocol):
def connection_made(self, transport):
self.transport = transport
step1.set_result(True)
transport, protocol = yield from self.loop.create_connection(BasicClient, host='127.0.0.1',port=random_port)
yield from step1
# On Windows we seem to need to give this an extra chance to finish,
# otherwise there ends up being an active connection at the assert.
yield from asyncio.sleep(0.0)
server.server_close()
# close isn't synchronous and there's no notification that it's done
# so we have to wait a bit
yield from asyncio.sleep(0.0)
self.assertEqual(len(server.active_connections), 0)
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTcpServerNoSlave(self):
''' Test unknown slave unit exception '''
context = ModbusServerContext(slaves={0x01: self.store, 0x02: self.store }, single=False)
data = b"\x01\x00\x00\x00\x00\x06\x05\x03\x00\x00\x00\x01" # get slave 5 function 3 (holding register)
server = yield from StartTcpServer(context=context,address=("127.0.0.1", 0),loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
connect, receive, eof = self.loop.create_future(),self.loop.create_future(),self.loop.create_future()
received_data = None
random_port = server.server.sockets[0].getsockname()[1] # get the random server port
class BasicClient(asyncio.BaseProtocol):
def connection_made(self, transport):
_logger.debug("Client connected")
self.transport = transport
transport.write(data)
connect.set_result(True)
def data_received(self, data):
nonlocal received_data
_logger.debug("Client received data")
receive.set_result(True)
received_data = data
def eof_received(self):
_logger.debug("Client stream eof")
eof.set_result(True)
transport, protocol = yield from self.loop.create_connection(BasicClient, host='127.0.0.1',port=random_port)
yield from asyncio.wait_for(connect, timeout=0.1)
self.assertFalse(eof.done())
server.server_close()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTcpServerModbusError(self):
''' Test that a modbus error (NoSuchSlaveException) while handling a request does not drop the TCP connection '''
data = b"\x01\x00\x00\x00\x00\x06\x01\x03\x00\x00\x00\x01" # get slave 5 function 3 (holding register)
server = yield from StartTcpServer(context=self.context,address=("127.0.0.1", 0),loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
with patch("pymodbus.register_read_message.ReadHoldingRegistersRequest.execute",
side_effect=NoSuchSlaveException):
connect, receive, eof = self.loop.create_future(),self.loop.create_future(),self.loop.create_future()
received_data = None
random_port = server.server.sockets[0].getsockname()[1] # get the random server port
class BasicClient(asyncio.BaseProtocol):
def connection_made(self, transport):
_logger.debug("Client connected")
self.transport = transport
transport.write(data)
connect.set_result(True)
def data_received(self, data):
nonlocal received_data
_logger.debug("Client received data")
receive.set_result(True)
received_data = data
def eof_received(self):
_logger.debug("Client stream eof")
eof.set_result(True)
transport, protocol = yield from self.loop.create_connection(BasicClient, host='127.0.0.1',port=random_port)
yield from asyncio.wait_for(connect, timeout=0.1)
yield from asyncio.wait_for(receive, timeout=0.1)
self.assertFalse(eof.done())
transport.close()
server.server_close()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTcpServerInternalException(self):
''' Test that an unexpected internal exception while handling a request does not drop the TCP connection '''
data = b"\x01\x00\x00\x00\x00\x06\x01\x03\x00\x00\x00\x01" # get slave 5 function 3 (holding register)
server = yield from StartTcpServer(context=self.context,address=("127.0.0.1", 0),loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
with patch("pymodbus.register_read_message.ReadHoldingRegistersRequest.execute",
side_effect=Exception):
connect, receive, eof = self.loop.create_future(),self.loop.create_future(),self.loop.create_future()
received_data = None
random_port = server.server.sockets[0].getsockname()[1] # get the random server port
class BasicClient(asyncio.BaseProtocol):
def connection_made(self, transport):
_logger.debug("Client connected")
self.transport = transport
transport.write(data)
connect.set_result(True)
def data_received(self, data):
nonlocal received_data
_logger.debug("Client received data")
receive.set_result(True)
received_data = data
def eof_received(self):
_logger.debug("Client stream eof")
eof.set_result(True)
transport, protocol = yield from self.loop.create_connection(BasicClient, host='127.0.0.1',port=random_port)
yield from asyncio.wait_for(connect, timeout=0.1)
yield from asyncio.wait_for(receive, timeout=0.1)
self.assertFalse(eof.done())
transport.close()
server.server_close()
#-----------------------------------------------------------------------#
# Test ModbusTlsProtocol
#-----------------------------------------------------------------------#
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testStartTlsServer(self):
''' Test that the modbus tls asyncio server starts correctly '''
with patch.object(ssl.SSLContext, 'load_cert_chain') as mock_method:
identity = ModbusDeviceIdentification(info={0x00: 'VendorName'})
self.loop = asynctest.Mock(self.loop)
server = yield from StartTlsServer(context=self.context,loop=self.loop,identity=identity)
self.assertEqual(server.control.Identity.VendorName, 'VendorName')
self.assertIsNotNone(server.sslctx)
if PYTHON_VERSION >= (3, 6):
self.loop.create_server.assert_called_once()
@pytest.mark.skipif(PYTHON_VERSION < (3, 7), reason="requires python3.7 or above")
@asyncio.coroutine
def testTlsServerServeNoDefer(self):
''' Test StartTlsServer without deferred start (immediate execution of server) '''
with patch('asyncio.base_events.Server.serve_forever', new_callable=asynctest.CoroutineMock) as serve:
with patch.object(ssl.SSLContext, 'load_cert_chain') as mock_method:
server = yield from StartTlsServer(context=self.context,address=("127.0.0.1", 0), loop=self.loop, defer_start=False)
serve.assert_awaited()
@pytest.mark.skipif(PYTHON_VERSION < (3, 7), reason="requires python3.7 or above")
@asyncio.coroutine
def testTlsServerServeForever(self):
''' Test StartTlsServer serve_forever() method '''
with patch('asyncio.base_events.Server.serve_forever', new_callable=asynctest.CoroutineMock) as serve:
with patch.object(ssl.SSLContext, 'load_cert_chain') as mock_method:
server = yield from StartTlsServer(context=self.context,address=("127.0.0.1", 0), loop=self.loop)
yield from server.serve_forever()
serve.assert_awaited()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTlsServerServeForeverTwice(self):
''' Calling serve_forever() twice should raise a RuntimeError '''
with patch.object(ssl.SSLContext, 'load_cert_chain') as mock_method:
server = yield from StartTlsServer(context=self.context,address=("127.0.0.1", 0), loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
with self.assertRaises(RuntimeError):
yield from server.serve_forever()
server.server_close()
#-----------------------------------------------------------------------#
# Test ModbusUdpProtocol
#-----------------------------------------------------------------------#
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testStartUdpServer(self):
''' Test that the modbus udp asyncio server starts correctly '''
identity = ModbusDeviceIdentification(info={0x00: 'VendorName'})
self.loop = asynctest.Mock(self.loop)
server = yield from StartUdpServer(context=self.context,loop=self.loop,identity=identity)
self.assertEqual(server.control.Identity.VendorName, 'VendorName')
if PYTHON_VERSION >= (3, 6):
self.loop.create_datagram_endpoint.assert_called_once()
# async def testUdpServerServeNoDefer(self):
# ''' Test StartUdpServer without deferred start - NOT IMPLEMENTED - this test is hard to do without additional
# internal plumbing added to the implementation '''
# asyncio.base_events.Server.serve_forever = asynctest.CoroutineMock()
# server = yield from StartUdpServer(address=("127.0.0.1", 0), loop=self.loop, defer_start=False)
# server.server.serve_forever.assert_awaited()
@pytest.mark.skipif(PYTHON_VERSION < (3, 7), reason="requires python3.7 or above")
@asyncio.coroutine
def testUdpServerServeForeverStart(self):
''' Test the serve_forever() start path (exercised via StartTcpServer; see the note above about missing UDP plumbing) '''
with patch('asyncio.base_events.Server.serve_forever', new_callable=asynctest.CoroutineMock) as serve:
server = yield from StartTcpServer(context=self.context,address=("127.0.0.1", 0), loop=self.loop)
yield from server.serve_forever()
serve.assert_awaited()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testUdpServerServeForeverClose(self):
''' Test closing the udp server while serve_forever() is running '''
server = yield from StartUdpServer(context=self.context,address=("127.0.0.1", 0), loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
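# while the endpoint is serving, on_connection_terminated should be a pending future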
self.assertTrue(asyncio.isfuture(server.on_connection_terminated))
self.assertFalse(server.on_connection_terminated.done())
server.server_close()
self.assertTrue(server.protocol.is_closing())
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testUdpServerServeForeverTwice(self):
''' Calling serve_forever() twice should raise a RuntimeError '''
identity = ModbusDeviceIdentification(info={0x00: 'VendorName'})
server = yield from StartUdpServer(context=self.context,address=("127.0.0.1", 0),
loop=self.loop,identity=identity)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
with self.assertRaises(RuntimeError):
yield from server.serve_forever()
server.server_close()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testUdpServerReceiveData(self):
''' Test that data sent on the datagram socket gets pushed to the framer '''
server = yield from StartUdpServer(context=self.context,address=("127.0.0.1", 0),loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
with patch('pymodbus.transaction.ModbusSocketFramer.processIncomingPacket',new_callable=Mock) as process:
server.endpoint.datagram_received(data=b"12345", addr=("127.0.0.1", 12345))
yield from asyncio.sleep(0.1)
if PYTHON_VERSION >= (3, 6):
process.assert_called_once()
self.assertEqual(process.call_args[1]["data"], b"12345")
server.server_close()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testUdpServerSendData(self):
''' Test that the modbus udp asyncio server correctly sends data outbound '''
identity = ModbusDeviceIdentification(info={0x00: 'VendorName'})
data = b'\x01\x00\x00\x00\x00\x06\x01\x03\x00\x00\x00\x19'
server = yield from StartUdpServer(context=self.context,address=("127.0.0.1", 0))
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
random_port = server.protocol._sock.getsockname()[1]
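# spy pattern: Mock(wraps=...) records the call below while still delegating to the real datagram_received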
received = server.endpoint.datagram_received = Mock(wraps=server.endpoint.datagram_received)
done = self.loop.create_future()
received_value = None
class BasicClient(asyncio.DatagramProtocol):
def connection_made(self, transport):
self.transport = transport
self.transport.sendto(data)
def datagram_received(self, data, addr):
nonlocal received_value, done
print("received")
received_value = data
done.set_result(True)
self.transport.close()
transport, protocol = yield from self.loop.create_datagram_endpoint( BasicClient,
remote_addr=('127.0.0.1', random_port))
yield from asyncio.sleep(0.1)
if PYTHON_VERSION >= (3, 6):
received.assert_called_once()
self.assertEqual(received.call_args[0][0], data)
server.server_close()
self.assertTrue(server.protocol.is_closing())
yield from asyncio.sleep(0.1)
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testUdpServerRoundtrip(self):
''' Test sending and receiving data on udp socket'''
data = b"\x01\x00\x00\x00\x00\x06\x01\x03\x00\x00\x00\x01" # unit 1, read register
expected_response = b'\x01\x00\x00\x00\x00\x05\x01\x03\x02\x00\x11' # value of 17 as per context
server = yield from StartUdpServer(context=self.context,address=("127.0.0.1", 0),loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
random_port = server.protocol._sock.getsockname()[1]
connected, done = self.loop.create_future(),self.loop.create_future()
received_value = None
class BasicClient(asyncio.DatagramProtocol):
def connection_made(self, transport):
self.transport = transport
self.transport.sendto(data)
def datagram_received(self, data, addr):
nonlocal received_value, done
print("received")
received_value = data
done.set_result(True)
transport, protocol = yield from self.loop.create_datagram_endpoint( BasicClient,
remote_addr=('127.0.0.1', random_port))
yield from asyncio.wait_for(done, timeout=0.1)
self.assertEqual(received_value, expected_response)
transport.close()
yield from asyncio.sleep(0)
server.server_close()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testUdpServerException(self):
''' Test that garbage data on the udp socket is ignored without closing the socket '''
garbage = b'\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF'
server = yield from StartUdpServer(context=self.context,address=("127.0.0.1", 0),loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
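# patch() calls new_callable() to build the replacement, so a lambda lets us preconfigure side_effect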
with patch('pymodbus.transaction.ModbusSocketFramer.processIncomingPacket',
new_callable=lambda: Mock(side_effect=Exception)) as process:
connect, receive, eof = self.loop.create_future(),self.loop.create_future(),self.loop.create_future()
received_data = None
random_port = server.protocol._sock.getsockname()[1] # get the random server port
class BasicClient(asyncio.DatagramProtocol):
def connection_made(self, transport):
_logger.debug("Client connected")
self.transport = transport
transport.sendto(garbage)
connect.set_result(True)
def datagram_received(self, data, addr):
nonlocal receive, received_data
_logger.debug("Client received data")
receive.set_result(True)
received_data = data
transport, protocol = yield from self.loop.create_datagram_endpoint(BasicClient,
remote_addr=('127.0.0.1', random_port))
yield from asyncio.wait_for(connect, timeout=0.1)
self.assertFalse(receive.done())
self.assertFalse(server.protocol._sock._closed)
server.server_close()
# -----------------------------------------------------------------------#
# Test ModbusServerFactory
# -----------------------------------------------------------------------#
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testModbusServerFactory(self):
''' Test the base class for all the clients '''
with self.assertWarns(DeprecationWarning):
factory = ModbusServerFactory(store=None)
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testStopServer(self):
with self.assertWarns(DeprecationWarning):
StopServer()
@asyncio.coroutine
@pytest.mark.skipif(not IS_PYTHON3, reason="requires python3.4 or above")
def testTcpServerException(self):
''' Sending garbage data on a TCP socket should drop the connection '''
garbage = b'\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF'
server = yield from StartTcpServer(context=self.context, address=("127.0.0.1", 0), loop=self.loop)
if PYTHON_VERSION >= (3, 7):
server_task = asyncio.create_task(server.serve_forever())
else:
server_task = asyncio.ensure_future(server.serve_forever())
yield from server.serving
with patch('pymodbus.transaction.ModbusSocketFramer.processIncomingPacket',
new_callable=lambda: Mock(side_effect=Exception)) as process:
connect, receive, eof = self.loop.create_future(), self.loop.create_future(), self.loop.create_future()
received_data = None
random_port = server.server.sockets[0].getsockname()[1] # get the random server port
class BasicClient(asyncio.BaseProtocol):
def connection_made(self, transport):
_logger.debug("Client connected")
self.transport = transport
transport.write(garbage)
connect.set_result(True)
def data_received(self, data):
nonlocal received_data
_logger.debug("Client received data")
receive.set_result(True)
received_data = data
def eof_received(self):
_logger.debug("Client stream eof")
eof.set_result(True)
transport, protocol = yield from self.loop.create_connection(BasicClient, host='127.0.0.1',
port=random_port)
yield from asyncio.wait_for(connect, timeout=0.1)
yield from asyncio.wait_for(eof, timeout=0.1)
# neither of these should timeout if the test is successful
server.server_close()
# --------------------------------------------------------------------------- #
# Main
# --------------------------------------------------------------------------- #
if __name__ == "__main__":
asynctest.main()
| 47.907039 | 135 | 0.624411 | 4,073 | 36,074 | 5.410263 | 0.095753 | 0.034716 | 0.038392 | 0.009802 | 0.817254 | 0.804229 | 0.793701 | 0.782084 | 0.776729 | 0.771102 | 0 | 0.029414 | 0.24982 | 36,074 | 752 | 136 | 47.970745 | 0.784864 | 0.134668 | 0 | 0.8 | 0 | 0.019643 | 0.093349 | 0.040515 | 0 | 0 | 0.000905 | 0 | 0.066071 | 1 | 0.101786 | false | 0.005357 | 0.0375 | 0 | 0.1625 | 0.003571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b9ff7179dea62a0632bf7466d60c179e89b334e8 | 51 | py | Python | current.py | viktor-ws/year | 9622f15b3f495569ec7fd12d2d0d12ecba2926ee | [
"Apache-2.0"
] | null | null | null | current.py | viktor-ws/year | 9622f15b3f495569ec7fd12d2d0d12ecba2926ee | [
"Apache-2.0"
] | null | null | null | current.py | viktor-ws/year | 9622f15b3f495569ec7fd12d2d0d12ecba2926ee | [
"Apache-2.0"
] | null | null | null | print(1)
print(1)
print(1)
# a quick test
# test it a couple more times - follow up ssssb
| 6.375 | 14 | 0.647059 | 9 | 51 | 3.666667 | 0.555556 | 0.545455 | 0.666667 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.176471 | 51 | 7 | 15 | 7.285714 | 0.714286 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
6a2b4c2122e542b53b86069f862a49b154b989c0 | 167,866 | py | Python | src/v145.py | numb3r33/Kaggle_Home_Credit | f8f56a0514b928d7ed4b8f38c6edc53b67bab32d | [
"MIT"
] | null | null | null | src/v145.py | numb3r33/Kaggle_Home_Credit | f8f56a0514b928d7ed4b8f38c6edc53b67bab32d | [
"MIT"
] | 14 | 2020-01-28T22:02:01.000Z | 2022-03-11T23:33:08.000Z | src/v145.py | numb3r33/Kaggle_Home_Credit | f8f56a0514b928d7ed4b8f38c6edc53b67bab32d | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
import scipy as sp
import argparse
import os
import gc
import time
from base import *
from features import *
from datetime import datetime
from sklearn.externals import joblib
from sklearn.model_selection import cross_val_score, StratifiedKFold
basepath = os.path.expanduser('../')
SEED = 1231
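# a fixed global NumPy seed keeps any sampling done later in the script reproducible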
np.random.seed(SEED)
#############################################################################################################
# EXPERIMENT PARAMETERS #
#############################################################################################################
COLS_TO_REMOVE = ['TARGET',
"due_to_paid_3",
"instalment_dpd_num_147",
"instalment_amount_diff_num_143",
"total_cash_credit_dpd",
"due_to_paid_2",
"instalment_amount_diff_num_169",
"NONLIVINGAPARTMENTS_AVG",
"instalment_amount_diff_num_48",
"instalment_amount_diff_num_31",
"instalment_dpd_num_100",
"instalment_amount_diff_num_16",
"instalment_dpd_num_144",
"instalment_amount_diff_num_18",
"instalment_amount_diff_num_190",
"instalment_dpd_num_38",
"instalment_dpd_num_22",
"HOUR_APPR_PROCESS_START_7",
"instalment_dpd_num_191",
"instalment_amount_diff_num_170",
"instalment_amount_diff_num_69",
"instalment_dpd_num_171",
"instalment_amount_diff_num_212",
"instalment_dpd_num_175",
"instalment_dpd_num_72",
"instalment_dpd_num_97",
"instalment_amount_diff_num_192",
"instalment_amount_diff_num_26",
"instalment_amount_diff_num_160",
"instalment_dpd_num_57",
"bureau_credit_type_7.0",
"instalment_dpd_num_184",
"instalment_amount_diff_num_239",
"instalment_amount_diff_num_38",
"change_in_credit_limit_ot",
"instalment_amount_diff_num_131",
"instalment_amount_diff_num_130",
"mean_NAME_INCOME_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_146",
"instalment_amount_diff_num_198",
"instalment_amount_diff_num_39",
"instalment_amount_diff_num_6",
"instalment_dpd_num_194",
"instalment_amount_diff_num_204",
"instalment_dpd_num_51",
"due_to_paid_15",
"bureau_credit_type_14.0",
"instalment_dpd_num_168",
"instalment_dpd_num_160",
"instalment_amount_diff_num_90",
"instalment_dpd_num_78",
"HOUR_APPR_PROCESS_START_18",
"NONLIVINGAPARTMENTS_MEDI",
"instalment_amount_diff_num_33",
"instalment_amount_diff_num_178",
"instalment_dpd_num_136",
"instalment_dpd_num_17",
"instalment_amount_diff_num_89",
"prev_credit_year_4",
"instalment_amount_diff_num_105",
"instalment_dpd_num_64",
"instalment_dpd_num_21",
"NAME_GOODS_CATEGORY_19",
"instalment_amount_diff_num_194",
"instalment_dpd_num_114",
"instalment_dpd_num_134",
"instalment_dpd_num_98",
"due_to_paid_9",
"instalment_dpd_num_84",
"STATUS1.0",
"instalment_amount_diff_num_127",
"instalment_amount_diff_num_40",
"bureau_credit_type_5.0",
"prev_credit_year_5",
"instalment_dpd_num_127",
"instalment_amount_diff_num_56",
"PRODUCT_COMBINATION_9",
"instalment_amount_diff_num_155",
"instalment_amount_diff_num_219",
"due_to_paid_1",
"instalment_dpd_num_116",
"instalment_dpd_num_35",
"instalment_amount_diff_num_1",
"instalment_dpd_num_154",
"instalment_amount_diff_num_50",
"instalment_amount_diff_num_211",
"prev_credit_year_10",
"instalment_dpd_num_67",
"instalment_dpd_num_174",
"mean_OCCUPATION_TYPE_AMT_CREDIT",
"bbal_2",
"instalment_dpd_num_36",
"instalment_dpd_num_81",
"instalment_dpd_num_213",
"instalment_dpd_num_71",
"instalment_dpd_num_55",
"instalment_amount_diff_num_156",
"CNT_FAM_MEMBERS",
"bureau_credit_type_13.0",
"instalment_dpd_num_125",
"instalment_dpd_num_41",
"range_min_max_credit_limit",
"instalment_amount_diff_num_3",
"instalment_amount_diff_num_96",
"instalment_dpd_num_59",
"due_to_paid_19",
"instalment_dpd_num_69",
"instalment_dpd_num_130",
"instalment_dpd_num_204",
"instalment_amount_diff_num_177",
"instalment_dpd_num_135",
"NAME_GOODS_CATEGORY_2",
"instalment_amount_diff_num_150",
"instalment_dpd_num_143",
"instalment_amount_diff_num_122",
"instalment_dpd_num_122",
"instalment_dpd_num_117",
"instalment_dpd_num_146",
"instalment_amount_diff_num_55",
"due_to_paid_17",
"instalment_amount_diff_num_30",
"instalment_amount_diff_num_136",
"instalment_amount_diff_num_180",
"instalment_amount_diff_num_162",
"instalment_dpd_num_170",
"instalment_amount_diff_num_71",
"instalment_amount_diff_num_42",
"due_to_paid_4",
"mean_NAME_INCOME_TYPE_OCCUPATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_23",
"PRODUCT_COMBINATION_8",
"instalment_dpd_num_159",
"instalment_amount_diff_num_118",
"instalment_amount_diff_num_78",
"instalment_dpd_num_227",
"instalment_amount_diff_num_187",
"instalment_dpd_num_214",
"instalment_amount_diff_num_145",
"instalment_dpd_num_158",
"instalment_dpd_num_203",
"instalment_amount_diff_num_161",
"instalment_amount_diff_num_21",
"NUM_NULLS_EXT_SCORES",
"instalment_dpd_num_65",
"NAME_GOODS_CATEGORY_5",
"prev_credit_year_3",
"instalment_amount_diff_num_191",
"mean_cb_credit_annuity",
"instalment_amount_diff_num_17",
"instalment_dpd_num_63",
"instalment_amount_diff_num_129",
"instalment_amount_diff_num_148",
"instalment_amount_diff_num_27",
"instalment_dpd_num_121",
"HOUSETYPE_MODE",
"instalment_dpd_num_195",
"instalment_amount_diff_num_68",
"instalment_dpd_num_186",
"instalment_amount_diff_num_245",
"instalment_dpd_num_148",
"instalment_amount_diff_num_41",
"instalment_dpd_num_66",
"num_high_int_no_info_loans",
"mean_NAME_EDUCATION_TYPE_OCCUPATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_128",
"bbal_4",
"instalment_dpd_num_95",
"instalment_dpd_num_155",
"instalment_dpd_num_89",
"instalment_dpd_num_132",
"instalment_amount_diff_num_28",
"instalment_dpd_num_52",
"instalment_dpd_num_40",
"instalment_dpd_num_190",
"instalment_amount_diff_num_99",
"instalment_dpd_num_92",
"instalment_dpd_num_109",
"instalment_dpd_num_115",
"instalment_dpd_num_149",
"instalment_amount_diff_num_104",
"instalment_amount_diff_num_158",
"instalment_dpd_num_180",
"instalment_dpd_num_230",
"instalment_dpd_num_208",
"instalment_amount_diff_num_222",
"instalment_amount_diff_num_199",
"bureau_credit_year_10",
"instalment_dpd_num_177",
"instalment_amount_diff_num_63",
"due_to_paid_20",
"instalment_amount_diff_num_19",
"instalment_dpd_num_61",
"instalment_amount_diff_num_32",
"instalment_dpd_num_210",
"instalment_amount_diff_num_116",
"instalment_dpd_num_140",
"mean_OCCUPATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_117",
"due_to_paid_13",
"NAME_INCOME_TYPE__7",
"instalment_amount_diff_num_188",
"instalment_dpd_num_198",
"instalment_amount_diff_num_34",
"instalment_amount_diff_num_262",
"instalment_dpd_num_202",
"instalment_amount_diff_num_53",
"instalment_amount_diff_num_108",
"instalment_dpd_num_56",
"instalment_amount_diff_num_214",
"FONDKAPREMONT_MODE",
"instalment_dpd_num_192",
"instalment_amount_diff_num_189",
"instalment_amount_diff_num_86",
"instalment_dpd_num_169",
"instalment_amount_diff_num_172",
"instalment_dpd_num_46",
"instalment_dpd_num_211",
"instalment_amount_diff_num_109",
"mean_NAME_FAMILY_STATUS_NAME_INCOME_TYPE_DAYS_EMPLOYED",
"instalment_amount_diff_num_175",
"instalment_amount_diff_num_168",
"MONTHS_BALANCE_median",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_AMT_INCOME_TOTAL",
"instalment_amount_diff_num_58",
"instalment_amount_diff_num_51",
"instalment_dpd_num_74",
"instalment_dpd_num_113",
"instalment_amount_diff_num_137",
"instalment_dpd_num_39",
"instalment_amount_diff_num_25",
"NAME_YIELD_GROUP_3",
"instalment_dpd_num_165",
"instalment_amount_diff_num_107",
"HOUR_APPR_PROCESS_START_16",
"prev_credit_year_11",
"CHANNEL_TYPE_6",
"instalment_amount_diff_num_88",
"instalment_amount_diff_num_64",
"instalment_amount_diff_num_201",
"ELEVATORS_AVG",
"prev_credit_year_2",
"instalment_amount_diff_num_37",
"instalment_dpd_num_54",
"instalment_amount_diff_num_153",
"instalment_amount_diff_num_203",
"instalment_dpd_num_166",
"ENTRANCES_MEDI",
"instalment_amount_diff_num_166",
"mean_NAME_INCOME_TYPE_DAYS_BIRTH",
"due_to_paid_10",
"instalment_amount_diff_num_141",
"instalment_dpd_num_96",
"instalment_dpd_num_167",
"instalment_amount_diff_num_140",
"instalment_amount_diff_num_77",
"NAME_FAMILY_STATUS",
"instalment_dpd_num_133",
"NAME_TYPE_SUITE",
"instalment_amount_diff_num_134",
"instalment_amount_diff_num_72",
"instalment_amount_diff_num_80",
"instalment_dpd_num_193",
"instalment_dpd_num_86",
"instalment_amount_diff_num_207",
"instalment_amount_diff_num_234",
"instalment_dpd_num_29",
"instalment_amount_diff_num_196",
"instalment_amount_diff_num_195",
"instalment_dpd_num_75",
"bureau_bal_pl_5",
"instalment_amount_diff_num_73",
"instalment_amount_diff_num_81",
"instalment_amount_diff_num_215",
"due_to_paid_23",
"instalment_amount_diff_num_114",
"instalment_amount_diff_num_157",
"bureau_credit_status_1.0",
"instalment_amount_diff_num_2",
"instalment_dpd_num_94",
"instalment_amount_diff_num_45",
"instalment_amount_diff_num_4",
"instalment_amount_diff_num_22",
"instalment_amount_diff_num_74",
"instalment_amount_diff_num_70",
"bureau_credit_year_11",
"instalment_dpd_num_85",
"instalment_amount_diff_num_184",
"instalment_amount_diff_num_126",
"instalment_dpd_num_14",
"instalment_amount_diff_num_62",
"instalment_amount_diff_num_121",
"instalment_amount_diff_num_15",
"instalment_dpd_num_172",
"instalment_dpd_num_142",
"mean_OCCUPATION_TYPE_DAYS_BIRTH",
"instalment_amount_diff_num_44",
"instalment_amount_diff_num_100",
"instalment_dpd_num_58",
"instalment_amount_diff_num_49",
"instalment_dpd_num_26",
"instalment_dpd_num_79",
"instalment_dpd_num_119",
"instalment_amount_diff_num_149",
"bbal_3",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_DAYS_BIRTH",
"due_to_paid_22",
"instalment_amount_diff_num_202",
"instalment_amount_diff_num_208",
"instalment_dpd_num_47",
"young_age",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_DAYS_BIRTH",
"due_to_paid_24",
"instalment_dpd_num_212",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_AMT_CREDIT",
"mean_OCCUPATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_44",
"instalment_amount_diff_num_182",
"due_to_paid_7",
"instalment_amount_diff_num_154",
"instalment_amount_diff_num_95",
"instalment_dpd_num_93",
"instalment_dpd_num_179",
"due_to_paid_11",
"bureau_credit_type_9.0",
"instalment_amount_diff_num_111",
"prev_credit_year_-1",
"mean_NAME_EDUCATION_TYPE_AMT_INCOME_TOTAL",
"instalment_dpd_num_189",
"instalment_amount_diff_num_256",
"instalment_dpd_num_90",
"instalment_amount_diff_num_254",
"diff_education_ext_income_mean",
"AMT_INCOME_TOTAL",
"instalment_amount_diff_num_29",
"instalment_amount_diff_num_60",
"prev_credit_year_9",
"instalment_amount_diff_num_210",
"mean_NAME_INCOME_TYPE_AMT_INCOME_TOTAL",
"instalment_amount_diff_num_176",
"instalment_amount_diff_num_98",
"instalment_amount_diff_num_47",
"instalment_amount_diff_num_173",
"HOUR_APPR_PROCESS_START_12",
"DPD_9",
"instalment_dpd_num_42",
"instalment_amount_diff_num_43",
"bureau_credit_type_11.0",
"instalment_amount_diff_num_221",
"instalment_dpd_num_138",
"instalment_amount_diff_num_128",
"instalment_dpd_num_108",
"mean_OCCUPATION_TYPE_EXT_SOURCE_2",
"instalment_dpd_num_123",
"instalment_amount_diff_num_76",
"instalment_dpd_num_24",
"instalment_dpd_num_139",
"prev_credit_year_7",
"credit_total_instalment_regular",
"due_to_paid_18",
"instalment_amount_diff_num_164",
"instalment_amount_diff_num_268",
"instalment_dpd_num_183",
"instalment_dpd_num_145",
"instalment_dpd_num_201",
"instalment_amount_diff_num_57",
"mean_NAME_INCOME_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_99",
"due_to_paid_25",
"instalment_dpd_num_137",
"instalment_dpd_num_73",
"instalment_dpd_num_68",
"instalment_amount_diff_num_183",
"instalment_dpd_num_30",
"instalment_dpd_num_70",
"instalment_dpd_num_37",
"NAME_EDUCATION_TYPE__1",
"instalment_dpd_num_151",
"bureau_credit_year_9",
"instalment_dpd_num_152",
"due_to_paid_5",
"instalment_dpd_num_207",
"child_to_non_child_ratio",
"instalment_dpd_num_87",
"bureau_credit_type_8.0",
"due_to_paid_6",
"due_to_paid_16",
"instalment_amount_diff_num_110",
"NONLIVINGAPARTMENTS_MODE",
"instalment_amount_diff_num_181",
"bureau_credit_year_0",
"instalment_amount_diff_num_91",
"instalment_amount_diff_num_152",
"bureau_bal_pl_3",
"instalment_dpd_num_45",
"instalment_amount_diff_num_54",
"instalment_dpd_num_173",
"instalment_dpd_num_120",
"instalment_dpd_num_31",
"due_to_paid_0",
"instalment_amount_diff_num_179",
"instalment_dpd_num_124",
"instalment_amount_diff_num_159",
"instalment_amount_diff_num_65",
"instalment_dpd_num_176",
"instalment_dpd_num_33",
"instalment_amount_diff_num_167",
"bureau_credit_year_8",
"instalment_dpd_num_53",
"instalment_dpd_num_164",
"EMERGENCYSTATE_MODE",
"instalment_dpd_num_188",
"instalment_amount_diff_num_79",
"instalment_dpd_num_141",
"bureau_credit_type_1.0",
"instalment_amount_diff_num_82",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_CNT_CHILDREN",
"cash_dpd_sum",
"instalment_amount_diff_num_125",
"FLAG_OWN_CAR",
"instalment_amount_diff_num_132",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_DAYS_ID_PUBLISH",
"instalment_amount_diff_num_8",
"instalment_amount_diff_num_138",
"instalment_dpd_num_80",
"instalment_amount_diff_num_106",
"instalment_amount_diff_num_135",
"bbal_5",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_AMT_CREDIT",
"instalment_dpd_num_62",
"instalment_dpd_num_126",
"due_to_paid_14",
"HOUR_APPR_PROCESS_START_11",
"mean_NAME_INCOME_TYPE_NAME_EDUCATION_TYPE_DAYS_BIRTH",
"instalment_amount_diff_num_139",
"instalment_amount_diff_num_87",
"instalment_amount_diff_num_61",
"most_recent_min_pos_cash_dpd",
"instalment_dpd_num_77",
"instalment_amount_diff_num_119",
"instalment_dpd_num_150",
"instalment_amount_diff_num_103",
"instalment_amount_diff_num_59",
"HOUR_APPR_PROCESS_START_17",
"instalment_dpd_num_82",
"mean_NAME_EDUCATION_TYPE_AMT_CREDIT",
"bureau_credit_type_2.0",
"bureau_credit_type_12.0",
"mean_NAME_EDUCATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_97",
"instalment_amount_diff_num_36",
"instalment_amount_diff_num_66",
"CODE_GENDER",
"instalment_dpd_num_112",
"instalment_dpd_num_34",
"HOUR_APPR_PROCESS_START_9",
"YEARS_BUILD_AVG",
"max_credit_term",
"instalment_amount_diff_num_147",
"due_to_paid_21",
"instalment_amount_diff_num_151",
"instalment_dpd_num_129",
"instalment_amount_diff_num_123",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_AMT_ANNUITY",
"instalment_dpd_num_215",
"instalment_dpd_num_218",
"instalment_amount_diff_num_94",
"instalment_dpd_num_178",
"instalment_dpd_num_118",
"instalment_dpd_num_162",
"STATUS7.0",
"prev_credit_year_8",
"HOUR_APPR_PROCESS_START_6",
"instalment_dpd_num_60",
"instalment_amount_diff_num_142",
"instalment_amount_diff_num_186",
"instalment_dpd_num_76",
"instalment_amount_diff_num_75",
"instalment_dpd_num_88",
"instalment_amount_diff_num_35",
"instalment_amount_diff_num_102",
"instalment_amount_diff_num_67",
"instalment_amount_diff_num_237",
"instalment_dpd_num_187",
"instalment_dpd_num_50",
"credit_dpd_sum",
"instalment_dpd_num_196",
"instalment_amount_diff_num_84",
"instalment_dpd_num_181",
"instalment_dpd_num_49",
"instalment_dpd_num_161",
"CNT_CHILDREN",
"instalment_dpd_num_157",
"total_credit_debt_active_to_closed",
"mean_NAME_INCOME_TYPE_NAME_EDUCATION_TYPE_DAYS_EMPLOYED",
"bureau_credit_type_6.0",
"instalment_amount_diff_num_174",
"mean_OCCUPATION_TYPE_OWN_CAR_AGE",
"instalment_amount_diff_num_133",
"instalment_amount_diff_num_144",
"instalment_dpd_num_91",
"instalment_amount_diff_num_124",
"instalment_amount_diff_num_120",
"instalment_amount_diff_num_85",
"due_to_paid_12",
"instalment_dpd_num_156",
"instalment_amount_diff_num_185",
"bureau_credit_year_-1",
"instalment_dpd_num_83",
"instalment_amount_diff_num_52",
"instalment_dpd_num_163",
"instalment_amount_diff_num_12",
"due_to_paid_8",
"instalment_dpd_num_131",
"instalment_dpd_num_32",
"FLOORSMAX_MEDI",
"NAME_EDUCATION_TYPE__4",
"instalment_amount_diff_num_93",
"instalment_dpd_num_110",
"instalment_amount_diff_num_113",
"instalment_dpd_num_185",
"instalment_amount_diff_num_163",
"instalment_amount_diff_num_92",
"instalment_amount_diff_num_264",
"instalment_amount_diff_num_112",
"children_ratio",
"instalment_amount_diff_num_165",
"ELEVATORS_MEDI",
"instalment_amount_diff_num_197",
"instalment_amount_diff_num_115",
"instalment_amount_diff_num_171",
"num_diff_credits",
"instalment_dpd_num_200",
"instalment_dpd_num_182",
"instalment_amount_diff_num_83",
"bureau_credit_type_0.0",
"instalment_amount_diff_num_13",
"FLOORSMAX_MODE",
"instalment_amount_diff_num_193",
"instalment_dpd_num_153",
"mean_NAME_FAMILY_STATUS_NAME_INCOME_TYPE_DAYS_BIRTH",
"STATUS2.0",
"mean_NAME_EDUCATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_111""due_to_paid_3",
"instalment_dpd_num_147",
"instalment_amount_diff_num_143",
"total_cash_credit_dpd",
"due_to_paid_2",
"instalment_amount_diff_num_169",
"NONLIVINGAPARTMENTS_AVG",
"instalment_amount_diff_num_48",
"instalment_amount_diff_num_31",
"instalment_dpd_num_100",
"instalment_amount_diff_num_16",
"instalment_dpd_num_144",
"instalment_amount_diff_num_18",
"instalment_amount_diff_num_190",
"instalment_dpd_num_38",
"instalment_dpd_num_22",
"HOUR_APPR_PROCESS_START_7",
"instalment_dpd_num_191",
"instalment_amount_diff_num_170",
"instalment_amount_diff_num_69",
"instalment_dpd_num_171",
"instalment_amount_diff_num_212",
"instalment_dpd_num_175",
"instalment_dpd_num_72",
"instalment_dpd_num_97",
"instalment_amount_diff_num_192",
"instalment_amount_diff_num_26",
"instalment_amount_diff_num_160",
"instalment_dpd_num_57",
"bureau_credit_type_7.0",
"instalment_dpd_num_184",
"instalment_amount_diff_num_239",
"instalment_amount_diff_num_38",
"change_in_credit_limit_ot",
"instalment_amount_diff_num_131",
"instalment_amount_diff_num_130",
"mean_NAME_INCOME_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_146",
"instalment_amount_diff_num_198",
"instalment_amount_diff_num_39",
"instalment_amount_diff_num_6",
"instalment_dpd_num_194",
"instalment_amount_diff_num_204",
"instalment_dpd_num_51",
"due_to_paid_15",
"bureau_credit_type_14.0",
"instalment_dpd_num_168",
"instalment_dpd_num_160",
"instalment_amount_diff_num_90",
"instalment_dpd_num_78",
"HOUR_APPR_PROCESS_START_18",
"NONLIVINGAPARTMENTS_MEDI",
"instalment_amount_diff_num_33",
"instalment_amount_diff_num_178",
"instalment_dpd_num_136",
"instalment_dpd_num_17",
"instalment_amount_diff_num_89",
"prev_credit_year_4",
"instalment_amount_diff_num_105",
"instalment_dpd_num_64",
"instalment_dpd_num_21",
"NAME_GOODS_CATEGORY_19",
"instalment_amount_diff_num_194",
"instalment_dpd_num_114",
"instalment_dpd_num_134",
"instalment_dpd_num_98",
"due_to_paid_9",
"instalment_dpd_num_84",
"STATUS1.0",
"instalment_amount_diff_num_127",
"instalment_amount_diff_num_40",
"bureau_credit_type_5.0",
"prev_credit_year_5",
"instalment_dpd_num_127",
"instalment_amount_diff_num_56",
"PRODUCT_COMBINATION_9",
"instalment_amount_diff_num_155",
"instalment_amount_diff_num_219",
"due_to_paid_1",
"instalment_dpd_num_116",
"instalment_dpd_num_35",
"instalment_amount_diff_num_1",
"instalment_dpd_num_154",
"instalment_amount_diff_num_50",
"instalment_amount_diff_num_211",
"prev_credit_year_10",
"instalment_dpd_num_67",
"instalment_dpd_num_174",
"mean_OCCUPATION_TYPE_AMT_CREDIT",
"bbal_2",
"instalment_dpd_num_36",
"instalment_dpd_num_81",
"instalment_dpd_num_213",
"instalment_dpd_num_71",
"instalment_dpd_num_55",
"instalment_amount_diff_num_156",
"CNT_FAM_MEMBERS",
"bureau_credit_type_13.0",
"instalment_dpd_num_125",
"instalment_dpd_num_41",
"range_min_max_credit_limit",
"instalment_amount_diff_num_3",
"instalment_amount_diff_num_96",
"instalment_dpd_num_59",
"due_to_paid_19",
"instalment_dpd_num_69",
"instalment_dpd_num_130",
"instalment_dpd_num_204",
"instalment_amount_diff_num_177",
"instalment_dpd_num_135",
"NAME_GOODS_CATEGORY_2",
"instalment_amount_diff_num_150",
"instalment_dpd_num_143",
"instalment_amount_diff_num_122",
"instalment_dpd_num_122",
"instalment_dpd_num_117",
"instalment_dpd_num_146",
"instalment_amount_diff_num_55",
"due_to_paid_17",
"instalment_amount_diff_num_30",
"instalment_amount_diff_num_136",
"instalment_amount_diff_num_180",
"instalment_amount_diff_num_162",
"instalment_dpd_num_170",
"instalment_amount_diff_num_71",
"instalment_amount_diff_num_42",
"due_to_paid_4",
"mean_NAME_INCOME_TYPE_OCCUPATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_23",
"PRODUCT_COMBINATION_8",
"instalment_dpd_num_159",
"instalment_amount_diff_num_118",
"instalment_amount_diff_num_78",
"instalment_dpd_num_227",
"instalment_amount_diff_num_187",
"instalment_dpd_num_214",
"instalment_amount_diff_num_145",
"instalment_dpd_num_158",
"instalment_dpd_num_203",
"instalment_amount_diff_num_161",
"instalment_amount_diff_num_21",
"NUM_NULLS_EXT_SCORES",
"instalment_dpd_num_65",
"NAME_GOODS_CATEGORY_5",
"prev_credit_year_3",
"instalment_amount_diff_num_191",
"mean_cb_credit_annuity",
"instalment_amount_diff_num_17",
"instalment_dpd_num_63",
"instalment_amount_diff_num_129",
"instalment_amount_diff_num_148",
"instalment_amount_diff_num_27",
"instalment_dpd_num_121",
"HOUSETYPE_MODE",
"instalment_dpd_num_195",
"instalment_amount_diff_num_68",
"instalment_dpd_num_186",
"instalment_amount_diff_num_245",
"instalment_dpd_num_148",
"instalment_amount_diff_num_41",
"instalment_dpd_num_66",
"num_high_int_no_info_loans",
"mean_NAME_EDUCATION_TYPE_OCCUPATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_128",
"bbal_4",
"instalment_dpd_num_95",
"instalment_dpd_num_155",
"instalment_dpd_num_89",
"instalment_dpd_num_132",
"instalment_amount_diff_num_28",
"instalment_dpd_num_52",
"instalment_dpd_num_40",
"instalment_dpd_num_190",
"instalment_amount_diff_num_99",
"instalment_dpd_num_92",
"instalment_dpd_num_109",
"instalment_dpd_num_115",
"instalment_dpd_num_149",
"instalment_amount_diff_num_104",
"instalment_amount_diff_num_158",
"instalment_dpd_num_180",
"instalment_dpd_num_230",
"instalment_dpd_num_208",
"instalment_amount_diff_num_222",
"instalment_amount_diff_num_199",
"bureau_credit_year_10",
"instalment_dpd_num_177",
"instalment_amount_diff_num_63",
"due_to_paid_20",
"instalment_amount_diff_num_19",
"instalment_dpd_num_61",
"instalment_amount_diff_num_32",
"instalment_dpd_num_210",
"instalment_amount_diff_num_116",
"instalment_dpd_num_140",
"mean_OCCUPATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_117",
"due_to_paid_13",
"NAME_INCOME_TYPE__7",
"instalment_amount_diff_num_188",
"instalment_dpd_num_198",
"instalment_amount_diff_num_34",
"instalment_amount_diff_num_262",
"instalment_dpd_num_202",
"instalment_amount_diff_num_53",
"instalment_amount_diff_num_108",
"instalment_dpd_num_56",
"instalment_amount_diff_num_214",
"FONDKAPREMONT_MODE",
"instalment_dpd_num_192",
"instalment_amount_diff_num_189",
"instalment_amount_diff_num_86",
"instalment_dpd_num_169",
"instalment_amount_diff_num_172",
"instalment_dpd_num_46",
"instalment_dpd_num_211",
"instalment_amount_diff_num_109",
"mean_NAME_FAMILY_STATUS_NAME_INCOME_TYPE_DAYS_EMPLOYED",
"instalment_amount_diff_num_175",
"instalment_amount_diff_num_168",
"MONTHS_BALANCE_median",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_AMT_INCOME_TOTAL",
"instalment_amount_diff_num_58",
"instalment_amount_diff_num_51",
"instalment_dpd_num_74",
"instalment_dpd_num_113",
"instalment_amount_diff_num_137",
"instalment_dpd_num_39",
"instalment_amount_diff_num_25",
"NAME_YIELD_GROUP_3",
"instalment_dpd_num_165",
"instalment_amount_diff_num_107",
"HOUR_APPR_PROCESS_START_16",
"prev_credit_year_11",
"CHANNEL_TYPE_6",
"instalment_amount_diff_num_88",
"instalment_amount_diff_num_64",
"instalment_amount_diff_num_201",
"ELEVATORS_AVG",
"prev_credit_year_2",
"instalment_amount_diff_num_37",
"instalment_dpd_num_54",
"instalment_amount_diff_num_153",
"instalment_amount_diff_num_203",
"instalment_dpd_num_166",
"ENTRANCES_MEDI",
"instalment_amount_diff_num_166",
"mean_NAME_INCOME_TYPE_DAYS_BIRTH",
"due_to_paid_10",
"instalment_amount_diff_num_141",
"instalment_dpd_num_96",
"instalment_dpd_num_167",
"instalment_amount_diff_num_140",
"instalment_amount_diff_num_77",
"NAME_FAMILY_STATUS",
"instalment_dpd_num_133",
"NAME_TYPE_SUITE",
"instalment_amount_diff_num_134",
"instalment_amount_diff_num_72",
"instalment_amount_diff_num_80",
"instalment_dpd_num_193",
"instalment_dpd_num_86",
"instalment_amount_diff_num_207",
"instalment_amount_diff_num_234",
"instalment_dpd_num_29",
"instalment_amount_diff_num_196",
"instalment_amount_diff_num_195",
"instalment_dpd_num_75",
"bureau_bal_pl_5",
"instalment_amount_diff_num_73",
"instalment_amount_diff_num_81",
"instalment_amount_diff_num_215",
"due_to_paid_23",
"instalment_amount_diff_num_114",
"instalment_amount_diff_num_157",
"bureau_credit_status_1.0",
"instalment_amount_diff_num_2",
"instalment_dpd_num_94",
"instalment_amount_diff_num_45",
"instalment_amount_diff_num_4",
"instalment_amount_diff_num_22",
"instalment_amount_diff_num_74",
"instalment_amount_diff_num_70",
"bureau_credit_year_11",
"instalment_dpd_num_85",
"instalment_amount_diff_num_184",
"instalment_amount_diff_num_126",
"instalment_dpd_num_14",
"instalment_amount_diff_num_62",
"instalment_amount_diff_num_121",
"instalment_amount_diff_num_15",
"instalment_dpd_num_172",
"instalment_dpd_num_142",
"mean_OCCUPATION_TYPE_DAYS_BIRTH",
"instalment_amount_diff_num_44",
"instalment_amount_diff_num_100",
"instalment_dpd_num_58",
"instalment_amount_diff_num_49",
"instalment_dpd_num_26",
"instalment_dpd_num_79",
"instalment_dpd_num_119",
"instalment_amount_diff_num_149",
"bbal_3",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_DAYS_BIRTH",
"due_to_paid_22",
"instalment_amount_diff_num_202",
"instalment_amount_diff_num_208",
"instalment_dpd_num_47",
"young_age",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_DAYS_BIRTH",
"due_to_paid_24",
"instalment_dpd_num_212",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_AMT_CREDIT",
"mean_OCCUPATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_44",
"instalment_amount_diff_num_182",
"due_to_paid_7",
"instalment_amount_diff_num_154",
"instalment_amount_diff_num_95",
"instalment_dpd_num_93",
"instalment_dpd_num_179",
"due_to_paid_11",
"bureau_credit_type_9.0",
"instalment_amount_diff_num_111",
"prev_credit_year_-1",
"mean_NAME_EDUCATION_TYPE_AMT_INCOME_TOTAL",
"instalment_dpd_num_189",
"instalment_amount_diff_num_256",
"instalment_dpd_num_90",
"instalment_amount_diff_num_254",
"diff_education_ext_income_mean",
"AMT_INCOME_TOTAL",
"instalment_amount_diff_num_29",
"instalment_amount_diff_num_60",
"prev_credit_year_9",
"instalment_amount_diff_num_210",
"mean_NAME_INCOME_TYPE_AMT_INCOME_TOTAL",
"instalment_amount_diff_num_176",
"instalment_amount_diff_num_98",
"instalment_amount_diff_num_47",
"instalment_amount_diff_num_173",
"HOUR_APPR_PROCESS_START_12",
"DPD_9",
"instalment_dpd_num_42",
"instalment_amount_diff_num_43",
"bureau_credit_type_11.0",
"instalment_amount_diff_num_221",
"instalment_dpd_num_138",
"instalment_amount_diff_num_128",
"instalment_dpd_num_108",
"mean_OCCUPATION_TYPE_EXT_SOURCE_2",
"instalment_dpd_num_123",
"instalment_amount_diff_num_76",
"instalment_dpd_num_24",
"instalment_dpd_num_139",
"prev_credit_year_7",
"credit_total_instalment_regular",
"due_to_paid_18",
"instalment_amount_diff_num_164",
"instalment_amount_diff_num_268",
"instalment_dpd_num_183",
"instalment_dpd_num_145",
"instalment_dpd_num_201",
"instalment_amount_diff_num_57",
"mean_NAME_INCOME_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_99",
"due_to_paid_25",
"instalment_dpd_num_137",
"instalment_dpd_num_73",
"instalment_dpd_num_68",
"instalment_amount_diff_num_183",
"instalment_dpd_num_30",
"instalment_dpd_num_70",
"instalment_dpd_num_37",
"NAME_EDUCATION_TYPE__1",
"instalment_dpd_num_151",
"bureau_credit_year_9",
"instalment_dpd_num_152",
"due_to_paid_5",
"instalment_dpd_num_207",
"child_to_non_child_ratio",
"instalment_dpd_num_87",
"bureau_credit_type_8.0",
"due_to_paid_6",
"due_to_paid_16",
"instalment_amount_diff_num_110",
"NONLIVINGAPARTMENTS_MODE",
"instalment_amount_diff_num_181",
"bureau_credit_year_0",
"instalment_amount_diff_num_91",
"instalment_amount_diff_num_152",
"bureau_bal_pl_3",
"instalment_dpd_num_45",
"instalment_amount_diff_num_54",
"instalment_dpd_num_173",
"instalment_dpd_num_120",
"instalment_dpd_num_31",
"due_to_paid_0",
"instalment_amount_diff_num_179",
"instalment_dpd_num_124",
"instalment_amount_diff_num_159",
"instalment_amount_diff_num_65",
"instalment_dpd_num_176",
"instalment_dpd_num_33",
"instalment_amount_diff_num_167",
"bureau_credit_year_8",
"instalment_dpd_num_53",
"instalment_dpd_num_164",
"EMERGENCYSTATE_MODE",
"instalment_dpd_num_188",
"instalment_amount_diff_num_79",
"instalment_dpd_num_141",
"bureau_credit_type_1.0",
"instalment_amount_diff_num_82",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_CNT_CHILDREN",
"cash_dpd_sum",
"instalment_amount_diff_num_125",
"FLAG_OWN_CAR",
"instalment_amount_diff_num_132",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_DAYS_ID_PUBLISH",
"instalment_amount_diff_num_8",
"instalment_amount_diff_num_138",
"instalment_dpd_num_80",
"instalment_amount_diff_num_106",
"instalment_amount_diff_num_135",
"bbal_5",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_AMT_CREDIT",
"instalment_dpd_num_62",
"instalment_dpd_num_126",
"due_to_paid_14",
"HOUR_APPR_PROCESS_START_11",
"mean_NAME_INCOME_TYPE_NAME_EDUCATION_TYPE_DAYS_BIRTH",
"instalment_amount_diff_num_139",
"instalment_amount_diff_num_87",
"instalment_amount_diff_num_61",
"most_recent_min_pos_cash_dpd",
"instalment_dpd_num_77",
"instalment_amount_diff_num_119",
"instalment_dpd_num_150",
"instalment_amount_diff_num_103",
"instalment_amount_diff_num_59",
"HOUR_APPR_PROCESS_START_17",
"instalment_dpd_num_82",
"mean_NAME_EDUCATION_TYPE_AMT_CREDIT",
"bureau_credit_type_2.0",
"bureau_credit_type_12.0",
"mean_NAME_EDUCATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_97",
"instalment_amount_diff_num_36",
"instalment_amount_diff_num_66",
"CODE_GENDER",
"instalment_dpd_num_112",
"instalment_dpd_num_34",
"HOUR_APPR_PROCESS_START_9",
"YEARS_BUILD_AVG",
"max_credit_term",
"instalment_amount_diff_num_147",
"due_to_paid_21",
"instalment_amount_diff_num_151",
"instalment_dpd_num_129",
"instalment_amount_diff_num_123",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_AMT_ANNUITY",
"instalment_dpd_num_215",
"instalment_dpd_num_218",
"instalment_amount_diff_num_94",
"instalment_dpd_num_178",
"instalment_dpd_num_118",
"instalment_dpd_num_162",
"STATUS7.0",
"prev_credit_year_8",
"HOUR_APPR_PROCESS_START_6",
"instalment_dpd_num_60",
"instalment_amount_diff_num_142",
"instalment_amount_diff_num_186",
"instalment_dpd_num_76",
"instalment_amount_diff_num_75",
"instalment_dpd_num_88",
"instalment_amount_diff_num_35",
"instalment_amount_diff_num_102",
"instalment_amount_diff_num_67",
"instalment_amount_diff_num_237",
"instalment_dpd_num_187",
"instalment_dpd_num_50",
"credit_dpd_sum",
"instalment_dpd_num_196",
"instalment_amount_diff_num_84",
"instalment_dpd_num_181",
"instalment_dpd_num_49",
"instalment_dpd_num_161",
"CNT_CHILDREN",
"instalment_dpd_num_157",
"total_credit_debt_active_to_closed",
"mean_NAME_INCOME_TYPE_NAME_EDUCATION_TYPE_DAYS_EMPLOYED",
"bureau_credit_type_6.0",
"instalment_amount_diff_num_174",
"mean_OCCUPATION_TYPE_OWN_CAR_AGE",
"instalment_amount_diff_num_133",
"instalment_amount_diff_num_144",
"instalment_dpd_num_91",
"instalment_amount_diff_num_124",
"instalment_amount_diff_num_120",
"instalment_amount_diff_num_85",
"due_to_paid_12",
"instalment_dpd_num_156",
"instalment_amount_diff_num_185",
"bureau_credit_year_-1",
"instalment_dpd_num_83",
"instalment_amount_diff_num_52",
"instalment_dpd_num_163",
"instalment_amount_diff_num_12",
"due_to_paid_8",
"instalment_dpd_num_131",
"instalment_dpd_num_32",
"FLOORSMAX_MEDI",
"NAME_EDUCATION_TYPE__4",
"instalment_amount_diff_num_93",
"instalment_dpd_num_110",
"instalment_amount_diff_num_113",
"instalment_dpd_num_185",
"instalment_amount_diff_num_163",
"instalment_amount_diff_num_92",
"instalment_amount_diff_num_264",
"instalment_amount_diff_num_112",
"children_ratio",
"instalment_amount_diff_num_165",
"ELEVATORS_MEDI",
"instalment_amount_diff_num_197",
"instalment_amount_diff_num_115",
"instalment_amount_diff_num_171",
"num_diff_credits",
"instalment_dpd_num_200",
"instalment_dpd_num_182",
"instalment_amount_diff_num_83",
"bureau_credit_type_0.0",
"instalment_amount_diff_num_13",
"FLOORSMAX_MODE",
"instalment_amount_diff_num_193",
"instalment_dpd_num_153",
"mean_NAME_FAMILY_STATUS_NAME_INCOME_TYPE_DAYS_BIRTH",
"STATUS2.0",
"mean_NAME_EDUCATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_111"
]
PARAMS = {
'num_boost_round': 20000,
'early_stopping_rounds': 200,
'objective': 'binary',
'boosting_type': 'gbdt',
'learning_rate': .01,
'metric': 'auc',
'num_leaves': 20,
'sub_feature': 0.05,
'bagging_fraction': 0.9,
'reg_lambda': 75,
'reg_alpha': 5,
'min_split_gain': .5,
'min_data_in_leaf': 15,
'min_sum_hessian_in_leaf': 1,
'nthread': 16,
'verbose': -1,
'seed': SEED
}
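# A minimal training sketch for PARAMS (assumes `lgb` is lightgbm and that
# `dtrain`/`dvalid` are lgb.Dataset objects built elsewhere in the pipeline).
# Note that 'num_boost_round' and 'early_stopping_rounds' are arguments of
# lgb.train() rather than params-dict keys, so they are split out first; in
# LightGBM >= 4 early stopping is passed via callbacks instead.
#
#   train_params = {k: v for k, v in PARAMS.items()
#                   if k not in ('num_boost_round', 'early_stopping_rounds')}
#   booster = lgb.train(train_params,
#                       dtrain,
#                       num_boost_round=PARAMS['num_boost_round'],
#                       valid_sets=[dvalid],
#                       early_stopping_rounds=PARAMS['early_stopping_rounds'])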
PCA_PARAMS = {
'n_components': 10,
'whiten': True,
'random_state': SEED
}
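# PCA_PARAMS plugs directly into scikit-learn's decomposition API (a sketch,
# assuming `X` is an imputed, finite feature matrix -- PCA cannot handle NaN
# or inf values, which is why add_missing_values_flag below runs first):
#
#   from sklearn.decomposition import PCA
#   components = PCA(**PCA_PARAMS).fit_transform(X)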
MODEL_FILENAME = 'v145'
SAMPLE_SIZE = .3
# NOTE: a column listed in FREQ_ENCODING_COLS
# must not also appear in OHE_COLS.
FREQ_ENCODING_COLS = ['ORGANIZATION_OCCUPATION',
'age_emp_categorical',
'age_occupation'
]
OHE_COLS = [
'ORGANIZATION_TYPE',
'OCCUPATION_TYPE',
'NAME_EDUCATION_TYPE',
'NAME_HOUSING_TYPE',
'NAME_INCOME_TYPE'
]
TARGET_ENCODING_COLS = []
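# A minimal sketch of what frequency encoding is assumed to do downstream
# for FREQ_ENCODING_COLS (hypothetical dataframe `df`): each category is
# replaced by its relative frequency in the column.
#
#   for col in FREQ_ENCODING_COLS:
#       df[col + '_freq'] = df[col].map(df[col].value_counts(normalize=True))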
class Modelv145(BaseModel):
def __init__(self, **params):
self.params = params
self.n_train = 307511 # TODO: find a way to remove this constant
def load_data(self, filenames):
dfs = []
for filename in filenames:
dfs.append(pd.read_csv(filename, parse_dates=True, keep_date_col=True))
df = pd.concat(dfs)
df.index = np.arange(len(df))
df = super(Modelv145, self).reduce_mem_usage(df)
return df
def reduce_mem_usage(self, df):
return super(Modelv145, self).reduce_mem_usage(df)
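# reduce_mem_usage simply delegates to the BaseModel implementation, which
# presumably downcasts numeric dtypes to shrink the dataframe's footprint.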
def preprocess(self):
tr = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'application_train.pkl'))
te = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'application_test.pkl'))
ntrain = len(tr)
data = pd.concat((tr, te))
del tr, te
gc.collect()
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_train.pkl')):
print('Generating features based on current application ....')
t0 = time.time()
data, FEATURE_NAMES = current_application_features(data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on current application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
print('Generating features based on credits reported to bureau ....')
t0 = time.time()
data, FEATURE_NAMES = bureau_features(bureau, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
del bureau
gc.collect()
else:
print('Already generated features based on bureau application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
bureau_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau_balance.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
for col in bureau_bal.select_dtypes(include=['category']).columns:
bureau_bal.loc[:, col] = bureau_bal.loc[:, col].cat.codes
print('Generating features based on credits reported to bureau and bureau balance ....')
t0 = time.time()
data, FEATURE_NAMES = bureau_and_balance(bureau, bureau_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on bureau and balance')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
print('Generating features based on previous application ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_features(prev_app, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_train.pkl')):
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
print('Generating features based on pos cash ....')
t0 = time.time()
data, FEATURE_NAMES = pos_cash_features(pos_cash, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del pos_cash
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on pos cash')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_train.pkl')):
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on Credit Card ....')
t0 = time.time()
data, FEATURE_NAMES = credit_card_features(credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del credit_bal
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Credit Card')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_train.pkl')):
installments = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'installments_payments.pkl'))
for col in installments.select_dtypes(include=['category']).columns:
installments.loc[:, col] = installments.loc[:, col].cat.codes
print('Generating features based on Installments ....')
t0 = time.time()
data, FEATURE_NAMES = get_installment_features(installments, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del installments
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Installments')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Bureau Applications....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_bureau(prev_app, bureau, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del bureau, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Bureau Applications')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Credit card balance ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_credit_card(prev_app, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del credit_bal, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Credit card balance')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
installments = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'installments_payments.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in installments.select_dtypes(include=['category']).columns:
installments.loc[:, col] = installments.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Installment Payments ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_installments(prev_app, installments, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del installments, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Installment Payments.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on loan stacking ....')
t0 = time.time()
data, FEATURE_NAMES = loan_stacking(bureau, prev_app, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
del bureau
gc.collect()
else:
print('Already generated features based on loan stacking.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_train.pkl')):
print('Generating features based on feature groups ....')
t0 = time.time()
data, FEATURE_NAMES = feature_groups(data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on feature groups.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_train.pkl')):
print('Generating features based on previous application and pos cash ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_pos(prev_app, pos_cash, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application and pos cash.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_train.pkl')):
print('Generating features based on previous application, pos cash and credit card balance ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_pos_credit(prev_app, pos_cash, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application, pos cash and credit card balance.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_train.pkl')):
print('Generating features based on previous application one hot encoded features ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_ohe(prev_app, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application one hot encoded features.')
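# NOTE: prepare_features below repeats the feature-generation pipeline of
# preprocess() almost verbatim; the two could be collapsed into one helper.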
def prepare_features(self):
tr = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'application_train.pkl'))
te = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'application_test.pkl'))
ntrain = len(tr)
data = pd.concat((tr, te))
del tr, te
gc.collect()
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_train.pkl')):
print('Generating features based on current application ....')
t0 = time.time()
data, FEATURE_NAMES = current_application_features(data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on current application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
print('Generating features based on credits reported to bureau ....')
t0 = time.time()
data, FEATURE_NAMES = bureau_features(bureau, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
del bureau
gc.collect()
else:
print('Already generated features based on bureau application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
bureau_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau_balance.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
for col in bureau_bal.select_dtypes(include=['category']).columns:
bureau_bal.loc[:, col] = bureau_bal.loc[:, col].cat.codes
print('Generating features based on credits reported to bureau and bureau balance ....')
t0 = time.time()
data, FEATURE_NAMES = bureau_and_balance(bureau, bureau_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on bureau and balance')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
print('Generating features based on previous application ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_features(prev_app, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_train.pkl')):
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
print('Generating features based on pos cash ....')
t0 = time.time()
data, FEATURE_NAMES = pos_cash_features(pos_cash, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del pos_cash
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on pos cash')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_train.pkl')):
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on Credit Card ....')
t0 = time.time()
data, FEATURE_NAMES = credit_card_features(credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del credit_bal
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Credit Card')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_train.pkl')):
installments = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'installments_payments.pkl'))
for col in installments.select_dtypes(include=['category']).columns:
installments.loc[:, col] = installments.loc[:, col].cat.codes
print('Generating features based on Installments ....')
t0 = time.time()
data, FEATURE_NAMES = get_installment_features(installments, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del installments
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Installments')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Bureau Applications....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_bureau(prev_app, bureau, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del bureau, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Bureau Applications')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Credit card balance ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_credit_card(prev_app, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del credit_bal, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Credit card balance')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
installments = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'installments_payments.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in installments.select_dtypes(include=['category']).columns:
installments.loc[:, col] = installments.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Installment Payments ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_installments(prev_app, installments, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del installments, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Installment Payments.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on loan stacking ....')
t0 = time.time()
data, FEATURE_NAMES = loan_stacking(bureau, prev_app, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
del bureau
gc.collect()
else:
print('Already generated features based on loan stacking.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_train.pkl')):
print('Generating features based on feature groups ....')
t0 = time.time()
data, FEATURE_NAMES = feature_groups(data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on feature groups.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_train.pkl')):
print('Generating features based on previous application and pos cash ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_pos(prev_app, pos_cash, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application and pos cash.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_train.pkl')):
print('Generating features based on previous application, pos cash and credit card balance ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_pos_credit(prev_app, pos_cash, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application, pos cash and credit card balance.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_train.pkl')):
print('Generating features based on previous application one hot encoded features ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_ohe(prev_app, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application one hot encoded features.')
# Loads the engineered feature groups from disk and concatenates them
# column-wise, returning one train and one test dataframe for downstream
# layers; this assumes every group preserves the same row order (each part
# is re-indexed with np.arange before concatenation).
def merge_datasets(self):
def get_filenames():
filenames = [f'application_',
f'current_application_',
f'bureau_',
f'bureau_bal_',
f'prev_app_',
f'pos_cash_',
f'credit_',
f'installments_',
f'prev_app_bureau_',
f'prev_app_credit_',
f'prev_app_installments_',
f'loan_stacking_',
f'feature_groups_',
f'prev_app_pos_cash_',
f'prev_app_pos_cash_credit_bal_',
f'prev_app_ohe_'
]
return filenames
train = []
test = []
filenames = get_filenames()
for filename_ in filenames:
tmp = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'{filename_}train.pkl'))
tmp.index = np.arange(len(tmp))
train.append(tmp)
for filename_ in filenames:
tmp = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'{filename_}test.pkl'))
tmp.index = np.arange(len(tmp))
test.append(tmp)
return pd.concat(train, axis=1), pd.concat(test, axis=1)
def feature_interaction(self, data, key, agg_feature, agg_func, agg_func_name):
key_name = '_'.join(key)
tmp = data.groupby(key)[agg_feature].apply(agg_func)\
.reset_index()\
.rename(columns={agg_feature: f'{agg_func_name}_{key_name}_{agg_feature}'})
feat_name = f'{agg_func_name}_{key_name}_{agg_feature}'
data.loc[:, feat_name] = data.loc[:, key].merge(tmp, on=key, how='left')[feat_name]
return data, feat_name
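# Example for feature_interaction above (a sketch with hypothetical inputs):
# grouping by ['CODE_GENDER', 'NAME_EDUCATION_TYPE'] and aggregating
# 'EXT_SOURCE_2' with np.mean adds a column named
# 'mean_CODE_GENDER_NAME_EDUCATION_TYPE_EXT_SOURCE_2' that holds, for every
# row, the group mean of EXT_SOURCE_2 for that row's (gender, education) pair:
#
#   data, feat = self.feature_interaction(
#       data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'],
#       'EXT_SOURCE_2', np.mean, 'mean')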
def feature_preprocessing(self, data):
# current application preprocessing
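# 365243 is this dataset's sentinel for "no value" in the DAYS_* columns
# (roughly a thousand years in days), and CODE_GENDER == 2 is presumed to be
# the encoded rare 'XNA' category; mapping them to NaN lets downstream
# imputation treat them as genuinely missing.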
data['DAYS_LAST_PHONE_CHANGE'].replace(0, np.nan, inplace=True)
data['CODE_GENDER'].replace(2, np.nan, inplace=True)
data['DAYS_EMPLOYED'].replace(365243, np.nan, inplace=True)
# previous application
data['DAYS_FIRST_DRAWING'].replace(365243, np.nan, inplace=True)
data['DAYS_FIRST_DUE'].replace(365243, np.nan, inplace=True)
data['DAYS_LAST_DUE_1ST_VERSION'].replace(365243, np.nan, inplace=True)
data['DAYS_LAST_DUE'].replace(365243, np.nan, inplace=True)
data['DAYS_TERMINATION'].replace(365243, np.nan, inplace=True)
return data
def add_missing_values_flag(self, data):
# preprocess for pca
SKIP_COLS = ['SK_ID_CURR', 'TARGET']
for col in data.columns.drop(SKIP_COLS):
# replace inf with np.nan
data[col] = data[col].replace([np.inf, -np.inf], np.nan)
# fill missing values with median
if data[col].isnull().sum():
data[f'{col}_flag'] = data[col].isnull().astype(np.uint8)
if pd.isnull(data[col].median()):
data[col] = data[col].fillna(-1)
else:
data[col] = data[col].fillna(data[col].median())
return data
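# Example for add_missing_values_flag above (a sketch): given a column 'x'
# holding [1.0, NaN, inf], the method leaves x = [1.0, 1.0, 1.0] (inf is
# first mapped to NaN, then filled with the median) and adds a companion
# 'x_flag' = [0, 1, 1] marking which entries were imputed.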
def get_features(self, train, test, compute_categorical):
data = pd.concat((train, test))
data.index = np.arange(len(data))
for col in data.select_dtypes(include=['category']).columns:
data[col] = data[col].cat.codes
# TODO: not happy with how these interactions are computed: if one of the
# base features is dropped from the pipeline, the code below still runs,
# but the derived interaction silently becomes a column of all-null values.
# concatenate OCCUPATION TYPE AND ORGANIZATION TYPE
data.loc[:, 'ORGANIZATION_OCCUPATION'] = pd.factorize(data.ORGANIZATION_TYPE.astype(str) +
data.OCCUPATION_TYPE.astype(str)
)[0]
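# pd.factorize label-encodes the concatenated type strings into consecutive
# integers, yielding one categorical id per distinct
# (ORGANIZATION_TYPE, OCCUPATION_TYPE) pair.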
# interaction between total debt to income and (annuity / credit)
data.loc[:, 'debt_income_to_annuity_credit'] = data.total_debt_to_income / data.ratio_annuity_credit
# interaction between days birth and ratio of annuity to credit
data.loc[:, 'add_days_birth_annuity_credit'] = data.DAYS_BIRTH + data.ratio_annuity_credit
# interaction between ratio of annuity to credit with external source 2 score
data.loc[:, 'mult_annuity_credit_ext_source_2'] = data.ratio_annuity_credit * data.EXT_SOURCE_2
data.loc[:, 'ratio_annuity_credit_ext_source_2'] = data.ratio_annuity_credit / data.EXT_SOURCE_2.map(np.log1p)
data.loc[:, 'mult_annuity_credit_ext_source_1'] = data.ratio_annuity_credit * data.EXT_SOURCE_1
data.loc[:, 'ratio_annuity_credit_ext_source_1'] = data.ratio_annuity_credit / data.EXT_SOURCE_1.map(np.log1p)
data.loc[:, 'mult_annuity_credit_ext_source_3'] = data.ratio_annuity_credit * data.EXT_SOURCE_3
data.loc[:, 'ratio_annuity_credit_ext_source_3'] = data.ratio_annuity_credit / data.EXT_SOURCE_3.map(np.log1p)
# interaction between ratio of annuity to credit with total amount paid in installments
data.loc[:, 'mult_annuity_credit_amt_payment_sum'] = data.ratio_annuity_credit * data.AMT_PAYMENT_sum
# interaction between total amount paid in installments and delay in installments
data.loc[:, 'mult_amt_payment_sum_delay_installment'] = data.AMT_PAYMENT_sum * data.delay_in_installment_payments
# interaction between credit / annuity and age
data.loc[:, 'diff_credit_annuity_age'] = (data.AMT_CREDIT / data.AMT_ANNUITY) - (-data.DAYS_BIRTH / 365)
# interaction between ext_3 and age
data.loc[:, 'ext_3_age'] = data.EXT_SOURCE_3 * (-data.DAYS_BIRTH / 365)
# interaction between ext_2 and age
data.loc[:, 'ext_2_age'] = data.EXT_SOURCE_2 * (-data.DAYS_BIRTH / 365)
# interaction between rate and external source 2
data.loc[:, 'add_rate_ext_2'] = (data.AMT_CREDIT / data.AMT_ANNUITY) + data.EXT_SOURCE_2
# interaction between rate and age
data.loc[:, 'add_rate_age'] = (data.AMT_CREDIT / data.AMT_ANNUITY) + (-data.DAYS_BIRTH / 365)
# interaction between age and employed and external score 2
data.loc[:, 'add_mult_age_employed_ext_2'] = ((-data.DAYS_BIRTH / 365) +\
(-data.DAYS_EMPLOYED.replace({365243: np.nan}))) *\
(data.EXT_SOURCE_2)
# combine ratio annuity credit, region populative relative and ext source 2
data.loc[:, 'rate_annuity_region_ext_source_2'] = data.ratio_annuity_credit * data.REGION_POPULATION_RELATIVE * data.EXT_SOURCE_2
data.loc[:, 'region_ext_source_3'] = data.REGION_POPULATION_RELATIVE * data.EXT_SOURCE_3
# Relationship between AMT_REQ_CREDIT_BUREAU_HOUR and AMT_REQ_CREDIT_BUREAU_YEAR
data.loc[:, 'ratio_check_hour_to_year'] = data.AMT_REQ_CREDIT_BUREAU_HOUR.div(data.AMT_REQ_CREDIT_BUREAU_YEAR)
# Relationship between Income and ratio annuity credit
data.loc[:, 'mult_ratio_income'] = (data.ratio_annuity_credit * data.AMT_INCOME_TOTAL).map(np.log1p)
data.loc[:, 'div_ratio_income'] = (data.AMT_INCOME_TOTAL / data.ratio_annuity_credit).map(np.log1p)
# Gender, Education and other features
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_code_gender_name_education_type_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.var, 'var')
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_code_gender_name_education_type_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_code_gender_name_education_type_amt_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_code_gender_name_education_type_amt_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'OWN_CAR_AGE', np.max, 'max')
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'OWN_CAR_AGE', np.sum, 'sum')
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_education_type_age'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_code_gender_education_type_empl'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_code_gender_education_type_income'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Gender, Occupation and other features
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_days_birth_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
# Gender, Organization and other features
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_amt_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'DAYS_REGISTRATION', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_days_reg_mean'] = data[feat_name] - data['DAYS_REGISTRATION']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
# Gender, Reg city not work city and other features
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_amount_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'CNT_CHILDREN', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_cnt_children_mean'] = data[feat_name] - data['CNT_CHILDREN']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_ID_PUBLISH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_days_id_mean'] = data[feat_name] - data['DAYS_ID_PUBLISH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Income, Occupation and Ext Score
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Occupation and Organization and Ext Score
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Income, Education and Ext score
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Education and Occupation and other features
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_amt_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'OWN_CAR_AGE', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_car_age_mean'] = data[feat_name] - data['OWN_CAR_AGE']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Education, Occupation, Reg city not work city and other features
# NOTE: these column names include reg_city so they do not overwrite the
# two-way Education x Occupation diffs computed above.
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Occupation and other features
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'CNT_CHILDREN', np.mean, 'mean')
data.loc[:, 'diff_occupation_cnt_children_mean'] = data[feat_name] - data['CNT_CHILDREN']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'CNT_FAM_MEMBERS', np.mean, 'mean')
data.loc[:, 'diff_occupation_cnt_fam_members_mean'] = data[feat_name] - data['CNT_FAM_MEMBERS']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_occupation_days_birth_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_occupation_days_employed_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_occupation_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_occupation_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_occupation_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'OWN_CAR_AGE', np.mean, 'mean')
data.loc[:, 'diff_occupation_own_car_age_mean'] = data[feat_name] - data['OWN_CAR_AGE']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'YEARS_BUILD_AVG', np.mean, 'mean')
data.loc[:, 'diff_occupation_year_build_mean'] = data[feat_name] - data['YEARS_BUILD_AVG']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'ratio_annuity_credit', np.mean, 'mean')
data.loc[:, 'diff_occupation_annuity_credit_mean'] = data[feat_name] - data['ratio_annuity_credit']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_occupation_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_occupation_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_occupation_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Organization type and other features
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_organization_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_organization_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_organization_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_organization_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_organization_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_organization_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_organization_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_organization_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# INCOME Type and other features
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_income_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data.loc[:, 'ratio_income_ext_source_1_mean'] = data[feat_name] / data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_income_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_income_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_income_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_income_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_income_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_income_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_income_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# EDUCATION Type and other features
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_education_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_education_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_education_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_education_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_education_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_education_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_education_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_education_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Family Type and Income Type
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Family Type and Education Type
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_family_education_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_family_education_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_family_education_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Family Type, Organization Type
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Family Type, Occupation Type
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# frequency encoding of some of the categorical variables.
data = frequency_encoding(data, FREQ_ENCODING_COLS)
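# `frequency_encoding` is defined elsewhere; judging by its use it presumably
# maps each category to its occurrence count, e.g. (sketch, assumed):
#
#   def frequency_encoding(data, cols):
#       for col in cols:
#           data.loc[:, f'{col}_freq'] = data[col].map(data[col].value_counts())
#       return data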
# add pca components
if os.path.exists(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}pca.pkl')):
pca_components = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}pca.pkl'))
else:
pca_components = super(Modelv145, self).add_pca_components(data.copy(), PCA_PARAMS)
pca_components.to_pickle(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}pca.pkl'))
# add tsne components
if os.path.exists(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}tsne.pkl')):
tsne_components = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}tsne.pkl'))
else:
tsne_components = super(Modelv145, self).add_tsne_components(data.copy())
tsne_components.to_pickle(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}tsne.pkl'))
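# Both blocks above use a cache-or-compute pattern: reuse the pickled
# components when present, otherwise fit PCA/t-SNE once and persist the result
# so later runs skip the expensive decomposition.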
pca_components.index = data.index
data = pd.concat((data, pca_components), axis=1)
tsne_components.index = data.index
data = pd.concat((data, tsne_components), axis=1)
# encoding of the remaining categorical variables, controlled by the
# `compute_categorical` string: 'ohe' applies one-hot encoding, 'freq' applies frequency encoding.
if compute_categorical == 'ohe':
print('Computing One Hot Encoding of categorical features ...')
print('*' * 100)
data = super(Modelv145, self).prepare_ohe(data, OHE_COLS, drop_col=True)
elif compute_categorical == 'freq':
print('Computing Frequency Encoding of Categorical features ....')
print('*' * 100)
data = frequency_encoding(data, OHE_COLS)
return data
# Performs feature engineering on the merged train/test frames; note that when
# `compute_categorical` is None the categorical encoding step is skipped.
def fe(self, train, test, compute_categorical=None):
original_train = train.copy()
data = self.get_features(original_train, test, compute_categorical)
train = data.iloc[:len(train)]
test = data.iloc[len(train):]
del data, original_train
gc.collect()
return train, test
# This method just calls the base class with X,y, Xte and yte in the right format
# to train and returns a trained model which could be dumped on disk for further use.
# TODO: Find out why we are not able to load back model from disk and generate correct predictions
# there seems to be some issue in it right now.
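# A plausible cause (unverified): a reloaded lgb.Booster only reproduces the
# original predictions if the prediction frame has the identical column order
# and categorical encodings used at training time, so any drift in
# `feature_list` ordering or encoding between runs would change the scores.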
def train(self, train, test, feature_list, is_eval, TARGET_NAME='TARGET', **params):
X = train.loc[:, feature_list]
y = train.loc[:, TARGET_NAME]
Xte = test.loc[:, feature_list]
yte = []
if is_eval:
yte = test.loc[:, TARGET_NAME]
return super(Modelv145, self).train_lgb(X, y, Xte, yte, **params)
# This method just takes in a model and test dataset and returns predictions
# prints out AUC on the test dataset as well in the process.
def evaluate(self, test, feature_list, is_eval, model, TARGET_NAME='TARGET'):
Xte = test.loc[:, feature_list]
yte = []
if is_eval:
yte = test.loc[:, TARGET_NAME]
return super(Modelv145, self).evaluate_lgb(Xte, yte, model)
def cv_predict(self, train, test, feature_list, params, cv_adversarial_filepath=None, categorical_feature='auto'):
return super(Modelv145, self).cv_predict(train,
test,
feature_list,
params,
cv_adversarial_filepath=cv_adversarial_filepath,
categorical_feature=categorical_feature
)
def predict_test(self, train, test, feature_list, params, save_path, n_folds=5):
return super(Modelv145, self).predict_test(train, test, feature_list, params, save_path, n_folds=n_folds)
def cross_validate(self, train, feature_list, params, cv_adversarial_filepath=None, TARGET_NAME='TARGET'):
Xtr = train.loc[:, feature_list]
ytr = train.loc[:, TARGET_NAME]
return super(Modelv145, self).cross_validate(Xtr, ytr, params, cv_adversarial_filepath=cv_adversarial_filepath)
def rf_fi(self, train, feature_list, SEED, target='TARGET'):
X = train.loc[:, feature_list]
y = train.loc[:, target]
return super(Modelv145, self).rf_fi(X, y, SEED)
def optimize_lgb(self, train, test, feature_list, TARGET_NAME='TARGET'):
Xtr = train.loc[:, feature_list]
ytr = train.loc[:, TARGET_NAME]
Xte = test.loc[:, feature_list]
yte = test.loc[:, TARGET_NAME]
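# Each entry below is a (lower, upper) bound; the base-class optimizer
# presumably samples real values in these ranges (bayes_opt-style) and casts
# the integer-valued parameters internally.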
param_grid = {
'sub_feature': (.01, .3),
'max_depth': (3, 8),
'min_data_in_leaf': (20, 100),
'min_child_weight': (1, 100),
'reg_lambda': (.1, 100),
'reg_alpha': (.1, 100),
'min_split_gain': (.01, .03),
'num_leaves': (5, 100)
}
return super(Modelv145, self).optimize_lgb(Xtr, ytr, Xte, yte, param_grid)
def get_oof_preds(self, train, test, feature_list, model, TARGET_NAME='TARGET'):
X = train.loc[:, feature_list]
y = train.loc[:, TARGET_NAME]
Xte = test.loc[:, feature_list]
return super(Modelv145, self).oof_preds(X, y, Xte, model)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Home Credit Default Risk Solution')
parser.add_argument('-input_path', help='Path to input directory') # path to raw files
parser.add_argument('-output_path', help='Path to output directory') # path to working data folder
parser.add_argument('-data_folder', help='Folder name of the dataset') # dataset folder name
# NOTE: argparse's `type=bool` converts any non-empty string (including 'False')
# to True, so plain on/off switches are declared with action='store_true' instead.
parser.add_argument('-p', action='store_true', help='Preprocess')
parser.add_argument('-cv', action='store_true', help='Cross Validation')
parser.add_argument('-cv_predict', action='store_true', help='Cross Validation and Predictions for test set.')
parser.add_argument('-v', type=str, help='Validation')
parser.add_argument('-features', action='store_true', help='Generate Features')
parser.add_argument('-rf_fi', action='store_true', help='Random Forest Classifier Feature Importance')
parser.add_argument('-s', action='store_true', help='Whether to work on a sample or not.')
parser.add_argument('-seed', type=int, help='Random SEED')
parser.add_argument('-cv_seed', type=int, help='CV SEED')
parser.add_argument('-oof', action='store_true', help='OOF preds for training and test set.')
parser.add_argument('-t', action='store_true', help='Full Training Loop.')
parser.add_argument('-bo', action='store_true', help='Hyper-Parameter Tuning using Bayesian Optimization')
parser.add_argument('-ensemble', action='store_true', help='Average out predictions.')
args = parser.parse_args()
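# Example invocations (illustrative; the script name and paths are assumptions):
#   python modelv145.py -features -input_path data/raw/ -output_path data/processed/
#   python modelv145.py -cv -seed 2018 -data_folder v145/ -input_path data/raw/ -output_path data/processed/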
if args.p:
print('Preprocessing ...')
input_path = args.input_path
output_path = args.output_path
params = {
'input_path': input_path,
'output_path': output_path
}
m = Modelv145(**params)
m.preprocess()
elif args.features:
print('Generating features ...')
print()
input_path = args.input_path
output_path = args.output_path
params = {
'input_path': input_path,
'output_path': output_path,
}
m = Modelv145(**params)
m.prepare_features()
elif args.v is not None and len(args.v):
print('Train and generate predictions on a fold')
input_path = args.input_path
output_path = args.output_path
data_folder = args.data_folder
fold_indicator = args.v
is_sample = args.s
cv_seed = args.cv_seed
SEED = int(args.seed)
print('*' * 100)
print('SEED FOUND: {}'.format(SEED))
params = {
'input_path': input_path,
'output_path': output_path,
'data_folder': data_folder
}
PARAMS = joblib.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{cv_seed}_params.pkl'))
# Set seed to Params
PARAMS['seed'] = SEED
PARAMS['feature_fraction_seed'] = SEED
PARAMS['bagging_seed'] = SEED
PARAMS['early_stopping_rounds'] = None # explicitly make it None
print('*' * 100)
print('PARAMS: {}'.format(PARAMS))
m = Modelv145(**params)
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}data.h5')):
print('Loading dataset from disk ...')
data = pd.read_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
else:
print('Merge feature groups and save them to disk ...')
train, test = m.merge_datasets()
train, test = m.fe(train, test, compute_categorical='ohe')
data = pd.concat((train, test))
data = m.reduce_mem_usage(data)
data.to_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
del train, test
gc.collect()
# ite = pd.read_csv(os.path.join(basepath, input_path + 'cv_adversarial_idx_v1.csv'), usecols=[fold_indicator])[fold_indicator].values
ite = pd.read_csv(os.path.join(basepath, input_path + 'cv_idx_test_stratified.csv'), usecols=[fold_indicator])[fold_indicator].values
print('Shape of fold indices ', len(ite))
itr = np.array(list(set(data.iloc[:m.n_train].index) - set(ite)))
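# `ite` holds the holdout row ids for the requested fold; the remaining rows of
# the training portion (the first n_train rows of `data`) form the train split.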
train = data.loc[data.index.isin(itr)]
test = data.loc[data.index.isin(ite)]
del data
gc.collect()
if is_sample:
print('*' * 100)
print('Take a random sample of the training data ...')
train = train.sample(frac=SAMPLE_SIZE)
# check to see if feature list exists on disk or not for a particular model
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy')):
feature_list = np.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'))
else:
feature_list = train.columns.tolist()
feature_list = list(set(feature_list) - set(COLS_TO_REMOVE))
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'), feature_list)
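# Persisting the feature list pins every later stage (cv, cv_predict, oof,
# full training) to exactly the same feature set.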
# print features with null percentage
print('Top-5 features with highest percentage of null values ...\n')
print((train.loc[:, feature_list].isnull().sum() / len(train)).sort_values(ascending=False).iloc[:5])
# print number of features explored in the experiment
print('*' * 100)
print('Number of features: {}'.format(len(feature_list)))
print('*' * 100)
model_identifier = f'{data_folder}{MODEL_FILENAME}_{fold_indicator}_{SEED}'
if os.path.exists(os.path.join(basepath, output_path + f'{model_identifier}_model.txt')):
print('Loading model from disk ...')
model = lgb.Booster(model_file=os.path.join(basepath, output_path + f'{model_identifier}_model.txt'))
yhold = test.TARGET
hold_preds = np.array(model.predict(test.loc[:, feature_list]))
print('AUC score: {}'.format(roc_auc_score(yhold, hold_preds)))
else:
print('Saving model to disk ...')
# train model
model, feat_df = m.train(train, test, feature_list, is_eval=True, **PARAMS)
if not is_sample:
model.save_model(os.path.join(basepath, output_path + f'{model_identifier}_model.txt'))
if not os.path.exists(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}{fold_indicator}_true_holdout.npy')):
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}{fold_indicator}_true_holdout.npy'), test.TARGET)
hold_preds = model.predict(test.loc[:, feature_list])
np.save(os.path.join(basepath, output_path + f'{model_identifier}_preds_holdout.npy'), hold_preds)
feat_df.to_csv(os.path.join(basepath, output_path + f'{model_identifier}_feat_imp.csv'), index=False)
elif args.cv:
print('Cross validation on training and store parameters and cv score on disk ...')
input_path = args.input_path
output_path = args.output_path
data_folder = args.data_folder
is_sample = args.s
SEED = args.seed
params = {
'input_path': input_path,
'output_path': output_path,
'data_folder': data_folder
}
m = Modelv145(**params)
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}data.h5')):
print('Loading dataset from disk ...')
data = pd.read_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
else:
print('Merge feature groups and save them to disk ...')
train, test = m.merge_datasets()
train, test = m.fe(train, test, compute_categorical='ohe')
data = pd.concat((train, test))
data = m.reduce_mem_usage(data)
data.to_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
del train, test
gc.collect()
train = data.iloc[:m.n_train]
del data
gc.collect()
if is_sample:
print('*' * 100)
print('Take a random sample of the training data ...')
train = train.sample(frac=SAMPLE_SIZE)
# check to see if feature list exists on disk or not for a particular model
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy')):
feature_list = np.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'))
else:
feature_list = train.columns.tolist()
feature_list = list(set(feature_list) - set(COLS_TO_REMOVE))
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'), feature_list)
PARAMS['seed'] = SEED
PARAMS['feature_fraction_seed'] = SEED
PARAMS['bagging_seed'] = SEED
cv_adversarial_filepath = os.path.join(basepath, 'data/raw/cv_idx_test_stratified.csv')
# cv_adversarial_filepath = None
cv_history = m.cross_validate(train, feature_list, PARAMS.copy(), cv_adversarial_filepath)
cv_score = str(cv_history.iloc[-1]['auc-mean']) + '_' + str(cv_history.iloc[-1]['auc-stdv'])
PARAMS['num_boost_round'] = len(cv_history)
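# cv_score is packed as "{auc mean}_{auc stdv}" from the final boosting round,
# and num_boost_round is fixed to the round count the CV run settled on.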
print('*' * 100)
print('Best AUC: {}'.format(cv_score))
joblib.dump(PARAMS, os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{SEED}_params.pkl'))
joblib.dump(cv_score, os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{SEED}_cv.pkl'))
elif args.cv_predict:
print('Cross validate with different seeds and produce a submission for the test set ...')
input_path = args.input_path
output_path = args.output_path
data_folder = args.data_folder
is_sample = args.s
SEED = args.seed
CV_SEED = args.cv_seed
params = {
'input_path': input_path,
'output_path': output_path,
'data_folder': data_folder
}
m = Modelv145(**params)
# Loading data
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}data.h5')):
print('Loading dataset from disk ...')
data = pd.read_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
else:
print('Merge feature groups and save them to disk ...')
train, test = m.merge_datasets()
train, test = m.fe(train, test, compute_categorical='ohe')
data = pd.concat((train, test))
data = m.reduce_mem_usage(data)
data.to_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
del train, test
gc.collect()
train = data.iloc[:m.n_train]
test = data.iloc[m.n_train:]
del data
gc.collect()
# Generating a sample if required
if is_sample:
print('*' * 100)
print('Take a random sample of the training data ...')
train = train.sample(frac=SAMPLE_SIZE)
# check to see if feature list exists on disk or not for a particular model
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy')):
feature_list = np.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'))
else:
feature_list = train.columns.tolist()
feature_list = list(set(feature_list) - set(COLS_TO_REMOVE))
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'), feature_list)
PARAMS['seed'] = SEED
PARAMS['feature_fraction_seed'] = SEED
PARAMS['bagging_seed'] = SEED
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_test_preds.npy')):
oof_train_preds = np.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_oof_train_preds.npy'))
test_preds = np.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_test_preds.npy'))
test_preds_final = np.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_test_preds_final.npy'))
auc = joblib.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_oof_auc.pkl'))
else:
save_path = os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}')
auc, oof_train_preds, test_preds, test_preds_final = m.predict_test(train, test, feature_list, PARAMS.copy(), save_path)
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_oof_train_preds.npy'), oof_train_preds)
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_test_preds.npy'), test_preds)
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_test_preds_final.npy'), test_preds_final)
joblib.dump(auc, os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_oof_auc.pkl'))
sub_identifier = "%s-%s-%s-%s-%s" % (datetime.now().strftime('%Y%m%d-%H%M'), MODEL_FILENAME, auc, SEED, data_folder[:-1])
# generate for test set
sub = pd.read_csv(os.path.join(basepath, 'data/raw/sample_submission.csv.zip'))
sub['TARGET'] = test_preds_final
sub.to_csv(os.path.join(basepath, 'submissions/%s.csv'%(sub_identifier)), index=False)
elif args.oof:
print('Generate oof predictions for train and test set ...')
input_path = args.input_path
output_path = args.output_path
data_folder = args.data_folder
SEED = args.seed
params = {
'input_path': input_path,
'output_path': output_path,
'data_folder': data_folder
}
m = Modelv145(**params)
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}data.h5')):
print('Loading dataset from disk ...')
data = pd.read_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
else:
print('Merge feature groups and save them to disk ...')
train, test = m.merge_datasets()
train, test = m.fe(train, test)
data = pd.concat((train, test))
data = m.reduce_mem_usage(data)
data.to_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
del train, test
gc.collect()
train = data.iloc[:m.n_train]
test = data.iloc[m.n_train:]
del data
gc.collect()
# check to see if feature list exists on disk or not for a particular model
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy')):
feature_list = np.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'))
else:
feature_list = train.columns.tolist()
feature_list = list(set(feature_list) - set(COLS_TO_REMOVE))
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'), feature_list)
PARAMS = joblib.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{SEED}_params.pkl'))
# model construction
model = lgb.LGBMClassifier(num_leaves=PARAMS['num_leaves'],
max_depth=PARAMS['max_depth'],
learning_rate=PARAMS['learning_rate'],
n_estimators=PARAMS['num_boost_round'],
objective=PARAMS['objective'],
min_child_weight=PARAMS['min_child_weight'],
min_child_samples=PARAMS['min_data_in_leaf'],
subsample=PARAMS['bagging_fraction'],
colsample_bytree=PARAMS['sub_feature'],
reg_lambda=PARAMS['reg_lambda'],
random_state=SEED,
verbose=-1,
n_jobs=8
)
oof_preds, test_preds = m.get_oof_preds(train, test, feature_list, model)
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{SEED}_oof_preds.npy'), oof_preds)
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{SEED}_test.npy'), test_preds)
elif args.t:
print('Full Training')
input_path = args.input_path
output_path = args.output_path
data_folder = args.data_folder
CV_SEED = args.cv_seed
SEED = args.seed
params = {
'input_path': input_path,
'output_path': output_path,
'data_folder': data_folder
}
m = Modelv145(**params)
# Load or save data from/ on disk
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}data.h5')):
print('Loading dataset from disk ...')
data = pd.read_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
else:
print('Merge feature groups and save them to disk ...')
train, test = m.merge_datasets()
train, test = m.fe(train, test)
data = pd.concat((train, test))
data = m.reduce_mem_usage(data)
data.to_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
del train, test
gc.collect()
# separate out training and test set.
train = data.iloc[:m.n_train]
test = data.iloc[m.n_train:]
# check to see if feature list exists on disk or not for a particular model
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy')):
feature_list = np.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'))
else:
feature_list = train.columns.tolist()
feature_list = list(set(feature_list) - set(COLS_TO_REMOVE))
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'), feature_list)
# Load params and holdout score from disk.
PARAMS = joblib.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_params.pkl'))
HOLDOUT_SCORE = joblib.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_{CV_SEED}_cv.pkl'))
PARAMS['num_boost_round'] = int(1.1 * PARAMS['num_boost_round'])
PARAMS['learning_rate'] /= 1.1
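# Refit heuristic: without a holdout set for early stopping, train ~10% more
# rounds at a ~10% smaller learning rate than the CV run selected.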
PARAMS['seed'] = SEED
PARAMS['feature_fraction_seed'] = SEED
PARAMS['bagging_seed'] = SEED
print('*' * 100)
print('PARAMS are: {}'.format(PARAMS))
# train model
model, feat_df = m.train(train, test, feature_list, is_eval=False, **PARAMS)
# evaluation part
preds, score = m.evaluate(test, feature_list, is_eval=False, model=model)
sub_identifier = "%s-%s-%s-%s-%s" % (datetime.now().strftime('%Y%m%d-%H%M'), MODEL_FILENAME, HOLDOUT_SCORE, SEED, data_folder[:-1])
sub = pd.read_csv(os.path.join(basepath, 'data/raw/sample_submission.csv.zip'))
sub['TARGET'] = preds
sub.to_csv(os.path.join(basepath, 'submissions/%s.csv'%(sub_identifier)), index=False)
elif args.rf_fi:
print('Train a random forest classifier and save feature importance on disk ...')
input_path = args.input_path
output_path = args.output_path
data_folder = args.data_folder
SEED = args.seed
params = {
'input_path': input_path,
'output_path': output_path,
'data_folder': data_folder
}
m = Modelv145(**params)
# Loading data
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}data.h5')):
print('Loading dataset from disk ...')
data = pd.read_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
else:
print('Merge feature groups and save them to disk ...')
train, test = m.merge_datasets()
train, test = m.fe(train, test)
data = pd.concat((train, test))
data = m.reduce_mem_usage(data)
data.to_hdf(os.path.join(basepath, output_path + f'{data_folder}data.h5'), format='table', key='data')
del train, test
gc.collect()
train = data.iloc[:m.n_train]
del data
gc.collect()
# check to see if feature list exists on disk or not for a particular model
if os.path.exists(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy')):
feature_list = np.load(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'))
else:
feature_list = train.columns.tolist()
feature_list = list(set(feature_list) - set(COLS_TO_REMOVE))
np.save(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_features.npy'), feature_list)
feat_df = m.rf_fi(train, feature_list, SEED)
feat_df.to_csv(os.path.join(basepath, output_path + f'{data_folder}{MODEL_FILENAME}_rf_fi.csv'), index=False) | 51.524248 | 178 | 0.609957 | 19,974 | 167,866 | 4.724792 | 0.039802 | 0.071885 | 0.089856 | 0.103335 | 0.90812 | 0.897799 | 0.891611 | 0.880644 | 0.87471 | 0.869274 | 0 | 0.023631 | 0.274499 | 167,866 | 3,258 | 179 | 51.524248 | 0.751271 | 0.028183 | 0 | 0.75191 | 0 | 0 | 0.363805 | 0.221611 | 0 | 0 | 0 | 0.000307 | 0 | 1 | 0.008042 | false | 0 | 0.005629 | 0.001206 | 0.020909 | 0.055489 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dbfb118d317164941e1568ba738761b1e4c02f83 | 332 | py | Python | dockerman/__init__.py | synic/dockerman | 0d743c46dd8a4fc33215a98c3ba169c884ca73fe | [
"MIT"
] | 1 | 2021-09-08T18:19:16.000Z | 2021-09-08T18:19:16.000Z | dockerman/__init__.py | synic/dockerman | 0d743c46dd8a4fc33215a98c3ba169c884ca73fe | [
"MIT"
] | null | null | null | dockerman/__init__.py | synic/dockerman | 0d743c46dd8a4fc33215a98c3ba169c884ca73fe | [
"MIT"
] | null | null | null | __all__ = [
"command",
"crun",
"file",
"log",
"logcmd",
"help",
"info",
"warning",
"error",
"main",
"option",
"run",
]
from .dockerman import ( # noqa
command,
crun,
file,
log,
logcmd,
help,
info,
warning,
error,
main,
option,
run,
)
| 11.066667 | 32 | 0.433735 | 29 | 332 | 4.827586 | 0.586207 | 0.157143 | 0.214286 | 0.257143 | 0.814286 | 0.814286 | 0.814286 | 0.814286 | 0.814286 | 0.814286 | 0 | 0 | 0.400602 | 332 | 29 | 33 | 11.448276 | 0.703518 | 0.012048 | 0 | 0 | 0 | 0 | 0.174847 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035714 | 0 | 0.035714 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
e00f5821b40227422c699d7c5e5f045f631db77d | 308 | py | Python | semester-6/Python Practice/numpyPractice/program4.py | saranshbht/bsc-codes | 7386c09cc986de9c84947f7dea7db3dc42219a35 | [
"MIT"
] | 3 | 2021-03-22T12:07:14.000Z | 2021-08-30T17:28:23.000Z | semester-6/Python Practice/numpyPractice/program4.py | saranshbht/bsc-codes | 7386c09cc986de9c84947f7dea7db3dc42219a35 | [
"MIT"
] | null | null | null | semester-6/Python Practice/numpyPractice/program4.py | saranshbht/bsc-codes | 7386c09cc986de9c84947f7dea7db3dc42219a35 | [
"MIT"
] | null | null | null | import numpy as np
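# Demonstrates np.any: it reduces an array with a truthy OR, returning True
# if at least one element is non-zero.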
x = np.array([1, 0, 0, 0])
print("Original array:")
print(x)
print("Test if any of the elements of a given array is non-zero:")
print(np.any(x))
x = np.array([0, 0, 0, 0])
print("Original array:")
print(x)
print("Test if any of the elements of a given array is non-zero:")
print(np.any(x)) | 28 | 66 | 0.672078 | 64 | 308 | 3.234375 | 0.328125 | 0.048309 | 0.043478 | 0.077295 | 0.84058 | 0.84058 | 0.84058 | 0.84058 | 0.84058 | 0.84058 | 0 | 0.030651 | 0.152597 | 308 | 11 | 67 | 28 | 0.762452 | 0 | 0 | 0.727273 | 0 | 0 | 0.466019 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.727273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
e079d69b13f1500cff5ec4713f1ac97bd4455f7c | 1,494 | py | Python | toolkit/tests/emails/backends/test_ses.py | pythonitalia/pycon | 14e03b2158916f9437fdbde70e48e5bf5266997e | [
"MIT"
] | 56 | 2018-01-20T17:18:40.000Z | 2022-03-28T22:42:04.000Z | toolkit/tests/emails/backends/test_ses.py | pythonitalia/pycon | 14e03b2158916f9437fdbde70e48e5bf5266997e | [
"MIT"
] | 2,029 | 2018-01-20T11:37:24.000Z | 2022-03-31T04:10:51.000Z | toolkit/tests/emails/backends/test_ses.py | pythonitalia/pycon | 14e03b2158916f9437fdbde70e48e5bf5266997e | [
"MIT"
] | 17 | 2018-03-17T09:44:28.000Z | 2021-12-27T19:57:35.000Z | from unittest.mock import patch
from pythonit_toolkit.emails.backends.ses import SESEmailBackend
from pythonit_toolkit.emails.templates import EmailTemplate
from ward import test
@test("send email via ses")
async def _():
with patch("pythonit_toolkit.emails.backends.ses.boto3") as mock_boto:
SESEmailBackend("production").send_email(
template=EmailTemplate.RESET_PASSWORD,
subject="Subject",
from_="test@email.it",
to="destination@email.it",
variables={"a": "b", "c": "d"},
)
mock_boto.client.return_value.send_templated_email.assert_called_once_with(
Source="test@email.it",
Destination={"ToAddresses": ["destination@email.it"]},
Template="pythonit-production-reset-password",
TemplateData='{"subject": "Subject", "a": "b", "c": "d"}',
)
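# Both tests patch boto3 at the path the backend imports it from and assert on
# the kwargs passed to send_templated_email; the template name is expected to
# be namespaced as "pythonit-{environment}-{template name}".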
@test("send email without variables")
async def _():
with patch("pythonit_toolkit.emails.backends.ses.boto3") as mock_boto:
SESEmailBackend("production").send_email(
template=EmailTemplate.RESET_PASSWORD,
subject="Subject",
from_="test@email.it",
to="destination@email.it",
)
mock_boto.client.return_value.send_templated_email.assert_called_once_with(
Source="test@email.it",
Destination={"ToAddresses": ["destination@email.it"]},
Template="pythonit-production-reset-password",
TemplateData='{"subject": "Subject"}',
)
| 34.744186 | 79 | 0.655957 | 163 | 1,494 | 5.840491 | 0.294479 | 0.058824 | 0.088235 | 0.091387 | 0.802521 | 0.768908 | 0.768908 | 0.768908 | 0.768908 | 0.768908 | 0 | 0.001688 | 0.206827 | 1,494 | 42 | 80 | 35.571429 | 0.801688 | 0 | 0 | 0.628571 | 0 | 0 | 0.303882 | 0.10174 | 0 | 0 | 0 | 0 | 0.057143 | 1 | 0 | true | 0.114286 | 0.114286 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
0ec5c975dacbf607200dfe509c09ca806d04f1e1 | 12,739 | py | Python | openmdao/components/tests/test_linear_system_comp.py | friedenhe/OpenMDAO | db1d7e22a8bf9f66afa82ec3544b7244d5545f6d | [
"Apache-2.0"
] | 451 | 2015-07-20T11:52:35.000Z | 2022-03-28T08:04:56.000Z | openmdao/components/tests/test_linear_system_comp.py | friedenhe/OpenMDAO | db1d7e22a8bf9f66afa82ec3544b7244d5545f6d | [
"Apache-2.0"
] | 1,096 | 2015-07-21T03:08:26.000Z | 2022-03-31T11:59:17.000Z | openmdao/components/tests/test_linear_system_comp.py | friedenhe/OpenMDAO | db1d7e22a8bf9f66afa82ec3544b7244d5545f6d | [
"Apache-2.0"
] | 301 | 2015-07-16T20:02:11.000Z | 2022-03-28T08:04:39.000Z | """Test the LinearSystemComp."""
import unittest
import numpy as np
import openmdao.api as om
from openmdao.utils.assert_utils import assert_near_equal
class TestLinearSystemComp(unittest.TestCase):
"""Test the LinearSystemComp class with a 3x3 linear system."""
def test_basic(self):
"""Check against the scipy solver."""
model = om.Group()
x = np.array([1, 2, -3])
A = np.array([[5.0, -3.0, 2.0], [1.0, 7.0, -4.0], [1.0, 0.0, 8.0]])
b = A.dot(x)
model.add_subsystem('p1', om.IndepVarComp('A', A))
model.add_subsystem('p2', om.IndepVarComp('b', b))
lingrp = model.add_subsystem('lingrp', om.Group(), promotes=['*'])
lingrp.add_subsystem('lin', om.LinearSystemComp(size=3))
model.connect('p1.A', 'lin.A')
model.connect('p2.b', 'lin.b')
prob = om.Problem(model)
prob.setup()
lingrp.linear_solver = om.ScipyKrylov()
prob.set_solver_print(level=0)
prob.run_model()
assert_near_equal(prob['lin.x'], x, .0001)
assert_near_equal(prob.model._residuals.get_norm(), 0.0, 1e-10)
model.run_apply_nonlinear()
with model._scaled_context_all():
val = model.lingrp.lin._residuals['x']
assert_near_equal(val, np.zeros((3, )), tolerance=1e-8)
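# run_apply_nonlinear evaluates the residual R = A.dot(x) - b at the converged
# solution, so the scaled residual on 'x' should be numerically zero.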
def test_vectorized(self):
"""Check against the scipy solver."""
model = om.Group()
x = np.array([[1, 2, -3], [2, -1, 4]])
A = np.array([[5.0, -3.0, 2.0], [1.0, 7.0, -4.0], [1.0, 0.0, 8.0]])
b = np.einsum('jk,ik->ij', A, x)
model.add_subsystem('p1', om.IndepVarComp('A', A))
model.add_subsystem('p2', om.IndepVarComp('b', b))
lingrp = model.add_subsystem('lingrp', om.Group(), promotes=['*'])
lingrp.add_subsystem('lin', om.LinearSystemComp(size=3, vec_size=2))
model.connect('p1.A', 'lin.A')
model.connect('p2.b', 'lin.b')
prob = om.Problem(model)
prob.setup()
lingrp.linear_solver = om.ScipyKrylov()
prob.set_solver_print(level=0)
prob.run_model()
assert_near_equal(prob['lin.x'], x, .0001)
assert_near_equal(prob.model._residuals.get_norm(), 0.0, 1e-10)
model.run_apply_nonlinear()
with model._scaled_context_all():
val = model.lingrp.lin._residuals['x']
assert_near_equal(val, np.zeros((2, 3)), tolerance=1e-8)
def test_vectorized_A(self):
"""Check against the scipy solver."""
model = om.Group()
x = np.array([[1, 2, -3], [2, -1, 4]])
A = np.array([[[5.0, -3.0, 2.0], [1.0, 7.0, -4.0], [1.0, 0.0, 8.0]],
[[2.0, 3.0, 4.0], [1.0, -1.0, -2.0], [3.0, 2.0, -2.0]]])
b = np.einsum('ijk,ik->ij', A, x)
model.add_subsystem('p1', om.IndepVarComp('A', A))
model.add_subsystem('p2', om.IndepVarComp('b', b))
lingrp = model.add_subsystem('lingrp', om.Group(), promotes=['*'])
lingrp.add_subsystem('lin', om.LinearSystemComp(size=3, vec_size=2, vectorize_A=True))
model.connect('p1.A', 'lin.A')
model.connect('p2.b', 'lin.b')
prob = om.Problem(model)
prob.setup()
lingrp.linear_solver = om.ScipyKrylov()
prob.set_solver_print(level=0)
prob.run_model()
assert_near_equal(prob['lin.x'], x, .0001)
assert_near_equal(prob.model._residuals.get_norm(), 0.0, 1e-10)
model.run_apply_nonlinear()
with model._scaled_context_all():
val = model.lingrp.lin._residuals['x']
assert_near_equal(val, np.zeros((2, 3)), tolerance=1e-8)
    def test_solve_linear(self):
        """Check against solve_linear."""
        x = np.array([1, 2, -3])
        A = np.array([[1., 1., 1.], [1., 2., 3.], [0., 1., 3.]])
        b = A.dot(x)
        b_T = A.T.dot(x)

        lin_sys_comp = om.LinearSystemComp(size=3)

        prob = om.Problem()

        prob.model.add_subsystem('p1', om.IndepVarComp('A', A))
        prob.model.add_subsystem('p2', om.IndepVarComp('b', b))
        lingrp = prob.model.add_subsystem('lingrp', om.Group(), promotes=['*'])
        lingrp.add_subsystem('lin', lin_sys_comp)

        prob.model.connect('p1.A', 'lin.A')
        prob.model.connect('p2.b', 'lin.b')

        prob.setup()
        prob.set_solver_print(level=0)

        prob.run_model()
        prob.model.run_linearize()

        # Compare against calculated derivs
        Ainv = np.array([[3., -2., 1.],
                         [-3., 3., -2.],
                         [1., -1., 1.]])

        dx_dA = np.outer(Ainv, -x).reshape(3, 9)
        dx_db = Ainv

        d_inputs, d_outputs, d_residuals = lingrp.get_linear_vectors()

        # Forward mode with RHS of self.b
        d_residuals['lin.x'] = b
        lingrp.run_solve_linear('fwd')
        sol = d_outputs['lin.x']
        assert_near_equal(sol, x, .0001)

        # Reverse mode with RHS of self.b_T
        d_outputs['lin.x'] = b_T
        lingrp.run_solve_linear('rev')
        sol = d_residuals['lin.x']
        assert_near_equal(sol, x, .0001)

        prob.model.lingrp.lin._no_check_partials = False  # override skipping of check_partials

        J = prob.compute_totals(['lin.x'], ['p1.A', 'p2.b', 'lin.x'], return_format='flat_dict')
        assert_near_equal(J['lin.x', 'p1.A'], dx_dA, .0001)
        assert_near_equal(J['lin.x', 'p2.b'], dx_db, .0001)

        data = prob.check_partials(out_stream=None)

        abs_errors = data['lingrp.lin'][('x', 'x')]['abs error']
        self.assertTrue(len(abs_errors) > 0)
        self.assertTrue(abs_errors[0] < 1.e-6)
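    # Derivation note for the hand-built Jacobians (added commentary): from
    # x = A^-1 b it follows that dx/db = A^-1, and differentiating A.x = b
    # gives dx_i/dA_jk = -(A^-1)_ij * x_k, which is exactly what
    # np.outer(Ainv, -x).reshape(3, 9) lays out over the nine flattened
    # entries of A.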
    def test_solve_linear_vectorized(self):
        """Check against solve_linear."""
        x = np.array([[1, 2, -3], [2, -1, 4]])
        A = np.array([[1., 1., 1.], [1., 2., 3.], [0., 1., 3.]])
        b = np.einsum('jk,ik->ij', A, x)
        b_T = np.einsum('jk,ik->ij', A.T, x)

        lin_sys_comp = om.LinearSystemComp(size=3, vec_size=2)

        prob = om.Problem()

        prob.model.add_subsystem('p1', om.IndepVarComp('A', A))
        prob.model.add_subsystem('p2', om.IndepVarComp('b', b))
        lingrp = prob.model.add_subsystem('lingrp', om.Group(), promotes=['*'])
        lingrp.add_subsystem('lin', lin_sys_comp)

        prob.model.connect('p1.A', 'lin.A')
        prob.model.connect('p2.b', 'lin.b')

        prob.setup()
        prob.set_solver_print(level=0)

        prob.run_model()
        prob.model.run_linearize()

        # Compare against calculated derivs
        Ainv = np.array([[3., -2., 1.],
                         [-3., 3., -2.],
                         [1., -1., 1.]])

        dx_dA0 = np.outer(Ainv, -x[0]).reshape(3, 9)
        dx_dA1 = np.outer(Ainv, -x[1]).reshape(3, 9)
        dx_dA = np.vstack((dx_dA0, dx_dA1))
        dx_db = np.kron(np.eye(2), Ainv)

        d_inputs, d_outputs, d_residuals = lingrp.get_linear_vectors()

        # Forward mode with RHS of self.b
        d_residuals['lin.x'] = b
        lingrp.run_solve_linear('fwd')
        sol = d_outputs['lin.x']
        assert_near_equal(sol, x, .0001)

        # Reverse mode with RHS of self.b_T
        d_outputs['lin.x'] = b_T
        lingrp.run_solve_linear('rev')
        sol = d_residuals['lin.x']
        assert_near_equal(sol, x, .0001)

        prob.model.lingrp.lin._no_check_partials = False  # override skipping of check_partials

        J = prob.compute_totals(['lin.x'], ['p1.A', 'p2.b'], return_format='flat_dict')
        assert_near_equal(J['lin.x', 'p1.A'], dx_dA, .0001)
        assert_near_equal(J['lin.x', 'p2.b'], dx_db, .0001)

        data = prob.check_partials(out_stream=None)

        abs_errors = data['lingrp.lin'][('x', 'x')]['abs error']
        self.assertTrue(len(abs_errors) > 0)
        self.assertTrue(abs_errors[0] < 1.e-6)
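    # Added commentary: np.kron(np.eye(2), Ainv) assembles the 6x6
    # block-diagonal dx/db. Each vectorized right-hand side only affects its
    # own solution row, so the Jacobian is one Ainv block per system with
    # zeros off the diagonal.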
    def test_solve_linear_vectorized_A(self):
        """Check against solve_linear."""
        x = np.array([[1, 2, -3], [2, -1, 4]])
        A = np.array([[[1., 1., 1.], [1., 2., 3.], [0., 1., 3.]],
                      [[2.0, 3.0, 4.0], [1.0, -1.0, -2.0], [3.0, 2.0, -2.0]]])
        b = np.einsum('ijk,ik->ij', A, x)
        b_T = np.einsum('ijk,ik->ij', A.transpose(0, 2, 1), x)

        lin_sys_comp = om.LinearSystemComp(size=3, vec_size=2, vectorize_A=True)

        prob = om.Problem()

        prob.model.add_subsystem('p1', om.IndepVarComp('A', A))
        prob.model.add_subsystem('p2', om.IndepVarComp('b', b))
        lingrp = prob.model.add_subsystem('lingrp', om.Group(), promotes=['*'])
        lingrp.add_subsystem('lin', lin_sys_comp)

        prob.model.connect('p1.A', 'lin.A')
        prob.model.connect('p2.b', 'lin.b')

        prob.setup()
        prob.set_solver_print(level=0)

        prob.run_model()
        prob.model.run_linearize()

        # Compare against calculated derivs
        Ainv1 = np.array([[3., -2., 1.],
                          [-3., 3., -2.],
                          [1., -1., 1.]])
        Ainv2 = np.array([[ 0.3 ,  0.7 , -0.1 ],
                          [-0.2 , -0.8 ,  0.4 ],
                          [ 0.25,  0.25, -0.25]])

        dx_dA0 = np.outer(Ainv1, -x[0]).reshape(3, 9)
        dx_dA1 = np.outer(Ainv2, -x[1]).reshape(3, 9)
        dx_dA = np.zeros((6, 18))
        dx_dA[:3, :9] = dx_dA0
        dx_dA[3:, 9:] = dx_dA1

        dx_db = np.zeros((6, 6))
        dx_db[:3, :3] = Ainv1
        dx_db[3:, 3:] = Ainv2

        d_inputs, d_outputs, d_residuals = lingrp.get_linear_vectors()

        # Forward mode with RHS of self.b
        d_residuals['lin.x'] = b
        lingrp.run_solve_linear('fwd')
        sol = d_outputs['lin.x']
        assert_near_equal(sol, x, .0001)

        # Reverse mode with RHS of self.b_T
        d_outputs['lin.x'] = b_T
        lingrp.run_solve_linear('rev')
        sol = d_residuals['lin.x']
        assert_near_equal(sol, x, .0001)

        prob.model.lingrp.lin._no_check_partials = False  # override skipping of check_partials

        J = prob.compute_totals(['lin.x'], ['p1.A', 'p2.b'], return_format='flat_dict')
        assert_near_equal(J['lin.x', 'p1.A'], dx_dA, .0001)
        assert_near_equal(J['lin.x', 'p2.b'], dx_db, .0001)

        data = prob.check_partials(out_stream=None)

        abs_errors = data['lingrp.lin'][('x', 'x')]['abs error']
        self.assertTrue(len(abs_errors) > 0)
        self.assertTrue(abs_errors[0] < 1.e-6)
    def test_feature_basic(self):
        model = om.Group()

        A = np.array([[5.0, -3.0, 2.0], [1.0, 7.0, -4.0], [1.0, 0.0, 8.0]])
        b = np.array([1.0, 2.0, -3.0])

        lingrp = model.add_subsystem('lingrp', om.Group(), promotes=['*'])
        lingrp.add_subsystem('lin', om.LinearSystemComp(size=3))

        prob = om.Problem(model)
        prob.setup()

        prob.set_val('lin.A', A)
        prob.set_val('lin.b', b)

        lingrp.linear_solver = om.ScipyKrylov()

        prob.run_model()

        assert_near_equal(prob.get_val('lin.x'), np.array([0.36423841, -0.00662252, -0.4205298 ]), .0001)
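    # Reference check (added, not part of the original test): the expected
    # vector can be reproduced directly with numpy:
    #
    #     np.linalg.solve(A, b)
    #     # -> array([ 0.36423841, -0.00662252, -0.4205298 ])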
    def test_feature_vectorized(self):
        model = om.Group()

        A = np.array([[5.0, -3.0, 2.0], [1.0, 7.0, -4.0], [1.0, 0.0, 8.0]])
        b = np.array([[2.0, -3.0, 4.0], [1.0, 0.0, -1.0]])

        lingrp = model.add_subsystem('lingrp', om.Group(), promotes=['*'])
        lingrp.add_subsystem('lin', om.LinearSystemComp(size=3, vec_size=2))

        prob = om.Problem(model)
        prob.setup()

        prob.set_val('lin.A', A)
        prob.set_val('lin.b', b)

        lingrp.linear_solver = om.ScipyKrylov()

        prob.run_model()

        assert_near_equal(prob.get_val('lin.x'), np.array([[ 0.10596026, -0.16556291,  0.48675497],
                                                           [ 0.19205298, -0.11258278, -0.14900662]]),
                          .0001)
    def test_feature_vectorized_A(self):
        model = om.Group()

        A = np.array([[[5.0, -3.0, 2.0], [1.0, 7.0, -4.0], [1.0, 0.0, 8.0]],
                      [[2.0, 3.0, 4.0], [1.0, -1.0, -2.0], [3.0, 2.0, -2.0]]])
        b = np.array([[-5.0, 2.0, 3.0], [-1.0, 1.0, -3.0]])

        lingrp = model.add_subsystem('lingrp', om.Group(), promotes=['*'])
        lingrp.add_subsystem('lin', om.LinearSystemComp(size=3, vec_size=2, vectorize_A=True))

        prob = om.Problem(model)
        prob.setup()

        prob.set_val('lin.A', A)
        prob.set_val('lin.b', b)

        lingrp.linear_solver = om.ScipyKrylov()

        prob.run_model()

        assert_near_equal(prob.get_val('lin.x'), np.array([[-0.78807947,  0.66887417,  0.47350993],
                                                           [ 0.7       , -1.8       ,  0.75      ]]),
                          .0001)
if __name__ == "__main__":
    unittest.main()
| 32.748072 | 105 | 0.539446 | 1,883 | 12,739 | 3.488051 | 0.086564 | 0.018879 | 0.057095 | 0.00609 | 0.910932 | 0.894641 | 0.8919 | 0.885505 | 0.867235 | 0.860231 | 0 | 0.067473 | 0.272863 | 12,739 | 388 | 106 | 32.832474 | 0.641585 | 0.05283 | 0 | 0.793522 | 0 | 0 | 0.052272 | 0 | 0 | 0 | 0 | 0 | 0.125506 | 1 | 0.036437 | false | 0 | 0.016194 | 0 | 0.05668 | 0.024292 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0ee8e59f8c0f718613ea2f85d0bb6158e675f1f6 | 22,585 | py | Python | tests/storage/integration/data/simulation_log.py | AntaresSimulatorTeam/antaREST | d686d2a86a52737c211ae67f3cee591f559909f2 | [
"Apache-2.0"
] | 2 | 2021-11-15T09:26:33.000Z | 2022-02-24T09:53:54.000Z | tests/storage/integration/data/simulation_log.py | AntaresSimulatorTeam/antaREST | d686d2a86a52737c211ae67f3cee591f559909f2 | [
"Apache-2.0"
] | 542 | 2021-01-11T13:23:47.000Z | 2022-03-31T15:38:10.000Z | tests/storage/integration/data/simulation_log.py | AntaresSimulatorTeam/antaREST | d686d2a86a52737c211ae67f3cee591f559909f2 | [
"Apache-2.0"
] | 1 | 2020-10-01T12:18:15.000Z | 2020-10-01T12:18:15.000Z | simulation_log = """[Wed Oct 14 14:25:04 2020][solver][check] Antares Solver v7.0.0 (RTE France)
[Wed Oct 14 14:25:04 2020][solver][infos] :: built for 64-bit architectures, Microsoft Windows, 8 cpu(s)
[Wed Oct 14 14:25:04 2020][solver][infos] :: hostname = GROESNWP7
[Wed Oct 14 14:25:04 2020][solver][infos]
[Wed Oct 14 14:25:04 2020][solver][infos] :: from C:\\Program Files\\RTE\\Antares\\7.0.0\\bin
[Wed Oct 14 14:25:04 2020][solver][infos] :: log filename: D:\\Users\\andrsgat\\Documents\\TESTI-ALTRI\\API-OCTO\\STA-mini\\logs\\solver-20201014-142504.log
[Wed Oct 14 14:25:04 2020][solver][infos]
[Wed Oct 14 14:25:04 2020][solver][notic] Preparing STA-mini...
[Wed Oct 14 14:25:04 2020][solver][infos] detected version: 700
[Wed Oct 14 14:25:04 2020][solver][infos] from `D:\\Users\\andrsgat\\Documents\\TESTI-ALTRI\\API-OCTO\\STA-mini`
[Wed Oct 14 14:25:04 2020][solver][infos]
[Wed Oct 14 14:25:04 2020][solver][infos] simulation mode: Economy
[Wed Oct 14 14:25:04 2020][solver][infos] simplex optimization range: week
[Wed Oct 14 14:25:04 2020][solver][infos] 2 years in the user's playlist
[Wed Oct 14 14:25:04 2020][solver][infos] :: enabling the 'year-by-year' mode
[Wed Oct 14 14:25:04 2020][solver][infos] :: enabling the user playlist
[Wed Oct 14 14:25:04 2020][solver][infos] :: enabling the custom build mode
[Wed Oct 14 14:25:04 2020][solver][infos] :: enabling filtering
[Wed Oct 14 14:25:04 2020][solver][infos] :: ignoring export mps
[Wed Oct 14 14:25:04 2020][solver][infos] Output folder : D:\\Users\\andrsgat\\Documents\\TESTI-ALTRI\\API-OCTO\\STA-mini\\output\\20201014-1425eco-goodbye
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the list of areas...
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area list from `D:\\Users\\andrsgat\\Documents\\TESTI-ALTRI\\API-OCTO\\STA-mini\\input\\areas\\list.txt`
[Wed Oct 14 14:25:04 2020][solver][infos] 4 areas found
[Wed Oct 14 14:25:04 2020][solver][infos] Loading global hydro data...
[Wed Oct 14 14:25:04 2020][solver][infos] Loading thermal clusters...
[Wed Oct 14 14:25:04 2020][solver][infos] Loading thermal configuration for the area DE
[Wed Oct 14 14:25:04 2020][solver][infos] Loading thermal configuration for the area ES
[Wed Oct 14 14:25:04 2020][solver][infos] Loading thermal configuration for the area FR
[Wed Oct 14 14:25:04 2020][solver][infos] Loading thermal configuration for the area IT
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 6%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 13%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 20%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 26%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 33%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 40%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 46%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 53%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 60%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 66%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 73%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 80%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 86%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 1/4: DE 93%
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 2/4: ES
[Wed Oct 14 14:25:04 2020][solver][infos] Loading the area 2/4: ES 6%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 13%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 20%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 26%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 33%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 40%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 46%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 53%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 60%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 66%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 73%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 80%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 86%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 2/4: ES 93%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 6%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 13%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 20%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 26%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 33%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 40%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 46%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 53%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 60%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 66%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 73%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 80%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 86%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 3/4: FR 93%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 6%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 13%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 20%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 26%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 33%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 40%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 46%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 53%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 60%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 66%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 73%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 80%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 86%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading the area 4/4: IT 93%
[Wed Oct 14 14:25:05 2020][solver][infos] Loading correlation matrices...
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Loading constraints...
[Wed Oct 14 14:25:05 2020][solver][infos] No binding constraint found
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Loading sets of areas...
[Wed Oct 14 14:25:05 2020][solver][infos] found `all areas` (4 items, no output)
[Wed Oct 14 14:25:05 2020][solver][infos] Elapsed time: Study loading: 394ms
[Wed Oct 14 14:25:05 2020][solver][infos] [statistics] disk: read: 5933 ko, written: 0 ko
[Wed Oct 14 14:25:05 2020][solver][infos] [statistics] network: read: 0 ko, written: 0 ko
[Wed Oct 14 14:25:05 2020][solver][infos] The study is loaded.
[Wed Oct 14 14:25:05 2020][solver][infos] [UI] Display messages: Off
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Generating calendar informations
[Wed Oct 14 14:25:05 2020][solver][infos] Calendar: hours:1..336, days:1..14, weeks:1..2, months:1..1, years:1..2
[Wed Oct 14 14:25:05 2020][solver][infos] Simulation days per month : 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Removing disabled thermal clusters in from solver computations...
[Wed Oct 14 14:25:05 2020][solver][infos] No disabled thermal cluster removed before solver computations
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Optimizing the thermal clusters in 'must-run' mode...
[Wed Oct 14 14:25:05 2020][solver][infos] No thermal cluster in 'must-run' mode
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] No binding constraint to consider
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Summary
[Wed Oct 14 14:25:05 2020][solver][infos] areas: 4
[Wed Oct 14 14:25:05 2020][solver][infos] links: 3
[Wed Oct 14 14:25:05 2020][solver][infos] thermal clusters: 36
[Wed Oct 14 14:25:05 2020][solver][infos] thermal clusters (must-run): 0
[Wed Oct 14 14:25:05 2020][solver][infos] binding constraints: 0
[Wed Oct 14 14:25:05 2020][solver][infos] filtering:true
[Wed Oct 14 14:25:05 2020][solver][infos] memory : 19Mo
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Initializing random number generators...
[Wed Oct 14 14:25:05 2020][solver][infos] [UI] Progression map: D:\\Users\\andrsgat\\Documents\\TESTI-ALTRI\\API-OCTO\\STA-mini\\output\\20201014-1425eco-goodbye\\about-the-study\\map
[Wed Oct 14 14:25:05 2020][solver][infos] system memory report: 7975 Mib / 32332 Mib, 24.665966% free
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] [UI] Display messages: On
[Wed Oct 14 14:25:05 2020][solver][check] Running the simulation (economy)
[Wed Oct 14 14:25:05 2020][solver][infos] Allocating resources...
[Wed Oct 14 14:25:05 2020][solver][infos] Allocating resources...
[Wed Oct 14 14:25:05 2020][solver][infos] Variables: (63Mo)
[Wed Oct 14 14:25:05 2020][solver][infos] + Areas
[Wed Oct 14 14:25:05 2020][solver][infos] OV. COST Euro Overall Cost throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] OP. COST Euro Operating Cost throughout all MC years, of all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos] MRG. PRICE Euro Marginal Price, throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] CO2 EMIS. Tons Overall CO2 emissions expected from all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos] ENERG. PLANT MWh Energy generated by all the clusters
[Wed Oct 14 14:25:05 2020][solver][infos] BALANCE MWh Nodal energy balance, throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] ROW BAL. MWh Row Balance
[Wed Oct 14 14:25:05 2020][solver][infos] PSP MWh PSP
[Wed Oct 14 14:25:05 2020][solver][infos] MISC. NDG MWh Non-dispatchable generation (not including wind and run-of-the-river)
[Wed Oct 14 14:25:05 2020][solver][infos] LOAD MWh Load generation, thoughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] H. ROR MWh Hydro generation, thoughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] WIND MWh Wind generation, thoughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] SOLAR MWh Solar generation, thoughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] Dispatch. Gen. MWh Value of all the dispatchable generation throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] H. STOR MWh Hydro Storage Generation
[Wed Oct 14 14:25:05 2020][solver][infos] H. PUMP MWh Hydraulic pumping
[Wed Oct 14 14:25:05 2020][solver][infos] H. LEV % Hydro Level
[Wed Oct 14 14:25:05 2020][solver][infos] H. INFL MWh Hydraulic inflows
[Wed Oct 14 14:25:05 2020][solver][infos] H. OVFL % Hydro overflow
[Wed Oct 14 14:25:05 2020][solver][infos] H. VAL Euro/MWhWater value
[Wed Oct 14 14:25:05 2020][solver][infos] H. COST Euro Hydro Cost throughout all MC years, of all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos] UNSP. ENRG MWh Unsuplied Energy (demand that cannot be satisfied)
[Wed Oct 14 14:25:05 2020][solver][infos] SPIL. ENRG MWh Spilled Energy (generation that cannot be satisfied)
[Wed Oct 14 14:25:05 2020][solver][infos] LOLD Hours LOLD
[Wed Oct 14 14:25:05 2020][solver][infos] LOLP % LOLP
[Wed Oct 14 14:25:05 2020][solver][infos] AVL DTG MWh Available dispatchable generation
[Wed Oct 14 14:25:05 2020][solver][infos] DTG MRG MWh Dispatchable Generation Margin
[Wed Oct 14 14:25:05 2020][solver][infos] MAX MRG MWh Maximum margin throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] NP COST Euro Non Proportional Cost throughout all MC years, of all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos] NP Cost Non proportional costs by all the clusters
[Wed Oct 14 14:25:05 2020][solver][infos] NODU Number Of Dispatched Units throughout all MC years, of all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos] NODU Number of Dispatchable Units by plant
[Wed Oct 14 14:25:05 2020][solver][infos] + Links
[Wed Oct 14 14:25:05 2020][solver][infos] FLOW LIN. MWh Flow assessed, over all MC years, through linear optimization
[Wed Oct 14 14:25:05 2020][solver][infos] UCAP LIN. MWh Used capacity assessed, over all MC years, through linear optimization
[Wed Oct 14 14:25:05 2020][solver][infos] LOOP FLOW MWh Loop flow
[Wed Oct 14 14:25:05 2020][solver][infos] FLOW QUAD. MWh Flow (quad.)
[Wed Oct 14 14:25:05 2020][solver][infos] CONG. FEE (ALG.) Euro Congestion fee collected throughout all MC years (Alg.)
[Wed Oct 14 14:25:05 2020][solver][infos] CONG. FEE (ABS.) Euro Congestion fee collected throughout all MC years (Absolute value)
[Wed Oct 14 14:25:05 2020][solver][infos] MARG. COST Euro/MW Decrease of the overall operating cost expected by a 1MW capacity reinforcement
[Wed Oct 14 14:25:05 2020][solver][infos] CONG. PROB. (+/-) % Probability for the line to be congested in the upstream-downstream way
[Wed Oct 14 14:25:05 2020][solver][infos] HURDLE COST Euro Hurdle costs, over all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] + Sets of Areas
[Wed Oct 14 14:25:05 2020][solver][infos] OV. COST Euro Overall Cost throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] OP. COST Euro Operating Cost throughout all MC years, of all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos] MRG. PRICE Euro Marginal Price, throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] CO2 EMIS. Tons Overall CO2 emissions expected from all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos] BALANCE MWh Nodal energy balance, throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] ROW BAL. MWh Row Balance
[Wed Oct 14 14:25:05 2020][solver][infos] PSP MWh PSP
[Wed Oct 14 14:25:05 2020][solver][infos] MISC. NDG MWh Non-dispatchable generation (not including wind and run-of-the-river)
[Wed Oct 14 14:25:05 2020][solver][infos] LOAD MWh Load generation, thoughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] H. ROR MWh Hydro generation, thoughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] WIND MWh Wind generation, thoughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] SOLAR MWh Solar generation, thoughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] Dispatch. Gen. MWh Value of all the dispatchable generation throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] H. STOR MWh Hydro Storage Generation
[Wed Oct 14 14:25:05 2020][solver][infos] H. PUMP MWh Hydraulic pumping
[Wed Oct 14 14:25:05 2020][solver][infos] H. LEV % Hydro Level
[Wed Oct 14 14:25:05 2020][solver][infos] H. INFL MWh Hydraulic inflows
[Wed Oct 14 14:25:05 2020][solver][infos] H. OVFL % Hydro overflow
[Wed Oct 14 14:25:05 2020][solver][infos] H. VAL Euro/MWhWater value
[Wed Oct 14 14:25:05 2020][solver][infos] H. COST Euro Hydro Cost throughout all MC years, of all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos] UNSP. ENRG MWh Unsuplied Energy (demand that cannot be satisfied)
[Wed Oct 14 14:25:05 2020][solver][infos] SPIL. ENRG MWh Spilled Energy (generation that cannot be satisfied)
[Wed Oct 14 14:25:05 2020][solver][infos] LOLD Hours LOLD
[Wed Oct 14 14:25:05 2020][solver][infos] LOLP % LOLP
[Wed Oct 14 14:25:05 2020][solver][infos] AVL DTG MWh Available dispatchable generation
[Wed Oct 14 14:25:05 2020][solver][infos] DTG MRG MWh Dispatchable Generation Margin
[Wed Oct 14 14:25:05 2020][solver][infos] MAX MRG MWh Maximum margin throughout all MC years
[Wed Oct 14 14:25:05 2020][solver][infos] NP COST Euro Non Proportional Cost throughout all MC years, of all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos] NODU Number Of Dispatched Units throughout all MC years, of all the thermal dispatchable clusters
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Preparing time-series numbers...
[Wed Oct 14 14:25:05 2020][solver][infos] Preparing time-series numbers... (default ruleset)
[Wed Oct 14 14:25:05 2020][solver][infos] :: Scenario Builder, active target: default ruleset
[Wed Oct 14 14:25:05 2020][solver][infos] > loading scenario builder data from D:\\Users\\andrsgat\\Documents\\TESTI-ALTRI\\API-OCTO\\STA-mini\\settings\\scenariobuilder.dat
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] MC-Years : [1 .. 2], total: 2
[Wed Oct 14 14:25:05 2020][solver][infos] Starting the simulation
[Wed Oct 14 14:25:05 2020][solver][infos] parallel batch size : 1
[Wed Oct 14 14:25:05 2020][solver][infos] Year 1
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Starting Memory Allocation for a Weekly Optimization problem in Canonical form
[Wed Oct 14 14:25:05 2020][solver][infos] ( Problem Size :8904 Variables 1848 Constraints)
[Wed Oct 14 14:25:05 2020][solver][infos] Expected Number of Non-zero terms in Problem Matrix : 67200
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Status of Preliminary Allocations for Generic Problem Resolution : Successful
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting the annual results
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : DE
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : ES
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : FR
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : IT
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : DE - FR
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : ES - FR
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : FR - IT
[Wed Oct 14 14:25:05 2020][solver][infos] Elapsed time: Survey report: 41ms
[Wed Oct 14 14:25:05 2020][solver][progress] task 0 mc, year: 0, 100
[Wed Oct 14 14:25:05 2020][solver][infos] parallel batch size : 1
[Wed Oct 14 14:25:05 2020][solver][infos] Year 2
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting the annual results
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : DE
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : ES
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : FR
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : IT
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : DE - FR
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : ES - FR
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : FR - IT
[Wed Oct 14 14:25:05 2020][solver][infos] Elapsed time: Survey report: 41ms
[Wed Oct 14 14:25:05 2020][solver][progress] task 0 mc, year: 1, 100
[Wed Oct 14 14:25:05 2020][solver][infos] The quadratic optimisation has been skipped
[Wed Oct 14 14:25:05 2020][solver][infos] Elapsed time: MC Years: 191ms
[Wed Oct 14 14:25:05 2020][solver][infos]
[Wed Oct 14 14:25:05 2020][solver][check] Exporting the survey results...
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : DE
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : ES
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : FR
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : IT
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : DE - FR
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : ES - FR
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting results : FR - IT
[Wed Oct 14 14:25:05 2020][solver][infos] Exporting digest...
[Wed Oct 14 14:25:05 2020][solver][infos] Elapsed time: Survey report: 24ms
[Wed Oct 14 14:25:05 2020][solver][infos] [UI] Quitting the solver gracefully
[Wed Oct 14 14:25:05 2020][solver][infos] Writing log file: D:\\Users\\andrsgat\\Documents\\TESTI-ALTRI\\API-OCTO\\STA-mini\\output\\20201014-1425eco-goodbye\\simulation.log
"""
| 85.874525 | 183 | 0.664158 | 3,987 | 22,585 | 3.761976 | 0.088789 | 0.104407 | 0.139209 | 0.174012 | 0.889326 | 0.889326 | 0.885126 | 0.884659 | 0.878525 | 0.84719 | 0 | 0.198316 | 0.211202 | 22,585 | 262 | 184 | 86.20229 | 0.643615 | 0 | 0 | 0.408397 | 0 | 0.69084 | 0.998937 | 0.033252 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
0ef051d564b98e466cc52afef66ef42b4d83453d | 38,747 | py | Python | sentisead/Hybrid/SentiseadTenFold.py | al-alamin/Sentiment4SE | f22935072be2006e6ed8af873e01a2360f325545 | [
"MIT"
] | 2 | 2021-09-23T23:00:57.000Z | 2021-11-22T21:53:13.000Z | sentisead/HybridSentimentSEStudy-master/Hybrid/SentiseadTenFold.py | al-alamin/Sentiment4SE | f22935072be2006e6ed8af873e01a2360f325545 | [
"MIT"
] | null | null | null | sentisead/HybridSentimentSEStudy-master/Hybrid/SentiseadTenFold.py | al-alamin/Sentiment4SE | f22935072be2006e6ed8af873e01a2360f325545 | [
"MIT"
] | null | null | null | '''
Created on Mar 23, 2019
@author: Gias
'''
import os
import re
import pandas as pd
import nltk
from nltk.stem.snowball import SnowballStemmer
from imblearn.over_sampling import SMOTE
from statistics import mean
import cPickle as pickle
import numpy as np
import argparse
import csv
from django.conf import settings
import utils.fileutils as fileutils
from utils import nlputils
import scipy as sp
from scipy.sparse import coo_matrix, hstack
from sklearn.neural_network import MLPClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import Lasso
from sklearn import svm
import utils.metrics as metrics
import sentiplus.DiversityMetrics as dm
from nltk.stem.snowball import SnowballStemmer
from imblearn.over_sampling import SMOTE
from imblearn.over_sampling import SVMSMOTE
import math
from nltk.tokenize import sent_tokenize, word_tokenize, WordPunctTokenizer
from sentiplus.Hybrid import Utils
from nltk.corpus import stopwords
import re
from bs4 import BeautifulSoup
from multiprocessing import Process
from xgboost import XGBClassifier
stopWords = set(stopwords.words('english'))
stemmer =SnowballStemmer("english")
from multiprocessing import Process
#rootdir = r"C:\dev\opinion\papers\sentiplus"
def stem_tokens(tokens):
    stemmed = []
    for item in tokens:
        stemmed.append(stemmer.stem(item))
    return stemmed
def tokenize_and_stem(text):
    tokens = nltk.word_tokenize(text)
    #stems = stem_tokens(tokens)
    stems = tokens
    return stems
def preprocess_text(text):
    if text is None: return []
    #comments = text.encode('ascii', 'ignore')
    text = text.lower()
    text = text.replace("\\", "")
    comments = text
    comments = Utils.expand_contractions(comments)
    comments = Utils.remove_url(comments)
    comments = Utils.replace_all(comments, Utils.emodict)
    #comments = tweet_cleaner_updated(comments)
    #comments = Utils.handle_negation(comments)
    #comments = detectSentimentDominentClause(comments)
    return comments
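# Illustrative sketch of the preprocessing above (added commentary; the exact
# output depends on the Utils helpers defined outside this file):
#
#     preprocess_text("Can't load the URL http://example.com :(")
#     # -> lowercased text with the contraction expanded, the URL removed,
#     #    and the ":(" emoticon rewritten via Utils.emodict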
def hotEncodeDSOSEValues(infile, outfile):
    df = pd.read_excel(infile, sheet_name="Sheet1")
    encodes = []
    for index, row in df.iterrows():
        label = row["DSOSE"]
        if label == "p":
            label = 1
        elif label == "n":
            label = -1
        else:
            label = 0
        encodes.append(label)
    df["DSOSE_HotEncoded"] = (pd.Series(encodes)).values
    df.to_excel(outfile, sheet_name="Sheet1", encoding="ISO-8859-1")
def hotEncodePOMEValues(infile, outfile):
    df = pd.read_excel(infile, sheet_name="Sheet1")
    encodes = []
    for index, row in df.iterrows():
        label = row["POME"]
        if label == "p":
            label = 1
        elif label == "n":
            label = -1
        else:
            label = 0
        encodes.append(label)
    df["POME_HotEncoded"] = (pd.Series(encodes)).values
    df.to_excel(outfile, sheet_name="Sheet1", encoding="ISO-8859-1")
def hotEncodeEnsembleRFValues(infile, outfile):
    df = pd.read_excel(infile, sheet_name="Sheet1")
    encodes = []
    for index, row in df.iterrows():
        label = row["Ensemble_RF"]
        if label == "p":
            label = 1
        elif label == "n":
            label = -1
        else:
            label = 0
        encodes.append(label)
    df["Ensemble_RF_HotEncoded"] = (pd.Series(encodes)).values
    df.to_excel(outfile, sheet_name="Sheet1", encoding="ISO-8859-1")
class CommonUtils(object):

    def __init__(self):
        pass

    def prepareTrainTestFiles(self, infile, outdir):
        df = fileutils.readExcel(infile, "Sheet1", encoding="ISO-8859-1")
        folds = dict()
        for index, row in df.iterrows():
            uid = row["UID"]
            fname = row["File"].split("_")[0]
            if fname not in folds:
                folds[fname] = dict()
            fold = row["File"].split("_")[-1]
            fold = int(fold)
            if fold not in folds[fname]:
                folds[fname][fold] = []
            folds[fname][fold].append(row)
        for fname in folds:
            for i in range(10):
                test = i
                outfileTest = fname+"_Test_"+str(test)+".csv"
                outfileTest = os.path.join(outdir, outfileTest)
                testData = folds[fname][test]
                records = []
                for row in testData:
                    records.append(row)
                df = pd.DataFrame.from_dict(records)
                df.to_csv(outfileTest, encoding="ISO-8859-1", index=False)
                outfileTrain = fname+"_Train_"+str(test)+".csv"
                outfileTrain = os.path.join(outdir, outfileTrain)
                records = []
                for j in range(10):
                    if i == j: continue
                    trainData = folds[fname][j]
                    for row in trainData:
                        records.append(row)
                df = pd.DataFrame.from_dict(records)
                df.to_csv(outfileTrain, encoding="ISO-8859-1", index=False)
    def run(self, filename, fold, dirResult, outdir, featCols, algoName, ngram):
        print (filename, fold)
        infile = filename+"_Train_"+str(fold)+".csv"
        infile = os.path.join(outdir, infile)
        outfileModel = filename+"_Train_"+str(fold)+"_model.pkl"
        outfileModel = os.path.join(dirResult, outfileModel)
        senticr = SupervisedDetector(infile, outfileModel, featCols, algo=algoName, ngram_range=ngram)
        infileTest = filename+"_Test_"+str(fold)+".csv"
        infileTest = os.path.join(outdir, infileTest)
        dfTest = pd.read_csv(infileTest, encoding="ISO-8859-1")
        results = []
        for index, row in dfTest.iterrows():
            text = row["Sentence"]
            additionalColVals = []
            for col in featCols:
                additionalColVals.append(row[col])
            label = senticr.get_sentiment_polarity(text, additionalColVals)[0]
            #print label
            if algoName == "LASSO":
                if label >= 0.5:
                    label = "p"
                elif label <= -0.5:
                    label = "n"
                else:
                    label = "o"
            else:
                if label == 1:
                    label = "p"
                elif label == -1:
                    label = "n"
                elif label == 0:
                    label = "o"
                else:
                    label = "WTF"
            results.append(label)
        dfTest["Sentisead"] = (pd.Series(results)).values
        outfileResults = filename+"_Test_"+str(fold)+"_Results.csv"
        outfileResults = os.path.join(dirResult, outfileResults)
        dfTest.to_csv(outfileResults, index = False, encoding="ISO-8859-1")
    def runNoBoW(self, filename, fold, dirResult, outdir, featCols, algoName, ngram):
        print (filename, fold)
        infile = filename+"_Train_"+str(fold)+".csv"
        infile = os.path.join(outdir, infile)
        outfileModel = filename+"_Train_"+str(fold)+"_model.pkl"
        outfileModel = os.path.join(dirResult, outfileModel)
        senticr = SupervisedDetectorNoBoW(infile, outfileModel, featCols, algo=algoName, ngram_range=ngram)
        infileTest = filename+"_Test_"+str(fold)+".csv"
        infileTest = os.path.join(outdir, infileTest)
        dfTest = pd.read_csv(infileTest, encoding="ISO-8859-1")
        results = []
        for index, row in dfTest.iterrows():
            text = row["Sentence"]
            additionalColVals = []
            for col in featCols:
                additionalColVals.append(row[col])
            label = senticr.get_sentiment_polarity(text, additionalColVals)[0]
            #print label
            if algoName == "LASSO":
                if label >= 0.5:
                    label = "p"
                elif label <= -0.5:
                    label = "n"
                else:
                    label = "o"
            else:
                if label == 1:
                    label = "p"
                elif label == -1:
                    label = "n"
                elif label == 0:
                    label = "o"
                else:
                    label = "WTF"
            results.append(label)
        dfTest["Sentisead_NoBoW"] = (pd.Series(results)).values
        outfileResults = filename+"_Test_"+str(fold)+"_Results.csv"
        outfileResults = os.path.join(dirResult, outfileResults)
        dfTest.to_csv(outfileResults, index = False, encoding="ISO-8859-1")
    def trainTestSentiCRCustomized(self, algoName, ngram, outdir, featCols, filenames, parallelized=False):
        dirResult = os.path.join(outdir, "Sentisead_"+algoName)
        fileutils.make_dir(dirResult)
        #outdir = os.path.join(rootdir, "results_senticr")
        folds = 10
        for filename in filenames:
            ps = []
            for i in range(folds):
                p = Process(target=self.run, args = (filename, i, dirResult, outdir, featCols, algoName, ngram))
                ps.append(p)
            if parallelized == True:
                for p in ps:
                    p.start()
                for p in ps:
                    p.join()
            else:
                for p in ps:
                    p.start()
                    p.join()
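    # Added note: with parallelized=True all ten fold processes are started
    # before any join, so the folds train concurrently; in the else branch each
    # process is started and immediately joined, i.e. effectively sequential.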
    def trainTestSentiCRCustomizedNoBoW(self, algoName, ngram, outdir, featCols, filenames, parallelized=False):
        dirResult = os.path.join(outdir, "Sentisead_NoBoW_"+algoName)
        fileutils.make_dir(dirResult)
        #outdir = os.path.join(rootdir, "results_senticr")
        folds = 10
        for filename in filenames:
            ps = []
            for i in range(folds):
                p = Process(target=self.runNoBoW, args = (filename, i, dirResult, outdir, featCols, algoName, ngram))
                ps.append(p)
            if parallelized == True:
                for p in ps:
                    p.start()
                for p in ps:
                    p.join()
            else:
                for p in ps:
                    p.start()
                    p.join()
    def consolidateResults(self, algo, outdir, filenames, dirConsolidated):
        #algos = ["RF", "ADB", "GBT"]
        records = []
        dirResults = os.path.join(outdir, "Sentisead_"+algo)
        folds = 10
        for filename in filenames:
            for i in range(folds):
                fid = filename + "_Test_"+str(i)
                infile = fid+"_Results.csv"
                infile = os.path.join(dirResults, infile)
                df = pd.read_csv(infile, encoding="ISO-8859-1")
                for index, row in df.iterrows():
                    records.append(row)
        # now append to existing consolidated
        outfile = os.path.join(dirConsolidated, "ResultsConsolidated_"+algo+".xls")
        df = pd.DataFrame.from_dict(records)
        fileutils.writeExcel(outfile, "Sheet1", df)
    def consolidateResultsNoBoW(self, algo, outdir, filenames, dirConsolidated):
        #algos = ["RF", "ADB", "GBT"]
        records = []
        dirResults = os.path.join(outdir, "Sentisead_NoBoW_"+algo)
        folds = 10
        for filename in filenames:
            for i in range(folds):
                fid = filename + "_Test_"+str(i)
                infile = fid+"_Results.csv"
                infile = os.path.join(dirResults, infile)
                df = pd.read_csv(infile, encoding="ISO-8859-1")
                for index, row in df.iterrows():
                    records.append(row)
        # now append to existing consolidated
        outfile = os.path.join(dirConsolidated, "ResultsConsolidated_NoBoW_"+algo+".xls")
        df = pd.DataFrame.from_dict(records)
        fileutils.writeExcel(outfile, "Sheet1", df)
    def computePerformanceOverallOfLearner(self, algo, learnerCol, dirConsolidated, filenames):
        infile = os.path.join(dirConsolidated, "ResultsConsolidated_"+algo+".xls")
        df = fileutils.readExcel(infile, "Sheet1", encoding="ISO-8859-1")
        exps = []
        gots = []
        labels = set()
        for index, row in df.iterrows():
            fname = row["File"]
            fname = fname.split("_")[0]
            if fname not in filenames:
                #print fname, " not in filenmaes"
                #return
                continue
            else:
                exp = row["ManualLabel"]
                got = row[learnerCol]
                labels.add(exp)
                exps.append(exp)
                gots.append(got)
        computer = metrics.PerformanceMultiClass(exps, gots, labels = list(labels))
        for label in labels:
            pr = computer.precision(label)
            re = computer.recall(label)
            f1 = 2*pr*re/(pr+re)
            print "Label = %s. Precision = %.3f. Recall = %.3f. F1 = %.3f"%(label, pr, re, f1)
        f1_macro = computer.f1_macro_average()
        pr_macro = computer.precision_macro_average()
        rec_macro = computer.recall_macro_average()
        f1_micro, _, _ = computer.compute_micro_average()
        print "F1 Macro = %.3f. Micro = %.3f"%(f1_macro, f1_micro)
        print "Macro Precision = %.3f. Recall = %.3f"%(pr_macro, rec_macro)
        print "-------------------------------"
    def computePerformanceOverallOfLearnerNoBoW(self, algo, learnerCol, dirConsolidated, filenames):
        infile = os.path.join(dirConsolidated, "ResultsConsolidated_NoBoW_"+algo+".xls")
        df = fileutils.readExcel(infile, "Sheet1", encoding="ISO-8859-1")
        exps = []
        gots = []
        labels = set()
        for index, row in df.iterrows():
            fname = row["File"]
            fname = fname.split("_")[0]
            if fname not in filenames:
                #print fname, " not in filenmaes"
                #return
                continue
            else:
                exp = row["ManualLabel"]
                got = row[learnerCol]
                labels.add(exp)
                exps.append(exp)
                gots.append(got)
        computer = metrics.PerformanceMultiClass(exps, gots, labels = list(labels))
        for label in labels:
            pr = computer.precision(label)
            re = computer.recall(label)
            f1 = 2*pr*re/(pr+re)
            print "Label = %s. Precision = %.3f. Recall = %.3f. F1 = %.3f"%(label, pr, re, f1)
        f1_macro = computer.f1_macro_average()
        pr_macro = computer.precision_macro_average()
        rec_macro = computer.recall_macro_average()
        f1_micro, _, _ = computer.compute_micro_average()
        print "F1 Macro = %.3f. Micro = %.3f"%(f1_macro, f1_micro)
        print "Macro Precision = %.3f. Recall = %.3f"%(pr_macro, rec_macro)
        print "-------------------------------"
    def computePerformancOfLearner(self, algo, learnerCol, dirConsolidated, filenames):
        infile = os.path.join(dirConsolidated, "ResultsConsolidated_"+algo+".xls")
        df = fileutils.readExcel(infile, "Sheet1", encoding="ISO-8859-1")
        exps = dict()
        gots = dict()
        labels = dict()
        for index, row in df.iterrows():
            fname = row["File"]
            fname = fname.split("_")[0]
            if fname not in filenames:
                #print fname, " not in filenmaes"
                continue
            else:
                if fname not in exps:
                    exps[fname] = []
                    gots[fname] = []
                    labels[fname] = set()
                exp = row["ManualLabel"]
                got = row[learnerCol]
                labels[fname].add(exp)
                exps[fname].append(exp)
                gots[fname].append(got)
        for fname in filenames:
            computer = metrics.PerformanceMultiClass(exps[fname], gots[fname], labels = list(labels[fname]))
            for label in labels[fname]:
                pr = computer.precision(label)
                re = computer.recall(label)
                f1 = 2*pr*re/(pr+re)
                print "File %s. Label = %s. Precision = %.2f. Recall = %.2f. F1 = %.2f"%(fname, label, pr, re, f1)
            f1_macro = computer.f1_macro_average()
            f1_micro, _, _ = computer.compute_micro_average()
            print "File = %s. F1 Macro = %.2f. Micro = %.2f"%(fname, f1_macro, f1_micro)
            print "-------------------------------"
    def computePerformancOfLearnerNoBoW(self, algo, learnerCol, dirConsolidated, filenames):
        infile = os.path.join(dirConsolidated, "ResultsConsolidated_NoBoW_"+algo+".xls")
        df = fileutils.readExcel(infile, "Sheet1", encoding="ISO-8859-1")
        exps = dict()
        gots = dict()
        labels = dict()
        for index, row in df.iterrows():
            fname = row["File"]
            fname = fname.split("_")[0]
            if fname not in filenames:
                #print fname, " not in filenmaes"
                continue
            else:
                if fname not in exps:
                    exps[fname] = []
                    gots[fname] = []
                    labels[fname] = set()
                exp = row["ManualLabel"]
                got = row[learnerCol]
                labels[fname].add(exp)
                exps[fname].append(exp)
                gots[fname].append(got)
        for fname in filenames:
            computer = metrics.PerformanceMultiClass(exps[fname], gots[fname], labels = list(labels[fname]))
            for label in labels[fname]:
                pr = computer.precision(label)
                re = computer.recall(label)
                f1 = 2*pr*re/(pr+re)
                print "File %s. Label = %s. Precision = %.2f. Recall = %.2f. F1 = %.2f"%(fname, label, pr, re, f1)
            f1_macro = computer.f1_macro_average()
            f1_micro, _, _ = computer.compute_micro_average()
            print "File = %s. F1 Macro = %.2f. Micro = %.2f"%(fname, f1_macro, f1_micro)
            print "-------------------------------"
class Sentisead(object):

    def __init__(self, rootdir):
        self.basedir = os.path.join(rootdir, "Hybrid")
        self.featCols = [
            #"Ensemble_RF_HotEncoded",
            #'Adaptive_POLAR_ADB_HotEncoded',
            #'Adaptive_POLAR_RF_HotEncoded',
            #'Adaptive_POLAR_GBT_HotEncoded',
            #'DSOSE_HotEncoded',
            'DsoLabelFullText_HotEncoded',
            #'DsoLabelFullTextW2V_HotEncoded',
            #'Pscore_FullText',
            #'Nscore_FullText',
            #'NscoreW2V_FullText',
            #'PscoreW2V_FullText',
            #'DsoLabelFirstWord_HotEncoded',
            #'DsoLabelLastWord_HotEncoded',
            'Senti4SD_HotEncoded',
            'SentiCR_HotEncoded',
            'SentistrengthSE_HotEncoded',
            #'Majority_HotEncoded',
            'POME_HotEncoded',
            "ShannonPolarOverall",
            "ShannonPolarPositive",
            "ShannonPolarNegative",
            "ShannonVerb",
            "ShannonAdjective",
            #"ShannonPolarOverallBin",
            #"ShannonPolarPositiveBin",
            #"ShannonPolarNegativeBin",
            #"ShannonVerbBin",
            #"ShannonAdjectiveBin"
        ]
        self.indir = self.basedir
        self.infile = os.path.join(self.indir, "ResultsConsolidatedWithEnsembleAssessment.xls")
        self.filenames = ["DatasetLinJIRA", "BenchmarkUddinSO", "DatasetLinAppReviews",
                          "DatasetLinSO", "DatasetSenti4SDSO", "OrtuJIRA"]
        self.outdir = os.path.join(self.basedir, "Sentisead")
        fileutils.make_dir(self.outdir)
        self.dirConsolidated = os.path.join(self.outdir, "consolidated")
        fileutils.make_dir(self.dirConsolidated)
        self.utils = CommonUtils()

    def prepareTrainTestFiles(self):
        self.utils.prepareTrainTestFiles(self.infile, self.outdir)

    def trainTestSupervisedDetector(self, algoName, parallelized=False, ngram=(1,1)):
        self.utils.trainTestSentiCRCustomized(algoName, ngram, self.outdir, self.featCols, self.filenames, parallelized)

    def consolidateResults(self, algo):
        self.utils.consolidateResults(algo, self.outdir, self.filenames, self.dirConsolidated)

    def computePerformanceOverallOfLearner(self, algo, learnerCol):
        self.utils.computePerformanceOverallOfLearner(algo, learnerCol, self.dirConsolidated, self.filenames)

    def computePerformancOfLearner(self, algo, learnerCol):
        self.utils.computePerformancOfLearner(algo, learnerCol, self.dirConsolidated, self.filenames)

    def pipeline(self, algo, ngram, parallelized=False):
        if ngram == 1:
            ngram = (1,1)
        elif ngram == 2:
            ngram = (1,2)
        elif ngram == 3:
            ngram = (1,3)
        else:
            print "ngram out of accepted range. using unigram!"
            ngram = (1,1)
        learnerCol = "Sentisead"
        algoSpec = get_classifier(algo)
        self.prepareTrainTestFiles()
        vect = getVectorizer(max_df=0.5, min_df=3, ngram_range=ngram)
        self.trainTestSupervisedDetector(algo, parallelized, ngram)
        self.consolidateResults(algo)
        print algoSpec
        print "-"*80
        print "Vectorizer"
        print vect
        print "-"*80
        print "Overall Performance"
        self.computePerformanceOverallOfLearner(algo, learnerCol)
        print "-"*80
        print "By File Performance"
        print "-"*80
        self.computePerformancOfLearner(algo, learnerCol)
class SentiseadNoBoW(object):

    def __init__(self, rootdir):
        self.basedir = os.path.join(rootdir, "Hybrid")
        self.featCols = [
            #"Ensemble_RF_HotEncoded",
            #'Adaptive_POLAR_ADB_HotEncoded',
            #'Adaptive_POLAR_RF_HotEncoded',
            #'Adaptive_POLAR_GBT_HotEncoded',
            #'DSOSE_HotEncoded',
            'DsoLabelFullText_HotEncoded',
            #'DsoLabelFullTextW2V_HotEncoded',
            #'Pscore_FullText',
            #'Nscore_FullText',
            #'NscoreW2V_FullText',
            #'PscoreW2V_FullText',
            #'DsoLabelFirstWord_HotEncoded',
            #'DsoLabelLastWord_HotEncoded',
            'Senti4SD_HotEncoded',
            'SentiCR_HotEncoded',
            'SentistrengthSE_HotEncoded',
            #'Majority_HotEncoded',
            'POME_HotEncoded',
            "ShannonPolarOverall",
            "ShannonPolarPositive",
            "ShannonPolarNegative",
            "ShannonVerb",
            "ShannonAdjective",
            #"ShannonPolarOverallBin",
            #"ShannonPolarPositiveBin",
            #"ShannonPolarNegativeBin",
            #"ShannonVerbBin",
            #"ShannonAdjectiveBin"
        ]
        self.indir = self.basedir
        self.infile = os.path.join(self.indir, "ResultsConsolidatedWithEnsembleAssessment.xls")
        self.filenames = ["DatasetLinJIRA", "BenchmarkUddinSO", "DatasetLinAppReviews",
                          "DatasetLinSO", "DatasetSenti4SDSO", "OrtuJIRA"]
        self.outdir = os.path.join(self.basedir, "Sentisead")
        fileutils.make_dir(self.outdir)
        self.dirConsolidated = os.path.join(self.outdir, "consolidated")
        fileutils.make_dir(self.dirConsolidated)
        self.utils = CommonUtils()

    def prepareTrainTestFiles(self):
        self.utils.prepareTrainTestFiles(self.infile, self.outdir)

    def trainTestSupervisedDetector(self, algoName, parallelized=False, ngram=(1,1)):
        self.utils.trainTestSentiCRCustomizedNoBoW(algoName, ngram, self.outdir, self.featCols, self.filenames, parallelized)

    def consolidateResults(self, algo):
        self.utils.consolidateResultsNoBoW(algo, self.outdir, self.filenames, self.dirConsolidated)

    def computePerformanceOverallOfLearner(self, algo, learnerCol):
        self.utils.computePerformanceOverallOfLearnerNoBoW(algo, learnerCol, self.dirConsolidated, self.filenames)

    def computePerformancOfLearner(self, algo, learnerCol):
        self.utils.computePerformancOfLearnerNoBoW(algo, learnerCol, self.dirConsolidated, self.filenames)

    def pipeline(self, algo, ngram, parallelized=False):
        if ngram == 1:
            ngram = (1,1)
        elif ngram == 2:
            ngram = (1,2)
        elif ngram == 3:
            ngram = (1,3)
        else:
            print "ngram out of accepted range. using unigram!"
            ngram = (1,1)
        learnerCol = "Sentisead_NoBoW"
        algoSpec = get_classifier(algo)
        self.prepareTrainTestFiles()
        vect = getVectorizer(max_df=0.5, min_df=3, ngram_range=ngram)
        self.trainTestSupervisedDetector(algo, parallelized, ngram)
        self.consolidateResults(algo)
        print algoSpec
        print "-"*80
        print "Vectorizer"
        print vect
        print "-"*80
        print "Overall Performance"
        self.computePerformanceOverallOfLearner(algo, learnerCol)
        print "-"*80
        print "By File Performance"
        print "-"*80
        self.computePerformancOfLearner(algo, learnerCol)
class SentimentData:
    def __init__(self, text, rating):
        self.text = text
        self.rating = rating
def get_classifier(algo):
    if algo=="GBT":
        return GradientBoostingClassifier(learning_rate=0.1, n_estimators=500, max_depth=10, min_samples_split=100,
                                          min_samples_leaf=20, subsample=0.85, random_state=10)
    if algo=="GBTSentiCR":
        return GradientBoostingClassifier()
    elif algo=="RF":
        return RandomForestClassifier(n_estimators=150)
    elif algo == "xgb":
        return XGBClassifier()
    elif algo=="ADB":
        return AdaBoostClassifier()
    elif algo == "DT":
        return DecisionTreeClassifier()
    elif algo=="NB":
        return BernoulliNB()
    elif algo=="SGD":
        return SGDClassifier()
    elif algo == "LASSO":
        return Lasso(alpha=0.1)
    elif algo=="SVC":
        return LinearSVC(C=1.0, loss = "hinge", max_iter=100000, penalty="l2")
    elif algo == "SVM":
        return svm.SVC(gamma='scale', decision_function_shape='ovo')
        #return LinearSVC()
    elif algo == "SGD":
        # unreachable: the earlier "SGD" branch above already handles this name
        return SGDClassifier(alpha=.0001, n_iter=2000,
                             epsilon=0.5, loss='log', penalty="l2",
                             power_t=0.5, warm_start=False, shuffle=True)
    elif algo == "LogisticRegression":
        return LogisticRegression()
    elif algo=="MLPC":
        return MLPClassifier(activation='logistic', batch_size='auto',
                             early_stopping=True, hidden_layer_sizes=(100,), learning_rate='adaptive',
                             learning_rate_init=0.1, max_iter=5000, random_state=1,
                             solver='lbfgs', tol=0.0001, validation_fraction=0.1, verbose=False,
                             warm_start=False)
    return 0
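# Minimal usage sketch for the factory above (added commentary; X_train,
# Y_train and X_test stand for the caller's feature/label arrays):
#
#     clf = get_classifier("RF")     # -> RandomForestClassifier(n_estimators=150)
#     clf.fit(X_train, Y_train)
#     preds = clf.predict(X_test)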
def getVectorizer(max_df, min_df, ngram_range):
    vectorizer = TfidfVectorizer(tokenizer=tokenize_and_stem, sublinear_tf=True, max_df=max_df,
                                 #stop_words=stopWords,
                                 min_df=min_df, ngram_range=ngram_range)
    # vectorizer = TfidfVectorizer(tokenizer=tokenize_and_stem, sublinear_tf=True, max_df=max_df, min_df=min_df, ngram_range=ngram_range)
    return vectorizer
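# Parameter note (added commentary): max_df=0.5 drops n-grams appearing in
# more than half of the training sentences, min_df=3 drops n-grams seen in
# fewer than three, and sublinear_tf=True weights terms by 1 + log(tf)
# instead of the raw term frequency.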
class SupervisedDetector:

    def __init__(self, infileTraining, infileModel, featCols, training=True, encoding = 'ISO-8859-1',
                 infileSheetName = "Sheet1", infileSentCol = "Sentence", infileRatingCol = "ManualLabel_HotEncoded",
                 algo="GBT", ngram_range = (1,1), max_df = 0.5, min_df = 3):
        self.additionalCols = featCols
        self.algo = algo
        #self.indir = "/home/gias/dev/opinion/papers/opinionvalue/SentiCR"
        self.vectorizer = getVectorizer(max_df, min_df, ngram_range)
        #modelFile = infileTraining.split('_Train')[0]+"_"+algo+".pkl"
        self.modelFile = infileModel  #os.path.join(dirTrainedModelsOriginal, modelFile) #os.path.join(self.indir, "crpolar.pkl")
        self.trainingFile = infileTraining  #os.path.join(dirTrainedModelsOriginal, infileTraining)
        self.encoding = encoding
        print ("Algo = ", algo)
        if training == True:
            print("Training ....")
            self.training_data = self.read_data_from_oracle_pd(infileSheetName, infileSentCol, infileRatingCol)
            self.model = self.create_model_from_training_data()
            print("saving model ", self.modelFile)
            with open(self.modelFile, 'wb') as f:
                pickle.dump(self.model, f)
        else:
            with open(self.modelFile, 'rb') as f:
                self.model = pickle.load(f)
            training_comments = []
            self.training_data = self.read_data_from_oracle_pd(infileSheetName, infileSentCol, infileRatingCol)
            for sentidata in self.training_data:
                comments = preprocess_text(sentidata.text)
                training_comments.append(comments)
            self.vectorizer.fit_transform(training_comments).toarray()

    def create_model_from_training_data(self):
        training_comments = []
        training_ratings = []
        print("Training classifier model..")
        for sentidata in self.training_data:
            comments = preprocess_text(sentidata.text)
            training_comments.append(comments)
            training_ratings.append(sentidata.rating)
        X = hstack((self.vectorizer.fit_transform(training_comments),
                    self.train_df[self.additionalCols].values),
                   format='csr')
        #X_train = self.vectorizer.fit_transform(training_comments).toarray()
        X_train = X.toarray()
        Y_train = np.array(training_ratings)
        # Apply SMOTE to improve ratio of the minority class
        #smote_model = SMOTE(sampling_strategy=0.5, random_state=None, k_neighbors=10, m_neighbors=10, out_step=.0001, kind='regular', svm_estimator=None, n_jobs=1)
        #smote_model = SVMSMOTE(sampling_strategy=0.5, random_state=5000, k_neighbors=10, m_neighbors=10, out_step=.0001,n_jobs=1, svm_estimator=None)
        #smote_model = SMOTE(sampling_strategy=0.5)
        smote_model = SVMSMOTE(sampling_strategy=0.5, random_state=42, k_neighbors=object, m_neighbors=object, out_step=.0001, n_jobs=1, svm_estimator=None)
        model = get_classifier(self.algo)
        try:
            X_resampled, Y_resampled = smote_model.fit_sample(X_train, Y_train)
            model.fit(X_resampled, Y_resampled)
        except:
            model.fit(X_train, Y_train)
        #model.fit(X_train, Y_train)
        return model

    def read_data_from_oracle_pd(self, sheetName="Sheet1", sentCol="Sentence", ratingCol ="ManualLabel_HotEncoded"):
        print("Reading data from oracle..")
        oracle_data = []
        if self.trainingFile.endswith(".csv") == False:
            self.train_df = fileutils.readExcel(self.trainingFile, sheetName, encoding = self.encoding)
        else:
            self.train_df = pd.read_csv(self.trainingFile, encoding = self.encoding)
        for index, row in self.train_df.iterrows():
            text = row[sentCol]
            rating = row[ratingCol]
            comments = SentimentData(text, rating)
            oracle_data.append(comments)
        return oracle_data

    def get_sentiment_polarity(self, text, additionalColVals):
        comment = preprocess_text(text)
        feature_vector = hstack((self.vectorizer.transform([comment]),
                                 additionalColVals),
                                format='csr')
        feature_vector = feature_vector.toarray()
        sentiment_class = self.model.predict(feature_vector)
        return sentiment_class
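# End-to-end usage sketch for the detector above (added commentary; file names
# are illustrative, and the training CSV must contain the "Sentence" and
# "ManualLabel_HotEncoded" columns plus every column listed in featCols):
#
#     detector = SupervisedDetector("BenchmarkUddinSO_Train_0.csv", "model.pkl",
#                                   featCols=["SentiCR_HotEncoded"], algo="RF")
#     label = detector.get_sentiment_polarity("this api is terrible", [-1])[0]
#     # -> 1 (positive), -1 (negative) or 0 (neutral)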
class SupervisedDetectorNoBoW:
def __init__(self, infileTraining, infileModel, featCols, training=True, encoding = 'ISO-8859-1',
infileSheetName = "Sheet1", infileSentCol = "Sentence", infileRatingCol = "ManualLabel_HotEncoded",
algo="GBT", ngram_range = (1,1), max_df = 0.5, min_df = 3):
self.additionalCols = featCols
self.algo = algo
#self.indir = "/home/gias/dev/opinion/papers/opinionvalue/SentiCR"
self.vectorizer = getVectorizer(max_df, min_df, ngram_range)
#modelFile = infileTraining.split('_Train')[0]+"_"+algo+".pkl"
self.modelFile = infileModel #os.path.join(dirTrainedModelsOriginal, modelFile) #os.path.join(self.indir, "crpolar.pkl")
self.trainingFile = infileTraining#os.path.join(dirTrainedModelsOriginal, infileTraining)
self.encoding = encoding
print ("Algo = ", algo)
if training == True:
print("Training ....")
self.training_data=self.read_data_from_oracle_pd(infileSheetName, infileSentCol, infileRatingCol)
self.model = self.create_model_from_training_data()
print("saving model ", self.modelFile)
with open(self.modelFile, 'wb') as f:
pickle.dump(self.model, f)
else:
with open(self.modelFile, 'rb') as f:
self.model = pickle.load(f)
training_comments=[]
self.training_data=self.read_data_from_oracle_pd(infileSheetName, infileSentCol, infileRatingCol)
for sentidata in self.training_data:
comments = preprocess_text(sentidata.text)
training_comments.append(comments)
self.vectorizer.fit_transform(training_comments).toarray()
def create_model_from_training_data(self):
training_comments=[]
training_ratings=[]
print("Training classifier model..")
for sentidata in self.training_data:
#comments = preprocess_text(sentidata.text)
#training_comments.append(comments)
training_ratings.append(sentidata.rating)
X = self.train_df[self.additionalCols].values
#X_train = self.vectorizer.fit_transform(training_comments).toarray()
X_train = X#.toarray()
Y_train = np.array(training_ratings)
#Apply SMOTE to improve ratio of the minority class
#smote_model = SMOTE(sampling_strategy=0.5, random_state=None, k_neighbors=10, m_neighbors=10, out_step=.0001, kind='regular', svm_estimator=None, n_jobs=1)
#smote_model = SVMSMOTE(sampling_strategy=0.5, random_state=5000, k_neighbors=10, m_neighbors=10, out_step=.0001,n_jobs=1, svm_estimator=None)
#smote_model = SMOTE(sampling_strategy=0.5)
smote_model = SVMSMOTE(sampling_strategy=0.5, random_state=42, k_neighbors=10, m_neighbors=10, out_step=0.0001, n_jobs=1, svm_estimator=None)  # k_neighbors/m_neighbors were mistakenly set to the builtin `object`; 10 matches the commented-out variants above
model = get_classifier(self.algo)
try:
X_resampled, Y_resampled = smote_model.fit_resample(X_train, Y_train)  # fit_sample was renamed to fit_resample in imbalanced-learn 0.4 and removed in 0.8
model.fit(X_resampled, Y_resampled)
except Exception:  # fall back to the raw, unbalanced data if resampling fails
model.fit(X_train, Y_train)
return model
def read_data_from_oracle_pd(self, sheetName="Sheet1", sentCol="Sentence", ratingCol ="ManualLabel_HotEncoded"):
print("Reading data from oracle..")
oracle_data=[]
if not self.trainingFile.endswith(".csv"):
self.train_df = fileutils.readExcel(self.trainingFile, sheetName, encoding = self.encoding)
else:
self.train_df = pd.read_csv(self.trainingFile, encoding = self.encoding)
for index, row in self.train_df.iterrows():
text = row[sentCol]
rating = row[ratingCol]
comments = SentimentData(text, rating)
oracle_data.append(comments)
return oracle_data
def get_sentiment_polarity(self, text, additionalColVals):
#comment= preprocess_text(text)
feature_vector= [additionalColVals]
#feature_vector = feature_vector.toarray()
sentiment_class=self.model.predict(feature_vector)
return sentiment_class
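# Hypothetical end-to-end use of SupervisedDetectorNoBoW (file paths and
# feature column names assumed, not taken from this file):
#
#   det = SupervisedDetectorNoBoW(
#       infileTraining="train.xlsx", infileModel="nobow_gbt.pkl",
#       featCols=["SentenceLength", "HasCodeSnippet"], algo="GBT",
#   )
#   det.get_sentiment_polarity("text is ignored by NoBoW", [42, 1])  # one value per featCol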
| 44.130979 | 165 | 0.561592 | 3,755 | 38,747 | 5.656192 | 0.121704 | 0.010735 | 0.017892 | 0.01356 | 0.814539 | 0.80371 | 0.79467 | 0.792316 | 0.789962 | 0.787466 | 0 | 0.016574 | 0.335071 | 38,747 | 877 | 166 | 44.1813 | 0.807794 | 0.089504 | 0 | 0.740057 | 0 | 0.002841 | 0.082229 | 0.014823 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.00142 | 0.059659 | null | null | 0.065341 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
16aa4ba7700b2d37ba04b5dcd8bee01218532241 | 233 | py | Python | fbWaitForTerminationForThreadHandle.py | SkyLined/mWindowsAPI | d64d57bbf87d2a7b33cf7de89263553793484a84 | [
"CC-BY-4.0"
] | 7 | 2017-10-09T14:32:22.000Z | 2021-01-30T07:25:50.000Z | fbWaitForTerminationForThreadHandle.py | SkyLined/mWindowsAPI | d64d57bbf87d2a7b33cf7de89263553793484a84 | [
"CC-BY-4.0"
] | 2 | 2017-12-12T02:53:18.000Z | 2019-02-19T09:23:18.000Z | fbWaitForTerminationForThreadHandle.py | SkyLined/mWindowsAPI | d64d57bbf87d2a7b33cf7de89263553793484a84 | [
"CC-BY-4.0"
] | 1 | 2017-12-12T02:42:18.000Z | 2017-12-12T02:42:18.000Z | from .fbWaitForSingleObject import fbWaitForSingleObject;
def fbWaitForTerminationForThreadHandle(ohThread, nTimeoutInSeconds = None):
return fbWaitForSingleObject(ohThread, nTimeoutInSeconds, bInvalidHandleMeansSignaled = True);
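# Hypothetical usage, in the module's own style (thread handle acquisition assumed):
#   bTerminated = fbWaitForTerminationForThreadHandle(ohThread, nTimeoutInSeconds = 5);
# The fb prefix suggests a boolean return in this codebase's naming convention:
# presumably True once the thread handle is signaled, False on timeout.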
| 46.6 | 96 | 0.866953 | 15 | 233 | 13.466667 | 0.733333 | 0.247525 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077253 | 233 | 4 | 97 | 58.25 | 0.939535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
bc674b66a55cc643798946aa8817e06e0ea226c4 | 50,429 | py | Python | app/grandchallenge/reader_studies/migrations/0012_auto_20210825_1346.py | nlessmann/grand-challenge.org | 36abf6ccb40e2fc3fd3ff00e81deabd76f7e1ef8 | [
"Apache-2.0"
] | 101 | 2018-04-11T14:48:04.000Z | 2022-03-28T00:29:48.000Z | app/grandchallenge/reader_studies/migrations/0012_auto_20210825_1346.py | nlessmann/grand-challenge.org | 36abf6ccb40e2fc3fd3ff00e81deabd76f7e1ef8 | [
"Apache-2.0"
] | 1,733 | 2018-03-21T11:56:16.000Z | 2022-03-31T14:58:30.000Z | app/grandchallenge/reader_studies/migrations/0012_auto_20210825_1346.py | nlessmann/grand-challenge.org | 36abf6ccb40e2fc3fd3ff00e81deabd76f7e1ef8 | [
"Apache-2.0"
] | 42 | 2018-06-08T05:49:07.000Z | 2022-03-29T08:43:01.000Z | # Generated by Django 3.1.13 on 2021-08-25 13:46
from django.db import migrations, models
import grandchallenge.core.validators
class Migration(migrations.Migration):
dependencies = [
("reader_studies", "0011_auto_20210601_0802"),
]
operations = [
migrations.AlterField(
model_name="answer",
name="answer",
field=models.JSONField(
null=True,
validators=[
grandchallenge.core.validators.JSONValidator(
schema={
"$schema": "http://json-schema.org/draft-07/schema#",
"anyOf": [
{"$ref": "#/definitions/null"},
{"$ref": "#/definitions/STXT"},
{"$ref": "#/definitions/MTXT"},
{"$ref": "#/definitions/BOOL"},
{"$ref": "#/definitions/NUMB"},
{"$ref": "#/definitions/HEAD"},
{"$ref": "#/definitions/2DBB"},
{"$ref": "#/definitions/DIST"},
{"$ref": "#/definitions/MDIS"},
{"$ref": "#/definitions/POIN"},
{"$ref": "#/definitions/MPOI"},
{"$ref": "#/definitions/POLY"},
{"$ref": "#/definitions/PIMG"},
{"$ref": "#/definitions/MPOL"},
{"$ref": "#/definitions/MPIM"},
{"$ref": "#/definitions/CHOI"},
{"$ref": "#/definitions/MCHO"},
{"$ref": "#/definitions/MCHD"},
{"$ref": "#/definitions/M2DB"},
{"$ref": "#/definitions/MASK"},
],
"definitions": {
"2D-bounding-box-object": {
"additionalProperties": False,
"properties": {
"corners": {
"items": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"maxItems": 4,
"minItems": 4,
"type": "array",
},
"name": {"type": "string"},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"type": {"enum": ["2D bounding box"]},
},
"required": ["corners"],
"type": "object",
},
"2DBB": {
"additionalProperties": False,
"properties": {
"corners": {
"items": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"maxItems": 4,
"minItems": 4,
"type": "array",
},
"name": {"type": "string"},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"type": {"enum": ["2D bounding box"]},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "corners"],
"type": "object",
},
"BOOL": {"type": "boolean"},
"CHOI": {"type": "number"},
"DIST": {
"additionalProperties": False,
"properties": {
"end": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"name": {"type": "string"},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"start": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"type": {
"enum": ["Distance measurement"]
},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": [
"version",
"type",
"start",
"end",
],
"type": "object",
},
"HEAD": {"type": "null"},
"M2DB": {
"additionalProperties": False,
"properties": {
"boxes": {
"items": {
"allOf": [
{
"$ref": "#/definitions/2D-bounding-box-object"
}
]
},
"type": "array",
},
"name": {"type": "string"},
"type": {
"enum": [
"Multiple 2D bounding boxes"
]
},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "boxes"],
"type": "object",
},
"MASK": {
"additionalProperties": False,
"properties": {
"upload_session_pk": {
"format": "uuid",
"type": "string",
}
},
"required": ["upload_session_pk"],
"type": "object",
},
"MCHD": {
"items": {"type": "number"},
"type": "array",
},
"MCHO": {
"items": {"type": "number"},
"type": "array",
},
"MDIS": {
"additionalProperties": False,
"properties": {
"lines": {
"items": {
"allOf": [
{
"$ref": "#/definitions/line-object"
}
]
},
"type": "array",
},
"name": {"type": "string"},
"type": {
"enum": [
"Multiple distance measurements"
]
},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "lines"],
"type": "object",
},
"MPIM": {
"additionalProperties": False,
"properties": {
"upload_session_pk": {
"format": "uuid",
"type": "string",
}
},
"required": ["upload_session_pk"],
"type": "object",
},
"MPOI": {
"additionalProperties": False,
"properties": {
"name": {"type": "string"},
"points": {
"items": {
"allOf": [
{
"$ref": "#/definitions/point-object"
}
]
},
"type": "array",
},
"type": {"enum": ["Multiple points"]},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "points"],
"type": "object",
},
"MPOL": {
"additionalProperties": False,
"properties": {
"name": {"type": "string"},
"polygons": {
"items": {
"$ref": "#/definitions/polygon-object"
},
"type": "array",
},
"type": {
"enum": ["Multiple polygons"]
},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": [
"type",
"version",
"polygons",
],
"type": "object",
},
"MTXT": {"type": "string"},
"NUMB": {"type": "number"},
"PIMG": {
"additionalProperties": False,
"properties": {
"upload_session_pk": {
"format": "uuid",
"type": "string",
}
},
"required": ["upload_session_pk"],
"type": "object",
},
"POIN": {
"additionalProperties": False,
"properties": {
"name": {"type": "string"},
"point": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"type": {"enum": ["Point"]},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "point"],
"type": "object",
},
"POLY": {
"additionalProperties": False,
"properties": {
"groups": {
"items": {"type": "string"},
"type": "array",
},
"name": {"type": "string"},
"path_points": {
"items": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"type": "array",
},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"seed_point": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"sub_type": {"type": "string"},
"type": {"enum": ["Polygon"]},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": [
"seed_point",
"path_points",
"sub_type",
"groups",
"version",
],
"type": "object",
},
"STXT": {"type": "string"},
"line-object": {
"additionalProperties": False,
"properties": {
"end": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"name": {"type": "string"},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"start": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"type": {
"enum": ["Distance measurement"]
},
},
"required": ["start", "end"],
"type": "object",
},
"null": {"type": "null"},
"point-object": {
"additionalProperties": False,
"properties": {
"name": {"type": "string"},
"point": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"type": {"enum": ["Point"]},
},
"required": ["point"],
"type": "object",
},
"polygon-object": {
"additionalProperties": False,
"properties": {
"groups": {
"items": {"type": "string"},
"type": "array",
},
"name": {"type": "string"},
"path_points": {
"items": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"type": "array",
},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"seed_point": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"sub_type": {"type": "string"},
"type": {"enum": ["Polygon"]},
},
"required": [
"seed_point",
"path_points",
"sub_type",
"groups",
],
"type": "object",
},
"version-object": {
"additionalProperties": False,
"properties": {
"major": {
"minimum": 0,
"multipleOf": 1.0,
"type": "number",
},
"minor": {
"minimum": 0,
"multipleOf": 1.0,
"type": "number",
},
},
"required": ["major", "minor"],
"type": "object",
},
},
}
)
],
),
),
migrations.AlterField(
model_name="historicalanswer",
name="answer",
field=models.JSONField(
null=True,
validators=[
grandchallenge.core.validators.JSONValidator(
schema={
"$schema": "http://json-schema.org/draft-07/schema#",
"anyOf": [
{"$ref": "#/definitions/null"},
{"$ref": "#/definitions/STXT"},
{"$ref": "#/definitions/MTXT"},
{"$ref": "#/definitions/BOOL"},
{"$ref": "#/definitions/NUMB"},
{"$ref": "#/definitions/HEAD"},
{"$ref": "#/definitions/2DBB"},
{"$ref": "#/definitions/DIST"},
{"$ref": "#/definitions/MDIS"},
{"$ref": "#/definitions/POIN"},
{"$ref": "#/definitions/MPOI"},
{"$ref": "#/definitions/POLY"},
{"$ref": "#/definitions/PIMG"},
{"$ref": "#/definitions/MPOL"},
{"$ref": "#/definitions/MPIM"},
{"$ref": "#/definitions/CHOI"},
{"$ref": "#/definitions/MCHO"},
{"$ref": "#/definitions/MCHD"},
{"$ref": "#/definitions/M2DB"},
{"$ref": "#/definitions/MASK"},
],
"definitions": {
"2D-bounding-box-object": {
"additionalProperties": False,
"properties": {
"corners": {
"items": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"maxItems": 4,
"minItems": 4,
"type": "array",
},
"name": {"type": "string"},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"type": {"enum": ["2D bounding box"]},
},
"required": ["corners"],
"type": "object",
},
"2DBB": {
"additionalProperties": False,
"properties": {
"corners": {
"items": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"maxItems": 4,
"minItems": 4,
"type": "array",
},
"name": {"type": "string"},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"type": {"enum": ["2D bounding box"]},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "corners"],
"type": "object",
},
"BOOL": {"type": "boolean"},
"CHOI": {"type": "number"},
"DIST": {
"additionalProperties": False,
"properties": {
"end": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"name": {"type": "string"},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"start": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"type": {
"enum": ["Distance measurement"]
},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": [
"version",
"type",
"start",
"end",
],
"type": "object",
},
"HEAD": {"type": "null"},
"M2DB": {
"additionalProperties": False,
"properties": {
"boxes": {
"items": {
"allOf": [
{
"$ref": "#/definitions/2D-bounding-box-object"
}
]
},
"type": "array",
},
"name": {"type": "string"},
"type": {
"enum": [
"Multiple 2D bounding boxes"
]
},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "boxes"],
"type": "object",
},
"MASK": {
"additionalProperties": False,
"properties": {
"upload_session_pk": {
"format": "uuid",
"type": "string",
}
},
"required": ["upload_session_pk"],
"type": "object",
},
"MCHD": {
"items": {"type": "number"},
"type": "array",
},
"MCHO": {
"items": {"type": "number"},
"type": "array",
},
"MDIS": {
"additionalProperties": False,
"properties": {
"lines": {
"items": {
"allOf": [
{
"$ref": "#/definitions/line-object"
}
]
},
"type": "array",
},
"name": {"type": "string"},
"type": {
"enum": [
"Multiple distance measurements"
]
},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "lines"],
"type": "object",
},
"MPIM": {
"additionalProperties": False,
"properties": {
"upload_session_pk": {
"format": "uuid",
"type": "string",
}
},
"required": ["upload_session_pk"],
"type": "object",
},
"MPOI": {
"additionalProperties": False,
"properties": {
"name": {"type": "string"},
"points": {
"items": {
"allOf": [
{
"$ref": "#/definitions/point-object"
}
]
},
"type": "array",
},
"type": {"enum": ["Multiple points"]},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "points"],
"type": "object",
},
"MPOL": {
"additionalProperties": False,
"properties": {
"name": {"type": "string"},
"polygons": {
"items": {
"$ref": "#/definitions/polygon-object"
},
"type": "array",
},
"type": {
"enum": ["Multiple polygons"]
},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": [
"type",
"version",
"polygons",
],
"type": "object",
},
"MTXT": {"type": "string"},
"NUMB": {"type": "number"},
"PIMG": {
"additionalProperties": False,
"properties": {
"upload_session_pk": {
"format": "uuid",
"type": "string",
}
},
"required": ["upload_session_pk"],
"type": "object",
},
"POIN": {
"additionalProperties": False,
"properties": {
"name": {"type": "string"},
"point": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"type": {"enum": ["Point"]},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": ["version", "type", "point"],
"type": "object",
},
"POLY": {
"additionalProperties": False,
"properties": {
"groups": {
"items": {"type": "string"},
"type": "array",
},
"name": {"type": "string"},
"path_points": {
"items": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"type": "array",
},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"seed_point": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"sub_type": {"type": "string"},
"type": {"enum": ["Polygon"]},
"version": {
"$ref": "#/definitions/version-object"
},
},
"required": [
"seed_point",
"path_points",
"sub_type",
"groups",
"version",
],
"type": "object",
},
"STXT": {"type": "string"},
"line-object": {
"additionalProperties": False,
"properties": {
"end": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"name": {"type": "string"},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"start": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"type": {
"enum": ["Distance measurement"]
},
},
"required": ["start", "end"],
"type": "object",
},
"null": {"type": "null"},
"point-object": {
"additionalProperties": False,
"properties": {
"name": {"type": "string"},
"point": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"type": {"enum": ["Point"]},
},
"required": ["point"],
"type": "object",
},
"polygon-object": {
"additionalProperties": False,
"properties": {
"groups": {
"items": {"type": "string"},
"type": "array",
},
"name": {"type": "string"},
"path_points": {
"items": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"type": "array",
},
"probability": {
"maximum": 1,
"minimum": 0,
"type": "number",
},
"seed_point": {
"items": {"type": "number"},
"maxItems": 3,
"minItems": 3,
"type": "array",
},
"sub_type": {"type": "string"},
"type": {"enum": ["Polygon"]},
},
"required": [
"seed_point",
"path_points",
"sub_type",
"groups",
],
"type": "object",
},
"version-object": {
"additionalProperties": False,
"properties": {
"major": {
"minimum": 0,
"multipleOf": 1.0,
"type": "number",
},
"minor": {
"minimum": 0,
"multipleOf": 1.0,
"type": "number",
},
},
"required": ["major", "minor"],
"type": "object",
},
},
}
)
],
),
),
migrations.AlterField(
model_name="question",
name="answer_type",
field=models.CharField(
choices=[
("STXT", "Single line text"),
("MTXT", "Multi line text"),
("BOOL", "Bool"),
("NUMB", "Number"),
("HEAD", "Heading"),
("2DBB", "2D bounding box"),
("M2DB", "Multiple 2D bounding boxes"),
("DIST", "Distance measurement"),
("MDIS", "Multiple distance measurements"),
("POIN", "Point"),
("MPOI", "Multiple points"),
("POLY", "Polygon"),
("PIMG", "Polygon (saved as mask)"),
("MPOL", "Multiple polygons"),
("MPIM", "Multiple polygons (saved as mask)"),
("CHOI", "Choice"),
("MCHO", "Multiple choice"),
("MCHD", "Multiple choice dropdown"),
("MASK", "Mask"),
],
default="STXT",
max_length=4,
),
),
]
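# For reference, a hypothetical answer payload that satisfies the "DIST"
# definition in the schema above (coordinate values illustrative only):
#
#   {
#       "version": {"major": 1, "minor": 0},
#       "type": "Distance measurement",
#       "start": [0.0, 0.0, 0.0],
#       "end": [10.5, 0.0, 0.0],
#   }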
| 54.166488 | 102 | 0.174999 | 1,535 | 50,429 | 5.712052 | 0.085342 | 0.10219 | 0.127737 | 0.062956 | 0.926095 | 0.926095 | 0.926095 | 0.926095 | 0.926095 | 0.926095 | 0 | 0.011672 | 0.729878 | 50,429 | 930 | 103 | 54.224731 | 0.631992 | 0.000912 | 0 | 0.745395 | 1 | 0 | 0.16796 | 0.014787 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.002167 | 0 | 0.005417 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
bcc97809bae5b3ef3ce33fa4847a57ad78a25213 | 3,903 | py | Python | silo/benchmarks/results/istc11-3-16-13.py | anshsarkar/TailBench | 25845756aee9a892229c25b681051591c94daafd | [
"MIT"
] | 274 | 2015-01-23T16:24:09.000Z | 2022-02-22T03:16:14.000Z | silo/benchmarks/results/istc11-3-16-13.py | anshsarkar/TailBench | 25845756aee9a892229c25b681051591c94daafd | [
"MIT"
] | 3 | 2015-03-17T11:52:36.000Z | 2019-07-22T23:04:25.000Z | silo/benchmarks/results/istc11-3-16-13.py | anshsarkar/TailBench | 25845756aee9a892229c25b681051591c94daafd | [
"MIT"
] | 94 | 2015-01-07T06:55:36.000Z | 2022-01-22T08:14:15.000Z | RESULTS = [({'scale_factor': 1000, 'threads': 1, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'ycsb'}, (455923.0, 0.0)), ({'scale_factor': 1000, 'threads': 1, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'ycsb'}, (392189.0, 0.0)), ({'scale_factor': 4000, 'threads': 4, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'ycsb'}, (1837830.0, 0.0)), ({'scale_factor': 4000, 'threads': 4, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'ycsb'}, (1386150.0, 0.0)), ({'scale_factor': 8000, 'threads': 8, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'ycsb'}, (3117300.0, 0.0)), ({'scale_factor': 8000, 'threads': 8, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'ycsb'}, (2378310.0, 0.0)), ({'scale_factor': 12000, 'threads': 12, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'ycsb'}, (3941100.0, 0.0)), ({'scale_factor': 12000, 'threads': 12, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'ycsb'}, (3129000.0, 0.0)), ({'scale_factor': 16000, 'threads': 16, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'ycsb'}, (4299420.0, 0.0)), ({'scale_factor': 16000, 'threads': 16, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'ycsb'}, (3477480.0, 0.0)), ({'scale_factor': 20000, 'threads': 20, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'ycsb'}, (4436690.0, 0.0)), ({'scale_factor': 20000, 'threads': 20, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'ycsb'}, (3591450.0, 0.0)), ({'scale_factor': 24000, 'threads': 24, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'ycsb'}, (4492090.0, 0.0)), ({'scale_factor': 24000, 'threads': 24, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'ycsb'}, (3583380.0, 0.0)), ({'scale_factor': 28000, 'threads': 28, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'ycsb'}, (4523280.0, 0.0)), ({'scale_factor': 28000, 'threads': 28, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'ycsb'}, (3737430.0, 0.0)), ({'scale_factor': 32000, 'threads': 32, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'ycsb'}, (4557360.0, 0.0)), ({'scale_factor': 32000, 'threads': 32, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'ycsb'}, (4139190.0, 0.0)), ({'scale_factor': 1, 'threads': 1, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'tpcc'}, (28194.3, 0.0)), ({'scale_factor': 1, 'threads': 1, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'tpcc'}, (15643.4, 0.0)), ({'scale_factor': 4, 'threads': 4, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'tpcc'}, (103030.0, 0.0)), ({'scale_factor': 4, 'threads': 4, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'tpcc'}, (58260.7, 0.866664)), ({'scale_factor': 8, 'threads': 8, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'tpcc'}, (199311.0, 0.0)), ({'scale_factor': 8, 'threads': 8, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'tpcc'}, (115993.0, 1.83333)), ({'scale_factor': 12, 'threads': 12, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'tpcc'}, (288046.0, 0.0)), ({'scale_factor': 12, 'threads': 12, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'tpcc'}, (161253.0, 2.68333)), ({'scale_factor': 16, 'threads': 16, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'tpcc'}, (369982.0, 0.0)), ({'scale_factor': 16, 'threads': 16, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'tpcc'}, (214555.0, 3.24999)), ({'scale_factor': 20, 'threads': 20, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'tpcc'}, (458774.0, 0.0)), ({'scale_factor': 20, 'threads': 20, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'tpcc'}, (260806.0, 3.78332)), ({'scale_factor': 24, 'threads': 24, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'tpcc'}, (544124.0, 0.0)), ({'scale_factor': 24, 'threads': 24, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'tpcc'}, (296078.0, 4.59998)), ({'scale_factor': 
28, 'threads': 28, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'tpcc'}, (616619.0, 0.0)), ({'scale_factor': 28, 'threads': 28, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'tpcc'}, (320886.0, 5.46665)), ({'scale_factor': 32, 'threads': 32, 'txn_flags': 1, 'db': 'kvdb', 'bench': 'tpcc'}, (646355.0, 0.0)), ({'scale_factor': 32, 'threads': 32, 'txn_flags': 1, 'db': 'ndb-proto2', 'bench': 'tpcc'}, (295248.0, 4.09999))]
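# A minimal sketch for querying these results. Each entry pairs a config dict
# with a two-value measurement; the first value is presumably throughput in
# txns/sec and the second is assumed here to be an abort/error rate:
#
#   tpcc_kvdb = sorted(
#       (cfg['threads'], throughput)
#       for cfg, (throughput, _second_metric) in RESULTS
#       if cfg['bench'] == 'tpcc' and cfg['db'] == 'kvdb'
#   )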
| 1,951.5 | 3,902 | 0.570074 | 595 | 3,903 | 3.618487 | 0.134454 | 0.050163 | 0.150488 | 0.183929 | 0.862982 | 0.850906 | 0.850906 | 0.817464 | 0.708778 | 0.347422 | 0 | 0.172384 | 0.111197 | 3,903 | 1 | 3,903 | 3,903 | 0.448256 | 0 | 0 | 0 | 0 | 0 | 0.424289 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
4c415b864bb71ed2a841824e78401eb702260b2d | 264 | py | Python | pystt/utils.py | nws0507/pystt | af7393a2d16858f43b776d94511fd36c9c8e5312 | [
"MIT"
] | null | null | null | pystt/utils.py | nws0507/pystt | af7393a2d16858f43b776d94511fd36c9c8e5312 | [
"MIT"
] | null | null | null | pystt/utils.py | nws0507/pystt | af7393a2d16858f43b776d94511fd36c9c8e5312 | [
"MIT"
] | null | null | null | def escape_string(s):
s = s.replace("@", "@A")
s = s.replace('/', '@S')
return s
def unescape_string(s):
s = s.replace('@S', '/')  # undo the '/' escaping before the '@' escaping, otherwise escape_string("@S") would not round-trip
s = s.replace("@A", "@")
return s
def first_unescape_string(s):
s = s.replace("@A", "@")
return s
| 16.5 | 29 | 0.5 | 39 | 264 | 3.282051 | 0.230769 | 0.125 | 0.351563 | 0.210938 | 0.835938 | 0.835938 | 0.835938 | 0.578125 | 0.578125 | 0.578125 | 0 | 0 | 0.257576 | 264 | 15 | 30 | 17.6 | 0.653061 | 0 | 0 | 0.454545 | 0 | 0 | 0.056818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
4c55905526775df7f486aa46eab833456adf59f3 | 29,966 | py | Python | views/AntdTreeSelect.py | RuixiangS/feffery-antd-docs | c48d34ed657ec8d6893440c0ee6382598c564922 | [
"MIT"
] | 10 | 2021-05-20T06:52:42.000Z | 2022-03-29T08:36:58.000Z | views/AntdTreeSelect.py | RuixiangS/feffery-antd-docs | c48d34ed657ec8d6893440c0ee6382598c564922 | [
"MIT"
] | null | null | null | views/AntdTreeSelect.py | RuixiangS/feffery-antd-docs | c48d34ed657ec8d6893440c0ee6382598c564922 | [
"MIT"
] | 2 | 2021-09-14T07:07:00.000Z | 2021-12-10T01:03:25.000Z | from dash import html
import feffery_antd_components as fac
import feffery_utils_components as fuc
import callbacks.AntdTreeSelect
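# Importing callbacks.AntdTreeSelect presumably registers this page's Dash
# callbacks as an import side effect; nothing from it is referenced below.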
docs_content = html.Div(
[
html.Div(
[
html.H2(
'AntdTreeSelect(id, className, style, *args, **kwargs)',
style={
'borderLeft': '4px solid grey',
'padding': '3px 0 3px 10px',
'backgroundColor': '#f5f5f5'
}
),
fac.AntdBackTop(
containerId='docs-content',
duration=0.6
),
html.Span(
'主要参数说明:',
id='主要参数说明',
style={
'borderLeft': '4px solid grey',
'padding': '3px 0 3px 10px',
'backgroundColor': '#f5f5f5',
'fontWeight': 'bold',
'fontSize': '1.2rem'
}
),
fuc.FefferyMarkdown(
markdownStr=open('documents/AntdTreeSelect.md', encoding='utf-8').read()
),
html.Div(
html.Span(
'使用示例',
id='使用示例',
style={
'borderLeft': '4px solid grey',
'padding': '3px 0 3px 10px',
'backgroundColor': '#f5f5f5',
'fontWeight': 'bold',
'fontSize': '1.2rem'
}
),
style={
'marginBottom': '10px'
}
),
html.Div(
[
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
style={
'width': '250px'
}
),
fac.AntdDivider(
'基础使用',
lineColor='#f0f0f0',
innerTextOrientation='left'
),
fac.AntdCollapse(
fuc.FefferySyntaxHighlighter(
showLineNumbers=True,
showInlineLineNumbers=True,
language='python',
codeStyle='coy-without-shadows',
codeString='''
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
style={
'width': '250px'
}
)'''
),
title='点击查看代码',
is_open=False,
ghost=True
)
],
style={
'marginBottom': '40px',
'padding': '10px 10px 20px 10px',
'border': '1px solid #f0f0f0'
},
id='基础使用',
className='div-highlight'
),
html.Div(
[
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
multiple=True,
style={
'width': '250px'
}
),
fac.AntdDivider(
'多选模式',
lineColor='#f0f0f0',
innerTextOrientation='left'
),
fac.AntdCollapse(
fuc.FefferySyntaxHighlighter(
showLineNumbers=True,
showInlineLineNumbers=True,
language='python',
codeStyle='coy-without-shadows',
codeString='''
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
multiple=True,
style={
'width': '250px'
}
)'''
),
title='点击查看代码',
is_open=False,
ghost=True
)
],
style={
'marginBottom': '40px',
'padding': '10px 10px 20px 10px',
'border': '1px solid #f0f0f0'
},
id='多选模式',
className='div-highlight'
),
html.Div(
[
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
treeCheckable=True,
style={
'width': '250px'
}
),
fac.AntdDivider(
'开启勾选框',
lineColor='#f0f0f0',
innerTextOrientation='left'
),
fac.AntdCollapse(
fuc.FefferySyntaxHighlighter(
showLineNumbers=True,
showInlineLineNumbers=True,
language='python',
codeStyle='coy-without-shadows',
codeString='''
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
treeCheckable=True,
style={
'width': '250px'
}
)'''
),
title='点击查看代码',
is_open=False,
ghost=True
)
],
style={
'marginBottom': '40px',
'padding': '10px 10px 20px 10px',
'border': '1px solid #f0f0f0'
},
id='开启勾选框',
className='div-highlight'
),
html.Div(
[
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
treeCheckable=True,
treeLine=True,
style={
'width': '250px'
}
),
fac.AntdDivider(
'显示树连接线',
lineColor='#f0f0f0',
innerTextOrientation='left'
),
fac.AntdCollapse(
fuc.FefferySyntaxHighlighter(
showLineNumbers=True,
showInlineLineNumbers=True,
language='python',
codeStyle='coy-without-shadows',
codeString='''
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
treeCheckable=True,
treeLine=True,
style={
'width': '250px'
}
)'''
),
title='点击查看代码',
is_open=False,
ghost=True
)
],
style={
'marginBottom': '40px',
'padding': '10px 10px 20px 10px',
'border': '1px solid #f0f0f0'
},
id='显示树连接线',
className='div-highlight'
),
html.Div(
[
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": f"Child Node{i + 1}",
"value": f"0-0-{i}",
"key": f"0-0-{i}"
}
for i in range(20)
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
treeCheckable=True,
treeLine=True,
virtual=False,
treeDefaultExpandAll=True,
style={
'width': '250px'
}
),
fac.AntdDivider(
'关闭虚拟滚动',
lineColor='#f0f0f0',
innerTextOrientation='left'
),
fac.AntdCollapse(
fuc.FefferySyntaxHighlighter(
showLineNumbers=True,
showInlineLineNumbers=True,
language='python',
codeStyle='coy-without-shadows',
codeString='''
fac.AntdTreeSelect(
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": f"Child Node{i + 1}",
"value": f"0-0-{i}",
"key": f"0-0-{i}"
}
for i in range(20)
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
treeCheckable=True,
treeLine=True,
virtual=False,
treeDefaultExpandAll=True,
style={
'width': '250px'
}
)'''
),
title='点击查看代码',
is_open=False,
ghost=True
)
],
style={
'marginBottom': '40px',
'padding': '10px 10px 20px 10px',
'border': '1px solid #f0f0f0'
},
id='关闭虚拟滚动',
className='div-highlight'
),
html.Div(
[
fac.AntdSpin(
[
html.Div(
[
fac.AntdText('value:', strong=True),
fac.AntdText(id='tree-select-demo-output')
]
),
fac.AntdTreeSelect(
id='tree-select-demo',
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
style={
'width': '250px'
}
)
],
text='回调中'
),
fac.AntdDivider(
'回调示例',
lineColor='#f0f0f0',
innerTextOrientation='left'
),
fac.AntdCollapse(
fuc.FefferySyntaxHighlighter(
showLineNumbers=True,
showInlineLineNumbers=True,
language='python',
codeStyle='coy-without-shadows',
codeString='''
fac.AntdSpin(
[
html.Div(
[
fac.AntdText('value:', strong=True),
fac.AntdText(id='tree-select-demo-output')
]
),
fac.AntdTreeSelect(
id='tree-select-demo',
treeData=[
{
"title": "Node1",
"value": "0-0",
"key": "0-0",
"children": [
{
"title": "Child Node1",
"value": "0-0-0",
"key": "0-0-0"
}
]
},
{
"title": "Node2",
"value": "0-1",
"key": "0-1",
"children": [
{
"title": "Child Node3",
"value": "0-1-0",
"key": "0-1-0"
},
{
"title": "Child Node4",
"value": "0-1-1",
"key": "0-1-1"
},
{
"title": "Child Node5",
"value": "0-1-2",
"key": "0-1-2"
}
]
}
],
style={
'width': '250px'
}
)
],
text='回调中'
)
...
@app.callback(
Output('tree-select-demo-output', 'children'),
Input('tree-select-demo', 'value')
)
def tree_select_demo_callback(value):
return str(value)
'''
),
title='点击查看代码',
is_open=False,
ghost=True
)
],
style={
'marginBottom': '40px',
'padding': '10px 10px 20px 10px',
'border': '1px solid #f0f0f0'
},
id='回调示例',
className='div-highlight'
),
html.Div(style={'height': '100px'})
],
style={
'flex': 'auto'
}
),
html.Div(
fac.AntdAnchor(
linkDict=[
{'title': '主要参数说明', 'href': '#主要参数说明'},
{
'title': '使用示例',
'href': '#使用示例',
'children': [
{'title': '基础使用', 'href': '#基础使用'},
{'title': '多选模式', 'href': '#多选模式'},
{'title': '开启勾选框', 'href': '#开启勾选框'},
{'title': '显示树连接线', 'href': '#显示树连接线'},
{'title': '关闭虚拟滚动', 'href': '#关闭虚拟滚动'},
{'title': '回调示例', 'href': '#回调示例'},
]
},
],
containerId='docs-content',
targetOffset=200
),
style={
'flex': 'none',
'margin': '20px'
}
)
],
style={
'display': 'flex'
}
)
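# docs_content is presumably mounted inside a scrollable container with
# id='docs-content' elsewhere in the app, since AntdBackTop and AntdAnchor
# above both target containerId='docs-content'. A hypothetical wiring:
#
#   app.layout = html.Div(docs_content, id='docs-content')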
| 36.147165 | 92 | 0.202296 | 1,394 | 29,966 | 4.338594 | 0.099713 | 0.031746 | 0.055556 | 0.02381 | 0.859954 | 0.853009 | 0.83912 | 0.83912 | 0.83912 | 0.83912 | 0 | 0.07032 | 0.696756 | 29,966 | 828 | 93 | 36.190821 | 0.595246 | 0 | 0 | 0.653266 | 0 | 0 | 0.321331 | 0.008443 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.005025 | 0 | 0.006281 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4c667d8c0150192597439b3d4c1f83f8f2583563 | 50,565 | py | Python | canopy/openapi/api/membership_api.py | CanopySimulations/canopy-python | 9ec37e674e65d6fbef0402ac0c612c163d55631e | [
"MIT"
] | null | null | null | canopy/openapi/api/membership_api.py | CanopySimulations/canopy-python | 9ec37e674e65d6fbef0402ac0c612c163d55631e | [
"MIT"
] | 1 | 2022-01-31T10:18:08.000Z | 2022-01-31T10:18:08.000Z | canopy/openapi/api/membership_api.py | CanopySimulations/canopy-python | 9ec37e674e65d6fbef0402ac0c612c163d55631e | [
"MIT"
] | null | null | null | # coding: utf-8
"""
Canopy.Api
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: v1
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from canopy.openapi.api_client import ApiClient
from canopy.openapi.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class MembershipApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
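# Hypothetical construction (host and credentials assumed, not part of this file):
#
#   import canopy.openapi
#   configuration = canopy.openapi.Configuration(host="https://api.example.com")
#   api = MembershipApi(canopy.openapi.ApiClient(configuration))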
def membership_delete_refresh_tokens(self, tenant_id, user_id, **kwargs): # noqa: E501
"""membership_delete_refresh_tokens # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_delete_refresh_tokens(tenant_id, user_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str tenant_id: (required)
:param str user_id: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.membership_delete_refresh_tokens_with_http_info(tenant_id, user_id, **kwargs) # noqa: E501
def membership_delete_refresh_tokens_with_http_info(self, tenant_id, user_id, **kwargs): # noqa: E501
"""membership_delete_refresh_tokens # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_delete_refresh_tokens_with_http_info(tenant_id, user_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str tenant_id: (required)
:param str user_id: (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'tenant_id',
'user_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method membership_delete_refresh_tokens" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'tenant_id' is set
if self.api_client.client_side_validation and ('tenant_id' not in local_var_params or # noqa: E501
local_var_params['tenant_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `tenant_id` when calling `membership_delete_refresh_tokens`") # noqa: E501
# verify the required parameter 'user_id' is set
if self.api_client.client_side_validation and ('user_id' not in local_var_params or # noqa: E501
local_var_params['user_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `user_id` when calling `membership_delete_refresh_tokens`") # noqa: E501
collection_formats = {}
path_params = {}
if 'tenant_id' in local_var_params:
path_params['tenantId'] = local_var_params['tenant_id'] # noqa: E501
if 'user_id' in local_var_params:
path_params['userId'] = local_var_params['user_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/membership/refresh-tokens/{tenantId}/{userId}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def membership_get_password_reset_token_validity(self, user_id, token, **kwargs): # noqa: E501
"""membership_get_password_reset_token_validity # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_get_password_reset_token_validity(user_id, token, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str user_id: (required)
:param str token: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.membership_get_password_reset_token_validity_with_http_info(user_id, token, **kwargs) # noqa: E501
def membership_get_password_reset_token_validity_with_http_info(self, user_id, token, **kwargs): # noqa: E501
"""membership_get_password_reset_token_validity # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_get_password_reset_token_validity_with_http_info(user_id, token, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str user_id: (required)
:param str token: (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(object, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'user_id',
'token'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method membership_get_password_reset_token_validity" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'user_id' is set
if self.api_client.client_side_validation and ('user_id' not in local_var_params or # noqa: E501
local_var_params['user_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `user_id` when calling `membership_get_password_reset_token_validity`") # noqa: E501
# verify the required parameter 'token' is set
if self.api_client.client_side_validation and ('token' not in local_var_params or # noqa: E501
local_var_params['token'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `token` when calling `membership_get_password_reset_token_validity`") # noqa: E501
collection_formats = {}
path_params = {}
if 'user_id' in local_var_params:
path_params['userId'] = local_var_params['user_id'] # noqa: E501
query_params = []
if 'token' in local_var_params and local_var_params['token'] is not None: # noqa: E501
query_params.append(('token', local_var_params['token'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'text/json', 'application/xml', 'text/xml']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/membership/password-reset-tokens/{userId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def membership_get_user_roles(self, tenant_id, user_id, **kwargs): # noqa: E501
"""membership_get_user_roles # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_get_user_roles(tenant_id, user_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str tenant_id: (required)
:param str user_id: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: GetUserRolesQueryResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.membership_get_user_roles_with_http_info(tenant_id, user_id, **kwargs) # noqa: E501
def membership_get_user_roles_with_http_info(self, tenant_id, user_id, **kwargs): # noqa: E501
"""membership_get_user_roles # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_get_user_roles_with_http_info(tenant_id, user_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str tenant_id: (required)
:param str user_id: (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(GetUserRolesQueryResult, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'tenant_id',
'user_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method membership_get_user_roles" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'tenant_id' is set
if self.api_client.client_side_validation and ('tenant_id' not in local_var_params or # noqa: E501
local_var_params['tenant_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `tenant_id` when calling `membership_get_user_roles`") # noqa: E501
# verify the required parameter 'user_id' is set
if self.api_client.client_side_validation and ('user_id' not in local_var_params or # noqa: E501
local_var_params['user_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `user_id` when calling `membership_get_user_roles`") # noqa: E501
collection_formats = {}
path_params = {}
if 'tenant_id' in local_var_params:
path_params['tenantId'] = local_var_params['tenant_id'] # noqa: E501
if 'user_id' in local_var_params:
path_params['userId'] = local_var_params['user_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'text/json']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/membership/roles/{tenantId}/{userId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GetUserRolesQueryResult', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def membership_post_identified_user(self, identified_user_data, **kwargs): # noqa: E501
"""membership_post_identified_user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_identified_user(identified_user_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param IdentifiedUserData identified_user_data: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.membership_post_identified_user_with_http_info(identified_user_data, **kwargs) # noqa: E501
def membership_post_identified_user_with_http_info(self, identified_user_data, **kwargs): # noqa: E501
"""membership_post_identified_user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_identified_user_with_http_info(identified_user_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param IdentifiedUserData identified_user_data: (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(object, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'identified_user_data'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method membership_post_identified_user" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'identified_user_data' is set
if self.api_client.client_side_validation and ('identified_user_data' not in local_var_params or # noqa: E501
local_var_params['identified_user_data'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `identified_user_data` when calling `membership_post_identified_user`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'identified_user_data' in local_var_params:
body_params = local_var_params['identified_user_data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'text/json', 'application/xml', 'text/xml']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'text/json', 'application/xml', 'text/xml', 'application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/membership/identified-users', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def membership_post_initialize(self, **kwargs): # noqa: E501
"""membership_post_initialize # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_initialize(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.membership_post_initialize_with_http_info(**kwargs) # noqa: E501
def membership_post_initialize_with_http_info(self, **kwargs): # noqa: E501
"""membership_post_initialize # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_initialize_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _return_http_data_only: return response data only, without
status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method membership_post_initialize" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/membership/initialize', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
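# Usage sketch (illustrative, not part of the generated client): assumes
# `api` is an instance of this generated API class, built the usual
# openapi-generator way from an ApiClient/Configuration pair.
#
#     api.membership_post_initialize()                      # synchronous
#     thread = api.membership_post_initialize(async_req=True)
#     thread.get()                                          # wait for result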
def membership_post_password_reset_confirmation(self, password_reset_confirmation_data, **kwargs): # noqa: E501
"""membership_post_password_reset_confirmation # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_password_reset_confirmation(password_reset_confirmation_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param PasswordResetConfirmationData password_reset_confirmation_data: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.membership_post_password_reset_confirmation_with_http_info(password_reset_confirmation_data, **kwargs) # noqa: E501
def membership_post_password_reset_confirmation_with_http_info(self, password_reset_confirmation_data, **kwargs): # noqa: E501
"""membership_post_password_reset_confirmation # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_password_reset_confirmation_with_http_info(password_reset_confirmation_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param PasswordResetConfirmationData password_reset_confirmation_data: (required)
:param _return_http_data_only: return response data only, without
status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(object, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'password_reset_confirmation_data'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method membership_post_password_reset_confirmation" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'password_reset_confirmation_data' is set
if self.api_client.client_side_validation and ('password_reset_confirmation_data' not in local_var_params or # noqa: E501
local_var_params['password_reset_confirmation_data'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `password_reset_confirmation_data` when calling `membership_post_password_reset_confirmation`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'password_reset_confirmation_data' in local_var_params:
body_params = local_var_params['password_reset_confirmation_data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'text/json', 'application/xml', 'text/xml']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'text/json', 'application/xml', 'text/xml', 'application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/membership/password-reset-confirmations', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def membership_post_password_reset_request(self, password_reset_request_data, **kwargs): # noqa: E501
"""membership_post_password_reset_request # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_password_reset_request(password_reset_request_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param PasswordResetRequestData password_reset_request_data: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.membership_post_password_reset_request_with_http_info(password_reset_request_data, **kwargs) # noqa: E501
def membership_post_password_reset_request_with_http_info(self, password_reset_request_data, **kwargs): # noqa: E501
"""membership_post_password_reset_request # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_password_reset_request_with_http_info(password_reset_request_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param PasswordResetRequestData password_reset_request_data: (required)
:param _return_http_data_only: return response data only, without
status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(object, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'password_reset_request_data'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method membership_post_password_reset_request" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'password_reset_request_data' is set
if self.api_client.client_side_validation and ('password_reset_request_data' not in local_var_params or # noqa: E501
local_var_params['password_reset_request_data'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `password_reset_request_data` when calling `membership_post_password_reset_request`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'password_reset_request_data' in local_var_params:
body_params = local_var_params['password_reset_request_data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'text/json', 'application/xml', 'text/xml']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'text/json', 'application/xml', 'text/xml', 'application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/membership/password-reset-requests', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def membership_post_registration(self, registration_data, **kwargs): # noqa: E501
"""membership_post_registration # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_registration(registration_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param RegistrationData registration_data: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.membership_post_registration_with_http_info(registration_data, **kwargs) # noqa: E501
def membership_post_registration_with_http_info(self, registration_data, **kwargs): # noqa: E501
"""membership_post_registration # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_post_registration_with_http_info(registration_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param RegistrationData registration_data: (required)
:param _return_http_data_only: return response data only, without
status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(object, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'registration_data'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method membership_post_registration" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'registration_data' is set
if self.api_client.client_side_validation and ('registration_data' not in local_var_params or # noqa: E501
local_var_params['registration_data'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `registration_data` when calling `membership_post_registration`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'registration_data' in local_var_params:
body_params = local_var_params['registration_data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'text/json', 'application/xml', 'text/xml']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'text/json', 'application/xml', 'text/xml', 'application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/membership/registrations', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def membership_put_user_role(self, tenant_id, user_id, role_data, **kwargs): # noqa: E501
"""membership_put_user_role # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_put_user_role(tenant_id, user_id, role_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str tenant_id: (required)
:param str user_id: (required)
:param UserRoleData role_data: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.membership_put_user_role_with_http_info(tenant_id, user_id, role_data, **kwargs) # noqa: E501
def membership_put_user_role_with_http_info(self, tenant_id, user_id, role_data, **kwargs): # noqa: E501
"""membership_put_user_role # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.membership_put_user_role_with_http_info(tenant_id, user_id, role_data, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str tenant_id: (required)
:param str user_id: (required)
:param UserRoleData role_data: (required)
:param _return_http_data_only: return response data only, without
status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'tenant_id',
'user_id',
'role_data'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method membership_put_user_role" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'tenant_id' is set
if self.api_client.client_side_validation and ('tenant_id' not in local_var_params or # noqa: E501
local_var_params['tenant_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `tenant_id` when calling `membership_put_user_role`") # noqa: E501
# verify the required parameter 'user_id' is set
if self.api_client.client_side_validation and ('user_id' not in local_var_params or # noqa: E501
local_var_params['user_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `user_id` when calling `membership_put_user_role`") # noqa: E501
# verify the required parameter 'role_data' is set
if self.api_client.client_side_validation and ('role_data' not in local_var_params or # noqa: E501
local_var_params['role_data'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `role_data` when calling `membership_put_user_role`") # noqa: E501
collection_formats = {}
path_params = {}
if 'tenant_id' in local_var_params:
path_params['tenantId'] = local_var_params['tenant_id'] # noqa: E501
if 'user_id' in local_var_params:
path_params['userId'] = local_var_params['user_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'role_data' in local_var_params:
body_params = local_var_params['role_data']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'text/json', 'application/xml', 'text/xml', 'application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/membership/roles/{tenantId}/{userId}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
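# Timeout/streaming sketch (illustrative; `api` and the payload objects are
# assumed to exist as in the methods above):
#
#     # a single number is the total timeout; a 2-tuple is (connect, read)
#     api.membership_put_user_role(tenant_id, user_id, role_data,
#                                  _request_timeout=(3.05, 27))
#     # _preload_content=False hands back the raw urllib3.HTTPResponse
#     raw = api.membership_post_initialize(_preload_content=False)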
| 46.389908 | 173 | 0.604212 | 5,537 | 50,565 | 5.219614 | 0.037024 | 0.039583 | 0.060552 | 0.028027 | 0.962112 | 0.953323 | 0.943255 | 0.930279 | 0.92547 | 0.918134 | 0 | 0.012926 | 0.323742 | 50,565 | 1,089 | 174 | 46.432507 | 0.832256 | 0.423554 | 0 | 0.719165 | 1 | 0 | 0.208645 | 0.084199 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036053 | false | 0.055028 | 0.009488 | 0 | 0.081594 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
d5b79550c3bb602df7bb1b6755406f7ae8269472 | 1,163 | py | Python | recipes/stages/_base_/data/pipelines/semisl_wo_hflip.py | openvinotoolkit/model_preparation_algorithm | 8d36bf5944837b7a3d22fc2c3a4cb93423619fc2 | [
"Apache-2.0"
] | null | null | null | recipes/stages/_base_/data/pipelines/semisl_wo_hflip.py | openvinotoolkit/model_preparation_algorithm | 8d36bf5944837b7a3d22fc2c3a4cb93423619fc2 | [
"Apache-2.0"
] | null | null | null | recipes/stages/_base_/data/pipelines/semisl_wo_hflip.py | openvinotoolkit/model_preparation_algorithm | 8d36bf5944837b7a3d22fc2c3a4cb93423619fc2 | [
"Apache-2.0"
] | null | null | null | img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
__resize_target_size = 224
train_pipeline = [
dict(type="Resize", size=__resize_target_size),
dict(type="AugMixAugment", config_str="augmix-m5-w3"),
dict(type="RandomRotate", p=0.35, angle=(-10, 10)),
dict(type="ToNumpy"),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type="ToTensor", keys=["gt_label"]),
dict(type="Collect", keys=["img", "gt_label"]),
]
test_pipeline = [
dict(type='Resize', size=__resize_target_size),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
]
train_pipeline_strong = [
dict(type="Resize", size=__resize_target_size),
dict(type="RandAugment", n=2, m=10),
dict(type="AugMixAugment", config_str="augmix-m5-w3"),
dict(type="RandomRotate", p=0.35, angle=(-10, 10)),
dict(type="ToNumpy"),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type="ToTensor", keys=["gt_label"]),
dict(type="Collect", keys=["img", "gt_label"]),
]
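# Wiring sketch (assumption: an mmclassification-style `data` section consumes
# these pipelines; the dataset type and split names below are illustrative):
#
#     data = dict(
#         train=dict(type='ClsDirDataset', pipeline=train_pipeline),
#         unlabeled=dict(type='ClsDirDataset', pipeline=train_pipeline_strong),
#         val=dict(type='ClsDirDataset', pipeline=test_pipeline),
#     )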
| 36.34375 | 77 | 0.638005 | 162 | 1,163 | 4.388889 | 0.32716 | 0.236287 | 0.056259 | 0.078762 | 0.822785 | 0.755274 | 0.755274 | 0.755274 | 0.755274 | 0.648383 | 0 | 0.056886 | 0.138435 | 1,163 | 31 | 78 | 37.516129 | 0.652695 | 0 | 0 | 0.6 | 0 | 0 | 0.232158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d5bafc7859e5d1011c31c26e7f36d33300b78ca5 | 9,422 | py | Python | demos/yelp_demo/demo_setup/project_creation/test_project.py | qq2016/kubeflow_learning | 930706686108f997aab42ccf2fe455dcf09a4afc | [
"Apache-2.0"
] | 1,165 | 2018-03-01T01:47:14.000Z | 2022-03-31T08:35:00.000Z | demos/yelp_demo/demo_setup/project_creation/test_project.py | arki1/examples | c93b792d67c8c52bc91d4ccf5fbaead4e2324331 | [
"Apache-2.0"
] | 929 | 2018-02-04T18:20:16.000Z | 2022-03-31T18:20:43.000Z | demos/yelp_demo/demo_setup/project_creation/test_project.py | arki1/examples | c93b792d67c8c52bc91d4ccf5fbaead4e2324331 | [
"Apache-2.0"
] | 687 | 2018-02-01T21:35:30.000Z | 2022-03-29T07:47:47.000Z | """Unit tests for `project.py`"""
import copy
import unittest
import project as p
class Context:
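"""Minimal stand-in for the `context` object Deployment Manager passes to a
template's GenerateConfig(); only the `env` and `properties` attributes that
project.py reads are modeled."""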
def __init__(self, env, properties):
self.env = env
self.properties = properties
class ProjectTestCase(unittest.TestCase):
"""Tests for `project.py`."""
default_env = {'name': 'my-project',
'project_number': '1234',
'current_time': 0}
default_properties = {
'organization-id': "1234",
'billing-account-name': 'foo',
'apis': [],
'set-dm-service-account-as-owner': True,
'concurrent_api_activation': True,
'service-accounts': []
}
def test_only_one_of_organizationid_or_parentfolderid(self):
"""Test that we validate that there can be exactly one of organization-id
or parent-folder-id specified"""
properties_oid = {
'organization-id': "12345"
}
properties_folder = {
'parent-folder-id': "12345"
}
properties_both = {
'organization-id': "12345",
'parent-folder-id': "12345"
}
properties_none = {}
self.assertTrue(p.IsProjectParentValid(properties_oid))
self.assertTrue(p.IsProjectParentValid(properties_folder))
self.assertFalse(p.IsProjectParentValid(properties_both))
self.assertFalse(p.IsProjectParentValid(properties_none))
def test_generateconfig_sets_project_parent(self):
"""Test that we set the right values for project parent"""
env = copy.deepcopy(self.default_env)
properties = copy.deepcopy(self.default_properties)
context = Context(env, properties)
resources = p.GenerateConfig(context)['resources']
expected_project_parent = {
'type': 'organization',
'id': "1234"
}
project_resource = [
resource for resource in resources
if resource.get('type') == 'cloudresourcemanager.v1.project']
self.assertEqual(
expected_project_parent, project_resource[0]['properties']['parent'])
properties['parent-folder-id'] = "1234"
del properties['organization-id']
context = Context(env, properties)
resources = p.GenerateConfig(context)['resources']
expected_project_parent = {
'type': 'folder',
'id': "1234"
}
project_resource = [
resource for resource in resources
if resource.get('type') == 'cloudresourcemanager.v1.project']
self.assertEqual(
expected_project_parent, project_resource[0]['properties']['parent'])
def test_patch_iam_policy_with_owner(self):
"""Test that we set the right values for project parent"""
env = copy.deepcopy(self.default_env)
properties = copy.deepcopy(self.default_properties)
context = Context(env, properties)
resources = p.GenerateConfig(context)['resources']
expected_patch = {
'add': [{
'role': 'roles/owner',
'members': [
'serviceAccount:$(ref.my-project.projectNumber)'
'@cloudservices.gserviceaccount.com'
]
}],
'remove': []
}
patch_action = [
resource for resource in resources
if resource['name'] == 'patch-iam-policy-my-project']
self.assertEqual(
expected_patch, patch_action[0]['properties']['gcpIamPolicyPatch'])
del properties['set-dm-service-account-as-owner']
context = Context(env, properties)
resources = p.GenerateConfig(context)['resources']
patch_action = [
resource for resource in resources
if resource['name'] == 'set-dm-service-account-as-owner']
self.assertEqual([], patch_action)
def test_patch_iam_policy_with_default_dm_and_adding_owner(self):
"""Test IAM patching correctly adds and removes service accounts and merges
in the default DM service account to the owner role"""
env = copy.deepcopy(self.default_env)
properties = copy.deepcopy(self.default_properties)
properties['iam-policy-patch'] = {
'add': [{
'role': 'roles/owner',
'members': [
'user:me@domain.com',
]
}]
}
context = Context(env, properties)
resources = p.GenerateConfig(context)['resources']
expected_patch = {
'add': [{
'role': 'roles/owner',
'members': [
'user:me@domain.com',
'serviceAccount:$(ref.my-project.projectNumber)'
'@cloudservices.gserviceaccount.com'
]
}],
'remove': []
}
patch_action = [
resource for resource in resources
if resource['name'] == 'patch-iam-policy-my-project']
self.assertEqual(
expected_patch, patch_action[0]['properties']['gcpIamPolicyPatch'])
def test_patch_iam_policy_containing_default_dm_as_owner_already(self):
"""Test IAM patching correctly merges in the default DM service account to
the owner role only once"""
env = copy.deepcopy(self.default_env)
properties = copy.deepcopy(self.default_properties)
properties['iam-policy-patch'] = {
'add': [{
'role': 'roles/owner',
'members': [
'serviceAccount:$(ref.my-project.projectNumber)'
'@cloudservices.gserviceaccount.com'
]
}]
}
context = Context(env, properties)
resources = p.GenerateConfig(context)['resources']
expected_patch = {
'add': [{
'role': 'roles/owner',
'members': [
'serviceAccount:$(ref.my-project.projectNumber)'
'@cloudservices.gserviceaccount.com'
]
}],
'remove': []
}
patch_action = [
resource for resource in resources
if resource['name'] == 'patch-iam-policy-my-project']
self.assertEqual(
expected_patch, patch_action[0]['properties']['gcpIamPolicyPatch'])
def test_patch_iam_policy_with_default_dm(self):
"""Test IAM patching correctly adds and removes service accounts and adds
in the default DM service account to the owner role"""
env = copy.deepcopy(self.default_env)
properties = copy.deepcopy(self.default_properties)
properties['iam-policy-patch'] = {
'add': [{
'role': 'roles/viewer',
'members': [
'user:me@domain.com',
]
}]
}
context = Context(env, properties)
resources = p.GenerateConfig(context)['resources']
expected_patch = {
'add': [{
'role': 'roles/viewer',
'members': [
'user:me@domain.com',
]
}, {
'role': 'roles/owner',
'members': [
'serviceAccount:$(ref.my-project.projectNumber)'
'@cloudservices.gserviceaccount.com'
]
}],
'remove': []
}
patch_action = [
resource for resource in resources
if resource['name'] == 'patch-iam-policy-my-project']
self.assertEqual(
expected_patch, patch_action[0]['properties']['gcpIamPolicyPatch'])
def test_patch_iam_policy_without_default_dm(self):
"""Test IAM patching correctly adds and removes service accounts without
merging in the DM service account to the owner role"""
env = copy.deepcopy(self.default_env)
properties = copy.deepcopy(self.default_properties)
del properties['set-dm-service-account-as-owner']
properties['iam-policy-patch'] = {
'add': [{
'role': 'roles/owner',
'members': [
'user:me@domain.com',
]
}],
'remove': [{
'role': 'roles/editor',
'members': [
'serviceAccount:horribly-invalid-service-account@twitter.ru',
]
}]
}
context = Context(env, properties)
resources = p.GenerateConfig(context)['resources']
expected_patch = {
'add': [{
'role': 'roles/owner',
'members': [
'user:me@domain.com',
]
}],
'remove': [{
'role': 'roles/editor',
'members': [
'serviceAccount:horribly-invalid-service-account@twitter.ru',
]
}]
}
patch_action = [
resource for resource in resources
if resource['name'] == 'patch-iam-policy-my-project']
self.assertEqual(
expected_patch, patch_action[0]['properties']['gcpIamPolicyPatch'])
def test_generateconfig_fails_if_both_folder_and_org_present(self):
"""Test that we sys.exit() if both the parents are present"""
env = copy.deepcopy(self.default_env)
properties = copy.deepcopy(self.default_properties)
properties['parent-folder-id'] = "1234"
context = Context(env, properties)
with self.assertRaises(SystemExit) as cm:
p.GenerateConfig(context)
self.assertEqual(cm.exception.code,
('Invalid [organization-id, parent-folder-id], '
'must specify exactly one.'))
def test_generateconfig_fails_if_neither_folder_nor_org_present(self):
"""Test that we sys.exit() if both the parents are present"""
env = copy.deepcopy(self.default_env)
properties = copy.deepcopy(self.default_properties)
del properties['organization-id']
context = Context(env, properties)
with self.assertRaises(SystemExit) as cm:
p.GenerateConfig(context)
self.assertEqual(cm.exception.code,
('Invalid [organization-id, parent-folder-id], '
'must specify exactly one.'))
if __name__ == '__main__':
unittest.main()
| 33.293286 | 79 | 0.617916 | 973 | 9,422 | 5.842754 | 0.142857 | 0.043448 | 0.045031 | 0.064732 | 0.861038 | 0.797186 | 0.783641 | 0.783641 | 0.758311 | 0.746526 | 0 | 0.007673 | 0.253025 | 9,422 | 282 | 80 | 33.411348 | 0.800085 | 0.088835 | 0 | 0.705394 | 0 | 0 | 0.244505 | 0.101328 | 0 | 0 | 0 | 0 | 0.06639 | 1 | 0.041494 | false | 0 | 0.012448 | 0 | 0.070539 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d5fa94550ff8c6a3b97df70c3c0de9606172d525 | 23,099 | py | Python | tests/gamification/test_event_models.py | mrgambal/ner_trainer | 4ea617bb9a1c4778ce6dfa084c53e2667d037f67 | [
"BSD-3-Clause"
] | 33 | 2015-01-20T12:12:40.000Z | 2020-02-23T14:21:24.000Z | tests/gamification/test_event_models.py | mrgambal/vulyk | 4ea617bb9a1c4778ce6dfa084c53e2667d037f67 | [
"BSD-3-Clause"
] | 48 | 2015-01-13T16:29:44.000Z | 2020-10-21T13:09:23.000Z | tests/gamification/test_event_models.py | mrgambal/ner_trainer | 4ea617bb9a1c4778ce6dfa084c53e2667d037f67 | [
"BSD-3-Clause"
] | 9 | 2015-04-01T15:19:13.000Z | 2021-06-21T15:44:28.000Z | # -*- coding: utf-8 -*-
"""
test_event_models
"""
from datetime import datetime, timedelta
from decimal import Decimal
import unittest
from vulyk.blueprints.gamification.core.events import (
Event, NoAchievementsEvent, LevelEvent, AchievementsEvent,
AchievementsLevelEvent, DonateEvent)
from vulyk.blueprints.gamification.core.rules import Rule
from vulyk.blueprints.gamification.models.events import EventModel
from vulyk.blueprints.gamification.models.foundations import FundModel
from vulyk.blueprints.gamification.models.rules import RuleModel
from vulyk.models.tasks import AbstractTask, AbstractAnswer, Batch
from vulyk.models.user import Group, User
from .fixtures import FixtureFund
from ..base import BaseTest
from ..fixtures import FakeType
class TestEventModels(BaseTest):
TASK_TYPE = FakeType.type_name
USER = None
TASK = None
ANSWER = None
TIMESTAMP = datetime.now()
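# note: evaluated once at class-creation time, so every test in this class
# shares the same base timestamp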
@classmethod
def setUpClass(cls):
super().setUpClass()
Group.objects.create(
description='test', id='default', allowed_types=[cls.TASK_TYPE])
cls.BATCH = Batch(id='default', task_type=cls.TASK_TYPE).save()
cls.USER = User(username='user0', email='user0@email.com').save()
cls.TASK = FakeType.task_model(
id='task1',
task_type=cls.TASK_TYPE,
batch=cls.BATCH,
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save()
@classmethod
def tearDownClass(cls):
User.objects.delete()
Group.objects.delete()
AbstractTask.objects.delete()
Batch.objects.delete()
super().tearDownClass()
def setUp(self):
super().setUp()
self.ANSWER = FakeType.answer_model(
task=self.TASK,
created_by=self.USER,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save()
def tearDown(self):
AbstractAnswer.objects.delete()
EventModel.objects.delete()
FundModel.objects.delete()
FundModel._get_db()['images.files'].drop()
FundModel._get_db()['images.chunks'].drop()
RuleModel.objects.delete()
super().tearDown()
def test_no_achievements_ok(self):
ev = Event.build(
timestamp=self.TIMESTAMP,
user=self.USER,
answer=self.ANSWER,
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=None,
viewed=False)
EventModel.from_event(ev).save()
ev2 = EventModel.objects.get(answer=self.ANSWER).to_event()
self.assertIsInstance(ev, NoAchievementsEvent,
'Event is of the wrong type')
self.assertEqual(ev, ev2, 'Event was not saved and restored correctly')
def test_level_given_ok(self):
ev = Event.build(
timestamp=self.TIMESTAMP,
user=self.USER,
answer=self.ANSWER,
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
ev2 = EventModel.objects.get(answer=self.ANSWER).to_event()
self.assertIsInstance(ev, LevelEvent, 'Event is of the wrong type')
self.assertEqual(ev, ev2, 'Event was not saved and restored correctly')
def test_badge_given_ok(self):
rule = Rule(
badge='',
name='',
description='',
bonus=0,
tasks_number=0,
days_number=5,
is_weekend=False,
is_adjacent=True,
rule_id='100')
RuleModel.from_rule(rule).save()
ev = Event.build(
timestamp=self.TIMESTAMP,
user=self.USER,
answer=self.ANSWER,
points_given=Decimal(10),
coins=Decimal(10),
achievements=[rule],
acceptor_fund=None,
level_given=None,
viewed=False)
EventModel.from_event(ev).save()
ev2 = EventModel.objects.get(answer=self.ANSWER).to_event()
self.assertIsInstance(ev, AchievementsEvent,
'Event is of the wrong type')
self.assertEqual(ev, ev2, 'Event was not saved and restored correctly')
def test_level_badge_given_ok(self):
rule = Rule(
badge='',
name='',
description='',
bonus=0,
tasks_number=0,
days_number=5,
is_weekend=False,
is_adjacent=True,
rule_id='100')
RuleModel.from_rule(rule).save()
ev = Event.build(
timestamp=self.TIMESTAMP,
user=self.USER,
answer=self.ANSWER,
points_given=Decimal(10),
coins=Decimal(10),
achievements=[rule],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
ev2 = EventModel.objects.get(answer=self.ANSWER).to_event()
self.assertIsInstance(ev, AchievementsLevelEvent,
'Event is of the wrong type')
self.assertEqual(ev, ev2, 'Event was not saved and restored correctly')
def test_donate_ok(self):
fund = FixtureFund.get_fund()
ev = Event.build(
timestamp=self.TIMESTAMP,
user=self.USER,
answer=None,
points_given=0,
coins=Decimal(-10),
achievements=[],
acceptor_fund=fund,
level_given=None,
viewed=True)
EventModel.from_event(ev).save()
ev2 = EventModel.objects.get(
user=self.USER,
acceptor_fund=fund.id
).to_event()
self.assertIsInstance(ev, DonateEvent, 'Event is of the wrong type')
self.assertEqual(ev, ev2, 'Event was not saved and restored correctly')
def test_level_badge_to_dict(self):
rule = Rule(
badge='',
name='',
description='',
bonus=0,
tasks_number=0,
days_number=5,
is_weekend=False,
is_adjacent=True,
rule_id='100')
ev = Event.build(
timestamp=self.TIMESTAMP,
user=self.USER,
answer=self.ANSWER,
points_given=Decimal(10),
coins=Decimal(10),
achievements=[rule],
acceptor_fund=None,
level_given=2,
viewed=False)
expected = {
'timestamp': self.TIMESTAMP,
'user': self.USER.username,
'answer': self.ANSWER.as_dict(),
'points_given': 10,
'coins': 10,
'achievements': [rule.to_dict()],
'acceptor_fund': None,
'level_given': 2,
'viewed': False}
self.assertDictEqual(expected, ev.to_dict(),
'Event was not translated to dict correctly')
del expected['answer']
self.assertDictEqual(expected, ev.to_dict(ignore_answer=True),
'Event was not translated to dict correctly')
def test_donate_to_dict(self):
fund = FixtureFund.get_fund()
ev = Event.build(
timestamp=self.TIMESTAMP,
user=self.USER,
answer=None,
points_given=0,
coins=Decimal(-10),
achievements=[],
acceptor_fund=fund,
level_given=None,
viewed=True)
expected = {
'timestamp': self.TIMESTAMP,
'user': self.USER.username,
'answer': None,
'points_given': 0,
'coins': -10,
'achievements': [],
'acceptor_fund': fund.to_dict(),
'level_given': None,
'viewed': True}
self.assertDictEqual(expected, ev.to_dict(),
'Event was not translated to dict correctly')
del expected['answer']
self.assertDictEqual(expected, ev.to_dict(ignore_answer=True),
'Event was not translated to dict correctly')
def test_unread_events_correct_user(self):
users = [
User(username='user%s' % i, email='user%s@email.com' % i).save()
for i in range(0, 3)
]
for i in range(0, 9):
user = users[i % 3]
ev = Event.build(
timestamp=self.TIMESTAMP + timedelta(seconds=i),
user=users[i % 3],
answer=FakeType.answer_model(
task=FakeType.task_model(
id='task%s' % i,
task_type=self.TASK_TYPE,
batch='default',
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save(),
created_by=user,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save(),
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
index = 2
events = list(EventModel.get_unread_events(users[index]))
self.assertEqual(
len(events), 3,
'Wrong number of unread events extracted')
self.assertTrue(
all([e.user.id == users[index].id for e in events]),
'Unread events list contains wrong user\'s events')
def test_unread_events_correct_sorting(self):
users = [
User(username='user%s' % i, email='user%s@email.com' % i).save()
for i in range(0, 3)]
for i in range(0, 9):
user = users[i % 3]
ev = Event.build(
timestamp=self.TIMESTAMP + timedelta(seconds=i),
user=users[i % 3],
answer=FakeType.answer_model(
task=FakeType.task_model(
id='task%s' % i,
task_type=self.TASK_TYPE,
batch='default',
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save(),
created_by=user,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save(),
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
index = 2
events = EventModel.get_unread_events(users[index])
self.assertSequenceEqual([
self.TIMESTAMP + timedelta(seconds=index + 3 * i)
for i in range(3)],
[e.timestamp for e in events],
'Unread events list has wrong sorting')
def test_unread_events_set_viewed(self):
users = [
User(username='user%s' % i, email='user%s@email.com' % i).save()
for i in range(0, 3)]
for i in range(0, 9):
user = users[i % 3]
ev = Event.build(
timestamp=self.TIMESTAMP + timedelta(seconds=i),
user=user,
answer=FakeType.answer_model(
task=FakeType.task_model(
id='task%s' % i,
task_type=self.TASK_TYPE,
batch='default',
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save(),
created_by=user,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save(),
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
for user in users:
events = list(EventModel.get_unread_events(user))
self.assertEqual(
len(events), 3,
'%s should have 3 unread events' % user.username)
EventModel.mark_events_as_read(user)
new_events = list(EventModel.get_unread_events(user))
self.assertEqual(
len(new_events), 0,
'Unexpected unread events for %s.' % user.username)
def test_all_events(self):
users = [
User(username='user%s' % i, email='user%s@email.com' % i).save()
for i in range(0, 3)]
for i in range(0, 9):
user = users[i % 3]
ev = Event.build(
timestamp=self.TIMESTAMP + timedelta(seconds=i),
user=user,
answer=FakeType.answer_model(
task=FakeType.task_model(
id='task%s' % i,
task_type=self.TASK_TYPE,
batch='default',
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save(),
created_by=user,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save(),
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
for user in users:
events = EventModel.get_all_events(user)
self.assertEqual(
len(list(events)), 3,
'%s should have 3 events' % user.username)
# Still 3!
new_events = EventModel.get_all_events(user)
self.assertEqual(
len(list(new_events)), 3,
'%s should have 3 events' % user.username)
# Checking that get_unread_events doesn't have
# side effect on get_all_events
new_events = EventModel.get_unread_events(user)
self.assertEqual(
len(list(new_events)), 3,
'%s should have 3 events' % user.username)
new_events = EventModel.get_all_events(user)
self.assertEqual(
len(list(new_events)), 3,
'%s should have 3 events' % user.username)
def test_done_by_user_returns_all(self):
for i in range(0, 3):
ev = Event.build(
timestamp=self.TIMESTAMP + timedelta(seconds=i),
user=self.USER,
answer=FakeType.answer_model(
task=FakeType.task_model(
id='task_%s' % i,
task_type=self.TASK_TYPE,
batch='default',
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save(),
created_by=self.USER,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save(),
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
self.assertEqual(
EventModel.count_of_tasks_done_by_user(self.USER),
3)
def test_done_by_user_returns_only_related(self):
users = [
User(username='user%s' % i, email='user%s@email.com' % i).save()
for i in range(0, 3)]
for i in range(0, 9):
user = users[i % 3]
ev = Event.build(
timestamp=self.TIMESTAMP + timedelta(seconds=i),
user=user,
answer=FakeType.answer_model(
task=FakeType.task_model(
id='task%s' % i,
task_type=self.TASK_TYPE,
batch='default',
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save(),
created_by=user,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save(),
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
self.assertEqual(
EventModel.count_of_tasks_done_by_user(user=users[0]),
3)
def test_done_by_user_only_answers(self):
fund = FixtureFund.get_fund()
ev = Event.build(
timestamp=self.TIMESTAMP,
user=self.USER,
answer=self.ANSWER,
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=None,
viewed=False)
EventModel.from_event(ev).save()
ev2 = Event.build(
timestamp=self.TIMESTAMP,
user=self.USER,
answer=None,
points_given=Decimal(0),
coins=Decimal(-10),
achievements=[],
acceptor_fund=fund,
level_given=None,
viewed=True)
EventModel.from_event(ev2).save()
self.assertEqual(
EventModel.count_of_tasks_done_by_user(self.USER),
1)
def test_batches_worked(self):
for i in range(0, 3):
ev = Event.build(
timestamp=self.TIMESTAMP + timedelta(seconds=i),
user=self.USER,
answer=FakeType.answer_model(
task=FakeType.task_model(
id='task_%s' % i,
task_type=self.TASK_TYPE,
batch=Batch(
id='batch_%s' % i,
task_type=self.TASK_TYPE).save(),
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save(),
created_by=self.USER,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save(),
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
result = list(EventModel.batches_user_worked_on(self.USER))
self.assertSequenceEqual(
['batch_0', 'batch_1', 'batch_2'],
[batch.id for batch in result]
)
def test_batches_worked_user_restricted(self):
users = [
User(username='user%s' % i, email='user%s@email.com' % i).save()
for i in range(0, 3)]
for i in range(0, 9):
user = users[i % 3]
ev = Event.build(
timestamp=self.TIMESTAMP + timedelta(seconds=i),
user=user,
answer=FakeType.answer_model(
task=FakeType.task_model(
id='task_%s' % i,
task_type=self.TASK_TYPE,
batch=Batch(
id='batch_%s' % i,
task_type=self.TASK_TYPE).save(),
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save(),
created_by=user,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save(),
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
result_first = list(EventModel.batches_user_worked_on(users[0]))
result_second = list(EventModel.batches_user_worked_on(users[1]))
result_third = list(EventModel.batches_user_worked_on(users[2]))
self.assertSequenceEqual(
['batch_0', 'batch_3', 'batch_6'],
[batch.id for batch in result_first]
)
self.assertSequenceEqual(
['batch_1', 'batch_4', 'batch_7'],
[batch.id for batch in result_second]
)
self.assertSequenceEqual(
['batch_2', 'batch_5', 'batch_8'],
[batch.id for batch in result_third]
)
def test_batches_worked_dedup(self):
for i in range(0, 3):
ev = Event.build(
timestamp=self.TIMESTAMP + timedelta(seconds=i),
user=self.USER,
answer=FakeType.answer_model(
task=FakeType.task_model(
id='task_%s' % i,
task_type=self.TASK_TYPE,
batch=Batch(
id='batch_0',
task_type=self.TASK_TYPE).save(),
closed=False,
users_count=0,
users_processed=[],
task_data={'data': 'data'}).save(),
created_by=self.USER,
created_at=datetime.now(),
task_type=self.TASK_TYPE,
result={}).save(),
points_given=Decimal(10),
coins=Decimal(10),
achievements=[],
acceptor_fund=None,
level_given=2,
viewed=False)
EventModel.from_event(ev).save()
result = list(EventModel.batches_user_worked_on(self.USER))
self.assertSequenceEqual(
['batch_0'],
[batch.id for batch in result]
)
if __name__ == '__main__':
unittest.main()
| 35.21189 | 76 | 0.498983 | 2,319 | 23,099 | 4.799914 | 0.081501 | 0.035936 | 0.023718 | 0.031623 | 0.818884 | 0.777558 | 0.757704 | 0.735424 | 0.730303 | 0.714311 | 0 | 0.014982 | 0.396078 | 23,099 | 655 | 77 | 35.265649 | 0.782939 | 0.005368 | 0 | 0.767123 | 0 | 0 | 0.061836 | 0 | 0 | 0 | 0 | 0 | 0.053082 | 1 | 0.035959 | false | 0 | 0.02226 | 0 | 0.068493 | 0.008562 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
913928a59d4961fa17237e0578392da6e5d29b44 | 141 | py | Python | src/graph_transpiler/webdnn/backend/code_generator/injectors/__init__.py | steerapi/webdnn | 1df51cc094e5a528cfd3452c264905708eadb491 | [
"MIT"
] | 1 | 2021-04-09T15:55:35.000Z | 2021-04-09T15:55:35.000Z | src/graph_transpiler/webdnn/backend/code_generator/injectors/__init__.py | steerapi/webdnn | 1df51cc094e5a528cfd3452c264905708eadb491 | [
"MIT"
] | null | null | null | src/graph_transpiler/webdnn/backend/code_generator/injectors/__init__.py | steerapi/webdnn | 1df51cc094e5a528cfd3452c264905708eadb491 | [
"MIT"
] | null | null | null | from webdnn.backend.code_generator.injectors import buffer_injector
from webdnn.backend.code_generator.injectors import kernel_name_injector
| 47 | 72 | 0.900709 | 19 | 141 | 6.421053 | 0.578947 | 0.163934 | 0.278689 | 0.344262 | 0.737705 | 0.737705 | 0.737705 | 0 | 0 | 0 | 0 | 0 | 0.056738 | 141 | 2 | 73 | 70.5 | 0.917293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
e69ad0f05ea3c5973f884d0f32ea5aa31f4a60a1 | 24,263 | py | Python | python/testData/MockSdk2.7/python_stubs/exceptions.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 52 | 2019-01-11T22:51:59.000Z | 2021-12-12T13:28:21.000Z | python/testData/MockSdk2.7/python_stubs/exceptions.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 417 | 2019-01-11T19:02:48.000Z | 2022-03-28T14:52:04.000Z | python/testData/MockSdk2.7/python_stubs/exceptions.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 10 | 2019-05-17T08:10:52.000Z | 2021-07-26T18:20:03.000Z | # encoding: utf-8
# module exceptions
# from (built-in)
# by generator 1.138
"""
Python's standard exception class hierarchy.
Exceptions found here are defined both in the exceptions module and the
built-in namespace. It is recommended that user-defined exceptions
inherit from Exception. See the documentation for the exception
inheritance hierarchy.
"""
# no imports
# no functions
# classes
class BaseException(object):
""" Common base class for all exceptions """
def __delattr__(self, name): # real signature unknown; restored from __doc__
""" x.__delattr__('name') <==> del x.name """
pass
def __getattribute__(self, name): # real signature unknown; restored from __doc__
""" x.__getattribute__('name') <==> x.name """
pass
def __getitem__(self, y): # real signature unknown; restored from __doc__
""" x.__getitem__(y) <==> x[y] """
pass
def __getslice__(self, i, j): # real signature unknown; restored from __doc__
"""
x.__getslice__(i, j) <==> x[i:j]
Use of negative indices is not supported.
"""
pass
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
def __reduce__(self, *args, **kwargs): # real signature unknown
pass
def __repr__(self): # real signature unknown; restored from __doc__
""" x.__repr__() <==> repr(x) """
pass
def __setattr__(self, name, value): # real signature unknown; restored from __doc__
""" x.__setattr__('name', value) <==> x.name = value """
pass
def __setstate__(self, *args, **kwargs): # real signature unknown
pass
def __str__(self): # real signature unknown; restored from __doc__
""" x.__str__() <==> str(x) """
pass
def __unicode__(self): # known case of exceptions.BaseException.__unicode__
# no doc
return u""
args = property(lambda self: tuple())
""":type: tuple"""
message = property(lambda self: '', lambda self, v: None, lambda self: None)
""":type: string"""
__dict__ = None # (!) real value is ''
class Exception(BaseException):
""" Common base class for all non-exit exceptions. """
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
class StandardError(Exception):
"""
Base class for all standard Python exceptions that do not represent
interpreter exiting.
"""
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
class ArithmeticError(StandardError):
""" Base class for arithmetic errors. """
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
class AssertionError(StandardError):
""" Assertion failed. """
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
class AttributeError(StandardError):
""" Attribute not found. """
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
class BufferError(StandardError):
""" Buffer error. """
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
class Warning(Exception):
""" Base class for warning categories. """
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
class BytesWarning(Warning):
"""
Base class for warnings about bytes and buffer related problems, mostly
related to conversion from str or comparing to str.
"""
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
class DeprecationWarning(Warning):
""" Base class for warnings about deprecated features. """
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
class EnvironmentError(StandardError):
""" Base class for I/O related errors. """
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
def __reduce__(self, *args, **kwargs): # real signature unknown
pass
def __str__(self): # real signature unknown; restored from __doc__
""" x.__str__() <==> str(x) """
pass
errno = property(lambda self: 0, lambda self, v: None, lambda self: None)
"""exception errno
:type: int
"""
filename = property(lambda self: '', lambda self, v: None, lambda self: None)
"""exception filename
:type: string
"""
strerror = property(lambda self: 0, lambda self, v: None, lambda self: None)
"""exception strerror
:type: int
"""
class EOFError(StandardError):
    """ Read beyond end of file. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class FloatingPointError(ArithmeticError):
    """ Floating point operation failed. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class FutureWarning(Warning):
    """
    Base class for warnings about constructs that will change semantically
    in the future.
    """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class GeneratorExit(BaseException):
    """ Request that a generator exit. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class ImportError(StandardError):
    """ Import can't find module, or can't find name in module. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class ImportWarning(Warning):
    """ Base class for warnings about probable mistakes in module imports """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class SyntaxError(StandardError):
    """ Invalid syntax. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass
    def __str__(self): # real signature unknown; restored from __doc__
        """ x.__str__() <==> str(x) """
        pass
    filename = property(lambda self: '', lambda self, v: None, lambda self: None)
    """exception filename
    :type: string
    """
    lineno = property(lambda self: 0, lambda self, v: None, lambda self: None)
    """exception lineno
    :type: int
    """
    msg = property(lambda self: '', lambda self, v: None, lambda self: None)
    """exception msg
    :type: string
    """
    offset = property(lambda self: 0, lambda self, v: None, lambda self: None)
    """exception offset
    :type: int
    """
    print_file_and_line = property(lambda self: True, lambda self, v: None, lambda self: None)
    """exception print_file_and_line
    :type: bool
    """
    text = property(lambda self: '', lambda self, v: None, lambda self: None)
    """exception text
    :type: string
    """
class IndentationError(SyntaxError):
    """ Improper indentation. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class LookupError(StandardError):
    """ Base class for lookup errors. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class IndexError(LookupError):
    """ Sequence index out of range. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class IOError(EnvironmentError):
    """ I/O operation failed. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class KeyboardInterrupt(BaseException):
    """ Program interrupted by user. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class KeyError(LookupError):
    """ Mapping key not found. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass
    def __str__(self): # real signature unknown; restored from __doc__
        """ x.__str__() <==> str(x) """
        pass

class MemoryError(StandardError):
    """ Out of memory. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class NameError(StandardError):
    """ Name not found globally. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class RuntimeError(StandardError):
    """ Unspecified run-time error. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class NotImplementedError(RuntimeError):
    """ Method or function hasn't been implemented yet. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class OSError(EnvironmentError):
    """ OS system call failed. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class OverflowError(ArithmeticError):
    """ Result too large to be represented. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class PendingDeprecationWarning(Warning):
    """
    Base class for warnings about features which will be deprecated
    in the future.
    """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class ReferenceError(StandardError):
    """ Weak ref proxy used after referent went away. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class RuntimeWarning(Warning):
    """ Base class for warnings about dubious runtime behavior. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class StopIteration(Exception):
    """ Signal the end from iterator.next(). """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class SyntaxWarning(Warning):
    """ Base class for warnings about dubious syntax. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class SystemError(StandardError):
    """
    Internal error in the Python interpreter.
    Please report this to the Python maintainer, along with the traceback,
    the Python version, and the hardware/OS platform and version.
    """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class SystemExit(BaseException):
    """ Request to exit from the interpreter. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass
    code = property(lambda self: object(), lambda self, v: None, lambda self: None)

class TabError(IndentationError):
    """ Improper mixture of spaces and tabs. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class TypeError(StandardError):
    """ Inappropriate argument type. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class UnboundLocalError(NameError):
    """ Local name referenced but not bound to a value. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class ValueError(StandardError):
    """ Inappropriate argument value (of correct type). """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class UnicodeError(ValueError):
    """ Unicode related error. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class UnicodeDecodeError(UnicodeError):
    """ Unicode decoding error. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass
    def __str__(self): # real signature unknown; restored from __doc__
        """ x.__str__() <==> str(x) """
        pass
    encoding = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
    """exception encoding"""
    end = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
    """exception end"""
    object = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
    """exception object"""
    reason = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
    """exception reason"""
    start = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
    """exception start"""

class UnicodeEncodeError(UnicodeError):
    """ Unicode encoding error. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass
    def __str__(self): # real signature unknown; restored from __doc__
        """ x.__str__() <==> str(x) """
        pass
    encoding = property(lambda self: '', lambda self, v: None, lambda self: None)
    """exception encoding
    :type: string
    """
    end = property(lambda self: 0, lambda self, v: None, lambda self: None)
    """exception end
    :type: int
    """
    object = property(lambda self: object(), lambda self, v: None, lambda self: None)
    reason = property(lambda self: '', lambda self, v: None, lambda self: None)
    """exception reason
    :type: string
    """
    start = property(lambda self: 0, lambda self, v: None, lambda self: None)
    """exception start
    :type: int
    """

class UnicodeTranslateError(UnicodeError):
    """ Unicode translation error. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass
    def __str__(self): # real signature unknown; restored from __doc__
        """ x.__str__() <==> str(x) """
        pass
    encoding = property(lambda self: '', lambda self, v: None, lambda self: None)
    """exception encoding
    :type: string
    """
    end = property(lambda self: 0, lambda self, v: None, lambda self: None)
    """exception end
    :type: int
    """
    object = property(lambda self: object(), lambda self, v: None, lambda self: None)
    reason = property(lambda self: '', lambda self, v: None, lambda self: None)
    """exception reason
    :type: string
    """
    start = property(lambda self: 0, lambda self, v: None, lambda self: None)
    """exception start
    :type: int
    """
class UnicodeWarning(Warning):
    """
    Base class for warnings about Unicode related problems, mostly
    related to conversion problems.
    """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class UserWarning(Warning):
    """ Base class for warnings generated by user code. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

class ZeroDivisionError(ArithmeticError):
    """ Second argument to a division or modulo operation was zero. """
    def __init__(self, *args, **kwargs): # real signature unknown
        pass
    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass
| 31.51039 | 98 | 0.625232 | 3,000 | 24,263 | 4.674667 | 0.094 | 0.103822 | 0.159726 | 0.121791 | 0.767042 | 0.762122 | 0.742727 | 0.729464 | 0.723902 | 0.717627 | 0 | 0.000718 | 0.253802 | 24,263 | 769 | 99 | 31.551365 | 0.773875 | 0.459177 | 0 | 0.794286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002857 | 1 | 0.322857 | false | 0.32 | 0.005714 | 0.002857 | 0.548571 | 0.002857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 9 |
e6ec530ef98e40efd2734adf444116be5504f34a | 7,142 | py | Python | tests/test_partition_filter.py | joar/disk-usage-exporter | cb23ced094f3410d15e4ad99d1006acf65812123 | [
"BSD-2-Clause"
] | 9 | 2017-10-20T21:27:30.000Z | 2021-05-27T13:54:57.000Z | tests/test_partition_filter.py | joar/disk-usage-exporter | cb23ced094f3410d15e4ad99d1006acf65812123 | [
"BSD-2-Clause"
] | 3 | 2017-08-15T18:43:46.000Z | 2020-09-30T08:43:35.000Z | tests/test_partition_filter.py | joar/disk-usage-exporter | cb23ced094f3410d15e4ad99d1006acf65812123 | [
"BSD-2-Clause"
] | 2 | 2018-01-19T17:48:08.000Z | 2019-03-28T08:45:25.000Z | from unittest import mock
import pytest
from disk_usage_exporter import collect
from disk_usage_exporter.collect import Mount
from disk_usage_exporter.context import Context
MISC_MOUNTPOINTS = [
    Mount(
        device='/dev/root',
        mountpoint='/rootfs',
        fstype='ext2',
        opts='ro,relatime,block_validity,barrier,user_xattr,acl'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/mnt/stateful_partition',
        fstype='ext4',
        opts='rw,nosuid,nodev,noexec,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sda8',
        mountpoint='/rootfs/usr/share/oem',
        fstype='ext4',
        opts='ro,nosuid,nodev,noexec,relatime,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/home',
        fstype='ext4',
        opts='rw,nosuid,nodev,noexec,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/home/chronos',
        fstype='ext4',
        opts='rw,nosuid,nodev,noexec,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/home/kubernetes/bin',
        fstype='ext4',
        opts='rw,nosuid,nodev,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/var',
        fstype='ext4',
        opts='rw,nosuid,nodev,noexec,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/var/lib/google',
        fstype='ext4',
        opts='rw,nosuid,nodev,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/var/lib/docker',
        fstype='ext4',
        opts='rw,nosuid,nodev,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/var/lib/toolbox',
        fstype='ext4',
        opts='rw,nodev,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/var/lib/kubelet',
        fstype='ext4',
        opts='rw,relatime,commit=30,data=ordered'),
]
CONTAINERIZED_MOUNTER_MOUNTPOINTS = [
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/home/kubernetes/containerized_mounter',
        fstype='ext4',
        opts='rw,nosuid,nodev,noexec,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/home/kubernetes/containerized_mounter/rootfs/var'
                   '/lib/kubelet',
        fstype='ext4',
        opts='rw,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sdb',
        mountpoint='/rootfs/home/kubernetes/containerized_mounter/rootfs/var'
                   '/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke'
                   '-cluster-6d98ef61-dyn-pvc-11fa90bb-5a69-11e7-ba69'
                   '-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
    Mount(
        device='/dev/sdb',
        mountpoint='/rootfs/home/kubernetes/containerized_mounter/rootfs/var'
                   '/lib/kubelet/pods/3cc99367-5c20-11e7-ba69-42010af0012c'
                   '/volumes/kubernetes.io~gce-pd/pvc-11fa90bb-5a69-11e7-ba69'
                   '-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
    Mount(
        device='/dev/sda1',
        mountpoint='/rootfs/home/kubernetes/containerized_mounter/rootfs/var'
                   '/lib/kubelet',
        fstype='ext4',
        opts='rw,relatime,commit=30,data=ordered'),
    Mount(
        device='/dev/sdb',
        mountpoint='/rootfs/home/kubernetes/containerized_mounter/rootfs/var'
                   '/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke'
                   '-cluster-6d98ef61-dyn-pvc-11fa90bb-5a69-11e7-ba69'
                   '-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
    Mount(
        device='/dev/sdb',
        mountpoint='/rootfs/home/kubernetes/containerized_mounter/rootfs/var'
                   '/lib/kubelet/pods/3cc99367-5c20-11e7-ba69-42010af0012c'
                   '/volumes/kubernetes.io~gce-pd/pvc-11fa90bb-5a69-11e7-ba69'
                   '-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
]
VAR_LIB_PLUGIN_MOUNTPOINTS = [
    Mount(
        device='/dev/sdc',
        mountpoint='/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd'
                   '/mounts/gke-cluster-6d98ef61-dyn-pvc-670e4abe-5a71-11e7'
                   '-ba69-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
    Mount(
        device='/dev/sdb',
        mountpoint='/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd'
                   '/mounts/gke-cluster-6d98ef61-dyn-pvc-11fa90bb-5a69-11e7'
                   '-ba69-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
    Mount(
        device='/dev/sdc',
        mountpoint='/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd'
                   '/mounts/gke-cluster-6d98ef61-dyn-pvc-670e4abe-5a71-11e7'
                   '-ba69-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
    Mount(
        device='/dev/sdb',
        mountpoint='/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd'
                   '/mounts/gke-cluster-6d98ef61-dyn-pvc-11fa90bb-5a69-11e7'
                   '-ba69-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
]
VAR_LIB_VOLUME_MOUNTPOINTS = [
    Mount(
        device='/dev/sdc',
        mountpoint='/rootfs/var/lib/kubelet/pods/5dd6d312-5a74-11e7-ba69'
                   '-42010af0012c/volumes/kubernetes.io~gce-pd/pvc-670e4abe'
                   '-5a71-11e7-ba69-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
    Mount(
        device='/dev/sdb',
        mountpoint='/rootfs/var/lib/kubelet/pods/3cc99367-5c20-11e7-ba69'
                   '-42010af0012c/volumes/kubernetes.io~gce-pd/pvc-11fa90bb'
                   '-5a69-11e7-ba69-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
    Mount(
        device='/dev/sdc',
        mountpoint='/rootfs/var/lib/kubelet/pods/5dd6d312-5a74-11e7-ba69'
                   '-42010af0012c/volumes/kubernetes.io~gce-pd/pvc-670e4abe'
                   '-5a71-11e7-ba69-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered'),
    Mount(
        device='/dev/sdb',
        mountpoint='/rootfs/var/lib/kubelet/pods/3cc99367-5c20-11e7-ba69'
                   '-42010af0012c/volumes/kubernetes.io~gce-pd/pvc-11fa90bb'
                   '-5a69-11e7-ba69-42010af0012c',
        fstype='ext4',
        opts='rw,relatime,data=ordered')
]
ALL_MOUNTPOINTS = (
    MISC_MOUNTPOINTS +
    CONTAINERIZED_MOUNTER_MOUNTPOINTS +
    VAR_LIB_PLUGIN_MOUNTPOINTS +
    VAR_LIB_VOLUME_MOUNTPOINTS
)
@pytest.mark.parametrize('partition', ALL_MOUNTPOINTS)
def test_partition_filter(partition):
    context = Context()
    included = collect.partition_filter(context, partition)
    should_be_included = partition in VAR_LIB_VOLUME_MOUNTPOINTS
    assert should_be_included == included
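
# A rough sketch of the intent encoded by the fixtures above (assumed from
# the expected values, not lifted from disk_usage_exporter.collect): only
# per-pod volume mounts under .../kubelet/pods/<uid>/volumes/... pass the
# filter, while the same devices at plugin paths or inside the
# containerized mounter are rejected, e.g.:
#
#     def looks_like_pod_volume(mount: Mount) -> bool:
#         return ('/volumes/kubernetes.io~' in mount.mountpoint
#                 and '/containerized_mounter/' not in mount.mountpoint)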
| 36.438776 | 78 | 0.602352 | 777 | 7,142 | 5.477477 | 0.124839 | 0.067199 | 0.085526 | 0.090226 | 0.825188 | 0.825188 | 0.817199 | 0.817199 | 0.817199 | 0.817199 | 0 | 0.089588 | 0.24825 | 7,142 | 195 | 79 | 36.625641 | 0.70311 | 0 | 0 | 0.762162 | 0 | 0 | 0.477177 | 0.402548 | 0 | 0 | 0 | 0 | 0.005405 | 1 | 0.005405 | false | 0 | 0.027027 | 0 | 0.032432 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fc1c6d018eb2dbe83f6e1f9f86c71d4dd48e9a42 | 5,720 | py | Python | tests/registries/test_decorators_deprecated_cooldown.py | tinyzimmer/kopf | 74c42a2acdf2a72446d290fa1f27b53ec5d43218 | [
"MIT"
] | null | null | null | tests/registries/test_decorators_deprecated_cooldown.py | tinyzimmer/kopf | 74c42a2acdf2a72446d290fa1f27b53ec5d43218 | [
"MIT"
] | null | null | null | tests/registries/test_decorators_deprecated_cooldown.py | tinyzimmer/kopf | 74c42a2acdf2a72446d290fa1f27b53ec5d43218 | [
"MIT"
] | null | null | null | import pytest
import kopf
from kopf.structs.handlers import Activity, Reason, HANDLER_REASONS
from kopf.structs.resources import Resource
def test_on_startup_with_cooldown():
    registry = kopf.get_default_registry()

    with pytest.deprecated_call(match=r"use backoff="):
        @kopf.on.startup(cooldown=78)
        def fn(**_):
            pass

    handlers = registry.activity_handlers.get_handlers(activity=Activity.STARTUP)
    assert len(handlers) == 1
    assert handlers[0].fn is fn
    assert handlers[0].backoff == 78
    with pytest.deprecated_call(match=r"use handler.backoff"):
        assert handlers[0].cooldown == 78

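
# For reference, a sketch of the non-deprecated spelling that the warnings
# asserted above steer users toward (consistent with the match= patterns,
# though not itself exercised by this test module):
#
#     @kopf.on.startup(backoff=78)
#     def configure(**_):
#         pass
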
def test_on_cleanup_with_cooldown():
    registry = kopf.get_default_registry()

    with pytest.deprecated_call(match=r"use backoff="):
        @kopf.on.cleanup(cooldown=78)
        def fn(**_):
            pass

    handlers = registry.activity_handlers.get_handlers(activity=Activity.CLEANUP)
    assert len(handlers) == 1
    assert handlers[0].fn is fn
    assert handlers[0].backoff == 78
    with pytest.deprecated_call(match=r"use handler.backoff"):
        assert handlers[0].cooldown == 78


def test_on_probe_with_cooldown():
    registry = kopf.get_default_registry()

    with pytest.deprecated_call(match=r"use backoff="):
        @kopf.on.probe(cooldown=78)
        def fn(**_):
            pass

    handlers = registry.activity_handlers.get_handlers(activity=Activity.PROBE)
    assert len(handlers) == 1
    assert handlers[0].fn is fn
    assert handlers[0].backoff == 78
    with pytest.deprecated_call(match=r"use handler.backoff"):
        assert handlers[0].cooldown == 78

# Resume handlers are mixed-in into all resource-changing reactions with initial listing.
@pytest.mark.parametrize('reason', HANDLER_REASONS)
def test_on_resume_with_cooldown(mocker, reason, cause_factory):
    registry = kopf.get_default_registry()
    resource = Resource('group', 'version', 'plural')
    cause = cause_factory(resource=resource, reason=reason, initial=True)
    mocker.patch('kopf.reactor.registries.match', return_value=True)

    with pytest.deprecated_call(match=r"use backoff="):
        @kopf.on.resume('group', 'version', 'plural', cooldown=78)
        def fn(**_):
            pass

    handlers = registry.resource_changing_handlers[resource].get_handlers(cause)
    assert len(handlers) == 1
    assert handlers[0].fn is fn
    assert handlers[0].backoff == 78
    with pytest.deprecated_call(match=r"use handler.backoff"):
        assert handlers[0].cooldown == 78

def test_on_create_with_cooldown(mocker, cause_factory):
    registry = kopf.get_default_registry()
    resource = Resource('group', 'version', 'plural')
    cause = cause_factory(resource=resource, reason=Reason.CREATE)
    mocker.patch('kopf.reactor.registries.match', return_value=True)

    with pytest.deprecated_call(match=r"use backoff="):
        @kopf.on.create('group', 'version', 'plural', cooldown=78)
        def fn(**_):
            pass

    handlers = registry.resource_changing_handlers[resource].get_handlers(cause)
    assert len(handlers) == 1
    assert handlers[0].fn is fn
    assert handlers[0].backoff == 78
    with pytest.deprecated_call(match=r"use handler.backoff"):
        assert handlers[0].cooldown == 78


def test_on_update_with_cooldown(mocker, cause_factory):
    registry = kopf.get_default_registry()
    resource = Resource('group', 'version', 'plural')
    cause = cause_factory(resource=resource, reason=Reason.UPDATE)
    mocker.patch('kopf.reactor.registries.match', return_value=True)

    with pytest.deprecated_call(match=r"use backoff="):
        @kopf.on.update('group', 'version', 'plural', cooldown=78)
        def fn(**_):
            pass

    handlers = registry.resource_changing_handlers[resource].get_handlers(cause)
    assert len(handlers) == 1
    assert handlers[0].fn is fn
    assert handlers[0].backoff == 78
    with pytest.deprecated_call(match=r"use handler.backoff"):
        assert handlers[0].cooldown == 78

@pytest.mark.parametrize('optional', [
    pytest.param(True, id='optional'),
    pytest.param(False, id='mandatory'),
])
def test_on_delete_with_cooldown(mocker, optional, cause_factory):
    registry = kopf.get_default_registry()
    resource = Resource('group', 'version', 'plural')
    cause = cause_factory(resource=resource, reason=Reason.DELETE)
    mocker.patch('kopf.reactor.registries.match', return_value=True)

    with pytest.deprecated_call(match=r"use backoff="):
        @kopf.on.delete('group', 'version', 'plural', cooldown=78)
        def fn(**_):
            pass

    handlers = registry.resource_changing_handlers[resource].get_handlers(cause)
    assert len(handlers) == 1
    assert handlers[0].fn is fn
    assert handlers[0].backoff == 78
    with pytest.deprecated_call(match=r"use handler.backoff"):
        assert handlers[0].cooldown == 78


def test_on_field_with_cooldown(mocker, cause_factory):
    registry = kopf.get_default_registry()
    resource = Resource('group', 'version', 'plural')
    diff = [('op', ('field', 'subfield'), 'old', 'new')]
    cause = cause_factory(resource=resource, reason=Reason.UPDATE, diff=diff)
    mocker.patch('kopf.reactor.registries.match', return_value=True)

    with pytest.deprecated_call(match=r"use backoff="):
        @kopf.on.field('group', 'version', 'plural', 'field.subfield', cooldown=78)
        def fn(**_):
            pass

    handlers = registry.resource_changing_handlers[resource].get_handlers(cause)
    assert len(handlers) == 1
    assert handlers[0].fn is fn
    assert handlers[0].backoff == 78
    with pytest.deprecated_call(match=r"use handler.backoff"):
        assert handlers[0].cooldown == 78
| 34.878049 | 89 | 0.69458 | 725 | 5,720 | 5.328276 | 0.103448 | 0.086979 | 0.093192 | 0.099405 | 0.846751 | 0.846751 | 0.846751 | 0.846751 | 0.831996 | 0.831996 | 0 | 0.017018 | 0.178147 | 5,720 | 163 | 90 | 35.092025 | 0.804722 | 0.01521 | 0 | 0.719008 | 0 | 0 | 0.113479 | 0.02575 | 0 | 0 | 0 | 0 | 0.264463 | 1 | 0.132231 | false | 0.066116 | 0.033058 | 0 | 0.165289 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
fc22a9625311e232a38fdd4052ef94ede8d1c607 | 113 | py | Python | pyzeebe/errors/message_errors.py | Explorer1092/pyzeebe | c76dc2bfae9ea0c491c1ea2b278cffd938a39d02 | [
"MIT"
] | 30 | 2021-04-13T15:48:20.000Z | 2022-03-28T15:48:01.000Z | pyzeebe/errors/message_errors.py | Explorer1092/pyzeebe | c76dc2bfae9ea0c491c1ea2b278cffd938a39d02 | [
"MIT"
] | 121 | 2021-04-02T10:21:33.000Z | 2022-03-31T04:06:43.000Z | pyzeebe/errors/message_errors.py | Explorer1092/pyzeebe | c76dc2bfae9ea0c491c1ea2b278cffd938a39d02 | [
"MIT"
] | 9 | 2021-05-05T10:51:26.000Z | 2022-03-17T08:07:32.000Z | from pyzeebe.errors.pyzeebe_errors import PyZeebeError
class MessageAlreadyExistsError(PyZeebeError):
    pass
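
# Hypothetical usage sketch (client call and identifiers assumed, not taken
# from this module): pyzeebe raises this error when the broker reports that
# a message with the same message_id is still buffered within its TTL:
#
#     try:
#         await client.publish_message("order_paid", "order-123",
#                                      message_id="payment-1")
#     except MessageAlreadyExistsError:
#         pass  # already published; safe to ignore for idempotent publishes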
| 18.833333 | 54 | 0.840708 | 11 | 113 | 8.545455 | 0.727273 | 0.276596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115044 | 113 | 5 | 55 | 22.6 | 0.94 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
fc98e34ee6a748a73a2101336b4e72b052ca2198 | 174 | py | Python | substitution files/decocare/records/__init__.py | HaotianRen12/OpenAPS-Glucosym-3.9 | 3eb577a88b8bc3d3a31e6b96ac918fa50dfd532f | [
"Unlicense",
"MIT"
] | 1 | 2022-02-09T00:15:52.000Z | 2022-02-09T00:15:52.000Z | substitution files/decocare/records/__init__.py | HaotianRen12/OpenAPS-Glucosym-3.9 | 3eb577a88b8bc3d3a31e6b96ac918fa50dfd532f | [
"Unlicense",
"MIT"
] | null | null | null | substitution files/decocare/records/__init__.py | HaotianRen12/OpenAPS-Glucosym-3.9 | 3eb577a88b8bc3d3a31e6b96ac918fa50dfd532f | [
"Unlicense",
"MIT"
] | 2 | 2021-09-09T15:35:43.000Z | 2021-09-09T15:58:44.000Z |
from .times import * # Changed from "from times import *"
from .base import * # Changed from "from base import *"
from .bolus import * # Changed from "from bolus import *"
| 29 | 57 | 0.695402 | 24 | 174 | 5.041667 | 0.25 | 0.322314 | 0.421488 | 0.520661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201149 | 174 | 5 | 58 | 34.8 | 0.870504 | 0.591954 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
5d79ba4e18b23d63aed016da32a10cfa21981b45 | 17,648 | py | Python | tests/e2e/tests/test_kiali_health.py | sniperking1234/kiali | ac6c796bc66e6e221492d9fe26d71962aee9fefc | [
"Apache-2.0"
] | null | null | null | tests/e2e/tests/test_kiali_health.py | sniperking1234/kiali | ac6c796bc66e6e221492d9fe26d71962aee9fefc | [
"Apache-2.0"
] | null | null | null | tests/e2e/tests/test_kiali_health.py | sniperking1234/kiali | ac6c796bc66e6e221492d9fe26d71962aee9fefc | [
"Apache-2.0"
] | null | null | null | import pytest
import tests.conftest as conftest
from utils.common_utils import common_utils
bookinfo_namespace = conftest.get_bookinfo_namespace()
INVALID_PARAMS_NAMESPACE_HEALTH = {'namespace': 'invalid'}
INVALID_PATH_WORKLOAD_HEALTH_DEPLOYMENT = {'namespace': 'invalid', 'workload': 'invalid'}
INVALID_PARAM_WORKLOAD_HEALTH_DEPLOYMENT = {'type': 'invalid', 'rateInterval': 'invalid'}
INVALID_NAMESPACE_WORKLOAD_HEALTH_DEPLOYMENT = {'namespace': 'invalid', 'workload': 'details-v1'}
INVALID_WORKLOAD_HEALTH_DEPLOYMENT_WORKLOAD = {'namespace': bookinfo_namespace, 'workload': 'invalid'}
VALID_PATH_WORKLOAD_HEALTH_DEPLOYMENT = {'namespace': bookinfo_namespace, 'workload': 'details-v1'}
VALID_PARAM_WORKLOAD_HEALTH_DEPLOYMENT = {'type': 'Deployment', 'rateInterval': '60s'}
INVALID_TYPE_WORKLOAD_HEALTH_DEPLOYMENT = {'type': 'invalid', 'rateInterval': '60s'}
INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_DEPLOYMENT = {'type': 'Deployment', 'rateInterval': 'invalid'}
INVALID_PATH_WORKLOAD_HEALTH_REPLICA_SET = {'namespace': 'invalid', 'workload': 'invalid'}
INVALID_PARAM_WORKLOAD_HEALTH_REPLICA_SET = {'type': 'invalid', 'rateInterval': 'invalid'}
INVALID_NAMESPACE_WORKLOAD_HEALTH_REPLICA_SET = {'namespace': 'invalid', 'workload': 'kiali-traffic-generator'}
INVALID_WORKLOAD_HEALTH_REPLICA_SET_WORKLOAD = {'namespace': bookinfo_namespace, 'workload': 'invalid'}
VALID_PATH_WORKLOAD_HEALTH_REPLICA_SET = {'namespace': bookinfo_namespace, 'workload': 'kiali-traffic-generator'}
VALID_PARAM_WORKLOAD_HEALTH_REPLICA_SET = {'type': 'ReplicaSet', 'rateInterval': '60s'}
INVALID_TYPE_WORKLOAD_HEALTH_REPLICA_SET = {'type': 'invalid', 'rateInterval': '60s'}
INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_REPLICA_SET = {'type': 'ReplicaSet', 'rateInterval': 'invalid'}
def test_workload_health_deployment(kiali_client):
    workload_type = 'Deployment'
    workload_name = 'details-v1'
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path={'namespace': bookinfo_namespace, 'workload': workload_name}, params={'type': workload_type, 'rateInterval': '60s'})
    assert response.json().get('workloadStatus') is not None
    assert workload_name == response.json().get('workloadStatus').get('name')


def test_workload_health_replicaset(kiali_client):
    workload_type = 'ReplicaSet'
    workload_name = 'kiali-traffic-generator'
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path={'namespace': bookinfo_namespace, 'workload': workload_name}, params={'type': workload_type, 'rateInterval': '60s'})
    assert response.json().get('workloadStatus') is not None
    assert workload_name == response.json().get('workloadStatus').get('name')


def test_service_health_deployment(kiali_client):
    service_name = 'ratings'
    response = common_utils.get_response(kiali_client, method_name='serviceHealth', path={'namespace': bookinfo_namespace, 'service': service_name}, params={'rateInterval': '60s'})
    assert response.json().get('requests') is not None


def test_namespace_health_workload(kiali_client):
    type_ = 'workload'
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path={'namespace': bookinfo_namespace}, params={'type': type_, 'rateInterval': '60s'})
    assert response.json().get('ratings-v1') is not None


def test_namespace_health_service(kiali_client):
    type_ = 'service'
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path={'namespace': bookinfo_namespace}, params={'type': type_, 'rateInterval': '60s'})
    assert response.json().get('details') is not None


def test_namespace_health_app(kiali_client):
    type_ = 'app'
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path={'namespace': bookinfo_namespace}, params={'type': type_, 'rateInterval': '60s'})
    assert response.json().get('details') is not None

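
# The long tail of *_negative tests below all share one shape; a condensed
# equivalent with pytest.mark.parametrize (a sketch only, not the form this
# suite actually uses) would look like:
#
#     @pytest.mark.parametrize('path,params,expected', [
#         (INVALID_PARAMS_NAMESPACE_HEALTH,
#          {'type': 'app', 'rateInterval': '60s'}, 403),
#         ...
#     ])
#     def test_namespace_health_negative(kiali_client, path, params, expected):
#         common_utils.get_response(kiali_client, method_name='namespaceHealth',
#                                   path=path, params=params,
#                                   status_code_expected=expected)
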
def test_namespace_health_app_invalid_namespace_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path=INVALID_PARAMS_NAMESPACE_HEALTH, params={'type': 'app', 'rateInterval': '60s'}, status_code_expected=403)


def test_namespace_health_service_invalid_namespace_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path=INVALID_PARAMS_NAMESPACE_HEALTH, params={'type': 'service', 'rateInterval': '60s'}, status_code_expected=403)


def test_namespace_health_workload_invalid_namespace_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path=INVALID_PARAMS_NAMESPACE_HEALTH, params={'type': 'workload', 'rateInterval': '60s'}, status_code_expected=403)


def test_namespace_health_app_invalid_namespace_invalid_rateinterval_negative(kiali_client):
    INVALID_APP_QUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'app', 'rateInterval': 'invalid'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path=INVALID_PARAMS_NAMESPACE_HEALTH, params=INVALID_APP_QUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=403)


def test_namespace_health_service_invalid_namespace_invalid_rateinterval_negative(kiali_client):
    INVALID_SERVICE_QUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'service', 'rateInterval': 'invalid'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path=INVALID_PARAMS_NAMESPACE_HEALTH, params=INVALID_SERVICE_QUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=403)


def test_namespace_health_workload_invalid_namespace_invalid_rateinterval_negative(kiali_client):
    INVALID_WORKLOAD_QUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'workload', 'rateInterval': 'invalid'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path=INVALID_PARAMS_NAMESPACE_HEALTH, params=INVALID_WORKLOAD_QUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=403)


def test_namespace_health_invalid_type_negative(kiali_client):
    INVALID_RATEINTERVALQUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'invalid', 'rateInterval': '60s'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path={'namespace': bookinfo_namespace}, params=INVALID_RATEINTERVALQUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=400)


def test_namespace_health_invalid_type_invalid_rateinterval_negative(kiali_client):
    INVALID_TYPE_QUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'invalid', 'rateInterval': 'invalid'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path={'namespace': bookinfo_namespace}, params=INVALID_TYPE_QUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=400)


def test_namespace_health_invalid_negative(kiali_client):
    INVALID_QUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'invalid', 'rateInterval': 'invalid'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path=INVALID_PARAMS_NAMESPACE_HEALTH, params=INVALID_QUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=400)


def test_namespace_health_invalid_namespace_invalid_type_negative(kiali_client):
    INVALID_TYPE_QUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'invalid', 'rateInterval': '60s'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path=INVALID_PARAMS_NAMESPACE_HEALTH, params=INVALID_TYPE_QUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=400)


def test_namespace_health_app_invalid_rateinterval_negative(kiali_client):
    INVALID_APP_QUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'app', 'rateInterval': 'invalid'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path={'namespace': bookinfo_namespace}, params=INVALID_APP_QUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=500)


def test_namespace_health_service_invalid_rateinterval_negative(kiali_client):
    INVALID_SERVICE_QUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'service', 'rateInterval': 'invalid'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path={'namespace': bookinfo_namespace}, params=INVALID_SERVICE_QUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=500)


def test_namespace_health_workload_invalid_rateinterval_negative(kiali_client):
    INVALID_WORKLOAD_QUERY_PARAMS_NAMESPACE_HEALTH = {'type': 'workload', 'rateInterval': 'invalid'}
    response = common_utils.get_response(kiali_client, method_name='namespaceHealth', path={'namespace': bookinfo_namespace}, params=INVALID_WORKLOAD_QUERY_PARAMS_NAMESPACE_HEALTH, status_code_expected=500)


def test_workload_health_invalid_replicaset_negative(kiali_client):
    workload_type = 'ReplicaSet'
    workload_name = 'details-v1'
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path={'namespace': bookinfo_namespace, 'workload': workload_name}, params={'type': workload_type, 'rateInterval': '60s'}, status_code_expected=502)


def test_workload_health_deployment_invalid_namespace_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_NAMESPACE_WORKLOAD_HEALTH_DEPLOYMENT, params=VALID_PARAM_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=403)


def test_workload_health_deployment_invalid_workload_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_WORKLOAD_HEALTH_DEPLOYMENT_WORKLOAD, params=VALID_PARAM_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=404)


def test_workload_health_deployment_invalid_type_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=VALID_PATH_WORKLOAD_HEALTH_DEPLOYMENT, params=INVALID_TYPE_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=502)


def test_workload_health_deployment_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=VALID_PATH_WORKLOAD_HEALTH_DEPLOYMENT, params=INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=500)


def test_workload_health_deployment_invalid_type_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=VALID_PATH_WORKLOAD_HEALTH_DEPLOYMENT, params=INVALID_PARAM_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=500)


def test_workload_health_deployment_invalid_workload_invalid_type_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_WORKLOAD_HEALTH_DEPLOYMENT_WORKLOAD, params=INVALID_TYPE_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=404)


def test_workload_health_deployment_invalid_namespace_invalid_workload_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_PATH_WORKLOAD_HEALTH_DEPLOYMENT, params=VALID_PARAM_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=403)


def test_workload_health_deployment_invalid_namespace_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_NAMESPACE_WORKLOAD_HEALTH_DEPLOYMENT, params=INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=403)


def test_workload_health_deployment_invalid_namespace_invalid_type_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_NAMESPACE_WORKLOAD_HEALTH_DEPLOYMENT, params=INVALID_TYPE_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=403)


def test_workload_health_deployment_invalid_workload_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_WORKLOAD_HEALTH_DEPLOYMENT_WORKLOAD, params=INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=500)


def test_workload_health_deployment_invalid_workload_invalid_query_param_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_WORKLOAD_HEALTH_DEPLOYMENT_WORKLOAD, params=INVALID_PARAM_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=500)


def test_workload_health_deployment_invalid_path_invalid_type_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_PATH_WORKLOAD_HEALTH_DEPLOYMENT, params=INVALID_TYPE_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=403)


def test_workload_health_deployment_invalid_namespace_invalid_query_param_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_NAMESPACE_WORKLOAD_HEALTH_DEPLOYMENT, params=INVALID_PARAM_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=403)


def test_workload_health_deployment_invalid_path_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_PATH_WORKLOAD_HEALTH_DEPLOYMENT, params=INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=403)


def test_workload_health_deployment_invalid_path_invalid_query_param_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_PATH_WORKLOAD_HEALTH_DEPLOYMENT, params=INVALID_PARAM_WORKLOAD_HEALTH_DEPLOYMENT, status_code_expected=403)


def test_workload_health_replica_set_invalid_namespace_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_NAMESPACE_WORKLOAD_HEALTH_REPLICA_SET, params=VALID_PARAM_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=403)


def test_workload_health_replica_set_invalid_workload_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_WORKLOAD_HEALTH_REPLICA_SET_WORKLOAD, params=VALID_PARAM_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=404)


def test_workload_health_replica_set_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=VALID_PATH_WORKLOAD_HEALTH_REPLICA_SET, params=INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=500)


def test_workload_health_replica_set_invalid_type_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=VALID_PATH_WORKLOAD_HEALTH_REPLICA_SET, params=INVALID_PARAM_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=500)


def test_workload_health_replica_set_invalid_workload_invalid_type_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_WORKLOAD_HEALTH_REPLICA_SET_WORKLOAD, params=INVALID_TYPE_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=404)


def test_workload_health_replica_set_invalid_namespace_invalid_workload_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_PATH_WORKLOAD_HEALTH_REPLICA_SET, params=VALID_PARAM_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=403)


def test_workload_health_replica_set_invalid_namespace_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_NAMESPACE_WORKLOAD_HEALTH_REPLICA_SET, params=INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=403)


def test_workload_health_replica_set_invalid_namespace_invalid_type_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_NAMESPACE_WORKLOAD_HEALTH_REPLICA_SET, params=INVALID_TYPE_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=403)


def test_workload_health_replica_set_invalid_workload_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_WORKLOAD_HEALTH_REPLICA_SET_WORKLOAD, params=INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=500)


def test_workload_health_replica_set_invalid_workload_invalid_query_param_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_WORKLOAD_HEALTH_REPLICA_SET_WORKLOAD, params=INVALID_PARAM_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=500)


def test_workload_health_replica_set_invalid_path_invalid_type_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_PATH_WORKLOAD_HEALTH_REPLICA_SET, params=INVALID_TYPE_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=403)


def test_workload_health_replica_set_invalid_namespace_invalid_query_param_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_NAMESPACE_WORKLOAD_HEALTH_REPLICA_SET, params=INVALID_PARAM_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=403)


def test_workload_health_replica_set_invalid_path_invalid_rateinterval_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_PATH_WORKLOAD_HEALTH_REPLICA_SET, params=INVALID_RATE_INTERVAL_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=403)


def test_workload_health_replica_set_invalid_path_invalid_query_param_negative(kiali_client):
    response = common_utils.get_response(kiali_client, method_name='workloadHealth', path=INVALID_PATH_WORKLOAD_HEALTH_REPLICA_SET, params=INVALID_PARAM_WORKLOAD_HEALTH_REPLICA_SET, status_code_expected=403)
| 67.102662 | 227 | 0.855338 | 2,191 | 17,648 | 6.349612 | 0.028298 | 0.106671 | 0.093157 | 0.086256 | 0.955722 | 0.943358 | 0.916906 | 0.890742 | 0.871765 | 0.858611 | 0 | 0.010004 | 0.05978 | 17,648 | 262 | 228 | 67.358779 | 0.828422 | 0 | 0 | 0.142857 | 0 | 0 | 0.11526 | 0.00391 | 0 | 0 | 0 | 0 | 0.054422 | 1 | 0.333333 | false | 0 | 0.020408 | 0 | 0.353742 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5d8cbc374984cfea12b2cba88e1c9e09bcf9b88e | 11,850 | py | Python | script.py | vinayak1998/Amazon-Rating-Prediction | fdf1ff9c6e9b5692961cb5527bd35f61eeaf610c | [
"MIT"
] | null | null | null | script.py | vinayak1998/Amazon-Rating-Prediction | fdf1ff9c6e9b5692961cb5527bd35f61eeaf610c | [
"MIT"
] | null | null | null | script.py | vinayak1998/Amazon-Rating-Prediction | fdf1ff9c6e9b5692961cb5527bd35f61eeaf610c | [
"MIT"
] | 1 | 2020-02-22T22:02:01.000Z | 2020-02-22T22:02:01.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@author: vinayak
"""
import sys
part = sys.argv[1]
if part == 'a':
    import pandas as pd
    import numpy as np

    Train = pd.read_csv(sys.argv[2], header=None)
    Test = pd.read_csv(sys.argv[3], header=None)
    one = {}
    two = {}
    three = {}
    four = {}
    five = {}
    total = {}
    Train.columns = ['rating', 'review']
    Test.columns = ['rating', 'review']
    len_train = len(Train.rating)
    len_test = len(Test.rating)
    X_Train = list(Train.review)
    Y_Train = list(Train.rating)
    X_Test = list(Test.review)

    def adddict(d, s):
        if s in d:
            d[s] = d[s] + 1
        else:
            d[s] = 1
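
    # adddict is a hand-rolled counter; an equivalent standard-library
    # sketch (shown for orientation only, not what this script uses):
    #
    #     from collections import Counter
    #     counts = Counter(words)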
    prob = [0 for _ in range(5)]
    for i in range(0, len_train):
        a = int(Y_Train[i])
        prob[a - 1] += 1
        if type(X_Train[i]) is str:
            b = X_Train[i].split()
            for s in b:
                adddict(total, s)
                if a == 1:
                    adddict(one, s)
                elif a == 2:
                    adddict(two, s)
                elif a == 3:
                    adddict(three, s)
                elif a == 4:
                    adddict(four, s)
                elif a == 5:
                    adddict(five, s)
    for i in range(0, 5):
        prob[i] /= len_train
    haha = []
    for i in range(0, len_test):
        if type(X_Test[i]) is str:
            b = X_Test[i].split()
            p1 = np.log(prob[0])
            for s in b:
                if s in one:
                    p1 += np.log(one[s] / total[s])
            p2 = np.log(prob[1])
            for s in b:
                if s in two:
                    p2 += np.log(two[s] / total[s])
            p3 = np.log(prob[2])
            for s in b:
                if s in three:
                    p3 += np.log(three[s] / total[s])
            p4 = np.log(prob[3])
            for s in b:
                if s in four:
                    p4 += np.log(four[s] / total[s])
            p5 = np.log(prob[4])
            for s in b:
                if s in five:
                    p5 += np.log(five[s] / total[s])
            ans = [p1, p2, p3, p4, p5]
            haha += [ans.index(max(ans)) + 1]
    with open(sys.argv[4], 'w') as f:
        for item in haha:
            f.write("%s\n" % item)
if part == 'b':
    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
    """
    @author: vinayak
    """
    import pandas as pd
    import numpy as np
    import nltk
    from nltk.tokenize import sent_tokenize, word_tokenize
    from nltk.corpus import stopwords
    nltk.download('stopwords')
    stop_words = set(stopwords.words('english'))
    from nltk.stem import PorterStemmer, WordNetLemmatizer
    stemmer = PorterStemmer()
    lemmatiser = WordNetLemmatizer()
    def clean(string):
        string = string.replace("0", "")
        string = string.replace("1", "")
        string = string.replace("2", "")
        string = string.replace("3", "")
        string = string.replace("4", "")
        string = string.replace("5", "")
        string = string.replace("6", "")
        string = string.replace("7", "")
        string = string.replace("8", "")
        string = string.replace("9", "")
        string = string.replace("&", " ")
        string = string.replace("#", " ")
        string = string.replace("'", " ")
        string = string.replace(",", " ")
        string = string.replace("(", " ")
        string = string.replace(")", " ")
        string = string.replace("-", " ")
        string = string.replace("/", " ")
        string = string.replace(":", " ")
        string = string.replace(";", " ")
        string = string.replace("*", " ")
        string = string.replace("$", " ")
        string = string.replace("%", " ")
        string = string.replace(".", " ")
        string = string.replace('"', " ")
        string = string.replace('-', " ")
        string = string.replace('_', " ")
        string = string.replace('[', " ")
        string = string.replace(']', " ")
        string = string.replace('?', " ")
        string = string.replace('+', " ")
        string = string.replace('=', " ")
        string = string.replace('{', " ")
        string = string.replace('}', " ")
        string = string.replace('   ', " ")
        string = string.replace('  ', " ")
        string = string.replace('  ', " ")
        return string
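
    # The chain of replace() calls above could be collapsed into a single
    # translation table; an equivalent-in-spirit sketch (not what this
    # script actually runs):
    #
    #     table = str.maketrans("&#',()-/:;*$%.\"_[]?+={}", " " * 23,
    #                           "0123456789")
    #     cleaned = " ".join(string.translate(table).split())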
    Train = pd.read_csv(sys.argv[2], header=None)
    Test = pd.read_csv(sys.argv[3], header=None)
    one = {}
    two = {}
    three = {}
    four = {}
    five = {}
    total = {}
    Train.columns = ['rating', 'review']
    Test.columns = ['rating', 'review']
    len_train = len(Train.rating)
    len_test = len(Test.rating)
    X_Train = list(Train.review)
    Y_Train = list(Train.rating)
    X_Test = list(Test.review)

    def adddict(d, s):
        if s in d:
            d[s] = d[s] + 1
        else:
            d[s] = 1
    prob = [0 for _ in range(5)]
    for i in range(0, len_train):
        a = int(Y_Train[i])
        prob[a - 1] += 1
        print(i)
        if type(X_Train[i]) is str:
            b = clean(X_Train[i])
            b = stemmer.stem(b)
            b = b.split()
            b = [item for item in b if item not in stop_words]
            for s in b:
                adddict(total, s)
                if a == 1:
                    adddict(one, s)
                elif a == 2:
                    adddict(two, s)
                elif a == 3:
                    adddict(three, s)
                elif a == 4:
                    adddict(four, s)
                elif a == 5:
                    adddict(five, s)
    for i in range(0, 5):
        prob[i] /= len_train
    haha = []
    for i in range(0, len_test):
        if type(X_Test[i]) is str:
            # apply the same preprocessing as training, so the test tokens
            # can actually match the stemmed vocabulary in the dictionaries
            b = clean(X_Test[i])
            b = stemmer.stem(b)
            b = b.split()
            b = [item for item in b if item not in stop_words]
            p1 = np.log(prob[0])
            for s in b:
                if s in one:
                    p1 += np.log(one[s] / total[s])
            p2 = np.log(prob[1])
            for s in b:
                if s in two:
                    p2 += np.log(two[s] / total[s])
            p3 = np.log(prob[2])
            for s in b:
                if s in three:
                    p3 += np.log(three[s] / total[s])
            p4 = np.log(prob[3])
            for s in b:
                if s in four:
                    p4 += np.log(four[s] / total[s])
            p5 = np.log(prob[4])
            for s in b:
                if s in five:
                    p5 += np.log(five[s] / total[s])
            ans = [p1, p2, p3, p4, p5]
            haha += [ans.index(max(ans)) + 1]
    with open(sys.argv[4], 'w') as f:
        for item in haha:
            f.write("%s\n" % item)
if part == 'c':
    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
    """
    @author: vinayak
    """
    import pandas as pd
    import numpy as np
    import nltk
    from nltk.tokenize import sent_tokenize, word_tokenize
    from nltk.corpus import stopwords
    nltk.download('stopwords')
    stop_words = set(stopwords.words('english'))
    from nltk.stem import PorterStemmer, WordNetLemmatizer
    stemmer = PorterStemmer()
    lemmatiser = WordNetLemmatizer()
    def clean(string):
        string = string.replace("0", "")
        string = string.replace("1", "")
        string = string.replace("2", "")
        string = string.replace("3", "")
        string = string.replace("4", "")
        string = string.replace("5", "")
        string = string.replace("6", "")
        string = string.replace("7", "")
        string = string.replace("8", "")
        string = string.replace("9", "")
        string = string.replace("&", " ")
        string = string.replace("#", " ")
        string = string.replace("'", " ")
        string = string.replace(",", " ")
        string = string.replace("(", " ")
        string = string.replace(")", " ")
        string = string.replace("-", " ")
        string = string.replace("/", " ")
        string = string.replace(":", " ")
        string = string.replace(";", " ")
        string = string.replace("*", " ")
        string = string.replace("$", " ")
        string = string.replace("%", " ")
        string = string.replace(".", " ")
        string = string.replace('"', " ")
        string = string.replace('-', " ")
        string = string.replace('_', " ")
        string = string.replace('[', " ")
        string = string.replace(']', " ")
        string = string.replace('?', " ")
        string = string.replace('+', " ")
        string = string.replace('=', " ")
        string = string.replace('{', " ")
        string = string.replace('}', " ")
        string = string.replace('   ', " ")
        string = string.replace('  ', " ")
        string = string.replace('  ', " ")
        return string
    Train = pd.read_csv(sys.argv[2], header=None)
    Test = pd.read_csv(sys.argv[3], header=None)
    one = {}
    two = {}
    three = {}
    four = {}
    five = {}
    total = {}
    Train.columns = ['rating', 'review']
    Test.columns = ['rating', 'review']
    len_train = len(Train.rating)
    len_test = len(Test.rating)
    X_Train = list(Train.review)
    Y_Train = list(Train.rating)
    X_Test = list(Test.review)

    def adddict(d, s):
        if s in d:
            d[s] = d[s] + 1
        else:
            d[s] = 1
    prob = [0 for _ in range(5)]
    for i in range(0, len_train):
        a = int(Y_Train[i])
        prob[a - 1] += 1
        print(i)
        if type(X_Train[i]) is str:
            b = clean(X_Train[i])
            b = lemmatiser.lemmatize(b)
            b = stemmer.stem(b)
            b = list(nltk.bigrams(b.split()))
            b = [item for item in b if item not in stop_words]
            for s in b:
                adddict(total, s)
                if a == 1:
                    adddict(one, s)
                elif a == 2:
                    adddict(two, s)
                elif a == 3:
                    adddict(three, s)
                elif a == 4:
                    adddict(four, s)
                elif a == 5:
                    adddict(five, s)
    for i in range(0, 5):
        prob[i] /= len_train
    haha = []
    for i in range(0, len_test):
        if type(X_Test[i]) is str:
            # mirror the training preprocessing: bigram tuples are the keys
            # of the class dictionaries, so raw unigrams would never match
            b = clean(X_Test[i])
            b = lemmatiser.lemmatize(b)
            b = stemmer.stem(b)
            b = list(nltk.bigrams(b.split()))
            b = [item for item in b if item not in stop_words]
            p1 = np.log(prob[0])
            for s in b:
                if s in one:
                    p1 += np.log(one[s] / total[s])
            p2 = np.log(prob[1])
            for s in b:
                if s in two:
                    p2 += np.log(two[s] / total[s])
            p3 = np.log(prob[2])
            for s in b:
                if s in three:
                    p3 += np.log(three[s] / total[s])
            p4 = np.log(prob[3])
            for s in b:
                if s in four:
                    p4 += np.log(four[s] / total[s])
            p5 = np.log(prob[4])
            for s in b:
                if s in five:
                    p5 += np.log(five[s] / total[s])
            ans = [p1, p2, p3, p4, p5]
            haha += [ans.index(max(ans)) + 1]
    with open(sys.argv[4], 'w') as f:
        for item in haha:
            f.write("%s\n" % item)
| 27.686916 | 64 | 0.420169 | 1,356 | 11,850 | 3.629056 | 0.079646 | 0.185328 | 0.285714 | 0.264174 | 0.985775 | 0.980085 | 0.980085 | 0.980085 | 0.969925 | 0.962 | 0 | 0.02148 | 0.426414 | 11,850 | 427 | 65 | 27.751756 | 0.702516 | 0.015949 | 0 | 0.971963 | 0 | 0 | 0.022523 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015576 | false | 0 | 0.046729 | 0 | 0.068536 | 0.006231 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
5da5063f4f967a2e09b74fcf7973d42af9c84c6c | 24,697 | py | Python | equipment/tests/test_equipment.py | shubhamkulkarni01/EMSTrack-Django | 32ff9ed94a38730c0e9f6385c75060e2d30a930e | [
"MIT",
"BSD-3-Clause"
] | 2 | 2020-07-16T01:44:54.000Z | 2020-10-25T02:08:47.000Z | equipment/tests/test_equipment.py | shubhamkulkarni01/EMSTrack-Django | 32ff9ed94a38730c0e9f6385c75060e2d30a930e | [
"MIT",
"BSD-3-Clause"
] | 8 | 2020-04-20T22:13:56.000Z | 2022-02-04T17:50:44.000Z | equipment/tests/test_equipment.py | shubhamkulkarni01/EMSTrack-Django | 32ff9ed94a38730c0e9f6385c75060e2d30a930e | [
"MIT",
"BSD-3-Clause"
] | 2 | 2020-07-20T23:39:44.000Z | 2022-02-24T00:29:10.000Z | import json
from io import BytesIO
from rest_framework.parsers import JSONParser
from django.conf import settings
from django.test import Client
from equipment.models import Equipment, EquipmentItem
from equipment.serializers import EquipmentItemSerializer, EquipmentSerializer
from emstrack.tests.util import date2iso
from login.tests.setup_data import TestSetup
class TestEquipmentItemGetList(TestSetup):
def test_equipment_item_serializer(self):
# test EquipmentItemSerializer
for he in (self.he1, self.he2, self.he3, self.he4):
serializer = EquipmentItemSerializer(he)
result = {
'equipmentholder_id': he.equipmentholder.id,
'equipment_id': he.equipment.id,
'equipment_name': he.equipment.name,
'equipment_type': he.equipment.type,
'value': he.value,
'comment': he.comment,
'updated_by': he.updated_by.id,
'updated_on': date2iso(he.updated_on)
}
self.assertDictEqual(serializer.data, result)
def test_equipment_item_get_viewset(self):
# instantiate client
client = Client()
# login as admin
client.login(username=settings.MQTT['USERNAME'], password=settings.MQTT['PASSWORD'])
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e1.id)).data
self.assertDictEqual(result, answer)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e2.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e2.id)).data
self.assertDictEqual(result, answer)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e1.id)).data
self.assertDictEqual(result, answer)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e3.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e3.id)).data
self.assertDictEqual(result, answer)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h3.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h3.equipmentholder.id, equipment=self.e1.id)).data
self.assertDictEqual(result, answer)
# retrieve nonexistent
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h3.equipmentholder.id), str(self.e2.id)),
follow=True)
self.assertEqual(response.status_code, 404)
# logout
client.logout()
# login as testuser1
client.login(username='testuser1', password='top_secret')
# retrieve someone else's
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h3.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# retrieve own hospital equipment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e1.id)).data
self.assertDictEqual(result, answer)
# retrieve own hospital equipment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e2.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e2.id)).data
self.assertDictEqual(result, answer)
# retrieve own hospital equipment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e1.id)).data
self.assertDictEqual(result, answer)
# retrieve own hospital equipment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e3.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e3.id)).data
self.assertDictEqual(result, answer)
# logout
client.logout()
# login as testuser2
client.login(username='testuser2', password='very_secret')
# retrieve someone else's
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h3.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# retrieve someone else's
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# retrieve someone else's
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e2.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# retrieve someone else's
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# retrieve someone else's
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e3.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# logout
client.logout()
def test_equipment_item_list_viewset(self):
# instantiate client
client = Client()
# login as admin
client.login(username=settings.MQTT['USERNAME'], password=settings.MQTT['PASSWORD'])
# retrieve all hospital equipment
response = client.get('/en/api/equipment/{}/item/'.format(str(self.h1.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentItemSerializer(EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e1.id)).data,
EquipmentItemSerializer(EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e2.id)).data
]
self.assertCountEqual(result, answer)
# retrieve all hospital equipment
response = client.get('/en/api/equipment/{}/item/'.format(str(self.h2.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentItemSerializer(EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e1.id)).data,
EquipmentItemSerializer(EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e3.id)).data
]
self.assertCountEqual(result, answer)
# retrieve all hospital equipment
response = client.get('/en/api/equipment/{}/item/'.format(str(self.h3.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentItemSerializer(EquipmentItem.objects.get(equipmentholder=self.h3.equipmentholder.id, equipment=self.e1.id)).data
]
self.assertCountEqual(result, answer)
# retrieve nonexistent
response = client.get('/en/api/equipment/{}/item/'.format(1000),
follow=True)
self.assertEqual(response.status_code, 403)
# logout
client.logout()
# login as testuser1
client.login(username='testuser1', password='top_secret')
# retrieve all hospital equipment
response = client.get('/en/api/equipment/{}/item/'.format(str(self.h1.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentItemSerializer(EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e1.id)).data,
EquipmentItemSerializer(EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e2.id)).data
]
self.assertCountEqual(result, answer)
# retrieve all hospital equipment
response = client.get('/en/api/equipment/{}/item/'.format(str(self.h2.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentItemSerializer(EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e1.id)).data,
EquipmentItemSerializer(EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e3.id)).data
]
self.assertCountEqual(result, answer)
# retrieve all hospital equipment
response = client.get('/en/api/equipment/{}/item/'.format(str(self.h3.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# logout
client.logout()
# login as testuser2
client.login(username='testuser2', password='very_secret')
# retrieve all hospital equipment
response = client.get('/en/api/equipment/{}/item/'.format(str(self.h1.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# retrieve all hospital equipment
response = client.get('/en/api/equipment/{}/item/'.format(str(self.h2.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# retrieve all hospital equipment
response = client.get('/en/api/equipment/{}/item/'.format(str(self.h3.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 403)
# logout
client.logout()
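# The retrieve-and-compare steps above repeat the same four lines throughout
# this class; a hypothetical module-level helper (not part of the original
# suite) could collapse each successful case to a single call:
def _get_item_and_check(testcase, client, holder_id, equipment_id):
    # fetch one equipment item over the API and compare it against the
    # serialized database row
    response = client.get(
        '/en/api/equipment/{}/item/{}/'.format(holder_id, equipment_id),
        follow=True)
    testcase.assertEqual(response.status_code, 200)
    result = JSONParser().parse(BytesIO(response.content))
    answer = EquipmentItemSerializer(
        EquipmentItem.objects.get(equipmentholder=holder_id,
                                  equipment=equipment_id)).data
    testcase.assertDictEqual(result, answer)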
class TestEquipmentItemUpdate(TestSetup):
def test_equipment_item_update_viewset(self):
# instantiate client
client = Client()
# login as admin
client.login(username=settings.MQTT['USERNAME'], password=settings.MQTT['PASSWORD'])
# set equipment value
value = 'True'
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e1.id)),
content_type='application/json',
data=json.dumps({
'value': value
})
)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e1.id)).data
self.assertDictEqual(result, answer)
# retrieve equipment value
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
self.assertEqual(result['value'], value)
# set equipment comment
comment = 'some comment'
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e1.id)),
content_type='application/json',
data=json.dumps({
'comment': comment
})
)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h1.equipmentholder.id, equipment=self.e1.id)).data
self.assertDictEqual(result, answer)
# retrieve equipment comment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
self.assertEqual(result['value'], value)
self.assertEqual(result['comment'], comment)
# set nonexistent equipment
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e3.id)),
content_type='application/json',
data=json.dumps({
'comment': comment
})
)
self.assertEqual(response.status_code, 404)
# set wrong equipmentholder id
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id + 100), str(self.e1.id)),
content_type='application/json',
data=json.dumps({
'comment': comment
})
)
self.assertEqual(response.status_code, 403)
# set wrong equipment id
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), -1),
content_type='application/json',
data=json.dumps({
'comment': comment
})
)
self.assertEqual(response.status_code, 404)
# logout
client.logout()
# login as testuser1
client.login(username='testuser1', password='top_secret')
# set equipment value
value = 'False'
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e1.id)),
content_type='application/json',
data=json.dumps({
'value': value
})
)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e1.id)).data
self.assertDictEqual(result, answer)
# retrieve equipment value
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
self.assertEqual(result['value'], value)
# set equipment comment
comment = 'some new comment'
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e1.id)),
content_type='application/json',
data=json.dumps({
'comment': comment
})
)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = EquipmentItemSerializer(
EquipmentItem.objects.get(equipmentholder=self.h2.equipmentholder.id, equipment=self.e1.id)).data
self.assertDictEqual(result, answer)
# retrieve equipment comment
response = client.get('/en/api/equipment/{}/item/{}/'.format(str(self.h2.equipmentholder.id), str(self.e1.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
self.assertEqual(result['value'], value)
self.assertEqual(result['comment'], comment)
# not permitted to write
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e1.id)),
content_type='application/json',
data=json.dumps({
'value': value
})
)
self.assertEqual(response.status_code, 403)
# logout
client.logout()
# login as testuser2
client.login(username='testuser2', password='very_secret')
# set equipment value
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e1.id)),
content_type='application/json',
data=json.dumps({
'value': value
})
)
self.assertEqual(response.status_code, 403)
# set equipment value
response = client.patch('/en/api/equipment/{}/item/{}/'.format(str(self.h1.equipmentholder.id), str(self.e2.id)),
content_type='application/json',
data=json.dumps({
'value': value
})
)
self.assertEqual(response.status_code, 403)
# logout
client.logout()
class TestEquipmentMetadata(TestSetup):
def test_equipment_metadata_viewset(self):
# instantiate client
client = Client()
# login as admin
client.login(username=settings.MQTT['USERNAME'], password=settings.MQTT['PASSWORD'])
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/metadata/'.format(str(self.h1.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentSerializer(Equipment.objects.get(id=self.e1.id)).data,
EquipmentSerializer(Equipment.objects.get(id=self.e2.id)).data
]
self.assertCountEqual(result, answer)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/metadata/'.format(str(self.h2.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentSerializer(Equipment.objects.get(id=self.e1.id)).data,
EquipmentSerializer(Equipment.objects.get(id=self.e3.id)).data
]
self.assertCountEqual(result, answer)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/metadata/'.format(str(self.h3.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentSerializer(Equipment.objects.get(id=self.e1.id)).data
]
self.assertCountEqual(result, answer)
# logout
client.logout()
# login as testuser1
client.login(username='testuser1', password='top_secret')
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/metadata/'.format(str(self.h1.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentSerializer(Equipment.objects.get(id=self.e1.id)).data,
EquipmentSerializer(Equipment.objects.get(id=self.e2.id)).data
]
self.assertCountEqual(result, answer)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/metadata/'.format(str(self.h2.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 200)
result = JSONParser().parse(BytesIO(response.content))
answer = [
EquipmentSerializer(Equipment.objects.get(id=self.e1.id)).data,
EquipmentSerializer(Equipment.objects.get(id=self.e3.id)).data
]
self.assertCountEqual(result, answer)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/metadata/'.format(str(self.h3.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 404)
# logout
client.logout()
# login as testuser2
client.login(username='testuser2', password='very_secret')
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/metadata/'.format(str(self.h1.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 404)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/metadata/'.format(str(self.h2.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 404)
# retrieve any hospital equipment
response = client.get('/en/api/equipment/{}/metadata/'.format(str(self.h3.equipmentholder.id)),
follow=True)
self.assertEqual(response.status_code, 404)
# logout
client.logout()
| 45.990689 | 134 | 0.59574 | 2,434 | 24,697 | 6.005752 | 0.049712 | 0.036872 | 0.046928 | 0.097209 | 0.934259 | 0.930291 | 0.930155 | 0.929744 | 0.926461 | 0.926461 | 0 | 0.017305 | 0.281654 | 24,697 | 536 | 135 | 46.076493 | 0.806663 | 0.071547 | 0 | 0.829268 | 0 | 0 | 0.087567 | 0.061266 | 0 | 0 | 0 | 0 | 0.214092 | 1 | 0.01355 | false | 0.03252 | 0.02439 | 0 | 0.04607 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5dbf9bcc31353ae9b0fd011827a218770492299c | 60,278 | py | Python | sdk/python/pulumi_aws/codebuild/project.py | aamir-locus/pulumi-aws | 3e234b050129bde35d8e072a88bd608562f02142 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/codebuild/project.py | aamir-locus/pulumi-aws | 3e234b050129bde35d8e072a88bd608562f02142 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/codebuild/project.py | aamir-locus/pulumi-aws | 3e234b050129bde35d8e072a88bd608562f02142 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['ProjectArgs', 'Project']
@pulumi.input_type
class ProjectArgs:
def __init__(__self__, *,
artifacts: pulumi.Input['ProjectArtifactsArgs'],
environment: pulumi.Input['ProjectEnvironmentArgs'],
service_role: pulumi.Input[str],
source: pulumi.Input['ProjectSourceArgs'],
badge_enabled: Optional[pulumi.Input[bool]] = None,
build_timeout: Optional[pulumi.Input[int]] = None,
cache: Optional[pulumi.Input['ProjectCacheArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encryption_key: Optional[pulumi.Input[str]] = None,
logs_config: Optional[pulumi.Input['ProjectLogsConfigArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
queued_timeout: Optional[pulumi.Input[int]] = None,
secondary_artifacts: Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondaryArtifactArgs']]]] = None,
secondary_sources: Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondarySourceArgs']]]] = None,
source_version: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_config: Optional[pulumi.Input['ProjectVpcConfigArgs']] = None):
"""
The set of arguments for constructing a Project resource.
:param pulumi.Input['ProjectArtifactsArgs'] artifacts: Configuration block. Detailed below.
:param pulumi.Input['ProjectEnvironmentArgs'] environment: Configuration block. Detailed below.
:param pulumi.Input[str] service_role: Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
:param pulumi.Input['ProjectSourceArgs'] source: Configuration block. Detailed below.
:param pulumi.Input[bool] badge_enabled: Generates a publicly-accessible URL for the project's build badge. Available as the `badge_url` attribute when enabled.
:param pulumi.Input[int] build_timeout: Number of minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait until timing out any related build that does not get marked as completed. The default is 60 minutes.
:param pulumi.Input['ProjectCacheArgs'] cache: Configuration block. Detailed below.
:param pulumi.Input[str] description: Short description of the project.
:param pulumi.Input[str] encryption_key: AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build project's build output artifacts.
:param pulumi.Input['ProjectLogsConfigArgs'] logs_config: Configuration block. Detailed below.
:param pulumi.Input[str] name: Name of the project. If `type` is set to `S3`, this is the name of the output artifact object.
:param pulumi.Input[int] queued_timeout: Number of minutes, from 5 to 480 (8 hours), a build is allowed to be queued before it times out. The default is 8 hours.
:param pulumi.Input[Sequence[pulumi.Input['ProjectSecondaryArtifactArgs']]] secondary_artifacts: Configuration block. Detailed below.
:param pulumi.Input[Sequence[pulumi.Input['ProjectSecondarySourceArgs']]] secondary_sources: Configuration block. Detailed below.
:param pulumi.Input[str] source_version: Version of the build input to be built for this project. If not specified, the latest version is used.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Map of tags to assign to the resource.
:param pulumi.Input['ProjectVpcConfigArgs'] vpc_config: Configuration block. Detailed below.
"""
pulumi.set(__self__, "artifacts", artifacts)
pulumi.set(__self__, "environment", environment)
pulumi.set(__self__, "service_role", service_role)
pulumi.set(__self__, "source", source)
if badge_enabled is not None:
pulumi.set(__self__, "badge_enabled", badge_enabled)
if build_timeout is not None:
pulumi.set(__self__, "build_timeout", build_timeout)
if cache is not None:
pulumi.set(__self__, "cache", cache)
if description is not None:
pulumi.set(__self__, "description", description)
if encryption_key is not None:
pulumi.set(__self__, "encryption_key", encryption_key)
if logs_config is not None:
pulumi.set(__self__, "logs_config", logs_config)
if name is not None:
pulumi.set(__self__, "name", name)
if queued_timeout is not None:
pulumi.set(__self__, "queued_timeout", queued_timeout)
if secondary_artifacts is not None:
pulumi.set(__self__, "secondary_artifacts", secondary_artifacts)
if secondary_sources is not None:
pulumi.set(__self__, "secondary_sources", secondary_sources)
if source_version is not None:
pulumi.set(__self__, "source_version", source_version)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if vpc_config is not None:
pulumi.set(__self__, "vpc_config", vpc_config)
@property
@pulumi.getter
def artifacts(self) -> pulumi.Input['ProjectArtifactsArgs']:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "artifacts")
@artifacts.setter
def artifacts(self, value: pulumi.Input['ProjectArtifactsArgs']):
pulumi.set(self, "artifacts", value)
@property
@pulumi.getter
def environment(self) -> pulumi.Input['ProjectEnvironmentArgs']:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: pulumi.Input['ProjectEnvironmentArgs']):
pulumi.set(self, "environment", value)
@property
@pulumi.getter(name="serviceRole")
def service_role(self) -> pulumi.Input[str]:
"""
Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
"""
return pulumi.get(self, "service_role")
@service_role.setter
def service_role(self, value: pulumi.Input[str]):
pulumi.set(self, "service_role", value)
@property
@pulumi.getter
def source(self) -> pulumi.Input['ProjectSourceArgs']:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: pulumi.Input['ProjectSourceArgs']):
pulumi.set(self, "source", value)
@property
@pulumi.getter(name="badgeEnabled")
def badge_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Generates a publicly-accessible URL for the project's build badge. Available as the `badge_url` attribute when enabled.
"""
return pulumi.get(self, "badge_enabled")
@badge_enabled.setter
def badge_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "badge_enabled", value)
@property
@pulumi.getter(name="buildTimeout")
def build_timeout(self) -> Optional[pulumi.Input[int]]:
"""
Number of minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait until timing out any related build that does not get marked as completed. The default is 60 minutes.
"""
return pulumi.get(self, "build_timeout")
@build_timeout.setter
def build_timeout(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "build_timeout", value)
@property
@pulumi.getter
def cache(self) -> Optional[pulumi.Input['ProjectCacheArgs']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "cache")
@cache.setter
def cache(self, value: Optional[pulumi.Input['ProjectCacheArgs']]):
pulumi.set(self, "cache", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Short description of the project.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptionKey")
def encryption_key(self) -> Optional[pulumi.Input[str]]:
"""
AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build project's build output artifacts.
"""
return pulumi.get(self, "encryption_key")
@encryption_key.setter
def encryption_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "encryption_key", value)
@property
@pulumi.getter(name="logsConfig")
def logs_config(self) -> Optional[pulumi.Input['ProjectLogsConfigArgs']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "logs_config")
@logs_config.setter
def logs_config(self, value: Optional[pulumi.Input['ProjectLogsConfigArgs']]):
pulumi.set(self, "logs_config", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the project. If `type` is set to `S3`, this is the name of the output artifact object.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="queuedTimeout")
def queued_timeout(self) -> Optional[pulumi.Input[int]]:
"""
Number of minutes, from 5 to 480 (8 hours), a build is allowed to be queued before it times out. The default is 8 hours.
"""
return pulumi.get(self, "queued_timeout")
@queued_timeout.setter
def queued_timeout(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "queued_timeout", value)
@property
@pulumi.getter(name="secondaryArtifacts")
def secondary_artifacts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondaryArtifactArgs']]]]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "secondary_artifacts")
@secondary_artifacts.setter
def secondary_artifacts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondaryArtifactArgs']]]]):
pulumi.set(self, "secondary_artifacts", value)
@property
@pulumi.getter(name="secondarySources")
def secondary_sources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondarySourceArgs']]]]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "secondary_sources")
@secondary_sources.setter
def secondary_sources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondarySourceArgs']]]]):
pulumi.set(self, "secondary_sources", value)
@property
@pulumi.getter(name="sourceVersion")
def source_version(self) -> Optional[pulumi.Input[str]]:
"""
Version of the build input to be built for this project. If not specified, the latest version is used.
"""
return pulumi.get(self, "source_version")
@source_version.setter
def source_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_version", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Map of tags to assign to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="vpcConfig")
def vpc_config(self) -> Optional[pulumi.Input['ProjectVpcConfigArgs']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "vpc_config")
@vpc_config.setter
def vpc_config(self, value: Optional[pulumi.Input['ProjectVpcConfigArgs']]):
pulumi.set(self, "vpc_config", value)
@pulumi.input_type
class _ProjectState:
def __init__(__self__, *,
arn: Optional[pulumi.Input[str]] = None,
artifacts: Optional[pulumi.Input['ProjectArtifactsArgs']] = None,
badge_enabled: Optional[pulumi.Input[bool]] = None,
badge_url: Optional[pulumi.Input[str]] = None,
build_timeout: Optional[pulumi.Input[int]] = None,
cache: Optional[pulumi.Input['ProjectCacheArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encryption_key: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input['ProjectEnvironmentArgs']] = None,
logs_config: Optional[pulumi.Input['ProjectLogsConfigArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
queued_timeout: Optional[pulumi.Input[int]] = None,
secondary_artifacts: Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondaryArtifactArgs']]]] = None,
secondary_sources: Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondarySourceArgs']]]] = None,
service_role: Optional[pulumi.Input[str]] = None,
source: Optional[pulumi.Input['ProjectSourceArgs']] = None,
source_version: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_config: Optional[pulumi.Input['ProjectVpcConfigArgs']] = None):
"""
Input properties used for looking up and filtering Project resources.
:param pulumi.Input[str] arn: ARN of the CodeBuild project.
:param pulumi.Input['ProjectArtifactsArgs'] artifacts: Configuration block. Detailed below.
:param pulumi.Input[bool] badge_enabled: Generates a publicly-accessible URL for the project's build badge. Available as the `badge_url` attribute when enabled.
:param pulumi.Input[str] badge_url: URL of the build badge when `badge_enabled` is enabled.
:param pulumi.Input[int] build_timeout: Number of minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait until timing out any related build that does not get marked as completed. The default is 60 minutes.
:param pulumi.Input['ProjectCacheArgs'] cache: Configuration block. Detailed below.
:param pulumi.Input[str] description: Short description of the project.
:param pulumi.Input[str] encryption_key: AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build project's build output artifacts.
:param pulumi.Input['ProjectEnvironmentArgs'] environment: Configuration block. Detailed below.
:param pulumi.Input['ProjectLogsConfigArgs'] logs_config: Configuration block. Detailed below.
:param pulumi.Input[str] name: Name of the project. If `type` is set to `S3`, this is the name of the output artifact object.
:param pulumi.Input[int] queued_timeout: Number of minutes, from 5 to 480 (8 hours), a build is allowed to be queued before it times out. The default is 8 hours.
:param pulumi.Input[Sequence[pulumi.Input['ProjectSecondaryArtifactArgs']]] secondary_artifacts: Configuration block. Detailed below.
:param pulumi.Input[Sequence[pulumi.Input['ProjectSecondarySourceArgs']]] secondary_sources: Configuration block. Detailed below.
:param pulumi.Input[str] service_role: Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
:param pulumi.Input['ProjectSourceArgs'] source: Configuration block. Detailed below.
:param pulumi.Input[str] source_version: Version of the build input to be built for this project. If not specified, the latest version is used.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Map of tags to assign to the resource.
:param pulumi.Input['ProjectVpcConfigArgs'] vpc_config: Configuration block. Detailed below.
"""
if arn is not None:
pulumi.set(__self__, "arn", arn)
if artifacts is not None:
pulumi.set(__self__, "artifacts", artifacts)
if badge_enabled is not None:
pulumi.set(__self__, "badge_enabled", badge_enabled)
if badge_url is not None:
pulumi.set(__self__, "badge_url", badge_url)
if build_timeout is not None:
pulumi.set(__self__, "build_timeout", build_timeout)
if cache is not None:
pulumi.set(__self__, "cache", cache)
if description is not None:
pulumi.set(__self__, "description", description)
if encryption_key is not None:
pulumi.set(__self__, "encryption_key", encryption_key)
if environment is not None:
pulumi.set(__self__, "environment", environment)
if logs_config is not None:
pulumi.set(__self__, "logs_config", logs_config)
if name is not None:
pulumi.set(__self__, "name", name)
if queued_timeout is not None:
pulumi.set(__self__, "queued_timeout", queued_timeout)
if secondary_artifacts is not None:
pulumi.set(__self__, "secondary_artifacts", secondary_artifacts)
if secondary_sources is not None:
pulumi.set(__self__, "secondary_sources", secondary_sources)
if service_role is not None:
pulumi.set(__self__, "service_role", service_role)
if source is not None:
pulumi.set(__self__, "source", source)
if source_version is not None:
pulumi.set(__self__, "source_version", source_version)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if vpc_config is not None:
pulumi.set(__self__, "vpc_config", vpc_config)
@property
@pulumi.getter
def arn(self) -> Optional[pulumi.Input[str]]:
"""
ARN of the CodeBuild project.
"""
return pulumi.get(self, "arn")
@arn.setter
def arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "arn", value)
@property
@pulumi.getter
def artifacts(self) -> Optional[pulumi.Input['ProjectArtifactsArgs']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "artifacts")
@artifacts.setter
def artifacts(self, value: Optional[pulumi.Input['ProjectArtifactsArgs']]):
pulumi.set(self, "artifacts", value)
@property
@pulumi.getter(name="badgeEnabled")
def badge_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Generates a publicly-accessible URL for the project's build badge. Available as the `badge_url` attribute when enabled.
"""
return pulumi.get(self, "badge_enabled")
@badge_enabled.setter
def badge_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "badge_enabled", value)
@property
@pulumi.getter(name="badgeUrl")
def badge_url(self) -> Optional[pulumi.Input[str]]:
"""
URL of the build badge when `badge_enabled` is enabled.
"""
return pulumi.get(self, "badge_url")
@badge_url.setter
def badge_url(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "badge_url", value)
@property
@pulumi.getter(name="buildTimeout")
def build_timeout(self) -> Optional[pulumi.Input[int]]:
"""
Number of minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait until timing out any related build that does not get marked as completed. The default is 60 minutes.
"""
return pulumi.get(self, "build_timeout")
@build_timeout.setter
def build_timeout(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "build_timeout", value)
@property
@pulumi.getter
def cache(self) -> Optional[pulumi.Input['ProjectCacheArgs']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "cache")
@cache.setter
def cache(self, value: Optional[pulumi.Input['ProjectCacheArgs']]):
pulumi.set(self, "cache", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Short description of the project.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptionKey")
def encryption_key(self) -> Optional[pulumi.Input[str]]:
"""
AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build project's build output artifacts.
"""
return pulumi.get(self, "encryption_key")
@encryption_key.setter
def encryption_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "encryption_key", value)
@property
@pulumi.getter
def environment(self) -> Optional[pulumi.Input['ProjectEnvironmentArgs']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: Optional[pulumi.Input['ProjectEnvironmentArgs']]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter(name="logsConfig")
def logs_config(self) -> Optional[pulumi.Input['ProjectLogsConfigArgs']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "logs_config")
@logs_config.setter
def logs_config(self, value: Optional[pulumi.Input['ProjectLogsConfigArgs']]):
pulumi.set(self, "logs_config", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the project. If `type` is set to `S3`, this is the name of the output artifact object.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="queuedTimeout")
def queued_timeout(self) -> Optional[pulumi.Input[int]]:
"""
Number of minutes, from 5 to 480 (8 hours), a build is allowed to be queued before it times out. The default is 8 hours.
"""
return pulumi.get(self, "queued_timeout")
@queued_timeout.setter
def queued_timeout(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "queued_timeout", value)
@property
@pulumi.getter(name="secondaryArtifacts")
def secondary_artifacts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondaryArtifactArgs']]]]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "secondary_artifacts")
@secondary_artifacts.setter
def secondary_artifacts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondaryArtifactArgs']]]]):
pulumi.set(self, "secondary_artifacts", value)
@property
@pulumi.getter(name="secondarySources")
def secondary_sources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondarySourceArgs']]]]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "secondary_sources")
@secondary_sources.setter
def secondary_sources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ProjectSecondarySourceArgs']]]]):
pulumi.set(self, "secondary_sources", value)
@property
@pulumi.getter(name="serviceRole")
def service_role(self) -> Optional[pulumi.Input[str]]:
"""
Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
"""
return pulumi.get(self, "service_role")
@service_role.setter
def service_role(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "service_role", value)
@property
@pulumi.getter
def source(self) -> Optional[pulumi.Input['ProjectSourceArgs']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: Optional[pulumi.Input['ProjectSourceArgs']]):
pulumi.set(self, "source", value)
@property
@pulumi.getter(name="sourceVersion")
def source_version(self) -> Optional[pulumi.Input[str]]:
"""
Version of the build input to be built for this project. If not specified, the latest version is used.
"""
return pulumi.get(self, "source_version")
@source_version.setter
def source_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_version", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Map of tags to assign to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="vpcConfig")
def vpc_config(self) -> Optional[pulumi.Input['ProjectVpcConfigArgs']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "vpc_config")
@vpc_config.setter
def vpc_config(self, value: Optional[pulumi.Input['ProjectVpcConfigArgs']]):
pulumi.set(self, "vpc_config", value)
class Project(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
artifacts: Optional[pulumi.Input[pulumi.InputType['ProjectArtifactsArgs']]] = None,
badge_enabled: Optional[pulumi.Input[bool]] = None,
build_timeout: Optional[pulumi.Input[int]] = None,
cache: Optional[pulumi.Input[pulumi.InputType['ProjectCacheArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
encryption_key: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[pulumi.InputType['ProjectEnvironmentArgs']]] = None,
logs_config: Optional[pulumi.Input[pulumi.InputType['ProjectLogsConfigArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
queued_timeout: Optional[pulumi.Input[int]] = None,
secondary_artifacts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondaryArtifactArgs']]]]] = None,
secondary_sources: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondarySourceArgs']]]]] = None,
service_role: Optional[pulumi.Input[str]] = None,
source: Optional[pulumi.Input[pulumi.InputType['ProjectSourceArgs']]] = None,
source_version: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_config: Optional[pulumi.Input[pulumi.InputType['ProjectVpcConfigArgs']]] = None,
__props__=None):
"""
Provides a CodeBuild Project resource. See also the `codebuild.Webhook` resource, which manages the webhook to the source (e.g. the "rebuild every time a code change is pushed" option in the CodeBuild web console).
## Example Usage
```python
import pulumi
import pulumi_aws as aws
example_bucket = aws.s3.Bucket("exampleBucket", acl="private")
example_role = aws.iam.Role("exampleRole", assume_role_policy=\"\"\"{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
\"\"\")
example_role_policy = aws.iam.RolePolicy("exampleRolePolicy",
role=example_role.name,
policy=pulumi.Output.all(example_bucket.arn, example_bucket.arn).apply(lambda exampleBucketArn, exampleBucketArn1: f\"\"\"{{
"Version": "2012-10-17",
"Statement": [
{{
"Effect": "Allow",
"Resource": [
"*"
],
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
}},
{{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeDhcpOptions",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeVpcs"
],
"Resource": "*"
}},
{{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterfacePermission"
],
"Resource": [
"arn:aws:ec2:us-east-1:123456789012:network-interface/*"
],
"Condition": {{
"StringEquals": {{
"ec2:Subnet": [
"{aws_subnet["example1"]["arn"]}",
"{aws_subnet["example2"]["arn"]}"
],
"ec2:AuthorizedService": "codebuild.amazonaws.com"
}}
}}
}},
{{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"{example_bucket_arn}",
"{example_bucket_arn1}/*"
]
}}
]
}}
\"\"\"))
example_project = aws.codebuild.Project("exampleProject",
description="test_codebuild_project",
build_timeout=5,
service_role=example_role.arn,
artifacts=aws.codebuild.ProjectArtifactsArgs(
type="NO_ARTIFACTS",
),
cache=aws.codebuild.ProjectCacheArgs(
type="S3",
location=example_bucket.bucket,
),
environment=aws.codebuild.ProjectEnvironmentArgs(
compute_type="BUILD_GENERAL1_SMALL",
image="aws/codebuild/standard:1.0",
type="LINUX_CONTAINER",
image_pull_credentials_type="CODEBUILD",
environment_variables=[
aws.codebuild.ProjectEnvironmentEnvironmentVariableArgs(
name="SOME_KEY1",
value="SOME_VALUE1",
),
aws.codebuild.ProjectEnvironmentEnvironmentVariableArgs(
name="SOME_KEY2",
value="SOME_VALUE2",
type="PARAMETER_STORE",
),
],
),
logs_config=aws.codebuild.ProjectLogsConfigArgs(
cloudwatch_logs=aws.codebuild.ProjectLogsConfigCloudwatchLogsArgs(
group_name="log-group",
stream_name="log-stream",
),
s3_logs=aws.codebuild.ProjectLogsConfigS3LogsArgs(
status="ENABLED",
location=example_bucket.id.apply(lambda id: f"{id}/build-log"),
),
),
source=aws.codebuild.ProjectSourceArgs(
type="GITHUB",
location="https://github.com/mitchellh/packer.git",
git_clone_depth=1,
git_submodules_config=aws.codebuild.ProjectSourceGitSubmodulesConfigArgs(
fetch_submodules=True,
),
),
source_version="master",
vpc_config=aws.codebuild.ProjectVpcConfigArgs(
vpc_id=aws_vpc["example"]["id"],
subnets=[
aws_subnet["example1"]["id"],
aws_subnet["example2"]["id"],
],
security_group_ids=[
aws_security_group["example1"]["id"],
aws_security_group["example2"]["id"],
],
),
tags={
"Environment": "Test",
})
project_with_cache = aws.codebuild.Project("project-with-cache",
description="test_codebuild_project_cache",
build_timeout=5,
queued_timeout=5,
service_role=example_role.arn,
artifacts=aws.codebuild.ProjectArtifactsArgs(
type="NO_ARTIFACTS",
),
cache=aws.codebuild.ProjectCacheArgs(
type="LOCAL",
modes=[
"LOCAL_DOCKER_LAYER_CACHE",
"LOCAL_SOURCE_CACHE",
],
),
environment=aws.codebuild.ProjectEnvironmentArgs(
compute_type="BUILD_GENERAL1_SMALL",
image="aws/codebuild/standard:1.0",
type="LINUX_CONTAINER",
image_pull_credentials_type="CODEBUILD",
environment_variables=[aws.codebuild.ProjectEnvironmentEnvironmentVariableArgs(
name="SOME_KEY1",
value="SOME_VALUE1",
)],
),
source=aws.codebuild.ProjectSourceArgs(
type="GITHUB",
location="https://github.com/mitchellh/packer.git",
git_clone_depth=1,
),
tags={
"Environment": "Test",
})
```
## Import
CodeBuild Project can be imported using the `name`, e.g.
```sh
$ pulumi import aws:codebuild/project:Project name project-name
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['ProjectArtifactsArgs']] artifacts: Configuration block. Detailed below.
:param pulumi.Input[bool] badge_enabled: Generates a publicly-accessible URL for the project's build badge. Available as the `badge_url` attribute when enabled.
:param pulumi.Input[int] build_timeout: Number of minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait until timing out any related build that does not get marked as completed. The default is 60 minutes.
:param pulumi.Input[pulumi.InputType['ProjectCacheArgs']] cache: Configuration block. Detailed below.
:param pulumi.Input[str] description: Short description of the project.
:param pulumi.Input[str] encryption_key: AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build project's build output artifacts.
:param pulumi.Input[pulumi.InputType['ProjectEnvironmentArgs']] environment: Configuration block. Detailed below.
:param pulumi.Input[pulumi.InputType['ProjectLogsConfigArgs']] logs_config: Configuration block. Detailed below.
:param pulumi.Input[str] name: Name of the project. If `type` is set to `S3`, this is the name of the output artifact object.
:param pulumi.Input[int] queued_timeout: Number of minutes, from 5 to 480 (8 hours), a build is allowed to be queued before it times out. The default is 8 hours.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondaryArtifactArgs']]]] secondary_artifacts: Configuration block. Detailed below.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondarySourceArgs']]]] secondary_sources: Configuration block. Detailed below.
:param pulumi.Input[str] service_role: Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
:param pulumi.Input[pulumi.InputType['ProjectSourceArgs']] source: Configuration block. Detailed below.
:param pulumi.Input[str] source_version: Version of the build input to be built for this project. If not specified, the latest version is used.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Map of tags to assign to the resource.
:param pulumi.Input[pulumi.InputType['ProjectVpcConfigArgs']] vpc_config: Configuration block. Detailed below.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ProjectArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a CodeBuild Project resource. See also the `codebuild.Webhook` resource, which manages the webhook to the source (e.g. the "rebuild every time a code change is pushed" option in the CodeBuild web console).
## Example Usage
```python
import pulumi
import pulumi_aws as aws
example_bucket = aws.s3.Bucket("exampleBucket", acl="private")
example_role = aws.iam.Role("exampleRole", assume_role_policy=\"\"\"{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
\"\"\")
example_role_policy = aws.iam.RolePolicy("exampleRolePolicy",
role=example_role.name,
policy=pulumi.Output.all(example_bucket.arn, example_bucket.arn).apply(lambda exampleBucketArn, exampleBucketArn1: f\"\"\"{{
"Version": "2012-10-17",
"Statement": [
{{
"Effect": "Allow",
"Resource": [
"*"
],
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
}},
{{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeDhcpOptions",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeVpcs"
],
"Resource": "*"
}},
{{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterfacePermission"
],
"Resource": [
"arn:aws:ec2:us-east-1:123456789012:network-interface/*"
],
"Condition": {{
"StringEquals": {{
"ec2:Subnet": [
"{aws_subnet["example1"]["arn"]}",
"{aws_subnet["example2"]["arn"]}"
],
"ec2:AuthorizedService": "codebuild.amazonaws.com"
}}
}}
}},
{{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"{example_bucket_arn}",
"{example_bucket_arn1}/*"
]
}}
]
}}
\"\"\"))
example_project = aws.codebuild.Project("exampleProject",
description="test_codebuild_project",
build_timeout=5,
service_role=example_role.arn,
artifacts=aws.codebuild.ProjectArtifactsArgs(
type="NO_ARTIFACTS",
),
cache=aws.codebuild.ProjectCacheArgs(
type="S3",
location=example_bucket.bucket,
),
environment=aws.codebuild.ProjectEnvironmentArgs(
compute_type="BUILD_GENERAL1_SMALL",
image="aws/codebuild/standard:1.0",
type="LINUX_CONTAINER",
image_pull_credentials_type="CODEBUILD",
environment_variables=[
aws.codebuild.ProjectEnvironmentEnvironmentVariableArgs(
name="SOME_KEY1",
value="SOME_VALUE1",
),
aws.codebuild.ProjectEnvironmentEnvironmentVariableArgs(
name="SOME_KEY2",
value="SOME_VALUE2",
type="PARAMETER_STORE",
),
],
),
logs_config=aws.codebuild.ProjectLogsConfigArgs(
cloudwatch_logs=aws.codebuild.ProjectLogsConfigCloudwatchLogsArgs(
group_name="log-group",
stream_name="log-stream",
),
s3_logs=aws.codebuild.ProjectLogsConfigS3LogsArgs(
status="ENABLED",
location=example_bucket.id.apply(lambda id: f"{id}/build-log"),
),
),
source=aws.codebuild.ProjectSourceArgs(
type="GITHUB",
location="https://github.com/mitchellh/packer.git",
git_clone_depth=1,
git_submodules_config=aws.codebuild.ProjectSourceGitSubmodulesConfigArgs(
fetch_submodules=True,
),
),
source_version="master",
vpc_config=aws.codebuild.ProjectVpcConfigArgs(
vpc_id=aws_vpc["example"]["id"],
subnets=[
aws_subnet["example1"]["id"],
aws_subnet["example2"]["id"],
],
security_group_ids=[
aws_security_group["example1"]["id"],
aws_security_group["example2"]["id"],
],
),
tags={
"Environment": "Test",
})
project_with_cache = aws.codebuild.Project("project-with-cache",
description="test_codebuild_project_cache",
build_timeout=5,
queued_timeout=5,
service_role=example_role.arn,
artifacts=aws.codebuild.ProjectArtifactsArgs(
type="NO_ARTIFACTS",
),
cache=aws.codebuild.ProjectCacheArgs(
type="LOCAL",
modes=[
"LOCAL_DOCKER_LAYER_CACHE",
"LOCAL_SOURCE_CACHE",
],
),
environment=aws.codebuild.ProjectEnvironmentArgs(
compute_type="BUILD_GENERAL1_SMALL",
image="aws/codebuild/standard:1.0",
type="LINUX_CONTAINER",
image_pull_credentials_type="CODEBUILD",
environment_variables=[aws.codebuild.ProjectEnvironmentEnvironmentVariableArgs(
name="SOME_KEY1",
value="SOME_VALUE1",
)],
),
source=aws.codebuild.ProjectSourceArgs(
type="GITHUB",
location="https://github.com/mitchellh/packer.git",
git_clone_depth=1,
),
tags={
"Environment": "Test",
})
```
## Import
CodeBuild Project can be imported using the `name`, e.g.
```sh
$ pulumi import aws:codebuild/project:Project name project-name
```
:param str resource_name: The name of the resource.
:param ProjectArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ProjectArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
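# The dispatcher above accepts either the (resource_name, ProjectArgs, opts)
# overload or the flat keyword form; _utilities.get_resource_args_opts works
# out which was used, and both paths funnel into _internal_init below.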
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
artifacts: Optional[pulumi.Input[pulumi.InputType['ProjectArtifactsArgs']]] = None,
badge_enabled: Optional[pulumi.Input[bool]] = None,
build_timeout: Optional[pulumi.Input[int]] = None,
cache: Optional[pulumi.Input[pulumi.InputType['ProjectCacheArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
encryption_key: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[pulumi.InputType['ProjectEnvironmentArgs']]] = None,
logs_config: Optional[pulumi.Input[pulumi.InputType['ProjectLogsConfigArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
queued_timeout: Optional[pulumi.Input[int]] = None,
secondary_artifacts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondaryArtifactArgs']]]]] = None,
secondary_sources: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondarySourceArgs']]]]] = None,
service_role: Optional[pulumi.Input[str]] = None,
source: Optional[pulumi.Input[pulumi.InputType['ProjectSourceArgs']]] = None,
source_version: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_config: Optional[pulumi.Input[pulumi.InputType['ProjectVpcConfigArgs']]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ProjectArgs.__new__(ProjectArgs)
if artifacts is None and not opts.urn:
raise TypeError("Missing required property 'artifacts'")
__props__.__dict__["artifacts"] = artifacts
__props__.__dict__["badge_enabled"] = badge_enabled
__props__.__dict__["build_timeout"] = build_timeout
__props__.__dict__["cache"] = cache
__props__.__dict__["description"] = description
__props__.__dict__["encryption_key"] = encryption_key
if environment is None and not opts.urn:
raise TypeError("Missing required property 'environment'")
__props__.__dict__["environment"] = environment
__props__.__dict__["logs_config"] = logs_config
__props__.__dict__["name"] = name
__props__.__dict__["queued_timeout"] = queued_timeout
__props__.__dict__["secondary_artifacts"] = secondary_artifacts
__props__.__dict__["secondary_sources"] = secondary_sources
if service_role is None and not opts.urn:
raise TypeError("Missing required property 'service_role'")
__props__.__dict__["service_role"] = service_role
if source is None and not opts.urn:
raise TypeError("Missing required property 'source'")
__props__.__dict__["source"] = source
__props__.__dict__["source_version"] = source_version
__props__.__dict__["tags"] = tags
__props__.__dict__["vpc_config"] = vpc_config
__props__.__dict__["arn"] = None
__props__.__dict__["badge_url"] = None
super(Project, __self__).__init__(
'aws:codebuild/project:Project',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
arn: Optional[pulumi.Input[str]] = None,
artifacts: Optional[pulumi.Input[pulumi.InputType['ProjectArtifactsArgs']]] = None,
badge_enabled: Optional[pulumi.Input[bool]] = None,
badge_url: Optional[pulumi.Input[str]] = None,
build_timeout: Optional[pulumi.Input[int]] = None,
cache: Optional[pulumi.Input[pulumi.InputType['ProjectCacheArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
encryption_key: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[pulumi.InputType['ProjectEnvironmentArgs']]] = None,
logs_config: Optional[pulumi.Input[pulumi.InputType['ProjectLogsConfigArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
queued_timeout: Optional[pulumi.Input[int]] = None,
secondary_artifacts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondaryArtifactArgs']]]]] = None,
secondary_sources: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondarySourceArgs']]]]] = None,
service_role: Optional[pulumi.Input[str]] = None,
source: Optional[pulumi.Input[pulumi.InputType['ProjectSourceArgs']]] = None,
source_version: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_config: Optional[pulumi.Input[pulumi.InputType['ProjectVpcConfigArgs']]] = None) -> 'Project':
"""
Get an existing Project resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: ARN of the CodeBuild project.
:param pulumi.Input[pulumi.InputType['ProjectArtifactsArgs']] artifacts: Configuration block. Detailed below.
        :param pulumi.Input[bool] badge_enabled: Generates a publicly-accessible URL for the project's build badge. Available as `badge_url` attribute when enabled.
:param pulumi.Input[str] badge_url: URL of the build badge when `badge_enabled` is enabled.
        :param pulumi.Input[int] build_timeout: Number of minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait before timing out any related build that has not been marked as completed. The default is 60 minutes.
:param pulumi.Input[pulumi.InputType['ProjectCacheArgs']] cache: Configuration block. Detailed below.
:param pulumi.Input[str] description: Short description of the project.
:param pulumi.Input[str] encryption_key: AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build project's build output artifacts.
:param pulumi.Input[pulumi.InputType['ProjectEnvironmentArgs']] environment: Configuration block. Detailed below.
:param pulumi.Input[pulumi.InputType['ProjectLogsConfigArgs']] logs_config: Configuration block. Detailed below.
        :param pulumi.Input[str] name: Name of the project. If `type` is set to `S3`, this is the name of the output artifact object.
:param pulumi.Input[int] queued_timeout: Number of minutes, from 5 to 480 (8 hours), a build is allowed to be queued before it times out. The default is 8 hours.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondaryArtifactArgs']]]] secondary_artifacts: Configuration block. Detailed below.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ProjectSecondarySourceArgs']]]] secondary_sources: Configuration block. Detailed below.
:param pulumi.Input[str] service_role: Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
:param pulumi.Input[pulumi.InputType['ProjectSourceArgs']] source: Configuration block. Detailed below.
:param pulumi.Input[str] source_version: Version of the build input to be built for this project. If not specified, the latest version is used.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Map of tags to assign to the resource.
:param pulumi.Input[pulumi.InputType['ProjectVpcConfigArgs']] vpc_config: Configuration block. Detailed below.
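
        A minimal usage sketch (the resource name `existing` and the provider
        ID `project-name` are hypothetical placeholders):

        ```python
        import pulumi
        import pulumi_aws as aws

        # Look up an already-provisioned project by its provider ID (the
        # project name). The returned object exposes the same outputs
        # (arn, badge_url, ...) as a project created in this program.
        existing = aws.codebuild.Project.get("existing", id="project-name")
        pulumi.export("project_arn", existing.arn)
        ```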
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ProjectState.__new__(_ProjectState)
__props__.__dict__["arn"] = arn
__props__.__dict__["artifacts"] = artifacts
__props__.__dict__["badge_enabled"] = badge_enabled
__props__.__dict__["badge_url"] = badge_url
__props__.__dict__["build_timeout"] = build_timeout
__props__.__dict__["cache"] = cache
__props__.__dict__["description"] = description
__props__.__dict__["encryption_key"] = encryption_key
__props__.__dict__["environment"] = environment
__props__.__dict__["logs_config"] = logs_config
__props__.__dict__["name"] = name
__props__.__dict__["queued_timeout"] = queued_timeout
__props__.__dict__["secondary_artifacts"] = secondary_artifacts
__props__.__dict__["secondary_sources"] = secondary_sources
__props__.__dict__["service_role"] = service_role
__props__.__dict__["source"] = source
__props__.__dict__["source_version"] = source_version
__props__.__dict__["tags"] = tags
__props__.__dict__["vpc_config"] = vpc_config
return Project(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def arn(self) -> pulumi.Output[str]:
"""
ARN of the CodeBuild project.
"""
return pulumi.get(self, "arn")
@property
@pulumi.getter
def artifacts(self) -> pulumi.Output['outputs.ProjectArtifacts']:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "artifacts")
@property
@pulumi.getter(name="badgeEnabled")
def badge_enabled(self) -> pulumi.Output[Optional[bool]]:
"""
        Generates a publicly-accessible URL for the project's build badge. Available as `badge_url` attribute when enabled.
"""
return pulumi.get(self, "badge_enabled")
@property
@pulumi.getter(name="badgeUrl")
def badge_url(self) -> pulumi.Output[str]:
"""
URL of the build badge when `badge_enabled` is enabled.
"""
return pulumi.get(self, "badge_url")
@property
@pulumi.getter(name="buildTimeout")
def build_timeout(self) -> pulumi.Output[Optional[int]]:
"""
        Number of minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait before timing out any related build that has not been marked as completed. The default is 60 minutes.
"""
return pulumi.get(self, "build_timeout")
@property
@pulumi.getter
def cache(self) -> pulumi.Output[Optional['outputs.ProjectCache']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "cache")
@property
@pulumi.getter
def description(self) -> pulumi.Output[str]:
"""
Short description of the project.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="encryptionKey")
def encryption_key(self) -> pulumi.Output[str]:
"""
AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build project's build output artifacts.
"""
return pulumi.get(self, "encryption_key")
@property
@pulumi.getter
def environment(self) -> pulumi.Output['outputs.ProjectEnvironment']:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "environment")
@property
@pulumi.getter(name="logsConfig")
def logs_config(self) -> pulumi.Output[Optional['outputs.ProjectLogsConfig']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "logs_config")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
        Name of the project. If `type` is set to `S3`, this is the name of the output artifact object.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="queuedTimeout")
def queued_timeout(self) -> pulumi.Output[Optional[int]]:
"""
Number of minutes, from 5 to 480 (8 hours), a build is allowed to be queued before it times out. The default is 8 hours.
"""
return pulumi.get(self, "queued_timeout")
@property
@pulumi.getter(name="secondaryArtifacts")
def secondary_artifacts(self) -> pulumi.Output[Optional[Sequence['outputs.ProjectSecondaryArtifact']]]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "secondary_artifacts")
@property
@pulumi.getter(name="secondarySources")
def secondary_sources(self) -> pulumi.Output[Optional[Sequence['outputs.ProjectSecondarySource']]]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "secondary_sources")
@property
@pulumi.getter(name="serviceRole")
def service_role(self) -> pulumi.Output[str]:
"""
Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
"""
return pulumi.get(self, "service_role")
@property
@pulumi.getter
def source(self) -> pulumi.Output['outputs.ProjectSource']:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "source")
@property
@pulumi.getter(name="sourceVersion")
def source_version(self) -> pulumi.Output[Optional[str]]:
"""
Version of the build input to be built for this project. If not specified, the latest version is used.
"""
return pulumi.get(self, "source_version")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
Map of tags to assign to the resource.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="vpcConfig")
def vpc_config(self) -> pulumi.Output[Optional['outputs.ProjectVpcConfig']]:
"""
Configuration block. Detailed below.
"""
return pulumi.get(self, "vpc_config")
| 45.321805 | 223 | 0.620127 | 6,276 | 60,278 | 5.779637 | 0.057043 | 0.083699 | 0.078047 | 0.047859 | 0.940259 | 0.929948 | 0.916246 | 0.9026 | 0.900477 | 0.876051 | 0 | 0.005438 | 0.270945 | 60,278 | 1,329 | 224 | 45.355907 | 0.819961 | 0.414048 | 0 | 0.810169 | 1 | 0 | 0.139565 | 0.035639 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166102 | false | 0.001695 | 0.011864 | 0 | 0.277966 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5deddbcc38b79a262a334b96df9283434a37ca01 | 1,165 | py | Python | tests/test_compat.py | dmsimard/dynaconf | ec394ab07e3b522879c8be678c65ebeb05fc2b59 | [
"MIT"
] | null | null | null | tests/test_compat.py | dmsimard/dynaconf | ec394ab07e3b522879c8be678c65ebeb05fc2b59 | [
"MIT"
] | null | null | null | tests/test_compat.py | dmsimard/dynaconf | ec394ab07e3b522879c8be678c65ebeb05fc2b59 | [
"MIT"
] | null | null | null | from dynaconf import LazySettings
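# Both the legacy option spellings (DYNACONF_NAMESPACE, PROJECT_ROOT,
# SETTINGS_MODULE, ...) and the current *_FOR_DYNACONF names should populate
# the same settings attributes; each block below exercises one mix of them.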
def test_compatibility_checks(tmpdir):
settings = LazySettings(
DYNACONF_NAMESPACE='FOO',
DYNACONF_SETTINGS_MODULE='foo.py',
SETTINGS_MODULE='foo.py',
PROJECT_ROOT=str(tmpdir),
DYNACONF_SILENT_ERRORS=True,
DYNACONF_ALWAYS_FRESH_VARS=['BAR']
)
assert settings.ENV_FOR_DYNACONF == 'FOO'
assert settings.SETTINGS_MODULE_FOR_DYNACONF == 'foo.py'
assert settings.ROOT_PATH_FOR_DYNACONF == str(tmpdir)
assert settings.SILENT_ERRORS_FOR_DYNACONF is True
assert settings.FRESH_VARS_FOR_DYNACONF == ['BAR']
settings = LazySettings(
NAMESPACE_FOR_DYNACONF='FOO',
DYNACONF_SETTINGS_MODULE='foo.py',
SETTINGS_MODULE='foo.py',
PROJECT_ROOT_FOR_DYNACONF=str(tmpdir),
DYNACONF_SILENT_ERRORS=True,
DYNACONF_ALWAYS_FRESH_VARS=['BAR']
)
assert settings.ENV_FOR_DYNACONF == 'FOO'
assert settings.SETTINGS_MODULE_FOR_DYNACONF == 'foo.py'
assert settings.ROOT_PATH_FOR_DYNACONF == str(tmpdir)
assert settings.SILENT_ERRORS_FOR_DYNACONF is True
assert settings.FRESH_VARS_FOR_DYNACONF == ['BAR']
| 34.264706 | 60 | 0.716738 | 140 | 1,165 | 5.585714 | 0.207143 | 0.168798 | 0.089514 | 0.097187 | 0.808184 | 0.808184 | 0.808184 | 0.808184 | 0.808184 | 0.808184 | 0 | 0 | 0.191416 | 1,165 | 33 | 61 | 35.30303 | 0.830149 | 0 | 0 | 0.714286 | 0 | 0 | 0.051502 | 0 | 0 | 0 | 0 | 0 | 0.357143 | 1 | 0.035714 | false | 0 | 0.035714 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5dfe1ffa34fe41f56f3b92e79b233577399d7591 | 35,264 | py | Python | gluon/tests/test_sqlhtml.py | dartg/web2py | 419dc7b5fc92c6242590ef011bcfb8a598461776 | [
"BSD-3-Clause"
] | 1 | 2020-02-07T03:51:56.000Z | 2020-02-07T03:51:56.000Z | gluon/tests/test_sqlhtml.py | dartg/web2py | 419dc7b5fc92c6242590ef011bcfb8a598461776 | [
"BSD-3-Clause"
] | null | null | null | gluon/tests/test_sqlhtml.py | dartg/web2py | 419dc7b5fc92c6242590ef011bcfb8a598461776 | [
"BSD-3-Clause"
] | null | null | null | #!/bin/python
# -*- coding: utf-8 -*-
"""
Unit tests for gluon.sqlhtml
"""
import os
import sys
import unittest
from gluon.sqlhtml import safe_int, SQLFORM, SQLTABLE
DEFAULT_URI = os.getenv('DB', 'sqlite:memory')
from gluon.dal import DAL, Field
from pydal.objects import Table
from gluon.tools import Auth, Mail
from gluon.globals import Request, Response, Session
from gluon.storage import Storage
from gluon.languages import translator
from gluon.http import HTTP
from gluon.validators import *
# TODO: Create these tests...
# class Test_add_class(unittest.TestCase):
# def test_add_class(self):
# pass
# class Test_represent(unittest.TestCase):
# def test_represent(self):
# pass
# class TestCacheRepresenter(unittest.TestCase):
# def test___call__(self):
# pass
# def test___init__(self):
# pass
class Test_safe_int(unittest.TestCase):
def test_safe_int(self):
        # a well-formed integer passes through unchanged
        self.assertEqual(safe_int(1), 1)
        # a value that cannot be parsed as an int falls back to 0
        self.assertEqual(safe_int('1x'), 0)
# class Test_safe_float(unittest.TestCase):
# def test_safe_float(self):
# pass
# class Test_show_if(unittest.TestCase):
# def test_show_if(self):
# pass
# class TestFormWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestStringWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestIntegerWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestDoubleWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestDecimalWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestDateWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestDatetimeWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestTextWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestJSONWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestBooleanWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestListWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestMultipleOptionsWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestRadioWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestCheckboxesWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestPasswordWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_widget(self):
# pass
# class TestUploadWidget(unittest.TestCase):
# def test__attributes(self):
# pass
# def test_represent(self):
# pass
# def test_widget(self):
# pass
# class TestAutocompleteWidget(unittest.TestCase):
# def test___call__(self):
# pass
# def test___init__(self):
# pass
# def test_callback(self):
# pass
# class Test_formstyle_table3cols(unittest.TestCase):
# def test_formstyle_table3cols(self):
# pass
# class Test_formstyle_table2cols(unittest.TestCase):
# def test_formstyle_table2cols(self):
# pass
# class Test_formstyle_divs(unittest.TestCase):
# def test_formstyle_divs(self):
# pass
# class Test_formstyle_inline(unittest.TestCase):
# def test_formstyle_inline(self):
# pass
# class Test_formstyle_ul(unittest.TestCase):
# def test_formstyle_ul(self):
# pass
# class Test_formstyle_bootstrap(unittest.TestCase):
# def test_formstyle_bootstrap(self):
# pass
# class Test_formstyle_bootstrap3_stacked(unittest.TestCase):
# def test_formstyle_bootstrap3_stacked(self):
# pass
# class Test_formstyle_bootstrap3_inline_factory(unittest.TestCase):
# def test_formstyle_bootstrap3_inline_factory(self):
# pass
class TestSQLFORM(unittest.TestCase):
def setUp(self):
request = Request(env={})
request.application = 'a'
request.controller = 'c'
request.function = 'f'
request.folder = 'applications/admin'
response = Response()
session = Session()
T = translator('', 'en')
session.connect(request, response)
from gluon.globals import current
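        # SQLFORM and the grid helpers read the active request, response,
        # session and translator from gluon.globals.current, so populate it
        # here to simulate a running web2py request environment.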
current.request = request
current.response = response
current.session = session
current.T = T
self.db = DAL(DEFAULT_URI, check_reserved=['all'])
self.auth = Auth(self.db)
self.auth.define_tables(username=True, signature=False)
self.db.define_table('t0', Field('tt', default='web2py'), self.auth.signature)
self.auth.enable_record_versioning(self.db)
# Create a user
self.db.auth_user.insert(first_name='Bart',
last_name='Simpson',
username='user1',
email='user1@test.com',
password='password_123',
registration_key=None,
registration_id=None)
self.db.commit()
def test_SQLFORM(self):
form = SQLFORM(self.db.auth_user)
self.assertEqual(form.xml(), b'<form action="#" enctype="multipart/form-data" method="post"><table><tr id="auth_user_first_name__row"><td class="w2p_fl"><label class="" for="auth_user_first_name" id="auth_user_first_name__label">First name: </label></td><td class="w2p_fw"><input class="string" id="auth_user_first_name" name="first_name" type="text" value="" /></td><td class="w2p_fc"></td></tr><tr id="auth_user_last_name__row"><td class="w2p_fl"><label class="" for="auth_user_last_name" id="auth_user_last_name__label">Last name: </label></td><td class="w2p_fw"><input class="string" id="auth_user_last_name" name="last_name" type="text" value="" /></td><td class="w2p_fc"></td></tr><tr id="auth_user_email__row"><td class="w2p_fl"><label class="" for="auth_user_email" id="auth_user_email__label">E-mail: </label></td><td class="w2p_fw"><input class="string" id="auth_user_email" name="email" type="text" value="" /></td><td class="w2p_fc"></td></tr><tr id="auth_user_username__row"><td class="w2p_fl"><label class="" for="auth_user_username" id="auth_user_username__label">Username: </label></td><td class="w2p_fw"><input class="string" id="auth_user_username" name="username" type="text" value="" /></td><td class="w2p_fc"></td></tr><tr id="auth_user_password__row"><td class="w2p_fl"><label class="" for="auth_user_password" id="auth_user_password__label">Password: </label></td><td class="w2p_fw"><input class="password" id="auth_user_password" name="password" type="password" value="" /></td><td class="w2p_fc"></td></tr><tr id="submit_record__row"><td class="w2p_fl"></td><td class="w2p_fw"><input type="submit" value="Submit" /></td><td class="w2p_fc"></td></tr></table></form>')
def test_represent_SQLFORM(self):
id = self.db.t0.insert()
self.db.t0.tt.represent = lambda value: value.capitalize()
self.db.t0.tt.writable = False
self.db.t0.tt.readable = True
form = SQLFORM(self.db.t0, id)
self.assertTrue(b'Web2py' in form.xml())
self.db.t0.tt.represent = lambda value, row: value.capitalize()
form = SQLFORM(self.db.t0, id)
self.assertTrue(b'Web2py' in form.xml())
# def test_assert_status(self):
# pass
# def test_createform(self):
# pass
# def test_accepts(self):
# pass
# def test_dictform(self):
# pass
# def test_smartdictform(self):
# pass
def test_factory(self):
factory_form = SQLFORM.factory(Field('field_one', 'string', IS_NOT_EMPTY()),
Field('field_two', 'string'))
self.assertEqual(factory_form.xml(), b'<form action="#" enctype="multipart/form-data" method="post"><table><tr id="no_table_field_one__row"><td class="w2p_fl"><label class="" for="no_table_field_one" id="no_table_field_one__label">Field One: </label></td><td class="w2p_fw"><input class="string" id="no_table_field_one" name="field_one" type="text" value="" /></td><td class="w2p_fc"></td></tr><tr id="no_table_field_two__row"><td class="w2p_fl"><label class="" for="no_table_field_two" id="no_table_field_two__label">Field Two: </label></td><td class="w2p_fw"><input class="string" id="no_table_field_two" name="field_two" type="text" value="" /></td><td class="w2p_fc"></td></tr><tr id="submit_record__row"><td class="w2p_fl"></td><td class="w2p_fw"><input type="submit" value="Submit" /></td><td class="w2p_fc"></td></tr></table></form>')
# def test_build_query(self):
# pass
# def test_search_menu(self):
# pass
def test_grid(self):
grid_form = SQLFORM.grid(self.db.auth_user)
self.assertEqual(grid_form.xml(), b'<div class="web2py_grid "><div class="web2py_console "><form action="/a/c/f" enctype="multipart/form-data" method="GET"><input class="form-control" id="w2p_keywords" name="keywords" onfocus="jQuery('#w2p_query_fields').change();jQuery('#w2p_query_panel').slideDown();" type="text" value="" /><input class="btn btn-default" type="submit" value="Search" /><input class="btn btn-default" onclick="jQuery('#w2p_keywords').val('');" type="submit" value="Clear" /></form><div id="w2p_query_panel" style="display:none;"><select class="form-control" id="w2p_query_fields" onchange="jQuery('.w2p_query_row').hide();jQuery('#w2p_field_'+jQuery('#w2p_query_fields').val().replace('.','-')).show();" style="float:left"><option value="auth_user.id">Id</option><option value="auth_user.first_name">First name</option><option value="auth_user.last_name">Last name</option><option value="auth_user.email">E-mail</option><option value="auth_user.username">Username</option></select><div class="w2p_query_row" id="w2p_field_auth_user-id" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option value="<="><=</option><option value=">=">>=</option><option value="in">in</option><option value="not in">not in</option></select><input class="id form-control" id="w2p_value_auth_user-id" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.id')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.id')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.id')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div><div class="w2p_query_row" id="w2p_field_auth_user-first_name" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option value="<="><=</option><option value=">=">>=</option><option value="starts with">starts with</option><option value="contains">contains</option><option value="in">in</option><option value="not in">not in</option></select><input class="string form-control" id="w2p_value_auth_user-first_name" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.first_name')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.first_name')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.first_name')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div><div class="w2p_query_row" id="w2p_field_auth_user-last_name" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option value="<="><=</option><option value=">=">>=</option><option value="starts with">starts with</option><option value="contains">contains</option><option value="in">in</option><option 
value="not in">not in</option></select><input class="string form-control" id="w2p_value_auth_user-last_name" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.last_name')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.last_name')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.last_name')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div><div class="w2p_query_row" id="w2p_field_auth_user-email" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option value="<="><=</option><option value=">=">>=</option><option value="starts with">starts with</option><option value="contains">contains</option><option value="in">in</option><option value="not in">not in</option></select><input class="string form-control" id="w2p_value_auth_user-email" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.email')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.email')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.email')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div><div class="w2p_query_row" id="w2p_field_auth_user-username" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option value="<="><=</option><option value=">=">>=</option><option value="starts with">starts with</option><option value="contains">contains</option><option value="in">in</option><option value="not in">not in</option></select><input class="string form-control" id="w2p_value_auth_user-username" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.username')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.username')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.username')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div></div><script><!--\n\n jQuery(\'#w2p_query_fields input,#w2p_query_fields select\').css(\n \'width\',\'auto\');\n jQuery(function(){web2py_ajax_fields(\'#w2p_query_fields\');});\n function w2p_build_query(aggregator,a) {\n var b=a.replace(\'.\',\'-\');\n var option = jQuery(\'#w2p_field_\'+b+\' select\').val();\n var value;\n var $value_item = jQuery(\'#w2p_value_\'+b);\n if ($value_item.is(\':checkbox\')){\n if ($value_item.is(\':checked\'))\n value = \'True\';\n else value = \'False\';\n }\n else\n { value = $value_item.val().replace(\'"\',\'\\\\"\')}\n var s=a+\' \'+option+\' "\'+value+\'"\';\n 
var k=jQuery(\'#w2p_keywords\');\n var v=k.val();\n if(aggregator==\'new\') k.val(s); else k.val((v?(v+\' \'+ aggregator +\' \'):\'\')+s);\n }\n \n//--></script><div class="web2py_counter">1 records found</div></div><div class="web2py_table"><div class="web2py_htmltable" style="width:100%;overflow-x:auto;-ms-overflow-x:scroll"><table><colgroup><col data-column="1" id="auth_user-id" /><col data-column="2" id="auth_user-first_name" /><col data-column="3" id="auth_user-last_name" /><col data-column="4" id="auth_user-email" /><col data-column="5" id="auth_user-username" /><col data-column="6" /></colgroup><thead><tr class=""><th class=""><a href="/a/c/f?keywords=&order=auth_user.id">Id</a></th><th class=""><a href="/a/c/f?keywords=&order=auth_user.first_name">First name</a></th><th class=""><a href="/a/c/f?keywords=&order=auth_user.last_name">Last name</a></th><th class=""><a href="/a/c/f?keywords=&order=auth_user.email">E-mail</a></th><th class=""><a href="/a/c/f?keywords=&order=auth_user.username">Username</a></th><th class=""></th></tr></thead><tbody><tr class="w2p_odd odd with_id" id="1"><td>1</td><td>Bart</td><td>Simpson</td><td>user1@test.com</td><td>user1</td><td class="row_buttons" nowrap="nowrap"><a class="button btn btn-default" href="/a/c/f/view/auth_user/1"><span class="icon magnifier icon-zoom-in glyphicon glyphicon-zoom-in"></span> <span class="buttontext button" title="View">View</span></a></td></tr></tbody></table></div></div><div class="w2p_export_menu">Export:<a class="btn btn-default" href="/a/c/f?_export_type=csv&keywords=&order=" title="Comma-separated export of visible columns. Fields from other tables are exported as they appear on-screen but this may be slow for many rows">CSV</a><a class="btn btn-default" href="/a/c/f?_export_type=csv_with_hidden_cols&keywords=&order=" title="Comma-separated export including columns not shown; fields from other tables are exported as raw values for faster export">CSV (hidden cols)</a><a class="btn btn-default" href="/a/c/f?_export_type=html&keywords=&order=" title="HTML export of visible columns">HTML</a><a class="btn btn-default" href="/a/c/f?_export_type=json&keywords=&order=" title="JSON export of visible columns">JSON</a><a class="btn btn-default" href="/a/c/f?_export_type=tsv&keywords=&order=" title="Spreadsheet-optimised export of tab-separated content, visible columns only. May be slow.">TSV (Spreadsheets)</a><a class="btn btn-default" href="/a/c/f?_export_type=tsv_with_hidden_cols&keywords=&order=" title="Spreadsheet-optimised export of tab-separated content including hidden columns. May be slow">TSV (Spreadsheets, hidden cols)</a><a class="btn btn-default" href="/a/c/f?_export_type=xml&keywords=&order=" title="XML export of columns shown">XML</a></div></div>')
def test_smartgrid(self):
smartgrid_form = SQLFORM.smartgrid(self.db.auth_user)
self.assertEqual(smartgrid_form.xml(), b'<div class="web2py_grid "><div class="web2py_breadcrumbs"><ul class=""><li class="active w2p_grid_breadcrumb_elem"><a href="/a/c/f/auth_user">Auth users</a></li></ul></div><div class="web2py_console "><form action="/a/c/f/auth_user" enctype="multipart/form-data" method="GET"><input class="form-control" id="w2p_keywords" name="keywords" onfocus="jQuery('#w2p_query_fields').change();jQuery('#w2p_query_panel').slideDown();" type="text" value="" /><input class="btn btn-default" type="submit" value="Search" /><input class="btn btn-default" onclick="jQuery('#w2p_keywords').val('');" type="submit" value="Clear" /></form><div id="w2p_query_panel" style="display:none;"><select class="form-control" id="w2p_query_fields" onchange="jQuery('.w2p_query_row').hide();jQuery('#w2p_field_'+jQuery('#w2p_query_fields').val().replace('.','-')).show();" style="float:left"><option value="auth_user.id">Id</option><option value="auth_user.first_name">First name</option><option value="auth_user.last_name">Last name</option><option value="auth_user.email">E-mail</option><option value="auth_user.username">Username</option></select><div class="w2p_query_row" id="w2p_field_auth_user-id" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option value="<="><=</option><option value=">=">>=</option><option value="in">in</option><option value="not in">not in</option></select><input class="id form-control" id="w2p_value_auth_user-id" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.id')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.id')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.id')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div><div class="w2p_query_row" id="w2p_field_auth_user-first_name" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option value="<="><=</option><option value=">=">>=</option><option value="starts with">starts with</option><option value="contains">contains</option><option value="in">in</option><option value="not in">not in</option></select><input class="string form-control" id="w2p_value_auth_user-first_name" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.first_name')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.first_name')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.first_name')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div><div class="w2p_query_row" id="w2p_field_auth_user-last_name" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option 
value="<="><=</option><option value=">=">>=</option><option value="starts with">starts with</option><option value="contains">contains</option><option value="in">in</option><option value="not in">not in</option></select><input class="string form-control" id="w2p_value_auth_user-last_name" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.last_name')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.last_name')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.last_name')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div><div class="w2p_query_row" id="w2p_field_auth_user-email" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option value="<="><=</option><option value=">=">>=</option><option value="starts with">starts with</option><option value="contains">contains</option><option value="in">in</option><option value="not in">not in</option></select><input class="string form-control" id="w2p_value_auth_user-email" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.email')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.email')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.email')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div><div class="w2p_query_row" id="w2p_field_auth_user-username" style="display:none"><select class="form-control"><option value="=">=</option><option value="!=">!=</option><option value="<"><</option><option value=">">></option><option value="<="><=</option><option value=">=">>=</option><option value="starts with">starts with</option><option value="contains">contains</option><option value="in">in</option><option value="not in">not in</option></select><input class="string form-control" id="w2p_value_auth_user-username" type="text" /><input class="btn btn-default" onclick="w2p_build_query('new','auth_user.username')" title="Start building a new search" type="button" value="New Search" /><input class="btn btn-default" onclick="w2p_build_query('and','auth_user.username')" title="Add this to the search as an AND term" type="button" value="+ And" /><input class="btn btn-default" onclick="w2p_build_query('or','auth_user.username')" title="Add this to the search as an OR term" type="button" value="+ Or" /><input class="btn btn-default" onclick="jQuery('#w2p_query_panel').slideUp()" type="button" value="Close" /></div></div><script><!--\n\n jQuery(\'#w2p_query_fields input,#w2p_query_fields select\').css(\n \'width\',\'auto\');\n jQuery(function(){web2py_ajax_fields(\'#w2p_query_fields\');});\n function w2p_build_query(aggregator,a) {\n var b=a.replace(\'.\',\'-\');\n var option = jQuery(\'#w2p_field_\'+b+\' select\').val();\n var value;\n var $value_item = jQuery(\'#w2p_value_\'+b);\n if ($value_item.is(\':checkbox\')){\n if 
($value_item.is(\':checked\'))\n value = \'True\';\n else value = \'False\';\n }\n else\n { value = $value_item.val().replace(\'"\',\'\\\\"\')}\n var s=a+\' \'+option+\' "\'+value+\'"\';\n var k=jQuery(\'#w2p_keywords\');\n var v=k.val();\n if(aggregator==\'new\') k.val(s); else k.val((v?(v+\' \'+ aggregator +\' \'):\'\')+s);\n }\n \n//--></script><div class="web2py_counter">1 records found</div></div><div class="web2py_table"><div class="web2py_htmltable" style="width:100%;overflow-x:auto;-ms-overflow-x:scroll"><table><colgroup><col data-column="1" id="auth_user-id" /><col data-column="2" id="auth_user-first_name" /><col data-column="3" id="auth_user-last_name" /><col data-column="4" id="auth_user-email" /><col data-column="5" id="auth_user-username" /><col data-column="6" /></colgroup><thead><tr class=""><th class=""><a href="/a/c/f/auth_user?keywords=&order=auth_user.id">Id</a></th><th class=""><a href="/a/c/f/auth_user?keywords=&order=auth_user.first_name">First name</a></th><th class=""><a href="/a/c/f/auth_user?keywords=&order=auth_user.last_name">Last name</a></th><th class=""><a href="/a/c/f/auth_user?keywords=&order=auth_user.email">E-mail</a></th><th class=""><a href="/a/c/f/auth_user?keywords=&order=auth_user.username">Username</a></th><th class=""></th></tr></thead><tbody><tr class="w2p_odd odd with_id" id="1"><td>1</td><td>Bart</td><td>Simpson</td><td>user1@test.com</td><td>user1</td><td class="row_buttons" nowrap="nowrap"><a href="/a/c/f/auth_user/auth_membership.user_id/1"><span>Auth memberships</span></a><a href="/a/c/f/auth_user/auth_event.user_id/1"><span>Auth events</span></a><a href="/a/c/f/auth_user/auth_cas.user_id/1"><span>Auth cases</span></a><a href="/a/c/f/auth_user/t0.created_by/1"><span>T0s(created_by)</span></a><a href="/a/c/f/auth_user/t0.modified_by/1"><span>T0s(modified_by)</span></a><a href="/a/c/f/auth_user/t0_archive.created_by/1"><span>T0 archives(created_by)</span></a><a href="/a/c/f/auth_user/t0_archive.modified_by/1"><span>T0 archives(modified_by)</span></a><a class="button btn btn-default" href="/a/c/f/auth_user/view/auth_user/1"><span class="icon magnifier icon-zoom-in glyphicon glyphicon-zoom-in"></span> <span class="buttontext button" title="View">View</span></a></td></tr></tbody></table></div></div><div class="w2p_export_menu">Export:<a class="btn btn-default" href="/a/c/f/auth_user?_export_type=csv&keywords=&order=" title="Comma-separated export of visible columns. Fields from other tables are exported as they appear on-screen but this may be slow for many rows">CSV</a><a class="btn btn-default" href="/a/c/f/auth_user?_export_type=csv_with_hidden_cols&keywords=&order=" title="Comma-separated export including columns not shown; fields from other tables are exported as raw values for faster export">CSV (hidden cols)</a><a class="btn btn-default" href="/a/c/f/auth_user?_export_type=html&keywords=&order=" title="HTML export of visible columns">HTML</a><a class="btn btn-default" href="/a/c/f/auth_user?_export_type=json&keywords=&order=" title="JSON export of visible columns">JSON</a><a class="btn btn-default" href="/a/c/f/auth_user?_export_type=tsv&keywords=&order=" title="Spreadsheet-optimised export of tab-separated content, visible columns only. May be slow.">TSV (Spreadsheets)</a><a class="btn btn-default" href="/a/c/f/auth_user?_export_type=tsv_with_hidden_cols&keywords=&order=" title="Spreadsheet-optimised export of tab-separated content including hidden columns. 
May be slow">TSV (Spreadsheets, hidden cols)</a><a class="btn btn-default" href="/a/c/f/auth_user?_export_type=xml&keywords=&order=" title="XML export of columns shown">XML</a></div></div>')
class TestSQLTABLE(unittest.TestCase):
def setUp(self):
request = Request(env={})
request.application = 'a'
request.controller = 'c'
request.function = 'f'
request.folder = 'applications/admin'
response = Response()
session = Session()
T = translator('', 'en')
session.connect(request, response)
from gluon.globals import current
current.request = request
current.response = response
current.session = session
current.T = T
self.db = DAL(DEFAULT_URI, check_reserved=['all'])
self.auth = Auth(self.db)
self.auth.define_tables(username=True, signature=False)
self.db.define_table('t0', Field('tt'), self.auth.signature)
self.auth.enable_record_versioning(self.db)
# Create a user
self.db.auth_user.insert(first_name='Bart',
last_name='Simpson',
username='user1',
email='user1@test.com',
password='password_123',
registration_key=None,
registration_id=None)
self.db.commit()
def test_SQLTABLE(self):
rows = self.db(self.db.auth_user.id > 0).select(self.db.auth_user.ALL)
sqltable = SQLTABLE(rows)
self.assertEqual(sqltable.xml(), b'<table><thead><tr><th>auth_user.id</th><th>auth_user.first_name</th><th>auth_user.last_name</th><th>auth_user.email</th><th>auth_user.username</th><th>auth_user.password</th><th>auth_user.registration_key</th><th>auth_user.reset_password_key</th><th>auth_user.registration_id</th></tr></thead><tbody><tr class="w2p_odd odd"><td>1</td><td>Bart</td><td>Simpson</td><td>user1@test.com</td><td>user1</td><td>password_123</td><td>None</td><td></td><td>None</td></tr></tbody></table>')
# class TestExportClass(unittest.TestCase):
# def test___init__(self):
# pass
# def test_export(self):
# pass
# def test_represented(self):
# pass
# class TestExporterTSV(unittest.TestCase):
# def test___init__(self):
# pass
# def test_export(self):
# pass
# def test_represented(self):
# pass
# class TestExporterCSV(unittest.TestCase):
# def test___init__(self):
# pass
# def test_export(self):
# pass
# def test_represented(self):
# pass
# class TestExporterCSV_hidden(unittest.TestCase):
# def test___init__(self):
# pass
# def test_export(self):
# pass
# def test_represented(self):
# pass
# class TestExporterHTML(unittest.TestCase):
# def test___init__(self):
# pass
# def test_export(self):
# pass
# def test_represented(self):
# pass
# class TestExporterXML(unittest.TestCase):
# def test___init__(self):
# pass
# def test_export(self):
# pass
# def test_represented(self):
# pass
# class TestExporterJSON(unittest.TestCase):
# def test___init__(self):
# pass
# def test_export(self):
# pass
# def test_represented(self):
# pass
| 80.145455 | 11,578 | 0.67023 | 5,221 | 35,264 | 4.370619 | 0.065888 | 0.049082 | 0.070029 | 0.045751 | 0.896183 | 0.858407 | 0.845217 | 0.842281 | 0.837811 | 0.831675 | 0 | 0.019191 | 0.132628 | 35,264 | 439 | 11,579 | 80.328018 | 0.726845 | 0.147147 | 0 | 0.568627 | 0 | 0.068627 | 0.833947 | 0.475991 | 0 | 0 | 0 | 0.002278 | 0.088235 | 1 | 0.088235 | false | 0.039216 | 0.137255 | 0 | 0.254902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
5d23b1bea50ab5f2a823395feb0c70099fba465b | 119,926 | py | Python | sdflexutils/tests/unit/hpssa/raid_constants.py | LaudateCorpus1/sdflexutils | b1e1cd8f1a15ca27cfe28adfc22060a1ded41641 | [
"Apache-2.0"
] | 2 | 2021-01-27T08:21:24.000Z | 2022-01-11T01:52:43.000Z | sdflexutils/tests/unit/hpssa/raid_constants.py | LaudateCorpus1/sdflexutils | b1e1cd8f1a15ca27cfe28adfc22060a1ded41641 | [
"Apache-2.0"
] | 3 | 2021-02-02T14:00:49.000Z | 2021-02-10T06:51:19.000Z | sdflexutils/tests/unit/hpssa/raid_constants.py | LaudateCorpus1/sdflexutils | b1e1cd8f1a15ca27cfe28adfc22060a1ded41641 | [
"Apache-2.0"
] | 1 | 2022-02-18T06:48:18.000Z | 2022-02-18T06:48:18.000Z | # Copyright 2015 Hewlett-Packard Development Company, L.P.
# Copyright 2019 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Hewlett Packard Enterprise made changes in this file.
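# The constants below are raw controller/drive detail listings in the format
# emitted by the `ssacli` utility; the hpssa unit tests use them as canned
# fixtures when exercising the output parser.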
HPSSA_NO_DRIVES = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: Not Configured
Configured Drive Write Cache Policy: Default
Unconfigured Drive Write Cache Policy: Default
HBA Drive Write Cache Policy: Default
Total Cache Size: 4.0
Total Cache Memory Available: 3.8
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: Recharging
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 34
Number of Ports: 2 External only
Encryption: Not Set
Driver Name: smartpqi
Driver Version: Linux 1.0.4-100
I2C Address: 0xDE
PCI Address (Domain:Bus:Device.Function): 0000:C5:00.0
Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
Controller Mode: RAID
Controller Mode Reboot: Not Required
Port Max Phy Rate Limiting Supported: False
Latency Scheduler Setting: Disabled
Current Power Mode: MaxPerformance
Survival Mode: Enabled
Sanitize Erase Supported: True
Sanitize Lock: None
Sensor ID: 0
Location: Inlet Ambient
Current Value (C): 25
Max Value Since Power On: 25
Sensor ID: 1
Location: ASIC
Current Value (C): 34
Max Value Since Power On: 34
Sensor ID: 2
Location: Top
Current Value (C): 26
Max Value Since Power On: 26
Sensor ID: 3
Location: Bottom
Current Value (C): 28
Max Value Since Power On: 28
Primary Boot Volume: None
Secondary Boot Volume: None
HP D3700 Enclosure at Port CN1, Box 1, OK
Fan Status: OK
Temperature Status: OK
Power Supply Status: Redundant
Vendor ID: HP
Serial Number: 2M273002X8
Firmware Version: 4.12
Drive Bays: 25
Port: CN1
Box: 1
Location: External
Expander 378
Device Number: 378
Firmware Version: 4.12
WWID: 51402EC001CBE47D
Port: CN1
Box: 1
Vendor ID: HP
Enclosure SEP (Vendor ID HP, Model D3700) 377
Device Number: 377
Firmware Version: 4.12
WWID: 51402EC001CBE47C
Port: CN1
Box: 1
Vendor ID: HP
Model: D3700
IO Module Board Serial Number: PDNFNB1LM710EO
IO Module Serial Number: 0000000000
IO Module Part Number: QW967-04402
IO Module Spare Part Number: 700521-001
Backplane 1 Board Serial Number: PCZCDC1LM6703J
Backplane 1 Serial Number: 2M273002X8
Backplane 1 Part Number: QW967-60301
Backplane 1 Spare Part Number: 734345-001
Backplane 1 System SKU: QW967A
Physical Drives
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:2 (port CN1:box 1:bay 2, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:3 (port CN1:box 1:bay 3, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:4 (port CN1:box 1:bay 4, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:5 (port CN1:box 1:bay 5, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:11 (port CN1:box 1:bay 11, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:12 (port CN1:box 1:bay 12, SATA SSD, 500 GB, OK)
Port Name: CN0
Port ID: 0
Port Mode: RAID
Port Connection Number: 0
SAS Address: 50000D1E00190840
Port Location: External
Managed Cable Connected: False
Port Name: CN1
Port ID: 1
Port Mode: RAID
Port Connection Number: 1
SAS Address: 50000D1E00190844
Port Location: External
Managed Cable Connected: True
Managed Cable Length: 2
Managed Cable Serial Number: APF16500030TJG
Managed Cable Part Number: 691970-003
Unassigned
physicaldrive CN1:1:1
Port: CN1
Box: 1
Bay: 1
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742957
WWID: 51402EC001CBE440
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 459133 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 6DA7A33CE83BDD9133399B6F0F8108B2
physicaldrive CN1:1:2
Port: CN1
Box: 1
Bay: 2
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742962
WWID: 51402EC001CBE441
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 459133 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4FD17AD610DC69B1300F6071DDACD5F9
physicaldrive CN1:1:3
Port: CN1
Box: 1
Bay: 3
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1723177429F7
WWID: 51402EC001CBE442
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 459133 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: C3D462620B715A67DCAE1772A3855B4A
physicaldrive CN1:1:4
Port: CN1
Box: 1
Bay: 4
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1720173A1558
WWID: 51402EC001CBE443
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 459133 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: CB6A024F1DF6E13FD56CED7B61173E3E
physicaldrive CN1:1:5
Port: CN1
Box: 1
Bay: 5
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 17231774296D
WWID: 51402EC001CBE444
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.86%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 524829 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4E2F74AFE1BD94AC121CA43B7181134A
physicaldrive CN1:1:11
Port: CN1
Box: 1
Bay: 11
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1723177429D8
WWID: 51402EC001CBE44A
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.86%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 524829 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: A5557CC7B709477D516D8E78BCFE870B
physicaldrive CN1:1:12
Port: CN1
Box: 1
Bay: 12
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742487
WWID: 51402EC001CBE44B
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.74%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 282261 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: F50D85711F310CCD737C4FCE99EE9E16
Enclosure SEP (Vendor ID HP, Model D3700) 377
Device Number: 377
Firmware Version: 4.12
WWID: 51402EC001CBE47C
Port: CN1
Box: 1
Vendor ID: HP
Model: D3700
IO Module Board Serial Number: PDNFNB1LM710EO
IO Module Serial Number: 0000000000
IO Module Part Number: QW967-04402
IO Module Spare Part Number: 700521-001
Backplane 1 Board Serial Number: PCZCDC1LM6703J
Backplane 1 Serial Number: 2M273002X8
Backplane 1 Part Number: QW967-60301
Backplane 1 Spare Part Number: 734345-001
Backplane 1 System SKU: QW967A
Expander 378
Device Number: 378
Firmware Version: 4.12
WWID: 51402EC001CBE47D
Port: CN1
Box: 1
Vendor ID: HP
SEP (Vendor ID MSCC, Model Smart Adapter) 379
Device Number: 379
Firmware Version: 1.98
WWID: 50000D1E00190848
Port: Unknown
Vendor ID: MSCC
Model: Smart Adapter
'''
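# HP SSA detail output for an MSCC SmartRAID 3154-8e with no arrays configured:
# two unassigned 480 GB SSDs (CN1:2:14 reports "Solid State SAS", CN1:2:15
# "Solid State SATA").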
HPSSA_DRIVES_SSD = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Rebuild Priority: High
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: Yes
Current Parallel Surface Scan Count: 1
Max Parallel Surface Scan Count: 16
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Write Cache Bypass Threshold Size: 1040 KiB
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: Not Configured
Configured Drive Write Cache Policy: Default
Unconfigured Drive Write Cache Policy: Default
HBA Drive Write Cache Policy: Default
Total Cache Size: 4.0
Total Cache Memory Available: 3.8
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 33
Number of Ports: 2 External only
Encryption: Not Set
Driver Name: smartpqi
Driver Version: Linux 1.0.4-100
I2C Address: 0xDE
PCI Address (Domain:Bus:Device.Function): 0000:C5:00.0
Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
Controller Mode: RAID
Controller Mode Reboot: Not Required
Port Max Phy Rate Limiting Supported: False
Latency Scheduler Setting: Disabled
Current Power Mode: MaxPerformance
Survival Mode: Enabled
Sanitize Erase Supported: True
Sanitize Lock: None
Sensor ID: 0
Location: Inlet Ambient
Current Value (C): 24
Max Value Since Power On: 24
Sensor ID: 1
Location: ASIC
Current Value (C): 33
Max Value Since Power On: 34
Sensor ID: 2
Location: Top
Current Value (C): 25
Max Value Since Power On: 25
Sensor ID: 3
Location: Bottom
Current Value (C): 27
Max Value Since Power On: 27
Primary Boot Volume: None
Secondary Boot Volume: None
unassigned
physicaldrive CN1:2:14
Port: CN1
Box: 1
Bay: 14
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SAS
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1720173A1594
WWID: 51402EC001CBE44D
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.73%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 271795 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: D2BC303144FC6EB7D44C2485299BEECF
physicaldrive CN1:2:15
Port: CN1
Box: 1
Bay: 15
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 17231774232E
WWID: 51402EC001CBE44E
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.74%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 282276 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 2102E568FC6C0373122097BFB0C12C8B
SEP (Vendor ID MSCC, Model Smart Adapter) 379
Device Number: 379
Firmware Version: 1.98
WWID: 50000D1E00190848
Port: Unknown
Vendor ID: MSCC
Model: Smart Adapter
'''
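# HP SSA detail output with a single 100 GB RAID 1 logical drive mirrored
# across CN1:1:1 and CN1:1:2 on Array A; the other five drives are unassigned.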
HPSSA_ONE_DRIVE = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Rebuild Priority: High
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: Yes
Current Parallel Surface Scan Count: 1
Max Parallel Surface Scan Count: 16
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Write Cache Bypass Threshold Size: 1040 KiB
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: Not Configured
Configured Drive Write Cache Policy: Default
Unconfigured Drive Write Cache Policy: Default
HBA Drive Write Cache Policy: Default
Total Cache Size: 4.0
Total Cache Memory Available: 3.8
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: Recharging
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 35
Number of Ports: 2 External only
Encryption: Not Set
Driver Name: smartpqi
Driver Version: Linux 1.0.4-100
I2C Address: 0xDE
PCI Address (Domain:Bus:Device.Function): 0000:C5:00.0
Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
Controller Mode: RAID
Controller Mode Reboot: Not Required
Port Max Phy Rate Limiting Supported: False
Latency Scheduler Setting: Disabled
Current Power Mode: MaxPerformance
Survival Mode: Enabled
Sanitize Erase Supported: True
Sanitize Lock: None
Sensor ID: 0
Location: Inlet Ambient
Current Value (C): 25
Max Value Since Power On: 25
Sensor ID: 1
Location: ASIC
Current Value (C): 35
Max Value Since Power On: 35
Sensor ID: 2
Location: Top
Current Value (C): 26
Max Value Since Power On: 26
Sensor ID: 3
Location: Bottom
Current Value (C): 28
Max Value Since Power On: 28
Primary Boot Volume: None
Secondary Boot Volume: None
HP D3700 Enclosure at Port CN1, Box 1, OK
Fan Status: OK
Temperature Status: OK
Power Supply Status: Redundant
Vendor ID: HP
Serial Number: 2M273002X8
Firmware Version: 4.12
Drive Bays: 25
Port: CN1
Box: 1
Location: External
Expander 378
Device Number: 378
Firmware Version: 4.12
WWID: 51402EC001CBE47D
Port: CN1
Box: 1
Vendor ID: HP
Enclosure SEP (Vendor ID HP, Model D3700) 377
Device Number: 377
Firmware Version: 4.12
WWID: 51402EC001CBE47C
Port: CN1
Box: 1
Vendor ID: HP
Model: D3700
IO Module Board Serial Number: PDNFNB1LM710EO
IO Module Serial Number: 0000000000
IO Module Part Number: QW967-04402
IO Module Spare Part Number: 700521-001
Backplane 1 Board Serial Number: PCZCDC1LM6703J
Backplane 1 Serial Number: 2M273002X8
Backplane 1 Part Number: QW967-60301
Backplane 1 Spare Part Number: 734345-001
Backplane 1 System SKU: QW967A
Physical Drives
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:2 (port CN1:box 1:bay 2, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:3 (port CN1:box 1:bay 3, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:4 (port CN1:box 1:bay 4, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:5 (port CN1:box 1:bay 5, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:11 (port CN1:box 1:bay 11, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:12 (port CN1:box 1:bay 12, SATA SSD, 500 GB, OK)
Port Name: CN0
Port ID: 0
Port Mode: RAID
Port Connection Number: 0
SAS Address: 50000D1E00190840
Port Location: External
Managed Cable Connected: False
Port Name: CN1
Port ID: 1
Port Mode: RAID
Port Connection Number: 1
SAS Address: 50000D1E00190844
Port Location: External
Managed Cable Connected: True
Managed Cable Length: 2
Managed Cable Serial Number: APF16500030TJG
Managed Cable Part Number: 691970-003
Array: A
Interface Type: Solid State SATA
Unused Space: 710864 MB (77.63%)
Used Space: 200.00 GB (22.37%)
Status: OK
MultiDomain Status: OK
Array Type: Data
I/O Bypass: enable
Logical Drive: 1
Size: 100.00 GB
Fault Tolerance: 1
Heads: 255
Sectors Per Track: 32
Cylinders: 25700
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: OK
Unrecoverable Media Errors: None
MultiDomain Status: OK
Caching: Disabled
Unique Identifier: 600508B1001CA1778DB3DFDF190B31C2
Disk Name: /dev/sde
Mount Points: None
Logical Drive Label: 0216D8AE8A02F3004A0 DFDE
Mirror Group 1:
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 500 GB, OK)
Mirror Group 2:
physicaldrive CN1:1:2 (port CN1:box 1:bay 2, SATA SSD, 500 GB, OK)
Drive Type: Data
LD Acceleration Method: I/O Bypass
physicaldrive CN1:1:1
Port: CN1
Box: 1
Bay: 1
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742962
WWID: 51402EC001CBE441
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 23
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 459159 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4FD17AD610DC69B1300F6071DDACD5F9
physicaldrive CN1:1:2
Port: CN1
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1723177429F7
WWID: 51402EC001CBE442
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 23
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 459159 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: C3D462620B715A67DCAE1772A3855B4A
Unassigned
physicaldrive CN1:1:3
Port: CN1
Box: 1
Bay: 3
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742957
WWID: 51402EC001CBE440
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 459159 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 6DA7A33CE83BDD9133399B6F0F8108B2
physicaldrive CN1:1:4
Port: CN1
Box: 1
Bay: 4
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1720173A1558
WWID: 51402EC001CBE443
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 459159 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: CB6A024F1DF6E13FD56CED7B61173E3E
physicaldrive CN1:1:5
Port: CN1
Box: 1
Bay: 5
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 17231774296D
WWID: 51402EC001CBE444
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.86%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 524859 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4E2F74AFE1BD94AC121CA43B7181134A
physicaldrive CN1:1:11
Port: CN1
Box: 1
Bay: 11
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1723177429D8
WWID: 51402EC001CBE44A
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.86%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 524859 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: A5557CC7B709477D516D8E78BCFE870B
physicaldrive CN1:1:12
Port: CN1
Box: 1
Bay: 12
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742487
WWID: 51402EC001CBE44B
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.74%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 282276 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: F50D85711F310CCD737C4FCE99EE9E16
Enclosure SEP (Vendor ID HP, Model D3700) 377
Device Number: 377
Firmware Version: 4.12
WWID: 51402EC001CBE47C
Port: CN1
Box: 1
Vendor ID: HP
Model: D3700
IO Module Board Serial Number: PDNFNB1LM710EO
IO Module Serial Number: 0000000000
IO Module Part Number: QW967-04402
IO Module Spare Part Number: 700521-001
Backplane 1 Board Serial Number: PCZCDC1LM6703J
Backplane 1 Serial Number: 2M273002X8
Backplane 1 Part Number: QW967-60301
Backplane 1 Spare Part Number: 734345-001
Backplane 1 System SKU: QW967A
Expander 378
Device Number: 378
Firmware Version: 4.12
WWID: 51402EC001CBE47D
Port: CN1
Box: 1
Vendor ID: HP
SEP (Vendor ID MSCC, Model Smart Adapter) 379
Device Number: 379
Firmware Version: 1.98
WWID: 50000D1E00190848
Port: Unknown
Vendor ID: MSCC
Model: Smart Adapter
'''
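# HP SSA detail output (older hpsa-driver controller) with a single 100 GB
# RAID 50 logical drive spanning two parity groups of three 480 GB SSDs each;
# CN1:1:12 is left unassigned.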
HPSSA_ONE_DRIVE_RAID_50 = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2
Serial Number: PDVTF0BRH5T0MO
Cache Serial Number: PBKUD0BRH5T3I6
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Rebuild Priority: Medium
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 10% Read / 90% Write
Drive Write Cache: Disabled
Total Cache Size: 2.0 GB
Total Cache Memory Available: 1.8 GB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 88
Cache Module Temperature (C): 38
Capacitor Temperature (C): 23
Number of Ports: 6 (2 Internal / 4 External )
Driver Name: hpsa
Driver Version: 3.4.4
Driver Supports HP SSD Smart Path: True
Array: A
Interface Type: Solid State SATA
Unused Space: 2593386 MB (94.41%)
Used Space: 150.00 GB (5.59%)
Status: OK
MultiDomain Status: OK
Array Type: Data
I/O Bypass: enable
Logical Drive: 1
Size: 100.00 GB
Fault Tolerance: 50
Number of Parity Groups: 2
Heads: 255
Sectors Per Track: 32
Cylinders: 25700
Strip Size: 256 KB
Full Stripe Size: 512 KB
Status: OK
Unrecoverable Media Errors: None
MultiDomain Status: OK
Caching: Disabled
Parity Initialization Status: Initialization Completed
Unique Identifier: 600508B1001C7575301CB1820BEC6260
Disk Name: /dev/sdd
Mount Points: None
Logical Drive Label: 01F8D4F48A02F3004A0 B741
Parity Group 1:
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 480 GB, OK)
physicaldrive CN1:1:2 (port CN1:box 1:bay 2, SATA SSD, 480 GB, OK)
physicaldrive CN1:1:3 (port CN1:box 1:bay 3, SATA SSD, 480 GB, OK)
Parity Group 2:
physicaldrive CN1:1:4 (port CN1:box 1:bay 4, SATA SSD, 480 GB, OK)
physicaldrive CN1:1:5 (port CN1:box 1:bay 5, SATA SSD, 480 GB, OK)
physicaldrive CN1:1:11 (port CN1:box 1:bay 11, SATA SSD, 480 GB, OK)
Drive Type: Data
LD Acceleration Method: I/O Bypass
physicaldrive CN1:1:1
Port: CN1
Box: 1
Bay: 1
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742957
WWID: 51402EC001CBE440
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 6DA7A33CE83BDD9133399B6F0F8108B2
physicaldrive CN1:1:2
Port: CN1
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742962
WWID: 51402EC001CBE441
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4FD17AD610DC69B1300F6071DDACD5F9
physicaldrive CN1:1:3
Port: CN1
Box: 1
Bay: 3
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1723177429F7
WWID: 51402EC001CBE442
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: C3D462620B715A67DCAE1772A3855B4A
physicaldrive CN1:1:4
Port: CN1
Box: 1
Bay: 4
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1720173A1558
WWID: 51402EC001CBE443
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: CB6A024F1DF6E13FD56CED7B61173E3E
physicaldrive CN1:1:5
Port: CN1
Box: 1
Bay: 5
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 17231774296D
WWID: 51402EC001CBE444
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.86%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 516270 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4E2F74AFE1BD94AC121CA43B7181134A
physicaldrive CN1:1:11
Port: CN1
Box: 1
Bay: 11
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 17231774232E
WWID: 51402EC001CBE44E
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.74%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 277657 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 2102E568FC6C0373122097BFB0C12C8B
unassigned
physicaldrive CN1:1:12
Port: CN1
Box: 1
Bay: 12
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742487
WWID: 51402EC001CBE44B
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.74%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 282261 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: F50D85711F310CCD737C4FCE99EE9E16
SEP (Vendor ID PMCSIERA, Model SRCv24x6G) 380
Device Number: 380
Firmware Version: RevB
WWID: 5001438028842E1F
Vendor ID: PMCSIERA
Model: SRCv24x6G
'''
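# HP SSA detail output with a single 100 GB RAID 5 logical drive on
# CN1:1:1 through CN1:1:3; among the remaining drives, CN1:1:12 reports
# "Erase Complete. Reenable Before Using.".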
HPSSA_ONE_DRIVE_100GB_RAID_5 = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Rebuild Priority: High
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: Yes
Current Parallel Surface Scan Count: 1
Max Parallel Surface Scan Count: 16
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Write Cache Bypass Threshold Size: 1040 KiB
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: Not Configured
Configured Drive Write Cache Policy: Default
Unconfigured Drive Write Cache Policy: Default
HBA Drive Write Cache Policy: Default
Total Cache Size: 4.0
Total Cache Memory Available: 3.8
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 33
Number of Ports: 2 External only
Encryption: Not Set
Driver Name: smartpqi
Driver Version: Linux 1.0.4-100
I2C Address: 0xDE
PCI Address (Domain:Bus:Device.Function): 0000:C5:00.0
Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
Controller Mode: RAID
Controller Mode Reboot: Not Required
Port Max Phy Rate Limiting Supported: False
Latency Scheduler Setting: Disabled
Current Power Mode: MaxPerformance
Survival Mode: Enabled
Sanitize Erase Supported: True
Sanitize Lock: None
Sensor ID: 0
Location: Inlet Ambient
Current Value (C): 24
Max Value Since Power On: 24
Sensor ID: 1
Location: ASIC
Current Value (C): 33
Max Value Since Power On: 34
Sensor ID: 2
Location: Top
Current Value (C): 25
Max Value Since Power On: 25
Sensor ID: 3
Location: Bottom
Current Value (C): 27
Max Value Since Power On: 27
Primary Boot Volume: None
Secondary Boot Volume: None
HP D3700 Enclosure at Port CN1, Box 1, OK
Fan Status: OK
Temperature Status: OK
Power Supply Status: Redundant
Vendor ID: HP
Serial Number: 2M273002X8
Firmware Version: 4.12
Drive Bays: 25
Port: CN1
Box: 1
Location: External
Expander 378
Device Number: 378
Firmware Version: 4.12
WWID: 51402EC001CBE47D
Port: CN1
Box: 1
Vendor ID: HP
Enclosure SEP (Vendor ID HP, Model D3700) 377
Device Number: 377
Firmware Version: 4.12
WWID: 51402EC001CBE47C
Port: CN1
Box: 1
Vendor ID: HP
Model: D3700
IO Module Board Serial Number: PDNFNB1LM710EO
IO Module Serial Number: 0000000000
IO Module Part Number: QW967-04402
IO Module Spare Part Number: 700521-001
Backplane 1 Board Serial Number: PCZCDC1LM6703J
Backplane 1 Serial Number: 2M273002X8
Backplane 1 Part Number: QW967-60301
Backplane 1 Spare Part Number: 734345-001
Backplane 1 System SKU: QW967A
Physical Drives
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:2 (port CN1:box 1:bay 2, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:3 (port CN1:box 1:bay 3, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:4 (port CN1:box 1:bay 4, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:5 (port CN1:box 1:bay 5, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:11 (port CN1:box 1:bay 11, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:12 (port CN1:box 1:bay 12, SATA SSD, 500 GB, OK)
Port Name: CN0
Port ID: 0
Port Mode: RAID
Port Connection Number: 0
SAS Address: 50000D1E00190840
Port Location: External
Managed Cable Connected: False
Port Name: CN1
Port ID: 1
Port Mode: RAID
Port Connection Number: 1
SAS Address: 50000D1E00190844
Port Location: External
Managed Cable Connected: True
Managed Cable Length: 2
Managed Cable Serial Number: APF16500030TJG
Managed Cable Part Number: 691970-003
Array: A
Interface Type: Solid State SATA
Unused Space: 2593386 MB (94.41%)
Used Space: 150.00 GB (5.59%)
Status: OK
MultiDomain Status: OK
Array Type: Data
I/O Bypass: enable
Logical Drive: 1
Size: 100.00 GB
Fault Tolerance: 5
Number of Parity Groups: 2
Heads: 255
Sectors Per Track: 32
Cylinders: 25700
Strip Size: 256 KB
Full Stripe Size: 512 KB
Status: OK
Unrecoverable Media Errors: None
MultiDomain Status: OK
Caching: Disabled
Parity Initialization Status: Initialization Completed
Unique Identifier: 600508B1001C7575301CB1820BEC6260
Disk Name: /dev/sdd
Mount Points: None
Logical Drive Label: 01F8D4F48A02F3004A0 B741
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:2 (port CN1:box 1:bay 2, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:3 (port CN1:box 1:bay 3, SATA SSD, 500 GB, OK)
Drive Type: Data
LD Acceleration Method: I/O Bypass
physicaldrive CN1:1:1
Port: CN1
Box: 1
Bay: 1
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742957
WWID: 51402EC001CBE440
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 6DA7A33CE83BDD9133399B6F0F8108B2
physicaldrive CN1:1:2
Port: CN1
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742962
WWID: 51402EC001CBE441
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4FD17AD610DC69B1300F6071DDACD5F9
physicaldrive CN1:1:3
Port: CN1
Box: 1
Bay: 3
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1723177429F7
WWID: 51402EC001CBE442
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: C3D462620B715A67DCAE1772A3855B4A
unassigned
physicaldrive CN1:1:4
Port: CN1
Box: 1
Bay: 4
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1720173A1558
WWID: 51402EC001CBE443
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: CB6A024F1DF6E13FD56CED7B61173E3E
physicaldrive CN1:1:5
Port: CN1
Box: 1
Bay: 5
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 17231774296D
WWID: 51402EC001CBE444
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.86%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 516270 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4E2F74AFE1BD94AC121CA43B7181134A
physicaldrive CN1:1:11
Port: CN1
Box: 1
Bay: 11
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742957
WWID: 51402EC001CBE440
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 6DA7A33CE83BDD9133399B6F0F8108B2
physicaldrive CN1:1:12
Port: CN1
Box: 1
Bay: 15
Status: Erase Complete. Reenable Before Using.
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 17231774232E
WWID: 51402EC001CBE44E
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.74%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 277657 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 2102E568FC6C0373122097BFB0C12C8B
Enclosure SEP (Vendor ID HP, Model D3700) 377
Device Number: 377
Firmware Version: 4.12
WWID: 51402EC001CBE47C
Port: CN1
Box: 1
Vendor ID: HP
Model: D3700
IO Module Board Serial Number: PDNFNB1LM710EO
IO Module Serial Number: 0000000000
IO Module Part Number: QW967-04402
IO Module Spare Part Number: 700521-001
Backplane 1 Board Serial Number: PCZCDC1LM6703J
Backplane 1 Serial Number: 2M273002X8
Backplane 1 Part Number: QW967-60301
Backplane 1 Spare Part Number: 734345-001
Backplane 1 System SKU: QW967A
Expander 378
Device Number: 378
Firmware Version: 4.12
WWID: 51402EC001CBE47D
Port: CN1
Box: 1
Vendor ID: HP
SEP (Vendor ID MSCC, Model Smart Adapter) 379
Device Number: 379
Firmware Version: 1.98
WWID: 50000D1E00190848
Port: Unknown
Vendor ID: MSCC
Model: Smart Adapter
'''
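# HP SSA detail output with two logical drives: a 100 GB RAID 5 on Array A
# (CN1:1:1 through CN1:1:3) and a RAID 1 mirror on Array B (CN1:1:4 and
# CN1:1:5); CN1:1:11 and CN1:1:12 remain unassigned.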
HPSSA_TWO_DRIVES_100GB_RAID5_50GB_RAID1 = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Rebuild Priority: High
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: Yes
Current Parallel Surface Scan Count: 1
Max Parallel Surface Scan Count: 16
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Write Cache Bypass Threshold Size: 1040 KiB
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: Not Configured
Configured Drive Write Cache Policy: Default
Unconfigured Drive Write Cache Policy: Default
HBA Drive Write Cache Policy: Default
Total Cache Size: 4.0
Total Cache Memory Available: 3.8
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: Recharging
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 35
Number of Ports: 2 External only
Encryption: Not Set
Driver Name: smartpqi
Driver Version: Linux 1.0.4-100
I2C Address: 0xDE
PCI Address (Domain:Bus:Device.Function): 0000:C5:00.0
Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
Controller Mode: RAID
Controller Mode Reboot: Not Required
Port Max Phy Rate Limiting Supported: False
Latency Scheduler Setting: Disabled
Current Power Mode: MaxPerformance
Survival Mode: Enabled
Sanitize Erase Supported: True
Sanitize Lock: None
Sensor ID: 0
Location: Inlet Ambient
Current Value (C): 25
Max Value Since Power On: 25
Sensor ID: 1
Location: ASIC
Current Value (C): 35
Max Value Since Power On: 35
Sensor ID: 2
Location: Top
Current Value (C): 26
Max Value Since Power On: 26
Sensor ID: 3
Location: Bottom
Current Value (C): 28
Max Value Since Power On: 28
Primary Boot Volume: None
Secondary Boot Volume: None
HP D3700 Enclosure at Port CN1, Box 1, OK
Fan Status: OK
Temperature Status: OK
Power Supply Status: Redundant
Vendor ID: HP
Serial Number: 2M273002X8
Firmware Version: 4.12
Drive Bays: 25
Port: CN1
Box: 1
Location: External
Expander 378
Device Number: 378
Firmware Version: 4.12
WWID: 51402EC001CBE47D
Port: CN1
Box: 1
Vendor ID: HP
Enclosure SEP (Vendor ID HP, Model D3700) 377
Device Number: 377
Firmware Version: 4.12
WWID: 51402EC001CBE47C
Port: CN1
Box: 1
Vendor ID: HP
Model: D3700
IO Module Board Serial Number: PDNFNB1LM710EO
IO Module Serial Number: 0000000000
IO Module Part Number: QW967-04402
IO Module Spare Part Number: 700521-001
Backplane 1 Board Serial Number: PCZCDC1LM6703J
Backplane 1 Serial Number: 2M273002X8
Backplane 1 Part Number: QW967-60301
Backplane 1 Spare Part Number: 734345-001
Backplane 1 System SKU: QW967A
Physical Drives
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:2 (port CN1:box 1:bay 2, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:3 (port CN1:box 1:bay 3, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:4 (port CN1:box 1:bay 4, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:5 (port CN1:box 1:bay 5, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:11 (port CN1:box 1:bay 11, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:12 (port CN1:box 1:bay 12, SATA SSD, 500 GB, OK)
Port Name: CN0
Port ID: 0
Port Mode: RAID
Port Connection Number: 0
SAS Address: 50000D1E00190840
Port Location: External
Managed Cable Connected: False
Port Name: CN1
Port ID: 1
Port Mode: RAID
Port Connection Number: 1
SAS Address: 50000D1E00190844
Port Location: External
Managed Cable Connected: True
Managed Cable Length: 2
Managed Cable Serial Number: APF16500030TJG
Managed Cable Part Number: 691970-003
Array: A
Interface Type: Solid State SATA
Unused Space: 2593386 MB (94.41%)
Used Space: 150.00 GB (5.59%)
Status: OK
MultiDomain Status: OK
Array Type: Data
I/O Bypass: enable
Logical Drive: 1
Size: 100.00 GB
Fault Tolerance: 5
Number of Parity Groups: 2
Heads: 255
Sectors Per Track: 32
Cylinders: 25700
Strip Size: 256 KB
Full Stripe Size: 512 KB
Status: OK
Unrecoverable Media Errors: None
MultiDomain Status: OK
Caching: Disabled
Parity Initialization Status: Initialization Completed
Unique Identifier: 600508B1001C7575301CB1820BEC6260
Disk Name: /dev/sdd
Mount Points: None
Logical Drive Label: 01F8D4F48A02F3004A0 B741
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:2 (port CN1:box 1:bay 2, SATA SSD, 500 GB, OK)
physicaldrive CN1:1:3 (port CN1:box 1:bay 3, SATA SSD, 500 GB, OK)
Drive Type: Data
LD Acceleration Method: I/O Bypass
physicaldrive CN1:1:1
Port: CN1
Box: 1
Bay: 1
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742957
WWID: 51402EC001CBE440
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 6DA7A33CE83BDD9133399B6F0F8108B2
physicaldrive CN1:1:2
Port: CN1
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742962
WWID: 51402EC001CBE441
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4FD17AD610DC69B1300F6071DDACD5F9
physicaldrive CN1:1:3
Port: CN1
Box: 1
Bay: 3
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1723177429F7
WWID: 51402EC001CBE442
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: C3D462620B715A67DCAE1772A3855B4A
Array: B
Interface Type: Solid State SATA
Unused Space: 710864 MB (77.63%)
Used Space: 200.00 GB (22.37%)
Status: OK
MultiDomain Status: OK
Array Type: Data
I/O Bypass: enable
Logical Drive: 2
Size: 100.00 GB
Fault Tolerance: 1
Heads: 255
Sectors Per Track: 32
Cylinders: 25700
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: OK
Unrecoverable Media Errors: None
MultiDomain Status: OK
Caching: Disabled
Unique Identifier: 600508B1001CA1778DB3DFDF190B31C2
Disk Name: /dev/sde
Mount Points: None
Logical Drive Label: 0216D8AE8A02F3004A0 DFDE
Mirror Group 1:
physicaldrive CN1:1:4 (port CN1:box 1:bay 4, SATA SSD, 500 GB, OK)
Mirror Group 2:
physicaldrive CN1:1:5 (port CN1:box 1:bay 5, SATA SSD, 500 GB, OK)
Drive Type: Data
LD Acceleration Method: I/O Bypass
physicaldrive CN1:1:4
Port: CN1
Box: 1
Bay: 4
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1720173A1558
WWID: 51402EC001CBE443
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: CB6A024F1DF6E13FD56CED7B61173E3E
physicaldrive CN1:1:5
Port: CN1
Box: 1
Bay: 5
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 17231774296D
WWID: 51402EC001CBE444
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.86%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 516270 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4E2F74AFE1BD94AC121CA43B7181134A
unassigned
physicaldrive CN1:1:11
Port: CN1
Box: 1
Bay: 11
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1723177429D8
WWID: 51402EC001CBE44A
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.86%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 524859 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: A5557CC7B709477D516D8E78BCFE870B
physicaldrive CN1:1:12
Port: CN1
Box: 1
Bay: 12
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742487
WWID: 51402EC001CBE44B
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.74%
Power On Hours: 17660
Estimated Life Remaining based on workload to date: 282276 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: F50D85711F310CCD737C4FCE99EE9E16
Enclosure SEP (Vendor ID HP, Model D3700) 377
Device Number: 377
Firmware Version: 4.12
WWID: 51402EC001CBE47C
Port: CN1
Box: 1
Vendor ID: HP
Model: D3700
IO Module Board Serial Number: PDNFNB1LM710EO
IO Module Serial Number: 0000000000
IO Module Part Number: QW967-04402
IO Module Spare Part Number: 700521-001
Backplane 1 Board Serial Number: PCZCDC1LM6703J
Backplane 1 Serial Number: 2M273002X8
Backplane 1 Part Number: QW967-60301
Backplane 1 Spare Part Number: 734345-001
Backplane 1 System SKU: QW967A
Expander 378
Device Number: 378
Firmware Version: 4.12
WWID: 51402EC001CBE47D
Port: CN1
Box: 1
Vendor ID: HP
SEP (Vendor ID MSCC, Model Smart Adapter) 379
Device Number: 379
Firmware Version: 1.98
WWID: 50000D1E00190848
Port: Unknown
Vendor ID: MSCC
Model: Smart Adapter
'''
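# Negative-path fixture: otherwise well-formed output in which the unassigned
# physical drive reports the malformed size "500foo".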
HPSSA_BAD_SIZE_PHYSICAL_DRIVE = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: Not Configured
Configured Drive Write Cache Policy: Default
Unconfigured Drive Write Cache Policy: Default
HBA Drive Write Cache Policy: Default
Total Cache Size: 4.0
Total Cache Memory Available: 3.8
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: Recharging
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 34
Number of Ports: 2 External only
Encryption: Not Set
Driver Name: smartpqi
Driver Version: Linux 1.0.4-100
I2C Address: 0xDE
PCI Address (Domain:Bus:Device.Function): 0000:C5:00.0
Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
Controller Mode: RAID
Controller Mode Reboot: Not Required
Port Max Phy Rate Limiting Supported: False
Latency Scheduler Setting: Disabled
Current Power Mode: MaxPerformance
Survival Mode: Enabled
Sanitize Erase Supported: True
Sanitize Lock: None
Sensor ID: 0
Location: Inlet Ambient
Current Value (C): 25
Max Value Since Power On: 25
Sensor ID: 1
Location: ASIC
Current Value (C): 34
Max Value Since Power On: 34
Sensor ID: 2
Location: Top
Current Value (C): 26
Max Value Since Power On: 26
Sensor ID: 3
Location: Bottom
Current Value (C): 28
Max Value Since Power On: 28
Primary Boot Volume: None
Secondary Boot Volume: None
HP D3700 Enclosure at Port CN1, Box 1, OK
Fan Status: OK
Temperature Status: OK
Power Supply Status: Redundant
Vendor ID: HP
Serial Number: 2M273002X8
Firmware Version: 4.12
Drive Bays: 25
Port: CN1
Box: 1
Location: External
Expander 378
Device Number: 378
Firmware Version: 4.12
WWID: 51402EC001CBE47D
Port: CN1
Box: 1
Vendor ID: HP
Enclosure SEP (Vendor ID HP, Model D3700) 377
Device Number: 377
Firmware Version: 4.12
WWID: 51402EC001CBE47C
Port: CN1
Box: 1
Vendor ID: HP
Model: D3700
IO Module Board Serial Number: PDNFNB1LM710EO
IO Module Serial Number: 0000000000
IO Module Part Number: QW967-04402
IO Module Spare Part Number: 700521-001
Backplane 1 Board Serial Number: PCZCDC1LM6703J
Backplane 1 Serial Number: 2M273002X8
Backplane 1 Part Number: QW967-60301
Backplane 1 Spare Part Number: 734345-001
Backplane 1 System SKU: QW967A
Unassigned
physicaldrive CN1:1:1
Port: CN1
Box: 1
Bay: 1
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500foo
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742957
WWID: 51402EC001CBE440
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 459133 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 6DA7A33CE83BDD9133399B6F0F8108B2
'''
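# Negative-path fixture: the logical drive reports the malformed size
# "558.9foo".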
HPSSA_BAD_SIZE_LOGICAL_DRIVE = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Rebuild Priority: High
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: Yes
Current Parallel Surface Scan Count: 1
Max Parallel Surface Scan Count: 16
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Write Cache Bypass Threshold Size: 1040 KiB
Array: A
Interface Type: Solid State SATA
Unused Space: 2593386 MB (94.41%)
Used Space: 150.00 GB (5.59%)
Status: OK
MultiDomain Status: OK
Array Type: Data
I/O Bypass: enable
Logical Drive: 1
Size: 558.9foo
Fault Tolerance: 1
Number of Parity Groups: 2
Heads: 255
Sectors Per Track: 32
Cylinders: 25700
Strip Size: 256 KB
Full Stripe Size: 512 KB
Status: OK
Unrecoverable Media Errors: None
MultiDomain Status: OK
Caching: Disabled
Parity Initialization Status: Initialization Completed
Unique Identifier: 600508B1001C7575301CB1820BEC6260
Disk Name: /dev/sdd
Mount Points: None
Logical Drive Label: 01F8D4F48A02F3004A0 B741
Mirror Group 0:
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 480 GB, OK)
Mirror Group 1:
physicaldrive CN1:1:2 (port CN1:box 1:bay 2, SATA SSD, 480 GB, OK)
Drive Type: Data
LD Acceleration Method: Controller Cache
physicaldrive CN1:1:1
Port: CN1
Box: 1
Bay: 1
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742957
WWID: 51402EC001CBE440
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 6DA7A33CE83BDD9133399B6F0F8108B2
physicaldrive CN1:1:2
Port: CN1
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742962
WWID: 51402EC001CBE441
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 35
Usage remaining: 99.84%
Power On Hours: 17371
Estimated Life Remaining based on workload to date: 451645 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 4FD17AD610DC69B1300F6071DDACD5F9
'''
HPSSA_SMALL_SIZE_PHYSICAL_DRIVE = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Rebuild Priority: High
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: Yes
Current Parallel Surface Scan Count: 1
Max Parallel Surface Scan Count: 16
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Write Cache Bypass Threshold Size: 1040 KiB
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
unassigned
physicaldrive CN1:1:11
Port: CN1
Box: 1
Bay: 11
Status: Erase Complete. Reenable Before Using.
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 2048 MB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1723177429D8
WWID: 51402EC001CBE44A
'''
ARRAY_ACCOMODATE_LOGICAL_DISK = '''
Available options are:
Max: 1042188 (Units in MB)
Min: 16 (Units in MB)
'''
ARRAY_ACCOMODATE_LOGICAL_DISK_INVALID = '''
Error: "raid=1" is not a valid option for array A
Available options are:
0
1adm
5 (default value)
'''
HPSSA_NO_DRIVES_3_PHYSICAL_DISKS = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: PDVTF0BRH5T0MO
Cache Serial Number: PBKUD0BRH5T3I6
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 4.68
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Drive Write Cache: Disabled
Total Cache Size: 2.0 GB
Total Cache Memory Available: 1.8 GB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 88
Cache Module Temperature (C): 37
Capacitor Temperature (C): 21
Number of Ports: 6 (2 Internal / 4 External )
Driver Name: hpsa
Driver Version: 3.4.4
Driver Supports HP SSD Smart Path: True
unassigned
physicaldrive CN1:1:1
Port: 5I
Box: 1
Bay: 1
Status: OK
Drive Type: Unassigned Drive
Interface Type: SAS
Size: 500 GB
Native Block Size: 512
Rotational Speed: 15000
Firmware Revision: HPGB
Serial Number: 6SL7G55D0000N4173JLT
Model: ATA MK000480GWEZH
Current Temperature (C): 35
Maximum Temperature (C): 43
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
physicaldrive CN1:1:2
Port: 5I
Box: 1
Bay: 2
Status: OK
Drive Type: Unassigned Drive
Interface Type: SAS
Size: 600 GB
Native Block Size: 512
Rotational Speed: 15000
Firmware Revision: HPGB
Serial Number: 6SL7H2DM0000B41800Y0
Model: ATA MK000480GWEZH
Current Temperature (C): 35
Maximum Temperature (C): 44
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
physicaldrive CN1:1:3
Port: 5I
Box: 1
Bay: 1
Status: OK
Drive Type: Unassigned Drive
Interface Type: SAS
Size: 700 GB
Native Block Size: 512
Rotational Speed: 15000
Firmware Revision: HPGB
Serial Number: 6SL7G55D0000N4173JLT
Model: ATA MK000480GWEZH
Current Temperature (C): 35
Maximum Temperature (C): 43
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
SEP (Vendor ID PMCSIERA, Model SRCv24x6G) 380
Device Number: 380
Firmware Version: RevB
WWID: 5001438028842E1F
Vendor ID: PMCSIERA
Model: SRCv24x6G
'''
ONE_DRIVE_RAID_1 = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: PDVTF0BRH5T0MO
Cache Serial Number: PBKUD0BRH5T3I6
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 4.68
Rebuild Priority: Medium
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 10% Read / 90% Write
Drive Write Cache: Disabled
Total Cache Size: 2.0 GB
Total Cache Memory Available: 1.8 GB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 88
Cache Module Temperature (C): 38
Capacitor Temperature (C): 23
Number of Ports: 6 (2 Internal / 4 External )
Driver Name: hpsa
Driver Version: 3.4.4
Driver Supports HP SSD Smart Path: True
Array: A
Interface Type: SAS
Unused Space: 1042189 MB
Status: OK
MultiDomain Status: OK
Array Type: Data
HP SSD Smart Path: disable
Logical Drive: 1
Size: 50.0 GB
Fault Tolerance: 1
Heads: 255
Sectors Per Track: 32
Cylinders: 12850
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: OK
MultiDomain Status: OK
Caching: Enabled
Unique Identifier: 600508B1001C02BDBCB659B8A264186A
Disk Name: /dev/sda
Mount Points: None
Logical Drive Label: 02896A0EPDVTF0BRH5T0MOEBAA
Mirror Group 0:
physicaldrive CN1:1:1 (port 5I:box 1:bay 1, SAS, 600 GB, OK)
Mirror Group 1:
physicaldrive CN1:1:2 (port 5I:box 1:bay 2, SAS, 600 GB, OK)
Drive Type: Data
LD Acceleration Method: Controller Cache
physicaldrive CN1:1:1
Port: 5I
Box: 1
Bay: 1
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 600 GB
Native Block Size: 512
Rotational Speed: 15000
Firmware Revision: HPD5
Serial Number: 6SL7G55D0000N4173JLT
Model: ATA MK000480GWEZH
Current Temperature (C): 37
Maximum Temperature (C): 43
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
physicaldrive CN1:1:2
Port: 5I
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 600 GB
Native Block Size: 512
Rotational Speed: 15000
Firmware Revision: HPGB
Serial Number: 6SL7H2DM0000B41800Y0
Model: ATA MK000480GWEZH
Current Temperature (C): 37
Maximum Temperature (C): 44
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
unassigned
physicaldrive CN1:1:3
Port: 5I
Box: 1
Bay: 1
Status: OK
Drive Type: Unassigned Drive
Interface Type: SAS
Size: 500 GB
Native Block Size: 512
Rotational Speed: 15000
Firmware Revision: HPGB
Serial Number: 6SL7G55D0000N4173JLT
Model: ATA MK000480GWEZH
Current Temperature (C): 35
Maximum Temperature (C): 43
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
'''
DRIVE_2_RAID_1_OKAY_TO_SHARE = '''
Available options are:
Max: 521094 (Units in MB)
Min: 16 (Units in MB)
'''
TWO_DRIVES_50GB_RAID1 = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: PDVTF0BRH5T0MO
Cache Serial Number: PBKUD0BRH5T3I6
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 4.68
Rebuild Priority: Medium
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 10% Read / 90% Write
Drive Write Cache: Disabled
Total Cache Size: 2.0 GB
Total Cache Memory Available: 1.8 GB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 88
Cache Module Temperature (C): 38
Capacitor Temperature (C): 23
Number of Ports: 6 (2 Internal / 4 External )
Driver Name: hpsa
Driver Version: 3.4.4
Driver Supports HP SSD Smart Path: True
Array: A
Interface Type: SAS
Unused Space: 939791 MB
Status: OK
MultiDomain Status: OK
Array Type: Data
HP SSD Smart Path: disable
Logical Drive: 1
Size: 50.0 GB
Fault Tolerance: 1
Heads: 255
Sectors Per Track: 32
Cylinders: 12850
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: OK
MultiDomain Status: OK
Caching: Enabled
Unique Identifier: 600508B1001C02BDBCB659B8A264186A
Disk Name: /dev/sda
Mount Points: None
Logical Drive Label: 02896A0EPDVTF0BRH5T0MOEBAA
Mirror Group 0:
physicaldrive CN1:1:1 (port 5I:box 1:bay 1, SAS, 600 GB, OK)
Mirror Group 1:
physicaldrive CN1:1:2 (port 5I:box 1:bay 2, SAS, 600 GB, OK)
Drive Type: Data
LD Acceleration Method: Controller Cache
Logical Drive: 2
Size: 50.0 GB
Fault Tolerance: 1
Heads: 255
Sectors Per Track: 32
Cylinders: 12850
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: OK
MultiDomain Status: OK
Caching: Enabled
Unique Identifier: 600508B1001C1614116817E8A9DA1D2F
Disk Name: /dev/sdb
Mount Points: None
Logical Drive Label: 06896EEAPDVTF0BRH5T0MO55C7
Mirror Group 0:
physicaldrive CN1:1:1 (port 5I:box 1:bay 1, SAS, 600 GB, OK)
Mirror Group 1:
physicaldrive CN1:1:2 (port 5I:box 1:bay 2, SAS, 600 GB, OK)
Drive Type: Data
LD Acceleration Method: Controller Cache
physicaldrive CN1:1:1
Port: 5I
Box: 1
Bay: 1
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 600 GB
Native Block Size: 512
Rotational Speed: 15000
Firmware Revision: HPGB
Serial Number: 6SL7G55D0000N4173JLT
Model: ATA MK000480GWEZH
Current Temperature (C): 37
Maximum Temperature (C): 43
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
physicaldrive CN1:1:2
Port: 5I
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 600 GB
Native Block Size: 512
Rotational Speed: 15000
Firmware Revision: HPGB
Serial Number: 6SL7H2DM0000B41800Y0
Model: ATA MK000480GWEZH
Current Temperature (C): 37
Maximum Temperature (C): 44
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
unassigned
physicaldrive CN1:1:3
Port: 5I
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 600 GB
Native Block Size: 512
Rotational Speed: 15000
Firmware Revision: HPGB
Serial Number: 6SL7H2DM0000B41800Y0
Model: ATA MK000480GWEZH
Current Temperature (C): 37
Maximum Temperature (C): 44
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
SEP (Vendor ID PMCSIERA, Model SRCv24x6G) 380
Device Number: 380
Firmware Version: RevB
WWID: 5001438028842E1F
Vendor ID: PMCSIERA
Model: SRCv24x6G
'''
NO_DRIVES_HPSSA_7_DISKS = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: PDVTF0BRH5T0KV
unassigned
physicaldrive CN1:1:1
Port: 5I
Box: 1
Bay: 1
Status: OK
Interface Type: SAS
Size: 199 GB
Firmware Revision: HPGB
Serial Number: 6SL7G4QV0000B41803GZ
Model: ATA MK000480GWEZH
physicaldrive CN1:1:2
Port: 5I
Box: 1
Bay: 2
Status: OK
Interface Type: SAS
Size: 200 GB
Firmware Revision: HPGB
Serial Number: 6SL7HK0Y0000N419008G
Model: ATA MK000480GWEZH
physicaldrive CN1:1:3
Port: 5I
Box: 1
Bay: 3
Status: OK
Interface Type: SAS
Size: 600 GB
Firmware Revision: HPGB
Serial Number: 6SL7H1L50000B4180V5Y
Model: ATA MK000480GWEZH
physicaldrive CN1:1:4
Port: 5I
Box: 1
Bay: 4
Status: OK
Interface Type: SAS
Size: 599 GB
Firmware Revision: HPGB
Serial Number: 6SL7H1K30000B41800TT
Model: ATA MK000480GWEZH
physicaldrive CN0:1:5
Port: 6I
Box: 1
Bay: 5
Status: OK
Interface Type: SAS
Size: 598 GB
Firmware Revision: HPDB
Serial Number: 2AVUR97N
Model: ATA MK000480GWEZH
physicaldrive CN0:1:6
Port: 6I
Box: 1
Bay: 6
Status: OK
Interface Type: SAS
Size: 500 GB
Firmware Revision: HPDB
Serial Number: 2AVVJR1N
Model: ATA MK000480GWEZH
physicaldrive CN0:1:7
Port: 6I
Box: 1
Bay: 7
Status: OK
Interface Type: SAS
Size: 500 GB
Firmware Revision: HPDB
Serial Number: 2AVVENJN
Model: ATA MK000480GWEZH
'''
ONE_DRIVE_RAID_1_50_GB = '''
MSCC SmartRAID 3154-8e in Slot 2085
Slot: 2085
Serial Number: PDVTF0BRH5T0KV
Array: A
Interface Type: SAS
Unused Space: 1042189 MB (91.1%)
Used Space: 100.0 GB (8.9%)
Logical Drive: 1
Size: 50.0 GB
Fault Tolerance: 1
Status: OK
MultiDomain Status: OK
Unique Identifier: 600508B1001C861A72C774A7394AE2AC
Disk Name: /dev/sda
Logical Drive Label: 013400ABPDVTF0BRH5T0KV22C5
LD Acceleration Method: Controller Cache
physicaldrive CN1:1:1
Port: 5I
Box: 1
Bay: 1
Status: OK
Interface Type: SAS
Size: 199 GB
Firmware Revision: HPGB
Serial Number: 6SL7G4QV0000B41803GZ
Model: ATA MK000480GWEZH
physicaldrive CN1:1:2
Port: 5I
Box: 1
Bay: 2
Status: OK
Interface Type: SAS
Size: 200 GB
Firmware Revision: HPGB
Serial Number: 6SL7HK0Y0000N419008G
Model: ATA MK000480GWEZH
unassigned
physicaldrive CN1:1:3
Port: 5I
Box: 1
Bay: 3
Status: OK
Interface Type: SAS
Size: 600 GB
Firmware Revision: HPGB
Serial Number: 6SL7H1L50000B4180V5Y
Model: ATA MK000480GWEZH
physicaldrive CN1:1:4
Port: 5I
Box: 1
Bay: 4
Status: OK
Interface Type: SAS
Size: 599 GB
Firmware Revision: HPGB
Serial Number: 6SL7H1K30000B41800TT
Model: ATA MK000480GWEZH
physicaldrive CN0:1:5
Port: 6I
Box: 1
Bay: 5
Status: OK
Interface Type: SAS
Size: 598 GB
Firmware Revision: HPDB
Serial Number: 2AVUR97N
Model: ATA MK000480GWEZH
physicaldrive CN0:1:6
Port: 6I
Box: 1
Bay: 6
Status: OK
Interface Type: SAS
Size: 500 GB
Firmware Revision: HPDB
Serial Number: 2AVVJR1N
Model: ATA MK000480GWEZH
physicaldrive CN0:1:7
Port: 6I
Box: 1
Bay: 7
Status: OK
Interface Type: SAS
Size: 500 GB
Firmware Revision: HPDB
Serial Number: 2AVVENJN
Model: ATA MK000480GWEZH
'''
TWO_DRIVES_50GB_RAID1_MAXGB_RAID5 = '''
MSCC SmartRAID 3154-8e in Slot 2085
Slot: 2085
Serial Number: PDVTF0BRH5T0KV
Array: A
Interface Type: SAS
Unused Space: 1042189 MB (91.1%)
Used Space: 100.0 GB (8.9%)
Status: OK
Logical Drive: 1
Size: 50.0 GB
Fault Tolerance: 1
Status: OK
Unique Identifier: 600508B1001C861A72C774A7394AE2AC
Disk Name: /dev/sda
physicaldrive CN1:1:1
Port: 5I
Box: 1
Bay: 1
Status: OK
Interface Type: SAS
Size: 199 GB
Firmware Revision: HPGB
Serial Number: 6SL7G4QV0000B41803GZ
Model: ATA MK000480GWEZH
physicaldrive CN1:1:2
Port: 5I
Box: 1
Bay: 2
Status: OK
Interface Type: SAS
Size: 200 GB
Firmware Revision: HPGB
Serial Number: 6SL7HK0Y0000N419008G
Model: ATA MK000480GWEZH
Array: B
Interface Type: SAS
Unused Space: 0 MB (0.0%)
Used Space: 1.6 TB (100.0%)
Status: OK
MultiDomain Status: OK
Array Type: Data
HP SSD Smart Path: disable
Logical Drive: 2
Size: 1.1 TB
Fault Tolerance: 5
Status: OK
Unique Identifier: 600508B1001CE9DE8551AEE29D5A72F7
physicaldrive CN1:1:3
Port: 5I
Box: 1
Bay: 3
Status: OK
Interface Type: SAS
Size: 600 GB
Firmware Revision: HPGB
Serial Number: 6SL7H1L50000B4180V5Y
Model: ATA MK000480GWEZH
physicaldrive CN1:1:4
Port: 5I
Box: 1
Bay: 4
Status: OK
Interface Type: SAS
Size: 599 GB
Firmware Revision: HPGB
Serial Number: 6SL7H1K30000B41800TT
Model: ATA MK000480GWEZH
physicaldrive CN0:1:5
Port: 6I
Box: 1
Bay: 5
Status: OK
Interface Type: SAS
Size: 598 GB
Firmware Revision: HPDB
Serial Number: 2AVUR97N
Model: ATA MK000480GWEZH
unassigned
physicaldrive CN0:1:6
Port: 6I
Box: 1
Bay: 6
Status: OK
Interface Type: SAS
Size: 500 GB
Firmware Revision: HPDB
Serial Number: 2AVVJR1N
physicaldrive CN0:1:7
Port: 6I
Box: 1
Bay: 7
Status: OK
Interface Type: SAS
Size: 500 GB
Firmware Revision: HPDB
Serial Number: 2AVVENJN
Model: ATA MK000480GWEZH
'''
HPSSA_HBA_MODE = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: Not Configured
Configured Drive Write Cache Policy: Default
Unconfigured Drive Write Cache Policy: Default
HBA Drive Write Cache Policy: Default
Total Cache Size: 4.0
Total Cache Memory Available: 3.8
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: Recharging
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
HBA Mode Enabled: True
Controller Temperature (C): 34
Number of Ports: 2 External only
Encryption: Not Set
Driver Name: smartpqi
Driver Version: Linux 1.0.4-100
I2C Address: 0xDE
PCI Address (Domain:Bus:Device.Function): 0000:C5:00.0
Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
Controller Mode: RAID
Controller Mode Reboot: Not Required
Port Max Phy Rate Limiting Supported: False
Latency Scheduler Setting: Disabled
Current Power Mode: MaxPerformance
Survival Mode: Enabled
Sanitize Erase Supported: True
Sanitize Lock: None
Port Name: CN1
Port ID: 1
Port Mode: RAID
Port Connection Number: 1
SAS Address: 50000D1E00190844
Port Location: External
Managed Cable Connected: True
Managed Cable Length: 2
Managed Cable Serial Number: APF16500030TJG
Managed Cable Part Number: 691970-003
Physical Drives
physicaldrive CN1:1:1 (port CN1:box 1:bay 1, SATA SSD, 480 GB, OK)
Unassigned
physicaldrive CN1:1:1
Port: CN1
Box: 1
Bay: 1
Status: OK
Drive Type: HBA Mode Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 172317742957
WWID: 51402EC001CBE440
Model: ATA MK000480GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 22
Maximum Temperature (C): 34
Usage remaining: 99.84%
Power On Hours: 17659
Estimated Life Remaining based on workload to date: 459133 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: 6DA7A33CE83BDD9133399B6F0F8108B2
'''
SSA_ERASE_DRIVE = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2
Serial Number: PDNMF0ARH8Y342
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Firmware Version: 4.52-0
Spare Activation Mode: Activate on physical drive failure (default)
Encryption: Disabled
Driver Name: hpsa
Driver Version: 3.4.16
Controller Mode: RAID
Pending Controller Mode: RAID
Controller Mode Reboot: Not Required
Host Serial Number: SGH537Y7AY
Sanitize Erase Supported: True
Primary Boot Volume: None
Secondary Boot Volume: None
Port Name: 1I
Port ID: 0
Port Connection Number: 0
SAS Address: 5001438035544EC0
Port Location: Internal
Managed Cable Connected: False
Physical Drives
physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
unassigned
physicaldrive 1I:2:1
Port: 1I
Box: 2
Bay: 1
Status: OK
Drive Type: Unassigned Drive
Interface Type: SAS
Size: 300 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/512
Rotational Speed: 15100
Firmware Revision: HPD4
Serial Number: S7K0C3FJ0000K601EZLM
WWID: 5000C5008E183B1D
Model: HP EH0300JEDHC
Current Temperature (C): 42
Maximum Temperature (C): 52
PHY Count: 2
PHY Transfer Rate: 12.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
Sanitize Erase Supported: True
Sanitize Estimated Max Erase Time: 0 hour(s)36 minute(s)
Unrestricted Sanitize Supported: False
Shingled Magnetic Recording Support: None
'''
SSA_ERASE_IN_PROGRESS = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Driver Supports Online Firmware Activation: False
Rebuild Priority: High
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: Yes
Current Parallel Surface Scan Count: 1
Max Parallel Surface Scan Count: 16
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Write Cache Bypass Threshold Size: 1040 KiB
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: Not Configured
Configured Drive Write Cache Policy: Default
Unconfigured Drive Write Cache Policy: Default
HBA Drive Write Cache Policy: Default
Total Cache Size: 4.0
Total Cache Memory Available: 3.8
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 33
Number of Ports: 2 External only
Encryption: Not Set
Driver Name: smartpqi
Driver Version: Linux 1.0.4-100
I2C Address: 0xDE
PCI Address (Domain:Bus:Device.Function): 0000:C5:00.0
Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
Controller Mode: RAID
Controller Mode Reboot: Not Required
Port Max Phy Rate Limiting Supported: False
Latency Scheduler Setting: Disabled
Current Power Mode: MaxPerformance
Survival Mode: Enabled
Sanitize Erase Supported: True
Sanitize Lock: None
Sensor ID: 0
Location: Inlet Ambient
Current Value (C): 24
Max Value Since Power On: 26
Sensor ID: 1
Location: ASIC
Current Value (C): 33
Max Value Since Power On: 35
Sensor ID: 2
Location: Top
Current Value (C): 25
Max Value Since Power On: 26
Sensor ID: 3
Location: Bottom
Current Value (C): 27
Max Value Since Power On: 28
Primary Boot Volume: None
Secondary Boot Volume: None
unassigned
physicaldrive CN1:1:14
Port: CN1
Box: 1
Bay: 14
Status: Erase In Progress
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 500 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1720173A1594
WWID: 51402EC001CBE44D
Model: ATA MK000500GWEZH
SATA NCQ Capable: True
SATA NCQ Enabled: True
Erase Pattern: zero
Erase Percent Complete: 66%
Current Temperature (C): 21
Maximum Temperature (C): 34
Usage remaining: 99.74%
Power On Hours: 17366
Estimated Life Remaining based on workload to date: 277577 days
SSD Smart Trip Wearout: False
PHY Count: 1
PHY Transfer Rate: 6.0Gbps
Sanitize Erase Supported: True
Sanitize Freeze Lock Supported: True
Sanitize Anti-Freeze Lock Supported: True
Sanitize Lock: None
Sanitize Estimated Max Erase Time: 1 minute(s), 16 second(s)
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
Drive Unique ID: D2BC303144FC6EB7D44C2485299BEECF
'''
SSA_ERASE_COMPLETE = '''
MSCC SmartRAID 3154-8e in Slot 2085
Bus Interface: PCI
Slot: 2085
Serial Number: 8A02F3004A0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 1.98-0
Firmware Supports Online Firmware Activation: True
Sanitize Erase Supported: True
unassigned
physicaldrive CN1:1:14
Port: CN1
Box: 1
Bay: 14
Status: Erase Complete. Reenable Before Using.
Drive Type: Unassigned Drive
Interface Type: Solid State SATA
Size: 480 GB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Firmware Revision: HPGB
Serial Number: 1720173A1594
WWID: 51402EC001CBE44D
Model: ATA MK000480GWEZH
Sanitize Erase Supported: True
'''
SSA_ERASE_NOT_SUPPORTED = '''
MSCC SmartRAID 3154-8e in Slot 2085
Controller Status: OK
Firmware Version: 1.98-0
Spare Activation Mode: Activate on physical drive failure (default)
Controller Mode: RAID
Pending Controller Mode: RAID
Controller Mode Reboot: Not Required
Sanitize Erase Supported: False
Primary Boot Volume: None
Secondary Boot Volume: None
unassigned
physicaldrive CN1:2:1
Drive Type: Unassigned Drive
Interface Type: SAS
Size: 300 GB
Status: OK
Drive Type: Unassigned Drive
Sanitize Erase Supported: False
Sanitize Estimated Max Erase Time: 0 hour(s)36 minute(s)
Unrestricted Sanitize Supported: False
'''
SSA_ERASE_COMPLETE_NOT_SUPPORTED = '''
MSCC SmartRAID 3154-8e in Slot 2085
Controller Status: OK
Firmware Version: 1.98-0
Spare Activation Mode: Activate on physical drive failure (default)
Controller Mode: RAID
Pending Controller Mode: RAID
Controller Mode Reboot: Not Required
Sanitize Erase Supported: False
Primary Boot Volume: None
Secondary Boot Volume: None
unassigned
physicaldrive CN1:2:1
Drive Type: Unassigned Drive
Interface Type: SAS
Size: 300 GB
Status: Erase Complete. Reenable Before Using.
Drive Type: Unassigned Drive
Sanitize Erase Supported: False
Sanitize Estimated Max Erase Time: 0 hour(s)36 minute(s)
Unrestricted Sanitize Supported: False
'''
SSA_ERASE_IN_PROGRESS_NOT_SUPPORTED = '''
MSCC SmartRAID 3154-8e in Slot 2085
Controller Mode: RAID
Pending Controller Mode: RAID
Sanitize Erase Supported: True
Primary Boot Volume: None
Secondary Boot Volume: None
unassigned
physicaldrive CN1:2:1
Drive Type: Unassigned Drive
Interface Type: SAS
Size: 300 GB
Status: Erase In Progress
Drive Type: Unassigned Drive
Sanitize Erase Supported: False
Sanitize Estimated Max Erase Time: 0 hour(s)36 minute(s)
Unrestricted Sanitize Supported: False
'''
SSACLI_PARSING_TESTS = '''
MSCC SmartRAID 3162-8i in Slot 1 (RAID Mode)
Slot: 1
Controller Mode: RAID Mode
Internal Drive Cage at Port 1I, Box 1, OK
Drive Bays: 4
Port: 1I
Box: 1
Physical Drives
physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS HDD, 900 GB, OK)
physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS HDD, 900 GB, OK)
Internal Drive Cage at Port 2I, Box 1, OK
Drive Bays: 4
Port: 2I
Box: 1
Physical Drives
physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS HDD, 900 GB, OK)
physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS HDD, 900 GB, OK)
Unassigned
physicaldrive 1I:1:4
Port: 1I
Box: 1
Bay: 4
Size: 900 GB
Interface Type: SAS
MSCC SmartRAID 3162-8i in Slot 2 (RAID Mode)
Slot: 2
Controller Mode: RAID Mode
PCI Address (Domain:Bus:Device.Function): 0000:0B:00.0
Array: H
Interface Type: SAS
Logical Drive: 8
Size: 838.3 GB
Status: OK
physicaldrive 2I:2:8
Port: 2I
Box: 2
Bay: 8
Size: 900 GB
Interface Type: SAS
MSCC SmartRAID 3162-8i in Slot 3 (RAID Mode)
Slot: 3
Controller Mode: RAID Mode
Intel RSTe SATA in Slot 0 (Embedded) (RAID Mode)
Bus Interface: PCI
Slot: 0
'''
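
# ---------------------------------------------------------------------------
# The constants above are verbatim ssacli/hpssacli output captures kept as
# parsing fixtures (including deliberately malformed values such as bad
# sizes). Purely as an illustration -- a minimal sketch, not part of the
# code under test, and the helper name is hypothetical -- the "Key: Value"
# stanzas can be collected per physical drive like this:
def _parse_physical_drives(output):
    """Return a list of {field: value} dicts, one per physicaldrive stanza."""
    drives = []
    current = None
    for raw in output.splitlines():
        line = raw.strip()
        if line.startswith('physicaldrive') and '(' not in line:
            # Stanza headers look like "physicaldrive CN1:1:1"; the summary
            # lines under "Physical Drives" carry a parenthesised suffix and
            # are skipped.
            current = {'id': line.split()[1]}
            drives.append(current)
        elif line.startswith(('SEP ', 'Enclosure', 'Expander')):
            current = None  # trailing SEP/enclosure sections end the stanza
        elif current is not None and ': ' in line:
            key, _, value = line.partition(': ')
            current[key] = value
    return drives

# Example: the seven unassigned drives in NO_DRIVES_HPSSA_7_DISKS report
# sizes ['199 GB', '200 GB', '600 GB', '599 GB', '598 GB', '500 GB',
# '500 GB'] via [d['Size'] for d in _parse_physical_drives(NO_DRIVES_HPSSA_7_DISKS)].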
| 31.501445 | 79 | 0.631848 | 14,726 | 119,926 | 5.139277 | 0.039997 | 0.033152 | 0.03746 | 0.017296 | 0.971367 | 0.966848 | 0.964654 | 0.96258 | 0.960122 | 0.958074 | 0 | 0.110475 | 0.321748 | 119,926 | 3,806 | 80 | 31.509721 | 0.819953 | 0.005754 | 0 | 0.969714 | 0 | 0.016571 | 0.992023 | 0.017556 | 0 | 0 | 0.000268 | 0 | 0 | 1 | 0 | false | 0.005143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5d48fdc9bbde418337a26ae469346e51c8ce8102 | 106257 | py | Python | applications/3DimViewer/python/VPLSwig/Image/ImageFilters.py | SindenDev/3dimviewer | e23a3147edc35034ef4b75eae9ccdcbc7192b1a1 | ["Apache-2.0"] | 6 | 2020-04-14T16:10:55.000Z | 2021-05-21T07:13:55.000Z | applications/3DimViewer/python/VPLSwig/Image/ImageFilters.py | SindenDev/3dimviewer | e23a3147edc35034ef4b75eae9ccdcbc7192b1a1 | ["Apache-2.0"] | null | null | null | applications/3DimViewer/python/VPLSwig/Image/ImageFilters.py | SindenDev/3dimviewer | e23a3147edc35034ef4b75eae9ccdcbc7192b1a1 | ["Apache-2.0"] | 2 | 2020-07-24T16:25:38.000Z | 2021-01-19T09:23:18.000Z |
# This file was automatically generated by SWIG (http://www.swig.org).
# Version 3.0.10
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info as _swig_python_version_info
if _swig_python_version_info >= (2, 7, 0):
def swig_import_helper():
import importlib
pkg = __name__.rpartition('.')[0]
mname = '.'.join((pkg, '_ImageFilters')).lstrip('.')
try:
return importlib.import_module(mname)
except ImportError:
return importlib.import_module('_ImageFilters')
_ImageFilters = swig_import_helper()
del swig_import_helper
elif _swig_python_version_info >= (2, 6, 0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('_ImageFilters', [dirname(__file__)])
except ImportError:
import _ImageFilters
return _ImageFilters
if fp is not None:
try:
_mod = imp.load_module('_ImageFilters', fp, pathname, description)
finally:
fp.close()
return _mod
_ImageFilters = swig_import_helper()
del swig_import_helper
else:
import _ImageFilters
del _swig_python_version_info
try:
_swig_property = property
except NameError:
pass # Python < 2.2 doesn't have 'property'.
try:
import builtins as __builtin__
except ImportError:
import __builtin__
def _swig_setattr_nondynamic(self, class_type, name, value, static=1):
if (name == "thisown"):
return self.this.own(value)
if (name == "this"):
if type(value).__name__ == 'SwigPyObject':
self.__dict__[name] = value
return
method = class_type.__swig_setmethods__.get(name, None)
if method:
return method(self, value)
if (not static):
if _newclass:
object.__setattr__(self, name, value)
else:
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self, class_type, name, value):
return _swig_setattr_nondynamic(self, class_type, name, value, 0)
def _swig_getattr(self, class_type, name):
if (name == "thisown"):
return self.this.own()
method = class_type.__swig_getmethods__.get(name, None)
if method:
return method(self)
raise AttributeError("'%s' object has no attribute '%s'" % (class_type.__name__, name))
def _swig_repr(self):
try:
strthis = "proxy of " + self.this.__repr__()
except __builtin__.Exception:
strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
try:
_object = object
_newclass = 1
except __builtin__.Exception:
class _object:
pass
_newclass = 0
try:
import weakref
weakref_proxy = weakref.proxy
except __builtin__.Exception:
weakref_proxy = lambda x: x
import VPLSwig.Image.Image
import VPLSwig.Core.Core
import VPLSwig.Core.Geometry
class swig_imageFilter_Image8(_object):
"""Proxy of C++ vpl::img::CImageFilter<(vpl::img::CImage<(vpl::img::tPixel8,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_imageFilter_Image8, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_imageFilter_Image8, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_imageFilter_Image8_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_imageFilter_Image8"""
if self.__class__ == swig_imageFilter_Image8:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_imageFilter_Image8(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_imageFilter_Image8
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_imageFilter_Image8___call__(self, SrcImage, DstImage)
def getDX(self):
"""getDX(self) -> double"""
return _ImageFilters.swig_imageFilter_Image8_getDX(self)
def getDY(self):
"""getDY(self) -> double"""
return _ImageFilters.swig_imageFilter_Image8_getDY(self)
def setPixel(self, dx, dy):
"""setPixel(self, dx, dy)"""
return _ImageFilters.swig_imageFilter_Image8_setPixel(self, dx, dy)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_imageFilter_Image8(self)
return weakref_proxy(self)
swig_imageFilter_Image8_swigregister = _ImageFilters.swig_imageFilter_Image8_swigregister
swig_imageFilter_Image8_swigregister(swig_imageFilter_Image8)
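# The proxy classes that follow repeat the generated pattern established by
# swig_imageFilter_Image8 above -- __call__(SrcImage, DstImage) -> bool runs
# the wrapped C++ filter, with getDX()/getDY() accessors and a
# setPixel(dx, dy) setter -- specialised once per VPL pixel type
# (8/16/32-bit integer, float, density, RGBA and complex pixels).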
class swig_imageFilter_Image16(_object):
"""Proxy of C++ vpl::img::CImageFilter<(vpl::img::CImage<(vpl::img::tPixel16,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_imageFilter_Image16, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_imageFilter_Image16, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_imageFilter_Image16_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_imageFilter_Image16"""
if self.__class__ == swig_imageFilter_Image16:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_imageFilter_Image16(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_imageFilter_Image16
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_imageFilter_Image16___call__(self, SrcImage, DstImage)
def getDX(self):
"""getDX(self) -> double"""
return _ImageFilters.swig_imageFilter_Image16_getDX(self)
def getDY(self):
"""getDY(self) -> double"""
return _ImageFilters.swig_imageFilter_Image16_getDY(self)
def setPixel(self, dx, dy):
"""setPixel(self, dx, dy)"""
return _ImageFilters.swig_imageFilter_Image16_setPixel(self, dx, dy)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_imageFilter_Image16(self)
return weakref_proxy(self)
swig_imageFilter_Image16_swigregister = _ImageFilters.swig_imageFilter_Image16_swigregister
swig_imageFilter_Image16_swigregister(swig_imageFilter_Image16)
class swig_imageFilter_Image32(_object):
"""Proxy of C++ vpl::img::CImageFilter<(vpl::img::CImage<(vpl::img::tPixel32,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_imageFilter_Image32, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_imageFilter_Image32, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_imageFilter_Image32_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_imageFilter_Image32"""
if self.__class__ == swig_imageFilter_Image32:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_imageFilter_Image32(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_imageFilter_Image32
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_imageFilter_Image32___call__(self, SrcImage, DstImage)
def getDX(self):
"""getDX(self) -> double"""
return _ImageFilters.swig_imageFilter_Image32_getDX(self)
def getDY(self):
"""getDY(self) -> double"""
return _ImageFilters.swig_imageFilter_Image32_getDY(self)
def setPixel(self, dx, dy):
"""setPixel(self, dx, dy)"""
return _ImageFilters.swig_imageFilter_Image32_setPixel(self, dx, dy)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_imageFilter_Image32(self)
return weakref_proxy(self)
swig_imageFilter_Image32_swigregister = _ImageFilters.swig_imageFilter_Image32_swigregister
swig_imageFilter_Image32_swigregister(swig_imageFilter_Image32)
class swig_imageFilter_FImage(_object):
"""Proxy of C++ vpl::img::CImageFilter<(vpl::img::CImage<(vpl::img::tFloatPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_imageFilter_FImage, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_imageFilter_FImage, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_imageFilter_FImage_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_imageFilter_FImage"""
if self.__class__ == swig_imageFilter_FImage:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_imageFilter_FImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_imageFilter_FImage
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_imageFilter_FImage___call__(self, SrcImage, DstImage)
def getDX(self):
"""getDX(self) -> double"""
return _ImageFilters.swig_imageFilter_FImage_getDX(self)
def getDY(self):
"""getDY(self) -> double"""
return _ImageFilters.swig_imageFilter_FImage_getDY(self)
def setPixel(self, dx, dy):
"""setPixel(self, dx, dy)"""
return _ImageFilters.swig_imageFilter_FImage_setPixel(self, dx, dy)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_imageFilter_FImage(self)
return weakref_proxy(self)
swig_imageFilter_FImage_swigregister = _ImageFilters.swig_imageFilter_FImage_swigregister
swig_imageFilter_FImage_swigregister(swig_imageFilter_FImage)
class swig_imageFilter_DImage(_object):
"""Proxy of C++ vpl::img::CImageFilter<(vpl::img::CImage<(vpl::img::tDensityPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_imageFilter_DImage, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_imageFilter_DImage, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_imageFilter_DImage_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_imageFilter_DImage"""
if self.__class__ == swig_imageFilter_DImage:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_imageFilter_DImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_imageFilter_DImage
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_imageFilter_DImage___call__(self, SrcImage, DstImage)
def getDX(self):
"""getDX(self) -> double"""
return _ImageFilters.swig_imageFilter_DImage_getDX(self)
def getDY(self):
"""getDY(self) -> double"""
return _ImageFilters.swig_imageFilter_DImage_getDY(self)
def setPixel(self, dx, dy):
"""setPixel(self, dx, dy)"""
return _ImageFilters.swig_imageFilter_DImage_setPixel(self, dx, dy)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_imageFilter_DImage(self)
return weakref_proxy(self)
swig_imageFilter_DImage_swigregister = _ImageFilters.swig_imageFilter_DImage_swigregister
swig_imageFilter_DImage_swigregister(swig_imageFilter_DImage)
class swig_imageFilter_RGBAImage(_object):
"""Proxy of C++ vpl::img::CImageFilter<(vpl::img::CImage<(vpl::img::tRGBAPixel)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_imageFilter_RGBAImage, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_imageFilter_RGBAImage, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_imageFilter_RGBAImage_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_imageFilter_RGBAImage"""
if self.__class__ == swig_imageFilter_RGBAImage:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_imageFilter_RGBAImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_imageFilter_RGBAImage
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_imageFilter_RGBAImage___call__(self, SrcImage, DstImage)
def getDX(self):
"""getDX(self) -> double"""
return _ImageFilters.swig_imageFilter_RGBAImage_getDX(self)
def getDY(self):
"""getDY(self) -> double"""
return _ImageFilters.swig_imageFilter_RGBAImage_getDY(self)
def setPixel(self, dx, dy):
"""setPixel(self, dx, dy)"""
return _ImageFilters.swig_imageFilter_RGBAImage_setPixel(self, dx, dy)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_imageFilter_RGBAImage(self)
return weakref_proxy(self)
swig_imageFilter_RGBAImage_swigregister = _ImageFilters.swig_imageFilter_RGBAImage_swigregister
swig_imageFilter_RGBAImage_swigregister(swig_imageFilter_RGBAImage)
class swig_imageFilter_ComplexImage(_object):
"""Proxy of C++ vpl::img::CImageFilter<(vpl::img::CImage<(vpl::img::tComplexPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_imageFilter_ComplexImage, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_imageFilter_ComplexImage, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_imageFilter_ComplexImage_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_imageFilter_ComplexImage"""
if self.__class__ == swig_imageFilter_ComplexImage:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_imageFilter_ComplexImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_imageFilter_ComplexImage
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_imageFilter_ComplexImage___call__(self, SrcImage, DstImage)
def getDX(self):
"""getDX(self) -> double"""
return _ImageFilters.swig_imageFilter_ComplexImage_getDX(self)
def getDY(self):
"""getDY(self) -> double"""
return _ImageFilters.swig_imageFilter_ComplexImage_getDY(self)
def setPixel(self, dx, dy):
"""setPixel(self, dx, dy)"""
return _ImageFilters.swig_imageFilter_ComplexImage_setPixel(self, dx, dy)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_imageFilter_ComplexImage(self)
return weakref_proxy(self)
swig_imageFilter_ComplexImage_swigregister = _ImageFilters.swig_imageFilter_ComplexImage_swigregister
swig_imageFilter_ComplexImage_swigregister(swig_imageFilter_ComplexImage)
class swig_separableImageFilter_Image8(swig_imageFilter_Image8):
"""Proxy of C++ vpl::img::CSeparableImageFilter<(vpl::img::CImage<(vpl::img::tPixel8,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image8]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_separableImageFilter_Image8, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image8]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, swig_separableImageFilter_Image8, name)
__repr__ = _swig_repr
def __init__(self):
"""__init__(self) -> swig_separableImageFilter_Image8"""
if self.__class__ == swig_separableImageFilter_Image8:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_separableImageFilter_Image8(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_separableImageFilter_Image8
__del__ = lambda self: None
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_separableImageFilter_Image8(self)
return weakref_proxy(self)
swig_separableImageFilter_Image8_swigregister = _ImageFilters.swig_separableImageFilter_Image8_swigregister
swig_separableImageFilter_Image8_swigregister(swig_separableImageFilter_Image8)
class swig_separableImageFilter_Image16(swig_imageFilter_Image16):
"""Proxy of C++ vpl::img::CSeparableImageFilter<(vpl::img::CImage<(vpl::img::tPixel16,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image16]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_separableImageFilter_Image16, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image16]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, swig_separableImageFilter_Image16, name)
__repr__ = _swig_repr
def __init__(self):
"""__init__(self) -> swig_separableImageFilter_Image16"""
if self.__class__ == swig_separableImageFilter_Image16:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_separableImageFilter_Image16(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_separableImageFilter_Image16
__del__ = lambda self: None
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_separableImageFilter_Image16(self)
return weakref_proxy(self)
swig_separableImageFilter_Image16_swigregister = _ImageFilters.swig_separableImageFilter_Image16_swigregister
swig_separableImageFilter_Image16_swigregister(swig_separableImageFilter_Image16)
class swig_separableImageFilter_Image32(swig_imageFilter_Image32):
"""Proxy of C++ vpl::img::CSeparableImageFilter<(vpl::img::CImage<(vpl::img::tPixel32,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image32]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_separableImageFilter_Image32, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image32]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, swig_separableImageFilter_Image32, name)
__repr__ = _swig_repr
def __init__(self):
"""__init__(self) -> swig_separableImageFilter_Image32"""
if self.__class__ == swig_separableImageFilter_Image32:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_separableImageFilter_Image32(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_separableImageFilter_Image32
__del__ = lambda self: None
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_separableImageFilter_Image32(self)
return weakref_proxy(self)
swig_separableImageFilter_Image32_swigregister = _ImageFilters.swig_separableImageFilter_Image32_swigregister
swig_separableImageFilter_Image32_swigregister(swig_separableImageFilter_Image32)
class swig_separableImageFilter_FImage(swig_imageFilter_FImage):
"""Proxy of C++ vpl::img::CSeparableImageFilter<(vpl::img::CImage<(vpl::img::tFloatPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_FImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_separableImageFilter_FImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_FImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, swig_separableImageFilter_FImage, name)
__repr__ = _swig_repr
def __init__(self):
"""__init__(self) -> swig_separableImageFilter_FImage"""
if self.__class__ == swig_separableImageFilter_FImage:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_separableImageFilter_FImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_separableImageFilter_FImage
__del__ = lambda self: None
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_separableImageFilter_FImage(self)
return weakref_proxy(self)
swig_separableImageFilter_FImage_swigregister = _ImageFilters.swig_separableImageFilter_FImage_swigregister
swig_separableImageFilter_FImage_swigregister(swig_separableImageFilter_FImage)
class swig_separableImageFilter_DImage(swig_imageFilter_DImage):
"""Proxy of C++ vpl::img::CSeparableImageFilter<(vpl::img::CImage<(vpl::img::tDensityPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_DImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_separableImageFilter_DImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_DImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, swig_separableImageFilter_DImage, name)
__repr__ = _swig_repr
def __init__(self):
"""__init__(self) -> swig_separableImageFilter_DImage"""
if self.__class__ == swig_separableImageFilter_DImage:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_separableImageFilter_DImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_separableImageFilter_DImage
__del__ = lambda self: None
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_separableImageFilter_DImage(self)
return weakref_proxy(self)
swig_separableImageFilter_DImage_swigregister = _ImageFilters.swig_separableImageFilter_DImage_swigregister
swig_separableImageFilter_DImage_swigregister(swig_separableImageFilter_DImage)
class swig_separableImageFilter_RGBAImage(swig_imageFilter_RGBAImage):
"""Proxy of C++ vpl::img::CSeparableImageFilter<(vpl::img::CImage<(vpl::img::tRGBAPixel)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_RGBAImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_separableImageFilter_RGBAImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_RGBAImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, swig_separableImageFilter_RGBAImage, name)
__repr__ = _swig_repr
def __init__(self):
"""__init__(self) -> swig_separableImageFilter_RGBAImage"""
if self.__class__ == swig_separableImageFilter_RGBAImage:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_separableImageFilter_RGBAImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_separableImageFilter_RGBAImage
__del__ = lambda self: None
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_separableImageFilter_RGBAImage(self)
return weakref_proxy(self)
swig_separableImageFilter_RGBAImage_swigregister = _ImageFilters.swig_separableImageFilter_RGBAImage_swigregister
swig_separableImageFilter_RGBAImage_swigregister(swig_separableImageFilter_RGBAImage)
class swig_separableImageFilter_ComplexImage(swig_imageFilter_ComplexImage):
"""Proxy of C++ vpl::img::CSeparableImageFilter<(vpl::img::CImage<(vpl::img::tComplexPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_ComplexImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_separableImageFilter_ComplexImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_ComplexImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, swig_separableImageFilter_ComplexImage, name)
__repr__ = _swig_repr
def __init__(self):
"""__init__(self) -> swig_separableImageFilter_ComplexImage"""
if self.__class__ == swig_separableImageFilter_ComplexImage:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_separableImageFilter_ComplexImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_separableImageFilter_ComplexImage
__del__ = lambda self: None
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_separableImageFilter_ComplexImage(self)
return weakref_proxy(self)
swig_separableImageFilter_ComplexImage_swigregister = _ImageFilters.swig_separableImageFilter_ComplexImage_swigregister
swig_separableImageFilter_ComplexImage_swigregister(swig_separableImageFilter_ComplexImage)
class CAvg3Filter_Image8(swig_imageFilter_Image8):
"""Proxy of C++ vpl::img::CAvg3Filter<(vpl::img::CImage<(vpl::img::tPixel8,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image8]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg3Filter_Image8, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image8]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg3Filter_Image8, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg3Filter_Image8_DENOM
def __init__(self):
"""__init__(self) -> CAvg3Filter_Image8"""
if self.__class__ == CAvg3Filter_Image8:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg3Filter_Image8(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg3Filter_Image8___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg3Filter< vpl::img::CImage< unsigned __int8,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg3Filter_Image8_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg3Filter_Image8_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg3Filter_Image8(self)
return weakref_proxy(self)
CAvg3Filter_Image8_swigregister = _ImageFilters.CAvg3Filter_Image8_swigregister
CAvg3Filter_Image8_swigregister(CAvg3Filter_Image8)
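# Usage sketch for the averaging filters (only the calls shown here are
# defined by this module; constructing the CImage operands is left to
# VPLSwig.Image.Image and is not shown):
#
#   avg = CAvg3Filter_Image8()                # 3x3 averaging filter proxy
#   ok = avg(src_image, dst_image)            # __call__ -> bool
#   value = avg.getResponse(src_image, x, y)  # filter response at (x, y)
#   size = avg.getSize()                      # kernel size as vpl::tSize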
class CAvg3Filter_Image16(swig_imageFilter_Image16):
"""Proxy of C++ vpl::img::CAvg3Filter<(vpl::img::CImage<(vpl::img::tPixel16,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image16]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg3Filter_Image16, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image16]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg3Filter_Image16, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg3Filter_Image16_DENOM
def __init__(self):
"""__init__(self) -> CAvg3Filter_Image16"""
if self.__class__ == CAvg3Filter_Image16:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg3Filter_Image16(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg3Filter_Image16___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg3Filter< vpl::img::CImage< unsigned __int16,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg3Filter_Image16_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg3Filter_Image16_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg3Filter_Image16(self)
return weakref_proxy(self)
CAvg3Filter_Image16_swigregister = _ImageFilters.CAvg3Filter_Image16_swigregister
CAvg3Filter_Image16_swigregister(CAvg3Filter_Image16)
class CAvg3Filter_Image32(swig_imageFilter_Image32):
"""Proxy of C++ vpl::img::CAvg3Filter<(vpl::img::CImage<(vpl::img::tPixel32,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image32]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg3Filter_Image32, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image32]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg3Filter_Image32, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg3Filter_Image32_DENOM
def __init__(self):
"""__init__(self) -> CAvg3Filter_Image32"""
if self.__class__ == CAvg3Filter_Image32:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg3Filter_Image32(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg3Filter_Image32___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg3Filter< vpl::img::CImage< unsigned __int32,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg3Filter_Image32_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg3Filter_Image32_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg3Filter_Image32(self)
return weakref_proxy(self)
CAvg3Filter_Image32_swigregister = _ImageFilters.CAvg3Filter_Image32_swigregister
CAvg3Filter_Image32_swigregister(CAvg3Filter_Image32)
class CAvg3Filter_FImage(swig_imageFilter_FImage):
"""Proxy of C++ vpl::img::CAvg3Filter<(vpl::img::CImage<(vpl::img::tFloatPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_FImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg3Filter_FImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_FImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg3Filter_FImage, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg3Filter_FImage_DENOM
def __init__(self):
"""__init__(self) -> CAvg3Filter_FImage"""
if self.__class__ == CAvg3Filter_FImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg3Filter_FImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg3Filter_FImage___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg3Filter< vpl::img::CImage< float,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg3Filter_FImage_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg3Filter_FImage_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg3Filter_FImage(self)
return weakref_proxy(self)
CAvg3Filter_FImage_swigregister = _ImageFilters.CAvg3Filter_FImage_swigregister
CAvg3Filter_FImage_swigregister(CAvg3Filter_FImage)
class CAvg3Filter_DImage(swig_imageFilter_DImage):
"""Proxy of C++ vpl::img::CAvg3Filter<(vpl::img::CImage<(vpl::img::tDensityPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_DImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg3Filter_DImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_DImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg3Filter_DImage, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg3Filter_DImage_DENOM
def __init__(self):
"""__init__(self) -> CAvg3Filter_DImage"""
if self.__class__ == CAvg3Filter_DImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg3Filter_DImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg3Filter_DImage___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg3Filter< vpl::img::CImage< __int16,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg3Filter_DImage_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg3Filter_DImage_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg3Filter_DImage(self)
return weakref_proxy(self)
CAvg3Filter_DImage_swigregister = _ImageFilters.CAvg3Filter_DImage_swigregister
CAvg3Filter_DImage_swigregister(CAvg3Filter_DImage)
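# ---------------------------------------------------------------------------
# Usage sketch (hand-written note, not SWIG output): the CAvg3Filter_*
# proxies above are parameterless 3x3 averaging kernels. __call__ reads
# SrcImage and writes the filtered result into DstImage, returning a bool
# success flag; getResponse evaluates the kernel at a single (x, y) sample,
# getSize reports the kernel size, and DENOM is the integer normalization
# denominator of the kernel. The CAvg5Filter_* and CAvg7Filter_* families
# below expose the identical interface for 5x5 and 7x7 kernels. The image
# arguments are assumed to be matching vpl image proxies allocated by the
# caller from the companion image module; this helper is purely illustrative.
def _example_average_dimage(src, dst):
    """Illustrative: smooth `src` into `dst` with a 3x3 mean filter."""
    filt = CAvg3Filter_DImage()
    ok = filt(src, dst)          # equivalent to filt.__call__(src, dst)
    size = filt.getSize()        # kernel edge length
    return ok, size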
class CAvg5Filter_Image8(swig_imageFilter_Image8):
"""Proxy of C++ vpl::img::CAvg5Filter<(vpl::img::CImage<(vpl::img::tPixel8,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image8]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg5Filter_Image8, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image8]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg5Filter_Image8, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg5Filter_Image8_DENOM
def __init__(self):
"""__init__(self) -> CAvg5Filter_Image8"""
if self.__class__ == CAvg5Filter_Image8:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg5Filter_Image8(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg5Filter_Image8___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg5Filter< vpl::img::CImage< unsigned __int8,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg5Filter_Image8_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg5Filter_Image8_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg5Filter_Image8(self)
return weakref_proxy(self)
CAvg5Filter_Image8_swigregister = _ImageFilters.CAvg5Filter_Image8_swigregister
CAvg5Filter_Image8_swigregister(CAvg5Filter_Image8)
class CAvg5Filter_Image16(swig_imageFilter_Image16):
"""Proxy of C++ vpl::img::CAvg5Filter<(vpl::img::CImage<(vpl::img::tPixel16,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image16]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg5Filter_Image16, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image16]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg5Filter_Image16, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg5Filter_Image16_DENOM
def __init__(self):
"""__init__(self) -> CAvg5Filter_Image16"""
if self.__class__ == CAvg5Filter_Image16:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg5Filter_Image16(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg5Filter_Image16___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg5Filter< vpl::img::CImage< unsigned __int16,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg5Filter_Image16_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg5Filter_Image16_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg5Filter_Image16(self)
return weakref_proxy(self)
CAvg5Filter_Image16_swigregister = _ImageFilters.CAvg5Filter_Image16_swigregister
CAvg5Filter_Image16_swigregister(CAvg5Filter_Image16)
class CAvg5Filter_Image32(swig_imageFilter_Image32):
"""Proxy of C++ vpl::img::CAvg5Filter<(vpl::img::CImage<(vpl::img::tPixel32,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image32]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg5Filter_Image32, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image32]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg5Filter_Image32, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg5Filter_Image32_DENOM
def __init__(self):
"""__init__(self) -> CAvg5Filter_Image32"""
if self.__class__ == CAvg5Filter_Image32:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg5Filter_Image32(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg5Filter_Image32___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg5Filter< vpl::img::CImage< unsigned __int32,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg5Filter_Image32_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg5Filter_Image32_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg5Filter_Image32(self)
return weakref_proxy(self)
CAvg5Filter_Image32_swigregister = _ImageFilters.CAvg5Filter_Image32_swigregister
CAvg5Filter_Image32_swigregister(CAvg5Filter_Image32)
class CAvg5Filter_FImage(swig_imageFilter_FImage):
"""Proxy of C++ vpl::img::CAvg5Filter<(vpl::img::CImage<(vpl::img::tFloatPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_FImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg5Filter_FImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_FImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg5Filter_FImage, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg5Filter_FImage_DENOM
def __init__(self):
"""__init__(self) -> CAvg5Filter_FImage"""
if self.__class__ == CAvg5Filter_FImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg5Filter_FImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg5Filter_FImage___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg5Filter< vpl::img::CImage< float,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg5Filter_FImage_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg5Filter_FImage_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg5Filter_FImage(self)
return weakref_proxy(self)
CAvg5Filter_FImage_swigregister = _ImageFilters.CAvg5Filter_FImage_swigregister
CAvg5Filter_FImage_swigregister(CAvg5Filter_FImage)
class CAvg5Filter_DImage(swig_imageFilter_DImage):
"""Proxy of C++ vpl::img::CAvg5Filter<(vpl::img::CImage<(vpl::img::tDensityPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_DImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg5Filter_DImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_DImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg5Filter_DImage, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg5Filter_DImage_DENOM
def __init__(self):
"""__init__(self) -> CAvg5Filter_DImage"""
if self.__class__ == CAvg5Filter_DImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg5Filter_DImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg5Filter_DImage___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg5Filter< vpl::img::CImage< __int16,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg5Filter_DImage_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg5Filter_DImage_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg5Filter_DImage(self)
return weakref_proxy(self)
CAvg5Filter_DImage_swigregister = _ImageFilters.CAvg5Filter_DImage_swigregister
CAvg5Filter_DImage_swigregister(CAvg5Filter_DImage)
class CAvg7Filter_Image8(swig_imageFilter_Image8):
"""Proxy of C++ vpl::img::CAvg7Filter<(vpl::img::CImage<(vpl::img::tPixel8,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image8]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg7Filter_Image8, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image8]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg7Filter_Image8, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg7Filter_Image8_DENOM
def __init__(self):
"""__init__(self) -> CAvg7Filter_Image8"""
if self.__class__ == CAvg7Filter_Image8:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg7Filter_Image8(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg7Filter_Image8___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg7Filter< vpl::img::CImage< unsigned __int8,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg7Filter_Image8_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg7Filter_Image8_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg7Filter_Image8(self)
return weakref_proxy(self)
CAvg7Filter_Image8_swigregister = _ImageFilters.CAvg7Filter_Image8_swigregister
CAvg7Filter_Image8_swigregister(CAvg7Filter_Image8)
class CAvg7Filter_Image16(swig_imageFilter_Image16):
"""Proxy of C++ vpl::img::CAvg7Filter<(vpl::img::CImage<(vpl::img::tPixel16,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image16]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg7Filter_Image16, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image16]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg7Filter_Image16, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg7Filter_Image16_DENOM
def __init__(self):
"""__init__(self) -> CAvg7Filter_Image16"""
if self.__class__ == CAvg7Filter_Image16:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg7Filter_Image16(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg7Filter_Image16___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg7Filter< vpl::img::CImage< unsigned __int16,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg7Filter_Image16_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg7Filter_Image16_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg7Filter_Image16(self)
return weakref_proxy(self)
CAvg7Filter_Image16_swigregister = _ImageFilters.CAvg7Filter_Image16_swigregister
CAvg7Filter_Image16_swigregister(CAvg7Filter_Image16)
class CAvg7Filter_Image32(swig_imageFilter_Image32):
"""Proxy of C++ vpl::img::CAvg7Filter<(vpl::img::CImage<(vpl::img::tPixel32,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_Image32]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg7Filter_Image32, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_Image32]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg7Filter_Image32, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg7Filter_Image32_DENOM
def __init__(self):
"""__init__(self) -> CAvg7Filter_Image32"""
if self.__class__ == CAvg7Filter_Image32:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg7Filter_Image32(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg7Filter_Image32___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg7Filter< vpl::img::CImage< unsigned __int32,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg7Filter_Image32_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg7Filter_Image32_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg7Filter_Image32(self)
return weakref_proxy(self)
CAvg7Filter_Image32_swigregister = _ImageFilters.CAvg7Filter_Image32_swigregister
CAvg7Filter_Image32_swigregister(CAvg7Filter_Image32)
class CAvg7Filter_FImage(swig_imageFilter_FImage):
"""Proxy of C++ vpl::img::CAvg7Filter<(vpl::img::CImage<(vpl::img::tFloatPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_FImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg7Filter_FImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_FImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg7Filter_FImage, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg7Filter_FImage_DENOM
def __init__(self):
"""__init__(self) -> CAvg7Filter_FImage"""
if self.__class__ == CAvg7Filter_FImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg7Filter_FImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg7Filter_FImage___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg7Filter< vpl::img::CImage< float,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg7Filter_FImage_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg7Filter_FImage_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg7Filter_FImage(self)
return weakref_proxy(self)
CAvg7Filter_FImage_swigregister = _ImageFilters.CAvg7Filter_FImage_swigregister
CAvg7Filter_FImage_swigregister(CAvg7Filter_FImage)
class CAvg7Filter_DImage(swig_imageFilter_DImage):
"""Proxy of C++ vpl::img::CAvg7Filter<(vpl::img::CImage<(vpl::img::tDensityPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_imageFilter_DImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CAvg7Filter_DImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_imageFilter_DImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CAvg7Filter_DImage, name)
__repr__ = _swig_repr
DENOM = _ImageFilters.CAvg7Filter_DImage_DENOM
def __init__(self):
"""__init__(self) -> CAvg7Filter_DImage"""
if self.__class__ == CAvg7Filter_DImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CAvg7Filter_DImage(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CAvg7Filter_DImage___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CAvg7Filter< vpl::img::CImage< __int16,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CAvg7Filter_DImage_getResponse(self, SrcImage, x, y)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CAvg7Filter_DImage_getSize(self)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CAvg7Filter_DImage(self)
return weakref_proxy(self)
CAvg7Filter_DImage_swigregister = _ImageFilters.CAvg7Filter_DImage_swigregister
CAvg7Filter_DImage_swigregister(CAvg7Filter_DImage)
class CGaussFilter_Image8(swig_separableImageFilter_Image8):
"""Proxy of C++ vpl::img::CGaussFilter<(vpl::img::CImage<(vpl::img::tPixel8,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_separableImageFilter_Image8]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CGaussFilter_Image8, name, value)
__swig_getmethods__ = {}
for _s in [swig_separableImageFilter_Image8]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CGaussFilter_Image8, name)
__repr__ = _swig_repr
def __init__(self, *args):
"""
__init__(self, dSigma) -> CGaussFilter_Image8
__init__(self, Size) -> CGaussFilter_Image8
"""
if self.__class__ == CGaussFilter_Image8:
_self = None
else:
_self = self
this = _ImageFilters.new_CGaussFilter_Image8(_self, *args)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CGaussFilter_Image8
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CGaussFilter_Image8___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CGaussFilter< vpl::img::CImage< unsigned __int8,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CGaussFilter_Image8_getResponse(self, SrcImage, x, y)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CGaussFilter_Image8_getSigma(self)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_Image8_getSize(self)
def resize(self, Size):
"""resize(self, Size)"""
return _ImageFilters.CGaussFilter_Image8_resize(self, Size)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CGaussFilter_Image8_setSigma(self, dSigma)
def sigma2Size(dSigma):
"""sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_Image8_sigma2Size(dSigma)
sigma2Size = staticmethod(sigma2Size)
def size2Sigma(Size):
"""size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_Image8_size2Sigma(Size)
size2Sigma = staticmethod(size2Sigma)
def getGaussianFuncValue(dX, dY, dSigma):
"""getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_Image8_getGaussianFuncValue(dX, dY, dSigma)
getGaussianFuncValue = staticmethod(getGaussianFuncValue)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CGaussFilter_Image8(self)
return weakref_proxy(self)
CGaussFilter_Image8_swigregister = _ImageFilters.CGaussFilter_Image8_swigregister
CGaussFilter_Image8_swigregister(CGaussFilter_Image8)
def CGaussFilter_Image8_sigma2Size(dSigma):
"""CGaussFilter_Image8_sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_Image8_sigma2Size(dSigma)
def CGaussFilter_Image8_size2Sigma(Size):
"""CGaussFilter_Image8_size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_Image8_size2Sigma(Size)
def CGaussFilter_Image8_getGaussianFuncValue(dX, dY, dSigma):
"""CGaussFilter_Image8_getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_Image8_getGaussianFuncValue(dX, dY, dSigma)
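# Note (hand-written): the three module-level functions above mirror the
# static members of CGaussFilter_Image8. sigma2Size maps a Gaussian sigma to
# a corresponding odd kernel size, size2Sigma performs the inverse mapping,
# and getGaussianFuncValue evaluates the 2-D Gaussian at (dX, dY) for a given
# sigma. The exact rounding rule inside sigma2Size is defined by the C++
# implementation, so the round trip sketched below is only approximate.
def _example_sigma_size_roundtrip(sigma=1.5):
    """Illustrative: relate sigma and kernel size for the Gauss filter."""
    size = CGaussFilter_Image8_sigma2Size(sigma)   # sigma -> kernel size
    back = CGaussFilter_Image8_size2Sigma(size)    # kernel size -> sigma
    peak = CGaussFilter_Image8_getGaussianFuncValue(0.0, 0.0, sigma)
    return size, back, peak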
class CGaussFilter_Image16(swig_separableImageFilter_Image16):
"""Proxy of C++ vpl::img::CGaussFilter<(vpl::img::CImage<(vpl::img::tPixel16,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_separableImageFilter_Image16]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CGaussFilter_Image16, name, value)
__swig_getmethods__ = {}
for _s in [swig_separableImageFilter_Image16]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CGaussFilter_Image16, name)
__repr__ = _swig_repr
def __init__(self, *args):
"""
__init__(self, dSigma) -> CGaussFilter_Image16
__init__(self, Size) -> CGaussFilter_Image16
"""
if self.__class__ == CGaussFilter_Image16:
_self = None
else:
_self = self
this = _ImageFilters.new_CGaussFilter_Image16(_self, *args)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CGaussFilter_Image16
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CGaussFilter_Image16___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CGaussFilter< vpl::img::CImage< unsigned __int16,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CGaussFilter_Image16_getResponse(self, SrcImage, x, y)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CGaussFilter_Image16_getSigma(self)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_Image16_getSize(self)
def resize(self, Size):
"""resize(self, Size)"""
return _ImageFilters.CGaussFilter_Image16_resize(self, Size)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CGaussFilter_Image16_setSigma(self, dSigma)
def sigma2Size(dSigma):
"""sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_Image16_sigma2Size(dSigma)
sigma2Size = staticmethod(sigma2Size)
def size2Sigma(Size):
"""size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_Image16_size2Sigma(Size)
size2Sigma = staticmethod(size2Sigma)
def getGaussianFuncValue(dX, dY, dSigma):
"""getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_Image16_getGaussianFuncValue(dX, dY, dSigma)
getGaussianFuncValue = staticmethod(getGaussianFuncValue)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CGaussFilter_Image16(self)
return weakref_proxy(self)
CGaussFilter_Image16_swigregister = _ImageFilters.CGaussFilter_Image16_swigregister
CGaussFilter_Image16_swigregister(CGaussFilter_Image16)
def CGaussFilter_Image16_sigma2Size(dSigma):
"""CGaussFilter_Image16_sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_Image16_sigma2Size(dSigma)
def CGaussFilter_Image16_size2Sigma(Size):
"""CGaussFilter_Image16_size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_Image16_size2Sigma(Size)
def CGaussFilter_Image16_getGaussianFuncValue(dX, dY, dSigma):
"""CGaussFilter_Image16_getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_Image16_getGaussianFuncValue(dX, dY, dSigma)
class CGaussFilter_Image32(swig_separableImageFilter_Image32):
"""Proxy of C++ vpl::img::CGaussFilter<(vpl::img::CImage<(vpl::img::tPixel32,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_separableImageFilter_Image32]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CGaussFilter_Image32, name, value)
__swig_getmethods__ = {}
for _s in [swig_separableImageFilter_Image32]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CGaussFilter_Image32, name)
__repr__ = _swig_repr
def __init__(self, *args):
"""
__init__(self, dSigma) -> CGaussFilter_Image32
__init__(self, Size) -> CGaussFilter_Image32
"""
if self.__class__ == CGaussFilter_Image32:
_self = None
else:
_self = self
this = _ImageFilters.new_CGaussFilter_Image32(_self, *args)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CGaussFilter_Image32
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CGaussFilter_Image32___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CGaussFilter< vpl::img::CImage< unsigned __int32,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CGaussFilter_Image32_getResponse(self, SrcImage, x, y)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CGaussFilter_Image32_getSigma(self)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_Image32_getSize(self)
def resize(self, Size):
"""resize(self, Size)"""
return _ImageFilters.CGaussFilter_Image32_resize(self, Size)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CGaussFilter_Image32_setSigma(self, dSigma)
def sigma2Size(dSigma):
"""sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_Image32_sigma2Size(dSigma)
sigma2Size = staticmethod(sigma2Size)
def size2Sigma(Size):
"""size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_Image32_size2Sigma(Size)
size2Sigma = staticmethod(size2Sigma)
def getGaussianFuncValue(dX, dY, dSigma):
"""getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_Image32_getGaussianFuncValue(dX, dY, dSigma)
getGaussianFuncValue = staticmethod(getGaussianFuncValue)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CGaussFilter_Image32(self)
return weakref_proxy(self)
CGaussFilter_Image32_swigregister = _ImageFilters.CGaussFilter_Image32_swigregister
CGaussFilter_Image32_swigregister(CGaussFilter_Image32)
def CGaussFilter_Image32_sigma2Size(dSigma):
"""CGaussFilter_Image32_sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_Image32_sigma2Size(dSigma)
def CGaussFilter_Image32_size2Sigma(Size):
"""CGaussFilter_Image32_size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_Image32_size2Sigma(Size)
def CGaussFilter_Image32_getGaussianFuncValue(dX, dY, dSigma):
"""CGaussFilter_Image32_getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_Image32_getGaussianFuncValue(dX, dY, dSigma)
class CGaussFilter_FImage(swig_separableImageFilter_FImage):
"""Proxy of C++ vpl::img::CGaussFilter<(vpl::img::CImage<(vpl::img::tFloatPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_separableImageFilter_FImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CGaussFilter_FImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_separableImageFilter_FImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CGaussFilter_FImage, name)
__repr__ = _swig_repr
def __init__(self, *args):
"""
__init__(self, dSigma) -> CGaussFilter_FImage
__init__(self, Size) -> CGaussFilter_FImage
"""
if self.__class__ == CGaussFilter_FImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CGaussFilter_FImage(_self, *args)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CGaussFilter_FImage
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CGaussFilter_FImage___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CGaussFilter< vpl::img::CImage< float,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CGaussFilter_FImage_getResponse(self, SrcImage, x, y)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CGaussFilter_FImage_getSigma(self)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_FImage_getSize(self)
def resize(self, Size):
"""resize(self, Size)"""
return _ImageFilters.CGaussFilter_FImage_resize(self, Size)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CGaussFilter_FImage_setSigma(self, dSigma)
def sigma2Size(dSigma):
"""sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_FImage_sigma2Size(dSigma)
sigma2Size = staticmethod(sigma2Size)
def size2Sigma(Size):
"""size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_FImage_size2Sigma(Size)
size2Sigma = staticmethod(size2Sigma)
def getGaussianFuncValue(dX, dY, dSigma):
"""getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_FImage_getGaussianFuncValue(dX, dY, dSigma)
getGaussianFuncValue = staticmethod(getGaussianFuncValue)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CGaussFilter_FImage(self)
return weakref_proxy(self)
CGaussFilter_FImage_swigregister = _ImageFilters.CGaussFilter_FImage_swigregister
CGaussFilter_FImage_swigregister(CGaussFilter_FImage)
def CGaussFilter_FImage_sigma2Size(dSigma):
"""CGaussFilter_FImage_sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_FImage_sigma2Size(dSigma)
def CGaussFilter_FImage_size2Sigma(Size):
"""CGaussFilter_FImage_size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_FImage_size2Sigma(Size)
def CGaussFilter_FImage_getGaussianFuncValue(dX, dY, dSigma):
"""CGaussFilter_FImage_getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_FImage_getGaussianFuncValue(dX, dY, dSigma)
class CGaussFilter_DImage(swig_separableImageFilter_DImage):
"""Proxy of C++ vpl::img::CGaussFilter<(vpl::img::CImage<(vpl::img::tDensityPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_separableImageFilter_DImage]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CGaussFilter_DImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_separableImageFilter_DImage]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CGaussFilter_DImage, name)
__repr__ = _swig_repr
def __init__(self, *args):
"""
__init__(self, dSigma) -> CGaussFilter_DImage
__init__(self, Size) -> CGaussFilter_DImage
"""
if self.__class__ == CGaussFilter_DImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CGaussFilter_DImage(_self, *args)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CGaussFilter_DImage
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CGaussFilter_DImage___call__(self, SrcImage, DstImage)
def getResponse(self, SrcImage, x, y):
"""getResponse(self, SrcImage, x, y) -> vpl::img::CGaussFilter< vpl::img::CImage< __int16,vpl::base::CRefData > >::tPixel"""
return _ImageFilters.CGaussFilter_DImage_getResponse(self, SrcImage, x, y)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CGaussFilter_DImage_getSigma(self)
def getSize(self):
"""getSize(self) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_DImage_getSize(self)
def resize(self, Size):
"""resize(self, Size)"""
return _ImageFilters.CGaussFilter_DImage_resize(self, Size)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CGaussFilter_DImage_setSigma(self, dSigma)
def sigma2Size(dSigma):
"""sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_DImage_sigma2Size(dSigma)
sigma2Size = staticmethod(sigma2Size)
def size2Sigma(Size):
"""size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_DImage_size2Sigma(Size)
size2Sigma = staticmethod(size2Sigma)
def getGaussianFuncValue(dX, dY, dSigma):
"""getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_DImage_getGaussianFuncValue(dX, dY, dSigma)
getGaussianFuncValue = staticmethod(getGaussianFuncValue)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CGaussFilter_DImage(self)
return weakref_proxy(self)
CGaussFilter_DImage_swigregister = _ImageFilters.CGaussFilter_DImage_swigregister
CGaussFilter_DImage_swigregister(CGaussFilter_DImage)
def CGaussFilter_DImage_sigma2Size(dSigma):
"""CGaussFilter_DImage_sigma2Size(dSigma) -> vpl::tSize"""
return _ImageFilters.CGaussFilter_DImage_sigma2Size(dSigma)
def CGaussFilter_DImage_size2Sigma(Size):
"""CGaussFilter_DImage_size2Sigma(Size) -> double"""
return _ImageFilters.CGaussFilter_DImage_size2Sigma(Size)
def CGaussFilter_DImage_getGaussianFuncValue(dX, dY, dSigma):
"""CGaussFilter_DImage_getGaussianFuncValue(dX, dY, dSigma) -> double"""
return _ImageFilters.CGaussFilter_DImage_getGaussianFuncValue(dX, dY, dSigma)
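# Usage sketch (hand-written note): every CGaussFilter_* proxy above can be
# constructed either from a sigma (float) or from a kernel size (int); sigma
# and size are two views of the same parameter (see sigma2Size/size2Sigma),
# so setSigma and resize are alternative ways to re-parameterize the filter.
# Application follows the common filter protocol:
# __call__(SrcImage, DstImage) -> bool, with both images assumed to be
# matching vpl proxies allocated by the caller.
def _example_gauss_smooth(src, dst, sigma=2.0):
    """Illustrative Gaussian smoothing of a density image."""
    gauss = CGaussFilter_DImage(sigma)
    gauss.setSigma(1.0)          # re-parameterize by sigma...
    gauss.resize(7)              # ...or directly by kernel size
    return gauss(src, dst), gauss.getSigma(), gauss.getSize()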
class swig_base_CImage8EdgeDetector(_object):
"""Proxy of C++ vpl::img::CImageEdgeDetector<(vpl::img::CImage<(vpl::img::tPixel8,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_base_CImage8EdgeDetector, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_base_CImage8EdgeDetector, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_base_CImage8EdgeDetector_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_base_CImage8EdgeDetector"""
if self.__class__ == swig_base_CImage8EdgeDetector:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_base_CImage8EdgeDetector(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_base_CImage8EdgeDetector
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_base_CImage8EdgeDetector___call__(self, SrcImage, DstImage)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_base_CImage8EdgeDetector(self)
return weakref_proxy(self)
swig_base_CImage8EdgeDetector_swigregister = _ImageFilters.swig_base_CImage8EdgeDetector_swigregister
swig_base_CImage8EdgeDetector_swigregister(swig_base_CImage8EdgeDetector)
class swig_base_CImage16EdgeDetector(_object):
"""Proxy of C++ vpl::img::CImageEdgeDetector<(vpl::img::CImage<(vpl::img::tPixel16,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_base_CImage16EdgeDetector, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_base_CImage16EdgeDetector, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_base_CImage16EdgeDetector_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_base_CImage16EdgeDetector"""
if self.__class__ == swig_base_CImage16EdgeDetector:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_base_CImage16EdgeDetector(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_base_CImage16EdgeDetector
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_base_CImage16EdgeDetector___call__(self, SrcImage, DstImage)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_base_CImage16EdgeDetector(self)
return weakref_proxy(self)
swig_base_CImage16EdgeDetector_swigregister = _ImageFilters.swig_base_CImage16EdgeDetector_swigregister
swig_base_CImage16EdgeDetector_swigregister(swig_base_CImage16EdgeDetector)
class swig_base_CImage32EdgeDetector(_object):
"""Proxy of C++ vpl::img::CImageEdgeDetector<(vpl::img::CImage<(vpl::img::tPixel32,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_base_CImage32EdgeDetector, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_base_CImage32EdgeDetector, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_base_CImage32EdgeDetector_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_base_CImage32EdgeDetector"""
if self.__class__ == swig_base_CImage32EdgeDetector:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_base_CImage32EdgeDetector(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_base_CImage32EdgeDetector
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_base_CImage32EdgeDetector___call__(self, SrcImage, DstImage)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_base_CImage32EdgeDetector(self)
return weakref_proxy(self)
swig_base_CImage32EdgeDetector_swigregister = _ImageFilters.swig_base_CImage32EdgeDetector_swigregister
swig_base_CImage32EdgeDetector_swigregister(swig_base_CImage32EdgeDetector)
class swig_base_CFImageEdgeDetector(_object):
"""Proxy of C++ vpl::img::CImageEdgeDetector<(vpl::img::CImage<(vpl::img::tFloatPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_base_CFImageEdgeDetector, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_base_CFImageEdgeDetector, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_base_CFImageEdgeDetector_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_base_CFImageEdgeDetector"""
if self.__class__ == swig_base_CFImageEdgeDetector:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_base_CFImageEdgeDetector(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_base_CFImageEdgeDetector
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_base_CFImageEdgeDetector___call__(self, SrcImage, DstImage)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_base_CFImageEdgeDetector(self)
return weakref_proxy(self)
swig_base_CFImageEdgeDetector_swigregister = _ImageFilters.swig_base_CFImageEdgeDetector_swigregister
swig_base_CFImageEdgeDetector_swigregister(swig_base_CFImageEdgeDetector)
class swig_base_CDImageEdgeDetector(_object):
"""Proxy of C++ vpl::img::CImageEdgeDetector<(vpl::img::CImage<(vpl::img::tDensityPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, swig_base_CDImageEdgeDetector, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, swig_base_CDImageEdgeDetector, name)
__repr__ = _swig_repr
TEMPLATE_PARAMETER_IS_NOT_IMAGE = _ImageFilters.swig_base_CDImageEdgeDetector_TEMPLATE_PARAMETER_IS_NOT_IMAGE
def __init__(self):
"""__init__(self) -> swig_base_CDImageEdgeDetector"""
if self.__class__ == swig_base_CDImageEdgeDetector:
_self = None
else:
_self = self
this = _ImageFilters.new_swig_base_CDImageEdgeDetector(_self, )
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_swig_base_CDImageEdgeDetector
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.swig_base_CDImageEdgeDetector___call__(self, SrcImage, DstImage)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_swig_base_CDImageEdgeDetector(self)
return weakref_proxy(self)
swig_base_CDImageEdgeDetector_swigregister = _ImageFilters.swig_base_CDImageEdgeDetector_swigregister
swig_base_CDImageEdgeDetector_swigregister(swig_base_CDImageEdgeDetector)
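# Note (hand-written): the swig_base_*EdgeDetector classes above are the
# abstract edge-detector interfaces. The `_self = None` branch in each
# __init__ together with __disown__ is SWIG's director scaffolding, so -
# assuming this module was generated with directors enabled, which that
# scaffolding suggests - a Python subclass may override __call__ and be
# passed wherever the C++ side expects a vpl::img::CImageEdgeDetector.
# A minimal sketch of the override point (illustrative only):
class _ExampleThresholdDetector(swig_base_CDImageEdgeDetector):
    """Hypothetical Python-side detector; not part of the bindings."""

    def __init__(self, threshold):
        swig_base_CDImageEdgeDetector.__init__(self)
        self.threshold = threshold

    def __call__(self, SrcImage, DstImage):
        # A real detector would scan SrcImage and write edge pixels into
        # DstImage; this stub only demonstrates where the override lands.
        return True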
class CCanny_Image8(swig_base_CImage8EdgeDetector):
"""Proxy of C++ vpl::img::CCanny<(vpl::img::CImage<(vpl::img::tPixel8,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_base_CImage8EdgeDetector]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CCanny_Image8, name, value)
__swig_getmethods__ = {}
for _s in [swig_base_CImage8EdgeDetector]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CCanny_Image8, name)
__repr__ = _swig_repr
def __init__(self, dSigma, dT1, dT2):
"""__init__(self, dSigma, dT1, dT2) -> CCanny_Image8"""
if self.__class__ == CCanny_Image8:
_self = None
else:
_self = self
this = _ImageFilters.new_CCanny_Image8(_self, dSigma, dT1, dT2)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CCanny_Image8
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CCanny_Image8___call__(self, SrcImage, DstImage)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CCanny_Image8_getSigma(self)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CCanny_Image8_setSigma(self, dSigma)
def getThresholds(self, dT1, dT2):
"""getThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_Image8_getThresholds(self, dT1, dT2)
def setThresholds(self, dT1, dT2):
"""setThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_Image8_setThresholds(self, dT1, dT2)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CCanny_Image8(self)
return weakref_proxy(self)
CCanny_Image8_swigregister = _ImageFilters.CCanny_Image8_swigregister
CCanny_Image8_swigregister(CCanny_Image8)
class CCanny_Image16(swig_base_CImage16EdgeDetector):
"""Proxy of C++ vpl::img::CCanny<(vpl::img::CImage<(vpl::img::tPixel16,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_base_CImage16EdgeDetector]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CCanny_Image16, name, value)
__swig_getmethods__ = {}
for _s in [swig_base_CImage16EdgeDetector]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CCanny_Image16, name)
__repr__ = _swig_repr
def __init__(self, dSigma, dT1, dT2):
"""__init__(self, dSigma, dT1, dT2) -> CCanny_Image16"""
if self.__class__ == CCanny_Image16:
_self = None
else:
_self = self
this = _ImageFilters.new_CCanny_Image16(_self, dSigma, dT1, dT2)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CCanny_Image16
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CCanny_Image16___call__(self, SrcImage, DstImage)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CCanny_Image16_getSigma(self)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CCanny_Image16_setSigma(self, dSigma)
def getThresholds(self, dT1, dT2):
"""getThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_Image16_getThresholds(self, dT1, dT2)
def setThresholds(self, dT1, dT2):
"""setThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_Image16_setThresholds(self, dT1, dT2)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CCanny_Image16(self)
return weakref_proxy(self)
CCanny_Image16_swigregister = _ImageFilters.CCanny_Image16_swigregister
CCanny_Image16_swigregister(CCanny_Image16)
class CCanny_Image32(swig_base_CImage32EdgeDetector):
"""Proxy of C++ vpl::img::CCanny<(vpl::img::CImage<(vpl::img::tPixel32,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_base_CImage32EdgeDetector]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CCanny_Image32, name, value)
__swig_getmethods__ = {}
for _s in [swig_base_CImage32EdgeDetector]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CCanny_Image32, name)
__repr__ = _swig_repr
def __init__(self, dSigma, dT1, dT2):
"""__init__(self, dSigma, dT1, dT2) -> CCanny_Image32"""
if self.__class__ == CCanny_Image32:
_self = None
else:
_self = self
this = _ImageFilters.new_CCanny_Image32(_self, dSigma, dT1, dT2)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CCanny_Image32
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CCanny_Image32___call__(self, SrcImage, DstImage)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CCanny_Image32_getSigma(self)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CCanny_Image32_setSigma(self, dSigma)
def getThresholds(self, dT1, dT2):
"""getThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_Image32_getThresholds(self, dT1, dT2)
def setThresholds(self, dT1, dT2):
"""setThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_Image32_setThresholds(self, dT1, dT2)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CCanny_Image32(self)
return weakref_proxy(self)
CCanny_Image32_swigregister = _ImageFilters.CCanny_Image32_swigregister
CCanny_Image32_swigregister(CCanny_Image32)
class CCanny_FImage(swig_base_CFImageEdgeDetector):
"""Proxy of C++ vpl::img::CCanny<(vpl::img::CImage<(vpl::img::tFloatPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_base_CFImageEdgeDetector]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CCanny_FImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_base_CFImageEdgeDetector]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CCanny_FImage, name)
__repr__ = _swig_repr
def __init__(self, dSigma, dT1, dT2):
"""__init__(self, dSigma, dT1, dT2) -> CCanny_FImage"""
if self.__class__ == CCanny_FImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CCanny_FImage(_self, dSigma, dT1, dT2)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CCanny_FImage
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CCanny_FImage___call__(self, SrcImage, DstImage)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CCanny_FImage_getSigma(self)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CCanny_FImage_setSigma(self, dSigma)
def getThresholds(self, dT1, dT2):
"""getThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_FImage_getThresholds(self, dT1, dT2)
def setThresholds(self, dT1, dT2):
"""setThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_FImage_setThresholds(self, dT1, dT2)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CCanny_FImage(self)
return weakref_proxy(self)
CCanny_FImage_swigregister = _ImageFilters.CCanny_FImage_swigregister
CCanny_FImage_swigregister(CCanny_FImage)
class CCanny_DImage(swig_base_CDImageEdgeDetector):
"""Proxy of C++ vpl::img::CCanny<(vpl::img::CImage<(vpl::img::tDensityPixel,vpl::base::CRefData)>)> class."""
__swig_setmethods__ = {}
for _s in [swig_base_CDImageEdgeDetector]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, CCanny_DImage, name, value)
__swig_getmethods__ = {}
for _s in [swig_base_CDImageEdgeDetector]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, CCanny_DImage, name)
__repr__ = _swig_repr
def __init__(self, dSigma, dT1, dT2):
"""__init__(self, dSigma, dT1, dT2) -> CCanny_DImage"""
if self.__class__ == CCanny_DImage:
_self = None
else:
_self = self
this = _ImageFilters.new_CCanny_DImage(_self, dSigma, dT1, dT2)
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _ImageFilters.delete_CCanny_DImage
__del__ = lambda self: None
def __call__(self, SrcImage, DstImage):
"""__call__(self, SrcImage, DstImage) -> bool"""
return _ImageFilters.CCanny_DImage___call__(self, SrcImage, DstImage)
def getSigma(self):
"""getSigma(self) -> double"""
return _ImageFilters.CCanny_DImage_getSigma(self)
def setSigma(self, dSigma):
"""setSigma(self, dSigma)"""
return _ImageFilters.CCanny_DImage_setSigma(self, dSigma)
def getThresholds(self, dT1, dT2):
"""getThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_DImage_getThresholds(self, dT1, dT2)
def setThresholds(self, dT1, dT2):
"""setThresholds(self, dT1, dT2)"""
return _ImageFilters.CCanny_DImage_setThresholds(self, dT1, dT2)
def __disown__(self):
self.this.disown()
_ImageFilters.disown_CCanny_DImage(self)
return weakref_proxy(self)
CCanny_DImage_swigregister = _ImageFilters.CCanny_DImage_swigregister
CCanny_DImage_swigregister(CCanny_DImage)
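# Usage sketch (hand-written note): the CCanny_* proxies implement Canny
# edge detection. The constructor takes the Gaussian pre-smoothing sigma and
# the two hysteresis thresholds dT1 and dT2; which of the two is the high
# threshold is fixed by the C++ implementation, so treat the ordering below
# as an assumption. Parameters can be retuned later through setSigma and
# setThresholds, and __call__(SrcImage, DstImage) -> bool runs the detector.
def _example_canny(src, dst):
    """Illustrative Canny run on a density image."""
    canny = CCanny_DImage(1.5, 0.1, 0.05)   # sigma, T1, T2
    canny.setThresholds(0.2, 0.1)           # retune hysteresis thresholds
    return canny(src, dst)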

class CZeroCrossDetector_Image8(swig_base_CImage8EdgeDetector):
    """Proxy of C++ vpl::img::CZeroCrossDetector<(vpl::img::CImage<(vpl::img::tPixel8,vpl::base::CRefData)>)> class."""
    __swig_setmethods__ = {}
    for _s in [swig_base_CImage8EdgeDetector]:
        __swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
    __setattr__ = lambda self, name, value: _swig_setattr(self, CZeroCrossDetector_Image8, name, value)
    __swig_getmethods__ = {}
    for _s in [swig_base_CImage8EdgeDetector]:
        __swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
    __getattr__ = lambda self, name: _swig_getattr(self, CZeroCrossDetector_Image8, name)
    __repr__ = _swig_repr

    def __init__(self, dSigma, dThreshold):
        """__init__(self, dSigma, dThreshold) -> CZeroCrossDetector_Image8"""
        if self.__class__ == CZeroCrossDetector_Image8:
            _self = None
        else:
            _self = self
        this = _ImageFilters.new_CZeroCrossDetector_Image8(_self, dSigma, dThreshold)
        try:
            self.this.append(this)
        except __builtin__.Exception:
            self.this = this
    __swig_destroy__ = _ImageFilters.delete_CZeroCrossDetector_Image8
    __del__ = lambda self: None

    def __call__(self, SrcImage, DstImage):
        """__call__(self, SrcImage, DstImage) -> bool"""
        return _ImageFilters.CZeroCrossDetector_Image8___call__(self, SrcImage, DstImage)

    def getSigma(self):
        """getSigma(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_Image8_getSigma(self)

    def setSigma(self, dSigma):
        """setSigma(self, dSigma)"""
        return _ImageFilters.CZeroCrossDetector_Image8_setSigma(self, dSigma)

    def getThreshold(self):
        """getThreshold(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_Image8_getThreshold(self)

    def setThreshold(self, dThreshold):
        """setThreshold(self, dThreshold)"""
        return _ImageFilters.CZeroCrossDetector_Image8_setThreshold(self, dThreshold)

    def __disown__(self):
        self.this.disown()
        _ImageFilters.disown_CZeroCrossDetector_Image8(self)
        return weakref_proxy(self)

CZeroCrossDetector_Image8_swigregister = _ImageFilters.CZeroCrossDetector_Image8_swigregister
CZeroCrossDetector_Image8_swigregister(CZeroCrossDetector_Image8)

class CZeroCrossDetector_Image16(swig_base_CImage16EdgeDetector):
    """Proxy of C++ vpl::img::CZeroCrossDetector<(vpl::img::CImage<(vpl::img::tPixel16,vpl::base::CRefData)>)> class."""
    __swig_setmethods__ = {}
    for _s in [swig_base_CImage16EdgeDetector]:
        __swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
    __setattr__ = lambda self, name, value: _swig_setattr(self, CZeroCrossDetector_Image16, name, value)
    __swig_getmethods__ = {}
    for _s in [swig_base_CImage16EdgeDetector]:
        __swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
    __getattr__ = lambda self, name: _swig_getattr(self, CZeroCrossDetector_Image16, name)
    __repr__ = _swig_repr

    def __init__(self, dSigma, dThreshold):
        """__init__(self, dSigma, dThreshold) -> CZeroCrossDetector_Image16"""
        if self.__class__ == CZeroCrossDetector_Image16:
            _self = None
        else:
            _self = self
        this = _ImageFilters.new_CZeroCrossDetector_Image16(_self, dSigma, dThreshold)
        try:
            self.this.append(this)
        except __builtin__.Exception:
            self.this = this
    __swig_destroy__ = _ImageFilters.delete_CZeroCrossDetector_Image16
    __del__ = lambda self: None

    def __call__(self, SrcImage, DstImage):
        """__call__(self, SrcImage, DstImage) -> bool"""
        return _ImageFilters.CZeroCrossDetector_Image16___call__(self, SrcImage, DstImage)

    def getSigma(self):
        """getSigma(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_Image16_getSigma(self)

    def setSigma(self, dSigma):
        """setSigma(self, dSigma)"""
        return _ImageFilters.CZeroCrossDetector_Image16_setSigma(self, dSigma)

    def getThreshold(self):
        """getThreshold(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_Image16_getThreshold(self)

    def setThreshold(self, dThreshold):
        """setThreshold(self, dThreshold)"""
        return _ImageFilters.CZeroCrossDetector_Image16_setThreshold(self, dThreshold)

    def __disown__(self):
        self.this.disown()
        _ImageFilters.disown_CZeroCrossDetector_Image16(self)
        return weakref_proxy(self)

CZeroCrossDetector_Image16_swigregister = _ImageFilters.CZeroCrossDetector_Image16_swigregister
CZeroCrossDetector_Image16_swigregister(CZeroCrossDetector_Image16)

class CZeroCrossDetector_Image32(swig_base_CImage32EdgeDetector):
    """Proxy of C++ vpl::img::CZeroCrossDetector<(vpl::img::CImage<(vpl::img::tPixel32,vpl::base::CRefData)>)> class."""
    __swig_setmethods__ = {}
    for _s in [swig_base_CImage32EdgeDetector]:
        __swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
    __setattr__ = lambda self, name, value: _swig_setattr(self, CZeroCrossDetector_Image32, name, value)
    __swig_getmethods__ = {}
    for _s in [swig_base_CImage32EdgeDetector]:
        __swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
    __getattr__ = lambda self, name: _swig_getattr(self, CZeroCrossDetector_Image32, name)
    __repr__ = _swig_repr

    def __init__(self, dSigma, dThreshold):
        """__init__(self, dSigma, dThreshold) -> CZeroCrossDetector_Image32"""
        if self.__class__ == CZeroCrossDetector_Image32:
            _self = None
        else:
            _self = self
        this = _ImageFilters.new_CZeroCrossDetector_Image32(_self, dSigma, dThreshold)
        try:
            self.this.append(this)
        except __builtin__.Exception:
            self.this = this
    __swig_destroy__ = _ImageFilters.delete_CZeroCrossDetector_Image32
    __del__ = lambda self: None

    def __call__(self, SrcImage, DstImage):
        """__call__(self, SrcImage, DstImage) -> bool"""
        return _ImageFilters.CZeroCrossDetector_Image32___call__(self, SrcImage, DstImage)

    def getSigma(self):
        """getSigma(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_Image32_getSigma(self)

    def setSigma(self, dSigma):
        """setSigma(self, dSigma)"""
        return _ImageFilters.CZeroCrossDetector_Image32_setSigma(self, dSigma)

    def getThreshold(self):
        """getThreshold(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_Image32_getThreshold(self)

    def setThreshold(self, dThreshold):
        """setThreshold(self, dThreshold)"""
        return _ImageFilters.CZeroCrossDetector_Image32_setThreshold(self, dThreshold)

    def __disown__(self):
        self.this.disown()
        _ImageFilters.disown_CZeroCrossDetector_Image32(self)
        return weakref_proxy(self)

CZeroCrossDetector_Image32_swigregister = _ImageFilters.CZeroCrossDetector_Image32_swigregister
CZeroCrossDetector_Image32_swigregister(CZeroCrossDetector_Image32)

class CZeroCrossDetector_FImage(swig_base_CFImageEdgeDetector):
    """Proxy of C++ vpl::img::CZeroCrossDetector<(vpl::img::CImage<(vpl::img::tFloatPixel,vpl::base::CRefData)>)> class."""
    __swig_setmethods__ = {}
    for _s in [swig_base_CFImageEdgeDetector]:
        __swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
    __setattr__ = lambda self, name, value: _swig_setattr(self, CZeroCrossDetector_FImage, name, value)
    __swig_getmethods__ = {}
    for _s in [swig_base_CFImageEdgeDetector]:
        __swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
    __getattr__ = lambda self, name: _swig_getattr(self, CZeroCrossDetector_FImage, name)
    __repr__ = _swig_repr

    def __init__(self, dSigma, dThreshold):
        """__init__(self, dSigma, dThreshold) -> CZeroCrossDetector_FImage"""
        if self.__class__ == CZeroCrossDetector_FImage:
            _self = None
        else:
            _self = self
        this = _ImageFilters.new_CZeroCrossDetector_FImage(_self, dSigma, dThreshold)
        try:
            self.this.append(this)
        except __builtin__.Exception:
            self.this = this
    __swig_destroy__ = _ImageFilters.delete_CZeroCrossDetector_FImage
    __del__ = lambda self: None

    def __call__(self, SrcImage, DstImage):
        """__call__(self, SrcImage, DstImage) -> bool"""
        return _ImageFilters.CZeroCrossDetector_FImage___call__(self, SrcImage, DstImage)

    def getSigma(self):
        """getSigma(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_FImage_getSigma(self)

    def setSigma(self, dSigma):
        """setSigma(self, dSigma)"""
        return _ImageFilters.CZeroCrossDetector_FImage_setSigma(self, dSigma)

    def getThreshold(self):
        """getThreshold(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_FImage_getThreshold(self)

    def setThreshold(self, dThreshold):
        """setThreshold(self, dThreshold)"""
        return _ImageFilters.CZeroCrossDetector_FImage_setThreshold(self, dThreshold)

    def __disown__(self):
        self.this.disown()
        _ImageFilters.disown_CZeroCrossDetector_FImage(self)
        return weakref_proxy(self)

CZeroCrossDetector_FImage_swigregister = _ImageFilters.CZeroCrossDetector_FImage_swigregister
CZeroCrossDetector_FImage_swigregister(CZeroCrossDetector_FImage)

class CZeroCrossDetector_DImage(swig_base_CDImageEdgeDetector):
    """Proxy of C++ vpl::img::CZeroCrossDetector<(vpl::img::CImage<(vpl::img::tDensityPixel,vpl::base::CRefData)>)> class."""
    __swig_setmethods__ = {}
    for _s in [swig_base_CDImageEdgeDetector]:
        __swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
    __setattr__ = lambda self, name, value: _swig_setattr(self, CZeroCrossDetector_DImage, name, value)
    __swig_getmethods__ = {}
    for _s in [swig_base_CDImageEdgeDetector]:
        __swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
    __getattr__ = lambda self, name: _swig_getattr(self, CZeroCrossDetector_DImage, name)
    __repr__ = _swig_repr

    def __init__(self, dSigma, dThreshold):
        """__init__(self, dSigma, dThreshold) -> CZeroCrossDetector_DImage"""
        if self.__class__ == CZeroCrossDetector_DImage:
            _self = None
        else:
            _self = self
        this = _ImageFilters.new_CZeroCrossDetector_DImage(_self, dSigma, dThreshold)
        try:
            self.this.append(this)
        except __builtin__.Exception:
            self.this = this
    __swig_destroy__ = _ImageFilters.delete_CZeroCrossDetector_DImage
    __del__ = lambda self: None

    def __call__(self, SrcImage, DstImage):
        """__call__(self, SrcImage, DstImage) -> bool"""
        return _ImageFilters.CZeroCrossDetector_DImage___call__(self, SrcImage, DstImage)

    def getSigma(self):
        """getSigma(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_DImage_getSigma(self)

    def setSigma(self, dSigma):
        """setSigma(self, dSigma)"""
        return _ImageFilters.CZeroCrossDetector_DImage_setSigma(self, dSigma)

    def getThreshold(self):
        """getThreshold(self) -> double"""
        return _ImageFilters.CZeroCrossDetector_DImage_getThreshold(self)

    def setThreshold(self, dThreshold):
        """setThreshold(self, dThreshold)"""
        return _ImageFilters.CZeroCrossDetector_DImage_setThreshold(self, dThreshold)

    def __disown__(self):
        self.this.disown()
        _ImageFilters.disown_CZeroCrossDetector_DImage(self)
        return weakref_proxy(self)

CZeroCrossDetector_DImage_swigregister = _ImageFilters.CZeroCrossDetector_DImage_swigregister
CZeroCrossDetector_DImage_swigregister(CZeroCrossDetector_DImage)
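
# Companion sketch for the zero-crossing (Laplacian-of-Gaussian) detectors
# registered above; as with the Canny example, the argument images and the
# parameter values are illustrative assumptions only.
def _example_zerocross_dimage(src_image, dst_image):
    detector = CZeroCrossDetector_DImage(1.5, 0.1)  # Gaussian sigma, zero-crossing threshold
    return detector(src_image, dst_image)           # True on success; result written to dst_image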
# This file is compatible with both classic and new-style classes.
| 40.325237 | 141 | 0.704612 | 10998 | 106257 | 6.189853 | 0.019276 | 0.023386 | 0.029614 | 0.044421 | 0.90045 | 0.847362 | 0.819143 | 0.749618 | 0.743375 | 0.692359 | 0 | 0.01611 | 0.190924 | 106257 | 2634 | 142 | 40.340547 | 0.775747 | 0.151962 | 0 | 0.621667 | 1 | 0 | 0.0178 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162222 | false | 0.001111 | 0.012778 | 0.000556 | 0.537778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8
53b1d9b611b598804aa11c8742cf40fd0288ad51 | 44 | py | Python | utils/__init__.py | mbarbetti/unifi-physics-lab3 | 8ddcca3ab3dce08886b835d4e0fedb4d3911ad48 | [
"MIT"
] | 2 | 2021-11-25T10:48:04.000Z | 2021-11-25T15:37:42.000Z | utils/__init__.py | mbarbetti/unifi-physics-lab3 | 8ddcca3ab3dce08886b835d4e0fedb4d3911ad48 | [
"MIT"
] | null | null | null | utils/__init__.py | mbarbetti/unifi-physics-lab3 | 8ddcca3ab3dce08886b835d4e0fedb4d3911ad48 | [
"MIT"
] | null | null | null | from .data_processing import data_processing | 44 | 44 | 0.909091 | 6 | 44 | 6.333333 | 0.666667 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 1 | 44 | 44 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
53d811a256c111bb4c7f3c1d31dc5d80d6b7018f | 10539 | py | Python | integration_tests/strategy_bnh_ITG_test.py | hoondental/smtm | f7648da652c5437ee27efef6fbf2480045130c16 | [
"MIT"
] | null | null | null | integration_tests/strategy_bnh_ITG_test.py | hoondental/smtm | f7648da652c5437ee27efef6fbf2480045130c16 | [
"MIT"
] | null | null | null | integration_tests/strategy_bnh_ITG_test.py | hoondental/smtm | f7648da652c5437ee27efef6fbf2480045130c16 | [
"MIT"
] | 1 | 2022-03-25T03:06:54.000Z | 2022-03-25T03:06:54.000Z | import unittest
from smtm import StrategyBuyAndHold
from unittest.mock import *

class StrategyBuyAndHoldIntegrationTests(unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def test_ITG_strategy_buy_and_hold_full(self):
        strategy = StrategyBuyAndHold()
        self.assertEqual(strategy.get_request(), None)
        strategy.initialize(50000, 5000)

        # Feed trading info - 1
        strategy.update_trading_info(
            {
                "market": "KRW-BTC",
                "date_time": "2020-04-30T14:51:00",
                "opening_price": 11288000.0,
                "high_price": 11304000.0,
                "low_price": 11282000.0,
                "closing_price": 11304000.0,
                "acc_price": 587101574.8949,
                "acc_volume": 51.97606868,
            }
        )

        # Generate a trading request
        request = strategy.get_request()
        expected_request = {
            "type": "buy",
            "price": 11304000.0,
            "amount": 0.0008,
        }
        self.assertEqual(request[0]["type"], expected_request["type"])
        self.assertEqual(request[0]["price"], expected_request["price"])
        self.assertEqual(request[0]["amount"], expected_request["amount"])

        # Feed trading result - executed normally
        strategy.update_result(
            {
                "request": {
                    "id": request[0]["id"],
                    "type": "buy",
                    "price": 11304000.0,
                    "amount": 0.0009,
                    "date_time": "2020-04-30T14:51:00",
                },
                "type": "buy",
                "price": 11304000.0,
                "amount": 0.0009,
                "msg": "success",
                "balance": 0,
                "state": "done",
                "date_time": "2020-04-30T14:51:00",
            }
        )
        self.assertEqual(strategy.balance, 39821)
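        # Worked arithmetic for the assertion above (the exact commission ratio
        # is defined inside the strategy; 0.05% is consistent with these numbers):
        # 11304000.0 * 0.0009 = 10173.6 spent, plus ~5.09 commission, so
        # 50000 - 10178.69 -> 39821 after truncation to an int.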

        # Feed trading info - 2
        strategy.update_trading_info(
            {
                "market": "KRW-BTC",
                "date_time": "2020-04-30T14:52:00",
                "opening_price": 11304000.0,
                "high_price": 21304000.0,
                "low_price": 11304000.0,
                "closing_price": 21304000.0,
                "acc_price": 587101574.8949,
                "acc_volume": 51.97606868,
            }
        )

        # Generate a trading request
        request = strategy.get_request()
        expected_request = {
            "type": "buy",
            "price": 21304000.0,
            "amount": 0.0004,
        }
        self.assertEqual(request[0]["type"], expected_request["type"])
        self.assertEqual(request[0]["price"], expected_request["price"])
        self.assertEqual(request[0]["amount"], expected_request["amount"])

        # Feed trading result - requested but not executed
        self.assertEqual(strategy.balance, 39821)
        strategy.update_result(
            {
                "request": {
                    "id": request[0]["id"],
                    "type": "buy",
                    "price": 11304000.0,
                    "amount": 0.0009,
                    "date_time": "2020-04-30T14:52:00",
                },
                "type": "buy",
                "price": 11304000.0,
                "amount": 0.0009,
                "msg": "success",
                "balance": 0,
                "state": "requested",
                "date_time": "2020-04-30T14:52:00",
            }
        )
        self.assertEqual(strategy.balance, 39821)
        last_id = request[0]["id"]

        # Feed trading info - 3
        strategy.update_trading_info(
            {
                "market": "KRW-BTC",
                "date_time": "2020-04-30T14:52:00",
                "opening_price": 21304000.0,
                "high_price": 21304000.0,
                "low_price": 21304000.0,
                "closing_price": 21304000.0,
                "acc_price": 587101574.8949,
                "acc_volume": 51.97606868,
            }
        )

        # Generate a trading request - the stale order should be cancelled first
        request = strategy.get_request()
        expected_request = {
            "type": "buy",
            "price": 21304000.0,
            "amount": 0.0004,
        }
        self.assertEqual(request[0]["type"], "cancel")
        self.assertEqual(request[0]["id"], last_id)
        self.assertEqual(request[1]["type"], expected_request["type"])
        self.assertEqual(request[1]["price"], expected_request["price"])
        self.assertEqual(request[1]["amount"], expected_request["amount"])

        # Feed trading result - partially executed
        self.assertEqual(strategy.balance, 39821)
        strategy.update_result(
            {
                "request": {
                    "id": request[0]["id"],
                    "type": "buy",
                    "price": 21304000.0,
                    "amount": 0.0009,
                    "date_time": "2020-04-30T14:52:00",
                },
                "type": "buy",
                "price": 21304000.0,
                "amount": 0.0002,
                "msg": "success",
                "balance": 0,
                "state": "done",
                "date_time": "2020-04-30T14:52:00",
            }
        )
        self.assertEqual(strategy.balance, 35558)
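        # Only the executed 0.0002 BTC is deducted even though 0.0009 was
        # requested: 21304000.0 * 0.0002 = 4260.8 plus ~2.13 commission,
        # so 39821 - 4262.93 -> 35558.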

        # Feed trading info - 4
        strategy.update_trading_info(
            {
                "market": "KRW-BTC",
                "date_time": "2020-04-30T14:52:00",
                "opening_price": 21304000.0,
                "high_price": 41304000.0,
                "low_price": 21304000.0,
                "closing_price": 41304000.0,
                "acc_price": 587101574.8949,
                "acc_volume": 51.97606868,
            }
        )

        # Generate a trading request
        request = strategy.get_request()
        expected_request = {
            "type": "buy",
            "price": 41304000.0,
            "amount": 0.0002,
        }
        self.assertEqual(request[0]["type"], expected_request["type"])
        self.assertEqual(request[0]["price"], expected_request["price"])
        self.assertEqual(request[0]["amount"], expected_request["amount"])

        # Feed trading result - executed normally
        self.assertEqual(strategy.balance, 35558)
        strategy.update_result(
            {
                "request": {
                    "id": request[0]["id"],
                    "type": "buy",
                    "price": 41304000.0,
                    "amount": 0.0009,
                    "date_time": "2020-04-30T14:52:00",
                },
                "type": "buy",
                "price": 41304000.0,
                "amount": 0.0002,
                "msg": "success",
                "balance": 0,
                "state": "done",
                "date_time": "2020-04-30T14:52:00",
            }
        )
        self.assertEqual(strategy.balance, 27293)

        # Feed trading info - 5
        strategy.update_trading_info(
            {
                "market": "KRW-BTC",
                "date_time": "2020-04-30T14:52:00",
                "opening_price": 41304000.0,
                "high_price": 61304000.0,
                "low_price": 41304000.0,
                "closing_price": 61304000.0,
                "acc_price": 587101574.8949,
                "acc_volume": 51.97606868,
            }
        )

        # Generate a trading request
        request = strategy.get_request()
        expected_request = {
            "type": "buy",
            "price": 61304000.0,
            "amount": 0.0001,
        }
        self.assertEqual(request[0]["type"], expected_request["type"])
        self.assertEqual(request[0]["price"], expected_request["price"])
        self.assertEqual(request[0]["amount"], expected_request["amount"])

        # Feed trading result - executed normally
        self.assertEqual(strategy.balance, 27293)
        strategy.update_result(
            {
                "request": {
                    "id": request[0]["id"],
                    "type": "buy",
                    "price": 61304000.0,
                    "amount": 0.0009,
                    "date_time": "2020-04-30T14:52:00",
                },
                "type": "buy",
                "price": 61304000.0,
                "amount": 0.0002,
                "msg": "success",
                "balance": 0,
                "state": "done",
                "date_time": "2020-04-30T14:52:00",
            }
        )
        self.assertEqual(strategy.balance, 15026)

        # Feed trading info - 6
        strategy.update_trading_info(
            {
                "market": "KRW-BTC",
                "date_time": "2020-04-30T14:52:00",
                "opening_price": 61304000.0,
                "high_price": 61304000.0,
                "low_price": 61304000.0,
                "closing_price": 61304000.0,
                "acc_price": 587101574.8949,
                "acc_volume": 51.97606868,
            }
        )

        # Generate a trading request
        request = strategy.get_request()
        expected_request = {
            "type": "buy",
            "price": 61304000.0,
            "amount": 0.0001,
        }
        self.assertEqual(request[0]["type"], expected_request["type"])
        self.assertEqual(request[0]["price"], expected_request["price"])
        self.assertEqual(request[0]["amount"], expected_request["amount"])

        # Feed trading result - executed normally
        self.assertEqual(strategy.balance, 15026)
        strategy.update_result(
            {
                "request": {
                    "id": request[0]["id"],
                    "type": "buy",
                    "price": 61304000.0,
                    "amount": 0.0002,
                    "date_time": "2020-04-30T14:52:00",
                },
                "type": "buy",
                "price": 61304000.0,
                "amount": 0.0002,
                "msg": "success",
                "balance": 0,
                "state": "done",
                "date_time": "2020-04-30T14:52:00",
            }
        )
        self.assertEqual(strategy.balance, 2759)

        # Feed trading info - 7
        strategy.update_trading_info(
            {
                "market": "KRW-BTC",
                "date_time": "2020-04-30T14:52:00",
                "opening_price": 61304000.0,
                "high_price": 61304000.0,
                "low_price": 61304000.0,
                "closing_price": 61304000.0,
                "acc_price": 587101574.8949,
                "acc_volume": 51.97606868,
            }
        )

        # Generate a trading request - with the balance nearly exhausted, no request is produced
        request = strategy.get_request()
        self.assertEqual(request, None)
        self.assertEqual(strategy.balance, 2759)
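
# A direct way to run this scenario (assuming the repository layout above):
#   python -m unittest integration_tests.strategy_bnh_ITG_test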
| 34.441176 | 74 | 0.44843 | 975 | 10539 | 4.723077 | 0.104615 | 0.110749 | 0.100326 | 0.057763 | 0.872096 | 0.856678 | 0.847991 | 0.786319 | 0.77937 | 0.77937 | 0 | 0.168152 | 0.414271 | 10539 | 305 | 75 | 34.554098 | 0.577839 | 0.027422 | 0 | 0.709091 | 0 | 0 | 0.177534 | 0 | 0 | 0 | 0 | 0 | 0.123636 | 1 | 0.010909 | false | 0.007273 | 0.010909 | 0 | 0.025455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
07089ff2aafca5bba743a28df2cb0f2e659dbef4 | 4339 | py | Python | tests/plugins/test_schoolism.py | xcgx/streamlink | b635e0d9d0fe9363817a96ec7d31faefed95cb57 | [
"BSD-2-Clause"
] | 10 | 2017-04-10T18:25:41.000Z | 2021-09-15T20:14:58.000Z | tests/plugins/test_schoolism.py | xcgx/streamlink | b635e0d9d0fe9363817a96ec7d31faefed95cb57 | [
"BSD-2-Clause"
] | 9 | 2020-04-04T09:49:52.000Z | 2020-04-21T01:52:02.000Z | tests/plugins/test_schoolism.py | xcgx/streamlink | b635e0d9d0fe9363817a96ec7d31faefed95cb57 | [
"BSD-2-Clause"
] | 12 | 2022-01-30T23:34:18.000Z | 2022-03-26T17:09:43.000Z | import unittest
from streamlink.plugins.schoolism import Schoolism
from tests.plugins import PluginCanHandleUrl

class TestPluginCanHandleUrlSchoolism(PluginCanHandleUrl):
    __plugin__ = Schoolism

    should_match = [
        'https://www.schoolism.com/watchLesson.php',
    ]

    should_not_match = [
        'https://www.schoolism.com',
    ]
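
# PluginCanHandleUrl (from tests.plugins) generates the actual assertions:
# every URL in should_match must be accepted by the plugin's URL regex and
# every URL in should_not_match must be rejected.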

class TestPluginSchoolism(unittest.TestCase):
    def test_playlist_parse_subs(self):
        with_subs = """var allVideos=[
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/44/2/part1.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Digital Painting - Lesson 2 - Part 1",playlistTitle:"Part 1",}], subtitles: [{
                "default": true,
                kind: "subtitles", srclang: "en", label: "English",
                src: "https://s3.amazonaws.com/schoolism-encoded/44/subtitles/2/2-1.vtt",
            }],
            },
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/44/2/part2.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Digital Painting - Lesson 2 - Part 2",playlistTitle:"Part 2",}], subtitles: [{
                "default": true,
                kind: "subtitles", srclang: "en", label: "English",
                src: "https://s3.amazonaws.com/schoolism-encoded/44/subtitles/2/2-2.vtt",
            }]
            }];
        """  # noqa: E501

        data = Schoolism.playlist_schema.validate(with_subs)
        self.assertIsNotNone(data)
        self.assertEqual(2, len(data))

    def test_playlist_parse(self):
        without_subs = """var allVideos=[
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/14/1/part1.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Gesture Drawing - Lesson 1 - Part 1",playlistTitle:"Part 1",}],},
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/14/1/part2.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Gesture Drawing - Lesson 1 - Part 2",playlistTitle:"Part 2",}]}
        ];
        """  # noqa: E501

        data = Schoolism.playlist_schema.validate(without_subs)
        self.assertIsNotNone(data)
        self.assertEqual(2, len(data))

    def test_playlist_parse_colon_in_title(self):
        colon_in_title = """var allVideos=[
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/52/1/part1.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Deconstructed: Drawing People - Lesson 1 - Part 1",playlistTitle:"Part 1",}],},
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/52/1/part2.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Deconstructed: Drawing People - Lesson 1 - Part 2",playlistTitle:"Part 2",}],},
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/52/1/part3.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Deconstructed: Drawing People - Lesson 1 - Part 3",playlistTitle:"Part 3",}],},
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/52/1/part4.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Deconstructed: Drawing People - Lesson 1 - Part 4",playlistTitle:"Part 4",}],},
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/52/1/part5.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Deconstructed: Drawing People - Lesson 1 - Part 5",playlistTitle:"Part 5",}],},
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/52/1/part6.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Deconstructed: Drawing People - Lesson 1 - Part 6",playlistTitle:"Part 6",}],},
            {sources:[{type:"application/x-mpegurl",src:"https://d8u31iyce9xic.cloudfront.net/52/1/part7.m3u8?Policy=TOKEN&Signature=TOKEN&Key-Pair-Id=TOKEN",title:"Deconstructed: Drawing People - Lesson 1 - Part 7",playlistTitle:"Part 7",}]}
        ];
        """  # noqa: E501

        data = Schoolism.playlist_schema.validate(colon_in_title)
        self.assertIsNotNone(data)
        self.assertEqual(7, len(data))
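
# Each fixture above mirrors the `allVideos` blob embedded in Schoolism's
# watch pages; the plugin's playlist_schema is expected to yield one entry per
# video part (hence the len() assertions), each exposing at least the HLS src.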
| 62.884058 | 250 | 0.662826 | 524 | 4339 | 5.435115 | 0.183206 | 0.036517 | 0.084972 | 0.088834 | 0.836025 | 0.791784 | 0.791784 | 0.733146 | 0.733146 | 0.733146 | 0 | 0.04657 | 0.173542 | 4339 | 68 | 251 | 63.808824 | 0.74763 | 0.007375 | 0 | 0.264151 | 0 | 0.245283 | 0.773646 | 0.05624 | 0 | 0 | 0 | 0 | 0.113208 | 1 | 0.056604 | false | 0 | 0.056604 | 0 | 0.207547 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8